WO2024192399A1 - Techniques for a cable termination protection apparatus in a prefab factory - Google Patents
- Publication number
- WO2024192399A1 (PCT/US2024/020258)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- networking
- prefab
- region
- cable
- ports
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/08—Configuration management of networks or network elements
- H04L41/0803—Configuration setting
- H04L41/0806—Configuration setting for initial configuration or provisioning, e.g. plug-and-play
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F8/00—Arrangements for software engineering
- G06F8/60—Software deployment
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/08—Configuration management of networks or network elements
- H04L41/0895—Configuration of virtualised networks or elements, e.g. virtualised network function or OpenFlow elements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/20—Network management software packages
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/34—Network arrangements or protocols for supporting network services or applications involving the movement of software or configuration parameters
-
- H—ELECTRICITY
- H05—ELECTRIC TECHNIQUES NOT OTHERWISE PROVIDED FOR
- H05K—PRINTED CIRCUITS; CASINGS OR CONSTRUCTIONAL DETAILS OF ELECTRIC APPARATUS; MANUFACTURE OF ASSEMBLAGES OF ELECTRICAL COMPONENTS
- H05K7/00—Constructional details common to different types of electric apparatus
- H05K7/14—Mounting supporting structure in casing or on frame or rack
- H05K7/1485—Servers; Data center rooms, e.g. 19-inch computer racks
- H05K7/1497—Rooms for data centers; Shipping containers therefor
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/08—Configuration management of networks or network elements
- H04L41/0803—Configuration setting
- H04L41/0813—Configuration setting characterised by the conditions triggering a change of settings
- H04L41/082—Configuration setting characterised by the conditions triggering a change of settings the condition being updates or upgrades of network functionality
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/08—Configuration management of networks or network elements
- H04L41/0866—Checking the configuration
- H04L41/0869—Validating the configuration within one network element
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/12—Discovery or management of network topologies
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/40—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using virtualisation of network functions or resources, e.g. SDN or NFV entities
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L43/00—Arrangements for monitoring or testing data switching networks
- H04L43/50—Testing arrangements
Definitions
- a cloud infrastructure provider may operate one or more data centers in geographic areas around the world.
- a "region” is a logical abstraction around a collection of the computing, storage, and networking resources of the data centers of a given geographical area that are used to provide the cloud computing infrastructure. Building new regions can include provisioning the computing resources, configuring infrastructure, and deploying code to those resources, typically over network connections to the data centers.
- building regions with physical resources located at the final destination data center sites requires significant preparation work at the data centers that can complicate the logistics and scheduling of completing the building of a region.
- Embodiments of the present disclosure relate to automatically building a region using a prefab factory.
- a prefab factory may be a facility dedicated to configuring computing devices, networking devices, and other physical resources for delivery to a destination site (e.g., a destination region—one or more data centers in a geographic area, a customer facility, etc.).
- Operations for building a region can include bootstrapping (e.g., provisioning and/or deploying) resources (e.g., infrastructure components, artifacts, etc.) for any suitable number of services available from the region when delivered to the destination.
- Resources used for bootstrapping may be provided in a bootstrapping environment in an existing region (e.g., one or more data centers of a host region).
- the host region can be selected based on network proximity to the prefab factory, and in a complementary fashion, the prefab factory may be sited to have high-performance network connectivity to one or more host regions to support the bootstrapping environment.
- Building the region may be orchestrated by one or more cloud-based services that can manage the inventory of physical computing devices used to build regions in the prefab factory, generate and specify the configurations of regions to be built in the prefab factory, manage the bootstrapping of the regions, configure the regions for transmission to a destination site, and test and verify the physical resources after the physical resources have been installed at the destination site.
- a prefab region may be built to meet a specific customer’s configuration preferences (built-to-order) or built to a common specification that may be further customized during installation at a specific customer’s site (built-to-stock).
- One embodiment is directed to a cable termination protection apparatus that includes a frame having a plurality of ports arranged on a face of the frame. Each of the ports can be configured to accept a cable termination connector of a networking cable of a static network fabric in a data center (e.g., a prefab factory).
- the ports can be arranged on the face of the frame to substantially align with a second plurality of ports of a computing device (e.g., a top of rack switch) when the frame is positioned at a location adjacent to the computing device.
- the alignment may be vertical or horizontal.
- the apparatus can include port covers for the ports that can be inserted into the ports when a networking cable is not connected to the apparatus.
- the ports can each correspond to one of several physical standards defining cable termination connectors for networking cables.
- Another embodiment is directed to a system comprising a data center having a rack of devices positionable at a location in the data center, a cable termination protection apparatus mounted adjacent to the location with the rack of devices, and a plurality of networking cables routed through the data center, wherein at least one networking cable of the plurality of networking cables includes a terminal end at the location in the data center, the terminal end of the at least one networking cable coupled to a cable termination connector accepted by a port of the apparatus.
- at least one networking port of the plurality of networking ports is configured to accept the cable termination connector, and the at least one networking cable is movable between the networking port and the port of the cable termination protection apparatus.
- Still another embodiment is directed to a method for generating instructions usable to connect a data center rack to networking cables of a data center having a cable termination protection apparatus at a location for installing the data center rack.
- the method can be performed by a computing device.
- the method can include receiving a build request for the data center rack.
- the computing device can obtain physical configuration parameters for computing devices on the data center rack.
- the computing devices can include a networking device having a plurality of networking ports.
- the physical configuration parameters can specify at least one of the networking ports of the networking device.
- the method can also include obtaining cabling specification information corresponding to the location at the data center and a plurality of networking cables configured to terminate at the cable termination protection apparatus.
- the method can also include generating the instructions using the physical configuration parameters and the cabling specification information.
- the instructions may be usable to disconnect a networking cable from the cable termination protection apparatus and reconnect the networking cable at the networking port of the networking device.
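- The instruction-generation method described above could be sketched roughly as follows; this is an illustrative sketch only, and the dictionary keys, cable labels, and device names are assumptions, not part of the disclosure:

```python
def generate_recabling_instructions(physical_config, cabling_spec):
    """Pair each networking port required by the rack's configuration with
    the fabric cable currently parked at the CTPA for this location, and
    emit a human-readable re-cabling step for each pair."""
    steps = []
    for port, cable in zip(physical_config["networking_ports"],
                           cabling_spec["ctpa_cables"]):
        steps.append(
            f"Disconnect cable {cable['label']} from CTPA port "
            f"{cable['ctpa_port']}; reconnect it to port {port['id']} "
            f"on device {port['device']}."
        )
    return steps
```

In this sketch, the physical configuration parameters supply the target ports and the cabling specification supplies the terminal ends parked at the apparatus, matching the method's two inputs.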
- FIG. 2 is a block diagram illustrating a prefab factory connected to services provided by a CSP for building regions, according to at least one embodiment.
- FIG. 3 is a block diagram illustrating a CSP system that includes multiple host regions that can support a ViBE for deploying software resources to a prefab region being built at a prefab factory, according to at least one embodiment.
- FIG. 4 is a block diagram illustrating an arrangement of physical computing resources in a prefab factory managed with a manager service and inventory service, according to at least one embodiment.
- FIG. 5 is a diagram illustrating managing a network configuration of computing resources of a region being built in a prefab factory using a manager service and a network service, according to at least one embodiment.
- FIG. 6 is a diagram illustrating a testing and evaluation of a region after delivery to a destination site using a manager service and a testing service, according to at least one embodiment.
- FIG. 7 is an example method for deploying software resources to physical resources of a region being built in a prefab factory and preparing the physical resources for transportation to a destination data center, according to at least one embodiment.
- FIG. 8 is an example method for booting physical resources built at a prefab factory after delivery to a destination data center and verifying a network configuration of the physical resources, according to at least one embodiment.
- FIGS. 9A and 9B are diagrams illustrating example arrangements of a cable termination protection apparatus that can be positioned within a prefab factory, according to some embodiments.
- FIG. 10 is a diagram depicting example steps for disconnecting networking cables from a CTPA and re-connecting the networking cables to a networking device according to instructions generated based on the network fabric of a prefab factory and physical configuration parameters of the networking device, according to at least one embodiment.
- FIG. 11 is an example method for generating instructions usable to disconnect networking cables from a CTPA and reconnecting the networking cables to a networking device in a prefab factory, according to at least one embodiment.
- FIG. 12 is a block diagram illustrating one pattern for implementing a cloud infrastructure as a service system, according to at least one embodiment.
- FIG. 13 is a block diagram illustrating another pattern for implementing a cloud infrastructure as a service system, according to at least one embodiment.
- FIG. 14 is a block diagram illustrating another pattern for implementing a cloud infrastructure as a service system, according to at least one embodiment.
- FIG. 15 is a block diagram illustrating another pattern for implementing a cloud infrastructure as a service system, according to at least one embodiment.
- FIG. 16 is a block diagram illustrating an example computer system, according to at least one embodiment.
DETAILED DESCRIPTION OF DRAWINGS
Example Automated Data Center Build (Region Build) Infrastructure
- The adoption of cloud services has seen a rapid uptick in recent times.
- the term "cloud service" is generally used to refer to a service or functionality that is made available by a CSP to users or customers on demand (e.g., via a subscription model) using systems and infrastructure (cloud infrastructure) provided by the CSP.
- Cloud services are designed to provide a subscribing customer easy, scalable, and on-demand access to applications and computing resources without the customer having to invest in procuring the infrastructure that is used for providing the services or functions.
- Various different types or models of cloud services may be offered, such as Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS), Infrastructure-as-a-Service (IaaS), and others.
- a customer can subscribe to one or more cloud services provided by a CSP.
- the customer can be any entity such as an individual, an organization, an enterprise, a government entity, and the like.
- a CSP is responsible for providing the infrastructure and resources that are used for providing cloud services to subscribing customers.
- the resources provided by the CSP can include both hardware and software resources. These resources can include, for example, compute resources (e.g., virtual machines, containers, applications, processors, bare-metal computers), memory resources (e.g., databases, data stores), networking resources (e.g., routers, host machines, load balancers), identity, and other resources.
- the resources provided by a CSP for providing a set of cloud services are organized into data centers.
- a data center may be configured to provide a particular set of cloud services.
- the CSP is responsible for equipping the data center with infrastructure and resources that are used to provide that particular set of cloud services.
- a CSP may build one or more data centers.
- a region is a localized geographic area and may be identified by a region name. Regions are generally independent of each other and can be separated by vast distances, such as across countries or even continents. Regions are grouped into realms. Examples of regions for a CSP may include US West, US East, Australia East, Australia Southeast, and the like. A region can include one or more data centers, where the data centers are located within a certain geographic area corresponding to the region. As an example, the data centers in a region may be located in a city within that region.
- a CSP builds or deploys data centers to provide cloud services to its customers.
- the CSP typically builds new data centers in new regions or increases the capacity of existing data centers to service the customers’ growing demands and to better serve the customers.
- a data center is built in close geographical proximity to the location of customers serviced by that data center.
- a CSP typically builds new data centers in new regions in geographical areas that are geographically proximal to the customers serviced by the data centers. For example, for a growing customer base in Germany, a CSP may build one or more data centers in a new region in Germany. Building a data center (or multiple data centers) and configuring it to provide cloud services in a region is sometimes also referred to as building a region.
- region build is used to refer to building one or more data centers in a region.
- Building a region involves provisioning or creating a set of new resources that are needed or used for providing a set of services that the data center is configured to provide.
- the end result of the region build process is a region in which the data center, together with its hardware and software resources, is capable of providing the set of services intended for that region.
- Building a new region is a very complex activity requiring extensive coordination between various bootstrapping activities.
- this involves the performance and coordination of various tasks such as: identifying the set of services to be provided by the data center; identifying various resources that are needed for providing the set of services; creating, provisioning, and deploying the identified resources; wiring the underlying hardware properly so that it can be used in an intended manner; and the like.
- Each of these tasks further has subtasks that need to be coordinated, further adding to the complexity. Due to this complexity, the building of a region presently involves several manually initiated or manually controlled tasks that require careful manual coordination. As a result, the task of building a new region (i.e., building one or more data centers in a region and configuring the hardware and software in each data center to provide the requisite cloud services) is very time consuming.
- a CSP may employ an orchestration service to bootstrap services into a new region.
- the orchestration service may be a cloud-based service hosted within a separate region (e.g., an orchestration region) from the target region.
- the orchestration service can create a bootstrapping environment to host instances of one or more cloud services. The orchestration service can then use the services in the bootstrapping environment to support the deployment of services into the target region.
- It can be advantageous for CSPs to centralize region build operations to one or more facilities that can act as "factories" to produce partially or fully configured physical infrastructure for subsequent delivery to a destination site.
- a CSP can build regions in a prefab factory, ship the configured physical components, like racks, to the destination data center, and then finalize and verify the components of the region once the racks arrive at the destination site.
- the prefab factory is capable of building multiple regions simultaneously. Each region being built at the prefab factory can have separate configurations, network topologies, and services.
- a prefab factory can also be used to build computing components to be integrated into on-premises solutions for customers, for example, when the customer controls and manages its own data center environment.
- the centralized prefab factory supports additional innovations for building regions in an efficient manner.
- the prefab factory can include a static network fabric consisting of networking infrastructure (e.g., network switches, routers, cabling, etc.) designed to support any potential configuration of region components built in the factory.
- the static network fabric can allow for physical resources of the region to be placed in the factory and quickly connected to the existing network fabric. Regions with different network topologies can also be quickly connected to the same network fabric according to connection plans that match the static network fabric with the physical components of the region.
- the static network fabric can reduce the complexity of network connections of the regions within the factory, increasing the speed at which the region components are installed in the factory and removed from the factory in preparation for transmission.
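- The matching of a region's devices to the static network fabric might be sketched as a simple connection-plan assignment; this sketch and its names are illustrative assumptions rather than the disclosed implementation:

```python
def build_connection_plan(fabric_terminations, region_ports):
    """Map each port of an incoming region's devices to one of the static
    fabric's pre-routed cable terminations at the installation location.
    Any region topology fits as long as enough cables terminate there."""
    if len(region_ports) > len(fabric_terminations):
        raise ValueError("static fabric has too few terminations at this location")
    # First-fit assignment of parked cables to region ports.
    return dict(zip(region_ports, fabric_terminations))
```

Because the fabric is fixed and over-provisioned, the plan generation reduces to pairing ports with pre-existing terminations instead of routing new cables per region.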
- cable termination protection apparatus (CTPA)
- the present disclosure is directed to a prefab factory in which automated region builds are performed using one or more prefab services.
- a prefab manager service can orchestrate the overall building of a region at the prefab factory.
- the manager service can work in conjunction with the one or more additional prefab services to manage the inventory of physical components used to construct the region at the prefab factory, configure the network (e.g., endpoints, network topology, addresses and/or other identifiers of the components within the region), bootstrapping services onto the region infrastructure, preparing the components for transmission of the region (including encrypting data volumes to provide security during transit), verifying the region after delivery to and installation at the destination site, and finalizing the configuration of the region, including performing any remaining bootstrapping or updating operations for the services deployed to the region infrastructure previously at the prefab factory.
- a "region" is a logical abstraction corresponding to a collection of computing, storage, and networking resources associated with a geographical location.
- a region can include any suitable number of execution targets.
- a region may be associated with one or more data centers.
- a "prefab region” describes a region built in a prefab factory environment prior to delivery to the corresponding geographical location.
- an execution target could correspond to the destination data center as opposed to the prefab factory data center.
- An “execution target” refers to a smallest unit of change for executing a release.
- a “release” refers to a representation of an intent to orchestrate a specific change to a service (e.g., deploy version 8, “add an internal DNS record,” etc.).
- an execution target represents an “instance” of a service or an instance of change to be applied to a service.
- a single service can be bootstrapped to each of one or more execution targets.
- An execution target may be associated with a set of devices (e.g., a data center).
- “Bootstrapping" a single service is intended to refer to the collective tasks associated with provisioning and deployment of any suitable number of resources (e.g., infrastructure components, artifacts, etc.) corresponding to a single service.
- Bootstrapping a region is intended to refer to the collective tasks associated with bootstrapping each of the services intended to be in the region.
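- The relationship between bootstrapping a region and bootstrapping its individual services could be sketched as follows; the function names and service list are hypothetical illustrations, not from the disclosure:

```python
def bootstrap_region(services, provision, deploy):
    """A region bootstrap is the collection of per-service bootstraps:
    each service's resources are provisioned, then its artifacts deployed."""
    for service in services:  # assumed ordered so dependencies come first
        provision(service)
        deploy(service)
    return services
```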
- a "service” refers to functionality provided by a set of resources, typically in the form of an API that customers can invoke to achieve some useful outcome.
- a set of resources for a service includes any suitable combination of infrastructure, platform, or software (e.g., an application) hosted by a cloud provider that can be configured to provide the functionality of a service.
- a service can be made available to users through the Internet.
- An "artifact” refers to code being deployed to an infrastructure component or a Kubernetes engine cluster, this may include software (e.g., an application), configuration information (e.g., a configuration file), credentials, for an infrastructure component, or the like.
- IaaS provisioning (or “provisioning”) refers to acquiring computers or virtual hosts for use, and even installing needed libraries or services on them.
- provisioning a device refers to evolving a device to a state in which it can be utilized by an end-user for their specific use.
- a device that has undergone the provisioning process may be referred to as a “provisioned device.”
- Preparing the provisioned device (installing libraries and daemons) may be part of provisioning; this preparation is different from deploying new applications or new versions of an application onto the prepared device. In most cases, deployment does not include provisioning, and the provisioning may need to be performed first.
- IaaS deployment refers to the process of providing and/or installing a new application, or a new version of an application, onto a provisioned infrastructure component.
- additional software may be deployed (e.g., provided to and installed on the infrastructure component).
- the infrastructure component can be referred to as a "resource” or “software resource” after provisioning and deployment has concluded. Examples of resources may include, but are not limited to, virtual machines, databases, object storage, block storage, load balancers, and the like.
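- The provisioning-before-deployment ordering described above can be illustrated with a minimal sketch; the class and method names are assumptions for illustration only:

```python
class InfrastructureComponent:
    """Illustrates the ordering constraint: deployment targets a component
    that has already been provisioned (host acquired, libraries installed)."""

    def __init__(self, name):
        self.name = name
        self.provisioned = False
        self.applications = []

    def provision(self):
        # Acquire the host and install needed libraries and daemons.
        self.provisioned = True

    def deploy(self, application):
        # Deployment does not include provisioning; provisioning comes first.
        if not self.provisioned:
            raise RuntimeError(f"{self.name}: provision before deploying")
        self.applications.append(application)
```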
- a "virtual bootstrap environment” (ViBE) refers to a virtual cloud network that is provisioned in the overlay of an existing region (e.g., a "host region”). Once provisioned, a ViBE is connected to a new region using a communication channel (e.g., an IPSec Tunnel VPN).
- Certain essential core services like a deployment orchestrator, a public key infrastructure (PKI) service, a dynamic host configuration protocol service (DHCP), a domain name service (DNS), and the like can be provisioned in a ViBE. These services can provide the capabilities required to bring the hardware online, establish a chain of trust to the new region, and deploy the remaining services in the new region.
- a "Manager Service” may refer to a service configured to manage provisioning and deployment operations for any suitable number of services as part of a prefab region build.
- a manager service may be used in conjunction with one or more additional prefab services to orchestrate a region build in a prefab factory as well as for managing how the prefabbed region is installed and configured at the destination data center after it is built and shipped over.
- the manager service and other prefab services may be hosted in an existing region of a CSP.
- a "host region” refers to a region that hosts a virtual bootstrap environment (ViBE). A host region may be used to bootstrap a ViBE.
- a "target region” refers to a region under build in the prefab factory. During a prefab region build, the target region is associated with physical space, power, and cooling provided by the prefab factory. After bootstrapping, once the prefabbed region has been shipped to the destination data center, the prefabbed region is associated with the destination data center into which it gets installed.
Prefab Region Build
- In some examples, techniques for building a region at a prefab factory are described herein.
- Such techniques can include one or more prefab services (e.g., manager service, network service, inventory service, testing service, deployment orchestration system) hosted by a CSP that can manage bootstrapping (e.g., provisioning and deploying software to) infrastructure components for one or more regions within the prefab factory.
- the prefab factory may be configured to support multiple region builds simultaneously. For example, physical resources (e.g., server racks, network switches, etc.) of a first prefab region may be installed at one location in the prefab factory while physical resources of a second prefab region may be installed at a second location in the prefab factory.
- Each prefab region can be connected to a dedicated network fabric of the prefab factory to provide networking connections to each prefab region independently, so that each region can communicate with the prefab services and/or other cloud services to support the region build.
- a build request can include a specification of the region, e.g., a number of server racks for the region, a number of computing devices, a number and type of services to be hosted by the region, a network topology of the region, etc.
- the prefab services can generate instructions to install (e.g., by factory personnel) the corresponding physical infrastructure in the prefab factory, which can include networking the physical devices together on their racks, positioning the racks at locations in the prefab factory, and connecting the devices to the static network fabric of the prefab factory.
- the manager service can then orchestrate the provisioning of the region infrastructure and deployment of software resources to the prefab region infrastructure, configure the prefab region for transmission, manage (e.g., schedule and monitor) the transmission of the prefab region, and perform testing and verification of the prefab region once it reaches its destination site.
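- The sequence of stages the manager service orchestrates could be sketched as a simple pipeline; the stage names, handler interface, and return convention are illustrative assumptions, not the disclosed design:

```python
PREFAB_BUILD_STAGES = [
    "install_physical_infrastructure",  # racks placed, cabled to static fabric
    "provision_infrastructure",
    "deploy_software_resources",
    "prepare_for_transit",              # e.g., encrypt data volumes
    "ship_to_destination",
    "test_and_verify_at_destination",
]

def run_region_build(region, handlers):
    """Drive a prefab region build through each stage in order; return the
    first failing stage (for remediation), or None if all succeed."""
    for stage in PREFAB_BUILD_STAGES:
        if not handlers[stage](region):
            return stage
    return None
```

The fixed stage order mirrors the description: software is deployed in the factory, the region is secured for transit, and verification happens only after arrival at the destination site.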
- the prefab factory can centralize the region build process to provide more efficient use of computing and networking resources that support region build. For example, the prefab factory may be sited "close" (e.g., with low-latency and high data rate networking connections) to a host region that includes the prefab services and/or a ViBE.
- the prefab factory also provides improved physical and computational security for the devices during region build, as the CSP can control the prefab factory and the network connections therein.
- the prefab factory improves the management of the inventory of physical components.
- the manager service can determine which computing devices are needed for a particular region build, which may be stored at or near the prefab factory. As regions are built and shipped, infrastructure for new regions can be quickly moved into the prefab factory and installed, increasing efficiency.
- Turning now to the figures, FIG. 1 is a block diagram illustrating a prefabrication system 100 including a prefab factory 102 for building regions (e.g., Prefab Region 106A, Prefab Region 106B, Prefab Region 106C) and preparing the region computing devices for transmission to target data centers (e.g., data center 108, data center 110), according to at least one embodiment.
- Each region being built in the prefab factory 102 can include one or more devices that form the computing environment of a data center.
- the prefab factory 102 can be used to build multiple regions simultaneously. For example, prefab factory 102 can build all of Prefab Region 106A, Prefab Region 106B, and Prefab Region 106C at the same time.
- the devices of a region may be installed and staged in the prefab factory 102 prior to beginning infrastructure provisioning and software deployment operations.
- the prefab factory 102 can be a facility similar to a data center, including sufficient power, cooling, and networking infrastructure to support building one or more regions.
- the prefab factory 102 may be located in proximity to existing computing infrastructure of a CSP (e.g., CSP 104).
- CSP 104 can operate existing data centers for one or more regions.
- the prefab factory 102 can be located close to or even adjacent to an existing data center of a host region to provide high data rate network connections between the cloud services of the CSP and the computing devices of the regions being built in the prefab factory 102.
- a prefab region being built in the prefab factory 102 can include any suitable number of physical resources, including computing devices (e.g., servers, racks of multiple servers, etc.), storage (e.g., block storage devices, object storage devices, etc.), networking devices (e.g., switches, routers, gateways, etc.), and the like.
- Each region may have different physical resources according to the specific requirements of the destination region and data centers. For example, Prefab Region 106A may include 100 racks each having 40 computing devices, while Prefab Region 106B may include 20 racks each having 30 computing devices.
- Each rack of computing devices can include one or more networking devices communicatively connected to the server devices on the rack and configured to connect to networking infrastructure of the prefab factory 102 to form a network with other computing devices of the prefab region.
- Each rack can also include power supplies and cooling devices to support the operation of the computing devices on the racks.
- the prefab factory 102 can include any suitable number of networking devices to support the installation and connection of the one or more computing devices of the prefab regions being built.
- the prefab factory 102 can include any suitable number of leaf and spine switches to support the connection of computing devices on multiple racks to form the network of a prefab region.
- the prefab factory 102 can include network cabling installed in the facility that can provide network connections to the networking infrastructure of the prefab factory 102.
- the network cabling may be positioned to terminate at locations within the prefab factory 102 where racks of computing devices for the prefab regions may be installed during region build operations. Additional details about the networking infrastructure and configuration of the prefab factory are provided below with respect to FIGS. 9-11.
- the prefab factory 102 may be connected over one or more networks to services provided by CSP 104.
- CSP 104 can provision infrastructure components on the physical resources of the prefab regions and deploy software resources, configurations, and/or other artifacts to the provisioned infrastructure components.
- CSP 104 can provision the computing devices of Prefab Region 106A to host one or more virtual machines, provide hostnames, network addresses, and other network configurations for the provisioned physical and virtual devices, and then deploy one or more services to be executed on the provisioned infrastructure.
- the prefab region may be brought to a state that is close to the final production state of the devices when they are installed at the destination facility.
- the physical resources may be configured for transmission/transportation to the destination facility.
- the term "transmission" may be used synonymously with the term "transportation" within the context of moving the physical resources associated with the prefab region from the prefab factory to a destination site.
- Configuring the prefab region for transmission can include obtaining a "snapshot" of the current network configuration of the computing devices in the prefab region, storing the snapshot, providing a portion of the snapshot to each computing device that includes identifiers for each device and its neighboring devices within the network, encrypting data volumes of the computing devices, and configuring the devices to boot into a test state when powered on after transmission.
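The snapshot-partitioning step described above can be illustrated with a short sketch. All names, field layouts, and the adjacency representation below are assumptions for illustration, not the patented implementation:

```python
# Hypothetical sketch: split a full network-configuration snapshot into
# per-device portions, each holding the device's own identifiers plus
# the identifiers of its immediate neighbors (field names are assumed).
full_snapshot = {
    "devices": {
        "server-536": {"addr": "10.0.1.5", "rack": "532A"},
        "switch-534": {"addr": "10.0.1.1", "rack": "532A"},
        "switch-548": {"addr": "10.0.2.1", "rack": "532B"},
    },
    # adjacency: device -> list of (neighbor, connection id)
    "links": {
        "server-536": [("switch-534", "conn-540")],
        "switch-534": [("server-536", "conn-540"), ("switch-548", "conn-546")],
        "switch-548": [("switch-534", "conn-546")],
    },
}

def portion_for(device_id, snapshot):
    """Build the snapshot portion stored on one device before shipping."""
    neighbors = snapshot["links"].get(device_id, [])
    return {
        "self": {device_id: snapshot["devices"][device_id]},
        "neighbors": {n: snapshot["devices"][n] for n, _conn in neighbors},
        "connections": dict(neighbors),
    }

portion = portion_for("server-536", full_snapshot)
print(sorted(portion["neighbors"]))  # ['switch-534']
```

Each device then carries only the fragment of the topology it needs to identify itself and its neighbors when powered on at the destination.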
- the prefab services of the CSP 104 may also capture device snapshots, which are disk images taken of fully configured individual switches, compute devices, and smart NICs in the various racks to be shipped to the destination site. The device snapshots can enable rapid replacement of any device in the shipped racks that is non-functional after arrival and has to be replaced.
- For example, Prefab Region 106B may be configured to be delivered by truck 112 to data center 108, while Prefab Region 106C may be configured to be delivered by aircraft 114 to data center 110.
- the destination facilities can be data centers that have been built to host the prefab region devices, with networking, power, cooling, and other infrastructure provided according to the configuration of the prefab region.
- the data centers can have network connections to the CSP 104.
- Installation of the prefab region can include manual operations for connecting racks and their computing devices to the network infrastructure of the data centers and other related tasks. Once the physical connections have been made, the devices of the prefab region can be powered on, which can initiate one or more testing operations by the devices based on the configuration that was performed at the prefab factory 102 prior to transmission.
- the prefab regions can also connect to the CSP 104 via one or more network connections to the data center to communicate with prefab services. For example, Prefab Region 106B can connect to CSP 104 via connection 118, while Prefab Region 106C can connect to CSP 104 via connection 116.
- FIG. 2 is a block diagram illustrating a prefabrication system 200 including a prefab factory 202 connected to prefab services 210 provided by a CSP 204 for building regions, according to at least one embodiment.
- the prefab factory 202 may be an example of prefab factory 102 of FIG. 1, and CSP 204 may be an example of CSP 104 of FIG. 1.
- the prefab factory 202 may interface with the CSP 204 via network 208, which may be a public network like the Internet, a private network, or other network.
- the prefab services 210 can include manager service 212, inventory service 214, testing service 216, orchestration service 218, and network service 220.
- the prefab services 210 can perform operations corresponding to building the prefab region 206 in the prefab factory 202, including managing a bootstrapping environment (e.g., ViBE 222), provisioning infrastructure components in the Prefab Region 206, deploying software resources to the Prefab Region 206, configuring the network of the Prefab Region 206, testing the Prefab Region at various points during the build process, and managing the physical inventory (e.g., physical inventory 224) of computing devices used to build Prefab Region 206 and other prefab regions being built at prefab factory 202.
- the manager service 212 can perform tasks to coordinate the operations of the prefab services 210, including scheduling prefab region build operations by other prefab services 210, generating physical build requests and corresponding instructions, initiating shipping of the prefab region 206 to a destination site, and managing the provisioning and deployment of resources in the prefab region 206 both in the prefab factory 202 and at the destination site.
- a physical build request can specify the number and type of physical resources to be used in Prefab Region 206.
- the physical build request can also include a set of instructions usable by personnel to install the corresponding physical resources in the prefab factory 202.
- the manager service 212 may generate a physical build request that specifies the number of racks and server devices for Prefab Region 206, the number of networking devices usable to connect the server devices to form the network of Prefab Region 206, and the connection plan that determines the networking connections between the specified server devices, networking devices, and the existing networking infrastructure of the prefab factory 202.
- the physical build request can also include instructions for personnel to obtain physical devices from an associated location (e.g., physical inventory 224) and instructions to install the devices in the prefab factory 202 at specified locations.
- operations of the physical build request may be performed by automated systems under the control of the manager service 212.
- obtaining racks of server devices from physical inventory 224 and installing the racks at prefab factory 202 may be performed by a robotic system configured to move physical racks from site to site.
- the inventory service 214 may be configured to track and monitor physical devices corresponding to one or more regions (e.g., one or more data centers of a region).
- the inventory service 214 can also track physical devices for one or more prefab regions (e.g., Prefab Region 206) in the prefab factory 202. Tracking and monitoring the physical devices can include maintaining an inventory of the devices according to an identifier of the device (e.g., serial number, device name, etc.) and the association of the devices with a data center.
- the inventory service 214 can provide inventory information to other prefab services 210, including manager service 212, for use in the prefab region build process. For example, inventory service 214 can determine if a physical device is located at prefab factory 202 or at a destination site. Inventory service 214 can query devices to determine their location and/or association with a region, prefab region, or data center via a network (e.g., network 208). Inventory service 214 can also maintain a physical inventory (e.g., physical inventory 224) of devices that are stored for use in prefab region build operations. For example, inventory service 214 can track physical devices as they are received at the physical inventory 224 and then retrieved from the physical inventory 224 to be used as part of a prefab region at prefab factory 202.
- inventory service 214 can provide inventory information to manager service 212 that is usable to generate a physical build request for Prefab Region 206 that includes instructions to obtain physical resources from physical inventory 224 and install the physical resources at the prefab factory 202.
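The inventory bookkeeping described above might be sketched as follows; the class, serial-number format, and location labels are invented for illustration and do not reflect the actual inventory service implementation:

```python
# Hypothetical sketch of inventory tracking: devices are tracked by an
# identifier (e.g., serial number) and associated with a location such
# as the physical inventory, the prefab factory, or a destination site.
class Inventory:
    def __init__(self):
        self._devices = {}  # serial -> {"type": ..., "location": ...}

    def receive(self, serial, device_type):
        # Device arrives at the physical inventory warehouse.
        self._devices[serial] = {"type": device_type,
                                 "location": "physical-inventory"}

    def move(self, serial, location):
        # E.g., retrieved for installation at the prefab factory.
        self._devices[serial]["location"] = location

    def at(self, location):
        return sorted(s for s, d in self._devices.items()
                      if d["location"] == location)

inv = Inventory()
inv.receive("SN-100", "server")
inv.receive("SN-101", "switch")
inv.move("SN-100", "prefab-factory")
print(inv.at("prefab-factory"))  # ['SN-100']
```

A manager service could query such a structure to decide which devices a physical build request should pull from storage.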
- the physical inventory 224 may be a warehouse or storage facility for storing physical resources (e.g., computing devices) for use in prefab region build operations.
- the physical inventory 224 may be located near the prefab factory 202 to facilitate retrieval of physical resources according to a physical build request.
- the physical inventory 224 may be a building adjacent to a building used for the prefab factory 202.
- the physical inventory 224 may be located within the prefab factory 202.
- Physical resources may be placed into and retrieved from the physical inventory 224 by personnel associated with the CSP and the prefab factory 202. In some instances, during prefab region build operations, the retrieval and installation of physical resources from physical inventory 224 may be done by robots, automated guided vehicles, or other similar autonomous or semi-autonomous systems using instructions provided by the physical build request.
- the orchestration service 218 may be configured to perform bootstrapping operations to provision infrastructure components in the Prefab Region 206 and to deploy software resources to the Prefab Region 206.
- the orchestration service 218 can also construct a bootstrapping environment (e.g., ViBE 222) for use when bootstrapping resources into the Prefab Region 206.
- the orchestration service 218 may be an example of a deployment orchestrator described above.
- the orchestration service 218 may be configured to bootstrap (e.g., provision and deploy) services into a prefab region (e.g., Prefab Region 206) based on predefined configuration files that identify the resources (e.g., infrastructure components and software to be deployed) for implementing a given change to the prefab region.
- the orchestration service 218 can parse and analyze configuration files to identify dependencies between resources.
- the orchestration service 218 may generate specific data structures from the analysis and may use these data structures to drive operations and to manage an order by which services are bootstrapped to a region.
- the orchestration service 218 may utilize these data structures to identify when it can bootstrap a service, when bootstrapping is blocked, and/or when bootstrapping operations associated with a previously blocked service can resume.
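One plausible reading of the dependency-driven ordering above is a topological sort over a service dependency graph. The sketch below is a simplification under that assumption; the service names are invented and the real orchestration logic is more involved:

```python
# Hypothetical sketch: derive a bootstrap order from service
# dependencies parsed out of configuration files. A service can only
# be bootstrapped after all of its dependencies are available.
from graphlib import TopologicalSorter

# service -> set of services it depends on (invented names)
deps = {
    "identity": set(),
    "storage": {"identity"},
    "compute": {"identity", "storage"},
    "dns": set(),
}

order = list(TopologicalSorter(deps).static_order())
# Every service appears after all of its dependencies, so an
# orchestrator walking this order can tell when a service is ready to
# bootstrap and when it is blocked on an unfinished dependency.
assert order.index("compute") > order.index("storage")
print(order)
```

`TopologicalSorter` also supports incremental use (`prepare`/`get_ready`/`done`), which maps naturally onto resuming bootstrapping once a previously blocked dependency completes.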
- the orchestration service 218 may include components configured to execute bootstrapping tasks that are associated with a single service of a prefab region.
- the orchestration service 218 can maintain current state data indicating any suitable aspect of the current state of the resources associated with a service.
- desired state data may include a configuration that declares (e.g., via declarative statements) a desired state of resources associated with a service.
- orchestration service 218 can identify, through a comparison of the desired state data and the current state data, that changes are needed to one or more resources.
- orchestration service 218 can determine that one or more infrastructure components need to be provisioned, one or more artifacts deployed, or any suitable change needed to the resources of the service to bring the state of those resources in line with the desired state. Specific details about a particular implementation of orchestration service 218 are provided in U.S. Patent Application No. 17/016,754, entitled “Techniques for Deploying Infrastructure Resources with a Declarative Provisioning Tool,” the entire contents of which are incorporated herein by reference for all purposes. [0064]
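The desired-versus-current-state comparison can be illustrated with a minimal reconciliation sketch; the resource names, shapes, and plan format are assumptions, not the declarative provisioning tool itself:

```python
# Hypothetical sketch: diff a declared desired state against the
# current state to decide which resources must be provisioned anew
# and which existing resources need changes.
desired = {"vm-1": {"shape": "standard-4"},
           "vm-2": {"shape": "standard-8"},
           "lb-1": {"listeners": 2}}
current = {"vm-1": {"shape": "standard-4"},
           "vm-2": {"shape": "standard-4"}}  # drifted from desired

def plan_changes(desired, current):
    to_create = sorted(set(desired) - set(current))
    to_update = sorted(k for k in desired
                       if k in current and desired[k] != current[k])
    return {"create": to_create, "update": to_update}

print(plan_changes(desired, current))
# {'create': ['lb-1'], 'update': ['vm-2']}
```

The resulting plan is what an orchestrator would then execute to converge the prefab region's resources toward the declared state.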
- the ViBE 222 may be an example of a bootstrapping environment that can be used to deploy resources to a prefab region in a prefab factory 202.
- a ViBE can include a virtual cloud network (e.g., a network of cloud resources) implemented within a suitable region of a CSP (e.g., CSP 204).
- the ViBE can have one or more nodes (e.g., compute nodes, storage nodes, load balancers, etc.) to support operations to host services deployed by orchestration service 218.
- the ViBE services can in turn be used to support deployment of services into the Prefab Region 206.
- orchestration service 218 may deploy an instance of one or more constituent services of the orchestration service 218 into the bootstrapping environment (e.g., an instance of orchestration service 218), which in turn may be used to deploy resources from the ViBE 222 to the Prefab Region 206.
- any suitable amount of region infrastructure may be provisioned to support the deployed services within the ViBE (as compared to the fixed hardware resources of a seed server).
- the orchestration service 218 may be configured to provision infrastructure resources (e.g., virtual machines, compute instances, storage, etc.) for the ViBE 222 in addition to deploying software resources to the ViBE 222.
- the ViBE 222 can support bootstrapping operations for more than one prefab region in the prefab factory 202 at the same time.
- the ViBE 222 can be connected to the Prefab Region 206 so that services in the ViBE 222 can interact with the services and/or infrastructure components of the Prefab Region 206. This can enable deployment of production-level services, instead of the self-contained seed services used in previous systems, although it requires connectivity over the internet to the target region. Conventionally, a seed service was deployed as part of a container collection and used to bootstrap the dependencies necessary to build out the region.
- resources may be bootstrapped into the ViBE 222 and connected to the Prefab Region 206 in order to provision hardware and deploy services until the Prefab Region 206 reaches a self-sufficient state (e.g., self-sufficient with respect to services hosted within the Prefab Region 206).
- Utilizing the ViBE 222 allows for standing up the dependencies and services needed to provision infrastructure and deploy software, while making use of the host region's resources to break circular dependencies among core services.
- the testing service 216 may be configured to perform one or more test operations or validation operations on the Prefab Region 206 following the provisioning and/or deployment of resources.
- testing service 216 may perform a test that interacts with an instance of a service deployed to the Prefab Region 206 to verify an expected operation of the queried service.
- testing service 216 may perform a networking test to obtain hostnames, networking addresses, and/or other identifiers of the components of the Prefab Region 206 to compare to the expected identifiers of the components as specified in a build request or other specification for the Prefab Region 206.
- Testing service 216 may perform test operations both during the prefab region build process at prefab factory 202 and after delivery of the Prefab Region 206 to a destination site.
- the testing operations performed at the prefab factory 202 may be the same or different from testing operations performed after the Prefab Region 206 is delivered to the destination site.
- the network service 220 may be configured to determine the network configuration of the devices in the Prefab Region 206.
- the network service 220 can use configuration information from a build request to determine a network topology of the devices (e.g., servers, networking devices, racks of servers and networking devices, etc.).
- a network topology may refer to a graph representation of all the networking connections between each computing device in a prefab region.
- the network service 220 can use the configuration information to determine physical networking connections (e.g., network cabling connections) to be made between the devices in the prefab region.
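The graph representation of a network topology described above can be sketched briefly; the device names and the adjacency-list format are assumptions for illustration:

```python
# Hypothetical sketch: represent a planned network topology from a
# build request as a graph, then enumerate the physical cabling
# connections (undirected edges) to be made between devices.
topology = {
    "tor-switch-A": ["server-A1", "server-A2", "spine-1"],
    "tor-switch-B": ["server-B1", "spine-1"],
}

def cabling_plan(topology):
    """Each undirected edge corresponds to one cable to install."""
    edges = set()
    for device, peers in topology.items():
        for peer in peers:
            # Sort endpoints so each cable is counted once.
            edges.add(tuple(sorted((device, peer))))
    return sorted(edges)

for a, b in cabling_plan(topology):
    print(f"connect {a} <-> {b}")
```

A plan like this could also feed the installation instructions generated for factory personnel.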
- FIG. 3 is a block diagram illustrating a CSP system 300 that includes multiple host regions (e.g., host regions 304A-304C) that can support a ViBE (e.g., ViBEs 308A-308C) for deploying software resources to a Prefab Region 306 being built at a prefab factory 302, according to at least one embodiment.
- Prefab factory 302 may be an example of prefab factory 202 described above with respect to FIG. 2.
- ViBEs 308A-308C may each be examples of ViBE 222 of FIG. 2, while manager service 312 may be an example of manager service 212 of FIG. 2.
- Host regions 304A-304C may correspond to regions of the CSP and can be associated with one or more data centers having computing resources for hosting a ViBE.
- the host regions 304A-304C may correspond to different geographical locations.
- the manager service 312 may be an instance within one host region (e.g., host region 304B).
- the manager service 312 may correspond to a tenancy of the CSP and may therefore have an instance in multiple regions (e.g., host regions 304A, 304C) from which prefab services can be provided. Similarly, other prefab services (e.g., prefab services 210) may also be instances of services within a host region.
- a ViBE may be hosted within a host region to support prefab region build operations at prefab factory 302. Because a ViBE may be constructed by an orchestration service (e.g., orchestration service 218) as needed for bootstrapping a prefab region, the ViBE can be built in any suitable host region.
- Suitability as a host region can be based on network connectivity to the prefab factory 302 (e.g., a high-bandwidth, high data rate, low latency network connection between the data center(s) of the host region and the prefab factory 302), sufficient infrastructure resources to support the ViBE for one or more prefab region build operations (e.g., availability of computing resources in the host region for the length of time needed to provision and deploy the prefab region(s)), and/or jurisdictional considerations (e.g., a host region in the same country as the prefab factory to comply with regulations regarding data security).
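The suitability criteria above could be applied as a simple filter over candidate host regions. The thresholds, field names, and candidate data below are invented for illustration only:

```python
# Hypothetical sketch: filter candidate host regions by latency to the
# prefab factory, spare capacity, and jurisdiction (all values assumed).
candidates = [
    {"name": "304A", "latency_ms": 2, "spare_capacity": 0.4, "country": "US"},
    {"name": "304B", "latency_ms": 40, "spare_capacity": 0.6, "country": "US"},
    {"name": "304C", "latency_ms": 5, "spare_capacity": 0.1, "country": "DE"},
]

def suitable(region, factory_country="US",
             max_latency_ms=10, min_spare=0.2):
    return (region["latency_ms"] <= max_latency_ms
            and region["spare_capacity"] >= min_spare
            and region["country"] == factory_country)

print([r["name"] for r in candidates if suitable(r)])  # ['304A']
```

In practice the choice could weigh these factors rather than apply hard cutoffs; this sketch only shows the shape of the decision.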
- host region 304A may include a data center in close proximity to prefab factory 302, resulting in a low latency network connection between ViBE 308A and Prefab Region 306.
- a ViBE used to support the prefab region build may be constructed in a different host region.
- ViBE 308A may be used as part of a prefab region build at prefab factory 302 for one prefab region, but then ViBE 308B in host region 304B or ViBE 308C in host region 304C may be constructed and used for a subsequent region build operation.
- the prefab factory 302 may be built in a location to provide suitable connectivity to one or more host regions.
- prefab factory 302 may be constructed at a site adjacent to a data center of host region 304A, to provide suitable network connectivity between host region 304A and prefab factory 302.
- Prefab factory 402 may be an example of prefab factory 202 of FIG. 2.
- Prefab services 410 may be provided by the CSP and may be examples of prefab services 210 described above with respect to FIG. 2, including manager service 412 as an example of manager service 212 of FIG. 2 and inventory service 414 as an example of inventory service 214 of FIG. 2.
- Prefab Region 430 and Prefab Region 440 may be examples of other prefab regions described herein, including Prefab Region 206 of FIG. 2.
- prefab factory 402 may support multiple prefab region build operations at the same time.
- prefab factory 402 includes Prefab Region 430 and Prefab Region 440.
- Prefab Region 430 can include one or more server racks 432A- 432C.
- Each server rack can include one or more devices, including server devices and networking devices.
- server rack 432A can include switch 434 and server device 436.
- Switch 434 may be a top-of-rack switch that provides networking connections to other server racks (e.g., via top-of-rack switches at the other server racks) or other physical resources of the Prefab Region 430.
- Networking connections between physical resources in Prefab Region 430 may include parts of networking infrastructure 438.
- Networking infrastructure 438 may include a portion of the networking infrastructure of prefab factory 402, including network cabling, network switches, routers, and the like that form the network fabric of the prefab factory 402.
- Prefab Region 440 can include one or more server racks 442A-442B, which can include more or fewer computing devices than server racks 432A-432C of Prefab Region 430.
- server rack 442A can include switch 444 and server device 446.
- Each prefab region may be at a different point of the prefab region build process at any given time. For example, Prefab Region 430 may be undergoing infrastructure provisioning and resource deployment while Prefab Region 440 may be undergoing installation of physical resources.
- each prefab region at the prefab factory 402 may include a different arrangement of physical resources.
- Prefab Region 430 can include a greater number of server racks (e.g., racks 432A-432C) than Prefab Region 440, with each server rack supporting a greater number of computing devices than the server racks of Prefab Region 440 (e.g., server racks 442A, 442B). Because the number and arrangement of physical resources in each prefab region can be different, the network topology corresponding to the connections between the physical resources can be different for each prefab region. [0075] Inventory service 414 can track physical resources used to form the prefab regions in the prefab factory 402.
- the physical resources tracked by inventory service 414 can include server devices and networking devices as well as racks of server devices and networking devices. Inventory service 414 can also track physical resources at data centers for deployed regions, including prefab region devices after delivery to and installation at a destination site. In some embodiments, inventory service 414 can connect to the prefab regions (e.g., via a network) and query device identifiers for devices in the prefab regions. Inventory service 414 may provide information corresponding to the physical resources in a prefab region to manager service 412 as part of prefab region build operations. For example, manager service 412 may use inventory information from inventory service 414 to determine if physical resources for a prefab region were installed according to a physical build request.
- inventory service 414 can also maintain information corresponding to physical inventory 424 (e.g., a repository, warehouse, or other storage for computing devices and other physical resources used to construct a prefab region). Maintaining the physical inventory 424 can include tracking the number and type of physical resources available for use in a prefab region, maintaining a database or other datastore of inventory information, updating the inventory information as new physical resources are added to physical inventory 424 (e.g., delivery of new devices, construction of a server rack, etc.), and updating the inventory information as devices leave the physical inventory for use in the prefab factory 402 (as depicted by the arrows in FIG. 4).
- CSP personnel may interact with inventory service 414 to provide manual updates to inventory information.
- the manager service 412 can obtain inventory information from inventory service 414 for use when generating a physical build request. For example, the inventory information may be used by manager service 412 to determine which physical resources to install in the prefab factory 402 for a prefab region corresponding to the physical build request.
- FIG. 5 is a diagram illustrating a CSP system 500 for managing a network configuration of computing resources of a Prefab Region 530 being built in a prefab factory 502 using a manager service 512 and a network service 520, according to at least one embodiment.
- the prefab factory 502 and Prefab Region 530 may be examples of other prefab factories and prefab regions described herein, including prefab factory 202 and Prefab Region 206 of FIG. 2.
- Prefab services 510 may be provided by the CSP and may be examples of prefab services 210 described above with respect to FIG. 2, including manager service 512 as an example of manager service 212 of FIG. 2 and network service 520 as an example of network service 220 of FIG. 2. [0078] As described above with respect to FIG. 2, the manager service 512 can perform tasks to coordinate the operations of the prefab services 510, including scheduling prefab region build operations by other prefab services 510, generating physical build requests and corresponding instructions, and configuring Prefab Region 206 for shipping to a destination site.
- a physical build request can specify the number and type of physical resources to be used in Prefab Region 206.
- the network service 520 can use configuration information from a build request to determine a network topology of the devices (e.g., servers, networking devices, racks of servers and networking devices, etc.). The network service 520 can also determine the network configuration of devices of the Prefab Region 530 after the provisioning of infrastructure components in the Prefab Region 530. [0079] In some examples, the network service 520 can store a snapshot of the network configuration of a prefab region (e.g., Prefab Region 530).
- a snapshot can include information about the network topology of the prefab region at a specific point in time, including network identifiers (e.g., network addresses, hostnames, etc.) for the devices in the prefab region, the current network connections between the devices, the physical networking interfaces between the devices and the networking infrastructure 538 of the prefab factory 502, and network settings for the devices (e.g., port configurations, gateway configurations, etc.).
- server device 536 may be a computing device in server rack 532A of Prefab Region 530.
- Server device 536 may have a networking connection 540 to switch 534 of server rack 532A.
- the network configuration of Prefab Region 530 can then include information associating server device 536 to switch 534, including information specifying the type of network connection 540, the port of switch 534 to which server device 536 is connected, and the settings of both server device 536 and switch 534 that correspond to the networking connection 540 between them.
- the network configuration can include information that associates server device 536 with "neighboring" devices in Prefab Region 530 that have networking connections 542, 544 between them.
- the networking connections 542 and 544 may be via switch 534, so that server device 536 may be communicatively connected to other devices in server rack 532A via network connections 542, 544.
- "neighboring" devices of a given device in Prefab Region 530 can include each computing device on the same server rack.
- switch 534 may have network connections to one or more other switches within Prefab Region 530 (e.g., network connection 546 to a switch of server rack 532B).
- the network snapshot may be used to validate the physical installation (e.g., physical networking connections) of Prefab Region 530 after the devices are installed at the destination site.
- network service 520 can provide the network snapshot (or a portion of the snapshot) to each device in the Prefab Region 530 as part of configuring the Prefab Region 530 for transportation to a destination site.
- network service 520 may provide network snapshot 526 to server device 536 for storage at server device 536.
- Network snapshot 526 may be a portion of the network snapshot corresponding to the network configuration of the entire Prefab Region 530.
- Network snapshot 526 can include an identifier (e.g., network address, hostname, etc.) for server device 536 and information associating server device 536 with one or more other devices in Prefab Region 530.
- the information associating server device 536 with a neighboring device can include an identifier for the neighboring device and information about the network connection between them.
- server device 536 can use network snapshot 526 to identify neighboring devices and communicate with the neighboring devices over the network connection.
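The post-delivery validation suggested above might compare the neighbors a device actually observes after cabling against the neighbors recorded in its stored snapshot portion. The discovery mechanism, field names, and device identifiers in this sketch are assumptions:

```python
# Hypothetical sketch: validate physical installation at the
# destination site by diffing observed neighbors against the
# neighbors recorded in the device's snapshot portion.
snapshot_portion = {"self": "server-536",
                    "neighbors": {"switch-534": "conn-540"}}

def validate_neighbors(portion, observed_neighbors):
    expected = set(portion["neighbors"])
    observed = set(observed_neighbors)
    return {"missing": sorted(expected - observed),
            "unexpected": sorted(observed - expected)}

# Correct cabling: the device sees exactly the expected switch.
print(validate_neighbors(snapshot_portion, ["switch-534"]))
# Miscabled: the device was plugged into the wrong top-of-rack switch.
print(validate_neighbors(snapshot_portion, ["switch-999"]))
```

An empty diff indicates the rack was reconnected as it was at the prefab factory; any "missing" or "unexpected" entry points to a specific cable to recheck.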
- the network service 520 may also maintain a network configuration for the network fabric of the prefab factory 502.
- the prefab factory 502 can have networking infrastructure to support multiple, separate prefab regions being built at the same time.
- the prefab factory 502 can have multiple dedicated locations for placing server racks for the prefab regions being built. Each location may have a set of networking cables of the networking infrastructure that terminate at the location that can be connected to the server racks. Based on the devices placed at the location, specific cables from the set of networking cables can be connected to the devices (e.g., to a top-of-rack switch) to connect the devices to other devices in the prefab region using a portion of the network fabric of the prefab factory 502.
- server rack 532A may be placed at a location within the prefab factory 502 and connected to networking infrastructure 538 using switch 534, while server rack 532B may be placed at a second location and connected to networking infrastructure 538.
- configuring Prefab Region 530 for transportation to a destination site can also include the manager service 512 configuring each device to enter a testing state during a subsequent power-on of the device, encrypting data volumes of the devices with encryption keys, storing the encryption keys at a device that can act as a key server for the Prefab Region 530 during initialization at the destination site, and configuring one of the devices to act as a dynamic host configuration protocol (DHCP) server during initialization of the Prefab Region 530 at the destination site.
- Manager service 512 may also generate instructions usable by personnel or robotic systems associated with the prefab factory 502 for packing the devices for transmission. Manager service 512 may also generate instructions usable by personnel associated with the destination facility for installing and connecting the devices at the destination facility.
- In some embodiments, configuring the devices of Prefab Region 530 can also include operations to capture device snapshots of each device.
- a device snapshot can include a software image of one or more disk drives or other memory of a computing device, which can be used to duplicate the software configuration of the device onto a replacement device.
- the manager service 512 can generate the device snapshots in conjunction with one or more of the prefab services 510. The device snapshots may be stored along with the network snapshot(s) in a database or datastore (e.g., snapshot(s) 524).
- manager service 512 can generate device snapshot 552 of server device 550 of Prefab Region 530 at the prefab factory 502.
- the device snapshot 552 may be used to image another physical device that has the same or similar physical configuration as server device 550 in order to create a duplicate server device in the event that server device 550 fails (e.g., damaged or lost during transit to the destination site).
- FIG. 6 is a diagram illustrating a CSP system 600 for testing and evaluation of a Prefab Region 530 after delivery to a destination site 602 using a manager service 612 and a testing service 616, according to at least one embodiment.
- the destination site 602 may be a data center facility at a location corresponding to a new region to be deployed for the CSP using the computing resources of Prefab Region 530.
- Prefab services 610 may be provided by the CSP and may be similar to prefab services 210 of FIG. 2, including manager service 612 as an example of manager service 212, testing service 616 as an example of testing service 216, and orchestration service 618 as an example of orchestration service 218 of FIG. 2.
- Shipping Prefab Region 530 to the destination site 602 can include powering down each device, disconnecting the devices from the networking infrastructure of the prefab factory, and packing the devices as appropriate for transit.
- Server racks (e.g., server racks 532A, 532B) may be shipped intact, without disconnecting individual devices of the server rack.
- the server racks may be positioned in the destination site 602 per the physical layout of the resulting data center and connected to the networking infrastructure 638 of the destination site. For example, networking connections may be made between the networking infrastructure 638 and the switches of the server racks 532A, 532B by connecting one or more networking cables to the switches (e.g., switch 534).
- the devices in Prefab Region 530 may have been configured to boot into a test mode when first powered on at the destination site 602.
- the devices may have a dedicated boot volume to support the test mode during initialization at the destination site 602.
- the boot volume may be configured on an external device connected to each device in the Prefab Region 530.
- each server device (e.g., server device 536) may be connected to a smart network interface card (SmartNIC) that provides a low-overhead boot volume that can be used to boot the server device into a test mode. Because the boot volume may only be used to support the test mode, the data on the boot volume may not need to be encrypted as with data volumes on the server devices.
- the test mode may be configured to cause each computing device to validate its connection to other devices in the Prefab Region 530. The validation can determine if the physical network connections of the devices to the networking infrastructure 638 at the destination site 602 were made correctly.
- a device in the test mode may use a stored network configuration or portion of the network configuration that was determined by a network service (e.g., network service 520 of FIG. 5) and stored at each device.
- server device 536 can use network snapshot 526 to determine a neighboring computing device that is communicatively connected to server device 536 by network connection 542.
- server device 536 may send a validation request to the neighboring computing device. If the network connection 542 is intact, then server device 536 may receive a validation indication from the neighboring computing device that indicates that the validation request was successfully received at the neighboring computing device.
- the server device 536 may validate all of the connections specified in network snapshot 526.
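The test-mode validation pass over every connection in a snapshot can be sketched as a short loop. This is a hypothetical illustration; the snapshot layout, the device names, and the stubbed transport callback are all assumptions, not part of the disclosure.

```python
# Hypothetical per-device snapshot; identifiers are illustrative.
network_snapshot = {
    "device_id": "server-536",
    "neighbors": [
        {"device_id": "switch-534", "address": "10.0.1.2"},
        {"device_id": "server-537", "address": "10.0.1.37"},
    ],
}

def validate_connections(snapshot, send_validation_request):
    """Send a validation request over each connection listed in the
    snapshot; return the device ids of neighbors that did not answer."""
    failed = []
    for neighbor in snapshot["neighbors"]:
        if not send_validation_request(neighbor["address"]):
            failed.append(neighbor["device_id"])
    return failed

# Stub transport standing in for real network I/O: pretend only the
# switch answers, as if one physical connection was made incorrectly.
reachable = {"10.0.1.2"}
failures = validate_connections(network_snapshot, lambda addr: addr in reachable)
```

An empty `failures` list would correspond to all physical connections having been installed correctly at the destination site.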
- one device of Prefab Region 530 may be configured to act as a DHCP server (e.g., DHCP server 646).
- the DHCP server 646 may provide network addresses or other identifiers to the devices in Prefab Region 530 during initialization. For example, during test mode, each device may validate a connection to the DHCP server 646 and then receive an address, identifier, or other network configuration information from the DHCP server 646.
- the device may compare the received identifier to an identifier included in the network configuration that was generated by the network service during prefab region build operations at the prefab factory. For example, server device 536 can receive an identifier from DHCP server 646 and then compare the received identifier to an identifier in network snapshot 526. Because the Prefab Region 530 should not have undergone any component changes during transit, the network configuration of the Prefab Region 530 at the destination site 602 should be unchanged, including configuration information from DHCP server 646. That is to say, server devices in the Prefab Region should receive the same network addresses from DHCP server 646 after installation of the devices at the destination site 602. If the network configuration changes, then the server devices can indicate that the network configuration of Prefab Region 530 may be incorrect.
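The identifier comparison described above reduces to checking the address handed out at the destination site against the one recorded at the prefab factory. A minimal sketch, with hypothetical addresses:

```python
def verify_dhcp_identifier(snapshot_identifier, received_identifier):
    """Return True when the identifier received from the DHCP server at
    the destination site matches the identifier recorded in the stored
    network snapshot at the prefab factory."""
    return snapshot_identifier == received_identifier

# Illustrative values only; a mismatch would signal that the network
# configuration of the prefab region may be incorrect.
ok = verify_dhcp_identifier("10.0.1.36", "10.0.1.36")
mismatch = verify_dhcp_identifier("10.0.1.36", "10.0.9.99")
```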
- server device 550 may be damaged during transportation to the destination site 602. Discovery of the non-functional state of server device 550 may occur during testing operations to validate the network configuration of the Prefab Region 530. To recover, the manager service 612 can generate instructions to replace server device 550 with an identical physical device at the same location on server rack 532B. Once the replacement device is installed, the manager service 612 can deploy the device snapshot 552 that was generated during prefab region build operations in the prefab factory 502.
- Deploying the device snapshot 552 can include imaging one or more disk drives or other memories of the replacement server device to bring the replacement server device to the same software configuration as server device 550 in the Prefab Region 530 prior to transportation to the destination site 602.
- Other devices, including networking devices like switch 534, may be similarly replaced and restored using the captured device snapshots.
- the DHCP server 646 can perform test mode validation operations similar to other devices within Prefab Region 530. If DHCP server 646 can successfully validate the network connections between neighboring devices and itself, DHCP server 646 can exit test mode and begin operating as a DHCP server to other devices in the Prefab Region 530.
- DHCP server 646 may complete its test mode validation operations prior to other devices in Prefab Region 530 completing their test mode validation operations. For example, server device 536 may boot into test mode and attempt to validate a network connection to DHCP server 646 before validating network connection 542 or network connection 544 between itself and neighboring computing devices. DHCP server 646 may not send a validation indication to server device 536 until DHCP server 646 has completed its own test mode validation operations. Server device 536 can then wait a predetermined amount of time and retry the validation request to DHCP server 646. Similarly, other computing devices performing test mode validation operations may wait and retry validation requests until DHCP server 646 is operational.
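The wait-and-retry behavior described above can be sketched as a bounded retry loop. The function name, retry count, and wait interval below are hypothetical choices for illustration.

```python
import time

def validate_with_retry(send_request, retries=5, wait_seconds=30.0):
    """Retry a validation request until the DHCP server has completed
    its own test-mode validation and begins answering, up to a bounded
    number of attempts."""
    for _ in range(retries):
        if send_request():
            return True
        time.sleep(wait_seconds)
    return False

# Stub standing in for the DHCP server: it only answers from the
# third attempt onward, as if still finishing its own validation.
attempts = {"n": 0}
def stub_request():
    attempts["n"] += 1
    return attempts["n"] >= 3

succeeded = validate_with_retry(stub_request, retries=5, wait_seconds=0)
```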
- data volumes of the devices in Prefab Region 530 may be encrypted prior to transportation to the destination site 602.
- the encryption keys used to encrypt the data volumes of each device may be associated with that specific device.
- the encryption keys 644 may be stored at one of the computing devices in Prefab Region 530 configured to act as a key server for the Prefab Region 530 during initialization (e.g., stored at key server 642).
- the encryption keys 644 may themselves be encrypted by a master key.
- encryption keys 644 may be secured by a hardware security module (e.g., a trusted platform module (TPM)).
- the hardware security module may be part of key server 642 or may be part of another device connected to key server 642 (e.g., a SmartNIC, an external security device, etc.).
- the master key or external security device may be delivered to the destination site 602 separately from the Prefab Region 530 (e.g., by operations personnel) and provided to or installed at the key server 642 as part of the installation operations for Prefab Region 530.
- Key server 642 may perform test mode validation operations similar to other computing devices in Prefab Region 530. If test mode validation operations complete successfully, key server 642 may begin providing encryption keys 644 to other computing devices in the Prefab Region to decrypt the data volumes. For example, key server 642 may receive a key request from server device 536.
- key server 642 can decrypt the data volume storing encryption keys 644 (e.g., via a master key, via a hardware security module), retrieve an encryption key corresponding to server device 536, and send the encryption key to server device 536.
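The key-vending flow can be sketched as follows. This is a toy illustration: XOR stands in for a real symmetric cipher or hardware security module, and all device names and key sizes are assumptions.

```python
import secrets

def xor_bytes(data, key):
    # XOR stands in for a real cipher or HSM operation; illustrative
    # only, not secure for actual use.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

master_key = secrets.token_bytes(16)

# Per-device data-volume keys, held encrypted under the master key,
# as with encryption keys 644 stored at key server 642.
plain_keys = {"server-536": secrets.token_bytes(16)}
encrypted_store = {dev: xor_bytes(k, master_key) for dev, k in plain_keys.items()}

def vend_key(device_id):
    """Decrypt the stored key entry and return the requesting device's
    data-volume key, as the key server would after test-mode validation."""
    return xor_bytes(encrypted_store[device_id], master_key)
```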
- testing service 616 can perform one or more acceptance tests.
- An acceptance test can include verifying that all services are functioning as expected.
- testing service 616 can interact with a service executing at Prefab Region 530 to verify that the service is operating according to the requirements that define the acceptance test.
- Testing service 616 can provide results of an acceptance test to manager service 612 indicating that the Prefab Region build is complete.
- updates or other changes may be specified for one or more infrastructure components and/or software resources that had been provisioned at and/or deployed to Prefab Region 530 at the prefab factory. For example, a service may have been updated to a newer version during the transit time.
- orchestration service 618 can deploy updated software resources to Prefab Region 530 at destination site 602. Deploying an updated software resource may occur similar to deployment of software resources to the Prefab Region 530 at the prefab factory.
- the method 700 is an example method for deploying software resources to physical resources of a region being built in a prefab factory and preparing the physical resources for transmission to a destination data center, according to at least one embodiment.
- the method 700 may be performed by one or more components of a computer system, including one or more components of a computer system of a CSP (e.g., CSP 204 of FIG. 2) that execute a manager service (e.g., manager service 212 of FIG. 2).
- the operations of method 700 may be performed in any suitable order, and method 700 may include more or fewer operations than those depicted in FIG. 7.
- Some or all of the method 700 may be performed under the control of one or more computer systems configured with executable instructions and may be implemented as code (e.g., executable instructions, one or more computer programs, or one or more applications) executing collectively on one or more processors, by hardware or combinations thereof.
- the code may be stored on a computer-readable storage medium, for example, in the form of a computer program comprising a plurality of instructions executable by one or more processors.
- the computer-readable storage medium may be non-transitory.
- the method 700 may begin at block 702 with a manager service receiving a build request.
- the manager service may be an example of any manager services described herein, including manager service 212 of FIG. 2.
- the manager service may execute on one or more computing devices of a CSP computer system.
- the manager service may be one of a plurality of services of the CSP that are configured to perform operations to build a prefab region (e.g., Prefab Region 206 of FIG. 2) in a prefab factory (e.g., prefab factory 202 of FIG. 2).
- the build request may be a specification or configuration containing information characterizing a prefab region.
- the build request can include information that defines the size of the prefab region (e.g., the number of computing devices, server racks, etc.), the number and type of services, applications, and other software that will be executed on the computing devices in the prefab region, requirements for computing capabilities (e.g., the number of processors in each computing device, the computing speed of the processors, etc.) in the prefab region, requirements for the types of storage provided in the prefab region, and other similar definitions.
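The kinds of information carried by a build request can be pictured as a simple configuration structure. Every field name and value below is hypothetical, chosen only to mirror the categories listed above.

```python
# Hypothetical build-request structure; field names and values are
# illustrative, not from the disclosure.
build_request = {
    "region_name": "example-region",
    "server_racks": 2,                       # size of the prefab region
    "computing_devices_per_rack": 16,
    "services": ["compute", "block-storage", "database"],
    "compute_requirements": {
        "processors_per_device": 2,          # number of processors
        "min_clock_ghz": 2.4,                # computing speed
    },
    "storage_types": ["block", "object"],    # types of storage provided
}
```

A manager service could derive the physical build request (specific racks, locations, and cabling instructions) from a structure like this.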
- the build request may be provided by operations personnel or system architects that design the prefab regions.
- the manager service may generate a physical build request for building physical resources within a first data center.
- the first data center may be a prefab factory (e.g., prefab factory 202 of FIG. 2).
- the manager service can use information in the build request to generate the physical build request.
- the physical resources can include server devices, networking devices, and other computing devices that may be used to build a prefab region in the first data center.
- the physical build request can include information identifying specific physical resources to be built in the first data center.
- the manager service can interact with an inventory service (e.g., inventory service 214 of FIG. 2) to obtain an inventory of available server devices and server racks in a physical inventory of such devices (e.g., physical inventory 224 of FIG. 2).
- the manager service can then determine the specific server racks to be used to build the prefab region corresponding to the build request and include that information in the physical build request.
- the physical build request can also include instructions usable by, for example, operations personnel to retrieve, move, and install the physical resources into the first data center.
- the instructions can identify specific locations within the first data center to place server racks and instructions for completing specific network connections to the server rack to form the network of the prefab region.
- the manager service can implement a ViBE (e.g., ViBE 222 of FIG. 2) at a second data center.
- the second data center can be communicatively connected to the first data center.
- the manager service may implement the ViBE in a host region of a CSP that is connected to a prefab factory via a network (e.g., network 208 of FIG. 2).
- the manager service can implement the ViBE in conjunction with an orchestration service (e.g., orchestration service 218 of FIG. 2).
- the manager service may provide an indication to the orchestration service to build the ViBE, specifying the second data center in which the ViBE is to be constructed.
- the manager service may implement the ViBE in response to an indication that the physical resources corresponding to the physical build request have been built (e.g., installed, powered on, and functioning normally in the first data center).
- the indication may be provided by operations personnel after installing the physical resources.
- the indication may be provided by one or more of the physical resources after completing a self-check or other validation of the installation.
- the manager service can use the ViBE to deploy software resources to the physical resources.
- the software resources may be associated with cloud services executed on the physical resources.
- the software resources may be components of a production service (e.g., a database service) that will execute in a prefab region after the prefab region is delivered to a destination site.
- the manager service can deploy software resources in conjunction with the orchestration service.
- the manager service can generate an inventory of the physical resources.
- the manager service may operate in conjunction with the inventory service to generate the inventory.
- the manager service can use the inventory to generate a network configuration corresponding to a network topology of the physical resources in the first data center.
- the manager service may operate in conjunction with a network service (e.g., network service 220 of FIG. 2) to generate the network configuration.
- the network configuration may be a network snapshot of the prefab region.
- the network configuration can include an identifier for a physical resource in the inventory (e.g., a network address for a server device) and information associating the physical resource with neighboring physical resources according to the network topology.
- the network configuration may identify each computing device and the network connections between the computing device and one or more neighboring computing devices.
- the operations of block 710 and block 712 may be operations for configuring the physical resources for transmission to a destination site.
- the destination site may be a third data center.
- the manager service may send a portion of the network configuration to the physical resource.
- the portion of the network configuration can include the identifier corresponding to the physical resource and the information associating the physical resource with the neighboring physical resources in the network topology.
- configuring the physical resources for transmission to a destination site can include encrypting, using an encryption key associated with each physical resource, at least a portion of the software resources deployed to each physical resource and storing each encryption key at one of the physical resources (e.g., key server 642 of FIG. 6) designated to host a key service at the destination site.
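The per-device encryption step can be sketched as below. XOR is a stand-in for a real symmetric cipher, and the device id and key size are assumptions made for illustration.

```python
import secrets

key_store = {}  # would live on the device designated as the key server

def xor_cipher(data, key):
    # XOR stands in for a real symmetric cipher; illustrative only.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def encrypt_volume(device_id, volume_bytes):
    """Encrypt a portion of a device's software resources with a
    per-device key, and store the key for later vending at the
    destination site."""
    key = secrets.token_bytes(16)
    key_store[device_id] = key
    return xor_cipher(volume_bytes, key)

ciphertext = encrypt_volume("server-536", b"software resources")
restored = xor_cipher(ciphertext, key_store["server-536"])
```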
- the manager service can receive an indication that the physical resources have been delivered to and built (e.g., installed) at the destination site.
- the manager service can validate the topology of the physical resources at the destination site. For example, the manager service, in conjunction with the network service, may obtain a network configuration of the physical resources at the destination site and compare the network configuration to information included in a stored network snapshot that was obtained before the physical resources were shipped to the destination site. If the network topology of the physical resources at the destination site is validated, the manager service may deploy one or more updated software resources to the physical resources.
- the manager service may operate with the orchestration service to deploy updated software components for a service that was deployed in the prefab region at the prefab factory, but which was updated to a newer version during transit of the physical resources to the destination site.
- the manager service can perform operations to support the initialization of the physical resources at the destination site.
- the manager service can determine a dependency of a first cloud service (e.g., a deployed application) on a second cloud service (e.g., a database service).
- the first cloud service can include software resources hosted on a first physical resource while the second cloud service can include software resources hosted on a second physical resource of the physical resources. Because of the dependency, the first cloud service may not function correctly until the second cloud service is operating normally.
- the manager service can determine whether a portion of the network topology associated with the second physical resource was validated successfully and then send an indication that the first cloud service is available.
- the indication may be sent to an operations console or other system that is configured to report the availability of services and applications in the prefab region at the destination site as they become available. For example, the indication may be used to initiate one or more user acceptance tests on the newly available first cloud service.
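The dependency gate described above can be sketched in a few lines: a service is reported available only once its own host and the hosts of its dependencies pass topology validation. Service names and host mappings here are hypothetical.

```python
# Hypothetical service-to-host and dependency mappings.
host_of = {"app-service": "server-536", "db-service": "server-550"}
depends_on = {"app-service": ["db-service"]}

def can_report_available(service, validated_hosts):
    """Return True only when the hosts of the service and of every
    service it depends on have had their topology validated."""
    needed = [service] + depends_on.get(service, [])
    return all(host_of[s] in validated_hosts for s in needed)

early = can_report_available("app-service", {"server-536"})
ready = can_report_available("app-service", {"server-536", "server-550"})
```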
- changes may be made to the configuration of the prefab region.
- a prefab region may need to have additional computing resources to support additional or expanded applications and/or services once delivered to the destination site.
- the techniques described herein can address modifications to a prefab region while it is being built in the prefab factory.
- the manager service can generate an updated physical build request that can be used to modify the physical resources.
- the updated physical build request can specify the installation of an additional server rack into the prefab region at the prefab factory.
- one or more server devices may be replaced with a different type of server device (e.g., a device with a faster processor, additional processors, additional memory, etc.).
- the updated physical build request can include instructions usable to obtain, install, and/or modify the physical resources, for example by operations personnel at the prefab factory.
- the manager service can deploy updated software resources to the modified physical resources.
- the manager service can use the orchestration service and the ViBE to deploy software components of a new service to a new server rack in the prefab region.
- the manager service may deploy the updated software resources in response to receiving an indication that the physical resources were successfully modified.
- configuring the physical resources for transportation to the second data center can include generating device snapshots for one or more of the physical resources.
- the manager service can generate a software image of each server device in the prefab region and store the software images in a datastore or similar repository. When validating the prefab region after installation at the destination site, the manager service can determine that one of the physical resources has failed.
- FIG. 8 is an example method 800 for booting physical resources built at a prefab factory after delivery to a destination data center and verifying a network configuration of the physical resources, according to at least one embodiment.
- the method 800 may be performed by one or more components of a computer system, including one or more components of a computer system of a prefab region (e.g., Prefab Region 206 of FIG. 2) that are communicatively connected to a CSP hosting a manager service (e.g., manager service 212 of FIG. 2).
- the method 800 may be performed by a computing device of a prefab region, including server device 536 of FIG. 5, DHCP server 646, or key server 642 of FIG. 6.
- the operations of method 800 may be performed in any suitable order, and method 800 may include more or fewer operations than those depicted in FIG. 8.
- Method 800 may begin at block 802 with the computing device receiving a network configuration (e.g., network snapshot 526 of FIG. 5).
- the network configuration can include information specifying a network topology of physical resources in a first data center (e.g., prefab factory 202 of FIG. 2).
- the network configuration can include a first identifier (e.g., network address, hostname, etc.) associated with the computing device, a second identifier (e.g., network address, hostname, etc.) associated with a neighboring computing device, and information associating the computing device with the neighboring computing device (e.g., information specifying network connection 544 of FIG. 5).
- the computing device can be configured to use the network configuration to communicate over a network connection with the neighboring computing device.
- the computing device can be configured for transmission to a second data center (e.g., destination site 602 of FIG. 6). Configuring the computing device for transmission can include operations similar to those described above for blocks 710 and 712 of FIG. 7. Additionally, configuring the computing device for transmission to the second data center can include configuring the computing device to boot into a test mode during a subsequent power-on sequence. At block 806, the computing device may be booted into the test mode at the second data center. For example, once the computing device and other physical resources of the prefab region have been delivered to and installed at the second data center, the computing device may be powered on and enter the test mode.
- booting the computing device into the test mode can include booting from a boot volume stored on a SmartNIC connected to the computing device.
- the computing device can receive a new identifier.
- the new identifier can be received from a server device at the second data center.
- the server device can be a device configured to act as a DHCP server at the second data center.
- the identifier may be a network address for the computing device. As described above, the identifier may be the same as the first identifier associated with the computing device in the prefab region at the prefab factory, since no changes to the network configuration should have occurred during transit and installation of the physical resources at the second data center.
- the computing device can verify the new identifier by comparing the new identifier with the first identifier.
- the computing device can obtain the first identifier from the network configuration stored at the computing device prior to transmission.
- the computing device can send a validation request to the neighboring computing device.
- the validation request may be sent according to the second identifier associated with the neighboring computing device.
- the computing device can ping the neighboring device at a network address associated with the neighboring computing device.
- the computing device can validate a network connection to the neighboring computing device.
- the network connection can be characterized by the network configuration.
- the validation of the network connection can include receiving a response to the validation request, which may be a validation indication from the neighboring computing device.
- the response to the validation request may be an indication that the validation request was not received by the neighboring computing device, for example a request timeout indication.
- the validation indication can indicate that the physical networking between the computing device and the neighboring computing device has been installed correctly at the second data center.
- once the computing device successfully validates its connection to each neighboring computing device, the computing device can send an indication to the manager service that the network connections associated with the computing device were successfully validated at the destination site.
- the computing device may be configured to operate as a key server in the prefab region at the second data center.
- the computing device can obtain the master key from the secure storage volume, decrypt the data volume storing the encryption keys, and vend the encryption keys in response to key requests from the neighboring computing device or other computing devices.
- the computing system can determine that one or more of the computing devices at the second data center has failed or is otherwise not functioning correctly. For example, a server device of a server rack may have been damaged during transportation. To complete the installation of the prefab region at the second data center, the failed or otherwise non-functional computing device can be replaced with another device and configured with a software image of the failed device prior to the transportation of the devices to the second data center.
- the computing system can configure the neighboring computing device for transmission to the second data center by generating a device snapshot of the neighboring computing device.
- the device snapshot can include a software image of the neighboring computing device.
- the device snapshot may be generated by the manager service and/or other prefab services performing prefab region build operations at the prefab factory.
- the computing system can determine that the neighboring computing device is non-functional. For example, the computing device can receive a response to the validation request that indicates that the neighboring computing device has been damaged or is not functioning properly. In response to this determination, the manager service can generate instructions to replace the neighboring computing device with a replacement computing device.
- the instructions may be usable by personnel at the second data center to make the replacement (e.g., a like-for-like swap of the device on a server rack).
- the manager service can then deploy the device snapshot for the neighboring computing device to the replacement computing device, resulting in a device that can be identical to the failed device.
- the computing device can then re-send the validation request to determine the correct operation of the network connection between the computing device and the neighboring computing device.
Cable Termination Protection Apparatus
- As described briefly above and in related U.S. Non-Provisional Application No. , a static network fabric in a prefab factory can allow for installation of prefab regions without modifying the network infrastructure.
- Server racks with computing devices for the prefab region can be positioned at locations within the prefab factory that have cable terminations (e.g., overhead cable drops) for network cables of the static network fabric.
- the network cables and the corresponding cable termination connectors at each location can include multiple different types of cables/connectors as well as multiple cables/connectors of the same type (e.g., to provide redundancy).
- the network cables can connect to different ports of a networking device or computing device of the server racks that correspond to the cable termination connector to connect the server racks and form the region network.
- a cable termination protection apparatus (CTPA) can be positioned at each location to protect the cable termination connectors when they are not in use.
- the network cables that terminate at the location may be connected to ports of the CTPA when the network cables are not connected to a computing device of the prefab region.
- the CTPA ports may be sockets that match the cable termination connectors of the network cables.
- FIGS. 9A and 9B are diagrams illustrating example arrangements 900, 920 of a CTPA 902 that can be positioned within a prefab factory (e.g., prefab factory 202 of FIG. 2), according to some embodiments.
- the arrangements 900, 920 depict CTPA 902 positionable adjacent to a location in a prefab factory where the network cabling (e.g., network cables 908 of FIG. 9A) terminates.
- a set of network cables 908 can be configured to terminate at the location, so that one or more cables have a cable termination connector (e.g., a plug or other connector) at an end of the network cable at the location.
- the CTPA 902 may also be positioned in a prefab factory for cables installed in a raised floor configuration with the network cables coming from below the floor (e.g., cable tray below a raised floor) to terminate at the location.
- the CTPA 902 can include a frame 903 positionable at a location in the prefab factory.
- the frame 903 may be "positionable at a location” if the frame 903 is adjacent to (e.g., above, horizontally adjacent, etc.) the location and the set of network cables terminating at the location (e.g., set of network cables 908) can be connected to the CTPA 902 as described below.
- the frame 903 may have dimensions corresponding to a standard rack frame used in data center environments.
- the frame 903 may be 1U (i.e., 1.75 inches) in height and 19 inches wide according to a standard 19 inch rack frame width.
- Other standard dimensions can include 2U, 3U, or other heights, while other standard rack frame widths can include 23 inch widths.
- the frame 903 may include a body that occupies a volume similar to a computing device mounted into a server rack of standard dimensions.
- the frame 903 may be 1U in height, 19 inches in width, and 24 inches in depth.
- the frame 903 may be a face plate having standard dimensions without an enclosed body that extends at a depth from a mounting point.
- the frame 903 may be a metal plate 1U in height and 19 inches in width and mounted to rails 906 at the sides of the frame 903.
- the rails 906 may be standard rack rails that are mounted to a point in the ceiling of the prefab factory to support the CTPA 902 above a location where a server rack 912 is installed.
- the frame 903 can have a face on which a plurality of ports 904 is arranged.
- the face may be the side of the frame 903 that is at the front of the frame 903 when the frame 903 is positioned at the location.
- the face may be the side of the frame 903 that is accessible from the same side as the network connections of server rack 912 when server rack 912 is installed at the location.
- the plurality of ports 904 can be networking ports characterized by one or more physical standards.
- the physical standard can define ports including, but not limited to, multi-fiber push on (MPO), multi-fiber pull off, small form-factor pluggable (SFP), SFP+, SFP28, quad small form-factor pluggable (QSFP), QSFP+, QSFP28, or RJ45.
- the CTPA 902 includes a plurality of "blank" network ports that allow for a physical connection to a set of network cables 908 in the prefab factory, but which do not provide a communication connection.
- the set of network cables 908 can include cable termination connectors that also conform to the physical standard.
- the set of network cables 908 can include a cable that terminates with a QSFP28 connector.
- the QSFP28 connector of this cable can be connected to a QSFP28 port on the frame 903 of CTPA 902.
- the plurality of ports on the frame 903 can also include any suitable number and combination of physical standards.
- the plurality of ports could have four MPO ports, four QSFP28 ports, two QSFP+ ports, and two RJ45 ports.
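The matching of cable termination connectors to CTPA ports by physical standard, with spare ports left for port covers, can be sketched as below. The function name and the list/dict representation of ports and cables are illustrative assumptions.

```python
def assign_cables_to_ports(ports, cables):
    """Assign each cable's termination connector to a free CTPA port of the same
    physical standard (e.g., MPO, QSFP28, RJ45). `ports` is a list of port
    standards by position; `cables` maps a cable id to its connector standard.
    Returns (assignments, open_ports); open ports are candidates for port covers."""
    free = {std: [i for i, p in enumerate(ports) if p == std] for std in set(ports)}
    assignments = {}
    for cable_id, std in cables.items():
        if not free.get(std):
            raise ValueError(f"no free {std} port for cable {cable_id}")
        # Enclose the connector in a matching port, as if connected to a device.
        assignments[cable_id] = free[std].pop(0)
    used = set(assignments.values())
    open_ports = [i for i in range(len(ports)) if i not in used]
    return assignments, open_ports
```

With eight ports and five cables, as in FIG. 9A, three ports remain open and would receive port covers.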
- the plurality of ports 904 may include a greater number of ports than there are cables in the set of network cables 908. As shown in FIG. 9A, the CTPA 902 includes eight ports, while the set of network cables 908 includes five network cables. The set of network cables 908 can be connected to corresponding ports of the plurality of ports 904 to protect the cable termination connector of each cable by enclosing it in the corresponding port as if it were connected to an active computing device.
- the plurality of ports 904 may be arranged on the face of the CTPA 902 to match an arrangement of networking ports on a computing device, for example networking device 914 of server rack 912 or networking device 924 of server rack 922.
- Matching an arrangement of some ports on the networking device can include a substantial alignment between the plurality of ports 904 and a plurality of ports on the computing device.
- networking device 914 can include networking ports (e.g., two QSFP28 ports) for connection to the static network fabric of the prefab factory located on one side of the face of the networking device 914.
- the most efficient method to move two QSFP28 cables from the CTPA 902 to the ports on the networking device 914 may be to move the two QSFP28 cables straight down from a matched port of the plurality of ports 904.
- Matching an arrangement of some ports on the networking devices can also reduce the likelihood of cabling errors when installing server racks in the prefab factory by increasing the number of like-for-like connections to be made.
- FIG. 9A shows arrangement 900 with the CTPA 902 positioned above the server rack 912, which may be a typical arrangement for a prefab factory with network cables contained in overhead cable tray 910 above locations for the computing devices of the prefab regions in the prefab factory.
- the CTPA 902 may have a vertical alignment 916 between one or more of the plurality of ports 904 and ports on the networking device 914.
- FIG. 9B shows arrangement 920 with the CTPA 902 positioned adjacent to a location at which server rack 922 is installed.
- the CTPA 902 may have a horizontal alignment 926 between one or more of the plurality of ports 904 and ports on the networking device 924.
- Both the vertical alignment 916 and the horizontal alignment 926 can be examples of a substantial alignment between the plurality of ports 904 and a plurality of ports on a computing device positioned at a location in the prefab factory, according to certain embodiments.
- the CTPA 902 can include a port cover 917 for at least one port of the plurality of ports 904. Because the set of network cables 908 may include fewer cables than there are ports of the CTPA 902, some of the plurality of ports 904 may not be connected to a network cable. To prevent dust and debris from entering the open ports, the port cover 917 can be connected to the open port to substantially cover the aperture of the open port.
- the port cover 917 can include a molded rubber plug that fits into the open port.
- FIG. 10 is a diagram depicting example steps for disconnecting networking cables from a CTPA 1002 and re-connecting the networking cables to a networking device 1014 according to instructions generated based on the static network fabric of a prefab factory (e.g., prefab factory 202 of FIG. 2) and physical configuration parameters of the networking device 1014, according to at least one embodiment.
- the networking device 1014 may be a computing device of server rack 1012.
- the physical configuration parameters can include information specifying the number and types of networking ports of the networking device 1014, a port name or other identifier for each of the networking ports, networking connections between the networking device 1014 and other computing devices on server rack 1012, and/or the locations of the networking ports on the networking device 1014 with respect to the physical dimensions of the networking device 1014.
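The physical configuration parameters just described can be modeled as a small data structure. This is only an illustrative sketch; the class names, field names, and units are assumptions, not a format defined in the source.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class PortSpec:
    # One networking port on the device face; all fields are illustrative.
    name: str           # port identifier, e.g. "eth1/1"
    standard: str       # physical standard, e.g. "QSFP28" or "RJ45"
    x_offset_in: float  # port location with respect to the device's physical dimensions

@dataclass
class PhysicalConfig:
    """Illustrative model of the physical configuration parameters described above."""
    device_id: str
    ports: List[PortSpec]
    # networking connections between this device and other computing devices on the rack
    rack_connections: Dict[str, str] = field(default_factory=dict)
```

A record like this, stored in a datastore, would give the instruction generator both the port types and their physical locations.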
- a manager service (e.g., manager service 212 of FIG. 2) and/or a network service (e.g., network service 220 of FIG. 2) can generate the instructions based on the static network fabric of the prefab factory and the physical configuration parameters.
- the instructions may be usable by personnel 1006 to move the server rack 1012 to a location in the prefab factory and connect one or more network cables of the set of network cables 1008 to the networking device 1014.
- the instructions may be sent to a user device and presented to personnel 1006 on a display of the user device.
- the instructions can identify the network cables to be connected to the networking device 1014 and the order in which the identified cables are to be disconnected from the CTPA 1002 and connected to ports on the networking device 1014. By determining the order of disconnecting/reconnecting the network cables, the instructions can reduce the likelihood of incorrect connections and reduce the time needed to connect the server rack 1012 to a prefab region network by utilizing the substantial alignment of the ports on the CTPA 1002 with ports on the networking device 1014.
- personnel 1006 can disconnect a first network cable from a first port of the plurality of ports 1004.
- the first network cable can be identified by the instructions.
- the first port may be aligned vertically with a corresponding network port of the networking device 1014.
- the personnel 1006 may then reconnect the first network cable to the corresponding network port. Because of the vertical alignment, the personnel 1006 may only move the first network cable directly downward to the corresponding port.
- personnel 1006 can disconnect a second network cable from a second port of the plurality of ports 1004.
- the second network cable can be identified by the instructions. As with the first port, the second port may be aligned vertically with a corresponding network port of the networking device 1014.
- the personnel 1006 may then reconnect the second network cable to the corresponding network port.
- personnel 1006 can disconnect a third network cable from a third port of the plurality of ports 1004.
- the third network cable may not be vertically aligned with a corresponding network port. Because of the lack of vertical alignment and/or the position of the corresponding network port on the networking device 1014, the instructions may have the third network cable disconnected and reconnected to the networking device after the first network cable and the second network cable have been connected, thus avoiding connection errors.
- FIG. 11 is an example method 1100 for generating instructions usable to disconnect networking cables from a CTPA (e.g., CTPA 1002 of FIG. 10) and reconnecting the networking cables to a networking device (e.g., networking device 1014 of FIG. 10) in a prefab factory, according to at least one embodiment.
- the method 1100 may be performed by a computing device of a CSP, including a computing device configured to execute one or more prefab services (e.g., manager service 212 and/or network service 220 of FIG. 2).
- the method 1100 may begin at block 1102 with the computing device receiving a build request for a prefab region data center rack.
- the build request may be similar to the build request described above with respect to FIGS. 2, 7, and 8.
- the data center rack may be an example of any of the server racks described herein containing computing devices (e.g., server devices, networking devices, etc.), including server rack 1012 of FIG. 10.
- the computing device can obtain physical configuration parameters for computing devices on the data center rack.
- the physical configuration parameters can include information that identifies the connection between each computing device and specific ports on a networking device to which it is connected.
- the physical configuration parameters can also include information that specifies at least one of the networking ports on the networking device that is configured to be connected to the static network fabric of a prefab factory.
- the physical configuration parameters can be stored in a datastore accessible to the computing device.
- the computing device can obtain cabling specification information corresponding to a location at the data center and a plurality of network cables (e.g., the set of network cables 1008 of FIG. 10) configured to terminate at a CTPA at the location.
- the cabling specification information may be an example of information for a static network fabric in a prefab factory described above with respect to FIG. 10.
- the cabling specification information may identify the number of network cables that terminate at the location in the prefab factory with a CTPA, including physical standards that characterize the cable termination connector for each network cable. Similarly to the physical configuration parameters, the cabling specification information may be stored in a datastore accessible to the computing device.
- At block 1108, the computing device can use the physical configuration parameters and the cabling specification information to generate instructions that can be used (e.g., by personnel 1006 of FIG. 10) to disconnect a networking cable from the CTPA and reconnect the networking cable at the network port of the networking device.
- the instructions may include steps that identify which network cables connected to the CTPA are to be connected to the data center rack and in what order the identified cables are to be disconnected/reconnected.
- generating the instructions can include determining the correspondence between the identified network cables connected to the CTPA and the networking ports of the networking device on the data center rack, and determining, using the physical configuration parameters (e.g., the physical locations of the corresponding network ports), an order for disconnecting/reconnecting the identified network cables that takes advantage of the alignment between the CTPA ports and the networking device ports to improve the speed of the disconnect/reconnect operations and minimize entanglement of the network cables.
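The ordering logic just described can be sketched as below: cables whose CTPA port is substantially aligned with the target device port are moved first (a straight-down move, as in FIG. 10), and the remaining cables follow. The geometry fields, the tolerance value, and the function name are illustrative assumptions.

```python
def generate_reconnect_instructions(ctpa_ports, device_ports, cable_map):
    """Order disconnect/reconnect steps using port alignment (illustrative sketch).
    ctpa_ports / device_ports: port name -> horizontal offset in inches;
    cable_map: cable id -> (ctpa_port, device_port)."""
    TOLERANCE = 0.5  # inches; assumed threshold for "substantial alignment"
    steps = []
    for cable, (src, dst) in cable_map.items():
        aligned = abs(ctpa_ports[src] - device_ports[dst]) <= TOLERANCE
        steps.append((cable, src, dst, aligned))
    # Aligned (straight-down) moves first, then remaining cables left-to-right.
    steps.sort(key=lambda s: (not s[3], ctpa_ports[s[1]]))
    return [f"disconnect {c} from CTPA port {s}, connect to device port {d}"
            for c, s, d, _ in steps]
```

Sequencing the non-aligned cable last mirrors the third-cable step in FIG. 10, reducing entanglement with cables already connected.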
- the computing device can send the instructions to a user device and cause the user device to present the instructions to a user (e.g., personnel 1006).
- the user device can be a tablet device that can display the instructions as part of a connection plan that can visually identify the cables and provide the steps for connecting the data center rack to the network cables.
- the computing device can execute a connection test to verify that the network cables have been successfully connected to the networking device. For example, the computing device may ping or send a request or other network traffic to the networking device and/or other computing devices on the data center rack. If the request is successfully received at the networking device, the computing device may receive a response indicating that a network connection has been established to the computing devices of the data center rack. The computing device may cause the user device to display an indication corresponding to the result of the connection test.
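The connection test above can be sketched as a per-device probe followed by an aggregate result. The function name and the `probe` callable (standing in for an actual ping or request over the network) are illustrative assumptions.

```python
def run_connection_test(expected_devices, probe):
    """Sketch of the connection test described above; `probe` stands in for
    pinging or sending a request to a device and returning True on a response."""
    results = {device: probe(device) for device in expected_devices}
    # All expected devices must respond for the rack's connection to be verified.
    return all(results.values()), results
```

The per-device `results` map is what a user device could display as the indication of the connection test outcome.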
- the computing device can generate additional instructions to disconnect one or more of the network cables from the networking device, reconnect them to the CTPA, and then identify additional network cables to be disconnected from the CTPA and connected to the networking device (e.g., instructions to undo the incorrect connection and perform the correct connection of the network cable).
- network devices or compute devices may observe link layer discovery protocol (LLDP) packets being passed by neighboring devices to determine that a network link has been established between them. By collating information about all active links observed at the various devices in the network and comparing that list to the list of all expected active links, the manager service may instruct the personnel 1006 to check the cabling associated with any expected link that is not observed to be active.
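The collate-and-compare step above amounts to a set difference between expected and LLDP-observed links, which can be sketched as follows. The tuple representation of a link as an unordered pair of (device, port) endpoints is an assumption for illustration.

```python
def find_unobserved_links(expected_links, observed_links):
    """Diff the expected cabling plan against links collated from LLDP
    observations. A link is an unordered pair of (device, port) endpoints."""
    # frozenset makes endpoint order irrelevant: (A, B) matches (B, A).
    observed = {frozenset(link) for link in observed_links}
    # Expected links with no matching observation: candidate cabling errors.
    return [link for link in expected_links if frozenset(link) not in observed]
```

Each returned link identifies cabling that personnel would be instructed to check.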
- Infrastructure as a service (IaaS) can be configured to provide virtualized computing resources over a public network (e.g., the Internet).
- a cloud computing provider can host the infrastructure components (e.g., servers, storage devices, network nodes (e.g., hardware), deployment software, platform virtualization (e.g., a hypervisor layer), or the like).
- an IaaS provider may also supply a variety of services to accompany those infrastructure components (example services include billing software, monitoring software, logging software, load balancing software, clustering software, etc.).
- IaaS customers may access resources and services through a wide area network (WAN), such as the Internet, and can use the cloud provider's services to install the remaining elements of an application stack.
- the user can log in to the IaaS platform to create virtual machines (VMs), install operating systems (OSs) on each VM, deploy middleware such as databases, create storage buckets for workloads and backups, and even install enterprise software into that VM.
- Customers can then use the provider's services to perform various functions, including balancing network traffic, troubleshooting application issues, monitoring performance, managing disaster recovery, etc.
- a cloud computing model may require the participation of a cloud provider.
- the cloud provider may, but need not be, a third-party service that specializes in providing (e.g., offering, renting, selling) IaaS.
- An entity might also opt to deploy a private cloud, becoming its own provider of infrastructure services.
- IaaS deployment is the process of putting a new application, or a new version of an application, onto a prepared application server or the like. It may also include the process of preparing the server (e.g., installing libraries, daemons, etc.). This is often managed by the cloud provider, below the hypervisor layer (e.g., the servers, storage, network hardware, and virtualization).
- IaaS provisioning may refer to acquiring computers or virtual hosts for use, and even installing needed libraries or services on them. In most cases, deployment does not include provisioning, and the provisioning may need to be performed first.
- there are two different challenges for IaaS provisioning. First, there is the initial challenge of provisioning the initial set of infrastructure before anything is running. Second, there is the challenge of evolving the existing infrastructure (e.g., adding new services, changing services, removing services, etc.) once everything has been provisioned.
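The provision-before-deploy ordering described above can be illustrated with a toy model. The class and function names are assumptions chosen for the sketch; the point is only that deployment presumes a prepared host.

```python
class VirtualHost:
    """Toy model distinguishing provisioning from deployment (illustrative)."""
    def __init__(self, name):
        self.name = name
        self.provisioned = False
        self.deployed = []

def provision(host):
    # Acquire the host and install needed libraries/daemons (the first step above).
    host.provisioned = True

def deploy(host, app):
    # Deployment presumes a prepared server, so provisioning must happen first.
    if not host.provisioned:
        raise RuntimeError(f"{host.name} is not provisioned; provision before deploying {app}")
    host.deployed.append(app)
```

Attempting `deploy` on an unprovisioned host fails, matching the observation that deployment does not include provisioning.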
- an infrastructure may have many interconnected elements. For example, there may be one or more virtual private clouds (VPCs) (e.g., a potentially on- demand pool of configurable and/or shared computing resources), also known as a core network.
- in another example, there may be one or more inbound/outbound traffic group rules provisioned to define how the inbound and/or outbound traffic of the network will be set up, and one or more virtual machines (VMs).
- Other infrastructure elements may also be provisioned, such as a load balancer, a database, or the like. As more and more infrastructure elements are desired and/or added, the infrastructure may incrementally evolve.
- continuous deployment techniques may be employed to enable deployment of infrastructure code across various virtual computing environments. Additionally, the described techniques can enable infrastructure management within these environments.
- service teams can write code that is desired to be deployed to one or more, but often many, different production environments (e.g., across various different geographic locations, sometimes spanning the entire world).
- FIG. 12 is a block diagram 1200 illustrating an example pattern of an IaaS architecture, according to at least one embodiment.
- Service operators 1202 can be communicatively coupled to a secure host tenancy 1204 that can include a virtual cloud network (VCN) 1206 and a secure host subnet 1208.
- the service operators 1202 may be using one or more client computing devices, which may be portable handheld devices (e.g., an iPhone®, cellular telephone, an iPad®, computing tablet, a personal digital assistant (PDA)) or wearable devices (e.g., a Google Glass® head mounted display), running software such as Microsoft Windows Mobile®, and/or a variety of mobile operating systems such as iOS, Windows Phone, Android, BlackBerry 8, Palm OS, and the like, and being Internet, e-mail, short message service (SMS), Blackberry®, or other communication protocol enabled.
- the client computing devices can be general purpose personal computers including, by way of example, personal computers and/or laptop computers running various versions of Microsoft Windows®, Apple Macintosh®, and/or Linux operating systems.
- the client computing devices can be workstation computers running any of a variety of commercially-available UNIX® or UNIX-like operating systems, including without limitation the variety of GNU/Linux operating systems, such as for example, Google Chrome OS.
- client computing devices may be any other electronic device, such as a thin-client computer, an Internet-enabled gaming system (e.g., a Microsoft Xbox gaming console with or without a Kinect® gesture input device), and/or a personal messaging device, capable of communicating over a network that can access the VCN 1206 and/or the Internet.
- the VCN 1206 can include a local peering gateway (LPG) 1210 that can be communicatively coupled to a secure shell (SSH) VCN 1212 via an LPG 1210 contained in the SSH VCN 1212.
- the SSH VCN 1212 can include an SSH subnet 1214, and the SSH VCN 1212 can be communicatively coupled to a control plane VCN 1216 via the LPG 1210 contained in the control plane VCN 1216.
- the SSH VCN 1212 can be communicatively coupled to a data plane VCN 1218 via an LPG 1210.
- the control plane VCN 1216 and the data plane VCN 1218 can be contained in a service tenancy 1219 that can be owned and/or operated by the IaaS provider.
- the control plane VCN 1216 can include a control plane demilitarized zone (DMZ) tier 1220 that acts as a perimeter network (e.g., portions of a corporate network between the corporate intranet and external networks).
- the DMZ-based servers may have restricted responsibilities and help keep breaches contained.
- the DMZ tier 1220 can include one or more load balancer (LB) subnet(s) 1222, a control plane app tier 1224 that can include app subnet(s) 1226, a control plane data tier 1228 that can include database (DB) subnet(s) 1230 (e.g., frontend DB subnet(s) and/or backend DB subnet(s)).
- the LB subnet(s) 1222 contained in the control plane DMZ tier 1220 can be communicatively coupled to the app subnet(s) 1226 contained in the control plane app tier 1224 and an Internet gateway 1234 that can be contained in the control plane VCN 1216, and the app subnet(s) 1226 can be communicatively coupled to the DB subnet(s) 1230 contained in the control plane data tier 1228 and a service gateway 1236 and a network address translation (NAT) gateway 1238.
- the control plane VCN 1216 can include the service gateway 1236 and the NAT gateway 1238.
- the control plane VCN 1216 can include a data plane mirror app tier 1240 that can include app subnet(s) 1226.
- the app subnet(s) 1226 contained in the data plane mirror app tier 1240 can include a virtual network interface controller (VNIC) 1242 that can execute a compute instance 1244.
- the compute instance 1244 can communicatively couple the app subnet(s) 1226 of the data plane mirror app tier 1240 to app subnet(s) 1226 that can be contained in a data plane app tier 1246.
- the data plane VCN 1218 can include the data plane app tier 1246, a data plane DMZ tier 1248, and a data plane data tier 1250.
- the data plane DMZ tier 1248 can include LB subnet(s) 1222 that can be communicatively coupled to the app subnet(s) 1226 of the data plane app tier 1246 and the Internet gateway 1234 of the data plane VCN 1218.
- the app subnet(s) 1226 can be communicatively coupled to the service gateway 1236 of the data plane VCN 1218 and the NAT gateway 1238 of the data plane VCN 1218.
- the data plane data tier 1250 can also include the DB subnet(s) 1230 that can be communicatively coupled to the app subnet(s) 1226 of the data plane app tier 1246.
- the Internet gateway 1234 of the control plane VCN 1216 and of the data plane VCN 1218 can be communicatively coupled to a metadata management service 1252 that can be communicatively coupled to public Internet 1254.
- Public Internet 1254 can be communicatively coupled to the NAT gateway 1238 of the control plane VCN 1216 and of the data plane VCN 1218.
- the service gateway 1236 of the control plane VCN 1216 and of the data plane VCN 1218 can be communicatively coupled to cloud services 1256.
- the service gateway 1236 of the control plane VCN 1216 or of the data plane VCN 1218 can make application programming interface (API) calls to cloud services 1256 without going through public Internet 1254.
- the API calls to cloud services 1256 from the service gateway 1236 can be one-way: the service gateway 1236 can make API calls to cloud services 1256, and cloud services 1256 can send requested data to the service gateway 1236. But, cloud services 1256 may not initiate API calls to the service gateway 1236.
- the secure host tenancy 1204 can be directly connected to the service tenancy 1219, which may be otherwise isolated.
- the secure host subnet 1208 can communicate with the SSH subnet 1214 through an LPG 1210 that may enable two-way communication over an otherwise isolated system. Connecting the secure host subnet 1208 to the SSH subnet 1214 may give the secure host subnet 1208 access to other entities within the service tenancy 1219.
- the control plane VCN 1216 may allow users of the service tenancy 1219 to set up or otherwise provision desired resources. Desired resources provisioned in the control plane VCN 1216 may be deployed or otherwise used in the data plane VCN 1218.
- the control plane VCN 1216 can be isolated from the data plane VCN 1218, and the data plane mirror app tier 1240 of the control plane VCN 1216 can communicate with the data plane app tier 1246 of the data plane VCN 1218 via VNICs 1242 that can be contained in the data plane mirror app tier 1240 and the data plane app tier 1246.
- users of the system, or customers can make requests, for example create, read, update, or delete (CRUD) operations, through public Internet 1254 that can communicate the requests to the metadata management service 1252.
- the metadata management service 1252 can communicate the request to the control plane VCN 1216 through the Internet gateway 1234.
- the request can be received by the LB subnet(s) 1222 contained in the control plane DMZ tier 1220.
- the LB subnet(s) 1222 may determine that the request is valid, and in response to this determination, the LB subnet(s) 1222 can transmit the request to app subnet(s) 1226 contained in the control plane app tier 1224.
- the call to public Internet 1254 may be transmitted to the NAT gateway 1238 that can make the call to public Internet 1254.
- Data that the request desires to be stored can be stored in the DB subnet(s) 1230.
- the data plane mirror app tier 1240 can facilitate direct communication between the control plane VCN 1216 and the data plane VCN 1218. For example, changes, updates, or other suitable modifications to configuration may be desired to be applied to the resources contained in the data plane VCN 1218. Via a VNIC 1242, the control plane VCN 1216 can directly communicate with, and can thereby execute the changes, updates, or other suitable modifications to configuration to, resources contained in the data plane VCN 1218.
- control plane VCN 1216 and the data plane VCN 1218 can be contained in the service tenancy 1219.
- the user, or the customer, of the system may not own or operate either the control plane VCN 1216 or the data plane VCN 1218.
- the IaaS provider may own or operate the control plane VCN 1216 and the data plane VCN 1218, both of which may be contained in the service tenancy 1219.
- This embodiment can enable isolation of networks that may prevent users or customers from interacting with other users’, or other customers’, resources.
- this embodiment may allow users or customers of the system to store databases privately without needing to rely on public Internet 1254, which may not have a desired level of threat prevention, for storage.
- the LB subnet(s) 1222 contained in the control plane VCN 1216 can be configured to receive a signal from the service gateway 1236.
- the control plane VCN 1216 and the data plane VCN 1218 may be configured to be called by a customer of the IaaS provider without calling public Internet 1254.
- Customers of the IaaS provider may desire this embodiment since database(s) that the customers use may be controlled by the IaaS provider and may be stored on the service tenancy 1219, which may be isolated from public Internet 1254.
- FIG. 13 is a block diagram 1300 illustrating another example pattern of an IaaS architecture, according to at least one embodiment.
- Service operators 1302 (e.g., service operators 1202 of FIG. 12) can be communicatively coupled to a secure host tenancy 1304 (e.g., the secure host tenancy 1204 of FIG. 12) that can include a virtual cloud network (VCN) 1306 (e.g., the VCN 1206 of FIG. 12).
- the VCN 1306 can include a local peering gateway (LPG) 1310 (e.g., the LPG 1210 of FIG. 12) that can be communicatively coupled to a secure shell (SSH) VCN 1312 (e.g., the SSH VCN 1212 of FIG. 12) via an LPG 1210 contained in the SSH VCN 1312.
- the SSH VCN 1312 can include an SSH subnet 1314 (e.g., the SSH subnet 1214 of FIG. 12), and the SSH VCN 1312 can be communicatively coupled to a control plane VCN 1316 (e.g., the control plane VCN 1216 of FIG. 12) via an LPG 1310 contained in the control plane VCN 1316.
- the control plane VCN 1316 can be contained in a service tenancy 1319 (e.g., the service tenancy 1219 of FIG. 12), and the data plane VCN 1318 (e.g., the data plane VCN 1218 of FIG. 12) can be contained in a customer tenancy 1321 that may be owned or operated by users, or customers, of the system.
- the control plane VCN 1316 can include a control plane DMZ tier 1320 (e.g., the control plane DMZ tier 1220 of FIG. 12) that can include LB subnet(s) 1322 (e.g., LB subnet(s) 1222 of FIG. 12), a control plane app tier 1324 (e.g., the control plane app tier 1224 of FIG. 12) that can include app subnet(s) 1326 (e.g., app subnet(s) 1226 of FIG. 12), and a control plane data tier 1328 (e.g., the control plane data tier 1228 of FIG. 12) that can include database (DB) subnet(s) 1330 (e.g., DB subnet(s) 1230 of FIG. 12).
- the LB subnet(s) 1322 contained in the control plane DMZ tier 1320 can be communicatively coupled to the app subnet(s) 1326 contained in the control plane app tier 1324 and an Internet gateway 1334 (e.g., the Internet gateway 1234 of FIG. 12) that can be contained in the control plane VCN 1316, and the app subnet(s) 1326 can be communicatively coupled to the DB subnet(s) 1330 contained in the control plane data tier 1328 and a service gateway 1336 (e.g., the service gateway of FIG. 12) and a network address translation (NAT) gateway 1338 (e.g., the NAT gateway 1238 of FIG. 12).
- the control plane VCN 1316 can include the service gateway 1336 and the NAT gateway 1338.
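The tiered couplings just described can be sketched as a simple adjacency map. This is a hypothetical illustration only: the names mirror the figure's reference numerals, but the data structure is not part of the disclosed apparatus.

```python
# Hypothetical adjacency sketch of the control plane VCN 1316 couplings
# described above. Names follow the figure's reference numerals; the graph
# itself is illustrative only.
COUPLINGS = {
    "lb_subnet_1322": {"app_subnet_1326", "internet_gateway_1334"},
    "app_subnet_1326": {"db_subnet_1330", "service_gateway_1336"},
    "control_plane_vcn_1316": {"service_gateway_1336", "nat_gateway_1338"},
}

def coupled(src: str, dst: str) -> bool:
    """Return True if src is communicatively coupled to dst in this sketch."""
    return dst in COUPLINGS.get(src, set())
```

Such a map makes the DMZ-to-app-to-data layering explicit: the LB subnet(s) reach the app subnet(s), which in turn reach the DB subnet(s), but the LB subnet(s) never reach the DB subnet(s) directly.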
- the control plane VCN 1316 can include a data plane mirror app tier 1340 (e.g., the data plane mirror app tier 1240 of FIG. 12) that can include app subnet(s) 1326.
- the app subnet(s) 1326 contained in the data plane mirror app tier 1340 can include a virtual network interface controller (VNIC) 1342 (e.g., the VNIC 1242 of FIG. 12) that can execute a compute instance 1344 (e.g., similar to the compute instance 1244 of FIG. 12).
- the compute instance 1344 can facilitate communication between the app subnet(s) 1326 of the data plane mirror app tier 1340 and the app subnet(s) 1326 that can be contained in a data plane app tier 1346 (e.g., the data plane app tier 1246 of FIG. 12) via the VNIC 1342 contained in the data plane mirror app tier 1340 and the VNIC 1342 contained in the data plane app tier 1346.
- the Internet gateway 1334 contained in the control plane VCN 1316 can be communicatively coupled to a metadata management service 1352 (e.g., the metadata management service 1252 of FIG. 12) that can be communicatively coupled to public Internet 1354 (e.g., public Internet 1254 of FIG. 12).
- Public Internet 1354 can be communicatively coupled to the NAT gateway 1338 contained in the control plane VCN 1316.
- the service gateway 1336 contained in the control plane VCN 1316 can be communicatively coupled to cloud services 1356 (e.g., cloud services 1256 of FIG. 12).
- the data plane VCN 1318 can be contained in the customer tenancy 1321.
- the IaaS provider may provide the control plane VCN 1316 for each customer, and the IaaS provider may, for each customer, set up a unique compute instance 1344 that is contained in the service tenancy 1319.
- Each compute instance 1344 may allow communication between the control plane VCN 1316, contained in the service tenancy 1319, and the data plane VCN 1318 that is contained in the customer tenancy 1321.
- the compute instance 1344 may allow resources, that are provisioned in the control plane VCN 1316 that is contained in the service tenancy 1319, to be deployed or otherwise used in the data plane VCN 1318 that is contained in the customer tenancy 1321.
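The tenancy-bridging role of the compute instance described above might be sketched as follows. The class, its fields, and the path format are illustrative assumptions, not the disclosed implementation.

```python
# Hypothetical sketch of a per-customer compute instance 1344 that bridges
# the provider's service tenancy 1319 to a customer tenancy 1321.
class ComputeInstance:
    def __init__(self, customer_id: str):
        self.customer_id = customer_id
        self.home_tenancy = "service_tenancy_1319"  # provider-owned side

    def deploy(self, resource: str) -> dict:
        """Carry a resource provisioned in the control plane into the
        customer's data plane VCN."""
        return {
            "resource": resource,
            "from": "control_plane_vcn_1316",
            "to": f"data_plane_vcn_1318/customer_tenancy_1321/{self.customer_id}",
        }
```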
- the customer of the IaaS provider may have databases that live in the customer tenancy 1321.
- the control plane VCN 1316 can include the data plane mirror app tier 1340 that can include app subnet(s) 1326.
- the data plane mirror app tier 1340 can reside in the data plane VCN 1318, but the data plane mirror app tier 1340 may not live in the data plane VCN 1318. That is, the data plane mirror app tier 1340 may have access to the customer tenancy 1321, but the data plane mirror app tier 1340 may not exist in the data plane VCN 1318 or be owned or operated by the customer of the IaaS provider.
- the data plane mirror app tier 1340 may be configured to make calls to the data plane VCN 1318 but may not be configured to make calls to any entity contained in the control plane VCN 1316.
- the customer may desire to deploy or otherwise use resources in the data plane VCN 1318 that are provisioned in the control plane VCN 1316, and the data plane mirror app tier 1340 can facilitate the desired deployment, or other usage of resources, of the customer.
- the customer of the IaaS provider can apply filters to the data plane VCN 1318.
- the customer can determine what the data plane VCN 1318 can access, and the customer may restrict access to public Internet 1354 from the data plane VCN 1318.
- the IaaS provider may not be able to apply filters or otherwise control access of the data plane VCN 1318 to any outside networks or databases.
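The customer-applied filtering described above can be sketched as a deny-by-default egress check. The rule format is an illustrative assumption; real VCN security rules are far richer than a set of names.

```python
# Hypothetical egress filter for the data plane VCN 1318: the customer
# enumerates allowed destinations, and anything else -- including public
# Internet 1354 -- is denied.
ALLOWED_DESTINATIONS = {"customer_database", "customer_app_subnet"}

def egress_allowed(destination: str) -> bool:
    """Check a destination against the customer-supplied allow list."""
    return destination in ALLOWED_DESTINATIONS
```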
- cloud services 1356 can be called by the service gateway 1336 to access services that may not exist on public Internet 1354, on the control plane VCN 1316, or on the data plane VCN 1318.
- the connection between cloud services 1356 and the control plane VCN 1316 or the data plane VCN 1318 may not be live or continuous.
- Cloud services 1356 may exist on a different network owned or operated by the IaaS provider. Cloud services 1356 may be configured to receive calls from the service gateway 1336 and may be configured to not receive calls from public Internet 1354.
- Some cloud services 1356 may be isolated from other cloud services 1356, and the control plane VCN 1316 may be isolated from cloud services 1356 that may not be in the same region as the control plane VCN 1316.
- the control plane VCN 1316 may be located in "Region 1," and a cloud service, "Deployment 12," may be located in Region 1 and in "Region 2." If a call to Deployment 12 is made by the service gateway 1336 contained in the control plane VCN 1316 located in Region 1, the call may be transmitted to Deployment 12 in Region 1.
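The same-region routing just described can be sketched as a lookup that matches both the deployment name and the caller's region. The catalog format and helper are hypothetical.

```python
# Hypothetical same-region routing for a service gateway call: a call to a
# deployment that exists in several regions is sent to the instance in the
# caller's own region, with no cross-region fallback.
DEPLOYMENTS = [
    {"name": "Deployment 12", "region": "Region 1"},
    {"name": "Deployment 12", "region": "Region 2"},
]

def route_call(name: str, caller_region: str, catalog=DEPLOYMENTS):
    """Return the deployment matching both name and the caller's region."""
    for deployment in catalog:
        if deployment["name"] == name and deployment["region"] == caller_region:
            return deployment
    return None  # no fallback: an instance in another region stays unreachable
```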
- the control plane VCN 1316, or Deployment 12 in Region 1, may not be communicatively coupled to, or otherwise in communication with, Deployment 12 in Region 2.
- FIG. 14 is a block diagram 1400 illustrating another example pattern of an IaaS architecture, according to at least one embodiment.
- Service operators 1402 can be communicatively coupled to a secure host tenancy 1404 (e.g., the secure host tenancy 1204 of FIG. 12) that can include a virtual cloud network (VCN) 1406 (e.g., the VCN 1206 of FIG. 12) and a secure host subnet 1408 (e.g., the secure host subnet 1208 of FIG. 12).
- the VCN 1406 can include an LPG 1410 (e.g., the LPG 1210 of FIG. 12) that can be communicatively coupled to an SSH VCN 1412 (e.g., the SSH VCN 1212 of FIG. 12) via an LPG 1410 contained in the SSH VCN 1412.
- the SSH VCN 1412 can include an SSH subnet 1414 (e.g., the SSH subnet 1214 of FIG. 12), and the SSH VCN 1412 can be communicatively coupled to a control plane VCN 1416 (e.g., the control plane VCN 1216 of FIG. 12) via an LPG 1410 contained in the control plane VCN 1416 and to a data plane VCN 1418 (e.g., the data plane 1218 of FIG. 12) via an LPG 1410 contained in the data plane VCN 1418.
- the control plane VCN 1416 and the data plane VCN 1418 can be contained in a service tenancy 1419 (e.g., the service tenancy 1219 of FIG. 12).
- the control plane VCN 1416 can include a control plane DMZ tier 1420 (e.g., the control plane DMZ tier 1220 of FIG. 12) that can include load balancer (LB) subnet(s) 1422 (e.g., LB subnet(s) 1222 of FIG. 12), a control plane app tier 1424 (e.g., the control plane app tier 1224 of FIG. 12) that can include app subnet(s) 1426 (e.g., similar to app subnet(s) 1226 of FIG. 12), and a control plane data tier 1428 (e.g., the control plane data tier 1228 of FIG. 12) that can include DB subnet(s) 1430.
- the LB subnet(s) 1422 contained in the control plane DMZ tier 1420 can be communicatively coupled to the app subnet(s) 1426 contained in the control plane app tier 1424 and to an Internet gateway 1434 (e.g., the Internet gateway 1234 of FIG. 12) that can be contained in the control plane VCN 1416, and the app subnet(s) 1426 can be communicatively coupled to the DB subnet(s) 1430 contained in the control plane data tier 1428 and to a service gateway 1436 (e.g., the service gateway of FIG. 12) and a network address translation (NAT) gateway 1438 (e.g., the NAT gateway 1238 of FIG. 12).
- the control plane VCN 1416 can include the service gateway 1436 and the NAT gateway 1438.
- the data plane VCN 1418 can include a data plane app tier 1446 (e.g., the data plane app tier 1246 of FIG. 12), a data plane DMZ tier 1448 (e.g., the data plane DMZ tier 1248 of FIG. 12), and a data plane data tier 1450 (e.g., the data plane data tier 1250 of FIG. 12).
- the data plane DMZ tier 1448 can include LB subnet(s) 1422 that can be communicatively coupled to trusted app subnet(s) 1460 and untrusted app subnet(s) 1462 of the data plane app tier 1446 and the Internet gateway 1434 contained in the data plane VCN 1418.
- the trusted app subnet(s) 1460 can be communicatively coupled to the service gateway 1436 contained in the data plane VCN 1418, the NAT gateway 1438 contained in the data plane VCN 1418, and DB subnet(s) 1430 contained in the data plane data tier 1450.
- the untrusted app subnet(s) 1462 can be communicatively coupled to the service gateway 1436 contained in the data plane VCN 1418 and DB subnet(s) 1430 contained in the data plane data tier 1450.
- the data plane data tier 1450 can include DB subnet(s) 1430 that can be communicatively coupled to the service gateway 1436 contained in the data plane VCN 1418.
- the untrusted app subnet(s) 1462 can include one or more primary VNICs 1464(1)- (N) that can be communicatively coupled to tenant virtual machines (VMs) 1466(1)-(N).
- Each tenant VM 1466(1)-(N) can be communicatively coupled to a respective app subnet 1467(1)-(N) that can be contained in respective container egress VCNs 1468(1)-(N) that can be contained in respective customer tenancies 1470(1)-(N).
- Respective secondary VNICs 1472(1)-(N) can facilitate communication between the untrusted app subnet(s) 1462 contained in the data plane VCN 1418 and the app subnet contained in the container egress VCNs 1468(1)-(N).
- Each container egress VCNs 1468(1)-(N) can include a NAT gateway 1438 that can be communicatively coupled to public Internet 1454 (e.g., public Internet 1254 of FIG. 12).
- the Internet gateway 1434 contained in the control plane VCN 1416 and contained in the data plane VCN 1418 can be communicatively coupled to a metadata management service 1452 (e.g., the metadata management system 1252 of FIG. 12) that can be communicatively coupled to public Internet 1454.
- Public Internet 1454 can be communicatively coupled to the NAT gateway 1438 contained in the control plane VCN 1416 and contained in the data plane VCN 1418.
- the service gateway 1436 contained in the control plane VCN 1416 and contained in the data plane VCN 1418 can be communicatively coupled to cloud services 1456.
- the data plane VCN 1418 can be integrated with customer tenancies 1470.
- This integration can be useful or desirable for customers of the IaaS provider in some cases, such as a case in which the customer may desire support when executing code.
- the customer may provide code to run that may be destructive, may communicate with other customer resources, or may otherwise cause undesirable effects.
- the IaaS provider may determine whether to run code given to the IaaS provider by the customer.
- the customer of the IaaS provider may grant temporary network access to the IaaS provider and request a function to be attached to the data plane app tier 1446. Code to run the function may be executed in the VMs 1466(1)-(N), and the code may not be configured to run anywhere else on the data plane VCN 1418.
- Each VM 1466(1)-(N) may be connected to one customer tenancy 1470.
- Respective containers 1471(1)-(N) contained in the VMs 1466(1)-(N) may be configured to run the code.
- there can be dual isolation (e.g., the containers 1471(1)-(N) running code, where the containers 1471(1)-(N) may be contained in at least the VMs 1466(1)-(N) that are contained in the untrusted app subnet(s) 1462), which may help prevent incorrect or otherwise undesirable code from damaging the network of the IaaS provider or from damaging a network of a different customer.
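The dual isolation just described can be sketched as a containment walk: customer code is permitted only in a container, which sits inside a VM, which sits inside an untrusted app subnet. The containment map and helper names are hypothetical.

```python
# Hypothetical containment model for the dual isolation described above.
# Each entry maps a component to the component that contains it.
CONTAINMENT = {
    "container_1471_1": "vm_1466_1",
    "vm_1466_1": "untrusted_app_subnet_1462",
}

def isolation_chain(node: str) -> list:
    """Walk outward through the containment layers starting from node."""
    chain = [node]
    while chain[-1] in CONTAINMENT:
        chain.append(CONTAINMENT[chain[-1]])
    return chain

def may_run_customer_code(node: str) -> bool:
    """Require both layers: a container nested inside the untrusted subnet."""
    chain = isolation_chain(node)
    return node.startswith("container") and "untrusted_app_subnet_1462" in chain
```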
- the containers 1471(1)-(N) may be communicatively coupled to the customer tenancy 1470 and may be configured to transmit or receive data from the customer tenancy 1470.
- the containers 1471(1)-(N) may not be configured to transmit or receive data from any other entity in the data plane VCN 1418.
- the IaaS provider may kill or otherwise dispose of the containers 1471(1)-(N).
- the trusted app subnet(s) 1460 may run code that may be owned or operated by the IaaS provider.
- the trusted app subnet(s) 1460 may be communicatively coupled to the DB subnet(s) 1430 and be configured to execute CRUD operations in the DB subnet(s) 1430.
- the untrusted app subnet(s) 1462 may be communicatively coupled to the DB subnet(s) 1430, but in this embodiment, the untrusted app subnet(s) may be configured to execute read operations in the DB subnet(s) 1430.
- the containers 1471(1)-(N) that can be contained in the VM 1466(1)-(N) of each customer and that may run code from the customer may not be communicatively coupled with the DB subnet(s) 1430.
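The access distinctions above (full CRUD for trusted subnets, reads only for untrusted subnets, nothing for customer containers) can be sketched as a deny-by-default permission table. The table format is an illustrative assumption.

```python
# Hypothetical permission table for the DB subnet(s) 1430: trusted app
# subnets may execute full CRUD, untrusted app subnets only reads, and
# customer containers (absent from the table) are denied entirely.
DB_PERMISSIONS = {
    "trusted_app_subnet_1460": {"create", "read", "update", "delete"},
    "untrusted_app_subnet_1462": {"read"},
}

def db_op_allowed(caller: str, op: str) -> bool:
    """Deny-by-default check: unknown callers get no DB access at all."""
    return op in DB_PERMISSIONS.get(caller, set())
```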
- the control plane VCN 1416 and the data plane VCN 1418 may not be directly communicatively coupled. In this embodiment, there may be no direct communication between the control plane VCN 1416 and the data plane VCN 1418.
- FIG. 15 is a block diagram 1500 illustrating another example pattern of an IaaS architecture, according to at least one embodiment.
- Service operators 1502 (e.g., service operators 1202 of FIG. 12) can be communicatively coupled to a secure host tenancy 1504 (e.g., the secure host tenancy 1204 of FIG. 12) that can include a virtual cloud network (VCN) 1506 (e.g., the VCN 1206 of FIG. 12) and a secure host subnet 1508 (e.g., the secure host subnet 1208 of FIG. 12).
- the VCN 1506 can include an LPG 1510 (e.g., the LPG 1210 of FIG. 12) that can be communicatively coupled to an SSH VCN 1512 (e.g., the SSH VCN 1212 of FIG. 12) via an LPG 1510 contained in the SSH VCN 1512.
- the SSH VCN 1512 can include an SSH subnet 1514 (e.g., the SSH subnet 1214 of FIG. 12), and the SSH VCN 1512 can be communicatively coupled to a control plane VCN 1516 (e.g., the control plane VCN 1216 of FIG. 12) via an LPG 1510 contained in the control plane VCN 1516 and to a data plane VCN 1518 (e.g., the data plane 1218 of FIG. 12) via an LPG 1510 contained in the data plane VCN 1518.
- the control plane VCN 1516 and the data plane VCN 1518 can be contained in a service tenancy 1519 (e.g., the service tenancy 1219 of FIG. 12).
- the control plane VCN 1516 can include a control plane DMZ tier 1520 (e.g., the control plane DMZ tier 1220 of FIG. 12) that can include LB subnet(s) 1522 (e.g., LB subnet(s) 1222 of FIG. 12), a control plane app tier 1524 (e.g., the control plane app tier 1224 of FIG. 12) that can include app subnet(s) 1526 (e.g., app subnet(s) 1226 of FIG. 12), and a control plane data tier 1528 (e.g., the control plane data tier 1228 of FIG. 12) that can include DB subnet(s) 1530 (e.g., DB subnet(s) 1230 of FIG. 12).
- the LB subnet(s) 1522 contained in the control plane DMZ tier 1520 can be communicatively coupled to the app subnet(s) 1526 contained in the control plane app tier 1524 and to an Internet gateway 1534 (e.g., the Internet gateway 1234 of FIG. 12) that can be contained in the control plane VCN 1516, and the app subnet(s) 1526 can be communicatively coupled to the DB subnet(s) 1530 contained in the control plane data tier 1528 and to a service gateway 1536 (e.g., the service gateway of FIG. 12) and a network address translation (NAT) gateway 1538 (e.g., the NAT gateway 1238 of FIG. 12).
- the control plane VCN 1516 can include the service gateway 1536 and the NAT gateway 1538.
- the data plane VCN 1518 can include a data plane app tier 1546 (e.g., the data plane app tier 1246 of FIG. 12), a data plane DMZ tier 1548 (e.g., the data plane DMZ tier 1248 of FIG. 12), and a data plane data tier 1550 (e.g., the data plane data tier 1250 of FIG. 12).
- the data plane DMZ tier 1548 can include LB subnet(s) 1522 that can be communicatively coupled to trusted app subnet(s) 1560 (e.g., trusted app subnet(s) 1460 of FIG. 14) and untrusted app subnet(s) 1562 (e.g., untrusted app subnet(s) 1462 of FIG. 14) of the data plane app tier 1546 and the Internet gateway 1534 contained in the data plane VCN 1518.
- the trusted app subnet(s) 1560 can be communicatively coupled to the service gateway 1536 contained in the data plane VCN 1518, the NAT gateway 1538 contained in the data plane VCN 1518, and DB subnet(s) 1530 contained in the data plane data tier 1550.
- the untrusted app subnet(s) 1562 can be communicatively coupled to the service gateway 1536 contained in the data plane VCN 1518 and DB subnet(s) 1530 contained in the data plane data tier 1550.
- the data plane data tier 1550 can include DB subnet(s) 1530 that can be communicatively coupled to the service gateway 1536 contained in the data plane VCN 1518.
- the untrusted app subnet(s) 1562 can include primary VNICs 1564(1)-(N) that can be communicatively coupled to tenant virtual machines (VMs) 1566(1)-(N) residing within the untrusted app subnet(s) 1562.
- Each tenant VM 1566(1)-(N) can run code in a respective container 1567(1)-(N), and be communicatively coupled to an app subnet 1526 that can be contained in a data plane app tier 1546 that can be contained in a container egress VCN 1568.
- Respective secondary VNICs 1572(1)-(N) can facilitate communication between the untrusted app subnet(s) 1562 contained in the data plane VCN 1518 and the app subnet contained in the container egress VCN 1568.
- the container egress VCN can include a NAT gateway 1538 that can be communicatively coupled to public Internet 1554 (e.g., public Internet 1254 of FIG. 12).
- the Internet gateway 1534 contained in the control plane VCN 1516 and contained in the data plane VCN 1518 can be communicatively coupled to a metadata management service 1552 (e.g., the metadata management system 1252 of FIG. 12) that can be communicatively coupled to public Internet 1554.
- Public Internet 1554 can be communicatively coupled to the NAT gateway 1538 contained in the control plane VCN 1516 and contained in the data plane VCN 1518.
- the service gateway 1536 contained in the control plane VCN 1516 and contained in the data plane VCN 1518 can be communicatively coupled to cloud services 1556.
- the pattern illustrated by the architecture of block diagram 1500 of FIG. 15 may be considered an exception to the pattern illustrated by the architecture of block diagram 1400 of FIG. 14.
- the respective containers 1567(1)-(N) that are contained in the VMs 1566(1)-(N) for each customer can be accessed in real-time by the customer.
- the containers 1567(1)-(N) may be configured to make calls to respective secondary VNICs 1572(1)-(N) contained in app subnet(s) 1526 of the data plane app tier 1546 that can be contained in the container egress VCN 1568.
- the secondary VNICs 1572(1)-(N) can transmit the calls to the NAT gateway 1538 that may transmit the calls to public Internet 1554.
- the containers 1567(1)-(N) that can be accessed in real-time by the customer can be isolated from the control plane VCN 1516 and can be isolated from other entities contained in the data plane VCN 1518.
- the containers 1567(1)-(N) may also be isolated from resources from other customers.
- the customer can use the containers 1567(1)-(N) to call cloud services 1556.
- the customer may run code in the containers 1567(1)-(N) that requests a service from cloud services 1556.
- the containers 1567(1)-(N) can transmit this request to the secondary VNICs 1572(1)-(N) that can transmit the request to the NAT gateway that can transmit the request to public Internet 1554.
- Public Internet 1554 can transmit the request to LB subnet(s) 1522 contained in the control plane VCN 1516 via the Internet gateway 1534.
- the LB subnet(s) can transmit the request to app subnet(s) 1526 that can transmit the request to cloud services 1556 via the service gateway 1536.
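The full request path just traced (container to secondary VNIC, out through NAT to the public Internet, and back in through the control plane to cloud services) can be sketched as an ordered hop list. The hop names follow the figure's reference numerals; the helper is hypothetical.

```python
# Hypothetical hop-by-hop trace of the request path described above, from a
# customer container out through NAT to public Internet 1554 and back in
# through the control plane to cloud services 1556.
REQUEST_PATH = [
    "container_1567",
    "secondary_vnic_1572",
    "nat_gateway_1538",
    "public_internet_1554",
    "internet_gateway_1534",
    "lb_subnet_1522",
    "app_subnet_1526",
    "service_gateway_1536",
    "cloud_services_1556",
]

def next_hop(current: str) -> str:
    """Return the next hop on the sketched path (the terminus maps to itself)."""
    index = REQUEST_PATH.index(current)
    return REQUEST_PATH[min(index + 1, len(REQUEST_PATH) - 1)]
```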
- IaaS architectures 1200, 1300, 1400, 1500 depicted in the figures may have other components than those depicted. Further, the embodiments shown in the figures are only some examples of a cloud infrastructure system that may incorporate an embodiment of the disclosure.
- the IaaS systems may have more or fewer components than shown in the figures, may combine two or more components, or may have a different configuration or arrangement of components.
- the IaaS systems described herein may include a suite of applications, middleware, and database service offerings that are delivered to a customer in a self-service, subscription-based, elastically scalable, reliable, highly available, and secure manner.
- An example of such an IaaS system is the Oracle Cloud Infrastructure (OCI) provided by the present assignee.
- FIG. 16 illustrates an example computer system 1600, in which various embodiments may be implemented. The system 1600 may be used to implement any of the computer systems described above.
- computer system 1600 includes a processing unit 1604 that communicates with a number of peripheral subsystems via a bus subsystem 1602. These peripheral subsystems may include a processing acceleration unit 1606, an I/O subsystem 1608, a storage subsystem 1618 and a communications subsystem 1624.
- Storage subsystem 1618 includes tangible computer-readable storage media 1622 and a system memory 1610.
- Bus subsystem 1602 provides a mechanism for letting the various components and subsystems of computer system 1600 communicate with each other as intended. Although bus subsystem 1602 is shown schematically as a single bus, alternative embodiments of the bus subsystem may utilize multiple buses.
- Bus subsystem 1602 may be any of several types of bus structures including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures.
- bus architectures may include an Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus, which can be implemented as a Mezzanine bus manufactured to the IEEE P1386.1 standard.
- Processing unit 1604 which can be implemented as one or more integrated circuits (e.g., a conventional microprocessor or microcontroller), controls the operation of computer system 1600.
- One or more processors may be included in processing unit 1604.
- These processors may include single core or multicore processors.
- processing unit 1604 may be implemented as one or more independent processing units 1632 and/or 1634 with single or multicore processors included in each processing unit.
- processing unit 1604 may also be implemented as a quad-core processing unit formed by integrating two dual-core processors into a single chip.
- processing unit 1604 can execute a variety of programs in response to program code and can maintain multiple concurrently executing programs or processes. At any given time, some or all of the program code to be executed can be resident in processor(s) 1604 and/or in storage subsystem 1618. Through suitable programming, processor(s) 1604 can provide various functionalities described above.
- Computer system 1600 may additionally include a processing acceleration unit 1606, which can include a digital signal processor (DSP), a special-purpose processor, and/or the like.
- I/O subsystem 1608 may include user interface input devices and user interface output devices.
- User interface input devices may include a keyboard, pointing devices such as a mouse or trackball, a touchpad or touch screen incorporated into a display, a scroll wheel, a click wheel, a dial, a button, a switch, a keypad, audio input devices with voice command recognition systems, microphones, and other types of input devices.
- User interface input devices may include, for example, motion sensing and/or gesture recognition devices such as the Microsoft Kinect® motion sensor that enables users to control and interact with an input device, such as the Microsoft Xbox® 360 game controller, through a natural user interface using gestures and spoken commands.
- User interface input devices may also include eye gesture recognition devices such as the Google Glass® blink detector that detects eye activity (e.g., ‘blinking’ while taking pictures and/or making a menu selection) from users and transforms the eye gestures as input into an input device (e.g., Google Glass®).
- user interface input devices may include voice recognition sensing devices that enable users to interact with voice recognition systems (e.g., Siri® navigator), through voice commands.
- User interface input devices may also include, without limitation, three dimensional (3D) mice, joysticks or pointing sticks, gamepads and graphic tablets, and audio/visual devices such as speakers, digital cameras, digital camcorders, portable media players, webcams, image scanners, fingerprint scanners, barcode readers, 3D scanners, 3D printers, laser rangefinders, and eye gaze tracking devices. Additionally, user interface input devices may include, for example, medical imaging input devices such as computed tomography, magnetic resonance imaging, positron emission tomography, and medical ultrasonography devices. User interface input devices may also include, for example, audio input devices such as MIDI keyboards, digital musical instruments and the like.
- User interface output devices may include a display subsystem, indicator lights, or non-visual displays such as audio output devices, etc.
- the display subsystem may be a cathode ray tube (CRT), a flat-panel device, such as that using a liquid crystal display (LCD) or plasma display, a projection device, a touch screen, and the like.
- In general, the term "output device" is intended to include all possible types of devices and mechanisms for outputting information from computer system 1600 to a user or other computer.
- user interface output devices may include, without limitation, a variety of display devices that visually convey text, graphics and audio/video information such as monitors, printers, speakers, headphones, automotive navigation systems, plotters, voice output devices, and modems.
- Computer system 1600 may comprise a storage subsystem 1618 that provides a tangible non-transitory computer-readable storage medium for storing software and data constructs that provide the functionality of the embodiments described in this disclosure.
- the software can include programs, code, instructions, scripts, etc., that when executed by one or more cores or processors of processing unit 1604 provide the functionality described above.
- Storage subsystem 1618 may also provide a repository for storing data used in accordance with the present disclosure.
- storage subsystem 1618 can include various components including a system memory 1610, computer-readable storage media 1622, and a computer readable storage media reader 1620.
- System memory 1610 may store program instructions that are loadable and executable by processing unit 1604.
- System memory 1610 may also store data that is used during the execution of the instructions and/or data that is generated during the execution of the program instructions.
- Various different kinds of programs may be loaded into system memory 1610 including but not limited to client applications, Web browsers, mid-tier applications, relational database management systems (RDBMS), virtual machines, containers, etc.
- System memory 1610 may also store an operating system 1616.
- operating system 1616 may include various versions of Microsoft Windows®, Apple Macintosh®, and/or Linux operating systems, a variety of commercially-available UNIX® or UNIX-like operating systems (including without limitation the variety of GNU/Linux operating systems, the Google Chrome® OS, and the like) and/or mobile operating systems such as iOS, Windows® Phone, Android® OS, BlackBerry® OS, and Palm® OS operating systems.
- the virtual machines along with their guest operating systems (GOSs) may be loaded into system memory 1610 and executed by one or more processors or cores of processing unit 1604.
- System memory 1610 can come in different configurations depending upon the type of computer system 1600.
- system memory 1610 may be volatile memory (such as random access memory (RAM)) and/or non-volatile memory (such as read-only memory (ROM), flash memory, etc.). Different types of RAM configurations may be provided including a static random access memory (SRAM), a dynamic random access memory (DRAM), and others.
- system memory 1610 may include a basic input/output system (BIOS) containing basic routines that help to transfer information between elements within computer system 1600, such as during start-up.
- Computer-readable storage media 1622 may represent remote, local, fixed, and/or removable storage devices plus storage media for temporarily and/or more permanently containing and/or storing computer-readable information for use by computer system 1600, including instructions executable by processing unit 1604 of computer system 1600.
- Computer-readable storage media 1622 can include any appropriate media known or used in the art, including storage media and communication media, such as but not limited to, volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage and/or transmission of information.
- This can include tangible computer-readable storage media such as RAM, ROM, electronically erasable programmable ROM (EEPROM), flash memory or other memory technology, CD-ROM, digital versatile disk (DVD), or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or other tangible computer readable media.
- computer-readable storage media 1622 may include a hard disk drive that reads from or writes to non-removable, nonvolatile magnetic media, a magnetic disk drive that reads from or writes to a removable, nonvolatile magnetic disk, and an optical disk drive that reads from or writes to a removable, nonvolatile optical disk such as a CD ROM, DVD, and Blu-Ray® disk, or other optical media.
- Computer-readable storage media 1622 may include, but is not limited to, Zip® drives, flash memory cards, universal serial bus (USB) flash drives, secure digital (SD) cards, DVD disks, digital video tape, and the like.
- Computer-readable storage media 1622 may also include, solid-state drives (SSD) based on non-volatile memory such as flash-memory based SSDs, enterprise flash drives, solid state ROM, and the like, SSDs based on volatile memory such as solid state RAM, dynamic RAM, static RAM, DRAM-based SSDs, magnetoresistive RAM (MRAM) SSDs, and hybrid SSDs that use a combination of DRAM and flash memory based SSDs.
- the disk drives and their associated computer-readable media may provide non-volatile storage of computer-readable instructions, data structures, program services, and other data for computer system 1600.
- Machine-readable instructions executable by one or more processors or cores of processing unit 1604 may be stored on a non-transitory computer-readable storage medium.
- a non-transitory computer-readable storage medium can include physically tangible memory or storage devices that include volatile memory storage devices and/or non-volatile storage devices.
- Examples of non-transitory computer-readable storage medium include magnetic storage media (e.g., disk or tapes), optical storage media (e.g., DVDs, CDs), various types of RAM, ROM, or flash memory, hard drives, floppy drives, detachable memory drives (e.g., USB drives), or other type of storage device.
- Communications subsystem 1624 provides an interface to other computer systems and networks. Communications subsystem 1624 serves as an interface for receiving data from and transmitting data to other systems from computer system 1600. For example, communications subsystem 1624 may enable computer system 1600 to connect to one or more devices via the Internet.
- Communications subsystem 1624 can include radio frequency (RF) transceiver components for accessing wireless voice and/or data networks (e.g., using cellular telephone technology; advanced data network technology such as 3G, 4G, or EDGE (enhanced data rates for global evolution); WiFi (IEEE 802.11 family standards); other mobile communication technologies; or any combination thereof), global positioning system (GPS) receiver components, and/or other components.
- Communications subsystem 1624 can provide wired network connectivity (e.g., Ethernet) in addition to or instead of a wireless interface.
- Communications subsystem 1624 may also receive input communication in the form of structured and/or unstructured data feeds 1626, event streams 1628, event updates 1630, and the like on behalf of one or more users who may use computer system 1600.
- Communications subsystem 1624 may be configured to receive data feeds 1626 in real time from users of social networks and/or other communication services, such as Twitter® feeds, Facebook® updates, web feeds such as Rich Site Summary (RSS) feeds, and/or real-time updates from one or more third-party information sources.
- Communications subsystem 1624 may also be configured to receive data in the form of continuous data streams, which may include event streams 1628 of real-time events and/or event updates 1630, and which may be continuous or unbounded in nature with no explicit end.
- Applications that generate continuous data may include, for example, sensor data applications, financial tickers, network performance measuring tools (e.g., network monitoring and traffic management applications), clickstream analysis tools, automobile traffic monitoring, and the like.
- Communications subsystem 1624 may also be configured to output the structured and/or unstructured data feeds 1626, event streams 1628, event updates 1630, and the like to one or more databases that may be in communication with one or more streaming data source computers coupled to computer system 1600.
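The continuous, unbounded event streams described above can be illustrated with a brief sketch (the producer, pump function, and field names below are illustrative assumptions, not part of the disclosure): a producer yields events with no explicit end, and a consumer forwards a bounded slice of them to a queue, much as communications subsystem 1624 hands event updates to downstream consumers.

```python
import itertools
import queue
import threading

def event_source():
    """Illustrative unbounded producer standing in for event streams 1628:
    yields numbered events with no explicit end."""
    for seq in itertools.count():
        yield {"seq": seq, "kind": "event_update"}

def pump(stream, sink, limit):
    """Forward a bounded slice of a continuous stream into a queue,
    as a stand-in for handing updates to downstream storage."""
    for event in itertools.islice(stream, limit):
        sink.put(event)

sink = queue.Queue()
consumer = threading.Thread(target=pump, args=(event_source(), sink, 3))
consumer.start()
consumer.join()
received = [sink.get() for _ in range(3)]
print([e["seq"] for e in received])  # → [0, 1, 2]
```

Because the producer never terminates on its own, the consumer bounds its own reads; a real subsystem would instead run such a pump continuously.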
- Computer system 1600 can be one of various types, including a handheld portable device (e.g., an iPhone® cellular phone, an iPad® computing tablet, a PDA), a wearable device (e.g., a Google Glass® head mounted display), a PC, a workstation, a mainframe, a kiosk, a server rack, or any other data processing system.
- Processes can communicate using a variety of techniques, including but not limited to conventional techniques for inter-process communication; different pairs of processes may use different techniques, or the same pair of processes may use different techniques at different times.
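As a sketch of that flexibility (the child commands and payloads below are illustrative assumptions, not part of the disclosure), a parent process can exchange raw text with one child over an anonymous pipe and structured JSON with another:

```python
import json
import subprocess
import sys

# Technique 1: raw text over anonymous pipes (stdin/stdout) to a child process.
pipe_child = subprocess.run(
    [sys.executable, "-c",
     "import sys; sys.stdout.write('pong:' + sys.stdin.read())"],
    input="ping", capture_output=True, text=True, check=True,
)
reply = pipe_child.stdout

# Technique 2: a different child, same pipe mechanism, but exchanging
# structured JSON instead of raw text.
json_child = subprocess.run(
    [sys.executable, "-c",
     "import sys, json; n = json.load(sys.stdin); "
     "json.dump({'doubled': n * 2}, sys.stdout)"],
    input="21", capture_output=True, text=True, check=True,
)
doubled = json.loads(json_child.stdout)["doubled"]

print(reply, doubled)  # → pong:ping 42
```

Other conventional techniques (sockets, shared memory, message queues) could be substituted for either exchange without changing the parent's overall structure.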
- The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. It will, however, be evident that additions, subtractions, deletions, and other modifications and changes may be made thereunto without departing from the broader spirit and scope as set forth in the claims. Thus, although specific embodiments of the disclosure have been described, these are not intended to be limiting. Various modifications and equivalents are within the scope of the following claims.
- Disjunctive language such as the phrase "at least one of X, Y, or Z," unless specifically stated otherwise, is intended to be understood within the context as used in general to present that an item, term, etc., may be either X, Y, or Z, or any combination thereof (e.g., X, Y, and/or Z). Thus, such disjunctive language is not generally intended to, and should not, imply that certain embodiments require at least one of X, at least one of Y, or at least one of Z to each be present.
- Preferred embodiments of this disclosure are described herein, including the best mode known for carrying out the disclosure. Variations of those preferred embodiments may become apparent to those of ordinary skill in the art upon reading the foregoing description.
Landscapes
- Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Software Systems (AREA)
- Microelectronics & Electronic Packaging (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Hardware Design (AREA)
- Stored Programmes (AREA)
- Data Exchanges In Wide-Area Networks (AREA)
- Computer And Data Communications (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
Abstract
Description
Claims
Priority Applications (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202480019296.3A CN120883737A (en) | 2023-03-16 | 2024-03-15 | Technology for cable termination protection devices in prefabrication plants |
| EP24719389.9A EP4681509A1 (en) | 2023-03-16 | 2024-03-15 | Techniques for a cable termination protection apparatus in a prefab factory |
Applications Claiming Priority (10)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US18/122,677 US12493457B2 (en) | 2023-03-16 | 2023-03-16 | Mobile prefab factory for building cloud regions |
| US18/122,674 US20240314026A1 (en) | 2023-03-16 | 2023-03-16 | Techniques for building cloud regions at a prefab factory |
| US18/122,676 US20240314038A1 (en) | 2023-03-16 | 2023-03-16 | Static network fabric at a prefab factory |
| US18/122,678 | 2023-03-16 | | |
| US18/122,678 US20240313460A1 (en) | 2023-03-16 | 2023-03-16 | Techniques for a cable termination protection apparatus in a prefab factory |
| US18/122,676 | 2023-03-16 | | |
| US18/122,674 | 2023-03-16 | | |
| US18/122,677 | 2023-03-16 | | |
| US18/122,675 | 2023-03-16 | | |
| US18/122,675 US12481795B2 (en) | 2023-03-16 | 2023-03-16 | Techniques for validating cloud regions built at a prefab factory |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2024192399A1 true WO2024192399A1 (en) | 2024-09-19 |
Family
ID=90731350
Family Applications (5)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/US2024/020258 Ceased WO2024192399A1 (en) | 2023-03-16 | 2024-03-15 | Techniques for a cable termination protection apparatus in a prefab factory |
| PCT/US2024/020248 Ceased WO2024192393A1 (en) | 2023-03-16 | 2024-03-15 | Techniques for building cloud regions at a prefab factory |
| PCT/US2024/020250 Ceased WO2024192394A1 (en) | 2023-03-16 | 2024-03-15 | Static network fabric at a prefab factory |
| PCT/US2024/020255 Ceased WO2024192397A1 (en) | 2023-03-16 | 2024-03-15 | Mobile prefab factory for building cloud regions |
| PCT/US2024/020261 Ceased WO2024192402A1 (en) | 2023-03-16 | 2024-03-15 | Techniques for validating cloud regions built at a prefab factory |
Family Applications After (4)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/US2024/020248 Ceased WO2024192393A1 (en) | 2023-03-16 | 2024-03-15 | Techniques for building cloud regions at a prefab factory |
| PCT/US2024/020250 Ceased WO2024192394A1 (en) | 2023-03-16 | 2024-03-15 | Static network fabric at a prefab factory |
| PCT/US2024/020255 Ceased WO2024192397A1 (en) | 2023-03-16 | 2024-03-15 | Mobile prefab factory for building cloud regions |
| PCT/US2024/020261 Ceased WO2024192402A1 (en) | 2023-03-16 | 2024-03-15 | Techniques for validating cloud regions built at a prefab factory |
Country Status (3)
| Country | Link |
|---|---|
| EP (5) | EP4681509A1 (en) |
| CN (5) | CN120883585A (en) |
| WO (5) | WO2024192399A1 (en) |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20190208290A1 (en) * | 2018-01-03 | 2019-07-04 | Infinera Corp. | Telecommunication appliance having high density embedded pluggable optics |
| US10674625B1 (en) * | 2018-08-07 | 2020-06-02 | Facebook, Inc. | Rack sideplane for interconnecting devices |
Family Cites Families (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US1675408A (en) | 1925-11-02 | 1928-07-03 | G & W Electric Speciality Co | Electrical-conductor-protecting means |
| US7850260B2 (en) * | 2007-06-22 | 2010-12-14 | Oracle America, Inc. | Injection/ejection mechanism |
| EP2583211B1 (en) * | 2010-06-15 | 2020-04-15 | Oracle International Corporation | Virtual computing infrastructure |
| US11038752B1 (en) * | 2020-06-16 | 2021-06-15 | Hewlett Packard Enterprise Development Lp | Creating a highly-available private cloud gateway based on a two-node hyperconverged infrastructure cluster with a self-hosted hypervisor management system |
| US11343079B2 (en) * | 2020-07-21 | 2022-05-24 | Servicenow, Inc. | Secure application deployment |
| US11546228B2 (en) * | 2021-01-04 | 2023-01-03 | Oracle International Corporation | Zero-touch configuration of network devices using hardware metadata |
| US11496364B1 (en) * | 2021-06-10 | 2022-11-08 | Hewlett Packard Enterprise Development Lp | Logical rack controller |
2024
- 2024-03-15 WO PCT/US2024/020258 patent/WO2024192399A1/en not_active Ceased
- 2024-03-15 EP EP24719389.9A patent/EP4681509A1/en active Pending
- 2024-03-15 CN CN202480019371.6A patent/CN120883585A/en active Pending
- 2024-03-15 WO PCT/US2024/020248 patent/WO2024192393A1/en not_active Ceased
- 2024-03-15 EP EP24720355.7A patent/EP4681403A1/en active Pending
- 2024-03-15 WO PCT/US2024/020250 patent/WO2024192394A1/en not_active Ceased
- 2024-03-15 CN CN202480019280.2A patent/CN120937315A/en active Pending
- 2024-03-15 WO PCT/US2024/020255 patent/WO2024192397A1/en not_active Ceased
- 2024-03-15 CN CN202480019281.7A patent/CN120883583A/en active Pending
- 2024-03-15 CN CN202480019299.7A patent/CN120883584A/en active Pending
- 2024-03-15 WO PCT/US2024/020261 patent/WO2024192402A1/en not_active Ceased
- 2024-03-15 EP EP24719388.1A patent/EP4681400A1/en active Pending
- 2024-03-15 CN CN202480019296.3A patent/CN120883737A/en active Pending
- 2024-03-15 EP EP24719390.7A patent/EP4681401A1/en active Pending
- 2024-03-15 EP EP24719921.9A patent/EP4681402A1/en active Pending
Also Published As
| Publication number | Publication date |
|---|---|
| EP4681400A1 (en) | 2026-01-21 |
| CN120883583A (en) | 2025-10-31 |
| WO2024192397A1 (en) | 2024-09-19 |
| CN120883585A (en) | 2025-10-31 |
| EP4681401A1 (en) | 2026-01-21 |
| WO2024192402A1 (en) | 2024-09-19 |
| WO2024192394A1 (en) | 2024-09-19 |
| EP4681402A1 (en) | 2026-01-21 |
| CN120883584A (en) | 2025-10-31 |
| EP4681509A1 (en) | 2026-01-21 |
| EP4681403A1 (en) | 2026-01-21 |
| CN120883737A (en) | 2025-10-31 |
| WO2024192393A1 (en) | 2024-09-19 |
| CN120937315A (en) | 2025-11-11 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US12306734B2 (en) | | Techniques for automated service monitoring and remediation in a distributed computing system |
| US12135991B2 (en) | | Management plane orchestration across service cells |
| JP2024546424A (en) | | Edge attestation for authorization of computing nodes in a cloud infrastructure system |
| US20240313460A1 (en) | | Techniques for a cable termination protection apparatus in a prefab factory |
| US12481795B2 (en) | | Techniques for validating cloud regions built at a prefab factory |
| US20240314038A1 (en) | | Static network fabric at a prefab factory |
| US20240314026A1 (en) | | Techniques for building cloud regions at a prefab factory |
| EP4681509A1 (en) | | Techniques for a cable termination protection apparatus in a prefab factory |
| US12493457B2 (en) | | Mobile prefab factory for building cloud regions |
| US12483530B2 (en) | | Techniques for rotating network addresses in prefab regions |
| US12541355B2 (en) | | Techniques for image-based region build |
| US12425300B2 (en) | | Techniques for rotating resource identifiers in prefab regions |
| US12034595B2 (en) | | Dynamically reprogrammable region lattices |
| US20250266988A1 (en) | | Techniques for device encryption in prefab region data centers |
| US20250133056A1 (en) | | Techniques for rotating service endpoints in prefab regions |
| US12229026B2 (en) | | Replicating resources between regional data centers |
| US12210400B2 (en) | | Techniques for performing fault tolerance validation for a data center |
| WO2025174990A1 (en) | | Techniques for device encryption in prefab region data centers |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 24719389 Country of ref document: EP Kind code of ref document: A1 |
|
| WWE | Wipo information: entry into national phase |
Ref document number: 202547080424 Country of ref document: IN |
|
| WWP | Wipo information: published in national office |
Ref document number: 202547080424 Country of ref document: IN |
|
| ENP | Entry into the national phase |
Ref document number: 2025553734 Country of ref document: JP Kind code of ref document: A |
|
| WWE | Wipo information: entry into national phase |
Ref document number: 2025553734 Country of ref document: JP |
|
| WWE | Wipo information: entry into national phase |
Ref document number: 202480019296.3 Country of ref document: CN |
|
| WWE | Wipo information: entry into national phase |
Ref document number: 2024719389 Country of ref document: EP |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |
|
| WWP | Wipo information: published in national office |
Ref document number: 202480019296.3 Country of ref document: CN |
|
| ENP | Entry into the national phase |
Ref document number: 2024719389 Country of ref document: EP Effective date: 20251016 |
|