WO2015123849A1 - Method and apparatus for extending the internet into intranets to achieve scalable cloud network - Google Patents
- Publication number
- WO2015123849A1 (application PCT/CN2014/072339)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- network
- communication
- nic
- intranet
- vms
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/08—Configuration management of networks or network elements
- H04L41/0803—Configuration setting
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/12—Discovery or management of network topologies
- H04L41/122—Discovery or management of network topologies of virtualised topologies, e.g. software-defined networks [SDN] or network function virtualisation [NFV]
Definitions
- An intranet is a privately owned computer network that uses Internet Protocol technology to connect privately owned and controlled computing resources. This term is used in contrast to the Internet, a public network bridging intranets.
- the really meaningful characteristic difference between an intranet and the Internet is scale: an intranet always has a limited scale, bound by the economic limit of its private owner, while the Internet has a scale that is not bound by the economic limit of any organization in the world.
- Cloud computing, in this disclosure, is considered for constructing a network of unbound scale and is regarded as a service; the so-called "private cloud", with a small, non-scalable size and nothing to do with service, is an unreasonable notion not to be regarded as cloud. Cloud computing should provide a practically unbound scale of computing resources, which, for example, include network resources.
- a very large network can optionally provide disaster avoidance, elastic bursting, or even distribution of split user data to non-cooperative authorities spanning continental geographical regions, for protecting data against abuse of power by corrupted authorities.
- any cloud computing service provider has limited economic power and thus owns computing resources of a bounded scale.
- Network resource for cloud Infrastructure as a Service can include the OSI reference model Layer 2 (Link Layer) in order to provide without loss of generality all upper layer services.
- Layer 2 describes the physical connection medium: copper, optical fiber, radio, etc.
- the physically separate intranets can see and operate with the MAC packets of the other side, and each operates as an enlarged network with the other side as its extension.
- MAC encapsulation technologies, also known as "large layer 2" networks, include VPN, GRE, VXLAN, NVGRE, STT, MPLS, and LISP, to provide examples. It would appear that MAC encapsulation of layer 2 in layer 3 is available to patch physically separate intranets into a network of unbound scale.
- Layer 2 is physical. Communication described in layer 2 is done through data packets, called MAC packets, exchanged between physical network interface cards (NICs). Each NIC has a unique MAC address (id), and a MAC packet has the form of the following triple: (Destination-MAC-id, Source-MAC-id, Payload).
- a MAC id is similar to a person's fingerprint or some other uniquely identifiable physical attribute. Such identifiers are unique; however, they are not convenient to use as the person's id, e.g., for daily communications purposes. Likewise, MAC ids are not easy to use directly. Moreover, applications need to move around in a bigger environment than a physical network. Hence, a logical layer is introduced:
- Layer 3 is a logical network in which an entity is identified by a unique IP address (id), and communication is in the form of IP packets in the following triple format: (Destination-IP-id, Source-IP-id, Payload).
- an IP id can be constructed to be unique, and can even be changed if necessary. IP ids are convenient to use for communications purposes.
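- As a minimal illustration of the two triples above (hypothetical field names, not part of the disclosure), a MAC frame and an IP packet share the same (destination, source, payload) shape, with the IP packet normally travelling as the payload of a MAC frame:

```python
from dataclasses import dataclass

@dataclass
class MacPacket:
    dst_mac: str    # Destination-MAC-id, fixed to a physical NIC
    src_mac: str    # Source-MAC-id
    payload: bytes

@dataclass
class IpPacket:
    dst_ip: str     # Destination-IP-id, logical and changeable
    src_ip: str     # Source-IP-id
    payload: bytes

# The IP packet rides inside a MAC frame on the physical wire.
ip_pkt = IpPacket("10.0.0.7", "10.0.0.3", b"application data")
frame = MacPacket("aa:bb:cc:00:00:07", "aa:bb:cc:00:00:03",
                  payload=repr(ip_pkt).encode())
print(frame.dst_mac, "<-", frame.src_mac)
```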
- MAC id is physical, unique, fixed to a NIC, cannot be changed, and the NIC is wired for sending or receiving data;
- an IP id is logical, movable, changeable, and convenient to use by applications.
- Plug-and-play standards govern the MAC/IP interplay:
- When a computer with a NIC is wired to a network cable, the computer needs to be associated with an IP address in order to perform operations.
- the standard is that the computer initiates a DHCP request (Dynamic Host Configuration Protocol); this is to broadcast an IP id request message to the network environment with its MAC id in the broadcast. Why broadcast? The computer has no idea to whom in the network it sends the message.
- the network system has one or more DHCP server(s).
- the first DHCP server which receives the IP id request will arrange for an available IP id and broadcast back to the requestor with the MAC id. Why broadcast the response?
- the DHCP server also has no idea where the computer with this NIC of this MAC-id is in the network.
- When an application in a machine (having a NIC) initiates a communication with a destination (application) machine (also having a NIC), the communication should conveniently use IP ids.
- these machines, in fact their operating systems (OSes), can only communicate in a physical way by exchanging data packets between NICs, i.e., the OSes can only communicate by exchanging MAC packets. Then how can the source OS know where the destination IP-addressed machine is?
- the standard is: the source OS will initiate an ARP (Address Resolution Protocol) message by broadcasting: "Who having this destination IP, give me your MAC id!" This time it is easier to understand why the source OS broadcasts: no server's help is needed, no configuration is needed; the protocol is purely in plug-and-play manner. All OSes in the network will hear the ARP broadcast, but only the one with the wanted IP address will respond with the MAC id. Having received the response, now the source OS can send the data packet in MAC packets through the physical wire linking the two NICs.
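- The plug-and-play interplay can be pictured with a toy in-memory simulation (hypothetical classes, not the wire protocols): the DHCP step broadcasts a MAC id and receives an IP id, while the ARP step broadcasts an IP id and receives a MAC id.

```python
class ToyBroadcastDomain:
    """In-memory stand-in for a layer 2 broadcast domain (not real DHCP/ARP)."""
    def __init__(self, ip_pool):
        self.ip_pool = list(ip_pool)
        self.mac_to_ip = {}                 # state held by the DHCP server

    def dhcp_broadcast(self, mac):
        # The first DHCP server to hear the broadcast hands out a free IP id
        # and broadcasts it back, keyed by the requester's MAC id.
        if mac not in self.mac_to_ip:
            self.mac_to_ip[mac] = self.ip_pool.pop(0)
        return self.mac_to_ip[mac]

    def arp_broadcast(self, wanted_ip):
        # Every host hears the query; only the owner of wanted_ip replies.
        for mac, ip in self.mac_to_ip.items():
            if ip == wanted_ip:
                return mac
        return None

net = ToyBroadcastDomain(["192.168.1.10", "192.168.1.11"])
ip_a = net.dhcp_broadcast("aa:aa:aa:00:00:01")   # host A obtains an IP id
ip_b = net.dhcp_broadcast("aa:aa:aa:00:00:02")   # host B obtains an IP id
print(net.arp_broadcast(ip_b))                   # A resolves B's MAC id
```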
- the conventional broadcast messages are, e.g., DHCP (for MAC/IP association) and ARP (for IP/MAC resolution).
- unlike a TCP link, which needs handshake establishment, a UDP message can simply be sent and received without requiring the sender and receiver to engage in any agreement confirming a good connection; thus UDP is well suited for broadcasting.
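- For example, a UDP datagram can be broadcast with no prior handshake; a minimal Python sketch (the port number is arbitrary and chosen for illustration):

```python
import socket

# Sender: no connection setup, just emit a datagram to the broadcast address.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
sender.sendto(b"who has 10.0.0.7?", ("255.255.255.255", 50000))
sender.close()

# Receiver: binds and reads whatever arrives; no accept/handshake as in TCP.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("", 50000))
# data, addr = receiver.recvfrom(4096)   # would block until a datagram arrives
receiver.close()
```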
- existing large layer 2 technologies suffer from scalability problems in trans-intranet patching networks. Broadcasting in trans-intranet scale requires very high bandwidth Internet connections in order to obtain reasonable response time. High bandwidth Internet connections are very costly. There are not any trans-datacenter clouds in successful commercial operation currently, in large part due to the costs of the high bandwidth that would be required by conventional approaches.
- a firewall for a trans-intranet tenant: a tenant's firewall is distributed in a trans-intranet manner so that VM-Internet communication packets are filtered locally and in a distributed fashion at each intranet.
- the routing forward table must be updated to all intranets in which a tenant has VMs, which is essentially trying to reach agreement over UDP, a connectionless channel (all "good" large layer 2 patching protocols, e.g., STT and VXLAN, are UDP based in order to serve, without loss of generality, any applications, e.g., video and broadcast). This translates to the infamous Byzantine Generals Problem.
- VM-Internet communications suffer from a chokepointed firewall having a bandwidth bottleneck.
- the size of the Internet is unbound. Any segment of network can join the Internet by interconnecting itself with the Internet provided the network segment is constructed, and the interconnection is implemented, in compliance with the OSI seven-layer reference model.
- network interconnection, i.e., scaling up the size of a network, if using the OSI reference model, follows the formulation that a network packet of one layer is the payload data for a packet of the layer immediately below.
- Interconnection at layers 2 and 3 in this formulation is stateless and connection-less, i.e., the interconnection needs no prior protocol negotiation.
- a web client accessing a search engine web server does not need any prior protocol negotiation.
- network interconnection using the conventional "large layer 2" patching technologies such as the VPN, MPLS, STT, VXLAN, and NVGRE protocols does not use the OSI layered formulation. These protocols encapsulate a layer 2 packet as the payload data for a layer 3 packet, as opposed to the OSI interconnection formulation.
- network patching using these "large layer 2" protocols cannot be done in a stateless streamlined fashion; prior protocol negotiation is necessary or else the interconnection peers misinterpret each other, and the interconnection will fail.
- a novel intranet topology is managed by a communication controller that controls a software defined networking ("SDN") component.
- the SDN component executes on a plurality of servers within the intranet and coordinates the communication between virtual machines hosted on the plurality of servers and entities outside the intranet network, under the control of the communication controller.
- the plurality of servers in the intranet can each be configured with at least two network interface cards ("NICs").
- a first external NIC can be connected to an external communication network (e.g., the Internet) and an internal second NIC can be connected to the other ones of the plurality of servers within the intranet.
- each internal NIC can be connected to a switch and through the switch to the other servers.
- the communication between each VM hosted on the plurality of servers can be dynamically programmed (e.g., by the SDN component operating under the control of the communication controller) to route through a respective external NIC or over external NICs of the plurality of servers connected by their respective internal NICs.
- the distributed servers having the external connected NICs can perform a network gateway role for the hosted VMs.
- the gateway role can include interfacing with entities outside the local network (e.g., entities connected via the Internet) on an external side of the network, and the VMs on the internal side of the network.
- the SDN component can be configured to implement network isolation and firewall policies at the locality and deployment of the VM.
- the SDN component can also define a region of the intranet (e.g., an "Internet within intranet") where the network isolation and the firewall policies are not executed.
- the SDN component does not execute any control in terms of tenant network isolation and firewall policy within the Internet within intranet region of the intranet.
- the network region is configured to provide through network routes between any VM on the distributed servers and any of the external NICs on respective distributed servers, for example, under the control of the communication controller. Under this topology, the SDN component executes full programmatic control on the packet routes between any VM and any of the external NICs.
- a local network system comprises at least one communication controller and a plurality of distributed servers, wherein the at least one communication controller controls the distributed servers and manages a SDN component deployed and executed on each of the distributed servers; the distributed servers hosting virtual machines (VMs) and managing communication for the VMs; wherein at least two of the distributed servers have at least two network interface cards (NICs): one NIC-ext, and one NIC-int; the NIC-ext is wired to an external network; the NIC-int is wired to a switch; wherein the distributed servers having the NIC-ext and NIC-int execute a network gateway role for the VMs, the gateway role including interfacing with entities outside the local network, and the VMs on an inner side of the network; the communication between each VM on a distributed server and the entities outside the local network can interface using the NIC-ext on the distributed server, or using the other NIC-exts on the other servers via the NIC-ints connected by the switch;
- a network communication system comprises at least one communication controller configured to manage communication within a logical network executing on resources of a plurality of distributed servers; the plurality of distributed servers hosting virtual machines (VMs) and handling the communication for the VMs; wherein at least two of the plurality of distributed servers are connected within an intranet segment, wherein the at least two of the distributed servers within the intranet segment include at least two respective network interface cards (NICs): at least one NIC-ext connected to an external network, and at least one NIC-int connected to a switch, wherein each server of the at least two of the plurality of distributed servers within the intranet segment executes communication gateway functions for interfacing with external entities on an external side of the network; and wherein the at least one communication controller dynamically programs communication pathways for the communication of the logical network to occur over any one or more of the at least two of the distributed servers within the intranet segment over respective NIC-exts by managing an SDN component executing on the at least two of the distributed servers.
- a local network system comprises at least one communication controller coordinating the execution of a SDN component; a plurality of distributed servers; wherein the at least one communication controller manages communication by the plurality of distributed servers and coordinates execution of the SDN component deployed and executing on the plurality of distributed servers; wherein the plurality of distributed servers host virtual machines (VMs) and manage communication for the VMs; wherein at least two of the plurality of servers include at least two respective network interface cards (NICs): at least one NIC-ext connected to entities outside the local network, and at least one NIC-int connected to a switch, wherein the communication between a VM on a server and the entities outside the local network interfaces on the external NIC on the distributed server or interfaces on NIC-exts on other distributed servers connected to the server by the switch and respective NIC-ints; wherein the SDN component is configured to coordinate the communication between the VMs and entities outside the local network under the management of the at least one communication controller.
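- A schematic sketch of the claimed wiring (hypothetical names; reference numerals borrowed from Fig. 2): each distributed server carries one NIC-ext wired to the external network and one NIC-int wired to the switch, so any server with a NIC-ext can play the gateway role for any hosted VM.

```python
from dataclasses import dataclass, field

@dataclass
class Server:
    name: str
    nic_ext: str                 # wired to the external network (e.g., the Internet)
    nic_int: str                 # wired to the intranet switch
    vms: list = field(default_factory=list)

@dataclass
class IntranetSegment:
    switch: str
    servers: list = field(default_factory=list)

    def gateways(self):
        # Every server with a NIC-ext can act as a gateway for any VM.
        return [s.nic_ext for s in self.servers]

seg = IntranetSegment("switch-218", [
    Server("server-202", "nic-ext-220", "nic-int-210", ["vm-a"]),
    Server("server-204", "nic-ext-222", "nic-int-212", ["vm-b"]),
])
print(seg.gateways())            # ['nic-ext-220', 'nic-ext-222']
```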
- the following embodiments are used in conjunction with the preceding network systems (e.g., local network and network communication systems).
- the SDN component is configured to execute network isolation and firewall policies for the VMs of a tenant at the locality of each VM software existence and deployment.
- the at least one communication controller manages the SDN execution of the network isolation and the firewall policies.
- the SDN component is configured to control pass or drop of network packets which are output from and input to the VM.
- the SDN component is configured to intercept and examine the network packets to be received by and have been communicated from the VM to manage the pass or the drop of the network packets.
- the SDN component further comprises defining a network region, an "Internet within the intranet," in the local network, other than and away from VMs existence and deployment localities where the SDN component executes tenants' network isolation and firewall policies, in which the SDN component does not execute any control in terms of tenant network isolation and firewall policy.
- within the Internet within intranet region, the SDN component is configured to provide through network routes between any VM and any of the NIC-exts on respective distributed servers, and wherein the SDN component, under management of the at least one communication controller, executes control on the dynamicity of the packet forwarding routes between VMs and any respective NIC-exts.
- at least one other local network system including a respective Internet within intranet region is controlled by the at least one communication controller.
- the local network and the at least one other local network are patch connected to one another through any pair of NIC-exts of the two local networks to form an enlarged trans- local-network system.
- additional other local network systems having a respective Internet within intranet region are patch connected to join a trans- local-network system to form a further enlarged trans- local-network system including elements having the Internet within intranet topology.
- trans- local-network communication traffic between a first and second VM in any two patch participating local networks are controlled by the SDN component running on the distributed servers in the respective local networks, and wherein the SDN component is programmed to generate dynamic and distributed routes between the first VM and respective external NICs in a first respective local network and the second VM and respective external NICs in a second respective local network.
- the external communication traffic between a VM in a local network system and an external entity is programmed by the SDN component running on the distributed servers in the local network system to take dynamic routes between the VM and the distributed NIC-exts in the local network system, and to have the network packets delivered by the Internet linking an external NIC of the local network system and the external entity over the Internet.
- the preceding systems can include or be further described by one or more of the following elements: wherein the SDN component is configured to execute network isolation and firewall policies for VMs of one or more tenants local to each VM; wherein the SDN component is configured to execute the network isolation and firewall policies where network packets are output from the VM or communicated to the VM; wherein the SDN component executes the network isolation and firewall policies for VMs of the one or more tenants at localities where network packets are output from the VM prior to them reaching any other software or hardware component in the local network, or input to the VM without en-routing any other software or hardware component in the local network; wherein the at least one communication controller manages the SDN execution of the network isolation and the firewall policies; wherein the SDN component is configured to control pass or drop of network packets which are output from and input to the VM; wherein the SDN component is configured to intercept and examine the network packets for receipt by and outbound from the VM to manage the pass or the drop of the network packets;
- communication traffic between a first and second VM in any two patch participating local networks are controlled by the SDN component running on the distributed servers in the respective local networks, and wherein the SDN component is programmed to generate programmed routes, the programmed routes including one or more of dynamic or distributed routes, between the first VM and respective external NICs in a first respective local network over at least one intermediate connection to the second VM and respective external NICs in a second respective local network; wherein the external communication traffic between a VM in a local network system and an external entity is programmed by the SDN component running on the distributed servers in the local network system to take programmed routes between the VM and the distributed NIC-exts in the local network system, and to have the network packets delivered by the Internet linking external NICs of the local network system and the external entity over the Internet; wherein the programmed routes include one or more of dynamic or distributed routes.
- a computer implemented method for managing communications of virtual machines ("VMs") hosted on an intranet segment.
- the method comprises managing, by at least one communication controller, network communication for at least one VM hosted on the intranet segment; and programming, by the at least one communication controller, a route for the network communication, wherein the act of programming includes selecting, for an external network communication, at least one external NIC (NIC-ext) over which the communication is routed.
- the method further comprises an act of patching a plurality of intranet segments, wherein each of the plurality of intranet segments include at least two distributed servers, each having at least one NIC-int and at least one NIC-ext.
- the method further comprises programming, by the at least one communication controller, communication routes between the plurality of intranet segments based on selection of or distribution between external connections to respective at least one NIC-exts within each intranet segment.
- the method further comprises managing network configuration messages from VMs by the at least one communication controller such that broadcast configuration messages are captured at respective intranet segments.
- the method further comprises an act of managing a plurality of VMs to provide distributed network isolation and firewall policies at the locality of each VM software existence and deployment.
- programming, by the at least one communication controller includes managing SDN execution of network isolation and the firewall policies.
- the method further comprises defining, by the at least one controller, a network region in the intranet segment, other than and away from VMs existence and deployment localities, in which the at least one controller does not execute any control in terms of tenant network isolation and firewall policy.
- programming, by the at least one controller includes providing through network routes between any VM hosted on the intranet segment and any of the NIC-exts on respective distributed servers, and controlling dynamicity of packet forwarding routes between VMs and any respective NIC-exts.
- FIG. 1 is a block diagram of a conventional network architecture including for example, a gateway chokepoint;
- FIG. 2 is a block diagram of a proposed intranet topology, according to various embodiments.
- FIG. 3 is a block diagram of an intra-inter-net interfacing topology, according to various embodiments.
- FIG. 4 is a block diagram of an example NVI system, according to one embodiment
- FIG. 5 is a block diagram of an example NVI system, according to one embodiment
- FIG. 6 is a block diagram of an example distributed firewall, according to one embodiment
- FIG. 7 is an example process for defining and/or maintaining a tenant network, according to one embodiment
- FIG. 8 is an example certification employed in various embodiments.
- FIG. 9 is an example process for execution of a tenant defined communication policy, according to one embodiment.
- FIG. 10 is an example process for execution of a tenant defined
- FIG. 11 is an example user interface, according to one embodiment.
- FIG. 12 is a block diagram of an example tenant programmable trusted network, according to one embodiment.
- FIG. 13 is a block diagram of a general purpose computer system on which various aspects and embodiments may be practiced.
- FIG. 14 is a block diagram of an example logical network, according to one embodiment.
- FIG. 15 is a process flow for programming network communication, according to one embodiment.
- At least some embodiments disclosed herein include apparatus and processes for an Internet within intranet topology.
- the Internet within intranet topology enables SDN route programming.
- SDN route programming can be executed for trans-datacenter virtual clouds, virtual machine to Internet routes, and further can enable scalable patching of intranets.
- the Internet within intranet topology includes a plurality of distributed servers hosting VMs.
- the plurality of distributed servers can perform a network gateway role, and include an external NIC having a connection to the Internet and an internal NIC connected to other ones of the distributed servers, for example, through a switch.
- the distributed servers can each operate as programmable forwarding devices for VMs.
- the configuration enables fully SDN controlled intranet networks that can fully leverage redundant Internet connections. These fully SDN controlled intranet networks can be patched into large scale cloud networks (including for example trans-datacenter cloud networks).
- isolation of a tenant's network of virtual machines can be executed by NVI (Network Virtualization Infrastructure) software and each VM hosted by a server in the tenant network can be controlled in a distributed manner at the point of each virtual NIC of the VM (discussed in greater detail below).
- the underlying servers can be open to communicate without restriction.
- the underlying servers can operate like the Internet (e.g., open and accessible) but under the SDN programming control.
- SDN control and/or software alone is insufficient to provide fully distributed routing.
- SDN can do little without route redundancy.
- Shown in Fig. 1 is a conventional network architecture 100. Even with SDN programming implemented, the Internet traffic from the plurality of servers (e.g., 102-108 each having their respective NICs 110-116) cannot be fully SDN distributed.
- each server is connected to at least one switch (e.g., 118), which is connected to a gateway node 120.
- the gateway node 120 can be a "neutron node" from the commercially available Openstack cloud software.
- the gateway node 120 connects to Internet 122 via an external NIC 124 and routes the traffic to the servers via an internal NIC 126. However, based on the intranet to Internet topology shown, the intranet to Internet traffic cannot be SDN distributed. The gateway node 120 forms a chokepoint through which all intranet traffic must pass.
- FIG. 2 is a block diagram of an example Internet-intranet topology 200 that can be used to support a scalable cloud computing network.
- a plurality of servers can host a plurality of virtual machines as part of a distributed cloud.
- the servers (e.g., 202-208) can be configured with at least two NICs.
- Each server is configured with an internal NIC (e.g., 210-216) which connects the servers (e.g., 202-208) to each other through at least one switch (e.g., 218).
- each of the servers can include an external NIC (e.g., 220-226) each of which provides a connection to the Internet 228 (or other external network).
- each of the connections (e.g., 230-236) can be low bandwidth, low cost, Internet connections (including, for example, subscriber lines).
- route programming can take full advantage of all the available Internet connections (e.g., 230-236), providing, in effect, a high bandwidth low cost connection.
- the three dots shown in Fig. 2 illustrate the potential to add additional servers (e.g., at 238 with respective connections to switch 218 at 240 and respective Internet connections at 242).
- each server in the Internet-intranet-interfacing topology can execute the functions of a SDN programmable gateway.
- each server can include SDN components executing on the server to control communication of traffic from and to VMs.
- the Internet traffic to and from any VM hosted on one or more of the plurality of servers can go via any external NIC of the connected servers.
- fully distributed routing of network traffic is made available.
- the SDN components executing on the servers can dynamically reprogram network routes to avoid bottlenecks, increase throughput, distribute large communication jobs, etc.
- the SDN components are managed by a communication controller.
- the communication controller can be configured to co-ordinate operation of the SDN components on the respective servers.
- a variety of virtualization infrastructures can be used to provide virtual machines (VMs) to a tenant seeking computing resources.
- a network virtualization infrastructure ("NVI") software is used to manage a tenant network of VMs (discussed in greater detail below).
- the NVI system/software can be implemented to provide network isolation processing and/or system components.
- the management at respective vNICs divides the inside and outside of the tenant's network at the vNIC of each virtual machine.
- the VM which is "north" of the vNIC is inside the tenant network, and the software cable which is plugged into the vNIC on one end and connected to the underlying physical server hosting the VM on the other end is "south."
- the vNIC likewise divides the tenant network between the VM on the "north" and any external connections of the physical server.
- the tenant's network border is distributed to the point of the respective vNICs of each VM of the tenant. From this configuration it is realized that the tenant's logical layer 2 network is patched by a plural number of intranets, each having the minimum size of containing one VM.
- the DHCP and ARP broadcasting protocol messages which are initiated by the OS in the VMs can be received and processed by the NVI software.
- the NVI In response to DHCP and ARP messages from the VMs, the NVI generates IP/MAC associations in a global database.
- the global database is accessible by NVI hypervisors hosting the VMs of the tenant.
- the new large layer 2 patching method discussed does not involve any broadcasting message in the DHCP and ARP plug-and-play standard. From the perspective of the OS of the VMs, the two standard protocols continue to serve the needed interplay role for the layer 2 and layer 3 without change. However, the network configuration messages no longer need broadcastings as the addressing associations are handled by the NVI infrastructure (e.g., NVI hypervisors managing entries in a global database).
- the logical layer 2 of the tenant can be implemented in trans-datacenter manner based on handling network broadcast within the virtualization infrastructure. For example, by limiting broadcasting to the range of the minimum intranet of one VM, the disclosed layer 2 patching is scalable to an arbitrary size. Communications between trans-datacenter VMs of the same tenant occur in logical layer 2 fashion.
- the functions of the global database instructed NVI hypervisors permit the DHCP and ARP standards to remain constrained to their normal plug-and-play execution for the VM users.
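- A minimal sketch (hypothetical API, not the actual NVI implementation) of the idea in the preceding paragraphs: the NVI hypervisor intercepts a VM's DHCP or ARP broadcast at the vNIC and answers it from the shared global database, so no broadcast ever leaves the locality of the VM.

```python
class GlobalDatabase:
    """Shared IP/MAC association store visible to all NVI hypervisors."""
    def __init__(self):
        self.ip_by_mac = {}
        self.mac_by_ip = {}

    def associate(self, mac, ip):
        self.ip_by_mac[mac] = ip
        self.mac_by_ip[ip] = mac

class NviHypervisor:
    def __init__(self, db):
        self.db = db

    def on_dhcp_request(self, vm_mac, allocate_ip):
        # Intercepted at the vNIC; answered locally, nothing is broadcast.
        ip = allocate_ip()
        self.db.associate(vm_mac, ip)
        return ip

    def on_arp_request(self, wanted_ip):
        # Resolution is a database lookup instead of a network-wide broadcast.
        return self.db.mac_by_ip.get(wanted_ip)

db = GlobalDatabase()
hv1, hv2 = NviHypervisor(db), NviHypervisor(db)   # possibly in different datacenters
ip = hv1.on_dhcp_request("aa:aa:aa:00:00:01", lambda: "10.1.0.5")
print(hv2.on_arp_request(ip))                     # 'aa:aa:aa:00:00:01'
```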
- the combination of the SDN software and the network topology enables traffic engineering and/or enlarging the trans-datacenter bandwidth.
- the underlying servers of the VMs of the tenant can become public just like the Internet.
- the underlying servers can be configured as publicly accessible resources similar to any Internet accessible resource, and at the same time the servers themselves are under route programming control.
- the route programming control can be executed by SDN components executing on the underlying servers.
- the SDN components can be managed by one or more communication controllers.
- the underlying servers can be directly connected to the Internet in one of at least two respective NICs, denoted by NIC-external ("NIC-ext").
- the servers are locally (e.g., within a building) connected by switches in the other of the NICs, denoted by NIC-internal ("NIC-int").
- all of the Internet connected NICs can be used by any VM to provide redundant communication routes, either for in-cloud trans-datacenter traffics, or for off-cloud external communications traffics with the rest of the world.
- the available redundancy greatly increases the utilization of the Internet, which is known to have been architected to contain high redundancy, and to have been over-provisioned through many years of commercial deployment.
- the Internet connected servers of the disclosed topology are the programmable forwarding devices, and can therefore be used to exploit the under-utilized Internet bandwidth potential.
- Fig. 3 is a block diagram of an Internet-intranet interfacing topology 300.
- in Fig. 3, VMs of different tenants are shown (e.g., each shape can correspond to a different tenant network).
- the VMs are provisioned and controlled via the virtualization infrastructure (e.g., 328 and 330), which is connected to the Internet over distributed Internet-intranet interfaces (e.g., 332-348 and 350-366).
- communication controllers and/or SDN components can leverage the distributed Internet-intranet interfaces for fully dynamic and programmatic route control of traffic.
- a conventional intranet network is connected via multiple Internet connections (at 374); however, interface 372 represents a chokepoint where traffic can still bottleneck. Even with SDN, the interface 372 cannot fully distribute traffic and cannot fully exploit available bandwidth.
- the various intranet topologies discussed above can be implemented to provide for dynamic and distributed bandwidth exploitation.
- the underlying hardware server for the VM (denoted Server-1 (e.g., 202 of Fig. 2)) is externally connected to the Internet on NIC-external- 1 (e.g., 220), and is internally connected to many other servers in an intranet (e.g., a local intranet housed in a building) on NIC-internal- 1 (e.g., 210) via switches (e.g., 218).
- the other servers are denoted Server-i, i = 2, 3, ..., n (e.g., 204, 206, and 208).
- Each of Server-i has a NIC-External-i directly connected to the Internet.
- typical intranet connections are over-provisioned; that is, with a copper switch, or an even faster optical-fiber switch, intranet connections in a datacenter have high bandwidth.
- web requests for the VM web server can be distributed to the n low-bandwidth NIC-external-i's and redirected to Server- 1 and to the VM.
- the web service provider only needs to rent low bandwidth Internet connections, which can be aggregated into a very high bandwidth. It is well known that the dollar cost of Internet bandwidth is a convex function that increases rather quickly as the desired bandwidth increases.
- high bandwidth can be achieved at low cost making this a valid Internet traffic engineering technology.
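- As an illustrative (assumed) figure, ten rented 100 Mbps subscriber lines on ten NIC-external-i's aggregate to roughly 1 Gbps of ingress for the VM web server at a fraction of the price of a single 1 Gbps line. A toy round-robin dispatcher (hypothetical, not the patented routing logic) shows the distribution of incoming requests over the external NICs:

```python
import itertools

# NIC-external-i of Server-i (e.g., 220-226), each on a cheap Internet line.
nic_externals = ["nic-ext-220", "nic-ext-222", "nic-ext-224", "nic-ext-226"]
_round_robin = itertools.cycle(nic_externals)

def dispatch(request_id: int) -> str:
    """Spread incoming web requests over the n low-bandwidth external NICs;
    the over-provisioned intranet switch then redirects each to Server-1's VM."""
    return next(_round_robin)

for req in range(6):
    print(f"request {req} enters via {dispatch(req)}")
```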
- traffic engineering embodiments can be implemented. For instance, upon detecting that a NIC-external for a trans-datacenter connection is congested, real-time route programming can select another server's NIC-external to evade the congestion (e.g., congestion detected by a communication controller and traffic re-routed by SDN components).
- a very big file in one datacenter in need of being backed up (e.g., for disaster recovery purposes) to another datacenter can be divided into much smaller parts and transmitted via many low-cost Internet connections to the other end, and reassembled, to greatly increase the transfer efficiency.
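- A sketch of the splitting idea (sequential here for clarity; a real transfer would push the chunks over distinct NIC-ext Internet connections in parallel and reassemble them at the destination datacenter):

```python
def split(data: bytes, parts: int):
    """Divide a large backup file into roughly equal chunks."""
    size = -(-len(data) // parts)            # ceiling division
    return [data[i:i + size] for i in range(0, len(data), size)]

def reassemble(chunks):
    return b"".join(chunks)

backup = b"x" * 10_000_000                    # stand-in for a very big file
chunks = split(backup, parts=8)               # one chunk per low-cost connection
# each chunk would travel over a different NIC-ext, then be reassembled remotely
assert reassemble(chunks) == backup
```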
- an NVI architecture achieves network virtualization and provides a decoupling between the logical network of VMs and the physical network of the servers.
- the decoupling facilitates the implementation of a software-defined network ("SDN") in the cloud space.
- SDN software-defined network
- the functions of the SDN are extended to achieve programmable traffic control and to better utilize the potential of the underlying physical network. It is realized that SDN is not just using software programming language to realize network function boxes such as switch, router, firewall, network slicing, etc, which are mostly provisioned in hardware boxes, as many understand at a superficial level.
- consider a source A sending to a destination Z: A's packet is <Z-IP, A-IP, Payload>, and it first reaches a network function box, e.g., a network gateway B.
- B makes the following IP packet: <C-IP, B-IP, A's packet as payload>.
- C repeats: <D-IP, C-IP, A's packet as payload>, ..., until Y (e.g., the gateway of Z) repeats: <Z-IP, Y-IP, A's packet as payload>.
- the route is an a-priori function of the packet which is received by a network function, and therefore is fixed, once sent out, and cannot be rerouted, e.g., upon traffic congestion, even though the Internet does have tremendous redundancy.
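- A sketch of the hop-by-hop repackaging described above (hypothetical helper): each network function box wraps the packet it received as the payload of a new packet addressed to the next fixed hop, so the route is determined when the packet is sent and cannot later be changed by the sender.

```python
def wrap(next_hop_ip: str, own_ip: str, packet) -> tuple:
    """A network function box re-emits the received packet as the payload
    of a new (destination, source, payload) triple addressed to the next hop."""
    return (next_hop_ip, own_ip, packet)

a_packet = ("Z-IP", "A-IP", b"payload")       # A's original packet
at_b = wrap("C-IP", "B-IP", a_packet)         # gateway B
at_c = wrap("D-IP", "C-IP", at_b)             # next box C
at_y = wrap("Z-IP", "Y-IP", at_c)             # Z's gateway Y, the last hop
print(at_y)   # the route A -> B -> C -> ... -> Y -> Z was fixed a priori
```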
- Fig. 15 is an example process flow 1500 for programming network communication.
- Process 1500 begins at 1502 where network traffic is received or accepted.
- the received message is evaluated to determine where the message is addressed. If the message is addressed internal to the intranet segment on which it originated (1504 internal), e.g., between VMs on one intranet segment, the message is routed via the NIC-ints of the respective servers. If the message is addressed external to the intranet segment (1504 external), then a route is programmed to traverse one or more NIC-exts of the servers within the intranet segment.
- a communication controller manages the programming of the routes to be taken. The controller can be configured to evaluate available bandwidth of the one or more NIC-exts, determine congestion on one or more of the NIC-exts, and respond by programming a route accordingly.
- the communication controller manages SDN components executing on the servers that make up the intranet segment to provide SDN programming of traffic.
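- A minimal sketch (hypothetical controller API) of process 1500: traffic addressed inside the intranet segment stays on the NIC-ints, while externally addressed traffic is programmed onto whichever NIC-ext currently has the most headroom.

```python
def program_route(dst_is_internal: bool, nic_ext_load: dict, threshold: float = 0.8):
    """Return the interface class and, for external traffic, the chosen NIC-ext.
    nic_ext_load maps each NIC-ext to its current utilisation in [0, 1]."""
    if dst_is_internal:
        return ("NIC-int", None)                  # VM-to-VM inside the segment
    # External: avoid congested NIC-exts and pick the least loaded one.
    candidates = {n: u for n, u in nic_ext_load.items() if u < threshold}
    chosen = min(candidates or nic_ext_load, key=nic_ext_load.get)
    return ("NIC-ext", chosen)

loads = {"nic-ext-220": 0.95, "nic-ext-222": 0.40, "nic-ext-224": 0.10}
print(program_route(False, loads))    # ('NIC-ext', 'nic-ext-224')
print(program_route(True, loads))     # ('NIC-int', None)
```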
- SDN, as implemented herein, enables such network traffic to be programmed en route, and therefore to utilize the unused potential of the Internet's redundancy.
- the underlying physical network topology can be re-designed to add route redundancy. For example, let each server in the intranet act as a gateway, with one NIC directly wired to the Internet, and one NIC wired to other such servers in the intranet via a back-end switch. Once configured in this manner, VM-Internet communication routes can be SDN programmed over the redundant routes.
- intranet lines have high bandwidth, easily at gigabits per second levels, like freeway traffic, while Internet bandwidth is typically low, easily orders of magnitude lower, and the rental fee for high bandwidth rises sharply as a convex function (like x^2 or e^x), due to under-utilization and hence low return on the heavy investment in the infrastructure.
- This new intranet network wiring topology provides sufficient route redundancy between each VM and the Internet, and can employ SDN to program the Internet- VM traffic over the redundant routes.
- many low-cost low-bandwidth Internet lines can be connected to many external facing NICs with intranet elements, and can be aggregated into a high bandwidth communication channel.
- the servers of each intranet form distributed gateways interfacing the Internet.
- the distributed gateways avoid traffic congestion, just like widened tollgates on a freeway, thus avoiding the formation of a traffic bottleneck and/or the very high cost of renting high-bandwidth Internet services.
- Fig. 2 is an example intranet network topology according to various embodiments. Under the illustrated topology, intranet to Internet traffic can be SDN distributed, permitting, for example, aggregation of many low bandwidth Internet communication channels, and further, permitting distributed network routing from the intranet to the Internet.
- Fig. 3 is a diagram of an example novel Internet-intranet interfacing topology, according to various embodiments, that takes advantage of the distributed Internet-intranet interfaces to provide programmatic traffic engineering.
- the novel intranet topology can be implemented in conjunction with various virtualization infrastructures.
- One example virtualization infrastructure includes an NVI infrastructure.
- the intranet topology is configured to facilitate dynamic route programming for VMs through the underlying servers that make up the intranet.
- Each server within such intranet segments can operate as a gateway for the VMs hosted on the intranet.
- in some embodiments, a minimum of two servers having the two NIC configuration (e.g., at least one internal NIC and at least one external NIC) form such an intranet segment.
- the VMs are provisioned and managed under an NVI infrastructure.
- various properties and benefits of the NVI infrastructure are discussed below with respect to examples and embodiments.
- the functions, system elements, and operations discussed above, for example, with respect to intranet topology and/or patching can be implemented on or in conjunction with the systems, functions, and/or operations of the NVI systems below.
- the NVI systems and/or functions provide distributed VM control on tenant networks, providing network isolation and/or distributed firewall services.
- the intranet topology discussed above enables SDN route programming for trans-datacenter and VM-Internet routes, and scalable intranet patching.
- the NVI infrastructure is configured to provide communication functions to a group of virtual machines (VMs), which in some examples, can be distributed across a plurality of dataclouds or cloud providers.
- the NVI implements a logical network between the VMs enabling intelligent virtualization and programmable configuration of the logical network.
- the NVI can include software components (including, for example, hypervisors (i.e. VM managers)) and database management systems (DBMS) configured to manage network control functions.
- the NVI manages communication between a plurality of virtual machines by managing physical communication pathways between a plurality of physically associated network addresses which are mapped to respective globally unique logical identities of the respective plurality of virtual machines.
- network control is implemented on vNICs of VMs within the logical network.
- the NVI can direct communication on the logical network according to mappings between logical addresses (e.g., assigned at vNICs for VMs) of VMs and physically associated addresses assign by respective clouds with the mappings being stored by the DBMS.
- the mappings can be updated, for example, as VMs change location.
- a logical address can be remapped to a new physically associated address when a virtual machine changes physical location, with the new physically associated address being recorded in the DBMS to replace the physically associated address used before the VM changed physical location.
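- A sketch (hypothetical schema) of the mapping update when a VM changes physical location: the globally unique logical identity stays fixed while the physically associated address recorded in the DBMS is replaced.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE mapping (logical_id TEXT PRIMARY KEY, physical_addr TEXT)")

# Initial placement of the VM.
db.execute("INSERT INTO mapping VALUES (?, ?)",
           ("vm-uuid-0001", "10.8.1.23"))        # address assigned by cloud A

# The VM migrates; only the physically associated address changes.
db.execute("UPDATE mapping SET physical_addr = ? WHERE logical_id = ?",
           ("172.16.9.5", "vm-uuid-0001"))       # address assigned by cloud B
db.commit()

print(db.execute("SELECT * FROM mapping").fetchall())
# [('vm-uuid-0001', '172.16.9.5')]
```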
- the network control is fully logical enabling the network dataflow for the logical network to continue over the physical networking components (e.g., assigned by cloud providers) that are mapped to and underlie the logical network.
- enabling the network control functions directly at vNICs of respective VMs provides for definition and/or management of arbitrarily scalable virtual or logical networks.
- such control functions can include "plugging"/"unplugging" logically defined unicast cables between vNICs of pairs of VMs to implement network isolation policy, transforming formats of network packets (e.g., between IPv6 and IPv4 packets), providing cryptographic services on application data in network packets to implement cryptographic protection of tenants' data, monitoring and/or managing traffic to implement advanced network QoS (e.g., balance load, divert traffic, etc.), providing intrusion detection and/or resolution to implement network security QoS, and allocating expenses to tenants based on network utilization, among other options.
- such logical networks can target a variety of quality of service goals.
- Some example goals include providing a cloud datacenter configured to operate in resource rental, multi-tenancy, and in some preferred embodiments, Trusted Multi-tenancy, and in further preferred embodiments, on-demand and self-serviceable manners.
- resource rental refers to a tenant (e.g., an organization or a compute project) that rents a plural number of virtual machines (VMs) for its users (e.g., employees of the tenant) for computations the tenant wishes to execute.
- the users, applications, and/or processes of the tenant use the compute resources of a provider through the rental VMs, which can include operating systems, databases, web/mail services, applications, and other software resources installed on the VMs.
- multi-tenancy refers to a cloud datacenter or cloud compute provider that is configured to serve a plural number of tenants.
- the multi-tenancy model is conventional throughout compute providers, which typically allows the datacenter to operate with economy of scale.
- multi-tenancy can be extended to trusted multi-tenancy, where VMs and associated network resources are isolated from access by the system operators of the cloud providers, and, unless with explicitly instructed permission(s) from the tenants involved, any two VMs and associated network resources which are rented by different tenants respectively are configured to be isolated from one another. VMs and associated network resources which are rented by one tenant can be configured to communicate with one another according to any security policy set by the tenant.
- on-demand and self-serviceability refers to the ability of a tenant to rent a dynamically changeable quantity/amount/volume of resources according to need, and in preferred embodiment, in a self-servicing manner (e.g., by editing a restaurant menu like webpage).
- self-servicing can include instructing the datacenter using simple web-service-like interfaces for resource rental at a location outside the datacenter.
- self-servicing resource rental can include a tenant renting resources from a plural number of cloud providers which have trans-datacenter physical and/or geographical distributions. Conventional approaches may fail to provide any one or more of: multi-tenancy, trusted multi-tenancy, or on-demand and self-serviceable operation.
- for IT security (e.g., cloud security), isolation of a tenant's Local Area Network (LAN) in cloud datacenters can be necessary.
- LAN isolation turns out to be a very challenging task unresolved by conventional approaches.
- the systems and methods provide logical de-coupling of a tenant network through globally uniquely identifiable identities assigned to VMs.
- Virtualization infrastructure (VI) at each provider can be configured to manage communication over a logical virtual network created via the global identifiers for VMs rented by the tenant.
- the logical virtual network can be configured to extend past cloud provider boundaries, and in some embodiments, allows a tenant to specify the VMs and associated logical virtual network (located at any provider) via whitelist definition.
- Shown in Fig. 4 is an example embodiment of a network virtualization infrastructure (NVI) or NVI system 400.
- system 400 can be implemented on and/or in conjunction with resources allocated by cloud resource providers.
- system 400 can be hosted, at least in part, external to virtual machines and/or cloud resources rented from cloud service providers.
- the system 400 can also serve as a front end for accessing pricing and rental information for cloud compute resources.
- a tenant can access system 400 to allocate cloud resources from a variety of providers. Once the tenant has acquired specific resources, for example, in the form of virtual machines hosted at one or more cloud service providers, the tenant can identify those resources to define their network via the NVI system 400.
- the logic and/or functions executed by system 400 can be executed on one or more NVI components (e.g., hypervisors (virtual machine managers)) within respective cloud service providers.
- one or more NVI components can include proxy entities configured to operate in conjunction with hypervisors at respective cloud providers.
- the proxy entities can be created as specialized virtual machines that facilitate the creation, definition and control function of a logical network (e.g., a tenant isolated network). Creation of the logical network can include, for example, assignment of globally unique logical addresses to VMs and mapping of the globally unique logical addresses to physically associated addresses of the resources executing the VMs.
- the proxy entities can be configured to define logical communication channels (e.g., logically defined virtual unicast cables) between pairs of VMs based on the globally unique logical addresses. Communication between VMs can occur over the logical communication channels without regard to physically associated addressing which are mapped to the logical addresses/identities of the VMs.
- the proxy entities can be configured to perform translations of hardware addressed communication into purely logical addressing and vice versa.
- a proxy entity operates in conjunction with a respective hypervisor at a respective cloud provider to capture VM communication events, route VM communication between a vNIC of the VM and a software switch or bridge in the underlying hypervisor upon which the proxy entity is serving the VM.
- a proxy entity is a specialized virtual machine at respective cloud providers or respective hypervisors configured for back end servicing.
- a proxy entity manages internal or external communication according to communication policy defined on logical addresses of the tenants' isolated network (e.g., according to network edge policy).
- the NVI system 400 can also include various other components.
- the NVI system 400 can be configured to map globally unique identities of respective virtual machines to the physically associated addresses of the respective resources.
- the NVI system 400 can include an NVI engine 404 configured to assign globally unique identities of a set of virtual machines to resources allocated by hypervisors to a specific tenant. The set of virtual machines can then be configured to communicate with each other using the globally unique identities.
- the NVI system and/or NVI engine is configured to provide network control functions over logically defined unicast channels between virtual machines within a tenant network. For example, the NVI system 400 can provide for network control at each VM in the logical network.
- the NVI system 400 can be configured to provide network control at a vNIC of each VM, allowing direct control of network communication of the VMs in the logical network.
- the NVI system 400 can be configured to define point-to-point connections, including for example, virtual cable connections between vNICs of the virtual machines of the logical network using their globally unique addresses.
- Communication within the network can proceed over the virtual cable connections defined between a source VM and a destination VM.
- the NVI system 400 and/or NVI engine 404 can be configured to open and close communication channels between a source and a destination (including, for example, internal and external network addresses).
- the NVI system 400 and/or NVI engine 404 can be configured to establish virtual cables providing direct connections between virtual machines that can be connected and disconnected according to a communication policy defined on the system.
- each tenant can define a communication policy according to their needs.
- the communication policy can be defined on a connection by connection basis, both internally to the tenant network and by identifying external communication connections.
- the tenant can specify, for an originating VM in the logical network, which destination VMs the originating VM is permitted to communicate with.
- the tenant can define communication policy according to source and destination logical identities.
- the NVI system 400 and/or NVI engine 404 can manage each VM of the logical network according to an infinite number of virtual cables defined at vNICs for the VMs.
- virtual cables can be defined between pairs of VMs and their vNICs for every VM in the logical network.
- the tenant can define communication policy for each cable, allowing or denying traffic according to programmatic if then else logic.
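- A sketch (hypothetical rule format) of per-cable policy evaluation at a vNIC: the tenant whitelists logical (source, destination) pairs, and the check runs before a packet reaches any other component, plugging or leaving unplugged the corresponding virtual cable.

```python
# Tenant-defined whitelist keyed by (source logical id, destination logical id).
policy = {
    ("vm-web-01", "vm-db-01"): "allow",
    ("vm-web-01", "internet"): "allow",
    # anything not listed is implicitly denied
}

def on_packet(src_id: str, dst_id: str) -> str:
    """Evaluated at the source VM's vNIC before the packet reaches
    any other software or hardware component in the network."""
    if policy.get((src_id, dst_id)) == "allow":
        return "pass"      # the virtual cable is plugged for this pair
    return "drop"          # the cable stays unplugged; the packet never leaves

print(on_packet("vm-web-01", "vm-db-01"))    # pass
print(on_packet("vm-web-01", "vm-mail-99"))  # drop
```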
- the NVI system and/or engine are configured to provide distributed firewall services.
- distribution of connection control can eliminate the chokepoint limitations of conventional architectures, and in further embodiments, permit dynamic re-architecting of a tenant network topology (e.g., adding, eliminating, and/or moving cloud resources that underlie the logical network).
- the NVI system 400 and/or engine 404 can be configured to allocate resources at various cloud compute providers.
- the system and/or engine can be executed by one or more hypervisors at the respective cloud providers.
- the system and/or engine can be configured to request respective hypervisors create virtual machines and provide identifying information for the created virtual machines (e.g., to store mappings between logical addresses and physically associated address of the resources).
- the functions of system 400 and/or engine 404 can be executed by a respective hypervisor within a cloud provider system.
- the functions of system 400 and/or engine 404 can be executed by and/or include a specialized virtual machine or proxy entity configured to interact with a respective hypervisor.
- the proxy entity can be configured to request resources and respective cloud provider identifying information (including physically associated addresses for resources assigned by hypervisors).
- the system and/or engine can be configured to request, capture, and/or assign temporary addresses to any allocated resources.
- the temporary addresses are "physically associated" addresses assigned to resources by respective cloud providers.
- the temporary addresses are used in conventional networking technologies to provide communication between resources and to other addresses, for example, Internet addresses.
- the physically associated addresses are included in network packet metadata, either as a MAC address or an IP address or a context tag.
- the NVI system 400 de-couples any physical association in its network topology by defining logical addresses for each VM in the logical network.
- communication can occur over virtual cables that connect pairs of virtual machines using their respective logical addresses.
- the system and/or engine 404 can be configured to manage creation/allocation of virtual machines and also manage communication between the VMs of the logical network at respective vNICs.
- the system 400 and/or engine 404 can also be configured to identify communication events at the vNICs of the virtual machines when the virtual machines initiate or respond to a communication event.
- Such direct control can provide advantages over conventional approaches.
- the system and/or engine can include proxy entities at respective cloud providers.
- the proxy entities can be configured to operate in conjunction with respective hypervisors to obtain hypervisor assigned addresses and to identify communication events at the vNICs of the VMs.
- a proxy entity can be created at each cloud provider involved in a tenant network, such that the proxy entity manages the virtualization/logical isolation of the tenant's network.
- each proxy entity can be a back-end servicing VM configured to provide network control functions on the vNICs of front-end business VMs (between vNICs of business VM and hypervisor switch or hypervisor bridge), to avoid programming in the hypervisor directly.
- the system 400 and/or engine 404 can also be configured to implement communication policies within the tenant network. For example, when a virtual machine begins a communication session with another virtual machine, the NVI system 400 and/or NVI engine 404 can identify the communication event and test the communication against tenant defined policy.
- the NVI system 400 and/or NVI engine 404 component can be configured to reference physically associated addresses for VMs in the communication and lookup their associated globally unique addresses and/or connection certificates (e.g., stored in a DBMS). In some settings, encryption certificates can be employed to protect/validate network mappings.
- a PKI certificate can be used to encode a VM's identity - Cert(UUID/IPv6) with a digital signature for its global identity (e.g., UUID/IPv6) and physically associated address (e.g., IP) - Sign(UUID/IPv6, IP).
- the correctness of the mapping (UUID/IPv6, IP) can then be cryptographically verified by any entity using Cert(UUID/IPv6) and Sign(UUID/IPv6, IP).
- the NVI system 400 and/or NVI engine 404 can verify each communication with a certificate lookup and handle each communication event according to a distributed communication policy defined on the logical connections.
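- As a hedged illustration of such a certificate lookup, the following Python sketch assumes RSA keys, the third-party `cryptography` package, and an illustrative canonical encoding of the (UUID/IPv6, IP) pair; none of these choices are prescribed by the disclosure.

```python
from cryptography import x509
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding


def verify_mapping(cert_pem: bytes, uuid_or_ipv6: str, ip: str, signature: bytes) -> bool:
    """Return True if the (UUID/IPv6, IP) mapping is vouched for by the VM's
    certificate.  The signed message layout (ID and IP joined by '|') is an
    assumption made for this sketch; any agreed canonical encoding would do."""
    cert = x509.load_pem_x509_certificate(cert_pem)
    message = f"{uuid_or_ipv6}|{ip}".encode()
    try:
        cert.public_key().verify(signature, message,
                                 padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False
```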
- the NVI system 400 provides a logically defined network 406 de-coupled from any underlying physical resources. Responsive to any network communication event 402 (including for example, VM to VM, VM to external, and/or external to VM communication), the NVI system is configured to abstract the communication event into the logical architecture of the network 406. In one embodiment, the NVI system "plugs" or "unplugs" a virtual cable at respective vNICs of VMs to carry the communication between a source and a destination. The NVI system can control internal network and external network communication according to the logical addresses by "plugging" and/or "unplugging" virtual cables between the logical addresses at respective vNICs of VMs. As the logical addresses for any resources within a tenant network are globally unique, new resources can be readily added to the tenant network and readily incorporated into the logical network topology.
- the NVI system 400 and/or NVI engine 404 can be configured to accept tenant identification of virtual resources to create a tenant network.
- the tenant can specify VMs to include in their network, and in response to the tenant request, the NVI can map the logical addresses of the requested VMs to the physically associated addresses allocated by the respective cloud providers for the resources executing those VMs, thereby defining the tenant network.
- the system can be configured to assign new globally unique identifiers to each resource.
- the connection component 408 can also be configured to accept tenant defined communication policies for the new resources.
- the tenant can define their network using a whitelist of included resources.
- the tenant can access a user interface display provided by system 400 to input identifying information for the tenant resources.
- the tenant can add, remove, and/or re-architect their network as desired. For example, the tenant can access the system 400 to dynamically add resources to their whitelist, remove resources, and/or create communication policies.
- the NVI system 400 can also provide for encryption and decryption services to enable additional security within the tenant's network and/or communications.
- the NVI system and/or NVI engine 404 can be configured to provide for encryption.
- the NVI system 400 can also be configured to provision additional resources responsive to tenant requests.
- the NVI system 400 can dynamically respond to requests for additional resources by creating global addresses for any new resources.
- a tenant can define a list of resources to include in the tenant's network using system 400. For example, upon receipt of the tenant's resource request, the NVI can create resources for the tenant in the form of virtual machines and specify identity information for the virtual machines, which execute as allocated by whichever cloud provider is used.
- the system 400 can be configured to assign globally unique identifiers to each virtual machine identified by the NVI for the tenant and store associations between globally unique identifiers and resource addresses for use in communicating over the resulting NVI network.
- the system can create encryption certificates for a tenant for each VM in the NVI logical network, which is rented by the tenant.
- the NVI can specify encryption certificates for a tenant as part of providing identity information for virtual machines to use in the tenant's network. The NVI system can then provide for encryption and decryption services as discussed in greater detail herein.
- At least some embodiments disclosed herein include apparatus and processes for creating and managing a globally distributed and intelligent NVI or NVI system.
- the NVI is configured to provide a logical network implemented on cloud resources.
- the logical network enables communication between VMs using logically defined unicast channels defined on logical addresses within the logical network.
- Each logical address can be a globally unique identifier that is associated by the NVI with addresses assigned to the cloud resources (e.g., physical addresses or physically associated addresses) by respective cloud datacenters or providers.
- the logical addresses remain unchanged even as physical network resources supporting the logical network change, for example, in response to migration of a VM of the logical network to a new location or a new cloud provider.
- the NVI includes a database or other data storage element that records a logical address for each virtual machine of the logical network.
- the database can also include a mapping between each logical address and a physically associated address for the resource(s) executing the VM.
- a logical network ID (e.g., UUID or IPv6 address) is assigned to a vNIC of a VM and mapped to a physical network address and/or context tag assigned by the cloud provider to the resources executing the VM.
- the NVI can be associated with a database management system (DBMS) that stores and manages the associations between logical identities/addresses of VMs and underlying physical addresses of the resources.
- the NVI is configured to update the mappings between the permanent logical addresses of the VMs and their physically associated addresses as resources assigned to the logical network change.
- Further embodiments include apparatus and processes for provisioning and isolating network resources in cloud environments.
- the network resources can be rented from one or more providers hosting respective cloud datacenters.
- the isolated network can be configured to provide various quality of service ("QoS") guarantees and/or levels of service.
- QoS features can be performed according to software defined networking principles.
- the isolated network can be purely logical, relying on no information of the physical locations of the underlying hardware network devices.
- implementation of purely logical network isolation can enable trans-datacenter implementations and facilitate distributed firewall policies.
- the logical network is configured to pool underlying hardware network devices (e.g., those abstracted by the logical network topology) for network control into a network resource pool.
- Some properties provided by the logical network include, for example: a tenant only sees and on-demand rents resources for its business logic; and the tenant should never care where the underlying hardware resource pool is located or how the underlying hardware operates.
- the system provides a globally distributed and intelligent network virtualization infrastructure ("NVI").
- the hardware basis of the NVI can consist of globally distributed and connected physical computer servers which can communicate with one another using any conventional computer networking technology.
- the software basis of the NVI consists of hypervisors (i.e., virtual machine managers) and database management systems (DBMS) which can execute on the hardware basis of the NVI.
- the NVI can include the following properties: first, any two hypervisors of a cloud provider, or of different cloud providers, in the NVI can be configured to communicate with one another from their respective physical locations. If necessary, the system can use dedicated cable connection technologies or well-known virtual private network (VPN) technology to connect any two or more hypervisors to form a globally connected NVI. Second, the system and/or virtualization infrastructure knows of any communication event which is initiated by a virtual machine (VM) more directly and earlier than a switch does when the latter sees a network packet.
- the latter event (detection at a switch) is only observed as a result of the NVI sending the packet from a vNIC of the VM to the switch.
- the prior event (e.g., detection at initiation) is a property of the NVI managing the VM's operation, for example at a vNIC of the VM, which can include identifying communication by the NVI at initiation of a communication event (e.g., prior to transmission, at receipt, etc.).
- the NVI can control and manage communications for globally distributed VMs via its intelligently connected network of globally distributed hypervisors and DBMS.
- these properties of the NVI enable the NVI to construct a purely logical network for globally distributed VMs.
- control functions for the logical network of globally distributed VMs, which define the communications semantics of the logical network (i.e., govern how VMs in the logical network communicate), are implemented in, and executed by, software components which work with the hypervisors and DBMS of the NVI to cause the functions to take effect at vNICs of VMs; the network dataflow for the logical network of globally distributed VMs passes through the physical networking components which underlie the logical network and connect the globally distributed hypervisors of the NVI. It is realized that the separation of network control functions in software (e.g., operating at vNICs of VMs) from network dataflow through the physical networking components allows definition of the logical network without physical network attributes. In some implementations, the logical network definition can be completely de-coupled from the underlying physical network.
- the separation of network control functions on vNICs of VMs from network dataflow through the underlying physical network of the NVI results in communications semantics of the logical network of globally distributed VMs that can be completely software defined; in other words, it results in a logical network of globally distributed VMs that, according to some embodiments, can be a software defined network (SDN), where communications semantics can be provisioned automatically, changed quickly and dynamically, with trans-datacenter distribution, and with a practically unlimited size and scalability for the logical network.
- using software network control functions that take effect directly on vNICs enables construction of a logical network of VMs of global distribution and unlimited size and scalability. It is realized that network control methods/functions, whether in software or hardware, in conventional systems (including, e.g., OpenFlow) take effect in switches, routers and/or other network devices. Thus, it is further realized that construction of a large scale logical network in conventional approaches requires, at best, step-by-step upgrading of switches, routers and/or other network devices, which is impractical for constructing a globally distributed, trans-datacenter, or unlimited scalability network.
- control functions that take effect directly on vNICs of VMs in some embodiments include any one or more of: (i) plug/unplug logically defined unicast cables to implement network isolation policy, (ii) transform IPv6-IPv4 versions of packets, (iii) encrypt/decrypt or apply IPsec based protection on packets, (iv) monitor and/or divert traffic, (v) detect intrusion and/or DDoS attacks, (vi) account fees for traffic volume usage, among other options.
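- The following minimal Python sketch (every name is hypothetical) suggests how such per-vNIC control functions might be composed as a chain applied when a VM emits a packet; it is an illustration of the idea, not the disclosed implementation.

```python
class VNicControlPoint:
    """Hypothetical chain of control functions applied at a VM's vNIC."""

    def __init__(self, isolation_policy, cipher=None, monitor=None, meter=None):
        self.isolation_policy = isolation_policy  # callable(src_id, dst_id) -> bool
        self.cipher = cipher                      # optional callable(payload) -> bytes
        self.monitor = monitor                    # optional callable(packet_info)
        self.meter = meter                        # optional dict of per-VM byte counts

    def on_send(self, src_id, dst_id, payload):
        # (i) isolation: only forward over a plugged unicast cable
        if not self.isolation_policy(src_id, dst_id):
            return None                           # dropped at the vNIC itself
        # (iv) monitoring / diversion hook
        if self.monitor:
            self.monitor({"src": src_id, "dst": dst_id, "size": len(payload)})
        # (vi) accounting for traffic volume usage
        if self.meter is not None:
            self.meter[src_id] = self.meter.get(src_id, 0) + len(payload)
        # (iii) optional encryption before the packet leaves the vNIC
        return self.cipher(payload) if self.cipher else payload
```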
- the system can distribute firewall packet filtering at the locality of each VM (e.g., at the vNIC). Any pair of VMs, or a VM and an external entity, can communicate in "out-in" fashion, provided isolation and firewall policies permit, whether these communicating entities are in the same intranet or in trans-global locations separated by the Internet.
- the region outside the distributed points of VM packet filtering can be configured outside the firewalls of any tenant, exactly like the Internet.
- the OSI layers 1, 2, and 3 of this "Internet within intranet" region are fully under the centralized control and distributed SDN programmability on each server.
- this topology can be used in conjunction with a variety of virtualization systems (in one example under the control node of Openstack) to achieve an Internet within intranet region that is under communication control and SDN programmability.
- With the Internet within intranet topology, the distributed servers become SDN programmable forwarding devices that participate in traffic route dynamicity and bandwidth distribution, and in particular can act as a distributed gateway to enlarge the bandwidth for VM-Internet traffic.
- the new SDN route dynamicity programmability in intranets with the Internet within intranet topology has thus successfully eliminated any chokepoint from the Internet-intranet interface, and in further embodiments, optimally widened routes for intranet patching and Internet traffic.
- by including Internet route redundancy into local intranets, the full potential of SDN can be achieved.
- Every physical IT business processing box (below, IT box) includes a physical network interface card (NIC) which can be plugged to establish a connection between two ends (a wireless NIC has the same property of "being plugged as a cable"), where the other end of the cable is a network control device.
- Any two IT boxes may or may not communicate with one another provided they are under the control of some network control devices in-between them.
- the means of controlling communications between IT boxes occurs by the control devices inspecting and processing some metadata— addresses and possibly more refined contexts called tags— in the head part of network packets: permitting some packets to pass through, or dropping others, according to the properties of the metadata in the packets against some pre-specified communications policy.
- This control through physically associated addressing (e.g., MAC addresses, IP addresses, and/or context tags) has a number of drawbacks.
- Openstack operation includes sending network packets of a VM to a centralized network device (of course, in Openstack the network device may be a software module in a hypervisor, called hypervisor switch or hypervisor bridge) via a network cable (which may also be software implemented in a hypervisor), for passing through or dropping packets at centralized control points.
- This conventional network control technology of processing packets metadata at centralized control points has various limitations in spite of virtualization.
- the centralized packet processing method, which processes network control in the meta-data or head part, and forwards dataflow in the main-body part, of a network packet at a centralized point (called a chokepoint), cannot make efficient use of the distributed computing model of the VI; centralized packet processing points can form a performance bottleneck at large scale.
- the packet metadata inspection method examines a fraction of metadata (an address or a context tag) in the head of a whole network packet, and then may drop the whole packet (resulting in wasted network traffic).
- the metadata (addresses and tags) used in the head of a network packet are still physically associated (i.e., related to) the physical location of hardware of respective virtualized resources.
- Physical associations are not an issue for on-site and peak-volume provisioned physical resources (IT as an asset model), where changes in topology are infrequent.
- the user or tenant may require an on-demand elastic way to rent IT resources, and may also rent from geographically different and scattered locations of distributed cloud datacenters (e.g., to increase availability and/or reliability). Cloud providers may also require the ability to move assigned resources to maximize utilization and/or minimize maintenance. These requirements in cloud computing translate to needs for resource provisioning with the following properties: automatic, fast and dynamic changing, trans-datacenter scalable, and for IT resource being network per se, a tenant's network should support a tenant-definable arbitrary topology, which can also have a trans-datacenter distribution.
- the network inside a cloud datacenter upon which various QoS can be performed in SDN should be a purely logical one.
- the properties provided by various embodiments can include: logical addressing containing no information on the physical locations of the underlying physical network devices; and enabling pooling of hardware devices for network control into a network resource pool.
- Various implementations can also take advantage of conventional approaches to allow hypervisors of respective cloud providers to connect with each other (e.g., VPN connections) underneath the logical topology.
- various embodiments can leverage management of VMs by the hypervisors and/or proxy entities to capture and process communication events. Such control allows communication events to be captured more directly and earlier than, for example, switch based control (which must first receive the communication prior to action).
- various embodiments can control and manage communications for globally distributed VMs without need of inspecting and processing any metadata in network packets.
- Conventional firewall implementations focus on a "chokepoint" model: an organization first wires its owned, physically close-by IT boxes to some hardware network devices to form the organization's internal local area network (LAN); the organization then designates a "chokepoint” at a unique point where the LAN and wide area network (WAN) meet, and deploys the organization's internal and external communications policy only at that point to form the organization's network edge.
- Conventional firewall technologies can use network packet metadata such as IP / MAC addresses to define LAN and configure firewall. Due to the seldom changing nature of network configurations, it suffices for the organization to hire specialized network personnel to configure the network and firewall, and suffices for them to use command-line-interface (CLI) configuration methods.
- firewalls are based on the VLAN technology.
- the physical hardware switches are "virtualized” into software counterparts in hypervisors, which are either called “hypervisor learning bridges", or “virtual switches” ("hypervisor switch" is a more meaningful name).
- These are software modules in hypervisors connecting vNICs of VMs to the hardware NIC on the server. They are referred to below interchangeably as a hypervisor switch.
- Like a hardware switch, a hypervisor switch participates in LAN construction by learning and processing network packet metadata such as addresses. Also like the hardware counterpart, a hypervisor switch can refine a LAN by adding more contexts to the packet metadata. The additional contexts which can be added to the packet metadata part by a switch (virtual or real) are called tags. The hypervisor switch can add different tags to the network packets of IT boxes which are rented by different tenants. These different tenants' tags divide a LAN into isolated virtual LANs, isolating tenants' networks in a multi-tenancy datacenter.
- VLAN technology is for network cable virtualization: packets sharing some part of a network cable are labeled differently and thereby can be sent to different destinations, just like passengers in an airport sharing some common corridors before boarding at different gates, according to the labels (tags) on their boarding passes.
- Various embodiments provide a network virtualization infrastructure leveraging direct communication control over VMs to establish a fully logical network architecture.
- Direct control over each VM, for example through a hypervisor and/or proxy entity, is completely distributed and located where the VM with its vNICs is currently executing.
- An advantage of the direct network control function on a vNIC is that the communication control can avoid complex processing of network packet metadata, which is tightly coupled with the physical locations of the network control devices, and instead use purely logical addresses of vNICs.
- the resultant logical network eliminates any location specific attributes of the underlying physical network. SDN work over the NVI can be implemented simply and as straightforward high-level language programming.
- each VM can be viewed by the NVI to have an infinite number of vNIC cards, where each can be plugged as a logically defined unicast cable for exclusive use with a single given communications partner.
- Because a hypervisor in the NVI is responsible for passing network packets from/to the vNIC of a VM right at the spot of the VM, the NVI can be configured for direct control, either by controlling communication directly with the hypervisor or by using a proxy entity coupled with the hypervisor.
- In contrast, a switch, even a software coded hypervisor switch, can only control a VM's communications via packet metadata received from a multicast network cable.
- Fig. 5 illustrates an example implementation of network virtualization infrastructure (NVI) technology according to one embodiment.
- the NVI system 500 and corresponding virtualization infrastructure (VI) which can be globally distributed over a physical network can be configured to plug/unplug a logically defined unicast network cable 502 for any given two globally distributed VMs (e.g., 501 and 503 hosted, for example, at different cloud datacenters 504 and 506).
- the respective VMs (e.g., 501 and 503) are managed throughout their lifecycle by respective virtual machine managers (VMMs) 508 and 510.
- From the moment of a VM's (e.g., 501 and 503) inception and operation, the VM obtains a temporary IP address assigned by a respective hypervisor (e.g., VMM 508 and 510).
- the temporary IP address can be stored and maintained in respective databases in the NVI (e.g., 512 and 514).
- the temporary IP addresses can change; however, as the addresses change or resources are added and/or removed, the current temporary IP addresses are maintained in the respective databases.
- the databases (e.g., 512 and 514) are also configured to store globally identifiable identities in association with each virtual machine's assigned address.
- the NVI can be configured to plug/unplug logically defined unicast cable between any two given network entities using unchanging unique IDs (so long as one of communicating entities is a VM within the NVI).
- the NVI constructs the logical network by defining unicast cables to plug/unplug avoiding processing of packet metadata.
- centrally positioned switches (software or hardware) can still be employed for connecting the underlying physical network, but they play no part in the logical network control.
- the network control for VMs can therefore be globally distributed given that the VM ID is globally identifiable, and operates without location specific packet metadata.
- the respective hypervisors and associated DBMS in the NVI have fixed locations, i.e., they typically do not move and/or change their physical locations.
- globally distributed hypervisors and DBMS can use the conventional network technologies to establish connections underlying the logical network.
- Such conventional network technologies for constructing the underlying architecture used by the NVI can be hardware based, for which command-line-interface (CLI) based configuration methods are sufficient and very suitable.
- Universally Unique Identities (UUIDs) or IPv6 addresses can be assigned to provide globally unique addresses. Once assigned, the relationship between the UUID and the physically associated address for any virtual machine can be stored for later access (e.g., in response to a communication event). In other embodiments, other globally identifiable unique and unchanging identifiers can be used in place of UUID.
- the UUID of a VM will not change throughout the VM's complete lifecycle.
- each virtual cable between two VMs is then defined on the respective global identifiers.
- the resulting logical network constructed by plugged unicast cables over the NVI is also completely defined by the UUIDs of the plugged VMs.
- the NVI is configured to plug/unplug the unicast cables in real-time according to a given set of network control policies in the DBMS.
- a tenant 516 can securely access (e.g., via SSL 518) the control hub of the logical network to define a firewall policy for each communication cable in the logical network.
- any logical network defined on the never-changing UUIDs of the VMs can have network QoS (including, for example, scalability) addressed by programming purely in software.
- such logical networks are easy to change, in both topology and scale, by SDN methods, even across datacenters.
- the tenant can implement a desired firewall using, for example, SDN programming.
- the tenant can construct a firewall with a trans-datacenter distribution.
- Shown in Fig. 6 is an example of a distributed firewall 600.
- Virtual resources of the tenant A 602, 604, and 606 span a number of data centers (e.g., 608, 610, and 612) connected over a communication network (e.g., the Internet 620).
- Each datacenter provides virtual resources to other tenants (e.g., at 614, 616, and 618), which are isolated from the tenant A's network.
- the tenant A is able to define a communication policy that enables communication on a cable by cable basis. As communication events occur, the communication policy is checked to ensure that each communication event is permitted. For example, a cable can be plugged in real-time in response to VM 602 attempting to communicate with VM 604. For example, the communication policy defined by the tenant A can permit all communication between VM 602 and VM 604. Thus, a communication initiated at 602 with destination 604 passes the firewall at 622. Upon receipt, the communication policy can be checked again to ensure that a given communication is permitted, in essence passing the firewall at 624. VM 606 can likewise be protected from both internal VM communication and externally involved communication, shown for illustrative purposes at 626.
- Fig. 7 illustrates an example process 700 for defining and/or maintaining a tenant network.
- the process 700 can be executed by an NVI system to enable a tenant to acquire resources and define their own network across rented cloud resources.
- the process 700 begins at 702 with a tenant requesting resources.
- various processes or entities can also request resources to begin process 700 at 702.
- a hypervisor or VMM having available resources can be selected.
- hypervisors can be selected based on pricing criteria, availability, etc.
- the hypervisor creates a VM assigned to the requestor with a globally uniquely identifiable ID (e.g., a UUID or an IPv6 address).
- the global ID can be added to a database for the tenant network.
- Each global id is associated with a temporary physical address (e.g., an IP address available from the NVI) assigned to the VM by its hypervisor.
- the global id and the temporary physical address for the VM are associated and stored at 706.
- a hypervisor creates in a tenant's entry in the NVI DB a new record (UUID/IPv6, IP) for the newly created VM, where IP denotes the current physical network address of the VM which is mapped to the UUID/IPv6.
- the tenant and/or resource requestor can also implement cryptographic services.
- the tenant may wish to provide integrity protection on VM IDs to provide additional protection.
- if crypto protection is enabled (708 YES), optional cryptographic functions include applying public-key cryptography to create a PKI certificate Cert(UUID/IPv6) and a digital signature Sign(UUID/IPv6, IP) for each tenant VM such that the correctness of the mapping (UUID/IPv6, IP) can be cryptographically verified by any entity using Cert(UUID/IPv6) and Sign(UUID/IPv6, IP).
- a cryptographic certificate for the VM ID and signature for the mapping between the ID and the VM's current physical location in IP address are created at 710 and stored, for example, in the tenant database at 712.
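- A minimal sketch of steps 704-712, assuming Python, the third-party `cryptography` package, RSA signatures, and an illustrative record layout (none of which are mandated by the disclosure):

```python
import uuid

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa


def provision_vm(tenant_db: dict, hypervisor_assigned_ip: str, crypto_protect: bool = True):
    """Sketch of steps 704-712: mint a global ID, record the mapping, and
    optionally sign it.  The record layout is an illustrative assumption."""
    global_id = str(uuid.uuid4())                 # permanent logical identity (704)
    entry = {"ip": hypervisor_assigned_ip}        # temporary, provider-assigned (706)
    if crypto_protect:                            # 708 YES
        key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
        message = f"{global_id}|{hypervisor_assigned_ip}".encode()
        entry["sign"] = key.sign(message, padding.PKCS1v15(), hashes.SHA256())  # 710
        entry["public_key"] = key.public_key()    # stands in for Cert(UUID/IPv6)
    tenant_db[global_id] = entry                  # stored in the tenant DB (712)
    return global_id
```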
- Process 700 can continue at 714.
- Responsive to re-allocation of VM resources (including, for example, movement of VM resources), a respective hypervisor, for example a destination hypervisor ("DH"), takes over the tenant's entry in the NVI DB maintenance job for the moved VM.
- the moved VM is assigned a new address consistent with the destination hypervisor's network.
- a new mapping between the VM's global ID and the new hypervisor address is created (let IP' denote the new network address for the VM over DH).
- the DH updates the signature Sign(UUID/IPv6, IP') in the UUID/IPv6 entry to replace the prior and now invalid signature Sign(UUID/IPv6, IP).
- VMs in the tenant network can be managed at 716, by the DH associating a new physical address with the global ID assigned to the VM.
- the new association is stored in a tenant's entry in the NVI DB, defining the tenant network.
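- A corresponding sketch of the update at 714-716, under the same illustrative assumptions as the provisioning sketch above:

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding


def handle_vm_move(tenant_db: dict, global_id: str, new_ip: str, signing_key=None):
    """Sketch of 714-716: the destination hypervisor records the VM's new
    physically associated address under the unchanged global ID and, when
    crypto protection is in use, replaces the now-invalid signature."""
    entry = tenant_db[global_id]
    entry["ip"] = new_ip                          # IP is replaced by IP'
    if signing_key is not None:
        message = f"{global_id}|{new_ip}".encode()
        entry["sign"] = signing_key.sign(message, padding.PKCS1v15(), hashes.SHA256())
    return entry
```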
- a tenant may already have allocated resources through cloud datacenter providers.
- the tenant may access an NVI system to provide identifying information for those already allocated resources.
- the NVI can then assign global IDs of VMs to the physically associated addresses of the resources. As discussed above, the identities and mappings can be cryptographically protected to provide additional security.
- Shown in Fig. 8 is an example PKI certificate that can be employed in various embodiments.
- known security methodologies can be implemented to protect the cryptographic credential of a VM (the private key used for signing Sign(UUID/IPv6, IP)) and to migrate credentials between hypervisors within a tenant network (e.g., at 714 of process 700).
- known "Trusted Computing Group" (TCG) technology is implemented to protect and manage cryptographic credentials.
- a TPM (Trusted Platform Module) can be configured to protect and manage credentials within the NVI system and/or tenant network.
- known protection methodologies can include hardware based implementation, and hence can prevent very strong attacks to the NVI, and for example, can protect against attacks launched by a datacenter system administrator.
- TCG technology also supports credential migration (e.g., at 714).
- the tenant can establish a communication policy within their network.
- the tenant can define algorithms for plugging/unplugging unicast cables defined between VMs in the tenant networks, and unicast cables connecting external address to internal VMs for the tenant network.
- the algorithms can be referred to as communication protocols.
- the tenant can define communication protocols for both senders and recipients of communications.
- Shown in Fig. 9 is an example process flow 900 for execution of a tenant defined communication policy.
- the process 900 illustrates an example flow for a sender defined protocol (i.e., initiated by a VM in the tenant network).
- In the example, the source VM (VM1) has global ID SRC and physically associated address SIP, and is hosted by sending hypervisor SH; the destination VM (VM2) has global ID DST and physically associated address DIP, and is hosted by destination hypervisor DH.
- control components in the NVI system can include the respective hypervisors of respective cloud providers where the hypervisors are specially configured to perform at least some of the functions for generating, maintaining, and/or managing communication in an NVI network.
- each hypervisor can be coupled with one or more proxy entities configured to work with respective hypervisors to provide the functions for generating, maintaining, and/or managing communication in the tenant network.
- the processes for executing communication policies (e.g., 900 and 1000) are discussed in some examples with reference to hypervisors performing operations, however, one should appreciate that the operations discussed with respect to the hypervisors can be performed by a control component, the hypervisors, and/or respective hypervisors and respective proxy entities.
- the process 900 begins at 902 with SH intercepting a network packet generated by VM1, wherein the network packet includes physically associated addressing (to DIP).
- the hypervisor SH and/or the hypervisor in conjunction with a proxy entity can be configured to capture communication events at 902.
- the communication event includes a communication initiated at VM1 and addressed to VM2.
- the logical and/or physically associated addresses for each resource within the tenant's network can be retrieved, for example, by SH.
- a tenant database entry defines the tenant's network based on globally unique identifiers for each tenant resource (e.g., VMs) and their respective physically associated addresses (e.g., addresses assigned by respective cloud providers to each VM).
- the tenant database entry also includes certificates and signatures for confirming mappings between global ID and physical addresses for each VM.
- the tenant database can be accessed to look up the logical addressing for VM2 based on the physically associated address (e.g. DIP) in the communication event. Additionally, the validity of the mapping can also be confirmed at 906 using Cert(DST), Sign(DST, DIP), for example, as stored in the tenant database. If the mapping is not found and/or the mapping is not validated against the digital certificate, the communication event is terminated (e.g., the virtual communication cable VM1 is attempting to use is unplugged by the SH). Once a mapping is found and/or validated at 906, a system communication policy is checked at 908. In some embodiments, the communication policy can be defined by the tenant as part of creation of their network. In some implementations, the NVI system can provide default communication policies. Additionally, tenants can update and/or modify existing communication policies as desired. Communication policies may be stored in the tenant's entry in the NVI database or may be referenced from other data locations within the tenant network.
- Each communication policy can be defined based on the global IDs assigned to communication partners. If for example, the communication policy specifies (SRC, DST: unplug), the communication policy prohibits communication between SRC and DST, 910 NO. At 912, the communication event is terminated. If for example, the communication policy permits communication between SRC and DST (SRC, DST: plug), SH can plug the unicast virtual cable between SRC and DST permitting communication at 914.
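- A hedged sketch of the sender-side checks at 904-914, with an in-memory tenant database and a policy table keyed on (SRC, DST); names and the default-deny choice are illustrative assumptions:

```python
def sender_check(tenant_db: dict, policy: dict, src_id: str, dip: str) -> str:
    """Sketch of 904-914 on the sending hypervisor SH: look up the destination's
    global ID from its physically associated address, confirm the mapping, and
    consult the tenant policy keyed on (SRC, DST).  Returns 'plug' or 'unplug'."""
    # 904/906: reverse lookup DIP -> DST and confirm the mapping exists
    dst_id = next((gid for gid, e in tenant_db.items() if e["ip"] == dip), None)
    if dst_id is None:
        return "unplug"                # unknown or unvalidated destination (906 NO)
    # 906 (optional): verify Sign(DST, DIP) against Cert(DST), as sketched earlier
    # 908-914: apply the tenant-defined policy on the logical pair
    return policy.get((src_id, dst_id), "unplug")   # default-deny is an assumption


# Example policy record: allow SRC -> DST, everything else stays unplugged.
# policy = {("SRC", "DST"): "plug"}
```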
- the process 900 can also include additional but optional cryptographic steps. For example, once SH plugs the cable between SRC and DST, SH can initiate a cryptographic protocol (e.g., IPsec) with DH to provide cryptographic protection of application layer data in the network packets.
- process 900 can be executed on all types of communication for the tenant network.
- communication events can include VM to external address communication.
- DST is a conventional network identity rather than a global ID assigned to the logical network (e.g., an IP address).
- the communication policy defined for such communication can be defined based on a network edge policy for VM1.
- the tenant can define a network edge policy for the entire network implemented through execution of, for example, process 900.
- the tenant can define network edge policies for each VM in the tenant network.
- Fig. 10 illustrates another example execution of a communication policy within a tenant network.
- a communication event is captured.
- the communication event is the receipt of a message of a communication from VM1.
- the communication event can be captured by a control component in the NVI.
- the communication event is captured by DH.
- the logical addressing information for the communication can be retrieved.
- the tenant's entry in the NVI database can be used to perform a lookup for a logical address for the source VM based on SIP within a communication packet of the communication event at 1004.
- validity of the communication can be determined based on whether the mapping between the source VM and destination VM exist in the tenant's entry in the NVI DB, for example, as accessible by DH.
- validity at 1006 can also be determined using certificates for logical mappings.
- DH can retrieve a digital certificate and signature for VM1 (e.g., Cert(SRC), Sign(SRC,SIP)).
- the certificate and signature can be used to verify the communication at 1006. If the mapping does not exist in the tenant database or the certificate/signature is not valid 1006 NO, then the communication event is terminated at 1008.
- DH can operate according to any defined communication policy at 1010. If the communication policy prohibits communication between SRC and DST (e.g., the tenant database can include a policy record "SRC, DST : unplug") 1012 NO, then the communication event is terminated at 1008. If the communication is allowed 1012 YES (e.g., the tenant database can include a record "SRC, DST: plug"), then DH permits communication between VM1 and VM2 at 1014. In some examples, once DH determines a communication event is valid and allowed, DH can be configured to use a virtual cable between the source and destination VMs to carry the communication.
- DH can execute cryptographic protocols (e.g., IPsec) to create and/or respond to communications of SH to provide cryptographic protection of application layer data in the network packets.
- cryptographic protocols e.g., IPsec
- process 1000 can be executed on all types of communication for the tenant network.
- communication events can include external to VM address communication.
- SRC is a conventional network identity rather than a global ID assigned to the logical network (e.g., an IP address).
- the communication policy defined for such communication can be defined based on a network edge policy for the receiving VM.
- the tenant can define a network edge policy for the entire network implemented through execution of, for example, process 1000.
- the tenant can define network edge policies for each VM in the tenant network.
- the tenant can define communication protocols for both senders and recipients, and firewall rules can be executed at each end of a communication over the logical tenant network.
- Shown in Fig. 11 is a screen shot of an example user interface 1100.
- the user interface (“UI") 1100 is configured to accept tenant definition of network topology.
- the user interface is configured to enable a tenant to add virtual resources (e.g., VMs) to security groups (e.g., at 1110 and 1130).
- the UI 1100 can be configured to allow the tenant to name such security groups.
- Responsive to adding a VM to a security group, the system creates and plugs virtual cables between the members of the security group. For example, VMs windows1 (1112), mailserver (1114), webserver (1116), and windows3 (1118) are members of the HR-Group.
- Each member has a unicast cable defining a connection between each other member of the group.
- there is a respective connection for windows1 as a source to mailserver, webserver, and windows3 defined within HR-Group 1110.
- virtual cables exist for R&D-Group 1130.
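- Since a security group amounts to a full mesh of plugged unicast cables among its members, a minimal sketch (reusing the hypothetical LogicalNetwork model from the earlier sketch) could be:

```python
from itertools import combinations


def plug_security_group(network, members):
    """Plug a unicast cable between every pair of members of a security group,
    mirroring the HR-Group example (windows1, mailserver, webserver, windows3).
    'network' is the hypothetical LogicalNetwork model sketched earlier."""
    for a, b in combinations(members, 2):
        network.plug(a, b)


# plug_security_group(net, [windows1_id, mailserver_id, webserver_id, windows3_id])
```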
- User interface 1100 can also be configured to provide other management functions.
- a tenant can access UI 1100 to define communication policies, including network edge policies at 1140, manage security groups by selecting 1142, password control at 1144, manage VMs at 1146 (including for example, adding VMs to the tenant network, requesting new VMs, etc.), and manage users at 1148.
- the communications protocol suite operates on communication inputs or addressing that is logical. For example, execution of communication in processes 900 and 1000 can occur using global IDs in the tenant network. Thus communication does not require any network location information about the underlying physical network. All physically associated addresses (e.g., IP addresses) of the tenant's rental VMs (the tenant's internal nodes) are temporary IP addresses assigned by the respective providers. These temporary IP addresses are maintained in a tenant database, which can be updated as the VMs move, replicate, terminate, etc. (e.g., through execution of process 700). Accordingly, these temporary IP addresses play no role in the definition of the tenant's distributed logical network and firewall/communication policy in the cloud. The temporary IP addresses are best envisioned as pooled network resources.
- the pooled network resources are employed as commodities for use in the logical network, and may be consumed and even discarded depending on the tenant's needs.
- the tenant's logical network is completely and thoroughly de-coupled from the underlying physical network.
- software developed network functions can be executed to provide network QoS in a simplified "if-then-else" style of high-level language programming. This simplification allows a variety of QoS guarantees to be implemented in the tenants' logical network.
- Network QoS features which can be implemented as SDN programming at vNICs include: traffic diversion, load-balancing, intrusion detection, and DDoS scrubbing, among other options.
- an SDN task that the NVI system can implement can include automatic network traffic diversion.
- Various embodiments of NVI systems/tenant logical networks distribute network traffic to the finest possible granularity: at the very spot of each VM making up the tenant network. If one uses such VMs to host web services, the network traffic generated by web services requests can be measured and monitored to the highest precision at each VM.
- the system can be configured to execute automatic replication of the VM and balance requests between the pair of VMs (e.g., the NVI system can request a new resource, replicate the responding VM, and create a diversion policy to the new VM).
- the system can automatically replicate an overburdened or over-threshold VM, and new network requests can be diverted to the newly created replica.
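- A hedged sketch of such an automatic replication and diversion policy; `replicate` and `divert` stand in for NVI operations that are not specified here:

```python
def autoscale_check(vm_id, requests_per_sec, threshold, replicate, divert):
    """Sketch of the automatic diversion policy: when per-VM load measured at
    the vNIC crosses a threshold, replicate the VM and divert new requests to
    the replica.  'replicate' and 'divert' stand in for unspecified NVI calls."""
    if requests_per_sec > threshold:
        replica_id = replicate(vm_id)       # request a new resource, clone the VM
        divert(vm_id, replica_id)           # create a diversion policy to the replica
        return replica_id
    return None
```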
- any one or more of the following advantages can be realized in various embodiments over conventional centralized deployment: (i) on-VM-spot unplug avoids sending/dropping packets to the central control points, reducing network bandwidth consumption; (ii) fine granularity distribution makes the execution of security policy less vulnerable to DDoS-like attacks; (iii) upon detection of DDoS-like attacks on a VM, moving the VM being attacked, or even simply changing its temporary IP address, can resolve the attack.
- the resulting logical network provides an intelligent layer-2 network of practically unlimited size (e.g., at the 2^128 level if the logical network is defined over IPv6 addresses) on cloud based resources. It is further realized that various implementations of the logical network manage communication without broadcast, as every transmission is delivered over a unicast cable between source and destination (e.g., between two VMs in the network). Thus, the NVI system and/or logical network solve a long felt but unsolved need for a large layer-2 network.
- The NVI-based new overlay technology in this disclosure is the world's first overlay technology which uses the global management and global mapping intelligence of the infrastructure formed by hypervisors and DBs to achieve, for the first time, a practically unlimited size, globally distributed logical network, without need of protocol negotiation among component networks.
- the NVI-based overlay technology enables simple web-service controllable and manageable inter-operability for constructing a practically unlimited large scale and on-demand elastic cloud network.
- Table 1 below provides network traffic measurements in three instances of comparisons, which are measured by the known tool NETPERF. The numbers shown in the table are in megabits (10^6 bits) per second.
- the packet drop must take place behind the consolidated switch, and that means the firewall edge point to drop packets can be quite distant from the message sending VM, which translates to a large amount of wasted network traffic in the system.
- Various embodiments also provide: virtual machines that each have PKI certificates; thus, not only can the ID of the VM get crypto quality protection, but also the VM's IP packets and IO storage blocks can be encrypted by the VMM.
- the crypto credential of a VM's certificate is protected and managed by the VMM, and the crypto mechanisms which manage VM credentials are in turn protected by a TPM of the physical server.
- Further embodiments provide for vNIC of a VM that never need to change its identity (i.e., the global address in the logical network does not change, even when the VM changes location, and even when the location change is in trans-datacenter). This results in network QoS programming at a vNIC that can avoid VM location changing complexities.
- a global ID used in the tenant network can include an IPv6 address.
- a cloud datacenter (1) runs a plural number of network virtualization infrastructure (NVI) hypervisors, and each NVI hypervisor hosts a plural number of virtual machines (VMs) which are rented by one or more tenants.
- Each NVI hypervisor also runs a mechanism for public-key based crypto key management and for the related crypto credential protection. This key-management and credential-protection mechanism cannot be affected or influenced by any entity in any non-prescribed manner.
- The key-management and credential-protection mechanism can be implemented using known approaches (e.g., in the US Patent Application 13/601,053, which claims priority to Provisional Application number 61530543), which application is incorporated herein by reference in its entirety. Additional known security approaches include the Trusted Computing Group technology and the TXT technology of Intel. Thus, the protection on the crypto-credential management system can be implemented even against a potentially rogue system administrator of the NVI.
- the NVI uses the key-management and credential-protection mechanism for the VMs it hosts: each VM has an individually and distinctly managed public key, and also has the related crypto credential so protected.
- the NVI executes known cryptographic algorithms to protect the network traffic and the storage input/output data for a VM: Whenever the VM initiates a network sending event or a storage output event, the NVI operates an encryption service for the VM, and whenever the VM responds to a network receiving event or a storage input event, the NVI operates a decryption service for the VM.
- the network encryption service in (3) uses the public key of the communication peer of the VM; and the storage output encryption service in (3) uses the public key of the VM; both decryption services in (3) use the protected crypto credential that the NVI-hypervisor protects for the VM.
- In the event that the communication peer of the VM in (4) does not possess a public key, the communication between the VM and the peer should route via a proxy entity (PE), which is a designated server in the datacenter.
- the PE manages a public key and protects the related crypto credentials for each tenant of the datacenter.
- the network encryption service in (3) shall use a public key of the tenant which has rented the VM.
- Upon receipt of an encrypted communication packet from an NVI-hypervisor for a VM, the PE will provide a decryption service, and further forward the decrypted packet to the communication peer which does not possess a public key.
- Upon receipt of an unencrypted communication packet from the no-public-key communication peer to the VM, the PE will provide an encryption service using the VM's public key.
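- A minimal sketch of the PE behavior in (4) and (5); the encryption and decryption callables stand in for the tenant-key and VM-key operations and are illustrative assumptions:

```python
class ProxyEntity:
    """Sketch of the PE in (4) and (5): decrypt VM traffic bound for peers that
    have no public key, and encrypt inbound plaintext under the VM's public key.
    The callables stand in for the tenant-key and VM-key operations."""

    def __init__(self, tenant_decrypt, vm_encrypt, forward):
        self.tenant_decrypt = tenant_decrypt   # uses the tenant's protected credential
        self.vm_encrypt = vm_encrypt           # uses the destination VM's public key
        self.forward = forward                 # plain send toward the external peer

    def from_hypervisor(self, encrypted_packet, peer_addr):
        # VM -> peer without a public key: decrypt, then forward in the clear
        self.forward(peer_addr, self.tenant_decrypt(encrypted_packet))

    def from_peer(self, plaintext_packet, vm_id, send_to_hypervisor):
        # peer -> VM: encrypt under the VM's public key before delivery
        send_to_hypervisor(vm_id, self.vm_encrypt(vm_id, plaintext_packet))
```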
- the NVI-hypervisor and PE provide the encryption and decryption services described above according to a whitelist composed by the tenant.
- the whitelist contains (i) public-key certificates of the VMs which are rented by the tenant, and (ii) the ids of some communication peers which are designated by the tenant.
- the NVI-hypervisor and PE will perform these services only for the VMs and the communication peers listed in the whitelist.
- a tenant uses the well-known web-service CRUD (create, retrieve, update, or delete) to compose the whitelist in (6).
- a tenant may also compose the whitelist using any other appropriate interface or method.
- Elements in the whitelist are the public-key certificates of the VMs which are rented by the tenant, and the ids of the communication peers which are designated by the tenant.
- the tenant uses this typical web-service CRUD manner to compose its whitelist.
- NVI-hypervisor and PE use the tenant-composed whitelist to provide the communication and encryption/decryption services for the tenant's VMs and designated peers.
- the tenant thereby instructs the datacenter, in a self-servicing manner, to define, maintain and manage a virtual private network (VPN) for the VMs it rents and for the communication peers it designates for its rental VMs.
- the PE can periodically create a symmetric conference key for T, and securely distribute the conference key to each NVI-hypervisor which hosts the VM(s) of T.
- the cryptographically protected secure communications among the VMs, and those between the VMs and the PE in (3), (5) and (6), can use symmetric-key cryptography under the conference key.
- each NVI-hypervisor secures it using its crypto-credential protection mechanism in (1) and (2).
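- A hedged sketch of the periodic conference-key rotation, using Fernet from the `cryptography` package purely as a stand-in symmetric scheme; the secure distribution channel is abstracted as a callable:

```python
from cryptography.fernet import Fernet


def rotate_conference_key(hypervisors_hosting_tenant_vms, secure_send):
    """Sketch of the PE periodically creating a symmetric conference key for a
    tenant T and distributing it to each NVI-hypervisor hosting T's VMs.
    'secure_send' stands in for the credential-protected channel of (1)-(2)."""
    conference_key = Fernet.generate_key()
    for hypervisor in hypervisors_hosting_tenant_vms:
        secure_send(hypervisor, conference_key)   # each hypervisor then protects it
    return conference_key
```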
- Shown in Fig. 12 is an example embodiment of a tenant programmable trusted network 1200.
- Fig. 12 illustrates both cases of the tenant T's private communication channels (e.g. 1202-1218) among its rental VMs (e.g., 1220 - 1230) and the PE (e.g., 1232). These communication channels can be secured either by the public keys of the VMs involved, or by a group's conference key.
- Shown in this example are 20 VMs rented by a tenant 1250.
- the tenant 1250 can define their trusted network using the known CRUD service 1252.
- the tenant uses the CRUD service to define a whitelist 1254.
- the whitelist can include a listing for identifying information on each VM in the tenant network.
- the whitelist can also include public-key certificates of the VMs in the tenant network, and the ids of the communication peers which are designated by the tenant.
- the PE 1232 further provides functions of NAT (Network Address Translation) and firewall, as shown.
- the PE can be the external communications facing interface 1234 for the virtual network.
- a VM in the trusted tenant network can only communicate or input/output data necessarily and exclusively via the communication and storage services which are provided by its underlying NVI-hypervisor. Thus, there can be no other channel or route for a VM to bypass its underlying NVI-hypervisor to attempt communication and/or data input/output with any entity outside the VM.
- the NVI-hypervisor therefore cannot be bypassed, and always performs the encryption/decryption services for the VMs according to the instructions provided by the tenant.
- the non-bypassable property can be implemented via known approaches (e.g., by using VMware's ESX, Citrix's Xen, Microsoft's Hyper- V, Oracle's VirtualBox, and open source community's KVM, etc, for the underlying NVI technology).
- Various embodiments achieve a tenant defined, maintained, and managed virtual private network in a cloud datacenter.
- the tenant defines their network by providing information on their rental VMs.
- the tenant can maintain and manage the whitelist for its rental VMs through the system.
- the tenant network is implemented such that network definition and maintenance can be done in a self-servicing and on-demand manner.
- The resulting tenant-defined network may be referred to as a Virtual Private Cloud ("VPC").
- a large number of small tenants can now securely share network resources of the hosting cloud, e.g., share a large VLAN of the hosting cloud which is low-cost configured by the datacenter, which in some examples can be executed and/or managed using SDN technology. Accordingly, a small tenant does not need to maintain any high-quality onsite IT infrastructure. The tenant now uses purely on-demand IT.
- the VPC provisioning methods discussed are also globally provisioned, i.e., a tenant is not confined to renting IT resources from one datacenter. Therefore, the various aspects and embodiments enable breaking the traditional vendor-locked-in style of cloud computing and provide truly open-vendor global utilities.
- a proxy entity 1402 is configured to operate in conjunction with a hypervisor 1404 of a respective cloud according to any QoS definitions for the logical network (e.g., as stored in database 1406).
- the three dots indicate that respective proxy entities and hypervisors can be located throughout the logical network to handle mapping and control of communication.
- proxy entities and/or hypervisors can manage mapping between logical addresses of vNICs (1410-1416) and underlying physical resources managed by the hypervisor (e.g., physical NIC 1418), mapping between logical addresses of VMs, and execute communication control at vNICs of the front-end VMs (e.g., 1410-1416).
- the mapping enables construction of an arbitrarily large, arbitrary topology, trans-datacenter layer-2 logical network, i.e., it achieves the de-coupling from physical addressing.
- control enables programmatic communication control, or in other words achieves an SDN.
- the proxy entity 1402 is a specialized virtual machine (e.g. at respective cloud providers or respective hypervisors) configured for back end servicing.
- a proxy entity manages internal or external communication according to communication policy defined on logical addresses of the tenants' isolated network (e.g., according to network edge policy).
- the proxy entity executes the programming controls on vNICs of an arbitrary number of front end VMs (e.g., 1408).
- the proxy entity can be configured to manage logical mappings in the network, and to update respective mappings when the hypervisor assigns new physical resources to front end VMs (e.g., 1408).
- aspects and functions described herein may be implemented as specialized hardware or software components executing in one or more computer systems or cloud based computer resources.
- There are many examples of computer systems that are currently in use. These examples include, among others, network appliances, personal computers, workstations, mainframes, networked clients, servers, media servers, application servers, database servers and web servers.
- Other examples of computer systems may include mobile computing devices, such as cellular phones and personal digital assistants, and network equipment, such as load balancers, routers and switches.
- aspects may be located on a single computer system, may be distributed among a plurality of computer systems connected to one or more communications networks, or may be virtualized over any number of computer systems.
- aspects and functions may be distributed among one or more computer systems configured to provide a service to one or more client computers, or to perform an overall task as part of a distributed system or a cloud based system. Additionally, aspects may be performed on a client-server or multi-tier system that includes components distributed among one or more server systems that perform various functions, and may be distributed through a plurality of cloud providers and cloud resources. Consequently, examples are not limited to executing on any particular system or group of systems. Further, aspects and functions may be implemented in software, hardware or firmware, or any combination thereof. Thus, aspects and functions may be implemented within methods, acts, systems, system elements and components using a variety of hardware and software configurations, and examples are not limited to any particular distributed architecture, network, or communication protocol.
- the distributed computer system 1300 includes one or more computer systems that exchange information. More specifically, the distributed computer system 1300 includes computer systems 1302, 1304 and 1306. As shown, the computer systems 1302, 1304 and 1306 are interconnected by, and may exchange data through, a communication network 1308. For example, components of an NVI-hypervisor system (e.g., an NVI engine) can be implemented on 1302, which can communicate with other systems (1304-1306) that operate together to provide the functions and operations discussed herein.
- system 1302 can provide functions for requesting and managing cloud resources to define a tenant network executing on a plurality of cloud providers.
- Systems 1304 and 1306 can include systems and/or virtual machines made available through the plurality of cloud providers.
- systems 1304 and 1306 can represent the cloud provider networks, including respective hypervisors, proxy entities, and/or virtual machines the cloud providers assign to the tenant.
- all systems 1302-1306 can represent cloud resources accessible to an end user via a communication network (e.g., the Internet) and the functions discussed herein can be executed on any one or more of systems 1302-1306.
- system 1302 can be used by an end user or tenant to access resources of an NVI-hypervisor system (for example, implemented on at least computer systems 1304-1306).
- the tenant may access the NVI system using network 1308.
- the network 1308 may include any communication network through which computer systems may exchange data.
- the computer systems 1302, 1304 and 1306 and the network 1308 may use various methods, protocols and standards, including, among others, Fibre Channel, Token Ring, Ethernet, Wireless Ethernet, Bluetooth, IP, IPV6, TCP/IP, UDP, DTN, HTTP, FTP, SNMP, SMS, MMS, SS7, JSON, SOAP, CORBA, REST and Web Services.
- the computer systems 1302, 1304 and 1306 may transmit data via the network 1308 using a variety of security measures including, for example, TLS, SSL or VPN. While the distributed computer system 1300 illustrates three networked computer systems, the distributed computer system 1300 is not so limited and may include any number of computer systems and computing devices, networked using any medium and communication protocol.
- the computer system 1302 includes a processor 1310, a memory 1312, a bus 1314, an interface 1316 and data storage 1318.
- the processor 1310 performs a series of instructions that result in manipulated data.
- the processor 1310 may be any type of processor, multiprocessor or controller.
- Some exemplary processors include commercially available processors such as an Intel Xeon, Itanium, Core, Celeron, or Pentium processor, an AMD Opteron processor, a Sun UltraSPARC or IBM Power5+ processor and an IBM mainframe chip.
- the processor 1310 is connected to other system components, including one or more memory devices 1312, by the bus 1314.
- the memory 1312 stores programs and data during operation of the computer system 1302.
- the memory 1312 may be a relatively high performance, volatile, random access memory such as a dynamic random access memory (DRAM) or static memory (SRAM).
- the memory 1312 may include any device for storing data, such as a disk drive or other non-volatile storage device.
- Various examples may organize the memory 1312 into particularized and, in some cases, unique structures to perform the functions disclosed herein. These data structures may be sized and organized to store values for particular data and types of data.
- each tenant can be associated with a data structure for managing information on a respective tenant network.
- the data structure can include information on virtual machines assigned to the tenant network, certificates for network members, globally unique identifiers assigned to the network members, etc.
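- As a purely illustrative sketch, one possible shape of such a per-tenant data structure is shown below in Python; the field names are assumptions made for illustration, not a schema required by this disclosure.

```python
# Illustrative sketch only: a per-tenant record holding the VMs assigned to the
# tenant network, certificates for network members, and the globally unique
# identifiers assigned to those members.

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class TenantNetworkRecord:
    tenant_id: str
    vms: Dict[str, dict] = field(default_factory=dict)            # guid -> VM info
    certificates: Dict[str, bytes] = field(default_factory=dict)  # guid -> certificate
    policies: List[dict] = field(default_factory=list)            # communication policy entries

    def add_member(self, guid: str, vm_info: dict, cert: bytes) -> None:
        """Register a VM and its certificate under a globally unique identifier."""
        self.vms[guid] = vm_info
        self.certificates[guid] = cert

# Example usage:
record = TenantNetworkRecord(tenant_id="tenant-42")
record.add_member("guid-0001", {"vnic": "10.1.0.7", "host": "server-3"}, b"cert-bytes")
```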
- the bus 1314 may include one or more physical busses, for example, busses between components that are integrated within the same machine, but may include any communication coupling between system elements including specialized or standard computing bus technologies such as IDE, SCSI, PCI and
- the bus 1314 enables communications, such as data and instructions, to be exchanged between system components of the computer system 1302.
- the computer system 1302 also includes one or more interface devices 1316 such as input devices, output devices and combination input/output devices.
- Interface devices may receive input or provide output. More particularly, output devices may render information for external presentation. Input devices may accept information from external sources. Examples of interface devices include keyboards, mouse devices, trackballs, microphones, touch screens, printing devices, display screens, speakers, network interface cards, etc. Interface devices allow the computer system 1302 to exchange information and to communicate with external entities, such as users and other systems.
- the data storage 1318 includes a computer readable and writeable nonvolatile, or non-transitory, data storage medium in which instructions are stored that define a program or other object that is executed by the processor 1310.
- the data storage 1318 also may include information that is recorded, on or in, the medium, and that is processed by the processor 1310 during execution of the program. More specifically, the information may be stored in one or more data structures specifically configured to conserve storage space or increase data exchange performance.
- the instructions stored in the data storage may be persistently stored as encoded signals, and the instructions may cause the processor 1310 to perform any of the functions described herein.
- the medium may be, for example, optical disk, magnetic disk or flash memory, among other options.
- the processor 1310 or some other controller causes data to be read from the nonvolatile recording medium into another memory, such as the memory 1312, that allows for faster access to the information by the processor 1310 than does the storage medium included in the data storage 1318.
- the memory may be located in the data storage 1318 or in the memory 1312, however, the processor 1310 manipulates the data within the memory, and then copies the data to the storage medium associated with the data storage 1318 after processing is completed.
- a variety of components may manage data movement between the storage medium and other memory elements and examples are not limited to particular data management components. Further, examples are not limited to a particular memory system or data storage system.
- Although the computer system 1302 is shown by way of example as one type of computer system upon which various aspects and functions may be practiced, aspects and functions are not limited to being implemented on the computer system 1302 as shown in FIG. 13. Various aspects and functions may be practiced on one or more computers having different architectures or components than that shown in FIG. 13.
- the computer system 1302 may include specially programmed, special-purpose hardware, such as an application-specific integrated circuit (ASIC) tailored to perform a particular operation disclosed herein.
- another example may perform the same function using a grid of several general-purpose computing devices (e.g., running MAC OS System X with Motorola PowerPC processors) and several specialized computing devices running proprietary hardware and operating systems.
- the computer system 1302 may be a computer system or virtual machine, which may include an operating system that manages at least a portion of the hardware elements included in the computer system 1302.
- a processor or controller, such as the processor 1310, executes an operating system. Examples of a particular operating system that may be executed include a Windows-based operating system such as Windows NT, Windows 2000 (Windows ME), Windows XP, Windows Vista, Windows 7 or 8 operating systems, available from the Microsoft Corporation, a MAC OS System X operating system available from Apple Computer, one of many Linux-based operating system distributions, for example, the Enterprise Linux operating system available from Red Hat Inc., a Solaris operating system available from Sun Microsystems, or a UNIX operating system available from various sources. Many other operating systems may be used, and examples are not limited to any particular operating system.
- the processor 1310 and operating system together define a computer platform for which application programs in high-level programming languages are written.
- These component applications may be executable, intermediate, bytecode or interpreted code which communicates over a communication network, for example, the Internet, using a communication protocol, for example, TCP/IP.
- aspects may be implemented using an object-oriented programming language, such as .Net, SmallTalk, Java, C++, Ada, C# (C-Sharp), Objective C, or Javascript.
- Other object-oriented programming languages may also be used.
- functional, scripting, or logical programming languages may be used.
- various aspects and functions may be implemented in a non-programmed environment, for example, documents created in HTML, XML or other format that, when viewed in a window of a browser program, can render aspects of a graphical-user interface or perform other functions.
- various examples may be implemented as programmed or non-programmed elements, or any combination thereof.
- a web page may be implemented using HTML while a data object called from within the web page may be written in C++.
- the examples are not limited to a specific programming language and any suitable programming language could be used.
- the functional components disclosed herein may include a wide variety of elements, e.g., specialized hardware, virtualized hardware, executable code, data structures or data objects, that are configured to perform the functions described herein.
- the components disclosed herein may read parameters that affect the functions performed by the components. These parameters may be physically stored in any form of suitable memory including volatile memory (such as RAM) or nonvolatile memory (such as a magnetic hard drive). In addition, the parameters may be logically stored in a proprietary data structure (such as a database or file defined by a user mode application) or in a commonly shared data structure (such as an application registry that is defined by an operating system). In addition, some examples provide for both system and user interfaces that allow external entities to modify the parameters and thereby configure the behavior of the components.
Landscapes
- Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Data Exchanges In Wide-Area Networks (AREA)
Abstract
Various implementations discussed resolve at least some of the issues associated with conventional patching of network resources, including, for example, patching a multitude of local intranets. In one example, a novel intranet topology, an Internet within intranet topology, enables fully distributed and/or dynamic traffic routes by SDN programming. The intranet topology can be managed by a communication controller that controls a software defined networking ("SDN") component. According to one embodiment, the SDN component executes on a plurality of servers within the intranet and coordinates the communication between virtual machines hosted on the plurality of servers and entities outside the intranet network, under the control of the communication controller. The SDN defines an Internet within the intranet region where no firewall or network isolation control is executed.
Description
METHOD AND APPARATUS FOR EXTENDING THE INTERNET INTO INTRANETS TO ACHIEVE SCALABLE CLOUD NETWORK
BACKGROUND
The need of large networks, and existing technologies
An intranet is a privately owned computer network that uses Internet Protocol technology to connect privately owned and controlled computing resources. This term is used in contrast to the Internet, a public network bridging intranets. In fact, the really meaningful characteristic difference between an intranet and the Internet is scale: an intranet always has a limited scale which is bound to the economical limit of its private owner, while the Internet has the scale which is unbound to any economical limit of any organization in the world.
Cloud computing needs very large network
Cloud computing— in this disclosure for constructing a network of unbound scale, cloud computing is considered as service; the so-called "private cloud", with a small, non-scalable size and nothing to do with service, is an absurd notion not to be regarded as cloud— should provide a practically unbound scale of computing resources which for example, include network resources. A very large network is optional to provide disaster avoidance, elastic bursting, or even distributing split user data to non-cooperative authorities spanning continent geographical regions for protecting data against abuse of power by corrupted authorities. However, any cloud computing service provider is going to have a limited economic power to own computing resources of a bound scale. Some conventional approaches have sought to solve the size problem by patching intranets, however such approaches encounter scalability problems.
Conventional Network Patching Technologies and Scalability
Network resource for cloud Infrastructure as a Service (IaaS) can include the OSI reference model Layer 2 (Link Layer) in order to provide without loss of generality all upper layer services. Layer 2 describes physically— copper, optical fiber,
radio, etc.— wired connections. As physical wiring always has a given (small) scale, e.g., confined to the size of a building hosting privately owned equipment, scalability for Layer 2 has always been via patching. Thanks to the fact that any two physically separate intranets have the Internet in-between them, the patching algorithm (protocol) encapsulates (wrap) Layer-2 (MAC) packets of these intranets inside an OSI Layer-3 (Internet Layer) packet and exchanges these wrapped MAC packets between the two intranets. Thus, the physically separate intranets can see and operate with the MAC packets of the other side, and both operate as enlarged networks with the other side as its extension. There are many such MAC encapsulation technologies, also known as large layer 2 networks: VPN, GRE, VXLAN, NVGRE, STT, MPLS, LISP to provide examples. It would appear that MAC-encapsulation of layer 2 in layer 3 technology is available to patch physically separate intranet into a network of unbound scale.
However, the MAC-encapsulation of layer 2 in layer 3 technology has very poor scalability.
How Layer-2 and Layer-3 work in inter-play
Layer 2 is physical. Communication at layer 2 is carried out through data packets, called MAC packets, exchanged between physical network interface cards (NICs). Each NIC has a unique MAC address (id), and a MAC packet is in the form of the following triple: (Destination-MAC-id, Source-MAC-id, Payload).
A MAC id is similar to a person's fingerprint or some other uniquely identifiable physical attribute. MAC ids are unique, however they are not convenient to use as a person's everyday id, e.g., for daily communication purposes. Typically, MAC ids are not easy to use. Moreover, applications need to move around in a bigger environment than a physical network. Hence, communication models include Layer 3: a logical network in which an entity is identified by a unique IP address (id), and communication takes the form of exchanging IP packets in the following triple format: (Destination-IP-id, Source-IP-id, Payload).
An IP id can be constructed to be unique, and can even be changed if necessary. IP ids are convenient to use for communication purposes.
Summary of conventional protocols: a MAC id is physical, unique, fixed to a NIC, and cannot be changed, and the NIC is wired for sending or receiving data; applications usually do not use MAC ids to communicate because they are inconvenient to use. An IP id is logical, movable, changeable, and convenient for applications to use.
Plug-and-play standards for MAC/IP interplay
DHCP broadcast standard
When a computer with a NIC is wired to a network cable, the computer needs to be associated with an IP address in order to perform operations. The standard is that the computer initiates a DHCP (Dynamic Host Configuration Protocol) request; this is to broadcast an IP id request message to the network environment with its MAC id in the broadcast. Why broadcast? The computer has no idea to whom in the network to send the message. The network system has one or more DHCP server(s). The first DHCP server which receives the IP id request will arrange for an available IP id and broadcast it back to the requestor together with the MAC id. Why broadcast the response? The DHCP server also has no idea where the computer with the NIC of this MAC id is in the network. There may be further broadcast or unicast message exchanges between these two entities before they eventually agree that the given MAC-id/IP-id pair is now associated with the requesting computer in the system. Once the DHCP server is there in the network (minimum configuration by a network admin), there is no need for a machine (e.g., a laptop) user to do any configuration in her/his machine: obtaining an IP id is as simple as plug-and-play.
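The exchange can be pictured with the following toy Python model; it is not a wire-accurate DHCP implementation, and the names (ToyDhcpServer, handle_broadcast) are illustrative assumptions only.

```python
# Toy model of the plug-and-play idea: a client broadcasts its MAC id, and the
# first DHCP server to hear the broadcast binds a free IP id to that MAC id
# and (in the real standard) broadcasts the association back.

class ToyDhcpServer:
    def __init__(self, ip_pool):
        self.free_ips = list(ip_pool)
        self.leases = {}  # MAC id -> IP id

    def handle_broadcast(self, mac_id):
        """Answer an IP id request by associating a free IP id with the MAC id."""
        if mac_id not in self.leases:
            self.leases[mac_id] = self.free_ips.pop(0)
        # The response is broadcast in the real protocol because the server
        # does not know where the requesting NIC sits in the network.
        return {"mac": mac_id, "ip": self.leases[mac_id]}

server = ToyDhcpServer(["192.168.0.10", "192.168.0.11"])
print(server.handle_broadcast("aa:bb:cc:dd:ee:01"))  # plug-and-play: no client configuration
```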
ARP broadcast standard
When an application in a machine (having a NIC) initiates a communication with a destination (application) machine (also having a NIC), the communication should conveniently use IP ids. However, these machines (in fact, their operating systems, OSes) can only communicate in a physical way by exchanging data packets between NICs, i.e., the OSes can only communicate by exchanging MAC packets.
Then how can the source OS know where the destination IP-addressed machine is? The standard is: the source OS will initiate an ARP (Address Resolution Protocol) message by broadcasting: "Whoever has this destination IP, give me your MAC id!" This time it is easier to understand why the source OS broadcasts: no server's help is needed, no configuration is needed; the protocol is purely plug-and-play. All OSes in the network will hear the ARP broadcast, but only the one with the wanted IP address will respond with its MAC id. Having received the response, the source OS can now send the data in MAC packets through the physical wire linking the two NICs.
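A toy Python illustration of this resolution step follows; it is a conceptual sketch only (the hosts table and function names are assumptions), not an implementation of the ARP wire format.

```python
# Toy illustration: every OS on the segment "hears" the broadcast, but only the
# owner of the wanted IP id answers with its MAC id; the source OS can then
# build the (Destination-MAC-id, Source-MAC-id, Payload) triple.

hosts = {  # IP id -> MAC id, known only to the owning host in a real network
    "10.0.0.1": "aa:bb:cc:00:00:01",
    "10.0.0.2": "aa:bb:cc:00:00:02",
}

def arp_broadcast(wanted_ip):
    """'Whoever has this destination IP, give me your MAC id!'"""
    return hosts.get(wanted_ip)  # None models silence: nobody owns the IP

def make_mac_packet(dst_ip, src_mac, payload):
    """Resolve the destination MAC id and build the MAC packet triple."""
    dst_mac = arp_broadcast(dst_ip)
    if dst_mac is None:
        raise LookupError("no host answered the ARP broadcast")
    return (dst_mac, src_mac, payload)

print(make_mac_packet("10.0.0.2", "aa:bb:cc:00:00:01", b"hello"))
```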
SUMMARY
It is realized that conventional network architectures and conventional methodologies for patching various network segments are not sufficient to provide a scalable cloud network. For example, if a cloud tenant has its machines in
trans-continental distributions, then the conventional broadcast messages (e.g., ARP and DHCP) between those machines must also be sent in MAC-encapsulation manner over to the other continent, frequently and unavoidably. Without broadcasting for MAC/IP association (DHCP), or for IP/MAC resolution (ARP), in trans-intranet range, the two machines will not be able to communicate, since, e.g., without knowing the destination MAC id, a MAC packet, which must contain the destination MAC id, cannot be encapsulated in a trans-intranet IP packet.
It is further realized, that the need of trans-intranet broadcasting must be addressed in order to achieve a scalable cloud network architecture that can be built on patched intranets. Conventional approaches use MAC-in-UDP encapsulation:
unlike a TCP link, which needs handshake establishment, a UDP message can simply be sent and received without requiring the sender and receiver to engage in any agreement confirming a good connection; thus, UDP is well suited for broadcasting. Even with UDP, existing large layer 2 technologies suffer from scalability problems in trans-intranet patching networks. Broadcasting at trans-intranet scale requires very
high bandwidth Internet connections in order to obtain reasonable response time. High bandwidth Internet connections are very costly. There are not any trans-datacenter clouds in successful commercial operation currently, in large part due to the costs of the high bandwidth that would be required by conventional approaches.
Further complicating the problems with conventional patching approaches is the need to provide network isolation. Thus far, practical technology for multi-tenancy network isolation is achieved with VLAN-like technology. Similar to the problems discussed above with conventional broadcasting, these intranet patching technologies must also maintain the isolation information for each trans-datacenter tenant, thus requiring patching of layer 2 by encapsulating the MAC packet metadata of each layer 2 network over to the other layer 2 network, e.g., a tenant's VLAN identities in intranets must be maintained trans-intranet by large-layer-2 patching standards such as VXLAN. In current practice, the large-layer-2 patching points in the patching-participating intranets are a small number of specially dedicated ones which often form a bandwidth bottleneck for trans-intranet traffic. The need for trans-intranet maintenance of a tenant's VLAN ids also limits the patching to working only for intranets which are operated by the same operator.
Even further complicating the matter is the firewall for a trans-intranet tenant. Suppose a tenant's firewall is distributed in trans-intranet manner so that VM-Internet communication packets are filtered locally and in distributed fashion at each intranet. Upon VM motion, the routing forward table must be updated in all intranets in which the tenant has VMs, which is essentially trying to reach agreement over a connectionless UDP channel (all "good" large layer 2 patching protocols, e.g., STT and VXLAN, are UDP based in order to serve, without loss of generality, any applications, e.g., video and broadcast). This translates to the infamous Byzantine Generals Problem, a well-known hard problem in communications when consensus is attempted over non-reliable channels. "Good" patching technologies are probably reserved exclusively for a few resourceful players who can afford to lay optical fibers to connect their trans-global intranets. For a great number of grassroots players (e.g., Openstack), a tenant "deserves" to have its gateway chokepointed to a centralized (e.g., Neutron) network node in one of the intranets. Thus, in a trans-datacenter layer 2 which is patched from two or more separate layer 2 networks, VM-Internet communications suffer from a chokepointed firewall having a bandwidth bottleneck.
It is further realized that another problem in the large layer 2 patching standards such as the VPN, MPLS, STT, VXLAN, and NVGRE protocols can be described as follows. The size of the Internet is unbound. Any segment of network can join the Internet by interconnecting itself with the Internet provided the network segment is constructed, and the interconnection is implemented, in compliance with the OSI seven-layer reference model. Network interconnection, i.e., scaling up the size of a network, if using the OSI reference model, follows the formulation that a network packet of one layer is the payload data for a packet of the layer immediately below.
Interconnection at layers 2 and 3 in this formulation is stateless and connection-less, i.e., the interconnection needs no prior protocol negotiation. For example, a web client accessing a search engine web server does not need any prior protocol negotiation. However, network interconnection using the conventional "large layer 2" patching technologies such as VPN, MPLS, STT, VXLAN, and NVGRE protocols do not use the OSI layered formulation. These protocols encapsulate a layer 2 packet as the payload data for a layer 3 packet, as opposed to the OSI interconnection formulation. Thus, network patching using these "large layer 2" protocols cannot be done in a stateless streamlined fashion; prior protocol negotiation is necessary or else the interconnection peers misinterpret each other, and the interconnection will fail.
Stemming from this requirement is the realization that a network patching technology in need of prior protocol negotiations cannot patch a network of an unbound size.
Stateful interconnections for large-scale patching simply cannot be stable, and have a prohibitive cost for maintaining the stateful connections. These so-called "large layer 2" protocol technologies were developed as pre-cloud-era technology, and while sufficient for patching privately owned small networks, the technology is unsuitable
for patching cloud networks to provide an unbound scalability.
To illustrate the difficulties faced by conventional approaches, consider a VM of a trans-intranet tenant communicating with an external entity, where the VM is in intranet A and the firewall gateway is in intranet B. The traffic generated by the VM must first queue at the patching point in A to travel from A to B, and then queue at the firewall gateway in B to travel outside to the Internet. Returning packets traverse the same route. Waiting in the two queues is necessary as these points are shared by all VMs in the tenant's network, e.g., VLAN and VXLAN. These queuing points in conventional large layer 2 patching technologies are called chokepoints, and are traffic bottlenecks.
Unfortunately, conventional cloud network technologies resort to using a chokepointed tenant firewall model, or have chokepointed patching points. Some conventional solutions include Openstack, Juniper, and VMware Nicira. These models fail to address the issues associated with chokepointed firewall or patching points. It is further realized that a cloud network patching technology which can remove chokepoints from the Internet-intranet interface is highly needed.
According to aspects and embodiments, various implementations discussed resolve at least some of the issues associated with conventional patching of network resources, specifically intranet patching. According to one aspect, provided is a novel intranet topology. The intranet topology is managed by a communication controller that controls a software defined networking ("SDN") component. According to one embodiment, the SDN component executes on a plurality of servers within the intranet and coordinates the communication between virtual machines hosted on the plurality of servers and entities outside the intranet network, under the control of the communication controller.
The plurality of servers in the intranet can each be configured with at least two network interface cards ("NICs"). A first external NIC can be connected to an external communication network (e.g., the Internet) and an internal second NIC can be connected to the other ones of the plurality of servers within the intranet. In some
embodiments, each internal NIC can be connected to a switch and through the switch to the other servers.
In some embodiments, the communication between each VM hosted on the plurality of servers can be dynamically programmed (e.g., by the SDN component operating under the control of the communication component) to route through a respective external NIC or over external NICs of the plurality of servers connected by their respective internal NICs. Within the intranet, the distributed servers having the external connected NICs can perform a network gateway role for the hosted VMs. The gateway role can include interfacing with entities outside the local network (e.g., entities connected via the Internet) on an external side of the network, and the VMs on the internal side of the network. In further embodiments, the SDN component can be configured to implement network isolation and firewall policies at the locality and deployment of the VM.
According to another aspect, the SDN component can also define a region of the intranet (e.g., an "Internet within intranet") where the network isolation and the firewall policies are not executed. In other words, the SDN component does not execute any control in terms of tenant network isolation and firewall policy within the Internet within intranet region of the intranet. In the absence of tenant network isolation and firewall control, the network region is configured to provide through network routes between any VM on the distributed servers and any of the external NICs on respective distributed servers, for example, under the control of the communication controller. Under this topology, the SDN component executes full programmatic control on the packet routes between any VM and any of the external NICs.
According to one aspect, a local network system is provided. The local network system comprises at least one communication controller and a plurality of distributed servers, wherein the at least one communication controller controls the distributed servers and manages a SDN component deployed and executed on each of the distributed servers; the distributed servers hosting virtual machines (VMs) and
managing communication for the VMs; wherein at least two of the distributed servers have at least two network interface cards (NICs): one NIC-ext, and one NIC-int; the NIC-ext is wired to an external network; the NIC-int is wired to a switch; wherein the distributed servers having the NIC-ext and NIC-int execute a network gateway role for the VMs, the gateway role including interfacing with entities outside the local network, and the VMs on an inner side of the network; the communication between each VM on a distributed server and the entities outside the local network can interface using the NIC-ext on the distributed server, or using the other NIC-exts on the other servers via the NIC-ints connected by the switch; and the SDN component executing on each server coordinates the communication between the VMs and entities outside the local network under the control of the at least one communication controller.
According to one aspect, a network communication system is provided. The network communication system comprises at least one communication controller configured to manage communication within a logical network executing on resources of a plurality of distributed servers; the plurality of distributed servers hosting virtual machines (VMs) and handling the communication for the VMs; wherein at least two of the plurality of distributed servers are connected within an intranet segment, wherein the at least two of the distributed servers within the intranet segment include at least two respective network interface cards (NICs): at least one NIC-ext connected to an external network, and at least one NIC-int connected to a switch, wherein each server of the at least two of the plurality of distributed servers within the intranet segment execute communication gateway functions for interfacing with external entities on an external side of the network; and wherein the at least one
communication controller dynamically programs communication pathways for the communication of the logical network to occur over any one or more of the at least two of the distributed servers within the intranet segment over respective NIC-exts by managing an SDN component executing on the at least two of the distributed servers.
According to one aspect, a local network system is provided. The local
network system comprises at least one communication controller coordinating the execution of a SDN component; a plurality of distributed servers; wherein the at least one communication controller manages communication by the plurality of distributed servers and coordinates execution of the SDN component deployed and executing on the plurality of distributed servers; wherein the plurality of distributed servers host virtual machines (VMs) and manage communication for the VMs; wherein at least two of the plurality of servers include at least two respective network interface cards (NICs): at least one NIC-ext connected to entities outside the local network, and at least one NIC-int connected to a switch, wherein the communication between a VM on a server and the entities outside the local network interfaces on the external NIC on the distributed server or interfaces on NIC-exts on other distributed servers connected to the server by the switch and respective NIC-ints; wherein the SDN component is configured to coordinate the communication between the VMs and entities outside the local network under the management of the at least one communication controller.
The following embodiments are used in conjunction with the preceding network systems (e.g., local network and network communication systems). In one
embodiment, the SDN component is configured to execute network isolation and firewall policies for the VMs of a tenant at the locality of each VM software existence and deployment. In one embodiment, the at least one communication controller manages the SDN execution of the network isolation and the firewall policies. In one embodiment, the SDN component is configured to control pass or drop of network packets which are output from and input to the VM. In one embodiment, the SDN component is configured to intercept and examine the network packets to be received by and have been communicated from the VM to manage the pass or the drop of the network packets. In one embodiment, the SDN component further comprises defining a network region, an "Internet within the intranet," in the local network, other than and away from VMs existence and deployment localities where the SDN component executes tenants' network isolation and firewall policies, in which the SDN component does not execute any control in terms of tenant network isolation and
firewall policy. In one embodiment, within the Internet within intranet region, the SDN component is configured to provide through network routes between any VM and any of the NIC-exts on respective distributed servers, and wherein the SDN component, under management of the at least one communication controller, executes control on the dynamicity of the packet forwarding routes between VMs and any respective NIC-exts.
In one embodiment, at least one other local network system, including a respective Internet within intranet region is controlled by the at least one
communication controller and SDN component, wherein the local network and the at least one other local network are patch connected to one another through any pair of NIC-exts of the two local networks to form an enlarged trans-local-network system. In one embodiment, additional other local network systems having a respective Internet within intranet region are patch connected to join a trans-local-network system to form a further enlarged trans-local-network system including elements having the Internet within intranet topology.
In one embodiment, trans-local-network communication traffic between a first and second VM in any two patch-participating local networks is controlled by the SDN component running on the distributed servers in the respective local networks, and wherein the SDN component is programmed to generate dynamic and distributed routes between the first VM and respective external NICs in a first respective local network and the second VM and respective external NICs in a second respective local network. In one embodiment, the external communication traffic between a VM in a local network system and an external entity is programmed by the SDN component running on the distributed servers in the local network system to take dynamic routes between the VM and the distributed NIC-exts in the local network system, and to have the network packets delivered by the Internet linking an external NIC of the local network system and the external entity over the Internet.
According to some embodiments, the preceding systems can include or be further described by one or more of the following elements: wherein the SDN
component is configured to execute network isolation and firewall policies for VMs of one or more tenants local to each VM; wherein the SDN component is configured to execute the network isolation and firewall policies where network packets are output from the VM or communicated to the VM; wherein the SDN component executes the network isolation and firewall policies for VMs of the one or more tenants at localities where network packets are output from the VM prior to them reaching any other software or hardware component in the local network, or input to the VM without enrouting any other software or hardware component in the local network; wherein the at least one communication controller manages the SDN execution of the network isolation and the firewall policies; wherein the SDN component is configured to control pass or drop of network packets which are output from and input to the VM; wherein the SDN component is configured to intercept and examine the network packets for receipt by and outbound from the VM to manage the pass or the drop of the network packets; wherein the SDN component further defines a network region, an "Internet within the intranet," in the local network, other than and away from the localities where the SDN component executes VMs' network isolation and firewall policies, in which the SDN component does not execute any control in terms of tenant network isolation and firewall policy; wherein within the Internet within intranet region, the SDN component is configured to provide through network routes between any VM and any of the NIC-exts on respective distributed servers, and wherein the SDN component under management of the at least one communication controller executes control on the programming of the packet forwarding routes between VMs and any respective NIC-exts, wherein the programming of the packet forwarding routes includes one or more of dynamicity and distribution of the packet forwarding routes; wherein at least one other local network system, including a respective Internet within intranet region is controlled by the at least one communication controller and SDN component, wherein the local network and the at least one other local network are patch connected to one another through any pair of NIC-exts of the two local networks and at least one network to form an
enlarged trans-local-network system including elements having the Internet within intranet topology; wherein additional other local network systems having a respective Internet within intranet region are patch connected to join a trans-local-network system to form a further enlarged trans-local-network system including elements having the Internet within intranet topology; wherein trans-local-network
communication traffic between a first and second VM in any two patch-participating local networks is controlled by the SDN component running on the distributed servers in the respective local networks, and wherein the SDN component is programmed to generate programmed routes, the programmed routes including one or more of dynamic or distributed routes, between the first VM and respective external NICs in a first respective local network over at least one intermediate connection to the second VM and respective external NICs in a second respective local network; wherein the external communication traffic between a VM in a local network system and an external entity is programmed by the SDN component running on the distributed servers in the local network system to take programmed routes between the VM and the distributed NIC-exts in the local network system, and to have the network packets delivered by the Internet linking external NICs of the local network system and the external entity over the Internet; wherein the programmed routes include one or more of dynamic or distributed routes.
According to one aspect, a computer implemented method is provided for managing communications of virtual machines ("VMs") hosted on an intranet segment. The method comprises managing, by at least one communication controller, network communication for at least one VM hosted on the intranet segment; programming, by the at least one communication controller, a route for the network communication, wherein the act of programming includes selecting for an external network
communication from a first route for the network communication, wherein the first route traverses a NIC-ext of a distributed server within the intranet segment hosting the VM, and at least a second route, wherein the at least a second route traverses a NIC-int of the distributed server to a second NIC-int of a second distributed server
having a second NIC-ext.
In one embodiment, the method further comprises an act of patching a plurality of intranet segments, wherein each of the plurality of intranet segments includes at least two distributed servers, each having at least one NIC-int and at least one NIC-ext. In one embodiment, the method further comprises programming, by the at least one communication controller, communication routes between the plurality of intranet segments based on selection of or distribution between external connections to respective at least one NIC-exts within each intranet segment. In one embodiment, the method further comprises managing network configuration messages from VMs by the at least one communication controller such that broadcast configuration messages are captured at respective intranet segments.
In one embodiment, the method further comprises an act of managing a plurality of VMs to provide distributed network isolation and firewall policies at the locality of each VM software existence and deployment. In one embodiment, programming, by the at least one communication controller, includes managing SDN execution of network isolation and the firewall policies. In one embodiment, the method further comprises defining, by the at least one controller, a network region in the intranet segment, other than and away from VMs existence and deployment localities, in which the at least one controller does not execute any control in terms of tenant network isolation and firewall policy. In one embodiment programming, by the at least one controller, includes providing through network routes between any VM hosted on the intranet segment and any of the NIC-exts on respective distributed servers, and controlling dynamicity of packet forwarding routes between VMs and any respective NIC-exts.
Still other aspects, embodiments and advantages of these exemplary aspects and embodiments, are discussed in detail below. Moreover, it is to be understood that both the foregoing information and the following detailed description are merely illustrative examples of various aspects and embodiments, and are intended to provide an overview or framework for understanding the nature and character of the claimed
aspects and embodiments. Any embodiment disclosed herein may be combined with any other embodiment. References to "an embodiment," "an example," "some embodiments," "some examples," "an alternate embodiment," "various
embodiments," "one embodiment," "at least one embodiment," "this and other embodiments" or the like are not necessarily mutually exclusive and are intended to indicate that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment. The appearances of such terms herein are not necessarily all referring to the same embodiment.
BRIEF DESCRIPTION OF DRAWINGS
Various aspects of at least one embodiment are discussed below with reference to the accompanying figures, which are not intended to be drawn to scale. The figures are included to provide an illustration and a further understanding of the various aspects and embodiments, and are incorporated in and constitute a part of this specification, but are not intended as a definition of the limits of any particular embodiment. The drawings, together with the remainder of the specification, serve to explain principles and operations of the described and claimed aspects and
embodiments. In the figures, each identical or nearly identical component that is illustrated in various figures is represented by a like numeral. For purposes of clarity, not every component may be labeled in every figure. In the figures:
FIG. 1 is a block diagram of a conventional network architecture including for example, a gateway chokepoint;
FIG. 2 is a block diagram of a proposed intranet topology, according to various embodiments;
FIG. 3 is a block diagram of an intra-inter-net interfacing topology, according to various embodiments;
FIG. 4 is a block diagram of an example NVI system, according to one embodiment;
FIG. 5 is a block diagram of an example NVI system, according to one
embodiment;
FIG. 6 is a block diagram of an example distributed firewall, according to one embodiment;
FIG. 7 is an example process for defining and/or maintaining a tenant network, according to one embodiment;
FIG. 8 is an example certification employed in various embodiments;
FIG. 9 is an example process for execution of a tenant defined communication policy, according to one embodiment;
FIG. 10 is an example process for execution of a tenant defined
communication policy, according to one embodiment;
FIG. 11 is an example user interface, according to one embodiment;
FIG. 12 is a block diagram of an example tenant programmable trusted network, according to one embodiment;
FIG. 13 is a block diagram of a general purpose computer system on which various aspects and embodiments may be practiced;
FIG. 14 is a block diagram of an example logical network, according to one embodiment; and
FIG. 15 is a process flow for programming network communication, according to one embodiment.
DETAILED DESCRIPTION
At least some embodiments disclosed herein include apparatus and processes for an Internet within intranet topology. The Internet within intranet topology enables SDN route programming. For example, under the Internet within intranet topology, SDN route programming can be executed for trans-datacenter virtual clouds, virtual machine to Internet routes, and further can enable scalable patching of intranets.
According to some embodiments, the Internet within intranet topology includes a plurality of distributed servers hosting VMs. The plurality of distributed servers can perform a network gateway role, and include an external NIC having a
connection to the Internet and an internal NIC connected to other ones of the distributed servers, for example, through a switch. The distributed servers can each operate as programmable forwarding devices for VMs. The configuration enables fully SDN controlled intranet networks that can fully leverage redundant Internet connections. These fully SDN controlled intranet networks can be patched into large scale cloud networks (including for example trans-datacenter cloud networks).
According to another aspect, by implementing software for controlling and managing VMs at respective virtual NICs ("vNICs") at least some of the problems associated with conventional broadcast communication methodologies and network patching can be avoided.
Examples of the methods and systems discussed herein are not limited in application to the details of construction and the arrangement of components set forth in the following description or illustrated in the accompanying drawings. The methods and systems are capable of implementation in other embodiments and of being practiced or of being carried out in various ways. Examples of specific
implementations are provided herein for illustrative purposes only and are not intended to be limiting. In particular, acts, components, elements and features discussed in connection with any one or more examples are not intended to be excluded from a similar role in any other examples.
Also, the phraseology and terminology used herein is for the purpose of description and should not be regarded as limiting. Any references to examples, embodiments, components, elements or acts of the systems and methods herein referred to in the singular may also embrace embodiments including a plurality, and any references in plural to any embodiment, component, element or act herein may also embrace embodiments including only a singularity. References in the singular or plural form are not intended to limit the presently disclosed systems or methods, their components, acts, or elements. The use herein of "including," "comprising,"
"having," "containing," "involving," and variations thereof is meant to encompass the items listed thereafter and equivalents thereof as well as additional items.
References to "or" may be construed as inclusive so that any terms described using "or" may indicate any of a single, more than one, and all of the described terms.
According to some embodiments, isolation of a tenant's network of virtual machines can be executed by NVI (Network Virtualization Infrastructure) software and each VM hosted by a server in the tenant network can be controlled in a distributed manner at the point of each virtual NIC of the VM (discussed in greater detail below). With the inside/outside isolation of a tenant network controlled in a distributed fashion, the underlying servers can be open to communicate without restriction. For example, the underlying servers can operate like the Internet (e.g., open and accessible) but under the SDN programming control.
According to various embodiments, SDN control and/or software alone is insufficient to provide fully distributed routing. According to one aspect, it is realized that SDN can do little without route redundancy. Shown in Fig. 1 is a conventional network architecture 100. Even with SDN programming implemented, the Internet traffic from the plurality of servers (e.g., 102-108 each having their respective NICs 110-116) cannot be fully SDN distributed. In the architecture shown, each server is connected to at least one switch (e.g., 118), which is connected to a gateway node 120. In one conventional implementation, the gateway node 120 can be a "neutron node" from the commercially available Openstack cloud software. The gateway node 120 connects to Internet 122 via an external NIC 124 and routes the traffic to the servers via an internal NIC 126. However, based on the intranet to Internet topology shown, the intranet to Internet traffic cannot be SDN distributed. The gateway node 120 forms a chokepoint through which all intranet traffic must pass.
According to one embodiment, an improved Internet-intranet interfacing topology is implemented. Fig. 2 is a block diagram of an example Internet-intranet topology 200 that can be used to support a scalable cloud computing network. For example, a plurality of servers can host a plurality of virtual machines as part of a distributed cloud. The servers (e.g., 202-208) can be configured with at least two NICs. Each server is configured with an internal NIC (e.g., 210-216) which connects
the servers (e.g., 202-208) to each other through at least one switch (e.g., 218).
Additionally, each of the servers (e.g., 202-208) can include an external NIC (e.g., 220-226) each of which provides a connection to the Internet 228 (or other external network). In some examples, each of the connections (e.g., 230-236) can be low bandwidth, low cost, Internet connections (including, for example, subscriber lines). Under the control of SDN software executing on the plurality of servers, route programming can take full advantage of all the available Internet connections (e.g., 230-236), providing, in effect, a high bandwidth low cost connection. The three dots shown in Fig. 2, illustrate the potential to add additional servers (e.g., at 238 with respective connections to switch 218 at 240 and respective Internet connections at 242).
According to various aspects, each server in the Internet-intranet interfacing topology can execute the functions of an SDN programmable gateway. In some embodiments, each server can include SDN components executing on the server to control communication of traffic from and to VMs. For example, the Internet traffic to and from any VM hosted on one or more of the plurality of servers can go via any external NIC of the connected servers. Thus, according to some embodiments, fully distributed routing of network traffic is made available. In addition, the SDN components executing on the servers can dynamically reprogram network routes to avoid bottlenecks, increase throughput, distribute large communication jobs, etc.
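A simplified Python sketch of this kind of route programming is shown below; the server names, the load metric, and the selection rule are assumptions chosen for illustration, not requirements of the embodiments.

```python
# Simplified sketch: an SDN component steers a VM's outbound flow out through
# either the external NIC of the VM's own host or another server's external
# NIC reached over the internal NICs and switch, here by picking the least
# loaded Internet connection.

servers = {
    "server-202": {"nic_ext": "conn-230", "load": 0.9},
    "server-204": {"nic_ext": "conn-232", "load": 0.2},
    "server-206": {"nic_ext": "conn-234", "load": 0.5},
}

def program_route(vm_host):
    """Return (egress server, external NIC, whether the internal switch is traversed)."""
    best = min(servers, key=lambda name: servers[name]["load"])
    via_internal_switch = best != vm_host
    return best, servers[best]["nic_ext"], via_internal_switch

# A VM hosted on server-202 is steered out through server-204's connection,
# aggregating otherwise idle low-bandwidth Internet links.
print(program_route("server-202"))
```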
In some embodiments, the SDN components are managed by a
communication controller executing on one or more of the servers in the Internet-intranet topology. The communication controller can be configured to co-ordinate operation of the SDN components on the respective servers.
Example Approaches For Constructing A Scalable Cloud Network
According to various embodiments, a variety of virtualization infrastructures can be used to provide virtual machines (VMs) to a tenant seeking computing resources. In one embodiment, network virtualization infrastructure ("NVI") software is used to manage a tenant network of VMs (discussed in greater detail below). The NVI system/software can be implemented to provide network isolation processing and/or system components. In some embodiments, the virtualization software (e.g., NVI software) can be configured to provide for control and management of communication at a virtual NIC ("vNIC") of each virtual machine of the tenant.
According to one embodiment, the management at respective vNICs divides the inside and outside of the tenant's network at the vNIC of each virtual machine. For example, the VM which is "north" of the vNIC is inside the tenant network, and the software cable which is plugged into the vNIC on one end and connected to the underlying physical server hosting the VM on the other end is "south." The vNIC likewise divides the tenant network between the VM on the "north" and any external connections of the physical server. Thus, the tenant's network border is distributed to the point of the respective vNICs of each VM of the tenant. From this configuration it is realized that the tenant's logical layer 2 network is patched from a plural number of intranets, each having the minimum size of containing one VM.
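The division can be illustrated with the following conceptual Python sketch of a per-vNIC check; the policy format and function names are assumptions, not a specification of the NVI software.

```python
# Conceptual sketch: because the tenant's network border sits at each vNIC,
# every packet is checked against the tenant's policy exactly where it leaves
# or enters the VM, rather than at a centralized gateway.

def vnic_filter(packet, tenant_policy):
    """Pass or drop a packet at the vNIC according to tenant-defined rules."""
    dst = packet["dst"]
    if dst in tenant_policy["same_tenant_members"]:
        return "pass"   # stays inside the tenant's isolated logical network
    if dst in tenant_policy["allowed_external"]:
        return "pass"   # explicitly permitted external destination
    return "drop"       # everything else is isolated away at the vNIC

policy = {
    "same_tenant_members": {"10.1.0.7", "10.1.0.8"},
    "allowed_external": {"203.0.113.10"},
}
print(vnic_filter({"src": "10.1.0.7", "dst": "10.1.0.8"}, policy))      # pass
print(vnic_filter({"src": "10.1.0.7", "dst": "198.51.100.9"}, policy))  # drop
```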
Now that each logical layer 2 network of the tenant has the minimum size of containing one VM, the DHCP and ARP broadcast protocol messages which are initiated by the OS in the VMs can be received and processed by the NVI software. In response to DHCP and ARP messages from the VMs, the NVI generates IP/MAC associations in a global database. According to one embodiment, the global database is accessible by the NVI hypervisors hosting the VMs of the tenant. According to some embodiments, the new large layer 2 patching method discussed does not involve any broadcast messages of the DHCP and ARP plug-and-play standards. From the perspective of the OS of the VMs, the two standard protocols continue to serve the needed interplay role for layer 2 and layer 3 without change. However, the network configuration messages no longer need broadcasting, as the addressing associations are handled by the NVI infrastructure (e.g., NVI hypervisors managing entries in a global database).
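A minimal Python sketch of this broadcast-free handling is given below, under assumed names (global_db, nvi_handle_dhcp, nvi_handle_arp); it is meant only to show the idea of answering DHCP and ARP locally from a globally accessible database.

```python
# Minimal sketch: the NVI software answers a VM's DHCP and ARP messages out of
# a globally accessible IP/MAC database, so no broadcast needs to cross the
# boundary of the one-VM intranet.

global_db = {}  # IP id -> MAC id, shared by the NVI hypervisors

def nvi_handle_dhcp(mac_id, allocate_ip):
    """Record a MAC/IP association instead of relaying a DHCP broadcast."""
    ip_id = allocate_ip()
    global_db[ip_id] = mac_id
    return ip_id

def nvi_handle_arp(wanted_ip):
    """Resolve IP to MAC from the global database instead of broadcasting."""
    return global_db.get(wanted_ip)

# Example: two VMs, possibly in different datacenters, resolve each other
# without any trans-intranet broadcast.
ips = iter(["10.2.0.1", "10.2.0.2"])
ip_a = nvi_handle_dhcp("aa:bb:cc:11:11:11", lambda: next(ips))
ip_b = nvi_handle_dhcp("aa:bb:cc:22:22:22", lambda: next(ips))
assert nvi_handle_arp(ip_b) == "aa:bb:cc:22:22:22"
```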
According to another aspect, the logical layer 2 of the tenant can be
implemented in trans-datacenter manner based on handling network broadcast within the virtualization infrastructure. For example, by limiting broadcasting to the range of the minimum intranet of one VM, the disclosed layer 2 patching is scalable to an arbitrary size. Communications between trans-datacenter VMs of the same tenant occur in logical layer 2 fashion. In some embodiments, the functions of the global database instructed NVI hypervisors permit the DHCP and ARP standards to remain constrained to their normal plug-and-play execution for the VM users.
According to another aspect, the combination of the SDN software and the network topology enables traffic engineering and/or enlarging the trans-datacenter bandwidth. According to one embodiment, with the inside/outside isolation for the tenant network border controlled in a distributed manner at the point of the vNIC of each VM of the tenant, the underlying servers of the VMs of the tenant can become public just like the Internet. Although the Internet is not controlled, the underlying servers (e.g., NVI configured servers) can be configured to act as Internet forwarding devices for the VMs, operating under the control of NVI software and the globally accessible database. The underlying servers can be configured as publicly accessible resources similar to any Internet accessible resource, and at the same time the servers themselves are under route programming control. The route programming control can be executed by SDN components executing on the underlying servers. In further embodiments, the SDN components can be managed by one or more communication controllers.
In some embodiments, the underlying servers can be directly connected to the Internet on one of at least two respective NICs, denoted NIC-external ("NIC-ext"). In further embodiments, the servers are locally (e.g., within a building) connected by switches on the other of the NICs, denoted NIC-internal ("NIC-int"). In this novel cloud datacenter hardware network topology, since the NIC-external of each server can be reached by the other servers through their NIC-internals and the switches, all of the Internet connected NICs can be used by any VM to provide redundant communication routes, either for in-cloud trans-datacenter traffic, or for off-cloud external communication traffic with the rest of the world.
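The two-NIC topology can be pictured with the following minimal sketch, which models each server's NIC-ext and NIC-int and enumerates the redundant Internet-facing routes available to a VM hosted on one server; all names and addresses are illustrative assumptions.

```python
# Minimal sketch of the two-NIC datacenter topology: each server has one
# Internet-facing NIC (NIC-ext) and one switch-connected NIC (NIC-int).
# Because every NIC-ext is reachable over the internal switch fabric, a VM
# on any server can use every server's NIC-ext as a redundant external route.

from dataclasses import dataclass
from typing import List


@dataclass
class Server:
    name: str
    nic_int: str   # address on the intranet switch fabric
    nic_ext: str   # address directly connected to the Internet


def external_routes(host: Server, peers: List[Server]) -> List[List[str]]:
    """All redundant routes from a VM on `host` to the Internet."""
    routes = [[host.name, host.nic_ext]]                    # direct route
    for peer in peers:
        # indirect route: host NIC-int -> switch -> peer NIC-int -> peer NIC-ext
        routes.append([host.name, host.nic_int, peer.nic_int, peer.nic_ext])
    return routes


if __name__ == "__main__":
    servers = [Server("Server-%d" % i, "192.168.1.%d" % i, "203.0.113.%d" % i)
               for i in range(1, 5)]
    for route in external_routes(servers[0], servers[1:]):
        print(" -> ".join(route))
```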
The available redundancy greatly increases the utilization of the Internet, which is known to have been architected to contain high redundancy, and to have been over-provisioned through many years of commercial deployment. The over-provisioned resources of the Internet have, unfortunately, not been well utilized because most of its conventional forwarding devices cannot be programmed. According to one aspect, the Internet connected servers of the disclosed topology are programmable forwarding devices, and can therefore be used to exploit the un-utilized Internet bandwidth potential.
Fig. 3 is a block diagram of an Internet-intranet interfacing topology 300.
Various VMs of different tenants can take advantage of the SDN programmed
Internet-intranet interfaces of the underlying physical servers. At 302-326, shown are VMs of different tenants (e.g., each shape can correspond to a different tenant network) having respective firewalls that are distributed to the respective vNICs of the tenants' VMs (e.g., implemented through NVI infrastructure). The VMs are provisioned and controlled via the virtualization infrastructure (e.g., 328 and 330), which is connected to the Internet over distributed Internet-intranet interfaces (e.g., 332-348 and 350-366). As discussed above, in various embodiments, communication controllers and/or SDN components can leverage the distributed Internet-intranet interfaces for fully dynamic and programmatic route control of traffic. At 370, a conventional intranet network is connected via multiple Internet connections (at 374); however, interface 372 represents a chokepoint where traffic can still bottleneck. Even with SDN programming implemented, the interface 372 cannot fully distribute traffic and cannot fully exploit available bandwidth.
According to another aspect, the various intranet topologies discussed above can be implemented to provide for dynamic and distributed bandwidth exploitation.
According to one implementation, let a VM on a tenant network host a web server. The underlying hardware server for the VM (denoted Server-1 (e.g., 202 of Fig. 2)) is externally connected to the Internet on NIC-external-1 (e.g., 220), and is internally connected to many other servers in an intranet (e.g., a local intranet housed in a building) on NIC-internal-1 (e.g., 210) via switches (e.g., 218). Let all the other servers which are switch connected to Server-1 be denoted Server-i for i = 2, 3, ..., n (e.g., 204, 206, and 208). Each Server-i has a NIC-external-i directly connected to the Internet. In some implementations, it is realized that typical intranet connections are over-provisioned; that is, with copper-switched, or even faster optical-fiber-switched, connections, intranet links in a datacenter have high bandwidth.
According to some embodiments, web requests for the VM web server can be distributed to the n low-bandwidth NIC-external-i's and redirected to Server-1 and to the VM. Thus, the web service provider only needs to rent low-bandwidth Internet connections which can be aggregated into a very high bandwidth. It is well known that the Internet cost-versus-bandwidth curve is a convex function that rises rather quickly as the desired bandwidth increases. Thus, under various embodiments, high bandwidth can be achieved at low cost, making this a valid Internet traffic engineering technology.
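By way of example only, the sketch below shows one simple way such aggregation could be scheduled: incoming web requests for the VM on Server-1 are spread round-robin over the n low-bandwidth NIC-external-i entry points and then forwarded over the high-bandwidth intranet switches to Server-1. The names and the round-robin policy are illustrative assumptions, not a required scheduling method.

```python
# Minimal sketch: aggregate n low-bandwidth Internet connections by spreading
# requests over the external NICs of Server-2..Server-n, then forwarding over
# the high-bandwidth intranet to Server-1, which hosts the web-server VM.

import itertools


class AggregatedFrontEnd:
    def __init__(self, external_nics, target_server="Server-1"):
        self.target = target_server
        self._cycle = itertools.cycle(external_nics)  # round-robin scheduler

    def route_request(self, request_id):
        entry_nic = next(self._cycle)
        # The request enters the datacenter on a cheap, low-bandwidth NIC-ext
        # and is redirected over the intranet switch to the target server.
        return {
            "request": request_id,
            "entry": entry_nic,
            "via": "intranet-switch",
            "destination": self.target,
        }


if __name__ == "__main__":
    nics = ["NIC-external-%d" % i for i in range(2, 6)]
    frontend = AggregatedFrontEnd(nics)
    for rid in range(8):
        print(frontend.route_request(rid))
```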
According to other embodiments, other traffic engineering techniques can be implemented. For instance, upon detecting that a NIC-external for a trans-datacenter connection is congested, real-time route programming to select another server's NIC-external can evade the congestion (e.g., detected by a communication controller and re-routed by SDN components). In another example, a very big file in one datacenter in need of being backed up (e.g., for disaster recovery purposes) to another datacenter can be divided into much smaller parts, transmitted via many low-cost Internet connections to the other end, and reassembled, to greatly increase the transfer efficiency.
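The backup example can be made concrete with the following sketch, which splits a large file into chunks, spreads the chunks over several low-cost Internet connections, and reassembles them in order at the receiving datacenter; the "connections" are stand-ins and all names are illustrative.

```python
# Minimal sketch: divide a large backup into chunks, spread the chunks over
# many low-cost Internet connections, and reassemble them at the destination
# datacenter. The links here are simulated stand-ins.

def split(data: bytes, chunk_size: int):
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]


def transfer(data: bytes, connections, chunk_size: int = 4) -> bytes:
    chunks = split(data, chunk_size)
    received = {}
    for index, chunk in enumerate(chunks):
        link = connections[index % len(connections)]     # spread over links
        received[index] = (link, chunk)                  # simulated send
    # Reassembly at the remote end: order by chunk index, independent of link.
    return b"".join(received[i][1] for i in sorted(received))


if __name__ == "__main__":
    payload = b"very-big-file-to-back-up-across-datacenters"
    links = ["cheap-line-%d" % i for i in range(1, 4)]
    assert transfer(payload, links) == payload
    print("reassembled %d bytes over %d links" % (len(payload), len(links)))
```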
According to one embodiment, an NVI architecture (for example as discussed below) achieves network virtualization and provides a decoupling between the logical network of VMs and the physical network of the servers. The decoupling facilitates the implementation of a software-defined network ("SDN") in the cloud space. In various embodiments, the functions of the SDN are extended to achieve
programmable traffic control and to better utilize the potential of the underlying physical network. It is realized that SDN is not merely the use of software programming languages to realize network function boxes such as switches, routers, firewalls, network slicing, etc., which are conventionally provisioned as hardware boxes, as is often understood at a superficial level. Some examples discussed herein illustrate a more significant meaning of SDN, that is, to achieve programmable routing between two communication points.
For example, when communicating between A and Z, there are usually many routes. Because the Internet is architected to contain redundancy there are a multitude of paths any communication may take between A and Z. Unfortunately, the
redundancy potential is significantly under-utilized. One reason for the
under-utilization is due to the IP packet switching algorithm. When A wants to communicate with Z, A makes an IP packet of the following form: <A's packet = Z-IP, A-IP, Payload>. However, A has no idea how to reach Z, so A usually sends the packet to a network function box (e.g., a network gateway B). Upon receipt of the packet, B makes the following IP packet: <C-IP, B-IP, A's packet as a payload>. C repeats: <D-IP, C-IP, A's packet as a payload>, and so on, until Y (e.g., the gateway of Z) repeats: <Z-IP, Y-IP, A's packet as a payload>. Eventually Z receives A's payload. (Analogously, any two persons in the world can reach one another through a short chain of acquaintances, usually no more than six hops.) In this packet switching algorithm example (modeled on the IP protocol in the OSI standard), the route is an a-priori function of the packet which is received by a network function box; once sent out, the route is fixed and cannot be changed, e.g., upon traffic congestion, even though the Internet does have tremendous redundancy.
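This fixed, a-priori character of the route can be illustrated with the sketch below, in which each forwarding box simply re-wraps A's original packet toward its pre-determined next hop; once A hands the packet to B, no element of the chain selects a different path. The route and addresses are illustrative placeholders.

```python
# Minimal sketch of the hop-by-hop forwarding described above: each forwarding
# box makes a fresh packet <next-hop, itself, A's packet as payload>, so the
# route is fixed once the packet is sent and cannot be changed in transit.

def make_packet(dst, src, payload):
    return {"dst": dst, "src": src, "payload": payload}


def forward_along(route, original_packet):
    """route: list of (hop, next_hop) pairs; each hop re-wraps A's packet."""
    trace = []
    for hop, next_hop in route:
        # The hop cannot reroute: it only re-encapsulates toward next_hop.
        trace.append(make_packet(dst=next_hop, src=hop, payload=original_packet))
    return trace


if __name__ == "__main__":
    a_packet = make_packet("Z-IP", "A-IP", "Payload")
    fixed_route = [("B-IP", "C-IP"), ("C-IP", "D-IP"), ("Y-IP", "Z-IP")]
    for step in forward_along(fixed_route, a_packet):
        print(step)   # the innermost payload at every hop is A's packet
```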
Fig. 15 is an example process flow 1500 for programming network
communication. Process 1500 begins at 1502, where network traffic is received or accepted. At 1504, the received message is evaluated to determine where the message is addressed. If the message is addressed internal to the intranet segment on which the message originated (1504 internal), e.g., between VMs on one intranet segment, the message is routed via the NIC-ints of the respective servers. If the message is addressed external to the intranet segment (1504 external), then a route is programmed to traverse one or more NIC-ext of the servers within the intranet segment. In one embodiment, a communication controller manages the programming of the routes to be taken. The controller can be configured to evaluate available bandwidth on the one or more NIC-ext, determine congestion on one or more of the NIC-ext, and respond by programming a route accordingly. In some examples, the communication controller manages SDN components executing on the servers that make up the intranet segment to provide SDN programming of traffic.
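A minimal sketch of the decision in process 1500 follows, under the assumption that congestion is reported per NIC-ext as a load fraction: internal traffic stays on the NIC-ints, while external traffic is programmed onto the least-loaded NIC-ext of the segment. The load values and the selection rule are illustrative only.

```python
# Minimal sketch of process flow 1500: traffic addressed inside the intranet
# segment stays on the NIC-ints; traffic addressed outside is programmed onto
# the least-congested NIC-ext of the segment.

def program_route(message, segment_prefix, nic_ext_load):
    """nic_ext_load: mapping of NIC-ext name -> current load in [0.0, 1.0]."""
    if message["dst"].startswith(segment_prefix):
        # 1504 internal: stay on the internal switch fabric.
        return {"path": "NIC-int", "dst": message["dst"]}
    # 1504 external: pick the external NIC with the most spare bandwidth.
    chosen = min(nic_ext_load, key=nic_ext_load.get)
    return {"path": chosen, "dst": message["dst"]}


if __name__ == "__main__":
    load = {"NIC-ext-1": 0.90, "NIC-ext-2": 0.35, "NIC-ext-3": 0.60}
    print(program_route({"dst": "10.1.0.7"}, "10.1.", load))      # internal
    print(program_route({"dst": "198.51.100.9"}, "10.1.", load))  # external
```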
SDN, as implemented herein, enables such network traffic to be programmed en route, and therefore to utilize the unused potential of the Internet's redundancy. In various embodiments, it is observed that, within an intranet, the underlying physical network topology can be re-designed to add route redundancy. For example, let each server in the intranet act as a gateway, with one NIC directly wired to the Internet, and one NIC wired to other such servers in the intranet via a back-end switch. Once configured in this manner, VM-Internet communication routes can be SDN
programmed, for instance, distributed to many lines between the VM and the Internet.
Intranet lines have high bandwidth, easily at gigabits-per-second levels, like freeway traffic, while Internet bandwidth is typically low, easily orders of magnitude lower, and the rental fee for high bandwidth rises sharply as a convex function (like x^2 or e^x), due to under-utilization and hence a low return on the heavy investment in the infrastructure.
This new intranet network wiring topology provides sufficient route redundancy between each VM and the Internet, and can employ SDN to program the Internet-VM traffic over the redundant routes. According to one aspect, many low-cost, low-bandwidth Internet lines can be connected to the many external-facing NICs of the intranet elements, and can be aggregated into a high-bandwidth communication channel. Now the servers of each intranet form distributed gateways interfacing the Internet. To provide an analogy, the distributed gateways act like widened tollgates on a freeway, thus avoiding the formation of a traffic bottleneck, and/or avoiding the very high cost of renting high-bandwidth Internet services.
As discussed above with respect to Fig. 1, SDN can do little without route redundancy. Even with SDN control implemented, a chokepoint gateway still limits route distribution. Shown in Fig. 2 is an example intranet network topology according to various embodiments. Under the illustrated topology, intranet-to-Internet traffic can be SDN distributed, permitting, for example, aggregation of many low-bandwidth Internet communication channels, and further, permitting distributed network routing from the intranet to the Internet. Fig. 3 is a diagram of an example novel Internet-intranet interfacing topology, according to various embodiments, that takes advantage of the distributed Internet-intranet interfaces to provide programmatic traffic engineering.
NVI Infrastructure
According to some embodiments, the novel intranet topology can be implemented in conjunction with various virtualization infrastructures. One example virtualization infrastructure includes an NVI infrastructure. In some embodiments, the intranet topology is configured to facilitate dynamic route programming for VMs through the underlying servers that make up the intranet. Each server within such intranet segments can operate as a gateway for the VMs hosted on the intranet. In various embodiments, a minimum of two servers having the two NIC configuration (e.g., at least one NIC internal and at least one NIC external) are included in each intranet segment and provide the underlying servers for a virtualization infrastructure and VMs.
According to some embodiments, the VMs are provisioned and managed under an NVI infrastructure. Various properties and benefits of the NVI infrastructure are discussed below with respect to examples and embodiments. The functions, system elements, and operations discussed above, for example, with respect to intranet topology and/or patching can be implemented on or in conjunction with the systems, functions, and/or operations of the NVI systems below. According to some
embodiments, the NVI systems and/or functions provide distributed VM control on
tenant networks, providing network isolation and/or distributed firewall services. The intranet topology discussed above enables SDN route programming for trans-datacenter and VM-Internet routes, and scalable intranet patching.
Various embodiments of the NVI infrastructure address issues associated with conventional cloud network isolation approaches by using a globally distributed and intelligent network virtualization infrastructure. The NVI infrastructure is configured to provide communication functions to a group of virtual machines (VMs), which, in some examples, can be distributed across a plurality of cloud datacenters or cloud providers. The NVI implements a logical network between the VMs, enabling intelligent virtualization and programmable configuration of the logical network. The NVI can include software components (including, for example, hypervisors (i.e., VM managers)) and database management systems (DBMS) configured to manage network control functions.
According to one aspect, the NVI manages communication between a plurality of virtual machines by managing physical communication pathways between a plurality of physically associated network addresses which are mapped to respective globally unique logical identities of the respective plurality of virtual machines.
According to another aspect, network control is implemented on vNICs of VMs within the logical network. The NVI can direct communication on the logical network according to mappings between logical addresses of VMs (e.g., assigned at the vNICs of the VMs) and physically associated addresses assigned by respective clouds, with the mappings being stored by the DBMS. The mappings can be updated, for example, as VMs change location. For example, a logical address can be remapped to a new physically associated address when a virtual machine changes physical location, with the new physically associated address being recorded in the DBMS to replace the physically associated address in use before the VM changed physical location. According to one embodiment, the network control is fully logical, enabling the network dataflow for the logical network to continue over the physical networking components (e.g., assigned by cloud providers) that are mapped to and underlie the
logical network.
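A minimal sketch of this mapping, with hypothetical names, is given below: each VM keeps one permanent logical identity, and only the physically associated address recorded in the DBMS changes when the VM migrates.

```python
# Minimal sketch: the DBMS keeps a mapping from the permanent logical identity
# of each VM (e.g., a UUID or IPv6 address assigned at its vNIC) to the
# physically associated address currently assigned by the hosting cloud.

class LogicalNetworkDirectory:
    def __init__(self):
        self._map = {}    # logical identity -> physically associated address

    def register(self, logical_id, physical_addr):
        self._map[logical_id] = physical_addr

    def migrate(self, logical_id, new_physical_addr):
        # The logical identity never changes; only the mapping is updated.
        self._map[logical_id] = new_physical_addr

    def resolve(self, logical_id):
        return self._map[logical_id]


if __name__ == "__main__":
    directory = LogicalNetworkDirectory()
    directory.register("vm-uuid-0001", "provider-A:10.4.2.17")
    directory.migrate("vm-uuid-0001", "provider-B:172.16.9.3")   # VM moved
    print(directory.resolve("vm-uuid-0001"))
```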
According to some embodiments, enabling the network control functions directly at vNICs of respective VMs provides for definition and/or management of arbitrarily scalable virtual or logical networks. Such control functions can include "plugging"/"unplugging" logically defined unicast cables between the vNICs of pairs of VMs to implement network isolation policy, transforming formats of network packets (e.g., between IPv6 and IPv4 packets), providing cryptographic services on application data in network packets to implement cryptographic protection of tenants' data, monitoring and/or managing traffic to implement advanced network QoS (e.g., balancing load, diverting traffic, etc.), providing intrusion detection and/or resolution to implement network security QoS, and allocating expenses to tenants based on network utilization, among other options. In some example implementations, such logical networks can target a variety of quality of service goals. Some example goals include providing a cloud datacenter configured to operate in resource rental, multi-tenancy, and in some preferred embodiments, Trusted Multi-tenancy, and in further preferred embodiments, on-demand and self-serviceable manners.
In some implementations, resource rental refers to a tenant (e.g., an organization or a compute project) that rents a plural number of virtual machines (VMs) for its users (e.g., employees of the tenant) to carry out computations the tenant wishes to execute. The users, applications, and/or processes of the tenant use the compute resources of a provider through the rented VMs, which can include operating systems, databases, web/mail services, applications, and other software resources installed on the VMs.
In some implementations, multi-tenancy refers to a cloud datacenter or cloud compute provider that is configured to serve a plural number of tenants. The multi-tenancy model is conventional throughout compute providers, and typically allows the datacenter to operate with economy of scale. In further implementations, multi-tenancy can be extended to trusted multi-tenancy, where VMs and associated network resources are isolated from access by the system operators of the cloud providers, and, unless explicitly permitted by the tenants involved, any two VMs and associated network resources rented by different tenants are configured to be isolated from one another. VMs and associated network resources which are rented by one tenant can be configured to communicate with one another according to any security policy set by the tenant.
In some embodiments, on-demand and self-serviceability refers to the ability of a tenant to rent a dynamically changeable quantity/amount/volume of resources according to need, and, in preferred embodiments, in a self-servicing manner (e.g., by editing a restaurant-menu-like webpage). In one example, self-servicing can include instructing the datacenter using simple web-service-like interfaces for resource rental at a location outside the datacenter. In some embodiments, self-servicing resource rental can include a tenant renting resources from a plural number of cloud providers which have trans-datacenter physical and/or geographical distributions. Conventional approaches may fail to provide any one or more of: multi-tenancy, trusted multi-tenancy, on-demand and trans-datacenter resource rental, and self-serviceability with respect to network resources, for example with respect to the Local Area Network (LAN). The LAN is typically a shared hardware resource in a datacenter. For IT security (e.g., cloud security), isolation of the LAN for tenants in cloud datacenters can be necessary. However, it is realized that LAN isolation turns out to be a very challenging task unresolved by conventional approaches.
Accordingly, provided are systems and methods for isolating network resources within cloud compute environments. According to some embodiments, the systems and methods provide logical de-coupling of a tenant network through globally uniquely identifiable identities assigned to VMs. Virtualization infrastructure (VI) at each provider can be configured to manage communication over a logical virtual network created via the global identifiers for VMs rented by the tenant. The logical virtual network can be configured to extend past cloud provider boundaries, and in some embodiments, allows a tenant to specify the VMs and associated logical virtual network (located at any provider) via whitelist definition.
Shown in Fig. 4 is an example embodiment of a network virtualization infrastructure (NVI) or NVI system 400. According to one embodiment, the system 400 can be implemented on and/or in conjunction with resources allocated by cloud resource providers. In further embodiments, system 400 can be hosted, at least in part, external to virtual machines and/or cloud resources rented from cloud service providers. In one example, the system 400 can also serve as a front end for accessing pricing and rental information for cloud compute resources. According to one embodiment, a tenant can access system 400 to allocate cloud resources from a variety of providers. Once the tenant has acquired specific resources, for example, in the form of virtual machines hosted at one or more cloud service providers, the tenant can identify those resources to define their network via the NVI system 400.
In another example, the logic and/or functions executed by system 400 can be executed on one or more NVI components (e.g., hypervisors (virtual machine managers)) within respective cloud service providers. In other embodiments, one or more NVI components can include proxy entities configured to operate in conjunction with hypervisors at respective cloud providers. The proxy entities can be created as specialized virtual machines that facilitate the creation, definition and control function of a logical network (e.g., a tenant isolated network). Creation of the logical network can include, for example, assignment of globally unique logical addresses to VMs and mapping of the globally unique logical addresses to physically associated addresses of the resources executing the VMs. In one embodiment, the proxy entities can be configured to define logical communication channels (e.g., logically defined virtual unicast cables) between pairs of VMs based on the globally unique logical addresses. Communication between VMs can occur over the logical communication channels without regard to physically associated addressing which are mapped to the logical addresses/identities of the VMs. In some examples, the proxy entities can be configured to perform translations of hardware addressed communication into purely logical addressing and vice versa. In one example, a proxy entity operates in conjunction with a respective hypervisor at a respective cloud provider to capture VM
communication events and to route VM communication between the vNIC of the VM and a software switch or bridge in the underlying hypervisor on which the proxy entity serves the VM. In some embodiments, a proxy entity is a specialized virtual machine at a respective cloud provider or respective hypervisor configured for back-end servicing. In some examples, a proxy entity manages internal or external
communication according to communication policy defined on logical addresses of the tenants' isolated network (e.g., according to network edge policy).
In further examples, the NVI system 400 can also include various
combinations of rented cloud resources, hypervisors, and/or systems for allocating cloud resources through various providers. Once a set of cloud resources has been assigned, the NVI system 400 can be configured to map globally unique identities of respective virtual machines to the physically associated addresses of the respective resources. According to some embodiments, the NVI system 400 can include an NVI engine 404 configured to assign globally unique identities of a set of virtual machines to resources allocated by hypervisors to a specific tenant. The set of virtual machines can then be configured to communicate with each other using the globally unique identities. In one embodiment, the NVI system and/or NVI engine is configured to provide network control functions over logically defined unicast channels between virtual machines within a tenant network. For example, the NVI system 400 can provide for network control at each VM in the logical network. According to one embodiment, the NVI system 400 can be configured to provide network control at the vNIC of each VM, allowing direct control of network communication of the VMs in the logical network. The NVI system 400 can be configured to define point-to-point connections, including, for example, virtual cable connections between vNICs of the virtual machines of the logical network using their globally unique addresses.
Communication within the network can proceed over the virtual cable connections defined between a source VM and a destination VM.
According to one embodiment, the NVI system 400 and/or NVI engine 404 can be configured to open and close communication channels between a source and a
destination (including, for example, internal and external network addresses). The NVI system 400 and/or NVI engine 404 can be configured to establish virtual cables providing direct connections between virtual machines that can be connected and disconnected according to a communication policy defined on the system. In some examples, each tenant can define a communication policy according to their needs. The communication policy can be defined on a connection by connection basis, both internally to the tenant network and by identifying external communication connections. In one example, the tenant can specify for an originating VM in the logical network what destination VMs the originating VM is permitted to
communicate with based on globally unique logical identities assigned. Further, the tenant can define communication policy according to source and destination logical identities.
According to one implementation, the NVI system 400 and/or NVI engine 404 can manage each VM of the logical network according to an infinite number of virtual cables defined at the vNICs of the VMs. For example, virtual cables can be defined between pairs of VMs and their vNICs for every VM in the logical network. The tenant can define a communication policy for each cable, allowing or denying traffic according to programmatic if-then-else logic.
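By way of illustration only, such a per-cable policy could be expressed as in the sketch below, where the tenant allows or denies each (source, destination) pair of globally unique identities and a cable is plugged only if the pair is allowed; the identities and helper names are hypothetical.

```python
# Minimal sketch: tenant communication policy defined per virtual cable, i.e.
# per (source, destination) pair of globally unique logical identities.
# The cable is plugged only if the policy allows the pair.

class CablePolicy:
    def __init__(self):
        self._allowed = set()     # set of (src_id, dst_id) pairs

    def allow(self, src_id, dst_id):
        self._allowed.add((src_id, dst_id))

    def permits(self, src_id, dst_id):
        return (src_id, dst_id) in self._allowed


def plug_cable(policy, src_id, dst_id):
    if policy.permits(src_id, dst_id):
        return "cable plugged: %s <-> %s" % (src_id, dst_id)
    return "cable refused: %s -> %s" % (src_id, dst_id)


if __name__ == "__main__":
    policy = CablePolicy()
    policy.allow("vm-uuid-web", "vm-uuid-db")
    print(plug_cable(policy, "vm-uuid-web", "vm-uuid-db"))   # permitted
    print(plug_cable(policy, "vm-uuid-web", "vm-uuid-hr"))   # denied
```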
By allowing tenants to establish communication policies according to the globally unique identities/addresses, the NVI system and/or engine are configured to provide distributed firewall services. According to various implementations, distribution of connection control can eliminate the chokepoint limitations of conventional architectures, and in further embodiments, permit dynamic
re-architecting of a tenant network topology (e.g., adding, eliminating, and/or moving cloud resources that underlie the logical network).
As discussed, the NVI system 400 and/or engine 404 can be configured to allocate resources at various cloud compute providers. In some embodiments, the system and/or engine can be executed by one or more hypervisors at the respective cloud providers. In other embodiments, the system and/or engine can be configured to request that respective hypervisors create virtual machines and provide identifying information for the created virtual machines (e.g., to store mappings between logical addresses and physically associated addresses of the resources). In further embodiments, the functions of system 400 and/or engine 404 can be executed by a respective hypervisor within a cloud provider system. In some implementations, the functions of system 400 and/or engine 404 can be executed by and/or include a specialized virtual machine or proxy entity configured to interact with a respective hypervisor. The proxy entity can be configured to request resources and respective cloud provider identifying information (including physically associated addresses for resources assigned by hypervisors).
In one example, the system and/or engine can be configured to request, capture, and/or assign temporary addresses to any allocated resources. The temporary addresses are "physically associated" addresses assigned to resources by respective cloud providers. For example, the temporary addresses are used in conventional networking technologies to provide communication between resources and to other addresses, for example, Internet addresses. Typically, in conventional communications, the physically associated addresses are included in network packet metadata, either as a MAC address, an IP address, or a context tag.
Rather than permit VMs to communicate via physically associated addresses directly, the NVI system 400 de-couples any physical association in its network topology by defining logical addresses for each VM in the logical network. In some examples, communication can occur over virtual cables that connect pairs of virtual machines using their respective logical addresses. For example, the system and/or engine 404 can be configured to manage creation/allocation of virtual machines and also manage communication between the VMs of the logical network at respective vNICs. The system 400 and/or engine 404 can also be configured to identify communication events at the vNICs of the virtual machines when the virtual machines initiate or respond to a communication event. Such direct control can provide advantages over conventional approaches.
As discussed, the system and/or engine can include proxy entities at respective cloud providers. The proxy entities can be configured to operate in conjunction with respective hypervisors to obtain hypervisor assigned addresses, identify
communication events initiated by respective virtual machines, map physically associated addresses assigned by the hypervisors to logical addresses of the VMs, and define virtual cables between network members, etc. In some embodiments, a proxy entity can be created at each cloud provider involved in a tenant network, such that the proxy entity manages the virtualization/logical isolation of the tenant's network. For example, each proxy entity can be a back-end servicing VM configured to provide network control functions on the vNICs of front-end business VMs (between the vNICs of the business VMs and the hypervisor switch or hypervisor bridge), to avoid programming in the hypervisor directly.
According to one embodiment, the system 400 and/or engine 404 can also be configured to implement communication policies within the tenant network. For example, when a virtual machine begins a communication session with another virtual machine, the NVI system 400 and/or NVI engine 404 can identify the communication event and test the communication against tenant-defined policy. The NVI system 400 and/or NVI engine 404 component can be configured to reference the physically associated addresses for the VMs in the communication and look up their associated globally unique addresses and/or connection certificates (e.g., stored in a DBMS). In some settings, encryption certificates can be employed to protect/validate network mappings. In one example, a PKI certificate can be used to encode a VM's identity, Cert(UUID/IPv6), with a digital signature over its global identity (e.g., UUID/IPv6) and physically associated address (e.g., IP), Sign(UUID/IPv6, IP). The correctness of the mapping (UUID/IPv6, IP) can then be cryptographically verified by any entity using Cert(UUID/IPv6) and Sign(UUID/IPv6, IP). The NVI system 400 and/or NVI engine 404 can verify each communication with a certificate lookup and handle each communication event according to a distributed communication policy defined on the logical connections.
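For purposes of illustration only, the sketch below checks a communication event against a verified mapping and the tenant policy; for brevity an HMAC stands in for the PKI signature Sign(UUID/IPv6, IP) described above, and a real deployment would instead verify the signature against Cert(UUID/IPv6). All names and keys are hypothetical.

```python
# Minimal sketch: verify the (UUID/IPv6, IP) mapping of both endpoints before
# honoring a communication event, then apply the tenant policy on logical IDs.
# An HMAC is used here as a stand-in for the PKI signature described above.

import hashlib
import hmac

SECRET = b"illustrative-signing-key"


def sign_mapping(logical_id: str, physical_ip: str) -> str:
    msg = ("%s|%s" % (logical_id, physical_ip)).encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()


def verify_mapping(logical_id: str, physical_ip: str, signature: str) -> bool:
    return hmac.compare_digest(sign_mapping(logical_id, physical_ip), signature)


def handle_event(src_ip, dst_ip, directory, policy):
    """directory: physical IP -> (logical_id, signature); policy: allowed pairs."""
    src_id, src_sig = directory[src_ip]
    dst_id, dst_sig = directory[dst_ip]
    if not (verify_mapping(src_id, src_ip, src_sig)
            and verify_mapping(dst_id, dst_ip, dst_sig)):
        return "dropped: mapping not verifiable"
    if (src_id, dst_id) not in policy:
        return "dropped: not permitted by tenant policy"
    return "forwarded on logical cable %s -> %s" % (src_id, dst_id)


if __name__ == "__main__":
    directory = {
        "10.0.0.2": ("vm-uuid-a", sign_mapping("vm-uuid-a", "10.0.0.2")),
        "10.0.0.3": ("vm-uuid-b", sign_mapping("vm-uuid-b", "10.0.0.3")),
    }
    policy = {("vm-uuid-a", "vm-uuid-b")}
    print(handle_event("10.0.0.2", "10.0.0.3", directory, policy))
    print(handle_event("10.0.0.3", "10.0.0.2", directory, policy))
```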
According to some implementations, the NVI system 400 provides a logically defined network 406 de-coupled from any underlying physical resources. Responsive to any network communication event 402 (including, for example, VM-to-VM, VM-to-external, and/or external-to-VM communication), the NVI system is configured to abstract the communication event into the logical architecture of the network 406. In one embodiment, the NVI system "plugs" or "unplugs" a virtual cable at respective vNICs of VMs to carry the communication between a source and a destination. The NVI system can control internal network and external network communication according to the logical addresses by "plugging" and/or "unplugging" virtual cables between the logical addresses at respective vNICs of VMs. As the logical address for any resource within a tenant network is globally unique, new resources can be readily added to the tenant network, and can be readily incorporated into
communication policies defined for the tenant network. Further, as new resources are assigned and mapped to logical addresses, the physical location of a newly added resource is irrelevant. As the physical location is irrelevant, new resources can be provisioned from any cloud provider.
In some embodiments, the NVI system 400 and/or NVI engine 404 can be configured to accept tenant identification of virtual resources to create a tenant network. For example, the tenant can specify VMs to include in its network, and, in reaction to the tenant request, the NVI can provide the physically associated addressing information allocated by respective cloud providers for the resources executing the VMs, to be mapped to the logical addresses of the tenant-requested VMs to define the tenant network.
The system can be configured to assign new globally unique identifiers to each resource. The connection component 408 can also be configured to accept tenant-defined communication policies for the new resources. In some implementations, the tenant can define their network using a whitelist of included resources. In one example, the tenant can access a user interface display provided by system 400 to input identifying information for the tenant resources. In further embodiments, the
tenant can add, remove, and/or re-architect their network as desired. For example, the tenant can access the system 400 to dynamically add resources to their whitelist, remove resources, and/or create communication policies.
In some embodiments, the NVI system 400 can also provide for encryption and decryption services to enable additional security within the tenant's network and/or communications. In some embodiments, the NVI system and/or NVI engine 404 can be configured to provide for encryption.
According to another aspect, the NVI system 400 can also be configured to provision additional resources responsive to tenant requests. The NVI system 400 can dynamically respond to requests for additional resources by creating global addresses for any new resources. In some implementations, a tenant can define a list of resources to include in the tenant's network using system 400. For example, upon receipt of the tenant's resource request, the NVI can create resources for the tenant in the form of virtual machines and specify identity information for the virtual machines, which execute on resources allocated by whatever cloud provider is used. The system 400 can be configured to assign globally unique identifiers to each virtual machine identified by the NVI for the tenant and store associations between globally unique identifiers and resource addresses for use in communicating over the resulting NVI network. In further embodiments, the system can create encryption certificates for a tenant for each VM rented by the tenant in the NVI logical network. In some examples, the NVI can specify encryption certificates for a tenant as part of providing identity information for virtual machines to use in the tenant's network. The NVI system can then provide for encryption and decryption services as discussed in greater detail herein.
At least some embodiments disclosed herein include apparatus and processes for creating and managing a globally distributed and intelligent NVI or NVI system. The NVI is configured to provide a logical network implemented on cloud resources. According to some embodiments, the logical network enables communication between VMs using logically defined unicast channels defined on logical addresses
within the logical network. Each logical address can be a globally unique identifier that is associated by the NVI with addresses assigned to the cloud resources (e.g., physical addresses or physically associated addresses) by respective cloud datacenters or providers. In some embodiments, the logical addresses remain unchanged even as physical network resources supporting the logical network change, for example, in response to migration of a VM of the logical network to a new location or a new cloud provider.
In some embodiments, the NVI includes a database or other data storage element that records a logical address for each virtual machine of the logical network. The database can also include a mapping between each logical address and a physically associated address for the resource(s) executing the VM. For example, a logical network ID (e.g., UUID or IPv6 address) is assigned to a vNIC of a VM and mapped to a physical network address and/or context tag assigned by the cloud provider to the resources executing the VM. In further embodiments, the NVI can be associated with a database management system (DBMS) that stores and manages the associations between logical identities/addresses of VMs and underlying physical addresses of the resources. According to some embodiments, the logical
identities/addresses of the VMs never change even as the location of the VM changes. For example, the NVI is configured to update the mappings of the permanent logical addresses of the VMs to physically associated addresses as the resources assigned to the logical network change.
Further embodiments include apparatus and processes for provisioning and isolating network resources in cloud environments. According to some embodiments, the network resources can be rented from one or more providers hosting respective cloud datacenters. The isolated network can be configured to provide various quality of service ("QoS") guarantees and/or levels of service. In some implementations, QoS features can be performed according to software-defined networking principles.
According to one embodiment, the isolated network can be purely logical, relying on no information of the physical locations of the underlying hardware network devices.
In some embodiments, implementation of purely logical network isolation can enable trans-datacenter implementations and facilitate distributed firewall policies.
According to one embodiment, the logical network is configured to pool underlying hardware network devices (e.g., those abstracted by the logical network topology) for network control into a network resource pool. Some properties provided by the logical network include, for example: a tenant only sees and on demand rents resources for its business logic; and the tenant need never care where the underlying hardware resource pool is located or how the underlying hardware operates.
According to some embodiments, the system provides a globally distributed and intelligent network virtualization infrastructure ("NVI"). The hardware basis of the NVI can consist of globally distributed and connected physical computer servers which can communicate with one another using any conventional computer networking technology. The software basis of the NVI consists of hypervisors (i.e., virtual machine managers) and database management systems (DBMS) which can execute on the hardware basis of the NVI. According to some embodiments, the NVI can include the following properties: first, any two hypervisors of a cloud provider or of different cloud providers in the NVI can be configured to communicate with one another from their respective physical locations. If necessary, the system can use dedicated cable connection technologies or well-known virtual private network (VPN) technology to connect any two or more hypervisors to form a globally connected NVI. Second, the system and/or virtualization infrastructure knows of any communication event which is initiated by a virtual machine (VM) more directly and earlier than a switch does when the latter sees a network packet.
It is realized that the latter event (detection at a switch) is only observed as a result of the NVI sending the packet from a vNIC of the VM to the switch. The prior event (e.g., detection at initiation) is a property of the NVI managing the VM's operation, for example at a vNIC of the VM, which can include the NVI identifying communication at the initiation of a communication event (e.g., prior to transmission, at receipt, etc.). Thus, according to some embodiments, the NVI can
control and manage communications for globally distributed VMs via its intelligently connected network of globally distributed hypervisors and DBMS.
According to some aspects, these properties of the NVI enable the NVI to construct a purely logical network for globally distributed VMs. In some
embodiments, the control functions for the logical network of globally distributed VMs, which define the communications semantics of the logical network (i.e., govern how VMs in the logical network communicate), are implemented in, and executed by, software components which work with the hypervisors and DBMS of the NVI to cause functions to take effect at the vNICs of VMs; the network dataflow for the logical network of globally distributed VMs passes through the physical networking components which underlie the logical network and connect the globally distributed hypervisors of the NVI. It is realized that the separation of the network control function in software (e.g., operating at vNICs of VMs) from the network dataflow through the physical networking components allows definition of the logical network without physical network attributes. In some implementations, the logical network definition can be completely de-coupled from the underlying physical network.
According to one aspect, the separation of the network control function on vNICs of VMs from the network dataflow through the underlying physical network of the NVI results in communications semantics of the logical network of globally distributed VMs that can be completely software defined; in other words, it results in a logical network of globally distributed VMs that, according to some embodiments, can be a software defined network (SDN): communications semantics can be provisioned automatically, fast, and dynamically changing, with trans-datacenter distribution, and with a practically unlimited size and scalability for the logical network.
According to some embodiments, using software network control functions that take effect directly on vNICs enables construction of a logical network of VMs of global distribution and unlimited size and scalability. It is realized that network control methods/functions in conventional systems, whether in software or hardware (including, e.g., OpenFlow), take effect in switches, routers and/or other network devices. Thus, it is further realized that construction of a large-scale logical network in conventional approaches requires, at best, step-by-step system upgrading of switches, routers and/or other network devices, which is impractical for constructing a globally distributed, trans-datacenter, or unlimited-scalability network.
Examples of control functions that take effect directly on vNICs of VMs in some embodiments include any one or more of: (i) plugging/unplugging logically defined unicast cables to implement network isolation policy, (ii) transforming packets between IPv6 and IPv4 versions, (iii) encrypting/decrypting or applying IPsec-based protection to packets, (iv) monitoring and/or diverting traffic, (v) detecting intrusion and/or DDoS attacks, and (vi) accounting fees for traffic volume usage, among other options.
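For illustration only, such control functions could be composed at the vNIC as a simple processing chain, as in the sketch below; the stages shown (isolation and accounting) are representative stand-ins for the functions listed above, and all names are hypothetical.

```python
# Minimal sketch: control functions composed as a processing chain applied at
# the vNIC of a VM. Each stage corresponds to one of the functions listed
# above (isolation, accounting, etc.); only two stages are shown.

def isolation_stage(packet, state):
    if (packet["src"], packet["dst"]) not in state["allowed_pairs"]:
        return None                      # unicast cable not plugged: drop
    return packet


def accounting_stage(packet, state):
    state["bytes_by_src"][packet["src"]] = (
        state["bytes_by_src"].get(packet["src"], 0) + packet["size"])
    return packet


def vnic_pipeline(packet, state, stages):
    for stage in stages:
        packet = stage(packet, state)
        if packet is None:
            return "dropped"
    return "delivered"


if __name__ == "__main__":
    state = {"allowed_pairs": {("vm-a", "vm-b")}, "bytes_by_src": {}}
    stages = [isolation_stage, accounting_stage]
    print(vnic_pipeline({"src": "vm-a", "dst": "vm-b", "size": 1500}, state, stages))
    print(vnic_pipeline({"src": "vm-a", "dst": "vm-c", "size": 1500}, state, stages))
    print(state["bytes_by_src"])
```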
According to another embodiment, using a virtualization server, the system can distribute firewall packet filtering to the locality of each VM (e.g., at the vNIC). Any pair of VMs, or a VM and an external entity, can communicate in "out-in" fashion, provided isolation and firewall policies permit, whether these communicating entities are in the same intranet or in trans-global locations separated by the Internet. Within such an intranet, the region outside the distributed points of VM packet filtering can be configured outside the firewalls of any tenant, exactly like the Internet. However, the OSI layers 1, 2, and 3 of this "Internet within intranet" region are fully under centralized control and distributed SDN programmability on each server.
According to one embodiment, this topology can be used in conjunction with a variety of virtualization systems (in one example, under the control node of Openstack) to achieve an Internet within intranet region that is under communication control and SDN programmability. With the Internet within intranet topology, the distributed servers become SDN programmable forwarding devices that participate in traffic route dynamicity and bandwidth distribution, and in particular can act as a distributed gateway to enlarge the bandwidth for VM-Internet traffic.
According to another aspect, when patching intranets with the new "Internet within intranet" topology, there are no longer any tenant network ids (e.g., VLAN, VXLAN) to maintain, and the patching job can be done at any of the distributed gateways. Thus, the new SDN route dynamicity and programmability in intranets with the Internet within intranet topology successfully eliminates any chokepoint from the Internet-intranet interface, and in further embodiments, optimally widens routes for intranet patching and Internet traffic. According to one aspect, by including Internet route redundancy into local intranets, the full potential of SDN can be achieved.
In contrast, as long as intranet patching resorts to maintaining tenants' network ids at centralized patching points (e.g., running STT, VXLAN, MPLS and similar protocols), and as long as cloud firewalls are chokepointed at centralized gateways, SDN will be subject to the limitation of the chokepoint. Similarly, even advanced proposals for future standards (e.g., OpenFlow) are so constrained. For example, when using OpenFlow in an intranet with a chokepointed gateway, the route dynamic programmability capability of OpenFlow is limited by the gateway, hence choking the power of SDN.
NVI - Example Advantages Over Existing Approaches
Various advantages and features of some embodiments and aspects can be better appreciated with an understanding of conventional approaches. Some conventional approaches are described below, as well as the limitations discovered for such systems. Various embodiments are discussed with respect to overcoming the discovered limitations of some conventional approaches. With the rise of cloud computing, the conventional approach of IT asset ownership is transitioning to IT-as-a-service utilization. The conventional lifestyle of owning hardware machines standing on floors and desks is evolving into a new lifestyle of virtual machines (VMs) standing on the powerfully intelligent and globally distributed software of a virtualization infrastructure (VI), which consists of globally distributed hypervisors under the centralized management of likewise globally distributed database management systems.
Before virtualization, every physical IT business processing box (below, IT box) included a physical network interface card (NIC) to which a cable can be plugged to establish a connection between two ends (a wireless NIC has the same property of "being plugged as a cable"), and the other end of the cable is a network control device. Any two IT boxes may or may not communicate with one another, as determined by the network control devices in between them. Communications between IT boxes are controlled by the control devices inspecting and processing metadata (addresses and possibly more refined contexts called tags) in the head part of network packets: permitting some packets to pass through, or dropping others, according to the properties of the metadata in the packets against a pre-specified communications policy. This control through physically associated addressing (e.g., MAC addresses, IP addresses and/or context tags) has a number of drawbacks.
Although physical resources are currently becoming virtualized, various virtual implementations still use conventional network packet metadata processing (i.e., physically associated addressing) to control communications for virtual machines. For example, Openstack operation includes sending the network packets of a VM to a centralized network device (in Openstack the network device may be a software module in a hypervisor, called a hypervisor switch or hypervisor bridge) via a network cable (which may also be software implemented in a hypervisor), for passing or dropping packets at centralized control points.
This conventional network control technology of processing packet metadata at centralized control points has various limitations in spite of virtualization. The centralized packet processing method, which processes network control in the metadata or head part, and forwards dataflow in the main-body part, of a network packet at a centralized point (called a chokepoint), cannot make efficient use of the distributed computing model of the VI; centralized packet processing points can form a performance bottleneck at large scale. The packet metadata inspection method examines a fraction of the metadata (an address or a context tag) in the head of a whole network packet, and then may drop the whole packet (resulting in wasted network traffic). Additionally, the metadata (addresses and tags) used in the head of a network packet are still physically associated with (i.e., related to) the physical location of the hardware of the respective virtualized resources. Physical associations are not an issue for on-site and peak-volume provisioned physical resources (the IT-as-an-asset model), where changes in topology are infrequent. Generally, for IT as an asset, network scalability, dynamic changes of network topology and distribution, and like network quality of service (QoS) concerns do not pose the same issues as are found with off-site and on-demand utilization of services such as the virtualized infrastructure of cloud computing.
With virtualization-technology-enabled cloud computing, the user or tenant may require an on-demand, elastic way to rent IT resources, and may also rent from geographically different and scattered locations of distributed cloud datacenters (e.g., to increase availability and/or reliability). Cloud providers may also require the ability to move assigned resources to maximize utilization and/or minimize maintenance. These requirements in cloud computing translate to needs for resource provisioning with the following properties: automatic, fast and dynamically changing, and trans-datacenter scalable; and, for the IT resource that is the network per se, a tenant's network should support a tenant-definable arbitrary topology, which can also have a trans-datacenter distribution.
It is realized that the network inside a cloud datacenter upon which various QoS can be performed in SDN should be a purely logical one. By implementing a purely logical network topology over physical network resources, the properties provided by various embodiments can include: logical addressing containing no information on the physical locations of the underlying physical network devices; and enabling pooling of hardware devices for network control into a network resource pool. Various implementations can also take advantage of conventional approaches to allow hypervisors of respective cloud providers to connect with each other (e.g., VPN connections) underneath the logical topology. Further, various embodiments can leverage management of VMs by the hypervisors and/or proxy entities to capture and process communication events. Such control allows communication events to be captured more directly and earlier than, for example, switch based control (which
must first receive the communication prior to action). Thus, various embodiments can control and manage communications for globally distributed VMs without need of inspecting and processing any metadata in network packets.
Conventional firewall implementations focus on a "chokepoint" model: an organization first wires its owned, physically close-by IT boxes to some hardware network devices to form the organization's internal local area network (LAN); the organization then designates a "chokepoint" at the unique point where the LAN and the wide area network (WAN) meet, and deploys the organization's internal and external communications policy only at that point to form the organization's network edge. Conventional firewall technologies can use network packet metadata such as IP/MAC addresses to define the LAN and configure the firewall. Due to the seldom-changing nature of network configurations, it suffices for the organization to hire specialized network personnel to configure the network and firewall, and it suffices for them to use command-line-interface (CLI) configuration methods.
In cloud computing, an organization's private network and firewall are often deployed in server virtualization multi-tenancy datacenters. A firewall in this setting needs to be virtualized and isolated from the remainder of the datacenters. In some known approaches (e.g., Openstack), firewalls are based on VLAN technology. To deploy the VLAN technology in the VI, the physical hardware switches are "virtualized" into software counterparts in hypervisors, which are called either "hypervisor learning bridges" or "virtual switches" ("hypervisor switch" is a more meaningful name). These are software components in hypervisors connecting the vNICs of VMs to the hardware NIC on the server. They are referred to below interchangeably as a hypervisor switch.
Like a hardware switch, a hypervisor switch participates in LAN construction by learning and processing network packet metadata such as addresses. Also like its hardware counterpart, a hypervisor switch can refine a LAN by adding more contexts to the packet metadata. The additional contexts which can be added to the packet metadata part by a switch (virtual or real) are called tags. The hypervisor
switch can add different tags to the network packets of IT boxes which are rented by different tenants. These different tenants' tags divide a LAN into isolated virtual LANs, isolating tenants' networks in a multi-tenancy datacenter. In some examples, VLAN technology is for network cable virtualization: packets sharing some part of a network cable are labeled differently and thereby can be sent to different destinations, just like passengers in an airport sharing some common corridors before boarding at different gates, according to the labels (tags) on their boarding passes.
Accordingly, provided in various implementations is a network virtualization infrastructure (NVI) leveraging direct communication control over VMs to establish a fully logical network architecture. Direct control over each VM, for example through a hypervisor and/or proxy entity, is completely distributed and takes place at the location where the VM with its vNICs is currently executing. An advantage of the direct network control function on a vNIC is that the communication control can avoid complex processing of network packet metadata, which is tightly coupled with the physical locations of the network control devices, and instead use purely logical addresses of vNICs. In some examples, the resultant logical network eliminates any location-specific attributes of the underlying physical network. SDN work over the NVI can be implemented simply, as straightforward high-level language programming.
According to some embodiments, if VMs' communications are intermediated by the NVI, then each VM can be viewed by the NVI as having an infinite number of vNIC cards, each of which can be plugged as a logically defined unicast cable for exclusive use with a single given communications partner. As a hypervisor in the NVI is responsible for passing network packets from/to the vNIC of a VM right at the spot of the VM, the NVI can be configured for direct quality of control, either by controlling communication directly with the hypervisor or by using a proxy entity coupled with the hypervisor. By contrast, a switch, even a software-coded hypervisor switch, can only control a VM's communications via packet metadata received from a multicast network cable. It is appreciated that the difference between a VM being plugged into multiple unicast cables under the NVI's direct control, and the VM being plugged into one multicast cable under the switch's indirect and packet-metadata control, is non-trivial. A logical network which is constructed from multiple, real-time plugged/unplugged, unicast cables need not manage any packet metadata with physical network attributes anymore. Thus, by enabling direct control of VM communication through the NVI, the resultant logical network can be completely de-coupled from the underlying physical network.
Example NVI Implementations
Fig. 5 illustrates an example implementation of network virtualization infrastructure (NVI) technology according to one embodiment. The NVI system 500 and corresponding virtualization infrastructure (VI) which can be globally distributed over a physical network can be configured to plug/unplug a logically defined unicast network cable 502 for any given two globally distributed VMs (e.g., 501 and 503 hosted, for example, at different cloud datacenters 504 and 506). As discussed, the respective VMs (e.g. 501 and 503) are managed throughout their lifecycle by respective virtual machine managers (VMMs) 508 and 510.
From the moment of a VM's (e.g., 501 and 503) inception and operation, the VM obtains a temporary IP address assigned by a respective hypervisor (e.g., VMM 508 and 510). The temporary IP address can be stored and maintained in respective databases in the NVI (e.g., 512 and 514). Through the whole lifecycle of the assigned VMs, the temporary IP addresses can change; however, as addresses change or resources are added and/or removed, the current temporary IP addresses are maintained in the respective databases. The databases (e.g., 512 and 514) are also configured to store globally identifiable identities in association with each virtual machine's assigned address.
By maintaining a mapping between unchanging unique IDs and potentially changing but maintained temporary IP addresses, the NVI can be configured to plug/unplug a logically defined unicast cable between any two given network entities using the unchanging unique IDs (so long as one of the communicating entities is a VM within the NVI). According to some embodiments, the NVI constructs the logical network by defining unicast cables to plug/unplug, avoiding processing of packet metadata. In some embodiments, centrally positioned switches (software or hardware) can still be employed for connecting the underlying physical network, but
conventional problems associated with dropping multicast traffic based on packet metadata do not arise, as communications occur over the virtual unicast cables, or communication policy prevents communication directly at the VM. The network control for VMs can therefore be globally distributed given that the VM ID is globally identifiable, and operates without location specific packet metadata.
According to some embodiments, respective hypervisors and associated
DBMS in the NVI have fixed locations, i.e., they typically do not move and/or change their physical locations. Thus, in some embodiments, globally distributed hypervisors and DBMS can use the conventional network technologies to establish connections underlying the logical network. Such conventional network technologies for constructing the underlying architecture used by the NVI can be hardware based, for which command-line-interface (CLI) based configuration methods are sufficient and very suitable. For example, a CLI-based virtual private network (VPN) configuration technology can be used for connecting globally distributed hypervisors and DBMS.
Various methodologies exist for assigning globally unique identifiers. In some embodiments, the known Universally Unique Identifier ("UUID") methodology is executed to identify each VM. For example, each VM can be assigned a UUID upon creation of its image file. In another example, IPv6 addresses can be assigned to provide globally unique addresses. Once assigned, the relationship between the UUID and the physically associated address for any virtual machine can be stored for later access (e.g., in response to a communication event). In other embodiments, other globally identifiable unique and unchanging identifiers can be used in place of a UUID.
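As a hedged illustration of such ID assignment (using Python's standard uuid and ipaddress modules; the tenant prefix 2001:db8::/64 is a documentation-only example, not a value from the disclosure):

# Illustrative sketch: assigning a globally unique, never-changing ID to a VM
# at image-creation time, using either a random UUID or an IPv6 address drawn
# from a tenant prefix.
import uuid
import ipaddress

def new_vm_uuid() -> str:
    # RFC 4122 random UUID; fixed for the VM's whole lifecycle.
    return str(uuid.uuid4())

def new_vm_ipv6(tenant_prefix: str, index: int) -> str:
    # Derive an IPv6-style global ID from a tenant prefix (illustrative only).
    net = ipaddress.ip_network(tenant_prefix)
    return str(net[index])

print(new_vm_uuid())                      # e.g. a random UUID string
print(new_vm_ipv6("2001:db8::/64", 1))    # '2001:db8::1'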
In some embodiments, the UUID of a VM will not change throughout the VM's complete lifecycle. According to one embodiment, each virtual cable between two VMs is then defined on the respective global identifiers. In some implementations,
the resulting logical network constructed by plugged unicast cables over the NVI is also completely defined by the UUIDs of the plugged VMs. In further embodiments, the NVI is configured to plug/unplug the unicast cables in real-time according to a given set of network control policy in the DBMS. For example, a tenant 516 can securely access (e.g., via SSL 518) the control hub of the logical network to define a firewall policy for each communication cable in the logical network.
According to some aspects, any logical network defined on the never-changing UUIDs of the VMs can have network QoS (including, for example, scalability) addressed by programming purely in software. According to other aspects, such logical networks are easy to change, in both topology and scale, by SDN methods, even across datacenters.
Once a tenant network is established, the tenant can implement a desired firewall using, for example, SDN programming. According to some embodiments, the tenant can construct a firewall with a trans-datacenter distribution. Shown in Fig. 6 is an example of a distributed firewall 600. Virtual resources of the tenant A 602, 604, and 606 span a number of data centers (e.g., 608, 610, and 612) connected over a communication network (e.g., the Internet 620). Each datacenter provides virtual resources to other tenants (e.g., at 614, 616, and 618), which are isolated from tenant A's network. Based on management of both internal and external communication through the plugging/unplugging of unicast cables, tenant A is able to define a communication policy that enables communication on a cable-by-cable basis. As communication events occur, the communication policy is checked to ensure that each communication event is permitted. For example, a cable can be plugged in real-time in response to VM 602 attempting to communicate with VM 604. For example, the communication policy defined by tenant A can permit all communication between VM 602 and VM 604. Thus, a communication initiated at 602 with destination 604 passes the firewall at 622. Upon receipt, the communication policy can be checked again to ensure that a given communication is permitted, in essence passing the firewall at 624. VM 606 can likewise be protected from both internal VM communication and externally involved communication, shown for illustrative purposes at 626.
Fig. 7 illustrates an example process 700 for defining and/or maintaining a tenant network. The process 700 can be executed by an NVI system to enable a tenant to acquire resources and define their own network across rented cloud resources. The process 700 begins at 702 with a tenant requesting resources. In some embodiments, various processes or entities can also request resources to begin process 700 at 702. In response to a request to allocate/rent resources, a hypervisor or VMM having available resources can be selected. In some embodiments, hypervisors can be selected based on pricing criteria, availability, etc. At 704, the hypervisor creates a VM assigned to the requestor with a globally uniquely identifiable id (e.g., a
UUID/IPv6 address). The global ID can be added to a database for the tenant network. Each global ID is associated with a temporary physical address (e.g., an IP address available from the NVI) assigned to the VM by its hypervisor. The global ID and the temporary physical address for the VM are associated and stored at 706. In one example, a hypervisor creates, in the tenant's entry in the NVI DB, a new entry: the UUID/IPv6 for the newly created VM together with the current network address of the VM (IP below denotes the current physical network address which is mapped to the UUID/IPv6 of the VM over the hypervisor).
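A minimal sketch of such a tenant DB entry, assuming a simple in-memory mapping in place of whatever DBMS is actually used, is as follows; the class and method names are hypothetical:

# Sketch of the tenant's NVI DB entry handling: at VM creation (steps 704/706)
# the hypervisor records global_id -> temporary IP; on re-allocation without
# crypto (step 716) the destination hypervisor overwrites the temporary IP
# while the global ID stays fixed.
class TenantDB:
    def __init__(self):
        self.mapping = {}          # global_id -> current temporary IP

    def register_vm(self, global_id: str, temp_ip: str) -> None:
        self.mapping[global_id] = temp_ip

    def move_vm(self, global_id: str, new_temp_ip: str) -> None:
        # Called by the destination hypervisor after migration.
        if global_id not in self.mapping:
            raise KeyError("unknown VM in this tenant network")
        self.mapping[global_id] = new_temp_ip

    def lookup_id_by_ip(self, temp_ip: str):
        # Reverse lookup used when a packet only carries physical addressing.
        for gid, ip in self.mapping.items():
            if ip == temp_ip:
                return gid
        return None

db = TenantDB()
db.register_vm("uuid-vm1", "10.0.0.5")
db.move_vm("uuid-vm1", "172.16.3.9")      # location changed, ID unchanged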
According to various embodiments, the tenant and/or resource requestor can also implement cryptographic services. For example, the tenant may wish to provide integrity protection on VM IDs for additional security. If crypto protection is enabled 708 YES, then optional cryptographic functions include applying public-key cryptography to create a PKI certificate Cert(UUID/IPv6) and a digital signature Sign(UUID/IPv6, IP) for each tenant VM such that the correctness of the mapping (UUID/IPv6, IP) can be cryptographically verified by any entity using
Cert(UUID/IPv6) and Sign(UUID/IPv6, IP). In one embodiment, a cryptographic certificate for the VM ID and signature for the mapping between the ID and the VM's current physical location in IP address are created at 710 and stored, for example, in
the tenant database at 712. Process 700 can continue at 714. Responsive to re-allocation of a VM resource (including, for example, movement of VM resources), a respective hypervisor (for example, a destination hypervisor ("DH")) takes over maintenance of the tenant's entry in the NVI DB for the moved VM. The moved VM is assigned a new address consistent with the destination hypervisor's network. Once the new address is assigned, a new mapping between the VM's global ID and the new address is created (let IP' denote the new network address for the VM over DH). At 714, the DH creates a new signature Sign(UUID/IPv6, IP') in the UUID/IPv6 entry to replace the prior and now invalid signature Sign(UUID/IPv6, IP).
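The signature maintenance described above can be illustrated with the following hedged sketch, which uses the third-party Python cryptography package with RSA and SHA-256 purely for concreteness; the disclosure does not mandate any particular algorithm, key size, or encoding of the (UUID/IPv6, IP) pair:

# Hedged sketch of Sign(UUID/IPv6, IP) and its refresh on migration.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

vm_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

def sign_mapping(global_id: str, temp_ip: str) -> bytes:
    data = f"{global_id}|{temp_ip}".encode()
    return vm_key.sign(data, padding.PKCS1v15(), hashes.SHA256())

def verify_mapping(global_id: str, temp_ip: str, signature: bytes) -> bool:
    data = f"{global_id}|{temp_ip}".encode()
    try:
        vm_key.public_key().verify(signature, data,
                                   padding.PKCS1v15(), hashes.SHA256())
        return True
    except Exception:
        return False

sig_old = sign_mapping("uuid-vm1", "10.0.0.5")
# After migration at 714, the destination hypervisor re-signs with the new IP,
# invalidating the old (UUID, IP) binding.
sig_new = sign_mapping("uuid-vm1", "172.16.3.9")
assert verify_mapping("uuid-vm1", "172.16.3.9", sig_new)
assert not verify_mapping("uuid-vm1", "10.0.0.5", sig_new)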
If crypto protection of VM IDs is not enabled 708 NO, movement of VMs in the tenant network can be managed at 716 by the DH associating a new physical address with the global ID assigned to the VM. The new association is stored in the tenant's entry in the NVI DB, defining the tenant network. In some embodiments, a tenant may already have allocated resources through cloud datacenter providers. In some examples, the tenant may access an NVI system to make identifying information for the already allocated resources known. The NVI can then assign global IDs of VMs to the physically associated addresses of those resources. As discussed above, the identities and mappings can be cryptographically protected to provide additional security.
Shown in Fig. 8 is an example PKI certificate that can be employed in various embodiments. In some implementations, known security methodologies can be implemented to protect the cryptographic credential of a VM (the private key used for signing Sign(UUID/IPv6, IP)) and to migrate credentials between hypervisors within a tenant network (e.g., at 714 of process 700). In one example, known "Trusted Computing Group" (TCG) technology is implemented to protect and manage cryptographic credentials. For example, a TPM module can be configured to protect and manage credentials within the NVI system and/or tenant network. In some implementations, known protection methodologies can include hardware-based implementations, and hence can prevent very strong attacks on the NVI; for example, they can protect against attacks launched by a datacenter system administrator. According to various embodiments, TCG technology also supports credential migration (e.g., at 714).
Once a tenant logical network is defined (for example, by execution of process 700), the tenant can establish a communication policy within their network. For example, the tenant can define algorithms for plugging/unplugging unicast cables defined between VMs in the tenant network, and unicast cables connecting external addresses to internal VMs of the tenant network. According to some embodiments, as the algorithms are executed for communications, they can be referred to as communication protocols. In some embodiments, the tenant can define
communication protocols for VMs as senders and as receivers.
Example Distributed Cloud Tenant Firewall
Shown in Fig. 9 is an example process flow 900 for execution of a tenant-defined communication policy. The process 900 illustrates an example flow for a sender-defined protocol (i.e., initiated by a VM in the tenant network). At execution of 900, VM1, which is associated with a physically associated address ("SIP"), has a globally unique ID assigned by the logical network (uuid/ipv6=SRC), and is managed by its respective hypervisor ("SH"), is attempting communication with VM2, which has a physically associated address ("DIP") and global ID ("uuid/ipv6=DST") and is managed by its respective hypervisor ("DH"). As discussed above, control components in the NVI system can include the respective hypervisors of respective cloud providers, where the hypervisors are specially configured to perform at least some of the functions for generating, maintaining, and/or managing communication in an NVI network. In other implementations, each hypervisor can be coupled with one or more proxy entities configured to work with the respective hypervisors to provide the functions for generating, maintaining, and/or managing communication in the tenant network. The processes for executing communication policies (e.g., 900 and 1000) are discussed in some examples with reference to hypervisors performing operations; however, one
should appreciate that the operations discussed with respect to the hypervisors can be performed by a control component, the hypervisors, and/or respective hypervisors and respective proxy entities.
According to one embodiment, the process 900 begins at 902 with SH intercepting a network packet generated by VM1, wherein the network packet includes physically associated addressing (to DIP). In some embodiments, the hypervisor SH, and/or the hypervisor in conjunction with a proxy entity, can be configured to capture communication events at 902. The communication event includes a communication initiated at VM1 and addressed to VM2. The logical and/or physically associated addresses for each resource within the tenant's network can be retrieved, for example, by SH. In one example, a tenant database entry defines the tenant's network based on globally unique identifiers for each tenant resource (e.g., VMs) and their respective physically associated addresses (e.g., addresses assigned by respective cloud providers to each VM). In some embodiments, the tenant database entry also includes certificates and signatures for confirming the mappings between the global ID and physical address for each VM.
At 904, the tenant database can be accessed to look up the logical addressing for VM2 based on the physically associated address (e.g., DIP) in the communication event. Additionally, the validity of the mapping can also be confirmed at 906 using Cert(DST), Sign(DST, DIP), for example, as stored in the tenant database. If the mapping is not found and/or the mapping is not validated against the digital certificate, the communication event is terminated (e.g., the virtual communication cable VM1 is attempting to use is unplugged by SH). Once a mapping is found and/or validated at 906, a system communication policy is checked at 908. In some embodiments, the communication policy can be defined by the tenant as part of creation of their network. In some implementations, the NVI system can provide default communication policies. Additionally, tenants can update and/or modify existing communication policies as desired. Communication policies may be stored in the tenant's entry in the NVI database or may be referenced from other data locations within the tenant
network.
Each communication policy can be defined based on the global IDs assigned to communication partners. If, for example, the communication policy specifies (SRC, DST: unplug), the communication policy prohibits communication between SRC and DST, 910 NO. At 912, the communication event is terminated. If, for example, the communication policy permits communication between SRC and DST (SRC, DST: plug), SH can plug the unicast virtual cable between SRC and DST, permitting communication at 914. The process 900 can also include additional but optional cryptographic steps. For example, once SH plugs the cable between SRC and DST, SH can initiate a cryptographic protocol (e.g., IPsec) with DH to provide
cryptographic protection on application layer data in the network packet.
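The sender-side checks of process 900 can be summarized in the following non-authoritative sketch; the mapping, policy, and plugged structures are stand-ins for the tenant's entry in the NVI DB, and all names are illustrative only:

# Non-authoritative sketch of the sender-side flow of process 900.
mapping = {"uuid-vm1": "10.0.0.5", "uuid-vm2": "172.16.3.9"}   # global ID -> temp IP
policy  = {("uuid-vm1", "uuid-vm2"): "plug"}                   # tenant policy records
plugged = set()                                                # live unicast cables

def on_vm_send(src_id: str, dst_ip: str) -> bool:
    """Executed by SH (or its proxy entity) when VM1 emits a packet to dst_ip."""
    # 904: map the physically associated destination address to a global ID.
    dst_id = next((gid for gid, ip in mapping.items() if ip == dst_ip), None)
    if dst_id is None:
        return False                         # 906 NO: no mapping, terminate event
    # 906: Cert(DST)/Sign(DST, DIP) verification would be performed here.
    # 908/910: consult the tenant's communication policy, keyed on global IDs.
    if policy.get((src_id, dst_id)) != "plug":
        plugged.discard((src_id, dst_id))    # 912: unplug / terminate
        return False
    plugged.add((src_id, dst_id))            # 914: plug the unicast cable
    return True

assert on_vm_send("uuid-vm1", "172.16.3.9") is True
assert on_vm_send("uuid-vm1", "203.0.113.7") is False          # unknown peer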
According to some embodiments, process 900 can be executed on all types of communication for the tenant network. For example, communication events can include VM-to-external-address communication. In such an example, DST is a conventional network identity (e.g., an IP address) rather than a global ID assigned to the logical network. The communication policy defined for such communication can be based on a network edge policy for VM1. In some settings, the tenant can define a network edge policy for the entire network, implemented through execution of, for example, process 900. In additional settings, the tenant can define network edge policies for each VM in the tenant network.
Fig. 10 illustrates another example execution of a communication policy within a tenant network. At execution of 1000, VM2, which is associated with a physically associated address ("DIP"), has a globally unique ID assigned by the logical network (uuid/ipv6=DST), and is managed by its respective hypervisor ("DH"), is receiving communication from VM1, which has a physically associated address ("SIP") and global ID ("uuid/ipv6=SRC") and is managed by its respective hypervisor ("SH").
At 1002, a communication event is captured. In this example, the
communication event is the receipt of a message of a communication from VM1. The communication event can be captured by a control component in the NVI. In one
example, the communication event is captured by DH. Once the communication event is captured, the logical addressing information for the communication can be retrieved. For example, the tenant's entry in the NVI database can be used to perform a lookup for a logical address for the source VM based on SIP within a communication packet of the communication event at 1004. At 1006, validity of the communication can be determined based on whether the mapping between the source VM and destination VM exists in the tenant's entry in the NVI DB, for example, as accessible by DH.
Additionally, validity at 1006 can also be determined using certificates for logical mappings. In one example, DH can retrieve a digital certificate and signature for VM1 (e.g., Cert(SRC), Sign(SRC,SIP)). The certificate and signature can be used to verify the communication at 1006. If the mapping does not exist in the tenant database or the certificate/signature is not valid 1006 NO, then the communication event is terminated at 1008.
If the mapping exists and is valid 1006 YES, then DH can operate according to any defined communication policy at 1010. If the communication policy prohibits communication between SRC and DST (e.g., the tenant database can include a policy record "SRC, DST : unplug") 1012 NO, then the communication event is terminated at 1008. If the communication is allowed 1012 YES (e.g., the tenant database can include a record "SRC, DST: plug"), then DH permits communication between VM1 and VM2 at 1014. In some examples, once DH determines a communication event is valid and allowed, DH can be configured to use a virtual cable between the
communicating entities (e.g., VM1 and VM2). As discussed above with respect to process 900, additional cryptographic protections can be executed as part of communication between VM1 and VM2. For example, DH can execute cryptographic protocols (e.g., IPsec) to create and/or respond to communications of SH to provide cryptographic protection of application layer data in the network packets.
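A correspondingly brief sketch of the receiver-side checks of process 1000 follows; the verify_mapping argument is assumed to wrap the Cert(SRC)/Sign(SRC, SIP) verification, and all names are illustrative:

# Sketch of the mirrored receiver-side check of process 1000.
def on_vm_receive(mapping, policy, verify_mapping, src_ip, dst_id) -> bool:
    """Executed by DH (or its proxy) when a packet for VM2 arrives from src_ip."""
    # 1004: reverse-lookup the sender's global ID from its temporary address.
    src_id = next((gid for gid, ip in mapping.items() if ip == src_ip), None)
    if src_id is None or not verify_mapping(src_id, src_ip):
        return False                                  # 1006 NO -> 1008 terminate
    # 1010/1012: apply the tenant policy at the receiving end as well.
    return policy.get((src_id, dst_id)) == "plug"     # 1014 permit, else terminate

# Example wiring with a stand-in verifier:
ok = on_vm_receive({"uuid-vm1": "10.0.0.5"},
                   {("uuid-vm1", "uuid-vm2"): "plug"},
                   lambda gid, ip: True,
                   "10.0.0.5", "uuid-vm2")
assert ok is True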
According to some embodiments, process 1000 can be executed on all types of communication for the tenant network. For example, communication events can include external to VM address communication. In such an example SRC is a
conventional network identity rather than a global ID assigned to the logical network (e.g., an IP address). The communication policy defined for such communication can be defined based on a network edge policy for the receiving VM. In some settings, the tenant can define a network edge policy for the entire network implemented through execution of, for example, process 1000. In additional settings, the tenant can define network edge policies for each VM in the tenant network.
According to some embodiments, the tenant can define communication protocols for both senders and recipients, and firewall rules can be executed at each end of a communication over the logical tenant network.
User Interface for Self Service Network Definition
Shown in Fig. 11 is a screen shot of an example user interface 1100. The user interface ("UI") 1100 is configured to accept tenant definition of network topology. In some embodiments, the user interface is configured to enable a tenant to add virtual resources (e.g., VMs) to security groups (e.g., at 1110 and 1130). The UI 1100 can be configured to allow the tenant to name such security groups. Responsive to adding a VM to a security group, the system creates and plugs virtual cables between the members of the security group. For example, VMs windows1 (1112), mailserver (1114), webserver (1116), and windows3 (1118) are members of the HR-Group. Each member has a unicast cable defining a connection to each other member of the group. In one example, for windows1 there is a respective connection for windows1 as a source to mailserver, webserver, and windows3 defined within HR-Group 1110. Likewise, virtual cables exist for R&D-Group 1130.
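For illustration only, the UI action of adding VMs to a security group could translate into plugging one unicast cable per ordered pair of members, as in the following sketch (group and VM names are taken from the Fig. 11 example; the helper name is hypothetical):

# Illustrative sketch: one logically defined unicast cable per ordered
# (source, destination) pair of security-group members.
from itertools import permutations

def cables_for_group(members):
    return set(permutations(members, 2))

hr_group = ["windows1", "mailserver", "webserver", "windows3"]
for src, dst in sorted(cables_for_group(hr_group)):
    print(f"plug {src} -> {dst}")
# 4 members -> 12 directed cables; R&D-Group would be handled the same way.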
User interface 1100 can also be configured to provide other management functions. A tenant can access UI 1100 to define communication policies, including network edge policies at 1140, manage security groups by selecting 1142, control passwords at 1144, manage VMs at 1146 (including, for example, adding VMs to the tenant network, requesting new VMs, etc.), and manage users at 1148.
According to some aspects, the communications protocol suite operates on communication inputs or addressing that is logical. For example, execution of
communication in processes 900 and 1000 can occur using global IDs in the tenant network. Thus, communication does not require any network location information about the underlying physical network. All physically associated addresses (e.g., IP addresses) of the tenant's rental VMs (the tenant's internal nodes) are temporary IP addresses assigned by the respective providers. These temporary IP addresses are maintained in a tenant database, which can be updated as the VMs move, replicate, terminate, etc. (e.g., through execution of process 700). Accordingly, these temporary IP addresses play no role in the definition of the tenant's distributed logical network and firewall/communication policy in the cloud. The temporary IP addresses are best envisioned as pooled network resources. The pooled network resources are employed as commodities for use in the logical network, and may be consumed and even discarded depending on the tenant's needs. According to another aspect, the tenant's logical network is completely and thoroughly de-coupled from the underlying physical network. In such an architecture, software-developed network functions can be executed to provide network QoS in a simplified "if-then-else" style of high-level language programming. This simplification allows a variety of QoS guarantees to be implemented in the tenants' logical networks. For example, network QoS functions which can be implemented as SDN programming at vNICs include traffic diversion, load balancing, intrusion detection, and DDoS scrubbing, among other options.
For example, an SDN task that the NVI system can implement can include automatic network traffic diversion. Various embodiments of NVI systems/tenant logical networks distribute network traffic to the finest possible granularity: at the very spot of each VM making up the tenant network. If one uses such VMs to host web services, the network traffic generated by web service requests can be measured and monitored to the highest precision at each VM. When requests made to a given VM reach a threshold, the system can be configured to execute automatic replication of the VM and balance requests between the pair of VMs (e.g., the NVI system can request a new resource, replicate the responding VM, and create a diversion policy to the new VM). In one example, the system can automatically replicate an overburdened or over-threshold VM, and new network requests can be diverted to the newly created replica.
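A hedged sketch of such threshold-based replication and diversion follows; the threshold value and the replicate/divert callbacks are placeholders for whatever mechanisms a given embodiment uses:

# Sketch of per-VM traffic diversion: request counts are observed at each vNIC,
# and crossing a threshold triggers replication and diversion of new requests.
REQUEST_THRESHOLD = 1000        # requests per measurement interval (illustrative)

def balance(vm_id: str, request_count: int, replicate, divert) -> None:
    """replicate(vm_id) -> replica_id; divert(vm_id, replica_id) installs routing."""
    if request_count > REQUEST_THRESHOLD:
        replica_id = replicate(vm_id)       # may be placed in another datacenter
        divert(vm_id, replica_id)           # new requests go to the replica

# Example wiring with stand-in callbacks:
balance("uuid-web1", 1500,
        replicate=lambda vm: vm + "-replica",
        divert=lambda vm, rep: print(f"diverting new requests from {vm} to {rep}"))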
Because the logical network over the NVI technology can have a trans-datacenter distribution, such replicas can be created in a different datacenter to make the tenant network highly elastic in trans-datacenter scalability. In some implementations, new resources can be requested from cloud providers to advantageously locate the new resources.
As the NVI technology completely distributes network control policy to each VM, additional advantages can be realized in various embodiments. In particular, any one or more of the following advantages can be realized in various embodiments over conventional centralized deployment: (i) on-VM-spot unplug avoids sending packets to, and dropping them at, central control points, thereby reducing network bandwidth consumption; (ii) fine-granularity distribution makes the execution of security policy less vulnerable to DDoS-like attacks; (iii) upon detection of DDoS-like attacks on a VM, moving the VM being attacked, or even simply changing its temporary IP address, can resolve the attack.
It is realized that the resulting logical network provides an intelligent layer-2 network of practically unlimited size (e.g., at the 2^128 level if the logical network is defined over IPv6 addresses) on cloud-based resources. It is further realized that various implementations of the logical network manage communication without broadcast, as every transmission is delivered over a unicast cable between source and destination (e.g., between two VMs in the network). Thus, the NVI system and/or logical network solves a long-felt but unsolved need for a large layer-2 network. By contrast, all previous technologies (overlay technologies) for constructing large-scale layer-2 networks, e.g., MPLS, STT, VXLAN, NVGRE, resort to various protocol formulations of physical patching, i.e., combining component logical networks which are defined over component switches and/or switches over routers. These previous overlay technologies all involve protocol negotiations among component logical networks over their respective underlying component physical networks, and hence inevitably involve physical attributes of the underlying physical networks. They are complex and inefficient, are very difficult to make interoperable across different cloud operators, and consequently make it very hard to form a cloud network standard. The NVI-based new overlay technology in this disclosure is the first overlay technology which uses the global management and global mapping intelligence of an infrastructure formed by hypervisors and DBs to achieve a practically unlimited-size, globally distributed logical network, without need of protocol negotiation among component networks. The NVI-based overlay technology enables simple, web-service-controllable and manageable interoperability for constructing a practically unlimited-scale and on-demand elastic cloud network.
Testing Examples
Network bandwidth tests have been executed and compared in the following two cases:
1 ) Two VMs which are plugged with a unicast cable using embodiments of the NVI technology, and
2) Two VMs which are allowed to communicate under the conventional packet filtering technique ebtables, which is the well-known Linux Ethernet bridge firewall technique.
In both cases, the two pairs of VMs (4 VMs) are running on the same VMM on the same hardware server. Table 1 below provides network traffic measurements in three instances of comparison, which are measured by the known tool NETPERF. The numbers shown in the table are in megabits (10^6 bits) per second.
There is no perceivable difference in network bandwidth between a firewall in the NVI plug/unplug unicast cable technology and the conventional Linux Ethernet bridge firewall technology. One should realize that the comparison has been executed on plug-cable / pass-packets operations. It is expected that if the comparison were executed on operations including unplug-cable / drop-packets, then the difference in traffic would be greater, and would increase as the number of VMs rented by a tenant increases. In conventional approaches, the packet drop will take place at a centralized network edge point. In, for example, an OpenFlow consolidated logical switch, the packet drop must take place behind the consolidated switch, which means the firewall edge point that drops packets can be quite distant from the message-sending VM, which translates to a large amount of wasted network traffic in the system.
Various embodiments also provide virtual machines that each have PKI certificates; thus, not only can the ID of the VM get crypto-quality protection, but the VM's IP packets and IO storage blocks can also be encrypted by the VMM. In one example, the crypto credential of a VM's certificate is protected and managed by the VMM, and the crypto mechanisms which manage VM credentials are in turn protected by a TPM of the physical server. Further embodiments provide for a vNIC of a VM that never needs to change its identity (i.e., the global address in the logical network does not change, even when the VM changes location, and even when the location change is trans-datacenter). This results in network QoS programming at a vNIC that can avoid VM location-changing complexities. By contrast, packet metadata processing in a switch inevitably involves those location complexities. As discussed, NVI systems and logical network implementations provide load balancing, traffic diversion, intrusion detection, DDoS scrubbing, and similar network QoS tasks as simplified SDN programming at vNICs. In some examples, a global ID used in the tenant network can include an IPv6 address.
Example Implementation of Tenant Programmable Trusted Network
According to one embodiment, a cloud datacenter (1) runs a plural number of network virtualization infrastructure (NVI) hypervisors, and each NVI hypervisor hosts a plural number of virtual machines (VMs) which are rented by one or more tenants. Each NVI hypervisor also runs a mechanism for public-key based crypto key management and for the related crypto credential protection. This key-management and credential-protection mechanism cannot be affected or influenced by any entity in
any non-prescribed manner. The in-NVI-hypervisor key-management and
credential-protection mechanism can be implemented using known approaches (e.g., in the US Patent Application 13/601,053, which claims priority to Provisional Application number 61530543), which application is incorporated herein by reference in its entirety. Additional known security approaches include the Trusted Computing Group technology and TXT technology of Intel. Thus, the protection on the crypto-credential management system can be implemented even against a potentially rogue system administrator of the NVI.
2) In one embodiment, the NVI uses the key-management and
credential-protection mechanism to manage a public key and protect the related crypto credential for a VM: Each VM has an individually and distinctly managed public key, and also has the related crypto credential so protected.
3) According to one embodiment, the NVI executes known cryptographic algorithms to protect the network traffic and the storage input/output data for a VM: Whenever the VM initiates a network sending event or a storage output event, the NVI operates an encryption service for the VM, and whenever the VM responds to a network receiving event or a storage input event, the NVI operates a decryption service for the VM.
4) In one embodiment, the network encryption service in (3) uses the public key of the communication peer of the VM; and the storage output encryption service in (3) uses the public key of the VM; both decryption services in (3) use the protected crypto credential that the NVI-hypervisor protects for the VM.
5) According to one embodiment, if the communication peer of the VM in (4) does not possess a public key, then the communication between the VM and the peer should be routed via a proxy entity (PE) which is a designated server in the datacenter. The PE manages a public key and protects the related crypto credentials for each tenant of the datacenter. In this case of (5), the network encryption service in (3) shall use a public key of the tenant which has rented the VM. Upon receipt of an encrypted communication packet from an NVI-hypervisor for a VM, the PE will
provide a decryption service, and further forward the decrypted packet to the communication peer which does not possess a public key. Upon receipt of an unencrypted communication packet from the no-public-key communication peer to the VM, the PE will provide an encryption service using the VM's public key.
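The key-selection rule implied by items (3) through (5) can be sketched as follows; the key objects are simple string stand-ins and the function name is hypothetical:

# Sketch of the outbound key-selection rule: use the peer's public key when the
# peer has one, otherwise encrypt under the tenant's key and route via the PE.
def select_encryption(peer_id, peer_public_keys, tenant_public_key):
    """Return (key, next_hop) for an outbound packet from a tenant VM."""
    if peer_id in peer_public_keys:
        # Peer is itself a keyed VM: encrypt end-to-end under the peer's key.
        return peer_public_keys[peer_id], "peer"
    # No public key at the peer: encrypt under the tenant's key; the PE decrypts
    # and forwards the plaintext packet to the keyless peer.
    return tenant_public_key, "proxy-entity"

key, hop = select_encryption("uuid-vm7",
                             {"uuid-vm7": "PUBKEY-VM7"},
                             "PUBKEY-TENANT-A")
assert hop == "peer"
key, hop = select_encryption("198.51.100.20", {}, "PUBKEY-TENANT-A")
assert hop == "proxy-entity"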
6) In one embodiment, the NVI-hypervisor and PE provide
encryption/decryption services for a tenant using instructions in a whitelist which is composed by the tenant. The whitelist contains (i) public-key certificates of the VMs which are rented by the tenant, and (ii) the ids of some communication peers which are designated by the tenant. The NVI-hypervisor and PE will perform
encryption/decryption services only for the VMs and communication peers which have public-key certificates and/or ids listed in the tenant's whitelist.
7) In one embodiment, a tenant uses the well-known web-service CRUD (create, retrieve, update, or delete) to compose the whitelist in (6). A tenant may also compose the whitelist using any other appropriate interface or method. Elements in the whitelist are the public-key certificates of the VMs which are rented by the tenant, and the ids of the communication peers which are designated by the tenant. The tenant uses this typical web-service CRUD manner to compose its whitelist. The
NVI-hypervisor and PE use the tenant-composed whitelist to provide
encryption/decryption services. In this way, the tenant, in a self-servicing manner, instructs the datacenter to define, maintain, and manage a virtual private network (VPN) for the VMs it rents and for the communication peers it designates for its rental VMs.
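A minimal sketch of such a CRUD-maintained whitelist, with an in-memory dictionary standing in for the actual web-service backend, is as follows; all names, including the example peer, are illustrative:

# Sketch of a tenant-composed whitelist consulted by the NVI-hypervisor/PE
# before providing encryption/decryption services.
whitelist = {}    # entry id -> {"cert": ..., "peer_ids": [...]}

def create(entry_id, cert=None, peer_ids=None):
    whitelist[entry_id] = {"cert": cert, "peer_ids": peer_ids or []}

def retrieve(entry_id):
    return whitelist.get(entry_id)

def update(entry_id, **fields):
    whitelist[entry_id].update(fields)

def delete(entry_id):
    whitelist.pop(entry_id, None)

def service_allowed(vm_or_peer_id) -> bool:
    # Serve only whitelisted VMs and peers designated by the tenant.
    return any(vm_or_peer_id == eid or vm_or_peer_id in e["peer_ids"]
               for eid, e in whitelist.items())

create("uuid-vm1", cert="CERT-VM1", peer_ids=["partner.example.com"])
assert service_allowed("partner.example.com")
delete("uuid-vm1")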
8) According to one embodiment, for the VMs which are rented by a tenant T, the PE can periodically create a symmetric conference key for T, and securely distribute the conference key to each NVI-hypervisor which hosts the VM(s) of T. The cryptographically protected secure communications among the VMs, and those between the VMs and the PE in (3), (5) and (6), can use symmetric
encryption/decryption algorithms and the conference key. The secure distribution of the conference key from PE to each NVI-hypervisor can use the public key of each
VM which is managed by the underlying NVI-hypervisor in (1) and (2). Upon receipt of the conference key, each NVI-hypervisor secures it using its crypto-credential protection mechanism in (1) and (2).
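As a hedged illustration of item (8), the following sketch uses the third-party Python cryptography package with RSA-OAEP to wrap a fresh 256-bit conference key for each hosting NVI-hypervisor; in this sketch the generated key pairs stand in for the VM public keys that each NVI-hypervisor manages, and no particular algorithm or key size is mandated by the disclosure:

# Sketch of conference-key distribution from the PE to hosting NVI-hypervisors.
import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# One representative key pair per hosting NVI-hypervisor (stand-in for the VM
# public keys managed by each hypervisor in (1) and (2)).
vm_keys_by_hv = {f"hv-{i}": rsa.generate_private_key(public_exponent=65537,
                                                     key_size=2048)
                 for i in range(3)}
conference_key = os.urandom(32)             # fresh symmetric key for tenant T

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# PE side: wrap the conference key once per hosting hypervisor.
wrapped = {hv: key.public_key().encrypt(conference_key, oaep)
           for hv, key in vm_keys_by_hv.items()}

# Hypervisor side: unwrap and place under the local credential-protection mechanism.
recovered = vm_keys_by_hv["hv-0"].decrypt(wrapped["hv-0"], oaep)
assert recovered == conference_key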
Shown in Fig. 12 is an example embodiment of a tenant programmable trusted network 1200. Fig. 12 illustrates both cases of the tenant T's private communication channels (e.g. 1202-1218) among its rental VMs (e.g., 1220 - 1230) and the PE (e.g., 1232). These communication channels can be secured either by the public keys of the VMs involved, or by a group's conference key. Shown in this example are 20 VMs rented by a tenant 1250. As shown, the tenant 1250 can define their trusted network using the known CRUD service 1252. In one example, the tenant uses the CRUD service to define a whitelist 1254. The whitelist can include a listing for identifying information on each VM in the tenant network. The whitelist can also include public-key certificates of the VMs in the tenant network, and the ids of the communication peers which are designated by the tenant. In some embodiments, the PE 1232 further provides functions of NAT (Network Address Translation) and firewall, as shown. In the embodiment illustrated, the PE can be the external communications facing interface 1234 for the virtual network.
According to some aspects, a VM in the trusted tenant network can only communicate or input/output data necessarily and exclusively via the communication and storage services which are provided by its underlying NVI-hypervisor. Thus, there is no other channel or route by which a VM can bypass its underlying NVI-hypervisor to communicate and/or exchange input/output data with any entity outside the VM. The NVI-hypervisor therefore cannot be bypassed when it performs encryption/decryption services for the VMs according to the instructions provided by the tenant. This non-bypassable property can be implemented via known approaches (e.g., by using VMware's ESX, Citrix's Xen, Microsoft's Hyper-V, Oracle's VirtualBox, the open source community's KVM, etc., for the underlying NVI technology).
Various embodiments achieve a tenant defined, maintained, and managed
virtual private network in a cloud datacenter. In some examples, the tenant defines their network by providing information on their rental VMs. The tenant can maintain and manage the whitelist for its rental VMs through the system. The tenant network is implemented such that network definition and maintenance can be done in a self-servicing and on-demand manner.
Various embodiments provide for a very low-cost Virtual Private Cloud (VPC) for arbitrarily small-sized tenants. For example, a large number of small tenants can now securely share network resources of the hosting cloud, e.g., share a large VLAN of the hosting cloud which is configured at low cost by the datacenter, and which in some examples can be executed and/or managed using SDN technology. Accordingly, the small tenant does not need to maintain any high-quality onsite IT infrastructure. The tenant now uses purely on-demand IT.
As the public-key certificates can be globally defined, the VPC provisioning methods discussed are also globally provisioned, i.e., a tenant is not confined to renting IT resources from one datacenter. Therefore, the various aspects and embodiments enable breaking the traditional vendor-locked-in style of cloud computing and provide truly open-vendor global utilities.
Shown in Fig. 14 is an example logical network 1400. According to one embodiment, a proxy entity 1402 is configured to operate in conjunction with a hypervisor 1404 of a respective cloud according to any QoS definitions for the logical network (e.g., as stored in database 1406). The three dots indicate that respective proxy entities and hypervisors can be located throughout the logical network to handle mapping and control of communication. For example, proxy entities and/or hypervisors can manage mapping between logical addresses of vNICs (1410-1416) and underlying physical resources managed by the hypervisor (e.g., physical NIC 1418), mapping between logical addresses of VMs, and execute communication control at vNICs of the front-end VMs (e.g., 1410-1416). In one embodiment, mapping enables construction of an arbitrarily large, arbitrary topology,
trans-datacenter layer-2 logical network (i.e., achieving the de-coupling of physical addressing). In another embodiment, control enables programmatic communication control, or in other words achieves an SDN.
According to one embodiment, the proxy entity 1402 is a specialized virtual machine (e.g., at respective cloud providers or respective hypervisors) configured for back-end servicing. In some examples, a proxy entity manages internal or external communication according to a communication policy defined on the logical addresses of the tenant's isolated network (e.g., according to a network edge policy). In other embodiments, the proxy entity executes the programming controls on vNICs of an arbitrary number of front-end VMs (e.g., 1408). The proxy entity can be configured to manage logical mappings in the network, and to update the respective mappings when the hypervisor assigns new physical resources to front-end VMs (e.g., 1408).
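For illustration, the mapping maintained by a proxy entity/hypervisor pair such as 1402/1404 can be sketched as a small table from logical vNIC addresses to physical NICs; the identifiers below reuse the reference numerals of Fig. 14 only as labels, and the new physical NIC is hypothetical:

# Sketch of the logical-to-physical binding a proxy entity maintains and updates
# when the hypervisor re-assigns physical resources to front-end VMs.
class ProxyEntity:
    def __init__(self):
        self.vnic_to_pnic = {}      # logical vNIC address -> physical NIC

    def bind(self, vnic_addr: str, physical_nic: str) -> None:
        self.vnic_to_pnic[vnic_addr] = physical_nic

    def rebind(self, vnic_addr: str, new_physical_nic: str) -> None:
        # Called when the hypervisor assigns new physical resources.
        self.vnic_to_pnic[vnic_addr] = new_physical_nic

    def resolve(self, vnic_addr: str) -> str:
        return self.vnic_to_pnic[vnic_addr]

pe = ProxyEntity()
pe.bind("vnic-1410", "pnic-1418")
pe.rebind("vnic-1410", "pnic-1419")     # hypothetical replacement physical NIC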
As discussed above, various aspects and functions described herein may be implemented as specialized hardware or software components executing in one or more computer systems or cloud based computer resources. There are many examples of computer systems that are currently in use. These examples include, among others, network appliances, personal computers, workstations, mainframes, networked clients, servers, media servers, application servers, database servers and web servers. Other examples of computer systems may include mobile computing devices, such as cellular phones and personal digital assistants, and network equipment, such as load balancers, routers and switches. Further, aspects may be located on a single computer system, may be distributed among a plurality of computer systems connected to one or more communications networks, or may be virtualized over any number of computer systems.
For example, various aspects and functions may be distributed among one or more computer systems configured to provide a service to one or more client computers, or to perform an overall task as part of a distributed system or a cloud based system. Additionally, aspects may be performed on a client-server or multi-tier system that includes components distributed among one or more server systems that perform various functions, and may be distributed through a plurality of cloud
providers and cloud resources. Consequently, examples are not limited to executing on any particular system or group of systems. Further, aspects and functions may be implemented in software, hardware or firmware, or any combination thereof. Thus, aspects and functions may be implemented within methods, acts, systems, system elements and components using a variety of hardware and software configurations, and examples are not limited to any particular distributed architecture, network, or communication protocol.
Referring to FIG. 13, there is illustrated a block diagram of a distributed computer system 1300, in which various aspects and functions are practiced. As shown, the distributed computer system 1300 includes one or more computer systems that exchange information. More specifically, the distributed computer system 1300 includes computer systems 1302, 1304 and 1306. As shown, the computer systems 1302, 1304 and 1306 are interconnected by, and may exchange data through, a communication network 1308. For example, components of an NVI-hypervisor system, NVI engine, can be implemented on 1302, which can communicate with other systems (1304-1306), which operate together to provide the functions and operations as discussed herein. In one example, system 1302 can provide functions for request and managing cloud resources to define a tenant network execution on a plurality of cloud providers. Systems 1304 and 1306 can include systems and/or virtual machines made available through the plurality of cloud providers.
In some embodiments, system 1304 and 1306 can represent the cloud provider networks, including respective hypervisors, proxy entities, and/or virtual machines the cloud providers assign to the tenant. In other embodiments, all systems 1302-1306 can represent cloud resources accessible to an end user via a communication network (e.g., the Internet) and the functions discussed herein can be executed on any one or more of systems 1302-1306. In further embodiments, system 1302 can be used by an end user or tenant to access resources of an NVI-hypervisor system (for example, implemented on at least computer systems 1304-1306). The tenant may access the NVI system using network 1308.
In some embodiments, the network 1308 may include any communication network through which computer systems may exchange data. To exchange data using the network 1308, the computer systems 1302, 1304 and 1306 and the network 1308 may use various methods, protocols and standards, including, among others, Fibre Channel, Token Ring, Ethernet, Wireless Ethernet, Bluetooth, IP, IPV6, TCP/IP, UDP, DTN, HTTP, FTP, SNMP, SMS, MMS, SS7, JSON, SOAP, CORBA, REST and Web Services. To ensure data transfer is secure, the computer systems 1302, 1304 and 1306 may transmit data via the network 1308 using a variety of security measures including, for example, TLS, SSL or VPN. While the distributed computer system 1300 illustrates three networked computer systems, the distributed computer system 1300 is not so limited and may include any number of computer systems and computing devices, networked using any medium and communication protocol.
As illustrated in FIG. 13, the computer system 1302 includes a processor 1310, a memory 1312, a bus 1314, an interface 1316 and data storage 1318. To implement at least some of the aspects, functions and processes disclosed herein, the processor 1310 performs a series of instructions that result in manipulated data. The processor 1310 may be any type of processor, multiprocessor or controller. Some exemplary processors include commercially available processors such as an Intel Xeon, Itanium, Core, Celeron, or Pentium processor, an AMD Opteron processor, a Sun UltraSPARC or IBM Power5+ processor and an IBM mainframe chip. The processor 1310 is connected to other system components, including one or more memory devices 1312, by the bus 1314.
The memory 1312 stores programs and data during operation of the computer system 1302. Thus, the memory 1312 may be a relatively high performance, volatile, random access memory such as a dynamic random access memory (DRAM) or static memory (SRAM). However, the memory 1312 may include any device for storing data, such as a disk drive or other non-volatile storage device. Various examples may organize the memory 1312 into particularized and, in some cases, unique structures to perform the functions disclosed herein. These data structures may be sized and
organized to store values for particular data and types of data. In some embodiments, each tenant can be associated with a data structure for managing information on a respective tenant network. The data structure can include information on virtual machines assigned to the tenant network, certificates for network members, globally unique identifiers assigned to the network members, etc.
Components of the computer system 1302 are coupled by an interconnection element such as the bus 1314. The bus 1314 may include one or more physical busses, for example, busses between components that are integrated within the same machine, but may include any communication coupling between system elements including specialized or standard computing bus technologies such as IDE, SCSI, PCI and
InfiniBand. The bus 1314 enables communications, such as data and instructions, to be exchanged between system components of the computer system 1302.
The computer system 1302 also includes one or more interface devices 1316 such as input devices, output devices and combination input/output devices. Interface devices may receive input or provide output. More particularly, output devices may render information for external presentation. Input devices may accept information from external sources. Examples of interface devices include keyboards, mouse devices, trackballs, microphones, touch screens, printing devices, display screens, speakers, network interface cards, etc. Interface devices allow the computer system 1302 to exchange information and to communicate with external entities, such as users and other systems.
The data storage 1318 includes a computer readable and writeable nonvolatile, or non-transitory, data storage medium in which instructions are stored that define a program or other object that is executed by the processor 1310. The data storage 1318 also may include information that is recorded, on or in, the medium, and that is processed by the processor 1310 during execution of the program. More specifically, the information may be stored in one or more data structures specifically configured to conserve storage space or increase data exchange performance.
The instructions stored in the data storage may be persistently stored as
encoded signals, and the instructions may cause the processor 1310 to perform any of the functions described herein. The medium may be, for example, optical disk, magnetic disk or flash memory, among other options. In operation, the processor 1310 or some other controller causes data to be read from the nonvolatile recording medium into another memory, such as the memory 1312, that allows for faster access to the information by the processor 1310 than does the storage medium included in the data storage 1318. The memory may be located in the data storage 1318 or in the memory 1312, however, the processor 1310 manipulates the data within the memory, and then copies the data to the storage medium associated with the data storage 1318 after processing is completed. A variety of components may manage data movement between the storage medium and other memory elements and examples are not limited to particular data management components. Further, examples are not limited to a particular memory system or data storage system.
Although the computer system 1302 is shown by way of example as one type of computer system upon which various aspects and functions may be practiced, aspects and functions are not limited to being implemented on the computer system 1302 as shown in FIG. 13. Various aspects and functions may be practiced on one or more computers having different architectures or components than that shown in FIG. 13. For instance, the computer system 1302 may include specially programmed, special-purpose hardware, such as an application-specific integrated circuit (ASIC) tailored to perform a particular operation disclosed herein, while another example may perform the same function using a grid of several general-purpose computing devices (e.g., running MAC OS System X with Motorola PowerPC processors) and several specialized computing devices running proprietary hardware and operating systems.
The computer system 1302 may be a computer system or virtual machine, which may include an operating system that manages at least a portion of the hardware elements included in the computer system 1302. In some examples, a processor or controller, such as the processor 1310, executes an operating system.
Examples of a particular operating system that may be executed include a
Windows-based operating system, such as the Windows NT, Windows 2000 (Windows ME), Windows XP, Windows Vista, Windows 7 or 8 operating systems, available from the Microsoft Corporation, a MAC OS System X operating system available from Apple Computer, one of many Linux-based operating system distributions, for example, the Enterprise Linux operating system available from Red Hat Inc., a Solaris operating system available from Sun Microsystems, or a UNIX operating system available from various sources. Many other operating systems may be used, and examples are not limited to any particular operating system.
The processor 1310 and operating system together define a computer platform for which application programs in high-level programming languages are written. These component applications may be executable, intermediate, bytecode or interpreted code which communicates over a communication network, for example, the Internet, using a communication protocol, for example, TCP/IP. Similarly, aspects may be implemented using an object-oriented programming language, such as .Net, SmallTalk, Java, C++, Ada, C# (C-Sharp), Objective C, or Javascript. Other object-oriented programming languages may also be used. Alternatively, functional, scripting, or logical programming languages may be used.
Additionally, various aspects and functions may be implemented in a non-programmed environment, for example, documents created in HTML, XML or other format that, when viewed in a window of a browser program, can render aspects of a graphical-user interface or perform other functions. Further, various examples may be implemented as programmed or non-programmed elements, or any combination thereof. For example, a web page may be implemented using HTML while a data object called from within the web page may be written in C++. Thus, the examples are not limited to a specific programming language and any suitable programming language could be used. Accordingly, the functional components disclosed herein may include a wide variety of elements, e.g., specialized hardware, virtualized hardware, executable code, data structures or data objects, that are
configured to perform the functions described herein.
In some examples, the components disclosed herein may read parameters that affect the functions performed by the components. These parameters may be physically stored in any form of suitable memory including volatile memory (such as RAM) or nonvolatile memory (such as a magnetic hard drive). In addition, the parameters may be logically stored in a proprietary data structure (such as a database or file defined by a user mode application) or in a commonly shared data structure (such as an application registry that is defined by an operating system). In addition, some examples provide for both system and user interfaces that allow external entities to modify the parameters and thereby configure the behavior of the components.
Having thus described several aspects of at least one example, it is to be appreciated that various alterations, modifications, and improvements will readily occur to those skilled in the art. For instance, examples disclosed herein may also be used in other contexts. Such alterations, modifications, and improvements are intended to be part of this disclosure, and are intended to be within the scope of the examples discussed herein. Accordingly, the foregoing description and drawings are by way of example only.
What is claimed is:
Claims
1. A local network system comprising:
at least one communication controller and a plurality of distributed servers; wherein the at least one communication controller controls the distributed servers and manages an SDN component deployed and executed on each of the distributed servers;
the distributed servers hosting virtual machines (VMs) and managing communication for the VMs;
wherein at least two of the distributed servers have at least two network interface cards (NICs): one NIC-ext, and one NIC-int;
the NIC-ext is wired to an external network;
the NIC-int is wired to a switch;
wherein the distributed servers having the NIC-ext and NIC-int execute a network gateway role for the VMs, the gateway role including interfacing with entities outside the local network, and the VMs on an inner side of the network;
the communication between each VM on a distributed server and the entities outside the local network can interface using the NIC-ext on the distributed server, or using the other NIC-exts on the other servers via the NIC-ints connected by the switch; and
the SDN component executing on each server coordinates the communication between the VMs and entities outside the local network under the control of the at least one communication controller.
2. A network communication system, the system comprising:
at least one communication controller configured to manage communication within a logical network executing on resources of a plurality of distributed servers; the plurality of distributed servers hosting virtual machines (VMs) and handling the communication for the VMs;
wherein at least two of the plurality of distributed servers are connected within an intranet segment, wherein the at least two of the distributed servers within the intranet segment include at least two respective network interface cards (NICs): at least one NIC-ext connected to an external network, and
at least one NIC-int connected to a switch,
wherein each server of the at least two of the plurality of distributed servers within the intranet segment execute communication gateway functions for interfacing with external entities on an external side of the network; and
wherein the at least one communication controller dynamically programs communication pathways for the communication of the logical network to occur over any one or more of the at least two of the distributed servers within the intranet segment over respective NIC-exts by managing an SDN component executing on the at least two of the distributed servers.
3. A local network system comprising:
at least one communication controller coordinating the execution of an SDN component;
a plurality of distributed servers;
wherein the at least one communication controller manages communication by the plurality of distributed servers and coordinates execution of the SDN component deployed and executing on the plurality of distributed servers;
wherein the plurality of distributed servers host virtual machines (VMs) and manage communication for the VMs;
wherein at least two of the plurality of servers include at least two respective network interface cards (NICs):
at least one NIC-ext connected to entities outside the local network, and at least one NIC-int connected to a switch,
wherein the communication between a VM on a server and the entities outside the local network interfaces on the external NIC on the distributed server or interfaces
on NIC-exts on other distributed servers connected to the server by the switch and respective NIC-ints;
wherein the SDN component is configured to coordinate the communication between the VMs and entities outside the local network under the management of the at least one communication controller.
4. The system according to claims 1, 2, or 3, wherein the SDN component is configured to execute network isolation and firewall policies for VMs of one or more tenants local to each VM.
5. The system according to claim 1, 2, or 3, wherein the SDN component is configured to execute the network isolation and firewall policies where network packets are output from the VM or communicated to the VM.
6. The system according to claim 1, 2, or 3, wherein the SDN component executes the network isolation and firewall policies for VMs of the one or more tenants at localities where network packets are output from the VM, prior to their reaching any other software or hardware component in the local network, or input to the VM, without being routed through any other software or hardware component in the local network.
7. The system according to claim 4, wherein the at least one communication controller manages the SDN execution of the network isolation and the firewall policies.
8. The system according to claim 4, wherein the SDN component is configured to control pass or drop of network packets which are output from and input to the VM.
9. The system according to claim 8, wherein the SDN component is configured to intercept and examine the network packets for receipt by and outbound from the VM to manage the pass or the drop of the network packets.
10. The system according to claim 8, wherein the SDN component further defines a network region, an "Internet within the intranet," in the local network, other than and away from the localities where the SDN component executes VMs' network isolation and firewall policies, in which the SDN component does not execute any control in terms of tenant network isolation and firewall policy.
11. The system according to claim 10, wherein within the Internet within intranet region, the SDN component is configured to provide through network routes between any VM and any of the NIC-exts on respective distributed servers, and wherein the SDN component under management of the at least one communication controller executes control on the programming of the packet forwarding routes between VMs and any respective NIC-exts, wherein the programming of the packet forwarding routes includes one or more of
dynamicity and distribution of the packet forwarding routes.
12. The system according to claim 11, wherein at least one other local network system, including a respective Internet within intranet region, is controlled by the at least one communication controller and SDN component, wherein the local network and the at least one other local network are patch connected to one another through any pair of NIC-exts of the two local networks and at least one network to form an enlarged trans-local-network system including elements having the Internet within intranet topology.
13. The system according to claim 12, wherein additional other local network systems having a respective Internet within intranet region are patch connected to join a trans-local-network system to form a further enlarged trans-local-network system including elements having the Internet within intranet topology.
14. The system according to claim 12, wherein trans-local-network communication traffic between a first and a second VM in any two patch-participating local networks is controlled by the SDN component running on the distributed servers in the respective local networks, and
wherein the SDN component is programmed to generate programmed routes, the programmed routes including one or more of dynamic or distributed routes, between the first VM and respective external NICs in a first respective local network over at least one intermediate connection to the second VM and respective external NICs in a second respective local network.
15. The system according to claim 12, wherein the external communication traffic between a VM in a local network system and an external entity is programmed by the SDN component running on the distributed servers in the local network system to take programmed routes between the VM and the distributed NIC-exts in the local network system, and to have the network packets delivered over the Internet linking the external NICs of the local network system and the external entity.
16. The system according to claim 15, wherein the programmed routes include one or more of dynamic or distributed routes.
17. A computer-implemented method for managing communications of virtual machines ("VMs") hosted on an intranet segment, the method comprising:
managing, by at least one communication controller, network communication for at least one VM hosted on the intranet segment;
programming, by the at least one communication controller, a route for the network communication, wherein the act of programming includes selecting for an
external network communication from:
a first route for the network communication, wherein the first route traverses a NIC-ext of a distributed server within the intranet segment hosting the VM, and at least a second route, wherein the at least a second route traverses a NIC-int of the distributed server to a NIC-int of a second distributed server having a second NIC-ext.
18. The method according to claim 17, further comprising an act of patching a plurality of intranet segments, wherein each of the plurality of intranet segments include at least two distributed servers, each having at least one NIC-int and at least one NIC-ext.
19. The method according to claim 18, wherein the method further comprises programming, by the at least one communication controller, communication routes between the plurality of intranet segments based on selection of, or distribution between, external connections to the respective at least one NIC-ext within each intranet segment.
20. The method according to claim 18, further comprising managing network configuration messages from VMs by the at least one communication controller such that broadcast configuration messages are captured at respective intranet segments.
21. The method according to claim 17, further comprising an act of managing a plurality of VMs to provide distributed network isolation and firewall policies at the locality where each VM outputs network packets, before they reach any other software or hardware network component in the intranet segment, or where network packets are input to each VM without traversing any other software or hardware network component in the intranet segment.
22. The method according to claim 21, wherein programming, by the at least one communication controller, includes managing SDN execution of network isolation and the firewall policies.
23. The method according to claim 21, further comprising defining, by the at least one controller, a network region in the intranet segment, other than and away from the localities where VM network isolation and firewall policies are controlled, in which the at least one controller does not execute any control in terms of tenant network isolation and firewall policy.
24. The method according to claim 23, wherein programming, by the at least one controller, includes:
providing through network routes between any VM hosted on the intranet segment and any of the NIC-exts on respective distributed servers, and
controlling programming of packet forwarding routes between VMs and any respective NIC-exts.
25. The method according to claim 24, wherein programming of the packet forwarding routes includes controlling one or more of dynamicity or distribution of the packet forwarding routes.
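The following sketch (plain Python, standard library only) is a minimal illustration of two mechanisms recited in the claims above: per-VM pass/drop enforcement at the point where a VM's packets are output or input (claims 4-9 and 21), and controller-programmed selection of an external route through either the hosting server's own NIC-ext or, via its NIC-int and the switch, another server's NIC-ext (claims 17 and 25). All class, field, and function names are illustrative assumptions introduced for exposition; they are not the patent's implementation or terminology.

```python
# Minimal sketch, standard library only; all names here are illustrative assumptions.
from dataclasses import dataclass, field
from typing import List, Optional
import itertools


@dataclass
class Server:
    name: str
    nic_int: str                   # interface toward the intranet switch
    nic_ext: Optional[str] = None  # interface toward the external network, if present
    vms: List[str] = field(default_factory=list)


@dataclass
class Packet:
    src_vm: str
    tenant: str
    dst: str


class SDNComponent:
    """Per-server agent: enforces tenant isolation where the VM emits/receives packets."""

    def __init__(self, allowed_tenants: set):
        self.allowed_tenants = allowed_tenants

    def pass_or_drop(self, pkt: Packet) -> bool:
        # Pass/drop decision taken at the VM's packet I/O locality,
        # before the packet reaches any other component (cf. claims 6, 8, 21).
        return pkt.tenant in self.allowed_tenants


class CommunicationController:
    """Toy stand-in for the 'at least one communication controller'."""

    def __init__(self, servers: List[Server]):
        self.servers = {s.name: s for s in servers}
        # Round-robin over servers exposing a NIC-ext, giving the
        # "distribution" of packet forwarding routes mentioned in claim 25.
        self._ext_cycle = itertools.cycle(
            [s for s in servers if s.nic_ext is not None])

    def program_route(self, vm: str) -> List[str]:
        """Hop list programmed for one VM's external communication (cf. claim 17)."""
        host = next(s for s in self.servers.values() if vm in s.vms)
        if host.nic_ext is not None:
            # First route: exit through the hosting server's own NIC-ext.
            return [vm, host.nic_ext]
        # Second route: NIC-int -> switch -> peer NIC-int -> peer NIC-ext.
        peer = next(self._ext_cycle)
        return [vm, host.nic_int, "switch", peer.nic_int, peer.nic_ext]


if __name__ == "__main__":
    servers = [
        Server("srv-a", nic_int="a-int", nic_ext="a-ext", vms=["vm-1"]),
        Server("srv-b", nic_int="b-int", vms=["vm-2"]),
        Server("srv-c", nic_int="c-int", nic_ext="c-ext"),
    ]
    controller = CommunicationController(servers)
    agent = SDNComponent(allowed_tenants={"tenant-blue"})

    pkt = Packet(src_vm="vm-2", tenant="tenant-blue", dst="198.51.100.7")
    if agent.pass_or_drop(pkt):
        print(controller.program_route(pkt.src_vm))
        # -> ['vm-2', 'b-int', 'switch', 'a-int', 'a-ext']
```

In an actual deployment the selected route would be installed as forwarding rules on the servers' virtual switches rather than returned as a hop list; the list here only makes visible the selection between the two route types recited in claim 17.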
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/CN2014/072339 WO2015123849A1 (en) | 2014-02-20 | 2014-02-20 | Method and apparatus for extending the internet into intranets to achieve scalable cloud network |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2015123849A1 (en) | 2015-08-27 |
Family
ID=53877536
Family Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/CN2014/072339 Ceased WO2015123849A1 (en) | 2014-02-20 | 2014-02-20 | Method and apparatus for extending the internet into intranets to achieve scalable cloud network |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2015123849A1 (en) |
Patent Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US6108786A (en) * | 1997-04-25 | 2000-08-22 | Intel Corporation | Monitor network bindings for computer security |
| US7369556B1 (en) * | 1997-12-23 | 2008-05-06 | Cisco Technology, Inc. | Router for virtual private network employing tag switching |
| WO2008028270A1 (en) * | 2006-09-08 | 2008-03-13 | Bce Inc. | Method, system and apparatus for controlling a network interface device |
| WO2012092263A1 (en) * | 2010-12-28 | 2012-07-05 | Citrix Systems, Inc. | Systems and methods for policy based routing for multiple next hops |
| CN103583022A (en) * | 2011-03-28 | 2014-02-12 | 思杰系统有限公司 | Systems and methods for handling NIC congestion via NIC-aware applications |
Cited By (29)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN105224385A (en) * | 2015-09-03 | 2016-01-06 | 成都中机盈科科技有限公司 | A kind of virtualization system based on cloud computing and method |
| CN106571945A (en) * | 2015-10-13 | 2017-04-19 | 中兴通讯股份有限公司 | Control surface and business surface separating method and system, server and cloud calculating platform |
| CN106571945B (en) * | 2015-10-13 | 2020-07-10 | 中兴通讯股份有限公司 | Control plane and service plane separation method and system, server and cloud computing platform |
| CN105530259B (en) * | 2015-12-22 | 2019-01-18 | 华为技术有限公司 | Message filtering method and equipment |
| CN105530259A (en) * | 2015-12-22 | 2016-04-27 | 华为技术有限公司 | Message filtering method and equipment |
| CN107113241B (en) * | 2015-12-31 | 2020-09-04 | 华为技术有限公司 | Route determination method, network configuration method and related device |
| CN107113241A (en) * | 2015-12-31 | 2017-08-29 | 华为技术有限公司 | Route determining methods, network collocating method and relevant apparatus |
| WO2017113300A1 (en) * | 2015-12-31 | 2017-07-06 | 华为技术有限公司 | Route determining method, network configuration method and related device |
| US10841274B2 (en) | 2016-02-08 | 2020-11-17 | Hewlett Packard Enterprise Development Lp | Federated virtual datacenter apparatus |
| CN109495485A (en) * | 2018-11-29 | 2019-03-19 | 深圳市永达电子信息股份有限公司 | Support the full duplex Firewall Protection method of forced symmetric centralization |
| CN109495485B (en) * | 2018-11-29 | 2021-05-14 | 深圳市永达电子信息股份有限公司 | Full-duplex firewall protection method supporting mandatory access control |
| US12033271B2 (en) | 2019-06-18 | 2024-07-09 | The Calany Holding S. À R.L. | 3D structure engine-based computation platform |
| US20200402294A1 (en) | 2019-06-18 | 2020-12-24 | Tmrw Foundation Ip & Holding S. À R.L. | 3d structure engine-based computation platform |
| EP3757788A1 (en) * | 2019-06-18 | 2020-12-30 | TMRW Foundation IP & Holding S.A.R.L. | Software engine virtualization and dynamic resource and task distribution across edge and cloud |
| US12395451B2 (en) | 2019-06-18 | 2025-08-19 | The Calany Holding S. À R.L. | Software engine virtualization and dynamic resource and task distribution across edge and cloud |
| US12374028B2 (en) | 2019-06-18 | 2025-07-29 | The Calany Holdings S. À R.L. | 3D structure engine-based computation platform |
| US12039354B2 (en) | 2019-06-18 | 2024-07-16 | The Calany Holding S. À R.L. | System and method to operate 3D applications through positional virtualization technology |
| US12040993B2 (en) | 2019-06-18 | 2024-07-16 | The Calany Holding S. À R.L. | Software engine virtualization and dynamic resource and task distribution across edge and cloud |
| US12200032B2 (en) | 2020-08-28 | 2025-01-14 | Tmrw Foundation Ip S.Àr.L. | System and method for the delivery of applications within a virtual environment |
| US12034785B2 (en) | 2020-08-28 | 2024-07-09 | Tmrw Foundation Ip S.Àr.L. | System and method enabling interactions in virtual environments with virtual presence |
| US12273401B2 (en) | 2020-08-28 | 2025-04-08 | Tmrw Foundation Ip S.Àr.L. | System and method to provision cloud computing-based virtual computing resources within a virtual environment |
| US12273400B2 (en) | 2020-08-28 | 2025-04-08 | Tmrw Foundation Ip S.Àr.L. | Graphical representation-based user authentication system and method |
| US12273402B2 (en) | 2020-08-28 | 2025-04-08 | Tmrw Foundation Ip S.Àr.L. | Ad hoc virtual communication between approaching user graphical representations |
| US12432265B2 (en) | 2020-08-28 | 2025-09-30 | Tmrw Group Ip | System and method for virtually broadcasting from within a virtual environment |
| CN112637342B (en) * | 2020-12-22 | 2021-12-24 | 唐旸 | File ferry system and method, device, and ferry server |
| CN112637342A (en) * | 2020-12-22 | 2021-04-09 | 唐旸 | File ferrying system, method and device and ferrying server |
| US20230362245A1 (en) * | 2020-12-31 | 2023-11-09 | Nutanix, Inc. | Orchestrating allocation of shared resources in a datacenter |
| US12401718B2 (en) * | 2020-12-31 | 2025-08-26 | Nutanix, Inc. | Orchestrating allocation of shared resources in a datacenter |
| CN113783765A (en) * | 2021-08-10 | 2021-12-10 | 济南浪潮数据技术有限公司 | Method, system, equipment and medium for realizing intercommunication between cloud internal network and cloud external network |
Similar Documents
| Publication | Title |
|---|---|
| WO2015123849A1 (en) | Method and apparatus for extending the internet into intranets to achieve scalable cloud network |
| US12363115B2 (en) | Hybrid cloud security groups |
| US20140052877A1 (en) | Method and apparatus for tenant programmable logical network for multi-tenancy cloud datacenters |
| US12231558B2 (en) | Mechanism to provide customer VCN network encryption using customer-managed keys in network virtualization device |
| JP7771192B2 (en) | End-to-end network encryption from customer on-premises networks to customer virtual cloud networks using customer-managed keys |
| US10560431B1 (en) | Virtual private gateway for encrypted communication over dedicated physical link |
| CN116210204A (en) | System and method for VLAN switching and routing services |
| EP2909780B1 (en) | Providing a virtual security appliance architecture to a virtual cloud infrastructure |
| US8683023B1 (en) | Managing communications involving external nodes of provided computer networks |
| CN119895789A (en) | Connectivity of virtual private label cloud |
| EP2891271A1 (en) | System and method providing policy based data center network automation |
| JP2024541997A (en) | Transparent mounting of external endpoints across private networks |
| JP2024541998A (en) | Secure two-way network connection system between private networks |
| CN116982306A (en) | Extending IP addresses in overlay networks |
| CN120035975A (en) | Network link establishment in multi-cloud infrastructure |
| US20160057171A1 (en) | Secure communication channel using a blade server |
| CN117561705A (en) | Routing strategy for graphics processing units |
| Benomar et al. | Extending openstack for cloud-based networking at the edge |
| US11218918B2 (en) | Fast roaming and uniform policy for wireless clients with distributed hashing |
| CN120113192A (en) | Using client hello for intelligent routing and firewall in multi-path secure access system |
| Bakshi | Network considerations for open source based clouds |
| EP4272413B1 (en) | Synchronizing communication channel state information for high flow availability |
| Chang et al. | Design and architecture of a software defined proximity cloud |
| CN120418779A (en) | Secure bidirectional network connectivity system between private networks |
| CN121399584A (en) | Scalable hub and spoke topology for routing using compute instances |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 14883276; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 14883276; Country of ref document: EP; Kind code of ref document: A1 |