HK1173575A - Smart client routing - Google Patents
Description
Background
Routing between two hosts (e.g., on the Internet) is typically handled based on the IP (Internet Protocol) addresses assigned to the hosts; the selected route is thus determined using the IP addresses of the hosts involved. This technique is generally unproblematic when the hosts are on the same network; however, it can be very problematic when the hosts are on different networks, because more administrative interaction is required. Moreover, because most hosts can connect to the Internet, the inability to conveniently interconnect hosts across the Internet poses both a technical and an administrative challenge.
SUMMARY
The following presents a simplified summary in order to provide a basic understanding of some novel embodiments described herein. This summary is not an extensive overview, and it is not intended to identify key/critical elements or to delineate the scope of the invention. Its sole purpose is to present some concepts in a simplified form as a prelude to the more detailed description that is presented later.
The disclosed architecture facilitates communication between two network nodes of different networks using an alternative modality that is driven entirely by policies authored and stored in the cloud and enforced on the client and/or outside the cloud as needed. This allows one network path to be selected over another based on criteria such as the physical location of a host and the service level agreement (SLA) to be provided. This may be facilitated at least in part by a virtual adapter layer employed over the physical layer in the network stack, together with supporting software that provides smart-client capabilities for routing between hosts.
With respect to path selection, packets may be routed over peer-to-peer (P2P) connections or relay connections. With respect to SLAs, different SLAs may be available to different clients: for clients with the highest bandwidth/uptime or other guarantees, a different network path may be selected than for other types of clients. Furthermore, connectivity may be enabled or disabled based on other kinds of policy rules, such as the virtual circles to which a host may belong.
To the accomplishment of the foregoing and related ends, certain illustrative aspects are described herein in connection with the following description and the annexed drawings. These aspects are indicative of various ways in which the principles disclosed herein can be practiced, and all aspects and equivalents thereof are intended to be within the scope of the claimed subject matter. Other advantages and novel features may become apparent from the following detailed description when considered in conjunction with the drawings.
Brief Description of Drawings
FIG. 1 illustrates a computer-implemented connectivity system in accordance with the disclosed architecture.
FIG. 2 illustrates an alternative embodiment of a connectivity system, wherein the connectivity system includes an authoring component.
FIG. 3 illustrates a local node system that utilizes virtual adapters to facilitate path selection as defined according to policies received from a computing cloud.
FIG. 4 illustrates a computer-implemented host connectivity method in accordance with the disclosed architecture.
FIG. 5 illustrates further aspects of the method of FIG. 4.
FIG. 6 illustrates an alternative host connectivity method.
FIG. 7 illustrates further aspects of the method of FIG. 6.
FIG. 8 illustrates yet another alternative host connectivity method.
FIG. 9 illustrates a block diagram of a computing system operable to execute cloud-based connectivity in accordance with the disclosed architecture.
FIG. 10 illustrates a schematic block diagram of a computing environment in which cloud-based connectivity is deployed.
Detailed Description
The disclosed architecture is policy-based intelligent network switching. Network switching is used in conjunction with cloud computing to allow on-premises connectivity with cloud-based resources. Note, however, that this is only one scenario for intelligent routing, as intelligent routing may be generally applicable to any two or more endpoints, whether those endpoints are in the cloud or on an intranet. The architecture provides a combination of policy authoring of connectivity rules, name service resolution hooks, and dynamic routing on the client, which allows selection of the most appropriate path through which two host machines can connect to each other. This allows flexibility in terms of ensuring the minimum operating cost for the vendor, providing the highest service level agreement (SLA) for the customer, or a combination of both.
Policies (e.g., cloud-based) may be used to determine connectivity rules that are applied to determine the order in which routes are to be attempted. For example, a policy manifest may indicate that, for a client connecting to another client, the two clients should be present in the same circle, and that the connection will first attempt tunneling (e.g., Teredo-based) peer-to-peer (P2P) connectivity, if available, before attempting a cloud-mediated connectivity method such as SSTP (Secure Socket Tunneling Protocol).
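For illustration only, such a manifest and its evaluation might be sketched as follows; the field names (`require_same_circle`, `connection_order`) and the method labels are hypothetical, not taken from the patent.

```python
# Hypothetical sketch of a cloud-authored policy manifest and its evaluation.
# Field names and method labels are assumptions for illustration only.
POLICY_MANIFEST = {
    "require_same_circle": True,
    "connection_order": ["teredo_p2p", "sstp_relay"],  # P2P tunneling first
}

def allowed_methods(circles_a, circles_b, manifest=POLICY_MANIFEST):
    """Return the ordered connectivity methods the manifest permits for two
    clients, or an empty list if the same-circle requirement is not met."""
    if manifest["require_same_circle"] and not set(circles_a) & set(circles_b):
        return []
    return list(manifest["connection_order"])
```

Under this sketch, two clients sharing a circle would attempt the Teredo-based P2P path before falling back to the SSTP relay.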
Policies may be defined at various levels. For example, policies may be defined to determine connections between two particular hosts; in the context of circles and links, to determine connections between two different hosts within the same circle or linked circles; or even to define connectivity between any two machines without requiring the two machines to be part of the same circle or linked in any way. (Circles may be described as associations of users or organizations over a network, while links are relationships between users or organizations.) These policies may be determined a priori (e.g., statically), or may be generated based on a set of rules whose inputs include, but are not limited to, geographic location, SLA, expected cost, circle membership, and so on.
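A minimal sketch of generating such a rule-based policy from those inputs, assuming invented SLA tier names and a simple preference rule:

```python
# Sketch of generating a policy from rule inputs (geographic location, SLA
# tier, circle membership). The tier names and the ordering rule are
# assumptions for illustration, not the patent's actual rules.
def generate_policy(geo_region, sla_tier, same_circle):
    """Derive a path-order policy from illustrative rules: premium SLAs
    prefer the peer-to-peer path, others prefer the relay path."""
    if sla_tier == "premium":
        path_order = ["teredo_p2p", "sstp_relay"]
    else:
        path_order = ["sstp_relay", "teredo_p2p"]
    return {"region": geo_region, "allow": same_circle, "path_order": path_order}
```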
Reference will now be made to the drawings wherein like reference numerals are used to refer to like elements throughout. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present invention. It may be evident, however, that the novel embodiments can be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form in order to facilitate describing the subject invention. The intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the claimed subject matter.
FIG. 1 illustrates a computer-implemented connectivity system 100 in accordance with the disclosed architecture. The system 100 includes a resolution component 102 of the local node 104, the resolution component 102 attempting to resolve the identification information 106 of the remote node 108 for which connectivity is intended using a resolution call 110. The translation component 112 of the local node 104 intercepts the resolution call 110 and translates the resolution call 110 into a web service call 114 to a resource 116 of the computing cloud 118. A policy component 120 of the local node 104 receives the policy 122 from the computing cloud 118 and establishes a connection 124 between the local node 104 and the remote node 108 based on the policy 122.
The policy 122 received from the computing cloud 118 may be selected based on the identification information 106, the identification information 106 being a name of the remote node 108 and the resolution call 110, the resolution call 110 being a name resolution call. The policy 122 also facilitates the creation of the connection 124 by selecting one network path over another. Moreover, the policy 122 facilitates creation of the connection 124 based on at least one of a physical location of the local node 104 or a physical location of the remote node 108 relative to the data center. Further, the policy 122 facilitates creation of the connection 124 based on a service level agreement. The policy 122 also facilitates creation of the connection 124 based on a virtual circle associated with at least one of the local node 104 or the remote node 108. The policy component 120 configures a connection 124 to the remote node 108 through the relay server.
In other words, a host (e.g., local node 104) attempting to connect to another host (e.g., remote node 108, which may be a client or a server) first attempts to resolve the name of the other node. The name resolution call (e.g., resolution call 110) is intercepted on the client (local node 104) and converted into a cloud-based web service call (web service call 114) to the name resolution implementation in the cloud 118. Note, however, that this is merely one example of an interception technique, as interception may be performed using various implementations (e.g., local-based and/or cloud-based). The implementation uses policies (e.g., policy 122) to determine the network cost of each network, which is then sent to the aforementioned host.
The host (local node 104) uses the information obtained from the cloud-based service to notify the name resolver of the local node 104 of the address to be used for the connection 124, configure appropriate connectivity to the relay server (e.g., SSTP, etc.), and set up a routing table on the virtual adapter of the local node 104 to allow the desired connection to be used. However, it should be understood that in a broader context, the instant architecture allows connectivity between any two network endpoints regardless of the endpoint locations. Thus, while described in the context of an on-premises and/or cloud-based implementation, the architecture may be applied more generally. One or more of these steps may be performed at client initialization time (e.g., when the machine boots or the correct user logs onto the machine), while other steps may be performed at actual connection setup time.
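The flow just described can be sketched end to end as follows; every function and dictionary here is a hypothetical stand-in for the components of FIG. 1, with the cloud web service simulated by a local lookup.

```python
# Minimal sketch of the client-side flow: a name-resolution request is
# intercepted and redirected to a (here simulated) cloud web service, whose
# policy answer drives connection setup. All names are illustrative stand-ins.
def cloud_resolve(name, cloud_policies):
    """Stand-in for the cloud-side name-resolution web service (resource 116)."""
    return cloud_policies.get(name)

def intercepted_connect(remote_name, cloud_policies):
    """Translation component: turn the resolution call into a web service
    call; policy component: establish the connection per the returned policy."""
    policy = cloud_resolve(remote_name, cloud_policies)  # web service call 114
    if policy is None:
        raise LookupError("no policy for " + remote_name)
    address = policy["addresses"][0]  # address handed to the local name resolver
    return address, policy["path"]    # path configured on the virtual adapter

CLOUD = {"remote-host": {"addresses": ["2001:db8::1"], "path": "teredo"}}
```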
FIG. 2 illustrates an alternative embodiment of a connectivity system 200, wherein the connectivity system 200 includes an authoring component 202. The authoring component 202 is used to author policies that include connectivity rules for establishing communications between local and remote nodes and store the policies in the cloud.
As shown, the system 200 also includes the entities and components described in FIG. 1. The system 200 includes the resolution component 102 of the local node 104, the resolution component 102 attempting to resolve identification information 106 of a remote node 108 for which connectivity is intended using a resolution call 110, the resolution call 110 being a name resolution call. The translation component 112 of the local node 104 intercepts the resolution call 110 (e.g., a name resolution call) and translates the resolution call 110 into a web service call 114 (e.g., cloud-based) to a resource 116 (one of many possible cloud resources) of the computing cloud 118. The policy component 120 of the local node 104 receives the policy 122 from the computing cloud 118 and establishes a connection 124 between the local node 104 and the remote node 108 based on the policy 122.
It is to be appreciated that remote node 108 may include the same components as local node 104 when attempting connectivity to local node 104 and/or another node.
FIG. 3 illustrates a local node system 300, the local node system 300 utilizing a virtual adapter 302 to facilitate path selection as defined according to policies received from a computing cloud. In this particular example, consider that the cloud is designed as an IPv6-only infrastructure; however, the applicability of the disclosed architecture is not so limited. Consider further that a well-written application 304 will process all addresses returned from the cloud until the application 304 successfully connects to the remote node. Based on the policy received from the cloud, the application 304 will pick the IPv6 destination address (of the remote node) and the IPv6 source address (of the local node) exposed by the virtual adapter 302 of the local node.
As further shown, the virtual adapter 302 is created to include the IPv4 and IPv6 addresses associated with an IPv4/IPv6 adapter 306, the adapter 306 interfacing to an SSTP IPv6 adapter 308 and a Teredo IPv6 adapter 310. Based on which transport (Teredo or SSTP) is available, the virtual adapter 302 encapsulates IPv4/IPv6 packets in the appropriate IPv6 packets and injects the packets back into the client stack for processing by SSTP or Teredo. As shown, the virtual adapter 302 may be designed to overlay the physical layer 312.
With respect to the assignment of interfaces, the local node can talk to the cloud service to reserve an IPv4 address. This IPv4 address is assigned to the virtual adapter. In addition, a prefixed IPv6 address is generated from this IPv4 address and assigned to the same interface. When a Teredo local node talks with a Teredo server, a Teredo IPv6 address is assigned to the local node.
Similarly, when an SSTP local node attempts to set up an SSTP tunnel, the SSTP server can derive an IPv6 address by combining the reserved IPv4 address and the site ID. The local node uploads all of its IP addresses to the cloud service. Thus, there may be four associated addresses per node: a virtual adapter IPv4 address, a virtual adapter IPv6 address, a Teredo IPv6 address, and an SSTP IPv6 address. Of these four addresses, the virtual adapter addresses are exposed to the application (e.g., application 304); a Domain Name Service (DNS) name query from the node resolves to both of these addresses.
The set of related addresses may be maintained as a tuple, e.g., &lt;virtual adapter IPv4 address, Teredo IPv6 address, relay IPv6 address&gt;. Routing may then be performed accordingly, given the destination IPv4 address. Another variation is that if the source and/or destination has global IPv4/IPv6 connectivity, the system is bypassed. If a global IPv6 address is added to this tuple, the global IPv6 address can also be utilized.
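The tuple and the bypass variation might be sketched as follows; the tuple layout mirrors the example in the text, while the bypass-first preference and all names are assumptions for illustration.

```python
from collections import namedtuple

# Sketch of the per-node address tuple described above, extended with the
# optional global IPv6 entry; field names are illustrative.
AddrTuple = namedtuple(
    "AddrTuple", ["virtual_ipv4", "teredo_ipv6", "relay_ipv6", "global_ipv6"]
)

def pick_route(dest):
    """Given a destination's address tuple, prefer direct global connectivity
    (bypassing the system), then the Teredo peer path, then the relay path."""
    if dest.global_ipv6:
        return ("direct", dest.global_ipv6)
    if dest.teredo_ipv6:
        return ("teredo", dest.teredo_ipv6)
    return ("relay", dest.relay_ipv6)
```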
Within the virtual adapter 302, the four addresses associated with a destination may be cached. Connectivity handling may proceed according to a predetermined order. For example, connectivity to a destination may be attempted in the following order: first the Teredo IPv6 address, followed by the SSTP IPv6 address, and then the virtual adapter IPv4 address encapsulated in SSTP IPv6. A Teredo connection is attempted on the peer IPv6 Teredo address. If the connection is successful, the virtual adapter 302 encapsulates the IPv6 traffic within the Teredo IPv6 address and sends the encapsulated IPv6 traffic out on the Teredo interface (Teredo adapter 310).
If the above action fails, the virtual adapter 302 attempts to reach the destination (remote node) using the SSTP IPv6 address. Reachability can be verified through a ping mechanism. If the connection is successful, the virtual adapter 302 encapsulates the traffic in SSTP IPv6 packets and sends the encapsulated traffic out on the SSTP interface (SSTP adapter 308).
If the above-described connection via IPv6 fails, the application 304 may pick the remote node IPv4 address and attempt the connection. This connection has a high probability of success because most, if not all, applications will listen on IPv4. In response to this success, the virtual adapter 302 encapsulates the traffic in SSTP IPv6 and sends the encapsulated traffic out through the SSTP interface (SSTP adapter 308).
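The three-stage fallback described above can be sketched as an ordered list of attempts; `probe` stands in for a real reachability check (e.g., the ping mechanism mentioned earlier), and it and the dictionary keys are assumptions.

```python
# Sketch of the predetermined attempt order: Teredo IPv6, then SSTP IPv6,
# then the IPv4 address carried over SSTP. `probe` is a hypothetical
# reachability check; all key names are illustrative.
def establish(dest, probe):
    """Return the first (transport, address) pair that probes successfully,
    following the predetermined order, or None if all attempts fail."""
    attempts = [
        ("teredo", dest["teredo_ipv6"]),            # peer-to-peer first
        ("sstp", dest["sstp_ipv6"]),                # then the relay
        ("sstp_ipv4_encap", dest["virtual_ipv4"]),  # then IPv4 over SSTP
    ]
    for transport, addr in attempts:
        if probe(transport, addr):
            return (transport, addr)
    return None
```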
With respect to IPv4-over-IPv6 encapsulation by the virtual adapter 302, the virtual adapter 302 may be implemented as an NDIS (Network Driver Interface Specification) miniport driver. The virtual adapter 302 may be configured to be selected for IPv4 communication. In this manner, the virtual adapter 302 may intercept all IPv4 traffic.
Which IPv6 address (e.g., SSTP or Teredo) is used for encapsulation may be determined as follows. If the Teredo interface is connected, an attempt is made to ping the Teredo IP of the peer. The Teredo IPv6 address of the peer may be determined by combining the destination IPv4 address, the Teredo prefix, and the site ID. If the ping is successful on the Teredo IPv6 address, the Teredo IPv6 address may be used for encapsulation. If that action fails, the attempt is retried with the SSTP IPv6 address of the peer. If successful, the SSTP IPv6 address is used for encapsulation.
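Composing a peer IPv6 address from the destination IPv4 address, a prefix, and a site ID might look like the following sketch. The bit layout here is purely illustrative; real Teredo addresses (RFC 4380) also embed server, flag, and port fields and obfuscate the embedded client address.

```python
import ipaddress

# Hedged sketch: pack a 32-bit prefix, a site ID, and the destination IPv4
# address into one 128-bit IPv6 address. Layout is illustrative only.
def compose_peer_ipv6(prefix32, site_id, dest_ipv4):
    """prefix (32 bits) | site id (32 bits) | zeros (32 bits) | IPv4 (32 bits)."""
    v4 = int(ipaddress.IPv4Address(dest_ipv4))
    value = (prefix32 << 96) | (site_id << 64) | v4
    return ipaddress.IPv6Address(value)
```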
After IPv4 is encapsulated over IPv6 (IPv4-over-IPv6), the virtual adapter 302 inserts the packet back into the TCP/IP stack so that the packet can be properly picked up by SSTP or Teredo. To make this happen, at the virtual adapter 302, additional information (a set flag) may be added into the IPv6 header during encapsulation. When the receiver receives the IPv6 packet, the receiver examines the flag and decides whether the IPv6 packet needs to be decapsulated by the virtual adapter 302 or can simply bypass the virtual adapter 302. One option is to set the Next Header field in the outer IPv6 header to 4, thereby indicating that an IPv4 packet is encapsulated within this IPv6 packet. Other alternatives are also applicable.
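The Next Header option above can be sketched by building a minimal 40-byte IPv6 header whose Next Header byte is 4 (the IANA protocol number for IPv4 encapsulation); fields other than Next Header and Payload Length use simple illustrative defaults.

```python
import struct

# Sketch of signaling IPv4-in-IPv6 encapsulation via the Next Header field.
def encapsulate_ipv4(ipv4_packet, src16, dst16):
    """Prepend a fixed 40-byte IPv6 header (RFC 8200 layout) to an IPv4 packet."""
    header = struct.pack(
        "!IHBB16s16s",
        6 << 28,           # version 6, zero traffic class / flow label
        len(ipv4_packet),  # payload length
        4,                 # next header = 4 -> IPv4 payload
        64,                # hop limit (illustrative default)
        src16,
        dst16,
    )
    return header + ipv4_packet

def needs_decapsulation(ipv6_packet):
    """Receiver-side check: byte 6 of the IPv6 header is the Next Header."""
    return ipv6_packet[6] == 4
```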
Included herein is a set of flow charts representative of exemplary methodologies for performing novel aspects of the disclosed architecture. While, for purposes of simplicity of explanation, the one or more methodologies shown herein, e.g., in the form of a flow chart, are shown and described as a series of acts, it is to be understood and appreciated that the methodologies are not limited by the order of acts, as some acts may, in accordance with the present invention, occur in a different order and/or concurrently with other acts from that shown and described herein. For example, those skilled in the art will understand and appreciate that a methodology could alternatively be represented as a series of interrelated states or events, such as in a state diagram. Moreover, not all acts illustrated in a methodology may be required for a novel implementation.
FIG. 4 illustrates a computer-implemented host connectivity method in accordance with the disclosed architecture. At 400, at a local host, an attempt is made to resolve identification information of a remote host for which connectivity is intended using a resolution call. At 402, the identification information is sent to cloud-based resources of the computing cloud. At 404, policy information is received from the cloud-based resource. At 406, the local host is connected to the remote host based on the policy information.
FIG. 5 illustrates further aspects of the method of FIG. 4. At 500, a resolution call on the local host is intercepted for communication to the cloud-based resource. At 502, the resolution call is converted to a cloud-based web service call at the local host for communication to the cloud-based resource. At 504, network costs for utilizing a particular network are defined in the policy information for processing by the local host. At 506, a routing path to the remote host is selected based on the policy information. At 508, a routing path is obtained from the policy information. At 510, connectivity to the protocol service is allowed based on the routing path. At 512, the local host is assigned to a virtual circle in the computing cloud of which the remote host is a part. At 514, a routing table in the local host is configured that allows use of the connectivity.
FIG. 6 illustrates an alternative host connectivity method. At 600, at a local host, a name resolution call is used to attempt to resolve the name of a remote host for which connectivity is intended. At 602, the name resolution call is intercepted. At 604, the name resolution call is converted to a cloud-based web service call. At 606, the cloud-based web service call is submitted. This submission may be to a computing cloud. At 608, policy information is received in response to the cloud-based web service call. At 610, the local host is connected to the remote host based on the policy information.
FIG. 7 illustrates further aspects of the method of FIG. 6. At 700, the local host is assigned to a virtual circle of which the remote host is a part. At 702, routing information to the remote host is obtained from the policy information. At 704, connectivity to the protocol service is configured. At 706, a routing table in the local host is configured that allows use of the connectivity.
FIG. 8 illustrates yet another alternative host connectivity method. At 800, a local client attempts to open a website on a remote client. At 802, the local client application performs a Domain Name Service (DNS) lookup for the name of the remote client. The local client application receives two addresses: an IPv4 address for the remote client and an IPv6 address for the remote client. A well-written application will process all returned addresses until it succeeds. At 804, the DNS resolution call is intercepted and converted to a web service call to the cloud.
At 806, the policies received from the cloud are processed to obtain associated routing rules. For example, based on the policy, the application will pick the IPv6 remote (destination) address and the IPv6 local (source) address exposed by the local client's virtual adapter. At 808, the addresses are processed according to a predetermined order. For example, the rules may indicate that IPv6 addresses are processed before IPv4 addresses, preferring one protocol (e.g., Teredo) over another (e.g., SSTP). At 810, a connection to the destination is established based on the first successful attempt.
As used in this application, the terms "component" and "system" are intended to refer to a computer-related entity, either hardware, a combination of hardware and software, or software in execution. For example, a component may be, but is not limited to being, a process running on a processor, a hard disk drive, multiple storage drives (optical, solid state, and/or magnetic storage media), an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a server and the server can be a component. One or more components can reside within a process and/or thread of execution and a component may be localized on one computer and/or distributed between two or more computers. The word "exemplary" may be used herein to mean serving as an example, instance, or illustration. Any aspect or design described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects or designs.
Referring now to FIG. 9, there is illustrated a block diagram of a computing system 900 operable to execute cloud-based connectivity in accordance with the disclosed architecture. In order to provide additional context for various aspects thereof, FIG. 9 and the following discussion are intended to provide a brief, general description of a suitable computing system 900 in which the various aspects can be implemented. While the description above is in the general context of computer-executable instructions that may run on one or more computers, those skilled in the art will recognize that a novel embodiment also can be implemented in combination with other program modules and/or as a combination of hardware and software.
The computing system 900 for implementing various aspects includes a computer 902 having a processing unit 904, a computer-readable storage such as a system memory 906, and a system bus 908. The processing unit 904 can be any of various commercially available processors, such as a single processor, multiple processors, a single core unit, and a multi-core unit. Moreover, those skilled in the art will appreciate that the novel methods can be practiced with other computer system configurations, including minicomputers, mainframe computers, as well as personal computers (e.g., desktop, laptop, etc.), hand-held computing devices, microprocessor-based or programmable consumer electronics, and the like, each of which can be operatively coupled to one or more associated devices.
The system memory 906 may include computer-readable storage, such as Volatile (VOL) memory 910 (e.g., Random Access Memory (RAM)) and NON-volatile memory (NON-VOL) 912 (e.g., ROM, EPROM, EEPROM, etc.). A basic input/output system (BIOS) can be stored in the non-volatile memory 912, and includes the basic routines that facilitate the transfer of data and signals between components within the computer 902, such as during start-up. The volatile memory 910 can also include a high-speed RAM such as static RAM for caching data.
The system bus 908 provides an interface for system components including, but not limited to, the memory subsystem 906 to the processing unit 904. The system bus 908 can be any of several types of bus structure that can further interconnect to a memory bus (with or without a memory controller), and a peripheral bus (e.g., PCI, PCIe, AGP, LPC, etc.), using any of a variety of commercially available bus architectures.
The computer 902 also includes a machine-readable storage subsystem 914 and a storage interface 916 for interfacing the storage subsystem 914 to the system bus 908 and other required computer components. The storage subsystem 914 may include, for example, one or more of a Hard Disk Drive (HDD), a magnetic Floppy Disk Drive (FDD), and/or an optical disk storage drive (e.g., CD-ROM drive, DVD drive). The storage interface 916 may include interface technologies such as, for example, EIDE, ATA, SATA, and IEEE 1394.
One or more programs and data can be stored in memory subsystem 906, machine-readable and removable memory subsystem 918 (e.g., flash drive form factor technology), and/or storage subsystem 914 (e.g., optical, magnetic, solid state), including an operating system 920, one or more application programs 922, other program modules 924, and program data 926.
One or more application programs 922, other program modules 924, and program data 926 may include local node 104, policies 122, and entities/components of local node 104 of fig. 1, virtual adapter layer 302 of fig. 3, and methods represented, for example, by the flowcharts of fig. 4-8.
Generally, programs include routines, methods, data structures, other software components, etc., that perform particular tasks or implement particular abstract data types. All or portions of the operating system 920, applications 922, modules 924, and/or data 926 can also be cached in memory such as the volatile memory 910, for example. It is to be appreciated that the disclosed architecture can be implemented with various commercially available operating systems or combinations of operating systems (e.g., as virtual machines).
Storage subsystem 914 and memory subsystems (906 and 918) serve as computer-readable media for the volatile and nonvolatile storage of data, data structures, computer-executable instructions, and the like. Computer readable media can be any available media that can be accessed by computer 902 and includes volatile and nonvolatile, internal and/or external media that is removable or non-removable. For the computer 902, the media accommodate the storage of data in any suitable digital format. It should be appreciated by those skilled in the art that other types of computer readable media can be used such as zip drives, magnetic tape, flash memory cards, flash drives, cartridges, and the like, for storing computer executable instructions for performing the novel methods of the disclosed architecture.
A user can interact with the computer 902, programs, and data using external user input devices 928 such as a keyboard and mouse. Other external user input devices 928 can include a microphone, an IR (infrared) remote control, a joystick, a game pad, camera recognition systems, a stylus pen, touch screen, gesture systems (e.g., eye movement, head movement, etc.), and/or the like. Where the computer 902 is a portable computer, for example, a user can interact with the computer 902, programs, and data using onboard user input devices 930 such as a touchpad, microphone, keyboard, and the like. These and other input devices are connected to the processing unit 904 via the system bus 908 via an input/output (I/O) device interface 932, but can be connected by other interfaces, such as a parallel port, IEEE 1394 serial port, a game port, a USB port, an IR interface, etc. The I/O device interface 932 may also facilitate the use of output peripherals 934 such as printers, audio devices, camera devices, etc., e.g., sound cards, and/or onboard audio processing capabilities.
One or more graphics interfaces 936 (also commonly referred to as a Graphics Processing Unit (GPU)) provide graphics and video signals between the computer 902 and an external display 938 (e.g., LCD, plasma) and/or an on-board display 940 (e.g., for portable computers). Graphics interface 936 may also be manufactured as part of a computer system board.
The computer 902 can operate in a networked environment (e.g., IP-based) using logical connections via a wired/wireless communications subsystem 942 to one or more networks and/or other computers. The other computers can include workstations, servers, routers, personal computers, microprocessor-based entertainment appliances, peer devices or other common network nodes, and typically include many or all of the elements described relative to the computer 902. Logical connections may include wired/wireless connections to Local Area Networks (LANs), Wide Area Network (WAN) hotspots, and the like. LAN and WAN networking environments are commonplace in offices and companies, and facilitate enterprise-wide computer networks, such as intranets, all of which may connect to a global communications network, e.g., the Internet.
When used in a networking environment, the computer 902 connects to the network via a wired/wireless communication subsystem 942 (e.g., a network interface adapter, onboard transceiver subsystem, etc.) to communicate with wired/wireless networks, wired/wireless printers, wired/wireless input devices 944, and so on. The computer 902 can include a modem or other means for establishing communications over the network. In a networked environment, programs and data relative to the computer 902 can be stored in the remote memory/storage device, as is associated with a distributed system. It will be appreciated that the network connections shown are exemplary and other means of establishing a communications link between the computers may be used.
The computer 902 is operable to communicate with wired/wireless devices or entities using radio technologies such as the IEEE 802.xx family of standards, such as wireless devices operatively disposed in wireless communication (e.g., IEEE 802.11 over-the-air modulation techniques) with, for example, a printer, scanner, desktop and/or portable computer, personal digital assistant (PDA), communications satellite, any piece of equipment or location associated with a wirelessly detectable tag (e.g., a kiosk, news stand, restroom), and telephone. This includes at least Wi-Fi (or Wireless Fidelity) for hotspots, WiMax, and Bluetooth™ wireless technologies. Thus, the communication can be a predefined structure as with a conventional network, or simply an ad hoc communication between at least two devices. Wi-Fi networks use radio technologies called IEEE 802.11x (a, b, g, etc.) to provide secure, reliable, fast wireless connectivity. A Wi-Fi network can be used to connect computers to each other, to the Internet, and to wired networks (which use IEEE 802.3-related media and functions).
Referring now to fig. 10, there is illustrated a schematic block diagram of a computing environment 1000 in which cloud-based connectivity is deployed. Environment 1000 includes one or more clients 1002. The client(s) 1002 can be hardware and/or software (e.g., threads, processes, computing devices). For example, the client 1002 can house cookies and/or associated contextual information.
The environment 1000 also includes one or more server(s) 1004. The server(s) 1004 can also be hardware and/or software (e.g., threads, processes, computing devices). The servers 1004 can house threads to perform transformations by employing the present architecture, for example. One possible communication between a client 1002 and a server 1004 can be in the form of a data packet adapted to be transmitted between two or more computer processes. For example, the data packet may include a cookie and/or associated contextual information. The environment 1000 includes a communication framework 1006 (e.g., a global communication network such as the Internet) that can be employed to facilitate communications between the client(s) 1002 and the server(s) 1004.
Communication can be facilitated via a wired (including optical fiber) and/or wireless technology. The client(s) 1002 are operatively connected to one or more client data store(s) 1008 that can be employed to store information local to the client(s) 1002 (e.g., cookie(s) and/or associated contextual information). Likewise, the server(s) 1004 are operatively connected to one or more server data store(s) 1010 that can be employed to store information local to the servers 1004.
What has been described above includes examples of the disclosed architecture. It is, of course, not possible to describe every conceivable combination of components and/or methodologies, but one of ordinary skill in the art may recognize that many further combinations and permutations are possible. Accordingly, the novel architecture is intended to embrace all such alterations, modifications and variations that fall within the spirit and scope of the appended claims. Furthermore, to the extent that the term "includes" is used in either the detailed description or the claims, such term is intended to be inclusive in a manner similar to the term "comprising" as "comprising" is interpreted when employed as a transitional word in a claim.
Claims (15)
1. A computer-implemented connectivity system, comprising:
a resolution component of a local node that attempts to resolve identification information of a remote node for which connectivity is intended using a resolution call;
a conversion component of the local node that intercepts the resolution call and converts the resolution call to a web service call to a resource; and
a policy component of the local node that receives a policy and establishes a connection between the local node and the remote node based on the policy.
2. The system of claim 1, wherein the received policy is selected based on the identification information and the resolution call, the identification information being a name of the remote node and the resolution call being a name resolution call.
3. The system of claim 1, wherein the policy facilitates creation of the connection by selecting one network path over another network path.
4. The system of claim 1, wherein the policy facilitates creation of the connection based on at least one of a physical location of the local node or a physical location of the remote node relative to a data center.
5. The system of claim 1, wherein the policy facilitates creation of the connection based on a service level agreement.
6. The system of claim 1, wherein the policy facilitates creation of the connection based on a virtual circle associated with at least one of the local node or the remote node.
7. The system of claim 1, wherein the policy component configures connectivity to the remote node through a relay server.
8. The system of claim 1, further comprising an authoring component to author the policy and store the policy, the policy comprising connectivity rules for establishing communication between the local and remote nodes.
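The cooperation of the components recited in claims 1-8 can be illustrated with a minimal sketch. All class names, the policy schema, and host names below are illustrative assumptions, not the patented implementation: a conversion component stands in for the intercepted resolution call turned web-service call, and a policy component applies a cloud-authored policy (here, relay versus direct path) to establish the connection.

```python
# Illustrative sketch of the system of claims 1-8. The cloud-based web
# service is simulated by a local dictionary; a real system would issue
# an HTTPS call to a cloud resource.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Policy:
    """Connectivity rules authored and stored in the cloud (claim 8)."""
    preferred_path: str                 # e.g. "direct" or "relay" (claims 3, 7)
    relay_server: Optional[str] = None  # relay server, if the path requires one
    sla_tier: str = "default"           # service level agreement (claim 5)

class ConversionComponent:
    """Intercepts the resolution call and converts it to a web service call."""
    def __init__(self, policy_store):
        self.policy_store = policy_store  # stands in for the cloud resource
    def resolve_via_web_service(self, remote_name: str) -> Policy:
        return self.policy_store[remote_name]

class PolicyComponent:
    """Establishes the local-to-remote connection based on the policy."""
    def establish(self, remote_name: str, policy: Policy) -> str:
        if policy.preferred_path == "relay" and policy.relay_server:
            return f"connected to {remote_name} via relay {policy.relay_server}"
        return f"connected to {remote_name} directly"

class LocalNode:
    """Resolution component: resolves a remote node's name and connects."""
    def __init__(self, policy_store):
        self.converter = ConversionComponent(policy_store)
        self.policy_component = PolicyComponent()
    def connect(self, remote_name: str) -> str:
        policy = self.converter.resolve_via_web_service(remote_name)
        return self.policy_component.establish(remote_name, policy)

# A cloud-authored policy directs traffic for this peer through a relay.
store = {"hostB.example": Policy(preferred_path="relay",
                                 relay_server="relay1.example")}
print(LocalNode(store).connect("hostB.example"))
```

Because the policy, not the IP address, drives the connection, the same local node code can reach a peer directly or through a relay server simply by changing what the cloud returns.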
9. A computer-implemented host connectivity method, comprising:
at the local host, attempting to resolve identification information of a remote host for which connectivity is intended using a resolution call;
sending the identification information to a cloud-based resource of a computing cloud;
receiving policy information from the cloud-based resource; and
connecting the local host to the remote host based on the policy information.
10. The method of claim 9, further comprising intercepting the resolution call on the local host for transmission to the cloud-based resource.
11. The method of claim 9, further comprising translating the resolution call into a cloud-based web service call at the local host for communication to the cloud-based resource.
12. The method of claim 9, further comprising defining a network cost to utilize a particular network in the policy information to be handled by the local host.
13. The method of claim 12, further comprising selecting a routing path to the remote host based on the policy information.
14. The method of claim 9, further comprising:
obtaining a routing path from the policy information; and
allowing connectivity to protocol services based on the routing path.
15. The method of claim 9, further comprising assigning the local host to a virtual circle in the computing cloud of which the remote host is a part.
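The method of claims 9-14 can likewise be sketched end to end. The function names, the policy-info shape, and the per-network costs below are illustrative assumptions: the cloud-based resource is simulated by a lookup table whose policy information defines a network cost per path (claim 12), and the local host selects the lowest-cost routing path (claim 13).

```python
# Illustrative sketch of the method of claims 9-14: resolve the remote
# host's name through a (simulated) cloud-based resource, receive policy
# information with per-network costs, and pick the cheapest routing path.

def cloud_resource_lookup(remote_name: str) -> dict:
    """Simulated cloud-based resource: returns policy info for a host name."""
    policies = {
        "hostB.example": {
            # network cost to utilize each particular network (claim 12)
            "paths": {"lan": 1, "wan": 5, "relay": 10},
        }
    }
    return policies[remote_name]

def connect(remote_name: str) -> str:
    # Steps 1-3: the resolution call is intercepted and the identification
    # information is sent to the cloud-based resource, which returns policy.
    policy = cloud_resource_lookup(remote_name)
    # Step 4: select the routing path with the lowest cost and connect.
    path = min(policy["paths"], key=policy["paths"].get)
    return f"{remote_name} via {path}"

print(connect("hostB.example"))  # selects the cheapest path: "hostB.example via lan"
```

In this sketch the routing decision is made entirely from the policy information handed down by the cloud, so changing the costs there changes the path the local host takes without touching the host itself.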
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US12/616,157 | 2009-11-11 | | |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| HK1173575A | 2013-05-16 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US8650326B2 | | Smart client routing |
| US10541836B2 | | Virtual gateways and implicit routing in distributed overlay virtual environments |
| US9246819B1 | | System and method for performing message-based load balancing |
| US9485147B2 | | Method and device thereof for automatically finding and configuring virtual network |
| CN109889618B | | Method and system for processing DNS request |
| US8121146B2 | | Method, apparatus and system for maintaining mobility resistant IP tunnels using a mobile router |
| US10659430B2 | | Systems and methods for dynamic network address modification related applications |
| US20160380966A1 | | Media Relay Server |
| US11595306B2 | | Executing workloads across multiple cloud service providers |
| US20200007444A1 | | Systems and methods for dynamic connection paths for devices connected to computer networks |
| US20160380789A1 | | Media Relay Server |
| CN106330492B | | Method, apparatus and system for configuring a user equipment forwarding table |
| US20120300776A1 | | Method for creating virtual link, communication network element, and Ethernet network system |
| CN110266715B | | Remote access method, device, equipment and computer readable storage medium |
| JP2013126219A | | Transfer server and transfer program |
| CN101572729A | | Method for processing virtual private network node information, related equipment and system |
| HK1173575A | | Smart client routing |
| KR101712922B1 | | Virtual Private Network System of Dynamic Tunnel End Type, Manager Apparatus and Virtual Router for the same |
| US10212196B2 | | Interface discovery and authentication in a name-based network |
| KR20170140051A | | Virtual Private Network System of Dynamic Tunnel End Type, Manager Apparatus and Virtual Router for the same |