
WO2012158241A1 - Automated adjustment of cluster policy - Google Patents


Info

Publication number
WO2012158241A1
Authority
WO
WIPO (PCT)
Prior art keywords
virtual machine
cluster
servers
policy
machine servers
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/US2012/027010
Other languages
French (fr)
Inventor
Peter Bookman
Ashton R. SNELGROVE
Thomas S. MCCULLAGH
David E. YOUNGBERG
Chris R. FEATHERSTONE
Harold C. SIMONSEN
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
V3 Systems Holdings Inc
Original Assignee
V3 Systems Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US 13/175,771 (US9223605B2)
Application filed by V3 Systems Inc
Publication of WO2012158241A1
Anticipated expiration
Status: Ceased

Classifications

    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 - Multiprogramming arrangements
    • G06F 9/48 - Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806 - Task transfer initiation or dispatching
    • G06F 9/4843 - Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F 9/485 - Task life-cycle, e.g. stopping, restarting, resuming execution
    • G06F 9/4856 - Task life-cycle, resumption being on a different machine, e.g. task migration, virtual machine migration
    • G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5061 - Partitioning or combining of resources
    • G06F 9/5077 - Logical partitioning of resources; Management or configuration of virtualized resources

Definitions

  • the cluster manager 611 may be, for example, capable of interfacing with a single type of server-specific hypervisor, or may alternatively be capable of interfacing with server-specific hypervisors of a variety of different types. In the latter case, each server-specific hypervisor may perhaps register its type with the cluster manager 611. When communicating with the server-specific hypervisor, the cluster manager 611 may perhaps perform appropriate communication translation given the type of the server-specific hypervisor. Thus, the server-specific hypervisors may differ in the kinds of application program interfaces that they provide (a sketch of such an adapter-style translation appears after this list).
  • the system 600 also includes a virtual machine assignment component 620 configured to automatically detect when the set of virtual machine servers operating within the cluster has changed, and further configured to automatically adjust the server domain policy 612 accordingly such that the changed set of virtual machine servers are considered when assigning virtual machines to the set of virtual machine servers in the cluster.
  • the cluster component 610 could be implemented using hardware, or a combination of hardware and software.
  • the virtual machine assignment component 620 could be implemented using hardware, or a combination of hardware and software.
  • the computing system may instantiate and/or operate the corresponding component (e.g., the cluster component 610 and/or the virtual machine assignment component 620) by having its processor(s) execute one or more computer-executable instructions that are on one or more computer-storage media that are comprised by a computer program product.
  • the components may be executed on hardware that is separate and apart from the virtual machine server.
  • one or both of the cluster component 610 and/or the virtual machine assignment component 620 may be at least partially executed in a distributed manner on multiple virtual machine servers in such a manner that if one of the virtual machine servers were to shut down, the remaining virtual machine servers could continue operation of the corresponding component.
  • Figure 7 illustrates a flowchart of a method 700 for automatically adjusting a policy of a cluster.
  • the method 700 may, for example, be performed by the virtual machine assignment component 620 of Figure 6.
  • the virtual machine assignment component detects that the set of available virtual machine servers for a particular cluster has changed (act 701).
  • the set of virtual machine servers may be expanded by adding an additional server as represented by the context 800A of Figure 8A, in which the previous set 810A of virtual machine servers 811 through 815 are expanded (as represented by arrow 801A) to include an expanded virtual machine server set 820A that includes an additional virtual machine server 816.
  • Virtual machine servers may be detected to be added, for example, when a new virtual machine server is plugged into, or otherwise provisioned to operate, within the cluster.
  • the set of virtual machine servers may be contracted by removing a virtual machine server as represented by the context 800B of Figure 8B, in which the previous set 810B of virtual machine servers 811 through 815 are contracted (as represented by arrow 801B) to include a contracted virtual machine server set 820B in which the virtual machine server 812 is removed.
  • Virtual machine servers may be detected to be removed, for example, when a virtual machine server is unplugged from, or otherwise rendered inoperable within, the cluster.
  • the virtual machine assignment component determines how to adjust a cluster policy of the particular server domain (act 702), and further causes the cluster policy to be adjusted in the determined way (act 703).
  • the cluster policy may be adjusted by allowing the virtual machine assignment policy to recognize the existence of the expanded set of virtual machine servers.
  • Figure 9 abstractly illustrates a cluster policy 900 that includes a virtual machine assignment policy 901.
  • the assignment policy defines how new virtual machines are to be assigned to the virtual machine servers when they are provisioned. Such policy may take into consideration load balancing, latency, current server performance, or any other factor relevant to the assignment of new virtual machines to the virtual machine servers (a policy-adjustment sketch follows this list).
  • the store replication policy 902 may also be altered to recognize the expanded set.
  • the new virtual machine server may store data redundantly with other virtual machine servers in order to provide for a proper level of reliability.
  • the contingency policy 903 may also be affected by the expanded set, since the new virtual machine server is now available to serve as a backup should one or more of the other virtual machine servers fail.
  • the store may be a file system, although not required.
  • a store may be any device, system, or combination of devices and/or systems that are capable of storing structured data.
  • the cluster policy may be adjusted by allowing the virtual machine assignment policy to recognize the existence of the contracted set of virtual machine servers.
  • the store replication policy 902 may also be altered to recognize the contracted set. For instance, the existing virtual machine servers may re-replicate data amongst each other to provide the proper level of redundancy responsive to the removed virtual machine server no longer being available.
  • the contingency policy 903 may also be affected by the contracted set, since the removed virtual machine server is no longer available to serve as a backup should one or more of the other virtual machine servers fail.
  • the principles described herein provide an automated mechanism for adjusting cluster policy when there is a change in the set of virtual machine servers that are within the cluster.
  • the present invention may be embodied in other specific forms without departing from its spirit or essential characteristics.
  • the described embodiments are to be considered in all respects only as illustrative and not restrictive.
  • the scope of the invention is, therefore, indicated by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.
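As noted in the bullet above regarding the cluster manager 611 interfacing with server-specific hypervisors of multiple different types, one hypothetical realization is an adapter-style design: each hypervisor registers its type with the manager, and the manager translates its requests into that hypervisor's interface. The following Python sketch illustrates the idea; the type names and methods are illustrative assumptions and do not correspond to any real vendor API.

```python
# Hypothetical adapter sketch: the cluster manager speaks one internal verb
# ("start_vm") and translates it per registered hypervisor type. Type names
# and methods are illustrative assumptions, not real vendor APIs.

class HypervisorAdapter:
    def start_vm(self, vm_id):
        raise NotImplementedError


class TypeAAdapter(HypervisorAdapter):
    def start_vm(self, vm_id):
        return f"typeA: power-on request issued for {vm_id}"


class TypeBAdapter(HypervisorAdapter):
    def start_vm(self, vm_id):
        return f"typeB: launch command issued for {vm_id}"


class MultiHypervisorManager:
    def __init__(self):
        self._adapters = {}   # server name -> adapter for its hypervisor type

    def register_server(self, server_name, adapter):
        """A server-specific hypervisor registers its type with the manager."""
        self._adapters[server_name] = adapter

    def start_vm(self, server_name, vm_id):
        """Translate the request for whichever hypervisor type the server runs."""
        return self._adapters[server_name].start_vm(vm_id)


mgr = MultiHypervisorManager()
mgr.register_server("211A", TypeAAdapter())
mgr.register_server("211B", TypeBAdapter())
print(mgr.start_vm("211A", "vm-17"))
print(mgr.start_vm("211B", "vm-18"))
```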
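To tie together the method-700 flow and the three policy elements (assignment policy 901, store replication policy 902, and contingency policy 903) described in the bullets above, the following hedged Python sketch re-derives a cluster policy whenever the set of available virtual machine servers changes. The replication factor, the ordering rule, and the backup selection are assumptions chosen only for illustration; the patent does not prescribe these particular rules.

```python
# Illustrative sketch of automatically adjusting cluster policy when the set of
# available virtual machine servers changes (servers added or removed). The
# specific rules below (ordering, replication factor, backup pick) are
# assumptions for this example, not the patent's prescribed rules.

REPLICATION_FACTOR = 2  # assumed redundancy level for the store replication policy


def adjust_cluster_policy(available_servers):
    """Re-derive assignment (901), store replication (902), and contingency (903)
    policies so that they consider the changed set of servers."""
    servers = sorted(available_servers)
    return {
        # 901: new virtual machines may be assigned to any currently available server
        "assignment_candidates": servers,
        # 902: replicate the store across up to REPLICATION_FACTOR available servers
        "replica_targets": servers[:REPLICATION_FACTOR],
        # 903: remaining servers stand by as backups should a server go offline
        "backup_servers": servers[REPLICATION_FACTOR:],
    }


def on_server_set_changed(previous, current):
    """Act 701: detect the change; acts 702/703: determine and apply the adjustment."""
    added, removed = current - previous, previous - current
    print(f"detected change: added={sorted(added)} removed={sorted(removed)}")
    return adjust_cluster_policy(current)


cluster = {"811", "812", "813", "814", "815"}
print(on_server_set_changed(cluster, cluster | {"816"}))   # expansion (Figure 8A)
print(on_server_set_changed(cluster, cluster - {"812"}))   # contraction (Figure 8B)
```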

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Stored Programmes (AREA)

Abstract

The automatic adjustment of policy within a cluster that includes a set of virtual machine servers (e.g., virtualization nodes). Each of the virtual machine servers has a server-specific hypervisor running thereon and may also execute one or more and potentially many virtual machines. Upon detecting that the set of virtual machine servers in the cluster has changed, the cluster policy is automatically adjusted to accommodate the change. As an example, a cluster policy may be changed such that the changed set is considered when assigning new virtual machines to given virtual machine servers within the set. As another example, a store replication policy, in which a store is replicated across multiple virtual machine servers in the cluster, may be adjusted to account for the changed set of virtual machine servers.

Description

AUTOMATED ADJUSTMENT OF CLUSTER POLICY
CROSS-REFERENCE TO RELATED APPLICATION
This patent application claims the benefit of United States provisional patent application No. 61/447,572, filed February 28, 2011, which provisional patent application is expressly incorporated herein by reference in its entirety. This patent application is also a continuation-in-part of U.S. Application No. 13/175,771 filed July 1, 2011, which is also incorporated herein by reference in its entirety.
BACKGROUND
For more than 40 years, technologists have known that one way to lower computing costs is to simultaneously share resources across multiple components and/or machines. This concept eventually led to the so-called client/server networking model where multiple desktop computers were linked together to a server where files and printer resources could be shared. Given the success achieved in improved performance and lowered costs through virtual servers, companies have been diligently attempting to replicate their efforts with "virtual desktops", which will now be explained.
As a user interfaces with a client computing system (hereinafter referred to as a "client"), the user is presented with a desktop environment. The desktop environment may include an intuitive visualization of various icons, windows, and other tools that the user may interact with to manipulate the various applications and environments offered by the desktop environment.
As events occur (such as user input), the desktop environment is processed in a manner that is appropriate given the event, resulting in perhaps some change to the state of the desktop environment. Conventionally, such desktop processing occurs on the client. However, desktop virtualization involves the offloading of the desktop processing to a location other than the client (hereinafter referred to as a "centralized desktop location"), which location is perhaps even remote from the client. That offloaded location may be a server, a server cluster, or a server cloud.
The centralized desktop location maintains a virtual machine for each supported desktop environment. The virtual machine has access to all of the desktop state necessary to construct rendering data (such as an image or rendering instructions) representing how the desktop environment should appear. The virtual machine also manages the processing that serves up the rendering data to the corresponding client, after which the client renders the corresponding desktop environment.
As the client interacts with the displayed desktop image, that client input is transmitted to the centralized desktop location. The corresponding virtual machine at the centralized desktop location interprets the client input, and processes the desktop. In response to this input, or in response to some other detected event, the virtual machine changes the state of the desktop if appropriate. If this changed state results in a change in how the desktop appears, the virtual machine constructs a different desktop image, and causes the centralized desktop location to transmit the altered desktop image to the client. From the user's perspective, this often occurs fast enough that the displayed desktop at the client is substantially immediately responsive to the user input at the client. The desktop as it exists on the centralized desktop location is often referred to as a "virtual desktop", and the application-level logic on the centralized desktop that is used to process the desktop is often referred to as a "virtual machine".
Typically, the centralized desktop location may manage a number of virtual desktops for a corresponding number of clients. For instance, the centralized desktop location may manage hundreds of virtual desktops. In some cases, the centralized desktop location is a physical machine, which is referred to herein as a "physical appliance".
BRIEF SUMMARY
At least one embodiment described herein relates to the automatic adjustment of policy within a cluster that includes a set of virtual machine servers. Each of the virtual machine servers has a server-specific hypervisor running thereon and may also execute one or more and potentially many virtual machines. Upon detecting that the set of virtual machine servers in the cluster has changed (e.g., by having one or more virtual machine servers added to or removed from the cluster), the cluster policy is automatically adjusted to accommodate the change.
As an example, a cluster assignment policy may be changed such that the changed set is considered when assigning new virtual machines to given virtual machine servers within the set. As another example, a store replication policy, in which a store is replicated across multiple virtual machine servers in the cluster, may be adjusted to account for the changed set of virtual machine servers. As a further example, a contingency policy may be affected that determines policy for what to do if a virtual machine server within the cluster goes offline or loses some level of functionality.
In at least one embodiment described herein, a cluster manager is abstracted above the server-specific hypervisors in the cluster, and this manager is also caused to be adjusted in response to the changed set. In some cases, the cluster manager is capable of interacting with server-specific hypervisors of multiple different types.
This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
BRIEF DESCRIPTION OF THE DRAWINGS
In order to describe the manner in which the above-recited and other advantages and features can be obtained, a more particular description of various embodiments will be rendered by reference to the appended drawings. Understanding that these drawings depict only sample embodiments and are not therefore to be considered to be limiting of the scope of the invention, the embodiments will be described and explained with additional specificity and detail through the use of the accompanying drawings in which:
Figure 1 illustrates a computing system in which some embodiments described herein may be employed;
Figure 2 abstractly illustrates a system that includes multiple virtual machine servers in which the cluster policy adjustment described herein may be employed;
Figure 3 abstractly illustrates an example virtual machine server, which represents an example of a virtual machine server illustrated in Figure 2;
Figure 4A illustrates the virtual machine server of Figure 3 in the context in which hard support resources are provided to the virtual machines through a server-specific hypervisor;
Figure 4B illustrates the virtual machine server of Figure 3 in the context in which soft support resources are provided to the virtual machines;
Figure 5 abstractly illustrates an environment in which a virtual machine is shown in conjunction with its soft support resources;
Figure 6 illustrates a system that is similar to the system of Figure 2, except with more detail shown;
Figure 7 illustrates a flowchart of a method for automatically adjusting a policy of a cluster;
Figure 8A illustrates a state flow in which a virtual machine server is added to the set of available servers in a cluster;
Figure 8B illustrates a state flow in which a virtual machine server is removed from the set of available servers in a cluster; and
Figure 9 abstractly illustrates elements of the cluster policy.
DETAILED DESCRIPTION
In accordance with embodiments described herein, policy of a cluster (that includes a set of virtual machine servers) is automatically adjusted. Each of the virtual machine servers has a server-specific hypervisor running thereon and may also execute one or more and potentially many virtual machines. Upon detecting that the set of virtual machine servers in the cluster has changed (e.g., by having one or more virtual machine servers added to or removed from the cluster), the cluster policy is automatically adjusted to accommodate the change. First, some introductory discussion regarding computing systems will be described with respect to Figure 1. Then, an example environment in which the cluster policy may be adjusted will be described with respect to Figures 2 through 7. Finally, embodiments of the cluster policy adjustment will be described with respect to Figures 8A through 9.
First, introductory discussion regarding computing systems is described with respect to Figure 1. Computing systems are now increasingly taking a wide variety of forms. Computing systems may, for example, be handheld devices, appliances, laptop computers, desktop computers, mainframes, distributed computing systems, or even devices that have not conventionally been considered a computing system. In this description and in the claims, the term "computing system" is defined broadly as including any device or system (or combination thereof) that includes at least one physical and tangible processor, and a physical and tangible memory capable of having thereon computer-executable instructions that may be executed by the processor. The memory may take any form and may depend on the nature and form of the computing system. A computing system may be distributed over a network environment and may include multiple constituent computing systems.
As illustrated in Figure 1, in its most basic configuration, a computing system 100 typically includes at least one processing unit 102 and memory 104. The memory 104 may be physical system memory, which may be volatile, non-volatile, or some combination of the two. The term "memory" may also be used herein to refer to nonvolatile mass storage such as physical storage media. If the computing system is distributed, the processing, memory and/or storage capability may be distributed as well. As used herein, the term "module" or "component" can refer to software objects or routines that execute on the computing system. The different components, modules, engines, and services described herein may be implemented as objects or processes that execute on the computing system (e.g., as separate threads).
In the description that follows, embodiments are described with reference to acts that are performed by one or more computing systems. If such acts are implemented in software, one or more processors of the associated computing system that performs the act direct the operation of the computing system in response to having executed computer-executable instructions. An example of such an operation involves the manipulation of data. The computer-executable instructions (and the manipulated data) may be stored in the memory 104 of the computing system 100. Computing system 100 may also contain communication channels 108 that allow the computing system 100 to communicate with other message processors over, for example, network 110.
Embodiments described herein may comprise or utilize a special purpose or general-purpose computer including computer hardware, such as, for example, one or more processors and system memory, as discussed in greater detail below. Embodiments described herein also include physical and other computer-readable media for carrying or storing computer-executable instructions and/or data structures. Such computer-readable media can be any available media that can be accessed by a general purpose or special purpose computer system. Computer-readable media that store computer-executable instructions are physical storage media. Computer-readable media that carry computer-executable instructions are transmission media. Thus, by way of example, and not limitation, embodiments of the invention can comprise at least two distinctly different kinds of computer-readable media: computer storage media and transmission media.
Computer storage media includes RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store desired program code means in the form of computer-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer. A "network" is defined as one or more data links that enable the transport of electronic data between computer systems and/or modules and/or other electronic devices. When information is transferred or provided over a network or another communications connection (either hardwired, wireless, or a combination of hardwired or wireless) to a computer, the computer properly views the connection as a transmission medium. Transmission media can include a network and/or data links which can be used to carry desired program code means in the form of computer-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer. Combinations of the above should also be included within the scope of computer-readable media.
Further, upon reaching various computer system components, program code means in the form of computer-executable instructions or data structures can be transferred automatically from transmission media to computer storage media (or vice versa). For example, computer-executable instructions or data structures received over a network or data link can be buffered in RAM within a network interface module (e.g., a "NIC"), and then eventually transferred to computer system RAM and/or to less volatile computer storage media at a computer system. Thus, it should be understood that computer storage media can be included in computer system components that also (or even primarily) utilize transmission media.
Computer-executable instructions comprise, for example, instructions and data which, when executed at a processor, cause a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions. The computer executable instructions may be, for example, binaries, intermediate format instructions such as assembly language, or even source code. Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the described features or acts described above. Rather, the described features and acts are disclosed as example forms of implementing the claims.
Those skilled in the art will appreciate that the invention may be practiced in network computing environments with many types of computer system configurations, including, personal computers, desktop computers, laptop computers, message processors, hand-held devices, multi-processor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, mobile telephones, PDAs, pagers, routers, switches, and the like. The invention may also be practiced in distributed system environments where local and remote computer systems, which are linked (either by hardwired data links, wireless data links, or by a combination of hardwired and wireless data links) through a network, both perform tasks. In a distributed system environment, program modules may be located in both local and remote memory storage devices.
Figure 2 abstractly illustrates a system 200 that includes multiple virtual machine servers 210. In this example, there are five virtual machine servers illustrated including virtual machine servers 211A, 211B, 211C, 211D and 211E. However, the ellipses 211F abstractly represent that the system 200 may include any number of virtual machine servers from as few as one to as many as enumerable. Each virtual machine server includes a server-specific hypervisor as will be described further below. An example of a system 200 is a server rack that is capable of holding multiple virtual machine servers.
Each virtual machine server may, for example, be structured as described above for the computing system 100 of Figure 1. However, the virtual machine server may be any device, system, or combination thereof, that is capable of running multiple virtual machines and providing supporting resources to such virtual machines. In one example, the virtual machine server may be a physical appliance that is capable of being fit into a slot of a server rack. For instance, if the system 200 of Figure 2 were a server rack having multiple slots, the system 200 could fit multiple of such physical appliances.
Figure 3 abstractly illustrates an example virtual machine server 300, which represents an example of a virtual machine server illustrated in Figure 2. For instance, the example virtual machine server 300 may be an example of any of the virtual machine servers 210 of Figure 2. The virtual machine server 300 operates a set of virtual machines 310, and also provides resources 320 to the virtual machines 310. There may be any number of virtual machines 310 operating in the virtual machine server 300. In Figure 3, there are three virtual machines 311, 312 and 313 shown. However, the ellipses 314 represent that the number of virtual machines 310 may be as few as one, but potentially as many as thousands, or even more. Thus, the virtual machine server 300 may be a centralized location that manages many virtual machines. Each virtual machine manages state (e.g., a desktop state) for a corresponding client that may perhaps be remotely located. The virtual machine provides an image (or perhaps other rendering instructions) representing a desktop image to the corresponding client, and alters the image or other desktop state in response to detected events, such as, for example, a user interfacing with the current desktop image.
As the client interacts with the displayed desktop image corresponding to a virtual machine, that client input is transmitted to the virtual machine server 300, and ultimately to the virtual machine corresponding to that client input. The corresponding virtual machine interprets the client input, and processes the client input. In response to this input, or in response to some other detected event, the virtual machine changes the state of the virtual desktop if appropriate. If this changed state results in a change in how the desktop appears, the virtual machine constructs and transmits another desktop image to the client. From the user's perspective, this occurs often fast enough that the displayed desktop is substantially immediately responsive to the user input.
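To make the round trip just described concrete, the following minimal Python sketch models a loop in which client input is forwarded to the corresponding virtual machine, which updates the desktop state and, only when the appearance changes, returns new rendering data. All class and method names here are illustrative assumptions and are not drawn from the patent or from any particular product.

```python
# Illustrative sketch only: models the client-input -> virtual-desktop loop
# described above. Names are hypothetical, not taken from the patent.

class VirtualDesktop:
    """Holds the desktop state managed by one virtual machine."""

    def __init__(self):
        self.state = {"windows": [], "focus": None}
        self._dirty = True  # appearance changed since last render

    def handle_input(self, event):
        """Interpret a client event and update desktop state if appropriate."""
        if event["type"] == "open_window":
            self.state["windows"].append(event["name"])
            self.state["focus"] = event["name"]
            self._dirty = True
        # ... other event types would be handled here ...

    def render_if_changed(self):
        """Return new rendering data only when the appearance has changed."""
        if not self._dirty:
            return None
        self._dirty = False
        return {"image_for": list(self.state["windows"])}


def serve_client_event(desktop, event):
    """One round trip: apply the event, then ship updated rendering data (if any)."""
    desktop.handle_input(event)
    return desktop.render_if_changed()


if __name__ == "__main__":
    vd = VirtualDesktop()
    update = serve_client_event(vd, {"type": "open_window", "name": "editor"})
    print(update)  # rendering data the server would transmit back to the client
```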
Each virtual machine needs resources in order to operate properly. The virtual machine server 300 provides a variety of support resources 320 for each of the virtual machines 310. For instance, some of that support includes hard (physical) support resources such as processing resources, memory resources, storage resources, network access resources, and the like. However, each virtual machine also uses soft support resources, such as software and data. As far as software, the virtual machine may use an operating system, one or more applications, and/or one or more other software modules. As far as data, the virtual machine server 300 may host some or all of data that is used by the virtual machine in order to operate, such as user preference data, and other application state. The "soft" support resources are so named because the resources (such as software and data) are capable of being copied from one location to another, or accessed remotely over a network.
Figure 4A illustrates the virtual machine server 300 of Figure 3 in the context 400A in which hard support resources 410 are provided to the virtual machines 310 through a server-specific hypervisor 401. Hypervisors are known in the art, and abstract away the hard support resources such that, from the perspective of each individual virtual machine, the virtual machine has isolated and exclusive access to particular hard support resources, such as processing resources, memory resources, network resources, storage resources, and the like. Thus, even though the virtual machine server 300 supports many virtual machines, each virtual machine seems to have its own dedicated hardware resources from the perspective of the virtual machine itself. Hardware resources 410 of Figure 4A are an example of the support resources 320 of Figure 3.
Figure 4B illustrates the virtual machine server 300 of Figure 3 in the context 400B in which soft support resources are provided to the virtual machines 310. First referring to Figure 5, an environment 500 is shown in which a virtual machine 501 is shown in conjunction with its soft support resources 510. The soft support resources 510 include a first portion 511, and a second portion 512. The ellipses 513 represent that there may be other portions of the soft support resources 510 as well. In accordance with the principles described herein, for each of at least some of the virtual machines operating in the virtual machine server 300, a portion of the soft support resources are "internal" soft support resources in the sense that they are provided by the virtual machine server 300, and a portion of the soft support resources are "external" in the sense that they are provided from a location that is external to the virtual machine server 300. For instance, the context 400B of Figure 4B shows that virtual machine 311 has access to internal soft support resources 411 as well as external soft support resources 412.
In accordance with the embodiments described herein, the allocation of soft resources is made in a way that improves performance of the virtual machine, and perhaps also allows for efficient migration of the virtual machine from one physical support environment to another. Referring to Figure 2, for example, suppose a virtual machine were located on the virtual machine server 211A. The internal soft support resources for that virtual machine would be located on the virtual machine server 211A. However, the external soft support resources may be located on any other virtual machine server(s) or perhaps even not within the system 200 at all, and perhaps on a storage area network.
The principles described herein are not limited to the components that are included in the internal soft support resource portion 511, nor to the components that are included in the external soft support resource portion 512. As an example only, suppose that the soft support resources may be represented as a desktop image. In the VMDK (Virtual Machine Disk) format, a desktop image may be split into three pieces. In that case, perhaps one or two of those pieces may be part of the internal soft support resource portion 511, and the remaining two or one (respectively) of those pieces may be part of the external soft support resource portion 512. In another format offered by Unidesk Corporation, the desktop image may be divided into five pieces. In that case, one, two, three, or even four, of those pieces may be part of the internal soft support resource portion 511, and the remaining four, three, two or one (respectively) of those pieces may be part of the external soft support resource portion 512.
These are, however, just examples, as the principles described herein may be applied to any case in which the desktop state is divided into at least N different pieces (where N is a positive integer that is two or greater), and where M of those pieces (where M is any positive integer less than N) are part of the internal soft support resource portion 511, and where N-M (read N minus M) of those pieces are part of the external soft support resource portion 512.
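As a simple illustration of the N-piece split described above (M internal pieces and N - M external pieces), consider the following Python sketch. The piece names and the particular value of M are assumptions made only for this example.

```python
# Illustrative only: split N desktop-image pieces into M internal pieces and
# N - M external pieces, as described above. Piece names are hypothetical.

def split_soft_resources(pieces, m):
    """Return (internal_portion, external_portion) for a desktop image.

    pieces : ordered list of N image pieces (N >= 2)
    m      : number of pieces kept on the virtual machine server (0 < m < N)
    """
    n = len(pieces)
    if n < 2 or not (0 < m < n):
        raise ValueError("need N >= 2 pieces and 0 < M < N")
    return pieces[:m], pieces[m:]


# Example: a VMDK-style image split into three pieces, one kept internal.
internal, external = split_soft_resources(["os_layer", "app_layer", "user_data"], m=1)
print(internal)  # ['os_layer']               -> internal soft support resource portion
print(external)  # ['app_layer', 'user_data'] -> external soft support resource portion
```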
The allocation of which soft resources are to be internal and which are to be external to the physical support appliance may be made according to any criteria. In one embodiment, the criteria are selected carefully to promote faster operating times, and greater ease of migration should the virtual machine be migrated to another physical support environment.
In the case in which access latency is less when accessing internal soft resources as compared to external soft resources, the criteria may be that soft resources that are, on average, accessed more frequently during operation are allocated as internal resources. On the other hand, soft resources that are less susceptible to latency are allocated externally. For instance, an operating system is a soft resource (e.g., a piece of software) that is frequently accessed during operation. Furthermore, certain applications may be frequently accessed. On the other hand, some user data may be less frequently accessed. For instance, user preference data may perhaps be accessed just once during rendering of a user interface corresponding to a particular virtual machine. Accordingly, even if there is a larger latency when accessing such user preference data, that longer latency would not significantly impact the user experience at the client corresponding to the virtual machine.
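As a non-limiting sketch of this criterion, the following Python fragment allocates soft resources by comparing an assumed access-frequency profile against an assumed threshold; the resource names, frequencies, and threshold are hypothetical values chosen only to show the idea.

```python
# Illustrative sketch only: the resource names, access frequencies, and
# the threshold are hypothetical values chosen to show the criterion.
ACCESS_FREQUENCY_THRESHOLD = 100  # assumed cutoff, in accesses per hour

def allocate_soft_resources(resources):
    """Place frequently accessed soft resources internally, the rest externally."""
    internal, external = [], []
    for name, accesses_per_hour in resources.items():
        if accesses_per_hour >= ACCESS_FREQUENCY_THRESHOLD:
            internal.append(name)   # latency-sensitive; keep on the server
        else:
            external.append(name)   # tolerant of higher access latency
    return internal, external

profile = {
    "operating_system": 5000,
    "office_application": 800,
    "user_preference_data": 1,      # read roughly once per UI rendering
}
print(allocate_soft_resources(profile))
```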
Such an allocation may significantly improve operational performance. However, this improvement may be obtained in conjunction with improved efficiency in migration if the more common soft resources are made internal at multiple physical support environments. For instance, perhaps a subset of operating systems is made available at multiple physical support environments. If the internal soft resources are provisioned at both the source physical support environment and the target physical support environment for a particular virtual machine migration, this reduces or perhaps even eliminates any copying needed to facilitate the migration. Accordingly, migration of a virtual machine may be performed more efficiently, and may even appear seamless to the client user.
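A minimal Python sketch of this migration shortcut follows, under the assumption that the target physical support environment can report which resources it already has provisioned; the data structures and function name are hypothetical.

```python
# Illustrative sketch only: the data structures and function name are
# hypothetical and not part of any hypervisor or cluster API.
def copy_for_migration(vm_internal_resources, target_provisioned):
    """Return the internal soft resources that must be copied to the target."""
    missing = [r for r in vm_internal_resources if r not in target_provisioned]
    # If the target already hosts every internal resource, no copying is
    # needed and the migration can appear seamless to the client user.
    return missing

target = {"os_image_a", "app_layer_b"}  # assumed pre-provisioned at the target
print(copy_for_migration(["os_image_a", "app_layer_b"], target))    # []
print(copy_for_migration(["os_image_a", "user_delta_c"], target))   # ['user_delta_c']
```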
Figure 6 illustrates a system 600 that is similar to the system 200 of Figure 2, except with more detail shown. Again the system 600 includes the plurality 210 of virtual machine servers including virtual machine servers 211A, 211B, 211C, 211D and 211E, each capable of running a server-specific hypervisor. As previously mentioned, the ellipses 211F represent flexibility in the number of virtual machine servers within the system. Also, each virtual machine server 211A through 211E is illustrated as including a respective server-specific hypervisor 611A through 611E. For instance, referring to Figure 4A, the virtual machine server 300 runs the hypervisor 401.
The system 600 also includes a cluster 601 of virtual machine servers 211A through 211F. Such virtual machine servers are often also referred to in the art as "hosts" or "nodes". A "cluster" of virtual machine servers is a collection of virtual machine servers. Furthermore, a cluster component 610 manages the cluster of virtual machine servers as a single entity such that the resources from all of the virtual machine servers within the cluster are aggregated.
The cluster component 610 is configured to run a cluster manager 611 that is abstracted above each of the virtual machine servers so as to manage the resources of the cluster of virtual machine servers in an aggregated manner in accordance with a cluster policy. The "cluster policy" more generally defines a set of one or more rules governing how the cluster is to be managed. As an example only, the cluster policy might define the resource allocation of the cluster of virtual machine servers.
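As a non-limiting sketch, a cluster policy might be represented as a simple data structure such as the following Python fragment; the field names merely echo the assignment, replication, and contingency policies discussed below with respect to Figure 9 and are not drawn from any particular implementation.

```python
# Illustrative sketch only: the field names merely mirror the assignment,
# replication, and contingency policies discussed below; real rule sets
# may be far richer.
from dataclasses import dataclass, field

@dataclass
class ClusterPolicy:
    assignment_rules: dict = field(default_factory=dict)    # cf. policy 901
    replication_factor: int = 2                              # cf. policy 902
    contingency_backups: list = field(default_factory=list)  # cf. policy 903

policy = ClusterPolicy(
    assignment_rules={"strategy": "least_loaded"},
    replication_factor=2,
    contingency_backups=["server_211E"],
)
print(policy)
```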
The cluster manager 611 may be, for example, capable of interfacing with a single type of server-specific hypervisor, or may alternatively be capable of interfacing with server-specific hypervisors of a variety of different types. In the latter case, each server-specific hypervisor may perhaps register its type with the cluster manager 611. When communicating with a server-specific hypervisor, the cluster manager 611 may perhaps perform appropriate communication translation given the type of that server-specific hypervisor. Thus, the server-specific hypervisors may differ in the kinds of application program interfaces that they provide.
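The following Python sketch suggests one way such per-type registration and translation might be arranged; the type names and translator callables are hypothetical placeholders rather than real hypervisor APIs.

```python
# Illustrative sketch only: the hypervisor type names and the translator
# callables are hypothetical; real hypervisor APIs differ.
class ClusterManager:
    def __init__(self):
        self._translators = {}

    def register_hypervisor(self, server_id, hv_type, translator):
        # Each server-specific hypervisor registers its type on joining.
        self._translators[server_id] = (hv_type, translator)

    def send_command(self, server_id, command):
        hv_type, translator = self._translators[server_id]
        # Translate the generic command into the hypervisor's own API call.
        return translator(command)

manager = ClusterManager()
manager.register_hypervisor("211A", "type_x", lambda cmd: f"x-api:{cmd}")
manager.register_hypervisor("211B", "type_y", lambda cmd: f"y-api:{cmd}")
print(manager.send_command("211A", "start_vm"))  # x-api:start_vm
```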
The system 600 also includes a virtual machine assignment component 620 configured to automatically detect when the set of virtual machine servers operating within the cluster has changed, and further configured to automatically adjust the cluster policy 612 accordingly such that the changed set of virtual machine servers is considered when assigning virtual machines to the set of virtual machine servers in the cluster.
The cluster component 610 could be implemented using hardware, or a combination of hardware and software. Likewise, the virtual machine assignment component 620 could be implemented using hardware, or a combination of hardware and software. In the case of the combination of hardware and software, the computing system may instantiate and/or operate the corresponding component (e.g., the cluster component 610 and/or the virtual machine assignment component 620) by having its processor(s) execute one or more computer-executable instructions that are on one or more computer-storage media that are comprised by a computer program product.
In some embodiments, the components may be executed on hardware that is separate and apart from the virtual machine servers. In other embodiments, one or both of the cluster component 610 and the virtual machine assignment component 620 may be at least partially executed in a distributed manner on multiple virtual machine servers in such a manner that if one of the virtual machine servers were to shut down, the remaining virtual machine servers could continue operation of the corresponding component.
Figure 7 illustrates a flowchart of a method 700 for automatically adjusting a policy of a cluster. The method 700 may, for example, be performed by the virtual machine assignment component 620 of Figure 6. The virtual machine assignment component detects that the set of available virtual machine servers for a particular cluster has changed (act 701).
As an example, the set of virtual machine servers may be expanded by adding an additional server as represented by the context 800A of Figure 8A, in which the previous set 810A of virtual machine servers 811 through 815 is expanded (as represented by arrow 801A) to form an expanded virtual machine server set 820A that includes an additional virtual machine server 816. Virtual machine servers may be detected to be added, for example, when a new virtual machine server is plugged into, or otherwise provisioned to operate within, the cluster.
As another example, the set of virtual machine servers may be contracted by removing a virtual machine server as represented by the context 800B of Figure 8B, in which the previous set 810B of virtual machine servers 811 through 815 is contracted (as represented by arrow 801B) to form a contracted virtual machine server set 820B in which the virtual machine server 812 is removed. Virtual machine servers may be detected to be removed, for example, when a virtual machine server is unplugged from, or otherwise rendered inoperable within, the cluster.
In response to the detected change, the virtual machine assignment component determines how to adjust a cluster policy of the particular cluster (act 702), and further causes the cluster policy to be adjusted in the determined way (act 703).
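A compact Python sketch of this detect/determine/adjust flow (acts 701 through 703) follows; the policy representation and the specific adjustment shown, which simply records the new set of eligible servers, are assumptions and only one of many possible adjustments.

```python
# Illustrative sketch only of acts 701-703; the policy representation and
# the specific adjustment (recording eligible servers) are assumptions.
def adjust_cluster_policy(previous_servers, current_servers, policy):
    added = current_servers - previous_servers      # act 701: detect additions
    removed = previous_servers - current_servers    # act 701: detect removals
    if not added and not removed:
        return policy                               # no change detected

    # Act 702: determine how the cluster policy should be adjusted.
    adjusted = dict(policy)
    adjusted["eligible_servers"] = sorted(current_servers)

    # Act 703: cause the cluster policy to be adjusted as determined.
    return adjusted

old = {"811", "812", "813", "814", "815"}
new = old | {"816"}                                 # server 816 has been added
print(adjust_cluster_policy(old, new, {"eligible_servers": sorted(old)}))
```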
When a new virtual machine server is added to the set of available virtual machine servers as in the case of Figure 8A, the cluster policy may be adjusted by allowing the virtual machine assignment policy to recognize the existence of the expanded set of virtual machine servers. Figure 9 abstractly illustrates a cluster policy 900 that includes a virtual machine assignment policy 901. The assignment policy defines how new virtual machines are to be assigned to the virtual machine servers when they are provisioned. Such a policy may take into consideration load balancing, latency, current server performance, or any other factor relevant to the assignment of new virtual machines to the virtual machine servers.
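By way of a hypothetical example of such an assignment policy, the following Python fragment picks the least-loaded eligible server; the load metric and scoring rule are assumptions, and an actual assignment policy 901 could also weigh latency, current server performance, and other factors.

```python
# Illustrative sketch only: the load metric and "least loaded" rule are
# assumptions; an actual assignment policy could weigh latency, current
# server performance, and other factors as well.
def assign_virtual_machine(servers):
    """Pick the eligible virtual machine server with the lowest current load."""
    return min(servers, key=lambda s: s["load"])

servers = [
    {"name": "811", "load": 0.72},
    {"name": "815", "load": 0.40},
    {"name": "816", "load": 0.05},  # newly added server, lightly loaded
]
print(assign_virtual_machine(servers)["name"])  # 816
```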
The store replication policy 902 may also be altered to recognize the expanded set. For instance, the new virtual machine server may store data redundantly with other virtual machine servers in order to provide for a proper level of reliability. The contingency policy 903 may also be affected by the expanded set, since the new virtual machine server is now available to serve as a backup should one or more of the other virtual machine servers fail. As an example, the store may be a file system, although this is not required. For instance, a store may be any device, system, or combination of devices and/or systems that is capable of storing structured data.
When a virtual machine server is removed from the set of available virtual machine servers as in the case of Figure 8B, the cluster policy may be adjusted by allowing the virtual machine assignment policy to recognize the existence of the contracted set of virtual machine servers. The store replication policy 902 may also be altered to recognize the contracted set. For instance, the existing virtual machine servers may re-replicate data amongst each other to provide the proper level of redundancy responsive to the removed virtual machine server no longer being available. The contingency policy 903 may also be affected by the contracted set, since the removed virtual machine server is no longer available to serve as a backup should one or more of the other virtual machine servers fail.
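As a non-limiting sketch of such re-replication, the following Python fragment recomputes replica placement over the servers that remain; the round-robin placement and replication factor are assumptions, not the mechanism required by the described embodiments.

```python
# Illustrative sketch only: round-robin placement over the remaining
# servers is an assumption, not the mechanism required by the embodiments.
def rereplicate(stores, servers, replication_factor=2):
    """Recompute replica placement across the servers that remain in the set."""
    placement = {}
    for i, store in enumerate(sorted(stores)):
        placement[store] = [servers[(i + k) % len(servers)]
                            for k in range(replication_factor)]
    return placement

remaining = ["811", "813", "814", "815"]  # virtual machine server 812 removed
print(rereplicate({"store_a", "store_b"}, remaining))
```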
Accordingly, the principles described herein provide an automated mechanism for adjusting cluster policy when there is a change in the set of virtual machine servers that are within the cluster. The present invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described embodiments are to be considered in all respects only as illustrative and not restrictive. The scope of the invention is, therefore, indicated by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.

Claims

CLAIMS
What is claimed is:
1. A computer program product comprising one or more computer-storage media having thereon computer-executable instructions that are structured such that, when executed by one or more processors of a computing system, cause the computing system to perform a method for automatically adjusting a policy of a cluster that includes a set of one or more virtual machine servers, the method comprising:
an act of detecting that a set of virtual machine servers for a particular cluster has changed, wherein each of the set of servers runs a server-specific hypervisor; an act of determining how to adjust a cluster policy of the particular cluster responsive to the change in the set of available virtual machine servers; and
an act of causing the cluster policy to be adjusted as determined in the act of determining.
2. The computer program product in accordance with Claim 1, wherein the set of available servers has changed by one or more hypervisor-enabled virtual machine servers being added to the set.
3. The computer program product in accordance with Claim 2, wherein the cluster policy is adjusted by allowing virtual machine assignment policy to recognize the existence of the expanded set of virtual machine servers.
4. The computer program product in accordance with Claim 1, wherein the set of available servers has changed by one or more hypervisor-enabled virtual machine servers being removed from the set.
5. The computer program product in accordance with Claim 4, wherein the cluster policy is adjusted by allowing virtual machine assignment policy to consider that the set of virtual machine servers has been reduced.
6. The computer program product in accordance with Claim 5, wherein the cluster policy includes a store replication policy, wherein the adjustment of the cluster policy comprises re-replicating a store to another virtual machine server in the set responsive to one of the virtual machine servers on which the store was previously replicated being removed from the set.
7. The computer program product in accordance with Claim 6, wherein the store comprises at least a portion of a file system.
8. The computer program product in accordance with Claim 1, wherein the method further comprises:
an act of causing a cluster manager to be abstracted above each of the server-specific hypervisors.
9. The computer program product in accordance with Claim 8, wherein the cluster manager is capable of interfacing with server-specific hypervisors of a plurality of hypervisor types.
10. The computer program product in accordance with Claim 1, wherein the computer-executable instructions are at least partially executed in a distributed manner across a plurality of the virtual machine servers in the set of virtual machine servers.
11. The computer program product in accordance with Claim 1, wherein the computing system that executes the computer-executable instructions is separate from each of the set of virtual machine servers.
12. A method for automatically adjusting a policy of a cluster that includes a set of one or more virtual machine servers, the method comprising:
an act of detecting that a set of virtual machine servers for a particular cluster has changed, wherein each of the set of virtual machine servers runs a server-specific hypervisor;
an act of determining how to adjust a cluster policy of the particular cluster responsive to the change in the set of available virtual machine servers; and
an act of causing the cluster policy to be adjusted as determined in the act of determining.
13. The method in accordance with Claim 12, wherein the set of available servers has changed by one or more hypervisor-enabled virtual machine servers being added to the set, wherein the cluster policy is adjusted by allowing virtual machine assignment policy to recognize the existence of the expanded set of virtual machine servers.
14. The method in accordance with Claim 12, wherein the set of available servers has changed by one or more hypervisor-enabled virtual machine servers being removed from the set, wherein the cluster policy is adjusted by allowing virtual machine assignment policy to consider that the set of virtual machine servers has been reduced.
15. The method in accordance with Claim 12, wherein the cluster policy includes a contingency policy, wherein the adjustment of the cluster policy comprises adjusting a contingency action that would occur should one of the virtual machine servers in the set lose at least some functionality.
16. The method in accordance with Claim 12, further comprising:
an act of causing a cluster manager to be abstracted above each of the server- specific hypervisors.
17. A system comprising:
a plurality of virtual machine servers, each capable of running a server-specific hypervisor;
a cluster component configured to run a cluster manager that is abstracted above each of the at least some of the plurality of virtual machine servers such that the at least some of the plurality of virtual machine servers operate within the cluster having the cluster manager; and
a virtual machine assignment component configured to automatically detect when the set of virtual machine servers operating within the cluster has changed, and further configured to automatically adjust a cluster policy accordingly such that the changed set of virtual machine servers are considered when assigning virtual machines to the set of virtual machine servers in the server domain.
18. The system in accordance with Claim 17, wherein the cluster component operates at least in part by execution that is distributed across a plurality of the set of virtual machine servers.
19. The system in accordance with Claim 17, wherein the virtual machine assignment component operates at least in part by execution that is distributed across a plurality of the set of virtual machine servers.
20. The system in accordance with Claim 17, further comprising:
a management server that is separate from the set of virtual machine servers, wherein one or both of the cluster component and the virtual machine assignment component operates at least in part by execution on the management server.
PCT/US2012/027010 2011-02-28 2012-02-28 Automated adjustment of cluster policy Ceased WO2012158241A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US201161447572P 2011-02-28 2011-02-28
US61/447,572 2011-02-28
US13/175,771 2011-07-01
US13/175,771 US9223605B2 (en) 2011-07-01 2011-07-01 Virtual machine allocation internal and external to physical environment

Publications (1)

Publication Number Publication Date
WO2012158241A1 true WO2012158241A1 (en) 2012-11-22

Family

ID=46001725

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2012/027010 Ceased WO2012158241A1 (en) 2011-02-28 2012-02-28 Automated adjustment of cluster policy

Country Status (1)

Country Link
WO (1) WO2012158241A1 (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090138752A1 (en) * 2007-11-26 2009-05-28 Stratus Technologies Bermuda Ltd. Systems and methods of high availability cluster environment failover protection
US20090144800A1 (en) * 2007-12-03 2009-06-04 International Business Machines Corporation Automated cluster member management based on node capabilities
US20100257269A1 (en) * 2009-04-01 2010-10-07 Vmware, Inc. Method and System for Migrating Processes Between Virtual Machines
US20100333089A1 (en) * 2009-06-29 2010-12-30 Vanish Talwar Coordinated reliability management of virtual machines in a virtualized system

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10341203B2 (en) * 2015-01-02 2019-07-02 Gigamon Inc. Policy tracking in a network that includes virtual devices
US10951499B2 (en) 2015-01-02 2021-03-16 Gigamon Inc. Tracking changes in network configurations
EP3568764A1 (en) * 2017-01-12 2019-11-20 Bull SAS Method for evaluating the performance of a chain of applications within an it infrastructure

Similar Documents

Publication Publication Date Title
US9542215B2 (en) Migrating virtual machines from a source physical support environment to a target physical support environment using master image and user delta collections
EP3792760B1 (en) Live migration of clusters in containerized environments
US10007533B2 (en) Virtual machine migration
US9582221B2 (en) Virtualization-aware data locality in distributed data processing
CA2978889C (en) Opportunistic resource migration to optimize resource placement
US9977698B2 (en) Virtual machine migration into the cloud
US8924969B2 (en) Virtual machine image write leasing
US20050240932A1 (en) Facilitating access to input/output resources via an I/O partition shared by multiple consumer partitions
US9223605B2 (en) Virtual machine allocation internal and external to physical environment
JP2015518997A (en) Integrated storage / VDI provisioning method
US11336519B1 (en) Evaluating placement configurations for distributed resource placement
CN120677467A (en) Determining silence timeout for containerized workload
US8959383B2 (en) Failover estimation using contradiction
WO2012158241A1 (en) Automated adjustment of cluster policy
US12413522B2 (en) Method and system for optimizing internal network traffic in Kubernetes
Ahmadpanah et al. FlexiMigrate: enhancing live container migration in heterogeneous computing environments
US20140059538A1 (en) Virtual machine state tracking using object based storage
WO2016109743A1 (en) Systems and methods for implementing stretch clusters in a virtualization environment
WO2023132928A1 (en) Method and system for performing computational offloads for composed information handling systems
EP3374882B1 (en) File system with distributed entity state
WO2012118849A1 (en) Migration of virtual machine pool
HK1259159B (en) Distributed data set storage and retrieval
HK1259159A1 (en) Distributed data set storage and retrieval

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12716796

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205N DATED 06/11/2013)

122 Ep: pct application non-entry in european phase

Ref document number: 12716796

Country of ref document: EP

Kind code of ref document: A1