
CN110914804A - Virtual Machine Migration Manager and Method - Google Patents


Info

Publication number
CN110914804A
CN110914804A (application CN201780093133.XA)
Authority
CN
China
Prior art keywords
migration
plan
created
manager
database
Prior art date
Legal status
Pending
Application number
CN201780093133.XA
Other languages
Chinese (zh)
Inventor
普拉迪普·贾卡迪许
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Publication of CN110914804A

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806 Task transfer initiation or dispatching
    • G06F 9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F 9/485 Task life-cycle, e.g. stopping, restarting, resuming execution
    • G06F 9/4856 Task life-cycle, resumption being on a different machine, e.g. task migration, virtual machine migration
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5061 Partitioning or combining of resources
    • G06F 9/5066 Algorithms for mapping a plurality of inter-dependent sub-tasks onto a plurality of physical CPUs
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5061 Partitioning or combining of resources
    • G06F 9/5077 Logical partitioning of resources; Management or configuration of virtualized resources

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The present invention provides a VM migration manager 100, comprising: a database 101 for storing at least one pre-created VM migration plan 102; a receiving unit 103 for receiving a VM migration request 104; and a processing unit 105, configured to query the database 101 for a pre-created VM migration plan 102 according to the received VM migration request 104 and, if the query returns a pre-created VM migration plan 102, perform migration of at least one VM 106 according to the pre-created VM migration plan 102.

Description

Virtual machine migration manager and method
Technical Field
The present invention relates to a Virtual Machine (VM) migration manager, a method of operating the VM migration manager, a computing system including the VM migration manager, and a method of operating the computing system. In particular, the present invention relates to fast VM migration in a data center environment.
Background
In the cloud computing era, data center level computing has become an important solution. A data center may be organized into one or more clusters. Each cluster may have one or more racks. Each rack may have one or more physical compute boxes (physical nodes). Fig. 8 illustrates such a data center organization with cluster level, rack level, and physical nodes.
Virtualization of all components (e.g., computing devices, network devices, storage devices, and other devices) helps cloud computing service providers allocate resources very quickly, as well as accommodate more and more customers and serve more requests. Through virtualization, a software-defined data center may even be created within the actual data center, e.g., for resource management purposes.
As cloud computing grows, more and more applications are moving from user terminals to data center level computing. Requests entering the data center (e.g., for performing processing on behalf of an application) are also very frequent and fast. To manage resources, schedule tasks, and allocate resources, at least one cloud-scale scheduler (labeled "data center resource allocator/scheduler" in fig. 8, also referred to as a "coordinator") is required. When a task is scheduled, the coordinator can provide optimal scheduling according to the data center resource information.
In the case of cloud computing, the coordinator must maintain an overview of the entire data center and of the scheduled tasks. Each request comes with many scheduling constraints and Service Level Agreements (SLAs) that require the processing of the request to be completed within a specified time. This makes scheduling more complex. Nevertheless, the coordinator should be able to schedule requests quickly and optimally.
Virtualization is the key technology that enables cloud computing at its current scale. It has many advantages. For example, due to virtualization, applications (including applications comprising multiple VMs) may be operated on a single physical node, and the energy consumption of the data center in which the physical node is located may be reduced. In addition, one or more VMs may also be migrated. Migration is a technique for moving a VM from one physical node to another. Live migration means that the VM is moved while it is servicing requests, without affecting the service.
Migration requests may be triggered for a variety of reasons, e.g., for load balancing, security reasons, resource consolidation, resolving bottlenecks, resolving hotspots, optimal scheduling, etc. When a migration request is triggered, an initialization phase typically begins. During this initialization phase, an optimal migration plan needs to be created. The optimal plan includes the optimal target physical nodes and paths over which the state of the VM to be migrated can be transferred without interfering with the services provided by the VM. To determine the target nodes and the paths for migration, a complete view of the data center and a current view of resource utilization and status are typically required. After the migration plan is created, the plan is pushed to the actual physical node to move the VM to the target physical node.
Fig. 9 illustrates a conventional solution for VM migration within an existing conventional cloud coordinator.
Multiple VMs (which may be applications/containers) are deployed and managed by a centralized or decentralized coordinator. These coordinators provide the functionality to migrate one or more VMs, i.e. also to migrate an entire container/application, based on certain constraints. The constraints may be related to resource consolidation, optimal scheduling, security, or may exist for any other reason. The VM is migrated live, and the coordinator ensures that services associated with the migrated VM are not affected. The VM is migrated from one physical node to another physical node. In the case of resource consolidation, the VM is migrated from one physical node to another, and the vacated physical node is then shut down.
However, these conventional solutions, including the one shown in fig. 9, have drawbacks. To migrate a VM in response to a migration request, a path needs to be created and the target node must be determined based on the constraints, the VM's existing state, and its configuration. The constraints depend on the reason for migrating the VM, etc. Such paths and target nodes are determined before starting the migration process. An optimal migration path is then created, and the migration of the VM proceeds next.
Creating a plan that contains the optimal target node and optimal path for migrating the VM is very time consuming. In particular, the larger the system with multiple VMs (e.g., for a distributed system), the more complex the process of determining the target node and the path. The same applies to a system with accelerators, or to a macro service or a group of VMs containing multiple micro services. Creating a plan for such a large system and for a larger number of VMs is very time consuming.
In more detail, the disadvantages of the conventional solution for migrating VMs are as follows. Conventional cloud coordinators will only create migration plans after receiving a migration request. This can delay migration of VMs and can have negative impacts on services and SLAs. Furthermore, conventional cloud coordinators do not have a function of collecting resource information and other information required for VM migration in a granular manner.
Furthermore, conventional cloud coordinators do not have the capability to migrate a complete distributed system, or even a set of VMs or a set of microservices. Furthermore, conventional coordinators do not take into account many constraints when creating migration plans. It is also not possible to create multiple plans based on service types/VMs and constraints and to store, update, manage and discard the multiple plans.
Disclosure of Invention
In view of the above disadvantages, the present invention aims to improve the conventional coordinator. It is an object of the present invention to provide a VM migration manager (coordinator) and a method of faster VM migration. In particular, the VM migration manager and method should provide the best way to migrate one or more VMs, should have the functionality to migrate distributed systems and macro services, and should use all possible constraints and information to create a migration plan. Thus, the VM migration manager and method are directed to better fulfill the migration SLA.
The object of the invention is achieved by the solution presented in the attached independent claims. Advantageous implementations of the invention are further defined in the dependent claims.
In particular, the solution of the invention is based on a pre-created migration plan in order to initiate VM migration more easily and faster. In particular, the solution provides N pre-created migration plans, and a way to select one of the N plans based on constraints and/or based on triggers for migration. Thus, a faster implementation of migration as a service is also provided.
A first aspect of the present invention provides a VM migration manager, comprising: a database for storing at least one pre-created VM migration plan; a receiving unit configured to receive a VM migration request; and a processing unit configured to query the database for a pre-created VM migration plan according to the received VM migration request and, if the query returns a pre-created VM migration plan, perform migration of at least one VM according to the pre-created VM migration plan.
Because the VM migration plan is pre-created, the VM migration manager can perform VM migration faster. The VM migration manager may select a migration plan that provides the best way to migrate one or more VMs. In addition, distributed systems and macro services may also be migrated. Since the plan is created in advance, it is easier to comply with all possible constraints and information available in the network.
In an implementation form of the first aspect, the database is further configured to store state information provided by at least one external agent connected to the VM migration manager, and the processing unit is further configured to query the database for at least one pre-created VM migration plan according to the state information.
Querying according to the state information supports finding the optimal migration plan. Thus, migration can be performed faster without interfering with the current service.
In another implementation manner of the first aspect, the processing unit is further configured to query the database for a pre-created VM migration plan according to explicit constraints defined in the received VM migration request and/or implicit constraints imposed by the received VM migration request.
Thus, the constraints are taken into account when selecting the migration plan, which allows better fulfillment of the SLA.
In another implementation manner of the first aspect, the processing unit is further configured to: obtain at least one VM migration plan when deploying a VM associated with the VM migration manager; and store the obtained VM migration plan in the database as a pre-created migration plan.
Thus, a VM migration plan is most efficiently pre-created when the VMs are provisioned.
In another implementation manner of the first aspect, the processing unit is further configured to: when a VM associated with the VM migration manager is deployed, obtain the at least one VM migration plan according to the configuration stored in the VM migration manager and/or according to the state information.
This improves the pre-creation of a migration plan for the associated VM.
In another implementation manner of the first aspect, the processing unit is further configured to: when the query does not return a pre-created VM migration plan, obtain one or more VM migration plans according to the explicit constraints and/or the implicit constraints and/or according to the state information; perform the migration of the VM according to the obtained VM migration plan; and store the obtained VM migration plan in the database as a pre-created migration plan.
Thus, when a pre-created VM migration plan is not found, the VM migration manager operates in the conventional manner, and by storing the newly created plan, its performance steadily improves.
In another implementation manner of the first aspect, the processing unit is further configured to determine the type and number of the obtained VM migration plans according to the explicit constraints and/or the implicit constraints.
In another implementation manner of the first aspect, the processing unit is further configured to: when the query does not return a pre-created VM migration plan, obtain the VM migration plan according to the configuration stored in the VM migration manager and/or according to the state information.
Thus, an improved migration plan is created and stored, and the performance of the VM migration manager is steadily improved.
In another implementation manner of the first aspect, the processing unit is further configured to: periodically select a pre-created VM migration plan from the database; update and/or optimize the selected VM migration plan according to the state information; and store the optimized VM migration plan in the database.
This improves the performance of the VM migration manager and keeps the migration plan up to date.
In another implementation manner of the first aspect, the processing unit is further configured to: periodically select a pre-created VM migration plan from the database; and update and/or optimize the selected VM migration plan based on the configuration stored in the VM migration manager, in particular based on a change of that configuration.
That is, the configuration and configuration changes are taken into account, which improves the performance of the VM migration manager.
In another implementation manner of the first aspect, the processing unit is further configured to: select a pre-created VM migration plan from the database, the selection being triggered by an event; update and/or optimize the selected VM migration plan based on the event; and store the optimized VM migration plan in the database.
That is, event-based changes are taken into account, which further improves the performance of the VM migration manager.
In another implementation manner of the first aspect, the processing unit is further configured to: determine a type of the received request; and obtain the implicit constraint based on the type of the received request.
In another implementation manner of the first aspect, each pre-created VM migration plan includes: a source; and/or a target; and/or a network path configuration of the VM.
The migration plan may be used for migration without having to determine a path or target node separately.
A second aspect of the present invention provides a method of operating VM migration, the method comprising: storing at least one pre-created VM migration plan in a database of a VM migration manager; a receiving unit of the VM migration manager receiving a VM migration request; a processing unit of the VM migration manager querying the database for a pre-created VM migration plan according to the received VM migration request; and, when the query returns a pre-created VM migration plan, the processing unit performing the migration of at least one VM according to the pre-created VM migration plan.
In one implementation of the second aspect, the database further stores state information provided by at least one external agent connected to the VM migration manager, and the processing unit further queries the database for at least one pre-created VM migration plan according to the state information.
In another implementation manner of the second aspect, the processing unit further queries the database for a pre-created VM migration plan according to explicit constraints defined in the received VM migration request and/or implicit constraints imposed by the received VM migration request.
In another implementation manner of the second aspect, the processing unit further obtains at least one VM migration plan when deploying a VM associated with the VM migration manager, and stores the obtained VM migration plan in the database as a pre-created migration plan.
In another implementation manner of the second aspect, the processing unit further, when a VM associated with the VM migration manager is deployed, obtains at least one VM migration plan according to the configuration stored in the VM migration manager and/or according to the state information.
In another implementation manner of the second aspect, the processing unit further, when the query does not return a pre-created VM migration plan, obtains one or more VM migration plans according to the explicit constraints and/or the implicit constraints and/or according to the state information; performs the migration of the VM according to the obtained VM migration plan; and stores the obtained VM migration plan in the database as a pre-created migration plan.
In another implementation manner of the second aspect, the processing unit further determines the type and number of the obtained VM migration plans according to the explicit constraints and/or the implicit constraints.
In another implementation manner of the second aspect, the processing unit further, when the query does not return a pre-created VM migration plan, obtains the VM migration plan according to the configuration stored in the VM migration manager and/or according to the state information.
In another implementation manner of the second aspect, the processing unit further periodically selects a pre-created VM migration plan from the database; updates and/or optimizes the selected VM migration plan according to the state information; and stores the optimized VM migration plan in the database.
In another implementation manner of the second aspect, the processing unit further periodically selects a pre-created VM migration plan from the database, and updates and/or optimizes the selected VM migration plan according to the configuration stored in the VM migration manager, in particular according to a change of that configuration.
In another implementation manner of the second aspect, the processing unit further selects a pre-created VM migration plan from the database, the selection being triggered by an event; updates and/or optimizes the selected VM migration plan based on the event; and stores the optimized VM migration plan in the database.
In another implementation manner of the second aspect, the processing unit further determines a type of the received request and obtains the implicit constraint based on the type of the received request.
In another implementation of the second aspect, each pre-created VM migration plan includes: a source; and/or a target; and/or a network path configuration of the VM.
The method of the second aspect and its implementation manners achieve the same effects and advantages as the VM migration manager of the first aspect and its implementation manners.
A third aspect of the present invention provides a computing system for VM migration management, comprising: the VM migration manager of the first aspect or any implementation of the first aspect; at least one agent running on a node for: monitoring a VM executing on the node and/or resources used by the node; obtaining status information from the monitored VM and/or the monitored resource; providing the obtained state information to the VM migration manager.
The agent enables the VM migration manager to use the state information to achieve better performance.
In an implementation form of the third aspect, the node is a physical computing node, preferably a VM monitor, or a Software Defined Network (SDN) controller, or a switch.
With the computing system of the third aspect, all the effects and advantages of the VM migration manager of the first aspect are achieved.
A fourth aspect of the present invention provides a method for operating a computing system for VM migration management, the method comprising: a processing unit of a VM migration manager storing at least one pre-created VM migration plan in a database of the VM migration manager; at least one agent running on a node monitoring VMs executing on the node and/or resources used by the node; the at least one agent obtaining status information from the monitored VM and/or the monitored resource; the at least one agent providing the obtained state information to the VM migration manager; the processing unit storing the provided state information in the database; a receiving unit of the VM migration manager receiving a VM migration request; the processing unit querying the database for a pre-created VM migration plan according to the received VM migration request and the state information in the database; and, when the query returns a pre-created VM migration plan, the processing unit performing the migration of at least one VM according to the pre-created VM migration plan.
With the method of the fourth aspect, the advantages and effects of the computing system of the third aspect are achieved.
It should be noted that all devices, elements, units and means described in the present application may be implemented in software or hardware elements or any combination thereof. All steps performed by the various entities described in the present application and the functions described to be performed by the various entities are intended to indicate that the respective entities are adapted or arranged to perform the respective steps and functions. Although in the following description of specific embodiments specific functions or steps performed by an external entity are not reflected in the description of specific elements of the entity performing the specific steps or functions, it should be clear to a skilled person that these methods and functions may be implemented in respective hardware or software elements or any combination thereof.
Drawings
The foregoing aspects and many of the attendant aspects of this invention will become more readily appreciated as the same become better understood by reference to the following detailed description, when taken in conjunction with the accompanying drawings, wherein:
FIG. 1 illustrates a VM migration manager provided by an embodiment of the invention;
FIG. 2 illustrates a VM migration manager provided by an embodiment of the invention;
FIG. 3 illustrates a method provided by an embodiment of the invention;
FIG. 4 illustrates a computing system provided by embodiments of the present invention;
FIG. 5 illustrates a method provided by an embodiment of the invention;
FIG. 6 illustrates a computing system architecture including a VM migration manager, provided by an embodiment of the invention;
FIG. 7 illustrates a computing system architecture including a VM migration manager, provided by an embodiment of the invention;
FIG. 8 illustrates a conventional data center organization;
fig. 9 illustrates a conventional solution for a VM migration manager (cloud coordinator).
Detailed Description
Fig. 1 illustrates a VM migration manager 100 provided by an embodiment of the present invention. The VM migration manager 100 includes a database 101, a receiving unit 103, and a processing unit 105.
The database 101 is used for storing at least one pre-created VM migration plan 102. The pre-created VM migration plan 102 preferably includes: a source node of the VM 106; and/or a target node; and/or a network path configuration (i.e., the path by which the VM 106 migrates from source to target in the network). Most preferably, the VM migration plan 102 includes a source node, a target node, and a network path configuration.
The receiving unit 103 is configured to receive the VM migration request 104 and forward the VM migration request to the processing unit 105. The VM migration request 104 identifies one or more VMs to be migrated. Further, the VM migration request preferably includes at least one constraint.
The processing unit 105 is then configured to: query the database 101 for a pre-created VM migration plan 102 according to the received VM migration request 104, preferably taking into account the at least one constraint; and, when the query returns a pre-created VM migration plan 102, perform the migration of at least one VM 106 according to the pre-created VM migration plan 102. If the query does not return a pre-created VM migration plan 102, the processing unit 105 preferably falls back to the conventional manner of creating a migration plan 102.
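As a minimal sketch of this query-then-fallback flow, the following Python fragment uses illustrative names (MigrationPlan, VMMigrationManager, an in-memory dict as the database 101); none of these identifiers come from the patent itself.

```python
# Illustrative sketch only; all class and method names are assumptions.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class MigrationPlan:
    vm_id: str                  # VM (or application) the plan is for
    source_node: str            # node the VM currently runs on
    target_node: str            # pre-computed optimal target node
    network_path: List[str] = field(default_factory=list)  # migration path

class VMMigrationManager:
    def __init__(self) -> None:
        self.database: Dict[str, List[MigrationPlan]] = {}  # vm_id -> plans

    def handle_request(self, vm_id: str) -> MigrationPlan:
        plans = self.database.get(vm_id, [])
        if plans:
            plan = plans[0]                   # top-ranked pre-created plan
        else:
            plan = self.create_plan(vm_id)    # fallback: conventional manner
            self.database.setdefault(vm_id, []).append(plan)
        self.execute_migration(plan)
        return plan

    def create_plan(self, vm_id: str) -> MigrationPlan:
        # Placeholder for constraint solving against resource information.
        return MigrationPlan(vm_id, "node-a", "node-b", ["switch-1", "switch-7"])

    def execute_migration(self, plan: MigrationPlan) -> None:
        print(f"migrating {plan.vm_id}: {plan.source_node} -> {plan.target_node}")
```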
Fig. 2 illustrates a VM migration manager 200 provided by an embodiment of the invention. The VM migration manager 200 is constructed based on the VM migration manager 100 shown in fig. 1, and also includes the receiving unit 103, the processing unit 105, and the database 101.
In the VM migration manager 200 shown in fig. 2, the database 101 is further configured to store state information 201 provided by at least one external agent 202 connected to the VM migration manager 200. The external agent 202 may run on a physical node, a controller (e.g., an SDN controller), or a switch. The processing unit 105 is further configured to query the database 101 for at least one pre-created VM migration plan 102 according to the state information 201.
Furthermore, in the VM migration manager 200 shown in fig. 2, the processing unit 105 is further configured to query the database 101 for a pre-created VM migration plan 102 according to explicit constraints 203 defined in the received VM migration request 104 and/or implicit constraints 204 imposed by the received VM migration request 104. Explicit constraints 203 are requirements that are explicitly defined in the VM migration request 104 (e.g., the source and target nodes of the migration, the corresponding network paths, or the hardware resources required at the target node by the VM to be migrated, from which the target node can be determined). Implicit constraints 204 are additional information that may be obtained from the VM migration request 104, such as information about the sending user of the request 104 or the sending source of the request 104.
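The distinction can be made concrete with a small sketch; the request fields and derived values below (user_tier, trigger, path_type) are illustrative assumptions, not fields defined by the patent.

```python
# Hypothetical illustration of explicit vs. implicit constraints 203/204.
def extract_constraints(request: dict) -> dict:
    explicit = dict(request.get("constraints", {}))  # stated in the request
    implicit = {}                                    # derived from metadata
    if request.get("user_tier") in ("gold", "platinum"):
        implicit["path_type"] = "fast"               # premium user: fast path
    if request.get("trigger") == "security":
        implicit["path_type"] = "secure"             # security trigger: secure path
    return {**explicit, **implicit}
```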
Fig. 3 illustrates a method 300 provided by an embodiment of the invention. The method 300 operates VM migration and may be performed by, for example, the VM migration managers 100 and 200 shown in fig. 1 and 2, respectively.
The method comprises the following steps: step 301, storing at least one pre-created VM migration plan 102 in the database 101 of the VM migration manager 100 or 200; step 302, the receiving unit 103 of the VM migration manager 100 or 200 receiving a VM migration request 104; step 303, the processing unit 105 of the VM migration manager 100 or 200 querying the database 101 for a pre-created VM migration plan 102 according to the received VM migration request 104; step 304, when the query returns a pre-created VM migration plan 102, the processing unit 105 performing the migration of at least one VM 106 according to the pre-created VM migration plan 102.
Fig. 4 illustrates a computing system 400 for performing VM migration management. The computing system 400 includes a VM migration manager, such as the VM migration manager 200 shown in fig. 2. Further, the computing system 400 includes at least one agent 202 running on the node 401.
The at least one agent 202 is configured to monitor one or more VMs 106, 106′, 106″ executing on the node 401 and/or resources used by the node 401. The node 401 may be any physical computing node, preferably a VM monitor, an SDN controller, or a switch. In addition, the agent 202 is further configured to: obtain status information 201 from the monitored VMs 106, 106′, 106″ and/or from the monitored resources; and provide the obtained status information 201 to the VM migration manager 200.
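A monitoring loop for such an agent 202 might look as follows; the node and endpoint interfaces are assumptions for illustration only.

```python
# Sketch of a resource agent's monitoring loop (interfaces are assumed).
import time

def agent_loop(node, manager_endpoint, interval_s: float = 5.0) -> None:
    """Periodically collect VM and node state and push it to the manager."""
    while True:
        status = {
            "node": node.name,
            "vm_states": {vm.id: vm.state() for vm in node.vms()},
            "cpu_load": node.cpu_load(),
            "free_memory_mb": node.free_memory_mb(),
            "link_bandwidth_mbps": node.link_bandwidth_mbps(),
        }
        manager_endpoint.push_state(status)  # ends up in the database 101
        time.sleep(interval_s)
```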
Fig. 5 illustrates a method 500 provided by an embodiment of the invention. The method 500 operates a computing system, such as the computing system 400 shown in FIG. 4. Thus, the VM migration manager 200 as shown in FIG. 2 may preferably be used in the computing system 400.
The method 500 comprises: step 501, the processing unit 105 of the VM migration manager 200 storing at least one pre-created VM migration plan 102 in the database 101 of the VM migration manager 200; step 502, at least one agent 202 running on a node 401 monitoring the VMs 106, 106′, 106″ executing on said node 401 and/or the resources used by said node 401; step 503, the at least one agent 202 obtaining status information 201 from the monitored VMs 106, 106′, 106″ and/or the monitored resources; step 504, said at least one agent 202 providing said obtained state information 201 to said VM migration manager 200; step 505, the processing unit 105 storing the provided state information 201 in the database 101; step 506, the receiving unit 103 of the VM migration manager 200 receiving the VM migration request 104; step 507, the processing unit 105 querying the database 101 for a pre-created VM migration plan 102 according to the received VM migration request 104 and according to the state information 201 in the database 101; step 508, when the query returns a pre-created VM migration plan 102, the processing unit 105 performing the migration of at least one VM 106 according to the pre-created VM migration plan 102.
Fig. 6 illustrates a computing system architecture provided by the present invention that includes the VM migration manager 100. The computing system architecture is based on the computing system 400 shown in fig. 4. The computing system architecture shown in fig. 6 may also include the VM migration manager 200 shown in fig. 2. The VM migration manager 100 shown in fig. 6 acts as a cloud coordinator containing pre-created migration plans 102.
The idea is to solve the problems existing in the first phase of conventional VM migration, as described in the background section above. Based on constraints, the migration plan 102 is created and stored in the database 101 for a VM 106, for a group of VMs 106, or for an application or container that includes multiple VMs 106. These migration plans 102 are preferably updated from time to time based on the current state of the associated VM 106, and based on the state of other resources in the data center environment, particularly the state of the physical nodes 401 in the data center. The processing unit 105 of the VM migration manager 100 shown in fig. 6 is responsible for creating the migration plans 102, and for storing and constantly updating these plans. The processing unit is also responsible for keeping alternative plans 102 for the VM migration. The migration plan 102 includes the best target node for migrating the associated VM 106, and the path over which the migration must be performed. The processing unit 105 preferably uses current resource information available in the resource information base 600 in order to create the plan 102. The resource information base 600 may also be included in the VM migration manager 100. In particular, the resource information base 600 may be integrated into the database 101, such that only one database 101 must be included in the VM migration manager 100 and the computing system, respectively.
Network bandwidth availability and the resulting migration time are preferably considered in creating the plan 102. Moreover, different VMs 106 may correspond to different constraints, that is, a faster migration may be necessary, or a slower migration may be sufficient, and so on. Preferably, even the current load on the target node is considered when creating the migration plan 102. The number of migration plans 102 may be configured based on requirements. The migration plans 102 will be calculated/updated from time to time so as to have the most up-to-date migration plan 102 when a VM 106 needs to be migrated.
In the computing system architecture shown in fig. 6, when a migration request 104 is triggered, the VM migration manager 100 receives the request 104 at its receiving unit 103, and the receiving unit 103 forwards the request 104 to the processing unit 105. The processing unit 105 then checks for pre-created migration plans 102 available in the database 101 based on the constraints defined or imposed by the request 104. As described above, these constraints may be explicit constraints 203 or implicit constraints 204. For example, the constraint for selecting the pre-created migration plan 102 may be the reason for migrating the VM 106, or may be obtained from the migration trigger. If a migration plan 102 is available for the VM 106 or VMs, the processing unit 105 retrieves the plan 102 and begins the migration operation.
The VM migration manager 100, acting as a coordinator in the computing system shown in fig. 6, is preferably also responsible for servicing requests from clients. The purpose of these requests may be to create a new VM 106, deploy an application, manage the application, and the like. Further, the resource agent 202 preferably updates the information about the VM 106 in the resource information base 600 once deployment of the VM 106 begins. The VM 106 control lines are shown by dotted arrows in fig. 6. These control lines are used to manage the VM 106 after it is created.
The solid arrows in fig. 6 illustrate the flow of information in the computing system architecture. Resource information and real-time monitored data flow from the physical nodes 401 to the resource information base 600. The resource agents 202 running on all the physical nodes 401, and preferably on a network controller and/or switch 601, monitor the state of the VMs 106, the resource utilization of each VM 106, and the resource utilization of the physical node 401 running the VMs 106, and then push this information to the resource information base 600. These agents 202 keep the resource information up to date.
The resource information base 600 contains computing node information, network component information, topology information, task information, service information, bandwidth information, load information, information about other components, and/or application information (an application may contain multiple VMs 106). Preferably, this information, comprising real-time resource information, the VM information, and the network topology information, is fed to the resource information base 600 by the resource agents 202 running on the physical nodes 401 and on the network controller and/or switch. As mentioned before, this information may also be stored in the database 101, i.e. the resource information base 600 may be comprised in the database 101.
The VM migration manager 100 is preferably further configured to create a plan 102 for a VM 106 whenever it receives a plan creation request. Preferably, the plan 102 is created at the beginning of deployment of the VM 106. The created migration plans 102 are preferably based on different constraints. For each VM 106, multiple plans 102 may be created, stored, and updated in a timely manner based on resource information changes in the database 600, such that each possible constraint has an optimal plan 102. The migration plans 102 may even have a granularity as coarse as an entire application. That is, they may contain many VMs 106. Creating such a migration plan 102 only when the migration request 104 is received is time consuming, as in conventional solutions. Thus, according to the solution provided by the present invention, the migration plan 102 is created and saved in advance.
The migration request 104 may be for one VM 106, a group of VMs 106, an application, or a container including multiple VMs 106 to be migrated. The migration request 104 preferably contains information about its trigger, as well as the reason it was triggered. With this information in mind, the VM migration manager 100 may select an optimal plan 102 from the set of pre-created plans 102. The migration request 104 may be triggered for a variety of reasons, and many other modules may issue the request 104. For example, the request 104 may be triggered by a load balancer or a resource consolidator, for security reasons, to resolve bottlenecks or act on predictions about the physical nodes 401, or to migrate an entire application from one cluster to another as a single entity.
Preferably, the VM migration manager 100 reads the configuration file 602 and self-configures accordingly when it is first started. The configuration file 602 may be the configuration 205 shown in fig. 2. Whenever the configuration file 602 changes, the plans 102 are updated as well. The configuration file 602 preferably contains the types of plans 102 to be created for different kinds of VMs 106 or applications/containers, and the number of plans 102 that must be created for each constraint. The configuration file 602 may also contain the frequency at which the plans 102 must be updated, in addition to the reasons for event-based changes. Event-based changes depend on changes in the value of a particular item in the resource information base 600. Preferably, however, the VM migration manager 100 also takes into account other real-time resource information from the resource information base 600, preferably from the provisioning request.
The following structure shows an example of a configuration file 602:
[The example configuration file 602 is reproduced only as an image in the original publication.]
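Since the example survives only as an image, here is a plausible sketch of such a configuration, limited to the fields the surrounding text describes (plan types per kind of VM/application, plan count per constraint, update frequency, event-based update reasons); every key and value is an assumption.

```python
# Hypothetical structure of configuration file 602 (all fields assumed).
config_602 = {
    "plan_types": {                  # kinds of plans per VM/application type
        "single_vm": ["fast_path", "secure_path"],
        "application": ["cluster_to_cluster"],
    },
    "plans_per_constraint": 3,       # number of plans created per constraint
    "update_frequency_s": 300,       # how often plans 102 are refreshed
    "event_based_updates": [         # resource-information changes that
        "target_node_load_change",   # trigger an immediate plan update
        "topology_change",
    ],
}
```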
Preferably, the resource information is collected and placed in a central database, the resource information base 600. The resource information may include at least one of: node type, node memory, CPU, accelerator type, node information, PCI device information, cache architecture information, accelerator information, reconfigurable hardware information, core information, network information, storage information, task information.
The network topology and resource status information may include at least one of: network switch, router information, topology information, bandwidth information, real-time network state information, network accelerator information.
The task information may include at least task related information, memory, CPU, and/or other resources.
In the above scheme, from leaf node to data center level, the resource agent 202 is preferably used to collect whatever information needs to be collected. The collected information is then stored in the resource information base 600 and preferably updated very frequently, in order to reflect resource information in real time. The resource agent 202 runs on a physical node 401 and preferably on the network controller and/or switch.
Fig. 7 illustrates a computing architecture including the VM migration manager 100 provided by an embodiment of the present invention. The computing system architecture may include many components and may also use some of the existing components of a traditional coordinator, such as a scheduling information base.
The components of the VM migration manager 100 in the computing system architecture of fig. 7 are: a receiving unit 103 for receiving requests, in particular for creating plans 102 and for migrating VMs 106; a processing unit 105; and a database 101 of pre-created migration plans 102. Other components in the computing system architecture may be the constraint solver 700, the resource information base 600, and the resource agents 202 running on physical nodes 401. The constraint solver 700 and/or the resource information base 600 may be included in the VM migration manager 100. Furthermore, the resource information base 600 and the migration plan database 101 may be provided by a single database 101, and the database 101 may be included in the VM migration manager 100.
The dotted arrows in fig. 7 represent the creation, storage, and update paths of the plans. Upon receipt of a migration plan creation request 701, the VM migration manager 100 creates and stores the migration plan 102. The solid arrows in fig. 7 represent the trigger and execution paths of the migration plan. Upon receiving the migration request 104, the VM migration manager 100 selects a migration plan 102 and migrates the VM 106. The dashed arrows in fig. 7 illustrate a fallback path, used when there is no suitable plan 102 that satisfies the constraints for migrating the particular VM 106.
An example application/distributed-system coordination request is shown below.
[The example coordination request is reproduced only as an image in the original publication.]
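As the request is reproduced only as an image, the following is a hypothetical reconstruction consistent with the description below (three VMs, two replicas each for high availability, a load balancer, and an accelerator); all field names and values are assumptions.

```python
# Hypothetical coordination request (all fields assumed for illustration).
coordination_request = {
    "application": "app-1",
    "vms": [
        {"name": "vm-1", "cpu": 4, "memory_gb": 8,  "replicas": 2},
        {"name": "vm-2", "cpu": 8, "memory_gb": 16, "replicas": 2},
        {"name": "vm-3", "cpu": 2, "memory_gb": 4,  "replicas": 2},
    ],
    "load_balancer": True,
    "accelerator": "required",
    "anti_affinity": True,  # replicas must not share a physical node 401
}
```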
In the request shown above, the application requires three VMs 106 with different constraints and requirements. There are also other requirements, such as the number of replicas, here 2, to be created for each VM 106 in order to have high availability. A load balancer and an accelerator are additionally required. In view of all of these constraints, the VM migration manager 100 is used to propose a specific plan to schedule the application. Notably, the replicas should not be created on the same physical node 401, because if that physical node 401 fails, all of the VMs 106 fail with it. That would defeat the purpose of high availability.
The migration plan 102 may have two parts, a target and a path. That is, it includes at least a suitable target node or nodes for the VM 106 and a suitable path for migrating the VM 106. The migration plan 102 may also have more parts, as described above.
After deployment of one or more VMs 106 or an application including multiple VMs 106, the VM migration manager 100 uses the same coordination request constraints to determine appropriate migration targets. In addition, the VM migration manager 100 creates a plurality of migration plans 102 and stores them in the database 101. The kinds of plans 102 created depend on the possible trigger points. Possible trigger points include load balancers, dynamic optimizers, security, power-aware schedulers, data-aware schedulers, and accelerator-aware schedulers. Each of these trigger points requires a different kind of plan 102 to perform the migration. Thus, the VM migration manager 100 creates these kinds of plans 102 and manages them.
Another part of each migration plan 102 is the "path" of the migration. The path depends on some constraints, such as user type (e.g., silver, gold, and platinum), secure path, fast path, slow path, shortest path, and so on. Based on the trigger of the request 104, an available path will be selected therefrom. The resource information base 600 has all the information needed to create the plan 102.
The migration plan creation request 701 is a request for scheduling a task, for provisioning a resource, or for constraint-based service provisioning. These requests inherently contain information and constraints used by the VM migration manager 100 to create the plan 102. Some of these may be explicit constraints 203 and some implicit constraints 204. The implicit constraints 204 are based on the type of request and/or user, etc. If the user is a gold or platinum user, the VM migration manager 100 may implicitly create and maintain the migration plan. The request 701 may also contain the number of plans 102 to be created for each VM 106, the frequency with which the plans 102 must be updated, and so on.
An example of a migration plan creation request 701 is shown below.
[The example migration plan creation request 701 is reproduced only as an image in the original publication.]
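As with the previous example, the request is reproduced only as an image; a hypothetical reconstruction based on the fields described above (constraints, number of plans per VM, update frequency) might be:

```python
# Hypothetical migration plan creation request 701 (all fields assumed).
plan_creation_request = {
    "vm_id": "vm-1",
    "user_tier": "gold",        # implies plans are created and maintained
    "constraints": {"min_bandwidth_mbps": 1000, "path_type": "secure"},
    "plans_per_vm": 3,          # number of plans 102 to pre-create
    "update_frequency_s": 600,  # how often the plans 102 must be updated
}
```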
The principle of operation of the migration (trigger) request 104 is as previously described. Some requests 104 may be triggered based on time or based on load on the data center. The request 104 may identify some VMs 106 to be moved and may trigger the creation and storage of the plan 102. The request 104 may be directed to a single VM 106, or to an entire application that includes multiple VMs 106.
The processing unit 105 of the VM migration manager 100 is responsible for creating the migration plans 102 and storing them in the database 101; in addition, it is responsible for maintaining these plans. The processing unit 105 may create the migration plans 102 with the help of the constraint solver 700. The constraint solver 700 may also be part of the VM migration manager 100. Preferably, as described above, N plans 102 based on the migration plan creation request 701 are created and stored. Preferably, the processing unit 105 is also responsible for: updating a migration plan 102 based on the current state of the nodes 401 involved in the plan 102, etc.; discarding migration plans 102; and creating new migration plans 102 if necessary. For this, the processing unit 105 preferably uses the resource information base 600. The following explains in detail how plans are created, stored, and managed.
The following is an exemplary algorithm for plan creation (the same for both target and path):
1. A plan creation request 701 is received.
2. Constraints are extracted from the request 701.
3. The constraint solver 700 is requested to obtain the best plan 102.
4. The constraint solver 700 uses the data present in the resource information base 600 to solve the constraints.
5. The migration plan 102 is returned and stored in the migration plan database 101.
6. The same steps 2 to 5 are repeated for different constraints.
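Transcribed into code, the steps might look as follows; the constraint_solver, resource_info, and plan_db interfaces are assumed, not an API defined by the patent.

```python
# Steps 1-6 of the plan creation algorithm as a sketch (interfaces assumed).
def create_plans(request: dict, constraint_solver, resource_info, plan_db) -> None:
    # Step 2: extract the constraint sets carried by the request 701.
    constraint_sets = request.get("constraint_sets",
                                  [request.get("constraints", {})])
    # Step 6: repeat steps 3-5 for each set of constraints.
    for constraints in constraint_sets:
        # Steps 3-4: the solver works on the resource information base 600.
        plan = constraint_solver.solve(constraints, resource_info)
        # Step 5: store the returned plan in the migration plan database 101.
        plan_db.store(request["vm_id"], plan)
```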
The following is an exemplary migration plan triggering algorithm:
1. The migration request 104 is received.
2. The constraints are extracted from the request 104.
3. The constraint solver 700 is requested to obtain the appropriate plan 102.
4. The migration is initiated.
5. If there is no plan, a migration plan 102 is created.
6. Step 4 is then repeated with the newly created plan 102.
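A sketch of these steps, reusing the assumed interfaces from the previous block; extract_constraints and start_migration stand in for functionality described elsewhere in this text.

```python
# Steps 1-6 of the migration trigger algorithm as a sketch (interfaces assumed).
def handle_migration_trigger(request: dict, constraint_solver,
                             resource_info, plan_db) -> None:
    constraints = extract_constraints(request)                 # step 2
    candidates = plan_db.plans(request["vm_id"])
    plan = constraint_solver.select(candidates, constraints)   # step 3
    if plan is None:                                           # step 5: no plan yet
        plan = constraint_solver.solve(constraints, resource_info)
        plan_db.store(request["vm_id"], plan)
    start_migration(plan)                                      # steps 4/6
```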
The following is an exemplary algorithm for updating the migration plan:
1. The update is triggered by time, or by an event in the resource information base 600, either due to a change in a key resource or due to a change in the configuration 205.
2. A request is constructed.
3. The migration plan is updated with constraint solver 700.
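Expressed in the same assumed interfaces, the update could be sketched as:

```python
# Steps 1-3 of the update algorithm as a sketch (interfaces assumed).
def update_plans(vm_id: str, constraint_solver, resource_info, plan_db) -> None:
    # Step 1 happens outside: a timer or an event in the resource
    # information base 600 calls this function.
    for old_plan in plan_db.plans(vm_id):
        request = {"vm_id": vm_id, "constraints": old_plan.constraints}  # step 2
        new_plan = constraint_solver.solve(request["constraints"],
                                           resource_info)                # step 3
        plan_db.replace(vm_id, old_plan, new_plan)
```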
Migration plans 102 are created, maintained, updated, and discarded in the same manner for a single VM 106 and for an entire application including multiple VMs 106. That is, the database 101 may store migration plans for a single VM 106 and/or for a set of VMs 106, in particular for individual applications. If a migration trigger 104 requests a migration plan 102 for the entire application, the migration plan 102 is selected based on the constraints in the request 104.
The constraint solver 700 is preferably a first-order-logic solver that accepts the constraints as a request and solves them to provide an optimal solution. The constraint solver 700 may specifically do two things: one is to create the plan 102 by solving the constraints against the resource information base 600; the other is to select the particular plan 102 from a plurality of plans 102 during execution of the migration.
The resource information base 600 contains all the computing node information, network component information, topology information, task information, service information, bandwidth information, load information, component information, and the like. This information is fed in by the agents 202 running in the nodes 401 and switches, and helps to better service requests. Instead of these two components, that is, two databases, there may be only one database 101 containing all information, including the information of the resource information base 600, which may enable the SDN controller to provide comprehensive services, including resource deployment and task scheduling. This saves a lot of time and effort. The base 600 also contains the real-time state and application-level state of the VMs 106. This facilitates the creation and updating of the optimal migration plan 102.
After the VM migration manager 100 creates the migration plans 102, it stores them in the database, which may be provided as a key-value store. The plans 102 are stored per request. There may be multiple plans 102 per VM 106 and/or application. They are stored in the database 101 and updated as needed, either periodically or through an event-based mechanism. The event may be a change in the resource information base 600 or in any node 401 involved in a plan 102. For example, if node X is the target node and node X becomes more heavily loaded, the plan 102 needs to be updated, since node X is no longer the best target for moving the VM 106. Plans 102 may also be discarded by the manager 100 because they are no longer optimal or because the manager 100 has found a better plan 102 based on the constraints. The plans 102 are stored as follows; the number of plans 102 per VM/application is based on the configuration.
VM1: MPlan1, MPlan2, MPlan3
VM2: MPlan1, MPlan2, MPlan3
App1: MPlan1, MPlan2, MPlan3
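Expressed as a key-value mapping, the layout above might be held as follows; this is only a sketch, as the patent merely states that the database may be provided as a key-value store.

```python
# Sketch of the key-value layout of the migration plan database 101.
plan_store = {
    "VM1":  ["MPlan1", "MPlan2", "MPlan3"],
    "VM2":  ["MPlan1", "MPlan2", "MPlan3"],
    "App1": ["MPlan1", "MPlan2", "MPlan3"],
}
top_ranked = plan_store["VM1"][0]  # plan selected when a trigger arrives
```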
In particular implementations, the resource agent 202 collects resource information and other VM/task runtime behavior within the node 401 and feeds the collected information to a local database within the node 401. The local database retains the fine-grained information and feeds only coarse-grained information, which is sufficient at the global level, into the above-mentioned resource information base 600. The resource agent 202 collects the computation, IO, memory, and network resource information in real time and updates both the local database and the resource information base 600. This information is very helpful for having the best migration plan 102 at each given point in time.
This has the advantage that, since the solution is based on different constraints, there can be different migration plans 102, and even multiple plans 102 for the same set of constraints, so the SLA can easily be met. This saves a lot of time.
The differences between a conventional VM migration manager and the VM migration managers 100 and 200 provided by embodiments of the present invention are, among others, as follows. The VM migration managers 100 and 200 migrate the VM 106 and the application using a pre-created migration plan 102.
In the conventional manner, the VM migration manager creates a migration plan after receiving a migration request, and then performs the migration. This is time consuming: in the case of larger applications and distributed systems, more VMs 106 are involved, and creating the plan for such large systems takes long. Even for a single VM 106, if there are multiple constraints, it takes more time to come up with the best migration plan.
In the VM migration manager 100 or 200, on the other hand, the plans 102 are already created when the VM/application is deployed. Not just one plan 102 but even multiple plans 102 may be created. When a migration request 104 is received, the VM migration manager 100 or 200 need only select the top-ranked migration plan 102 and then trigger the migration, without waiting for a plan to be created. This saves a lot of time and provides the best path and target node for migrating the VM 106 or the application.
If a particular VM/application does not have an existing plan 102, a fallback to the conventional manner of migrating the VM 106 occurs. This means that the plan is created first and migration is then triggered.
The invention has been described in connection with various embodiments and implementations as examples. Other variations will be understood and effected by those skilled in the art in practicing the claimed invention, from a study of the drawings, the disclosure, and the independent claims. In the claims and in the description, the term "comprising" does not exclude other elements or steps, and "a" or "an" does not exclude a plurality. A single element or other unit may fulfill the functions of several entities or items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.

Claims (17)

1. A Virtual Machine (VM) migration manager (100), comprising:
a database (101) for storing at least one pre-created VM migration plan (102);
a receiving unit (103) for receiving a VM migration request (104); and
a processing unit (105) for: querying the database (101) for a pre-created VM migration plan (102) according to the received VM migration request (104); when the query returns a pre-created VM migration plan (102), performing migration of at least one VM (106) according to the pre-created VM migration plan (102).
2. The VM migration manager (200) of claim 1, wherein said database (101) is further configured to: storage state information (201) provided by at least one external agent (202) connected to the VM migration manager (200); the processing unit (105) is further configured to: querying the database (101) for at least one pre-created VM migration plan (102) based on the state information (201).
3. The VM migration manager (200) of claim 2, wherein the processing unit (105) is further configured to: querying the database (101) for a pre-created VM migration plan (102) based on explicit constraints (203) defined in the received VM migration request (104) and/or implicit constraints (204) imposed by the received VM migration request (104).
4. The VM migration manager (200) of any of the preceding claims, wherein the processing unit (105) is further configured to: obtain at least one VM migration plan when deploying a VM (106) associated with the VM migration manager (200); and store the obtained VM migration plan as a pre-created migration plan (102) in the database (101).
5. The VM migration manager (200) of claim 4, wherein the processing unit (105) is further configured to obtain, when deploying a VM (106) associated with the VM migration manager (200), the at least one VM migration plan according to a configuration (205) stored in the VM migration manager (200) and/or according to the state information (201).
6. The VM migration manager (200) of any of claims 3-5, wherein the processing unit (105) is further configured to: when the query does not return a pre-created VM migration plan (102), obtain one or more VM migration plans according to the explicit constraints (203) and/or the implicit constraints (204) and/or according to the state information (201); perform the migration of the VM (106) according to an obtained VM migration plan; and store the obtained VM migration plans as pre-created migration plans (102) in the database (101).
7. The VM migration manager (200) of claim 6, wherein the processing unit (105) is further configured to determine, according to the explicit constraints (203) and/or the implicit constraints (204), the type and number of the VM migration plans to be obtained.
8. The VM migration manager (200) of claim 6 or 7, wherein the processing unit (105) is further configured to, when the query does not return a pre-created VM migration plan (102), obtain the VM migration plan according to the configuration (205) stored in the VM migration manager (200) and/or according to the state information (201).
9. The VM migration manager (200) of any of claims 2-8, wherein the processing unit (105) is further configured to: periodically select a pre-created VM migration plan (102) from the database (101); update and/or optimize the selected VM migration plan (102) according to the state information (201); and store the optimized VM migration plan in the database (101).
10. The VM migration manager (200) of claim 9, wherein the processing unit (105) is further configured to: periodically select a pre-created VM migration plan (102) from the database (101); and update and/or optimize the selected VM migration plan (102) according to the configuration (205) stored in the VM migration manager (200), in particular according to a change of the configuration (205) stored in the VM migration manager (200).
11. The VM migration manager (200) of any of claims 1-10, wherein the processing unit (105) is further configured to: select a pre-created VM migration plan (102) from the database (101), the selection being triggered by an event; update and/or optimize the selected VM migration plan (102) in accordance with the event; and store the optimized VM migration plan in the database (101).
12. The VM migration manager (200) of any of claims 3-11, wherein the processing unit (105) is further configured to: determine a type of the received request (104); and obtain the implicit constraints (204) based on the type of the received request (104).
13. The VM migration manager (200) according to any of the preceding claims, wherein each pre-created VM migration plan (102) comprises: a source; and/or a target; and/or a network path configuration of the VM (106).
14. A method (300) for operating a Virtual Machine (VM) migration manager (100), the method comprising the steps of:
storing (301) at least one pre-created VM migration plan (102) in a database (101) of a VM migration manager (100);
receiving (302) a VM migration request (104) by a receiving unit (103) of the VM migration manager (100);
a processing unit (105) of the VM migration manager (100) querying (303) the database (101) for a pre-created VM migration plan (102) according to the received VM migration request (104); and
when the query returns a pre-created VM migration plan (102), the processing unit (105) performs (304) migration of at least one VM (106) according to the pre-created VM migration plan (102).
15. A computing system (400) for Virtual Machine (VM) migration management, comprising:
the VM migration manager (200) of any of claims 2 to 13; and
at least one agent (202) running on a node (401), for: monitoring the node (401) and/or resources used by VMs (106, 106', 106'') executing on the node (401); obtaining state information (201) from the monitored VMs (106, 106', 106'') and/or the monitored resources; and providing the obtained state information (201) to the VM migration manager (200).
16. The computing system (400) of claim 15, wherein the node (401) is a physical computing node, preferably a VM monitor, or a Software Defined Network (SDN) controller, or a switch.
17. A method for operating a computing system (400) for Virtual Machine (VM) migration management, the method comprising the steps of:
a processing unit (105) of a VM migration manager (200) storing (501) at least one pre-created VM migration plan (102) in a database (101) of the VM migration manager (200);
at least one agent (202) running on a node (401) monitoring (502) the node (401) and/or resources used by VMs (106, 106', 106'') executing on the node (401);
the at least one agent (202) obtaining (503) state information (201) from the monitored VMs (106, 106', 106'') and/or the monitored resources;
the at least one agent (202) providing (504) the obtained state information (201) to the VM migration manager (200);
the processing unit (105) storing (505) the provided state information (201) in the database (101);
receiving (506) a VM migration request (104) by a receiving unit (103) of the VM migration manager (200);
the processing unit (105) querying (507) the database (101) for a pre-created VM migration plan (102) according to the received VM migration request (104) and according to the state information (201) in the database (101); and
when the query returns a pre-created VM migration plan (102), the processing unit (105) performs (508) migration of at least one VM (106) according to the pre-created VM migration plan (102).
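To make the claimed interaction easier to follow, the following is a hedged, illustrative sketch of the agent/manager cooperation of claims 15 to 17; the class names, the state fields, and the 0.9 overload threshold are assumptions introduced here, not the patent's terminology:

    class Agent:
        # Runs on a node 401 and monitors the node and/or its VMs 106.
        def __init__(self, node_id, manager):
            self.node_id = node_id
            self.manager = manager

        def report(self):
            # Steps 502-504: monitor, obtain state information 201, provide it.
            state = {"node": self.node_id, "cpu_load": 0.42, "free_mem_mb": 2048}
            self.manager.store_state(state)

    class MigrationManager:
        def __init__(self):
            self.state_db = []   # state information 201 (step 505)
            self.plan_db = {}    # pre-created plans 102 (step 501)

        def store_state(self, state):
            self.state_db.append(state)

        def on_request(self, vm_id):
            # Steps 506-508: query plans filtered by current state information,
            # e.g. skip plans whose target node is currently overloaded.
            loaded = {s["node"] for s in self.state_db if s["cpu_load"] > 0.9}
            plans = [p for p in self.plan_db.get(vm_id, [])
                     if p["target"] not in loaded]
            if plans:
                return min(plans, key=lambda p: p["rank"])
            return None  # no stored plan: fall back to on-demand planning

    manager = MigrationManager()
    Agent("node-a", manager).report()
    manager.plan_db["vm-1"] = [{"target": "node-b", "rank": 0}]
    print(manager.on_request("vm-1"))  # returns the stored, state-compatible plan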
CN201780093133.XA 2017-07-12 2017-07-12 Virtual Machine Migration Manager and Method Pending CN110914804A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/EP2017/067525 WO2019011421A1 (en) 2017-07-12 2017-07-12 Virtual machine migration manager and method

Publications (1)

Publication Number Publication Date
CN110914804A (en) 2020-03-24

Family

ID=59315640

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201780093133.XA Pending CN110914804A (en) 2017-07-12 2017-07-12 Virtual Machine Migration Manager and Method

Country Status (2)

Country Link
CN (1) CN110914804A (en)
WO (1) WO2019011421A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11233747B2 (en) * 2019-01-23 2022-01-25 Servicenow, Inc. Systems and methods for acquiring server resources at schedule time
US12026535B2 (en) 2021-09-27 2024-07-02 UiPath, Inc. System and computer-implemented method for controlling a robot of a virtual machine
FR3150022A1 (en) * 2023-06-19 2024-12-20 Orange Multi-cluster cloning system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9946564B2 (en) * 2015-06-23 2018-04-17 International Business Machines Corporation Adjusting virtual machine migration plans based on alert conditions related to future migrations

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102412978A (en) * 2010-09-21 2012-04-11 杭州华三通信技术有限公司 Method and system for network configuration aiming at virtual host
CN103914458A (en) * 2012-12-29 2014-07-09 中国移动通信集团河北有限公司 Mass data migration method and device
US20150007178A1 (en) * 2013-06-28 2015-01-01 Kabushiki Kaisha Toshiba Virtual machines management apparatus, virtual machines management method, and computer readable storage medium
US9697266B1 (en) * 2013-09-27 2017-07-04 EMC IP Holding Company LLC Management of computing system element migration
US20160139946A1 (en) * 2014-11-18 2016-05-19 International Business Machines Corporation Workload-aware load balancing to minimize scheduled downtime during maintenance of host or hypervisor of a virtualized computing system

Also Published As

Publication number Publication date
WO2019011421A1 (en) 2019-01-17

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
Application publication date: 20200324