
GB2516944A - Live migration of a virtual machine using a peripheral function

Live migration of a virtual machine using a peripheral function

Info

Publication number
GB2516944A
GB2516944A GB1314184.1A GB201314184A GB2516944A GB 2516944 A GB2516944 A GB 2516944A GB 201314184 A GB201314184 A GB 201314184A GB 2516944 A GB2516944 A GB 2516944A
Authority
GB
United Kingdom
Prior art keywords
memory area
source
destination
virtual machine
memory
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB1314184.1A
Other versions
GB201314184D0 (en)
Inventor
Angel Nunez Mencias
Einar Lueck
Michael Jung
Stefan Amann
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
International Business Machines Corp
Original Assignee
International Business Machines Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by International Business Machines Corp filed Critical International Business Machines Corp
Priority to GB1314184.1A priority Critical patent/GB2516944A/en
Publication of GB201314184D0 publication Critical patent/GB201314184D0/en
Priority to DE102014110804.3A priority patent/DE102014110804A1/en
Publication of GB2516944A publication Critical patent/GB2516944A/en
Withdrawn legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061Partitioning or combining of resources
    • G06F9/5077Logical partitioning of resources; Management or configuration of virtualized resources
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/10Address translation
    • G06F12/1081Address translation for peripheral access to main memory, e.g. direct memory access [DMA]
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • G06F2009/4557Distribution of virtual machine instances; Migration and load balancing
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • G06F2009/45583Memory management, e.g. access or allocation

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

A virtual machine is to be migrated. In order to enable the virtual machine to use a peripheral function without a long interruption, a migration assistance unit (1705) is provided, wherein the migration assistance unit (1705) is adapted to receive a source write request generated by a peripheral function and to generate a destination write request comprising the same write data as said source write request and directed to said destination memory area of said virtual machine. The peripheral function may be blocked while the memory data are transferred, and the source virtual machine is paused when nearly all memory data have been transferred to the destination. Further write requests are exclusively saved in the destination memory, which may be on the same computer as the source memory or on a different one.

Description

DESCRIPTION
LIVE MIGRATION OF A VIRTUAL MACHINE USING A PERIPHERAL FUNCTION
I. BACKGROUND OF THE INVENTION
A. FIELD OF THE INVENTION
The present invention relates to a method for the live migration according to the preamble of claim 1, a migration assistance unit according to the preamble of claim 10, a computer system according to the preamble of claim 11, a data processing program according to the preamble of claim 14, and a computer program product according to the preamble of claim 15.
B. DESCRIPTION OF THE RELATED ART
Computer clusters comprise several loosely or tightly connected computers and may have one or several expansion buses, wherein at least one hardware device (peripheral device) is connected to each expansion bus. Each of the peripheral devices can execute one or several peripheral functions enhancing the functionality of the computers and may communicate with any of the computers.
Typically, the expansion buses are defined by the PCIe (Peripheral Component Interconnect Express) standard and are referred to as PCIe buses, the peripheral devices are referred to as PCIe cards, and the peripheral functions (functions executed by peripheral devices) are referred to as PCIe functions.
Blade systems are special embodiments of computer clusters, wherein the computers, known as blade servers, are stripped down. The blade servers comprise only the essential components of computers, such as a central processing unit (CPU) and a memory, and are located in one or several blade enclosures. The blade enclosures provide basic services such as power, cooling, networking, various interconnects, and management, and typically comprise one or several expansion buses, wherein at least one peripheral device is connected to each expansion bus.
Blade systems are especially used for virtualization. Virtualization refers to the creation of a virtual machine, which is a software-implemented abstraction of the hardware of a computer.
The virtual machine executes programs like a physical machine, limits the software running inside it to the provided resources and abstractions, and is used in particular in cloud computing, enabling the sharing of resources by multiple users over a network. Several virtual machines (guest machines) can run on a single blade server (host machine). The virtual machines are generated and run by hypervisors, which are pieces of computer software, firmware or hardware. Each hypervisor can run several virtual machines.
The provider may wish to move (migrate) a running virtual machine from one blade server (source) to another blade server (destination) or on the same blade server (source and destination). The migration to another blade server might be necessary for maintaining the blade server, updating the software of the blade server or balancing the workload of several blade servers.
The migration on the same blade server may be necessary for the update of one of several operating systems running on the blade server or a change of the user. The interruption of the virtual machine is to be minimized.
In US 8,281,013, US 2013/0014103 and WO 2012/009843, various live migration methods according to the preamble of claim 1 are disclosed. In general, live migration methods can be categorized into two different techniques: pre-copy memory migration and post-copy memory migration.
In pre-copy memory migration, the hypervisor copies all the memory pages from the source to the destination while the virtual machine is running on the source. Memory pages changing (becoming "dirty") during copying are copied again until the rate of change of the pages (page dirtying rate) on the source is less than the rate of recopied pages. Then the virtual machine is stopped on the source, the remaining dirty pages are copied to the destination, and the virtual machine is resumed on the destination. The time between the stoppage of the virtual machine on the source and its resumption on the destination is known as the down-time. Typically, the down-time ranges from a few milliseconds to a few seconds depending on the size of memory and the applications of the virtual machine.
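For illustration only, the following Python sketch models the pre-copy iteration described above; it is not the claimed method, and the data structures and helper names (source_mem, dest_mem, get_dirty_pages) are assumptions made for the sketch.

    # Toy model of pre-copy migration: memory is a dict of page_id -> bytes.
    # The loop after the while-loop corresponds to the down-time phase,
    # during which the virtual machine is stopped on the source.
    def pre_copy_migrate(source_mem, dest_mem, get_dirty_pages, threshold=2):
        dirty = set(source_mem)                  # initially every page is dirty
        while len(dirty) >= threshold:           # copy while too many dirty pages
            for page in list(dirty):
                dest_mem[page] = source_mem[page]
            dirty = get_dirty_pages()            # pages rewritten during copying
        for page in dirty:                       # down-time: copy the remainder
            dest_mem[page] = source_mem[page]
        return dest_mem

    # Example: page 1 is dirtied once during the first pass, then nothing more.
    src = {0: b"a", 1: b"b", 2: b"c"}
    dirty_rounds = iter([{1}, set()])
    dst = pre_copy_migrate(src, {}, lambda: next(dirty_rounds))
    assert dst == src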
In post-copy memory migration, the virtual machine is suspended.
Then a minimal subset of the execution state of the virtual machine (CPU registers and non-pageable memory) is transferred to the destination. Thereupon the virtual machine is resumed at the destination and further pages are transferred. When the virtual machine tries to access pages which have not yet been transferred, page faults are generated and directed to the source. As a response, the faulted pages are sent to the destination.
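For illustration only, a minimal Python sketch of the post-copy idea described above; the class and its members are hypothetical and merely show how a page fault on the destination triggers a fetch from the source.

    # Toy model of post-copy paging: pages are fetched from the source on
    # first access at the destination (the "page fault" path).
    class PostCopyMemory:
        def __init__(self, source_mem):
            self._source = source_mem      # pages still residing on the source
            self._local = {}               # pages already transferred

        def read(self, page):
            if page not in self._local:    # fault: fetch the page from the source
                self._local[page] = self._source[page]
            return self._local[page]

    mem = PostCopyMemory({0: b"code", 1: b"data"})
    assert mem.read(1) == b"data"          # first access triggers the transfer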
The live migration of virtual machines using peripheral functions causes serious problems, as different memory areas are used by the virtual machine before and after migration. In the state of the art, peripheral devices executing peripheral functions that are in use are therefore disconnected before the migration and reconnected after the migration.
II. SUMMARY OF THE INVENTION
According to a first aspect of the present invention, a method for the live migration of a virtual machine from a source memory area to a destination memory area is provided, wherein the source memory area and the destination memory area are accessible by a peripheral function through a shared infrastructure, and wherein memory data stored in the source memory area are transferred to the destination memory area, wherein a source write request addressed to the source memory area and generated by a peripheral function is received, and wherein a destination write request comprising the same write data as the source write request and directed to the destination memory area is generated. As the write data is directed to the destination memory area, the use of different memory areas by the virtual machine before and after migration does not create problems. The source memory area is a part of a memory or a complete memory which is allocated to the virtual machine to be migrated. The destination memory area is a part of a memory or a complete memory which is allocated to the migrated virtual machine (the virtual machine after migration). Write data is data to be written into a memory.
A source write request is a write request to the source memory area. A destination write request is a write request to the destination memory area. A destination write request can consist of the same data as a source write request, wherein the source write request and the destination write request only differ in the memory areas to which they are directed. Thus, the generation of a destination write request can correspond to the outputting of a source write request at a different output directed to the destination memory area. Preferably, the shared infrastructure connects one or several computers with one or several hardware devices, enables the transfer of data from one or several hardware devices to one or several computers and can include a PCIe multi-root (MR) I/O fabric. Preferably, the shared infrastructure also enables the transfer of data from one or several computers to one or several hardware devices and is referred to as I/O infrastructure. The transfer of memory data stored in the source memory area to the destination memory area ensures that the virtual machine is identical before and after migration. In this context, memory data are data stored in a memory. The memory may be an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of memories would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In this context, a memory may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. Preferably, the source write request is generated by a peripheral function. Preferably, a destination write request is generated for each source write request after the migration of the virtual machine. Preferably, a destination write request is generated for each source write request from a point of time during the migration of the virtual machine. Preferably, the source write request(s) and destination write requests are direct memory access (DMA) write requests.
In a preferred embodiment, the destination write request is an additional write request. The source write request can still be sent to the source memory area so that the virtual machine can keep running during the migration. As the write data is also sent to the destination memory, the write data does not have to be transferred from the source memory area to the destination memory area, thus shortening the migration procedure.
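For illustration only, the following sketch (with hypothetical names such as addr_map, source_mem and dest_mem) shows the effect of such an additional destination write request: one incoming write is applied to the source memory area and mirrored to the corresponding destination address.

    # Toy model: apply a source write request and generate the matching
    # destination write request with the same write data.
    def mirror_write(write_req, addr_map, source_mem, dest_mem):
        addr, data = write_req
        source_mem[addr] = data            # original source write request
        dest_addr = addr_map[addr]         # corresponding destination address
        dest_mem[dest_addr] = data         # generated destination write request

    src, dst = {}, {}
    mirror_write((0x1000, b"payload"), {0x1000: 0x9000}, src, dst)
    assert src[0x1000] == b"payload" and dst[0x9000] == b"payload"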
In a further development of the last preferred embodiment, a write request addressed to said source memory area by a peripheral function is blocked while memory data are transferred to said destination memory area. Preferably, the write request by the peripheral function is only blocked if the write request is addressed to a section of the source memory area for which the memory data transfer has started and not yet ended. Preferably, write accesses by the peripheral function to destination memory areas in which the transferred memory data is stored or is to be stored are blocked.
In yet another preferred embodiment, the source virtual machine is paused when the destination memory area comprises nearly all the memory data of the source memory area. The destination memory area comprises nearly all the memory data of the source memory area when the size of the memory data stored in the source memory area but not in the destination memory area does not exceed a certain threshold. This threshold can be chosen arbitrarily. The suitable choice of the threshold is important for the execution of the live migration method. Typically, the threshold is equal to several memory pages (memory blocks of the same size). "Pausing the virtual machine" means that the CPU cannot perform any write requests on the memory pages allocated to the virtual machine. The pausing makes the complete transfer of memory data from the source memory area to the destination memory area easier, as no memory data of the source virtual machine are overwritten by the CPU.
In a further development of the last preferred embodiment, memory data stored in a part of the source memory area accessible by a write request of a peripheral function are transferred to the destination memory area while the virtual machine is paused.
Preferably, these memory data are transferred before all further memory data to be transferred from the source memory area to the destination memory area. After the complete transfer, write data generated by peripheral functions can be stored in the destination memory area.
In yet another further development of the last preferred embodiment, further write data comprised in further write requests addressed to the source memory area are exclusively saved in the destination memory area. Preferably, the further write requests are generated by one or several peripheral functions. Preferably, this exclusive storage is performed after the complete transfer.
In yet another further development of the last preferred embodiment, memory data stored in a part of the source memory area not accessible by the peripheral function are transferred to the destination memory area while the virtual machine is paused. The storage of the write data generated by the peripheral functions does not interfere with this transfer.
In yet another preferred embodiment, the destination memory area and the source memory area are located on the same computer.
This embodiment has specific requirements for devices performing the live migration method.
In yet another preferred embodiment, the destination memory area and the source memory area are located on different computers.
This embodiment also has specific requirements for devices performing the live migration method.
In general, a method for the live migration can comprise any possible combination of features of the preferred embodiments and further developments.
According to a second aspect of the present invention, a migration assistance unit for the live migration of a virtual machine from a source memory area to a destination memory area is provided, wherein the source memory area and the destination memory area are accessible by a peripheral function through a shared infrastructure, and wherein memory data stored in the source memory area are transferred to the destination memory area, wherein the migration assistance unit is adapted to receive a source write request addressed to the source memory area and generated by a peripheral function, and to generate a destination write request comprising the same write data as the source write request and directed to the destination memory area of the virtual machine. The migration assistance unit can be located at different places of a computer cluster. The migration assistance unit can take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment containing both hardware and software elements. The migration assistance unit can also be integrated in another device (e.g., an extended PCIe multi-root (MR) I/O fabric or IOMMU). Preferably, the migration assistance unit is located either on a computer or in the shared infrastructure, in particular between a conventional PCIe multi-root (MR) I/O fabric and one or several server blades. A migration assistance unit located on a computer is referred to as a computer migration assistance unit. A migration assistance unit located in the infrastructure is referred to as an infrastructure migration assistance unit. Every computer can comprise a computer migration assistance unit in order to enable the migration of a virtual machine on the same computer. A single infrastructure migration assistance unit can be sufficient in order to enable the migration of a virtual machine from a source computer to a destination computer.
According to a third aspect of the present invention, a computer system is provided, wherein the computer system comprises a migration assistance unit adapted to generate a destination write request comprising the same write data as a source write request addressed to a source memory area of a virtual machine and to direct said destination write request to a destination memory area of said virtual machine. Preferably, the computer system comprises at least two computers, each computer including at least one processor coupled directly or indirectly to memory elements through a system bus.
In a preferred embodiment, the migration assistance unit is located on a computer comprising the source memory area and the destination memory area. This migration assistance unit is referred to as a computer migration assistance unit.
In yet another preferred embodiment, the migration assistance unit is located in the shared infrastructure, and the computer system comprises a first computer on which the source memory area is located and a second computer on which the destination memory area is located. This migration assistance unit is referred to as an infrastructure migration assistance unit. Preferably, the computer system comprises a computer migration assistance unit and an infrastructure migration assistance unit.
Preferably, the computer system comprises several computer migration assistance units, wherein each computer migration assistance unit is located on a different computer.
According to a fourth aspect of the present invention, a data processing program for execution in a data processing system comprising software code portions for performing a method for the live migration of a virtual machine from a source memory area to a destination memory area is provided, wherein the source memory area and the destination memory area are accessible by a peripheral function through a shared infrastructure, and wherein memory data stored in the source memory area are transferred to the destination memory area, wherein a source write request addressed to the source memory area and generated by a peripheral function is received, and wherein a destination write request comprising the same write data as the source write request and directed to the destination memory area is generated when the data processing program is run on the data processing system. The data processing program is accessible from a computer-usable or computer-readable medium providing program code for use by or in connection with a computer or any instruction execution system. For the purposes of this description, a computer-usable or computer-readable medium can be any apparatus that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. The medium can be an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system (or apparatus or device) or a propagation medium. Examples of a computer-readable medium include a semiconductor or solid state memory, magnetic tape, a removable computer diskette, a random access memory (RAM), a read-only memory (ROM), a rigid magnetic disk and an optical disk. Current examples of optical disks include compact disk read-only memory (CD-ROM), compact disk read/write (CD-R/W) and DVD. A data processing system suitable for storing and/or executing program code will include at least one processor coupled directly or indirectly to memory elements through a system bus. The memory elements can include local memory employed during actual execution of the program code, bulk storage, and cache memories which provide temporary storage of at least some program code in order to reduce the number of times code must be retrieved from bulk storage during execution. Input/output or I/O devices (including but not limited to keyboards, displays, pointing devices, etc.) can be coupled to the system either directly or through intervening I/O controllers. Network adapters may also be coupled to the system to enable the data processing system to become coupled to other data processing systems or remote printers or storage devices through intervening private or public networks.
Modems, cable modems and Ethernet cards are just a few of the currently available types of network adapters.
According to a fifth aspect of the present invention, a computer program product for the live migration of a virtual machine from a source memory area to a destination memory area is provided, wherein the source memory area and the destination memory area are accessible by a peripheral function through a shared infrastructure, and wherein memory data stored in the source memory area are transferred to the destination memory area, wherein the computer program product comprises a computer readable storage medium having program code embodied therewith, to perform a method comprising:
- receiving a source write request addressed to the source memory area and generated by a peripheral function, and
- generating a destination write request comprising the same write data as the source write request and directed to the destination memory area.
In general, the migration assistance unit and the computer system according to the invention can comprise any possible combination of features enabling the execution of the preferred embodiments and further developments of the method for the live migration mentioned above. In general, a data processing program and a computer program product according to the invention can comprise any possible combination of features of the preferred embodiments and further developments of the method for the live migration mentioned above.
III. BRIEF DESCRIPTION OF THE DRAWINGS
A detailed description of the invention is given in the following drawings, in which:
FIG. 1 shows a blade system;
FIG. 2 shows a server blade of FIG. 1;
FIG. 3 shows an extended PCIe multi-root I/O fabric of FIG. 1;
FIG. 4A shows the first part of a live migration process; and
FIG. 4B shows the second part of a live migration process.
IV. DETAILED DESCRIPTION OF THE INVENTION
FIG. 1 illustrates a blade system comprising several server blades 210, 220... 230, an extended PCIe multi-root I/O fabric 170, PCIe buses 110... 120, a blade enclosure 100, and a configuration manager 130. Each of the PCIe buses 110... 120 comprises several PCIe cards 111, 112... 121, 122... The server blades 210, 220... 230, the PCIe buses 110, 120..., and the configuration manager are connected to the extended PCIe multi-root I/O fabric 170 through the I/O links 140... 150, 160, 180, 190... 200 so that each server can communicate with any of the PCIe cards 111, 112... 121, 122... 131, 132... Each of the PCIe cards can execute one or several PCIe functions. A virtual machine VM is run by the hypervisor HV on the server blade 210 (more precisely, VM and HV represent the memory areas allocated to a virtual machine and a hypervisor, respectively). The virtual machine VM migrated on the same server blade 210 is represented by VM', which is run by the hypervisor HV' (more precisely, VM' and HV' represent the memory areas allocated to a virtual machine migrated on the same server blade 210 and a hypervisor running this virtual machine, respectively). The virtual machine VM migrated to the different server blade 220 is represented by VM'', which is run by the hypervisor HV'' (more precisely, VM'' and HV'' represent the memory areas allocated to a virtual machine migrated to the different server blade 220 and a hypervisor running this virtual machine, respectively). Each hypervisor HV, HV', and HV'' includes correspondence tables C, C', and C'', wherein each correspondence table associates peripheral addresses used by the peripheral function with the corresponding virtual addresses used by the hypervisors HV, HV', and HV'', respectively. The virtual machines VM' and VM'' are referred to as migrated virtual machines in order to distinguish them from the virtual machine VM. The migrated virtual machines could also be referred to as destination virtual machines, and the virtual machine VM could be referred to as the source virtual machine. The basic layout of the blade system of FIG. 1 is known from the state of the art. Therefore it is not necessary to describe all features of the blade system in detail. However, the present blade system comprises additional features relating to the migration of a virtual machine VM on the same server blade 210 or to a different server blade 220. Hereinafter, the focus is on these additional features.
FIG. 2 illustrates the server blade 210 of FIG. 1, which comprises a root complex 211, a memory controller 212, a memory 213, a memory management unit (MMU) 214, and a central processor unit (CPU) 215. The CPU 215 is coupled to the MMU 214 through the link 219. The MMU 214, the memory 213, and the root complex 211 are connected to the memory controller 212 via the links 218, 217, and 216, respectively. The root complex 211 is connected to the extended PCIe multi-root I/O fabric 170 via the I/O link 180, which is formed as a PCIe bus. All server blades 210, 220, 230... are typically formed identically, but can be formed differently. The CPU 215, the MMU 214, and the memory controller 212 are known from the state of the art and are therefore not described in detail hereinafter. The memory 213 differs from memories known from the state of the art only by the stored data and the data storing procedure and is therefore also not described in detail hereinafter. The root complex 211 comprises many features which are as well known from the state of the art and therefore not described in detail hereinafter.
However, the root complex comprises additional features relating to the migration of the virtual machine VM which are described in more detail hereinafter.
The memory 213 comprises memory pages (memory blocks with the same size), each having a unique address, and non-pageable memory data. Specific memory areas are allocated to CPU translation tables 2131, main peripheral translation tables 2132, additional peripheral translation tables, a hypervisor HV running the virtual machine VM, and a hypervisor HV' running the virtual machine VM'. The CPU translation tables 2131 contain translations of CPU memory addresses to physical memory addresses and can be stored in any fashion. CPU memory addresses are the addresses of memory pages in the memory 213 used by the CPU. Physical memory addresses are the addresses of memory pages in the memory 213 used by the memory 213. The main peripheral translation tables 2132 contain translations of peripheral memory addresses to physical memory addresses and can be stored in any fashion, wherein these physical memory addresses are addresses of memory pages allocated to the virtual machine VM. Peripheral memory addresses are the addresses of memory pages in the memory 213 used by one or several peripheral functions. In general, the main peripheral translation tables 2132 are generated by the hypervisor HV. The additional peripheral translation tables 2133 contain translations of peripheral memory addresses to physical memory addresses and can be stored in any fashion, wherein these physical memory addresses are addresses of memory pages allocated to the virtual machine VM'. In general, the additional peripheral translation tables 2133 are generated by the hypervisor HV'. The MMU 214 receives read and write requests from the CPU 215.
The read and write requests use CPU memory addresses of memory areas (typically memory pages) of the memory 213. However, the memory 213 uses only physical addresses in order to address memory areas. Therefore the MMU 214 translates the CPU memory addresses to physical addresses. The MMU 214 checks whether the translation of the CPU memory address to the corresponding physical address is stored in a translation lookaside buffer (TLB) 2141. If no translation is stored in the translation lookaside buffer (TLB) 2141, the MMU 214 generates a CPU translation memory request in order to search for the translation in the CPU translation tables 2131. Thereupon the MMU 214 stores the translation in the translation lookaside buffer (TLB) 2141.

The root complex 211 comprises an input/output memory management unit with an integrated computer migration assistance unit (IOMMU/CMAU). The computer migration assistance unit is a migration assistance unit located on a server and can also be referred to as an input/output memory management unit with an integrated migration assistance unit (IOMMU/MAU), as the location of the migration assistance unit is determined by the IOMMU. The input/output memory management unit with an integrated computer migration assistance unit (IOMMU/CMAU) is an IOMMU capable of performing the standard functions as known from the state of the art and additional functions. These additional functions relate to the migration of a virtual machine on the same server and are described hereinafter. The additional functions could be performed by a separate migration assistance unit. However, the functionality of the IOMMU is enhanced in order to perform these functions instead of providing a separate migration assistance unit located on the same server. The root complex 211 receives direct memory access (DMA) read and write requests from the PCIe functions executed by the PCIe devices 111, 112... 121, 122... 131, 132... via the extended PCIe multi-root I/O fabric 170 (see FIG. 1). Hereinafter, all read and write requests from PCIe functions are DMA read and write requests even if this is not mentioned explicitly. The read and write requests from PCIe functions use peripheral memory addresses of memory pages of the memory 213, wherein the memory 213 uses exclusively physical addresses in order to address memory pages as mentioned before. Therefore, the IOMMU 2111, which comprises a table walker 2112, a main IOTLB 2113, an additional IOTLB, control registers 2115, and a controller 2116, translates the peripheral memory addresses to physical addresses.
When the IOMMU 2111 receives a read or write request comprising a peripheral memory address from a PCIe function, the IOMMU/CMAU 2111 checks whether the translation of the peripheral memory address to the corresponding physical memory address is stored in the main IOTLB 2113. In order to find the translation, the controller 2116 accesses the main IOTLB 2113 and searches for the translation. If the controller 2116 does not find the translation in the IOTLB 2113, the controller 2116 invokes the table walker 2112, which searches the main peripheral translation tables 2132 in the memory 213. Thereupon, the controller 2116 stores the translation in the main IOTLB 2113. The invocation of the table walker 2112 is not necessary if the main IOTLB 2113 is complete, as is preferably the case.
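For illustration only, the lookup flow just described can be summarized in a short sketch; the dict-based IOTLB and translation tables are assumptions standing in for the hardware structures 2113 and 2132.

    # Toy model of the IOTLB lookup: hit in the IOTLB, otherwise walk the
    # peripheral translation tables and cache the result.
    def translate_peripheral_address(periph_addr, iotlb, translation_tables):
        if periph_addr in iotlb:                     # IOTLB hit
            return iotlb[periph_addr]
        phys_addr = translation_tables[periph_addr]  # table-walker lookup
        iotlb[periph_addr] = phys_addr               # cache for later requests
        return phys_addr

    iotlb, tables = {}, {0x4000: 0x12000}
    assert translate_peripheral_address(0x4000, iotlb, tables) == 0x12000
    assert 0x4000 in iotlb                           # translation is now cached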
When the virtual machine VM is migrated on the server blade 210, the IOMMU/CMAU 2111 may have to generate write requests or additional write requests addressed to memory pages allocated to the migrated virtual machine VM' on the basis of write requests addressed to one or several memory pages allocated to the virtual machine VM. For this purpose, the IOMMU/CMAU 2111 checks whether the peripheral memory address and the translation of the peripheral memory address to the corresponding memory address of the memory page allocated to the migrated virtual machine VM' are stored in the additional IOTLB 2114. In order to find the peripheral memory address and the translation, the controller 2116 accesses the additional IOTLB 2114 and searches for the translation.
If the controller 2116 does not find the peripheral memory address and the translation in the additional IOTLB 2114, the controller 2116 invokes the table walker 2112, which searches the additional peripheral translation tables 2133 in the memory 213.
Thereupon, the controller 2116 stores the translation in the additional IOTLB 2114. The invocation of the table walker 2112 is not necessary if the additional IOTLB 2114 is complete, as is preferably the case. Then the controller 2116 invokes a write request generator 2115, which generates a write request or an additional write request comprising the same write data (data to be written) as the write request on which the write request or additional write request is based. Basically, the write request generator 2115 exchanges the address of the write request. Finally, the IOMMU/CMAU 2111 sends the generated write request or additional write request to a corresponding memory area of the destination memory area, wherein there is a corresponding destination memory area for each source memory area. As the additions to a standard IOMMU can be implemented by a person skilled in the art on the basis of the given description, no more detailed description is required, wherein the IOMMU/CMAU 2111 can take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment containing both hardware and software elements.
FIG. 3 illustrates the extended PCIe multi-root I/O fabric 170 of FIG. 1, which comprises a conventional PCIe multi-root I/O fabric 1701 as known from the state of the art, which is configured by the configuration manager 130 (see FIG. 1), and an infrastructure migration assistance unit (IMAU) 1705. The conventional PCIe multi-root I/O fabric 1701 is connected to the PCIe buses 110... 120 and the configuration manager 130 (see FIG. 1) through the I/O links 140... 150, 160 and to the IMAU 1705 through the I/O links 1702... 1703, 1704. The IMAU 1705 is connected to the server blades 210... 220, 230 through the I/O links 180... 190, 200. In normal operation, the IMAU 1705 is configured to connect the I/O links 1702... 1703, 1704 to the respective I/O links 180... 190, 200 so that all requests are transmitted from the I/O links 1702... 1703, 1704 to the respective I/O links 180... 190, 200 and vice versa.
The infrastructure migration assistance unit 1705 comprises a buffer 1706, a controller 1708, a write request generator 1709, and an ID memory 1707. When the virtual machine VM is migrated from the server blade 210 to the server blade 220 (see FIG. 1), the infrastructure migration assistance unit 1705 may generate write requests or additional write requests directed to a destination virtual machine ID (the migrated virtual machine VM'' on the server blade 220) on the basis of write requests directed to a source virtual machine ID (the virtual machine to be migrated on the server blade 210, see FIG. 1), wherein requests not directed to the virtual machine are still transmitted from the I/O links 1702... 1703, 1704 to the respective I/O links 180... 190, 200 and vice versa as in normal operation. For generating write requests or additional write requests, the infrastructure migration assistance unit 1705 stores each request received from the conventional PCIe multi-root I/O fabric 1701 in the buffer 1706 and checks whether the request is a write request and, if so, whether the write request is addressed to a memory page allocated to the virtual machine VM. In order to find out whether the write request is directed to a peripheral memory page allocated to the virtual machine VM, the controller 1708 checks whether the request comprises a function ID stored in the ID memory 1707. For this check, the controller 1708 accesses the ID memory 1707 and searches for an entry comprising the function ID and the associated link ID (the ID of an I/O link). If the controller 1708 finds the function ID and the corresponding I/O link 180... 190, 200, the controller invokes the write request generator 1709, which generates a write request or an additional write request comprising the same write data (data to be written) as the write request on which the write request or additional write request is based. Finally, the infrastructure migration assistance unit 1705 sends the generated write request or additional write request via the associated I/O link 180... 190, 200 to a corresponding memory area of the destination memory area, wherein there is a corresponding destination memory area for each source memory area. In order to generate one of the entries of the ID memory 1707, the controller 1708 prompts the storing of such an entry comprising a function ID and a link ID when the infrastructure migration assistance unit 1705 receives the function ID via one of the I/O links 180... 190, 200 associated with the link ID from a destination hypervisor. As the infrastructure migration assistance unit 1705 comprising a buffer 1706, a controller 1708, a write request generator 1709, and an ID memory 1707 can be implemented by a person skilled in the art on the basis of the given description, no more detailed description is required, wherein the infrastructure migration assistance unit 1705 can take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment containing both hardware and software elements.
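For illustration only, the following sketch models how the infrastructure migration assistance unit could decide, from the function ID, whether to duplicate a write request towards the destination link; the dictionary ID memory and the callback names are assumptions, not the actual hardware interface.

    # Toy model: forward every request as in normal operation and, for write
    # requests whose function ID is registered in the ID memory, additionally
    # emit the same write data on the destination I/O link.
    def route_request(request, id_memory, forward, duplicate_to):
        forward(request)                                    # normal forwarding
        if request["type"] == "write" and request["fn_id"] in id_memory:
            dest_link = id_memory[request["fn_id"]]         # destination link ID
            duplicate_to(dest_link, request["addr"], request["data"])

    id_mem = {"fn_17": "link_190"}                          # function ID -> link ID
    sent = []
    route_request({"type": "write", "fn_id": "fn_17", "addr": 0x2000, "data": b"x"},
                  id_mem,
                  forward=lambda req: None,
                  duplicate_to=lambda link, addr, data: sent.append((link, addr, data)))
    assert sent == [("link_190", 0x2000, b"x")]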
FIG. 4A shows the first part of a live migration process. FIG. 4B shows the second part of a live migration process. The first part shown in FIG. 4A and the second part shown in FIG. 4B together form the complete live migration process according to scenario A. The migration process uses the infrastructure migration assistance unit 1705 (see FIG. 3) and may be initiated by a further routine such as a routine performed by IBM's Active Energy Manager, Director VM Control or the like (not shown). This routine informs a source hypervisor running a virtual machine and a destination hypervisor which shall run the virtual machine that the virtual machine is to be migrated from the source server blade on which the source hypervisor is installed to the destination server blade on which the destination hypervisor is installed. This information includes a source virtual machine ID enabling the addressing of the source virtual machine (virtual machine to be migrated) and a destination virtual machine ID enabling the addressing of the destination virtual machine (migrated virtual machine). The source hypervisor is installed on a source server blade. The destination hypervisor is installed on a destination server blade. The source and the destination server blades are not identical. The virtual machine occupies a specific area of the memory of the source server blade which is partitioned into memory pages. In the following, the virtual machine which is formed on the destination server blade is referred to as the migrated virtual machine, even if it is still being formed and not yet a complete virtual machine, in order to distinguish this virtual machine from the virtual machine running on the source server blade. The source server blade could be the server blade 210, the source hypervisor could be the hypervisor HV, the virtual machine could be the virtual machine VM, the destination server blade could be the server blade 220, the destination hypervisor could be the hypervisor HV'', and the migrated virtual machine could be the virtual machine VM'' (see FIG. 1). The CPU can perform write and read accesses on all memory pages of the virtual machine. The MMU manages the access to the memory by the CPU. Additionally, PCIe functions performed by the PCIe cards (see FIG. 1) can perform write and read requests on the memory pages accessible by the PCIe functions. The IOMMU manages the access to the memory pages by the PCIe functions and allows only the access to specific memory pages. The addresses of these specific memory pages are stored in the main peripheral translation tables 2132 (see FIG. 2).

In step S1, the source hypervisor sends to the destination hypervisor the IDs of the PCIe functions (function IDs) which are allowed to access the memory pages allocated to the virtual machine, the source correspondence tables, and the size of the virtual machine. The source correspondence tables are the correspondence tables of the hypervisor running the virtual machine to be migrated. The size of the virtual machine is typically specified in multiples of memory pages, wherein the virtual machine can comprise non-pageable memory data.
In step S2, the destination hypervisor allocates memory to the migrated virtual machine on the destination server blade depending on the size of the virtual machine on the source server blade. The size of the allocated memory on the destination server blade is identical to the size of the memory of the virtual machine on the source server blade. Additionally, the destination hypervisor sends the function IDs of the PCIe functions which are allowed to access the memory pages allocated to the virtual machine to the infrastructure migration assistance unit, which stores the function IDs and the respective link IDs of the I/O links through which the PCIe function is received in the ID memory, as explained with reference to FIG. 3.
In step S3, the destination hypervisor generates destination correspondence tables and sends them to the IOMMU of the destination server blade (destination IOMMU). The destination correspondence tables are the correspondence tables of the hypervisor running the migrated virtual machine.
In step S4, the IOMMU of the destination server blade generates main peripheral address translation tables for the translation of peripheral addresses to memory addresses used by the IOMMU of the destination server blade and vice versa on the basis of the destination correspondence tables (the peripheral translation tables for the destination virtual machine on the same server as the source virtual machine are referred to as additional peripheral translation tables) and stores these main peripheral address translation tables in the memory (see FIG. 2).

In step S5, the infrastructure migration assistance unit is configured to generate additional write requests addressed to the migrated virtual machine on the basis of write requests addressed to the virtual machine. Each additional write request comprises the same write data as the write request addressed to the virtual machine. The write request addressed to the virtual machine and the associated additional write request are always directed to corresponding memory areas allocated to the virtual machine or the migrated virtual machine, respectively. For the generation of additional write requests, the migration assistance unit uses the procedure described with reference to FIG. 3. As the migration assistance unit generates an additional write request addressed to the migrated virtual machine for any write request addressed to the virtual machine, any data to be written in a memory area(s) allocated to the virtual machine is also written in the corresponding memory area(s) dedicated to the migrated virtual machine.
In step S6, the source hypervisor marks all memory pages of the virtual machine as dirty. In this context, dirty memory pages are memory areas comprising memory data which have not been transferred to the migrated virtual machine.
In step S7, the source hypervisor selects several dirty memory pages on the source virtual machine. The selection can be made dependent on the physical address of the memory page (e.g., the memory page with the lowest physical address is selected first) or any other selection scheme.
In step S8, the infrastructure migration assistance unit blocks the write accesses to the selected memory pages on the source server blade and the corresponding memory pages on the destination server blade by any PCIe function, wherein the infrastructure migration assistance unit recognizes the write accesses to be blocked by their function IDs. In order to block the selected memory page(s) and the corresponding memory pages, the infrastructure migration assistance unit does not forward any write request directed to the selected memory page(s) and corresponding memory pages as long as the memory pages and corresponding memory pages are blocked, and stores the blocked write requests temporarily. Write accesses by the CPU are not blocked.
Thus, the CPU can access the selected memory pages via the MMU and overwrite them. The MMU marks the overwritten memory pages as dirty.
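For illustration only, the blocking behaviour of step S8 and the flushing of step S11 can be pictured with the following sketch; the class and its members are hypothetical, not part of the disclosure.

    # Toy model: write requests from PCIe functions to blocked pages are held
    # back and are forwarded only once the pages are unblocked again.
    class WriteBlocker:
        def __init__(self, forward):
            self._forward = forward        # delivers a write request to memory
            self._blocked = set()          # page IDs that are currently blocked
            self._pending = []             # write requests held back meanwhile

        def block(self, pages):
            self._blocked |= set(pages)

        def handle_write(self, page, data):
            if page in self._blocked:
                self._pending.append((page, data))   # store temporarily
            else:
                self._forward(page, data)            # forward as usual

        def unblock(self, pages):
            self._blocked -= set(pages)
            still_pending = []
            for page, data in self._pending:
                if page in self._blocked:
                    still_pending.append((page, data))
                else:
                    self._forward(page, data)        # flush held-back writes
            self._pending = still_pending

    delivered = []
    blocker = WriteBlocker(forward=lambda page, data: delivered.append((page, data)))
    blocker.block({7})
    blocker.handle_write(7, b"held")       # queued while page 7 is blocked
    blocker.handle_write(8, b"direct")     # forwarded immediately
    blocker.unblock({7})                   # queued write is flushed now
    assert delivered == [(8, b"direct"), (7, b"held")]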
In step S9, the source hypervisor marks the selected memory pages as clean and sends them to the destination hypervisor.
In step S10, the destination hypervisor stores the write data of the selected memory pages on the corresponding memory pages of the migrated virtual machine.
In step S11, the infrastructure migration assistance unit unblocks the selected memory pages on the source server blade and the corresponding memory pages on the destination server blade which are blocked. The infrastructure migration assistance unit forwards the temporarily stored write requests and again transmits any write requests directed to the selected memory pages of the virtual machine and the corresponding memory pages of the migrated virtual machine. The source hypervisor unselects the memory pages.
In step S12, the source hypervisor checks whether the number of dirty memory pages is lower than the threshold. The threshold may be a certain number of memory pages of the given size. If the number of dirty memory pages is not lower than the threshold, the source hypervisor goes back to step S7. If the number of dirty memory pages is lower than the threshold, the source hypervisor goes to step S13.
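For illustration only, steps S6 to S12 can be tied together in a short sketch; the page dicts, the batch size of two pages and the block/unblock callbacks are assumptions chosen to keep the example small.

    # Toy model of the S6-S12 loop: select dirty pages, block peripheral
    # writes to them, copy them, unblock, and repeat until few remain dirty.
    def transfer_loop(source_pages, dest_pages, dirtied_since_last_pass,
                      block, unblock, threshold=2):
        dirty = set(source_pages)                # step S6: all pages are dirty
        while len(dirty) >= threshold:           # step S12: threshold check
            batch = set(sorted(dirty)[:2])       # step S7: select some pages
            block(batch)                         # step S8: block PCIe writes
            for page in batch:                   # steps S9/S10: copy the pages
                dest_pages[page] = source_pages[page]
            dirty -= batch
            unblock(batch)                       # step S11: unblock the pages
            dirty |= dirtied_since_last_pass()   # CPU may re-dirty some pages
        return dirty                             # remainder handled in S13-S19

    src = {0: b"p0", 1: b"p1", 2: b"p2", 3: b"p3"}
    dst = {}
    rounds = iter([{0}, set(), set()])
    left = transfer_loop(src, dst, lambda: next(rounds),
                         block=lambda pages: None, unblock=lambda pages: None)
    assert len(left) < 2 and all(dst[p] == src[p] for p in dst)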
In step S13, the MMU pauses the virtual machine, i.e. the CPU cannot perform any write requests on the memory pages allocated to the virtual machine. In order to pause the virtual machine, the source hypervisor sends a pause request to the MMU.
In step S14, the source hypervisor selects all dirty memory pages accessible by a write request of any PCIe function.
In step S15, steps S8 to S11 are performed for the selected memory pages while the virtual machine is paused. The CPU cannot overwrite the memory pages of the virtual machine while memory pages are transferred from the source server blade to the destination server blade.
In step S16, the infrastructure migration assistance unit is configured to send all write requests addressed to the virtual machine exclusively to the migrated virtual machine.
In step S17, the source hypervisor selects all remaining dirty memory pages and non-pageable memory data. The remaining dirty memory pages are exclusively memory pages, such as CPU registers, which cannot be accessed by any peripheral function. The non-pageable memory data comprise PCIe buffers.
In step S18, the source hypervisor sends all selected memory pages and non-pageable memory data to the destination hypervisor.
In step S19, the destination hypervisor stores the selected memory pages as corresponding memory pages and the non-pageable memory data as corresponding non-pageable memory data of the migrated virtual machine.
In step S20, the migrated virtual machine is started. The migrated virtual machine uses the virtual machine ID sent to the destination hypervisor in step S2.
In step S21, the migration assistance unit ends the access of the virtual machine to the peripheral devices.
In step S22, the source hypervisor stops the virtual machine and deletes the memory pages of the virtual machine.
In contrast to the live migration process according to scenario A, which uses the infrastructure migration assistance unit 1705, a live migration process according to scenario B uses the computer migration assistance unit CMAU 2111, which is integrated in the IOMMU (see FIG. 2). For a live migration process according to scenario B, the source hypervisor and the destination hypervisor are installed on the same server blade and could even be identical. The live migration process of scenario B also comprises steps S1 to S22, which are basically identical to the steps S1 to S22 of the live migration process according to scenario A. The few differences, which are due to the different location of the migration assistance unit, the computer migration assistance unit, and its integration in the IOMMU, are mentioned for each step separately hereinafter.
In step S2, the destination hypervisor does not send the function IDs of the PCIe functions which are allowed to access the memory pages allocated to the virtual machine to the infrastructure migration assistance unit, and the infrastructure migration assistance unit does not store the function IDs and the respective link IDs of the I/O links through which the PCIe function is received in the ID memory.
In step S4, the translation tables are not stored as main peripheral translation tables 2132, but as additional peripheral translation tables 2133 (see FIG. 2).

In step S5, the migration assistance unit does not use the procedure described with reference to FIG. 3 for the generation of additional write requests, but the procedure described with reference to FIG. 2.
In step S8, the computer migration assistance unit blocks write accesses, not the infrastructure migration assistance unit.
The infrastructure migration assistance unit always forwards write requests directed to the virtual machine. The computer migration assistance unit stores the blocked write requests temporarily.
In step S11, the computer migration assistance unit unblocks the selected memory pages allocated to the virtual machine (source virtual machine) and the corresponding memory pages allocated to the migrated virtual machine (destination virtual machine). The computer migration assistance unit forwards the temporarily stored write requests and again transmits any write requests directed to the selected memory pages of the virtual machine and the corresponding memory pages of the migrated virtual machine. The source hypervisor unselects the memory pages.
In step S15, memory pages on the same server blade are transferred from a memory area allocated to the virtual machine to a memory area allocated to the migrated virtual machine.

Claims (15)

  1. A method for the live migration of a virtual machine from a source memory area (VM) to a destination memory area (VM', VM''), wherein said source memory area (VM) and said destination memory area (VM', VM'') are accessible by a peripheral function through a shared infrastructure, and wherein memory data stored in said source memory area (VM) are transferred to said destination memory area (VM', VM''), characterized in that a source write request addressed to said source memory area (VM) and generated by a peripheral function is received, and that a destination write request comprising the same write data as said source write request and directed to said destination memory area (VM', VM'') is generated.
  2. The method according to claim 1, characterized in that said destination write request is an additional write request.
  3. The method according to claim 1, characterized in that a write request addressed to said source memory area (VM) by a peripheral function is blocked while memory data are transferred to said destination memory area (VM', VM'').
  4. The method according to claim 1, characterized in that said source virtual machine is paused when said destination memory area (VM', VM'') comprises nearly all memory data of said source memory area (VM).
  5. The method according to claim 4, characterized in that memory data stored in a part of said source memory area accessible by a write request of a peripheral function are transferred to said destination memory area (VM', VM'') while said virtual machine is paused.
6. The method according to claim 5, characterized in that further write data comprised in further write requests addressed to the source memory area (VM) are exclusively saved in said destination memory area (VM', VM'').
7. The method according to claim 6, characterized in that memory data stored in a part of said source memory area (VM) not accessible by said peripheral function are transferred to said destination memory area (VM', VM'') while said virtual machine is paused.
8. The method according to claim 1, characterized in that the destination memory area (VM', VM'') and the source memory area (VM) are located on the same computer.
9. The method according to claim 1, characterized in that the destination memory area (VM'') and the source memory area (VM) are located on different computers (210, 220).
10. A migration assistance unit for the live migration of a virtual machine from a source memory area (VM) to a destination memory area (VM', VM''), wherein said source memory area (VM) and said destination memory area (VM', VM'') are accessible by a peripheral function through a shared infrastructure, and wherein memory data stored in said source memory area (VM) are transferred to said destination memory area (VM', VM''), characterized in that said migration assistance unit (1705, 2111) is adapted to receive a source write request addressed to said source memory area (VM) and generated by a peripheral function, and to generate a destination write request comprising the same write data as said source write request and directed to said destination memory area (VM', VM'') of said virtual machine.
11. A computer system comprising a migration assistance unit for the live migration of a virtual machine from a source memory area (VM) to a destination memory area (VM', VM''), wherein said source memory area (VM) and said destination memory area (VM', VM'') are accessible by a peripheral function through a shared infrastructure, and wherein memory data stored in said source memory area (VM) are transferred to said destination memory area (VM', VM''), characterized in that said migration assistance unit (1705, 2111) is adapted to receive a source write request addressed to said source memory area (VM) and generated by a peripheral function, and to generate a destination write request comprising the same write data as said source write request and directed to said destination memory area (VM', VM'') of said virtual machine.
12. A computer system according to claim 11, characterized in that said migration assistance unit is located on a computer (210) comprising said source memory area (VM) and said destination memory area (VM').
13. A computer system according to claim 11, characterized in that said migration assistance unit is located in the shared infrastructure, and that the computer system comprises a first computer (210) on which said source memory area (VM) is located and a second computer (220) on which said destination memory area (VM'') is located.
14. A data processing program for execution in a data processing system, comprising software code portions for performing a method for the live migration of a virtual machine from a source memory area (VM) to a destination memory area (VM', VM''), wherein said source memory area (VM) and said destination memory area (VM', VM'') are accessible by a peripheral function through a shared infrastructure, and wherein memory data stored in said source memory area (VM) are transferred to said destination memory area (VM', VM''), characterized in that a source write request addressed to said source memory area (VM) and generated by a peripheral function is received, and that a destination write request comprising the same write data as said source write request and directed to said destination memory area (VM', VM'') is generated when said data processing program is run on said data processing system.
15. A computer program product for the live migration of a virtual machine from a source memory area (VM) to a destination memory area (VM', VM''), wherein said source memory area (VM) and said destination memory area (VM', VM'') are accessible by a peripheral function through a shared infrastructure, and wherein memory data stored in said source memory area (VM) are transferred to said destination memory area (VM', VM''), characterized in that said computer program product comprises a computer readable storage medium having program code embodied therewith, to perform a method comprising:
- receiving a source write request addressed to said source memory area (VM) and generated by a peripheral function, and
- generating a destination write request comprising the same write data as said source write request and directed to said destination memory area (VM', VM'').
GB1314184.1A 2013-08-08 2013-08-08 Live migration of a virtual machine using a peripheral function Withdrawn GB2516944A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
GB1314184.1A GB2516944A (en) 2013-08-08 2013-08-08 Live migration of a virtual machine using a peripheral function
DE102014110804.3A DE102014110804A1 (en) 2013-08-08 2014-07-30 Live migration of a virtual machine using a peripheral function

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB1314184.1A GB2516944A (en) 2013-08-08 2013-08-08 Live migration of a virtual machine using a peripheral function

Publications (2)

Publication Number Publication Date
GB201314184D0 GB201314184D0 (en) 2013-09-25
GB2516944A true GB2516944A (en) 2015-02-11

Family

ID=49261867

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1314184.1A Withdrawn GB2516944A (en) 2013-08-08 2013-08-08 Live migration of a virtual machine using a peripheral function

Country Status (2)

Country Link
DE (1) DE102014110804A1 (en)
GB (1) GB2516944A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US12118376B2 (en) 2021-04-20 2024-10-15 Stmicroelectronics International N.V. Virtual mode execution manager

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090248973A1 (en) * 2008-03-26 2009-10-01 Venkatesh Deshpande System and method for providing address decode and virtual function (VF) migration support in a peripheral component interconnect express (PCIE) multi-root input/output virtualization (IOV) environment
US20110197039A1 (en) * 2010-02-08 2011-08-11 Microsoft Corporation Background Migration of Virtual Storage

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7996484B2 (en) 2008-12-11 2011-08-09 Microsoft Corporation Non-disruptive, reliable live migration of virtual machines with network data reception directly into virtual machines' memory
WO2012009843A1 (en) 2010-07-19 2012-01-26 Empire Technology Development Llc Virtual machine live migration with continual memory write monitor and send
US8490092B2 (en) 2011-07-06 2013-07-16 Microsoft Corporation Combined live migration and storage migration using file shares and mirroring

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090248973A1 (en) * 2008-03-26 2009-10-01 Venkatesh Deshpande System and method for providing address decode and virtual function (VF) migration support in a peripheral component interconnect express (PCIE) multi-root input/output virtualization (IOV) environment
US20110197039A1 (en) * 2010-02-08 2011-08-11 Microsoft Corporation Background Migration of Virtual Storage

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US12118376B2 (en) 2021-04-20 2024-10-15 Stmicroelectronics International N.V. Virtual mode execution manager

Also Published As

Publication number Publication date
GB201314184D0 (en) 2013-09-25
DE102014110804A1 (en) 2015-02-12

Similar Documents

Publication Publication Date Title
US7783858B2 (en) Reducing memory overhead of a page table in a dynamic logical partitioning environment
US9330013B2 (en) Method of cloning data in a memory for a virtual machine, product of computer programs and computer system therewith
EP2879053B1 (en) Virtual machine memory data migration method, related apparatus, and cluster system
US9854036B2 (en) Method for migrating memory data of virtual machine, and related apparatus and cluster system
US9697024B2 (en) Interrupt management method, and computer implementing the interrupt management method
US11113089B2 (en) Sharing data via virtual machine to host device bridging
KR20070100367A (en) Methods, devices, and systems for dynamically reallocating memory from one virtual machine to another
US9529618B2 (en) Migrating processes between source host and destination host using a shared virtual file system
US10540292B2 (en) TLB shootdowns for low overhead
US9875132B2 (en) Input output memory management unit based zero copy virtual machine to virtual machine communication
KR20230084300A (en) Chip system, virtual interrupt processing method and corresponding device
JP2007122305A (en) Virtual computer system
US10853259B2 (en) Exitless extended page table switching for nested hypervisors
Dong et al. HYVI: a hybrid virtualization solution balancing performance and manageability
US10430221B2 (en) Post-copy virtual machine migration with assigned devices
CN107491340A (en) Across the huge virtual machine realization method of physical machine
WO2016101282A1 (en) Method, device and system for processing i/o task
US20160239325A1 (en) Virtual device timeout by memory offlining
US7370137B2 (en) Inter-domain data mover for a memory-to-memory copy engine
KR20120070326A (en) A apparatus and a method for virtualizing memory
US20140208034A1 (en) System And Method for Efficient Paravirtualized OS Process Switching
GB2516944A (en) Live migration of a virtual machine using a peripheral function
US7389398B2 (en) Methods and apparatus for data transfer between partitions in a computer system
US12174749B2 (en) Page table manager
US20070220231A1 (en) Virtual address translation by a processor for a peripheral device

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)