
CN114115703B - Bare metal server online migration method and system - Google Patents

Bare metal server online migration method and system

Info

Publication number
CN114115703B
Authority
CN
China
Prior art keywords
hardware card
memory
bare metal
metal server
bms
Prior art date
Legal status
Active
Application number
CN202011337002.1A
Other languages
Chinese (zh)
Other versions
CN114115703A (en)
Inventor
龚磊
Current Assignee
Huawei Cloud Computing Technologies Co Ltd
Original Assignee
Huawei Cloud Computing Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Cloud Computing Technologies Co Ltd filed Critical Huawei Cloud Computing Technologies Co Ltd
Priority to EP21859684.9A priority Critical patent/EP4195021A4/en
Priority to PCT/CN2021/092962 priority patent/WO2022041839A1/en
Publication of CN114115703A publication Critical patent/CN114115703A/en
Priority to US18/175,853 priority patent/US20230214245A1/en
Application granted granted Critical
Publication of CN114115703B publication Critical patent/CN114115703B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0628 Interfaces specially adapted for storage systems making use of a particular technique
    • G06F 3/0646 Horizontal data movement in storage systems, i.e. moving data in between storage devices or systems
    • G06F 3/0647 Migration mechanisms
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0602 Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F 3/0604 Improving or facilitating administration, e.g. storage management
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F 9/45533 Hypervisors; Virtual machine monitors
    • G06F 9/45558 Hypervisor-specific management and integration aspects
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F 9/45533 Hypervisors; Virtual machine monitors
    • G06F 9/45558 Hypervisor-specific management and integration aspects
    • G06F 2009/4557 Distribution of virtual machine instances; Migration and load balancing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Hardware Redundancy (AREA)

Abstract

The present application provides a bare metal server online migration method and system. In the method, a first hardware card receives a migration command for a first bare metal server, where the first hardware card is inserted in the first bare metal server. According to the migration command, the first hardware card notifies the first bare metal server to start the virtual machine manager in the first bare metal server. The virtual machine manager records first memory dirty page location information, generated by the first bare metal server, for the memory of the first bare metal server, and sends the first memory dirty page location information to the first hardware card. The first hardware card then migrates the memory dirty pages of the first bare metal server online to a second bare metal server according to the first memory dirty page location information. The method enables online migration of a BMS.

Description

Online migration method and system for bare metal server
Technical Field
The application relates to the field of servers, in particular to a bare metal server online migration method and system.
Background
A bare metal server (BMS) is an upgrade of the traditional physical server. It combines the excellent performance of a traditional physical server with the convenient management platform of a cloud host, bringing excellent computing performance to tenants, and can satisfy core application scenarios such as high-performance computing loads, big data, and distributed databases, as well as service scenarios that require predictable and consistent performance, high performance, and stability.
At present, a BMS cannot be migrated online. However, capabilities such as load balancing and early avoidance of hardware faults without interrupting service can only be achieved through online migration, so enabling online migration of a BMS is of great significance.
Disclosure of Invention
To solve this problem, the present application provides a bare metal server online migration method and system, which can realize online (live) migration of a BMS.
In a first aspect, there is provided an online migration method of a bare metal server, the method comprising:
A first hardware card receives a migration command for a first bare metal server, where the first hardware card is inserted in the first bare metal server. According to the migration command, the first hardware card notifies the first bare metal server to start a virtual machine manager. The virtual machine manager records first memory dirty page location information, generated by the first bare metal server, for the memory of the first bare metal server, and sends the first memory dirty page location information to the first hardware card. The first hardware card migrates memory dirty pages of the first bare metal server online to a second bare metal server according to the first memory dirty page location information.
In the above scheme, after receiving the migration command, the first hardware card notifies the first bare metal server to start the virtual machine manager, which records the first memory dirty page location information generated by the first bare metal server for its memory, so that the memory dirty pages of the first bare metal server can be migrated online to the second bare metal server. Online migration of a BMS can thus be realized, and because the work of migrating the memory dirty pages according to the first memory dirty page location information is borne by the first hardware card, the burden on the first bare metal server is effectively reduced.
In some possible designs, after the first hardware card receives the online migration command for the first bare metal server, the method includes the first hardware card recording second memory dirty page location information for a memory of the first bare metal server generated by the first hardware card.
In the above scheme, the first hardware card may further record second memory dirty page location information, generated by the first hardware card, for the memory of the first bare metal server, so that the memory dirty pages indicated by the second memory dirty page location information can also be sent.
In some scenarios, the first hardware card may write data into the memory of the first bare metal server by direct memory access (DMA). Modifications made to the memory data of the first bare metal server in this way are not monitored by the virtual machine manager of the first bare metal server, so the first hardware card needs to record the second memory dirty page location information itself, so that it can also migrate the memory data modified by DMA to the target, the second bare metal server.
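As an illustration of this point, the following C sketch shows how a hardware card's DMA path might record the pages it writes into a card-side dirty bitmap, so that DMA-written data is not lost even though the VMM on the server cannot observe it. The structure names, the page size, and the card_dma_write helper are assumptions made for illustration, not details taken from the patent.

```c
#include <stdint.h>
#include <string.h>

#define PAGE_SIZE   4096u
#define GUEST_PAGES (1u << 20)              /* e.g. 4 GiB of guest memory */

/* Card-side record of guest pages dirtied by DMA (the "second memory
 * dirty page location information" kept by the first hardware card). */
static uint8_t dma_dirty_bitmap[GUEST_PAGES / 8];

static void mark_dma_dirty(uint64_t guest_phys_addr, size_t len)
{
    uint64_t first = guest_phys_addr / PAGE_SIZE;
    uint64_t last  = (guest_phys_addr + len - 1) / PAGE_SIZE;

    for (uint64_t pfn = first; pfn <= last && pfn < GUEST_PAGES; pfn++)
        dma_dirty_bitmap[pfn / 8] |= (uint8_t)(1u << (pfn % 8));
}

/* Hypothetical DMA write path: every write into BMS memory is recorded
 * before the data is transferred, so the card can later migrate pages
 * the VMM never saw being modified. */
void card_dma_write(void *guest_mem_base, uint64_t guest_phys_addr,
                    const void *src, size_t len)
{
    mark_dma_dirty(guest_phys_addr, len);
    memcpy((uint8_t *)guest_mem_base + guest_phys_addr, src, len);
}
```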
In some possible designs, the first hardware card migrates the memory dirty pages of the first bare metal server online to the second bare metal server according to the first memory dirty page location information as follows. The first hardware card obtains, from the memory, at least one first memory page that has become dirty according to the first memory dirty page location information, obtains at least one second memory page that has become dirty according to the second memory dirty page location information, and sends the at least one first memory page and the at least one second memory page to a second hardware card, where the second hardware card is connected to the first hardware card over a network. The second hardware card sets the memory of the second bare metal server according to the at least one first memory page and the at least one second memory page, where the second hardware card is inserted in the second bare metal server.
The first hardware card obtains the corresponding memory pages from the memory of the first bare metal server according to the first memory dirty page location information and the second memory dirty page location information respectively. These memory pages are memory dirty pages, that is, memory pages to which data has been written or in which data has been modified. The first hardware card sends these memory pages to the second hardware card over the network, and the second hardware card sets the memory of the second bare metal server according to them, so that memory changes of the first bare metal server are synchronized to the memory of the second bare metal server in real time, thereby realizing online migration of the bare metal server. A sketch of this transfer loop follows.
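The sketch below is a minimal pre-copy round in C. It assumes a hypothetical send_page function that pushes one page to the second hardware card over the network, and it treats any dirty bitmap (whether reported by the VMM or kept by the card for DMA writes) uniformly; none of these names come from the patent. In practice the first hardware card would run this round repeatedly, over both bitmaps, until the amount of newly dirtied memory per round is small enough for the final switchover.

```c
#include <stdint.h>
#include <stddef.h>

#define PAGE_SIZE   4096u
#define GUEST_PAGES (1u << 20)

/* Assumed transport helper: sends one guest page (number + contents)
 * to the second hardware card; returns 0 on success. */
extern int send_page(uint64_t pfn, const uint8_t *data);

/* One pre-copy round: walk a dirty bitmap, read each dirty page out of
 * the first BMS's memory, send it, and clear the bit. Returns the
 * number of pages sent so the caller can decide when the remaining
 * dirty set is small enough to stop the source and switch over. */
size_t migrate_dirty_pages(const uint8_t *guest_mem_base, uint8_t *bitmap)
{
    size_t sent = 0;

    for (uint64_t pfn = 0; pfn < GUEST_PAGES; pfn++) {
        if (!(bitmap[pfn / 8] & (1u << (pfn % 8))))
            continue;
        if (send_page(pfn, guest_mem_base + pfn * PAGE_SIZE) == 0) {
            bitmap[pfn / 8] &= (uint8_t)~(1u << (pfn % 8));
            sent++;
        }
    }
    return sent;
}
```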
In some possible designs, after the at least one first memory page and the at least one second memory page are sent to a second hardware card, the method includes the first hardware card obtaining a first I/O device state of an I/O device of the first bare metal server and obtaining a second I/O device state of the I/O device of the first hardware card, sending the first I/O device state and the second I/O device state to the second hardware card, the second hardware card setting the I/O device of the second hardware card according to the second I/O device state and sending the first I/O device state to the second bare metal server, so that the second bare metal server sets the I/O device of the second bare metal server according to the first I/O device state.
In the above scheme, the first hardware card may send the I/O device status to the second hardware card, so as to restore the I/O device status on the second hardware card.
In some possible designs, before the virtual machine manager records the first dirty page location information for the memory of the first bare metal server generated by the first bare metal server, the method further includes the virtual machine manager sending a full memory page of the first bare metal server to the first hardware card, the first hardware card sending the full memory page to the second hardware card, the second hardware card initializing the memory of the second bare metal server according to the full memory page.
In some possible designs, the method further comprises the step that the second hardware card receives the migration command, and the second hardware card mounts the network disk mounted by the first hardware card according to the migration command and notifies the second bare metal server to start a virtual machine manager in the second bare metal server.
In some possible designs, the method further includes the first hardware card sending network configuration information of the source BMS to the second hardware card, the second hardware card performing network configuration according to the network configuration information.
The network configuration information of the source BMS includes network-related information such as the IP address of the source BMS and a bandwidth package (indicating the uplink and downlink traffic rate-limit configuration of the source BMS).
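Purely as an illustration, the network configuration information might be carried in a small record like the following; the field names and the bandwidth-package representation are assumptions, since the patent only states that the IP address and a bandwidth package (uplink/downlink rate limits) are included.

```c
#include <stdint.h>

/* Hypothetical layout of the source BMS network configuration that the
 * first hardware card sends to the second hardware card. */
struct bms_net_config {
    uint32_t ipv4_addr;            /* IP address of the source BMS          */
    uint8_t  mac_addr[6];          /* MAC address, if carried along          */
    uint32_t uplink_limit_mbps;    /* bandwidth package: uplink rate limit   */
    uint32_t downlink_limit_mbps;  /* bandwidth package: downlink rate limit */
};
```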
In some possible designs, after the first hardware card sends the network configuration information of the source BMS to the second hardware card, the method further includes the first hardware card notifying a cloud management platform that the first bare metal server has been migrated.
In some possible designs, a shared memory is disposed within the first hardware card, the shared memory being accessible by a virtual machine manager of the first bare metal server.
In some possible designs, the first hardware card starts a virtual machine manager according to the migration command, including the first hardware card generating an interrupt signal according to the migration command, the first bare metal server receiving the interrupt signal, starting the virtual machine manager of the first bare metal server according to the interrupt signal.
In some possible designs, the interrupt signal is a system management interrupt (SMI) of an x86 processor, or the interrupt signal is a secure monitor call (SMC) or a secure interrupt of an Arm processor.
In a second aspect, a bare metal server online migration system is provided, including a first bare metal server, a first hardware card, a second bare metal server, and a second hardware card. The first hardware card is configured to receive a migration command for the first bare metal server, where the first hardware card is inserted in the first bare metal server, and is configured to notify the first bare metal server to start a virtual machine manager according to the migration command. The virtual machine manager records first memory dirty page location information, generated by the first bare metal server, for the memory of the first bare metal server, and sends the first memory dirty page location information to the first hardware card. The first hardware card is configured to migrate the memory dirty pages of the first bare metal server online to the second bare metal server according to the first memory dirty page location information.
In some possible designs, the first hardware card is configured to record second memory dirty page location information for a memory of the first bare metal server generated by the first hardware card.
In some possible designs, the first hardware card is configured to obtain, from the memory, at least one first memory page that has become dirty according to the first memory dirty page location information, obtain at least one second memory page that has become dirty according to the second memory dirty page location information, and send the at least one first memory page and the at least one second memory page to a second hardware card, where the second hardware card is connected to the first hardware card over a network; the second hardware card is configured to set the memory of the second bare metal server according to the at least one first memory page and the at least one second memory page, where the second hardware card is inserted in the second bare metal server.
In some possible designs, the first hardware card is configured to obtain a first I/O device state of an I/O device of the first bare metal server, obtain a second I/O device state of the I/O device of the first hardware card, send the first I/O device state and the second I/O device state to the second hardware card, and the second hardware card is configured to set the I/O device of the second hardware card according to the second I/O device state, and send the first I/O device state to the second bare metal server, so that the second bare metal server sets the I/O device of the second bare metal server according to the first I/O device state.
In some possible designs, the first bare metal server is configured to send a full memory page of the first bare metal server to the first hardware card, the first hardware card is configured to send the full memory page to the second hardware card, and the second hardware card is configured to initialize memory of the second bare metal server according to the full memory page.
In some possible designs, the second hardware card receives the migration command, mounts the network disk mounted on the first hardware card according to the migration command, and notifies the second bare metal server to start a virtual machine manager in the second bare metal server.
In some possible designs, the first hardware card is further configured to send the network configuration information of the source BMS to the second hardware card, and the second hardware card performs network configuration according to the network configuration information.
In some possible designs, the first hardware card is configured to notify the cloud management platform that the first bare metal server has been migrated.
In some possible designs, a shared memory is disposed within the first hardware card, the shared memory being accessible to a virtual machine manager of the first bare metal server.
In some possible designs, the first hardware card is configured to generate an interrupt signal according to the migration command, the first bare metal server is configured to receive the interrupt signal, and the virtual machine manager of the first bare metal server is started according to the interrupt signal.
In some possible designs, the interrupt signal is a system management interrupt (SMI) of an x86 processor, or the interrupt signal is a secure monitor call (SMC) or a secure interrupt of an Arm processor.
In a third aspect, a bare metal server system is provided, where the bare metal server system includes a first bare metal server and a first hardware card, where the first hardware card is configured to receive a migration command for the first bare metal server, where the first hardware card is configured to notify the first bare metal server to start a virtual machine manager according to the migration command, the virtual machine manager records first memory dirty page location information for a memory of the first bare metal server generated by the first bare metal server, and send the first memory dirty page location information to the first hardware card, where the first hardware card is configured to online migrate a memory dirty page of the first bare metal server to a second bare metal server according to the first memory dirty page location information.
In some possible designs, the first hardware card is configured to record second memory dirty page location information for a memory of the first bare metal server generated by the first hardware card.
In some possible designs, the first hardware card is configured to notify the cloud management platform that the first bare metal server has been migrated.
In some possible designs, a shared memory is disposed within the first hardware card, the shared memory being accessible to a virtual machine manager of the first bare metal server.
In a fourth aspect, a hardware card is provided, comprising a dynamic configuration module and an intelligent transfer module,
The dynamic configuration module is configured to receive a migration command for a first bare metal server, where the hardware card is inserted in the first bare metal server. The intelligent transfer module is configured to notify the first bare metal server to start a virtual machine manager according to the migration command; the virtual machine manager records first memory dirty page location information, generated by the first bare metal server, for the memory of the first bare metal server, and sends the first memory dirty page location information to the hardware card. The intelligent transfer module is configured to migrate the memory dirty pages of the first bare metal server online to a second bare metal server according to the first memory dirty page location information.
In a fifth aspect, there is provided a hardware card comprising a processor and a memory, the processor executing a program in the memory to perform the method of the first or second aspect.
In a sixth aspect, there is provided a computer readable storage medium comprising instructions which, when run on a first hardware card, cause the first hardware card to perform the method of the first or second aspect.
In a seventh aspect, there is provided a computer readable storage medium comprising instructions which, when run on a first hardware card, cause the first hardware card to perform the method of the first or second aspect.
Drawings
To describe the embodiments of the present application or the technical solutions in the background art more clearly, the drawings required for the embodiments of the present application or the background art are briefly described below.
Fig. 1 is a schematic structural diagram of a BMS online migration system according to the present application;
Fig. 2 is a schematic structural diagram of a BMS in a bare metal state according to an embodiment of the present application;
Fig. 3 is a schematic structural diagram of a BMS in a virtualized state according to an embodiment of the present application;
Fig. 4 is a schematic structural diagram of a BMS online migration system according to an embodiment of the present application;
Fig. 5 is an interaction flowchart of a method for disabling a first VMM by a source BMS according to the present application;
Fig. 6 is a schematic diagram of transferring online-migrated memory dirty pages between a source hardware card and a target hardware card according to the present application;
Fig. 7 is an interaction flowchart of a method for activating a first VMM by a source BMS according to the present application;
Fig. 8 is a flowchart of another method for activating a first VMM by a source BMS according to the present application;
Fig. 9 is a schematic structural diagram of a hardware card according to the present application;
Fig. 10 is a schematic structural diagram of a BMS according to the present application;
Fig. 11 is a schematic structural diagram of another BMS according to the present application.
Detailed Description
Explanation of terms:
Cloud management platform: the cloud management platform provides an access interface that lists the cloud services offered by the public cloud. A tenant can access the cloud management platform through a browser or another client and pay to purchase the corresponding cloud services. After the purchase, the cloud management platform grants the tenant permission to access the cloud services, so that the tenant can remotely access the cloud services and perform the corresponding configuration.
Public cloud: a public cloud refers to cloud services provided by a cloud provider for tenants. The tenants can access the cloud management platform through the Internet and purchase and use the cloud services provided by the public cloud through the cloud management platform. The core attribute of the public cloud is shared resource services. A public cloud can be implemented through the data center of the public cloud service provider, which contains multiple physical servers that provide the computing resources, network resources, and storage resources required by the public cloud services.
Bare metal server (BMS): a BMS is a computing service with the elasticity of a virtual machine and the performance of a physical machine. It provides a dedicated physical server on the cloud and offers excellent computing performance and data security for services such as core databases, key application systems, high-performance computing, and big data. The tenant pays for and purchases the right to use the BMS on the cloud management platform; the cloud management platform provides remote login to the BMS, the operating system required by the BMS is installed, applications are deployed in the operating system, and the tenant's specific services are provided by running the applications, based on the computing resources, network resources, and storage resources provided by the BMS.
Specifically, a BMS is a physical server in the data center of a public cloud service provider, into which a hardware card is inserted. The hardware card can perform control-plane data communication with the cloud management platform and communicate with the physical server, so that the cloud management platform can manage the physical server through the hardware card, for example installing an operating system for the physical server and enabling the remote login service of the physical server, so that the tenant can remotely log in to the physical server.
Because the management and control plane is implemented on the hardware card, the BMS does not need to handle management-and-control-plane work unrelated to the tenant's own business. The tenant can therefore devote the BMS entirely to running its own applications, which further protects the tenant's interests.
For example, the application may be web service software. After the web service software is installed on the BMS, the tenant uploads the web pages to be published to the external network onto the BMS. The BMS runs the web service software, which shares the tenant's web pages with the external network through an open port 80 or 8080; tenants on the external network can access the web pages on the BMS through a browser by visiting a domain name bound to the web service software. The web pages are stored in the storage space of the BMS, and the uplink and downlink network traffic involved in interacting with browsers or other devices accessing the BMS, as well as the CPU and memory required to run the web service software, are provided by the BMS. The tenant can obtain different computing, network, and storage resources by purchasing BMSs of different specifications on the cloud management platform. The application installed on the BMS may also be database software or any other application the tenant wants to configure, which is not limited in the embodiments of the present invention.
Memory dirty pages: memory dirty pages are memory pages in the source BMS that need to be synchronized to the target BMS to ensure that the memories of the source BMS and the target BMS are consistent.
Online migration: online migration is also known as live migration or hot migration. In the embodiments of the present invention, when a physical server serving as the source BMS needs a firmware upgrade, a restart, power-off maintenance, or encounters other conditions affecting application operation in the data center of a public cloud service provider, the cloud management platform selects another physical server in the data center, with the same specification as the source BMS, to serve as the target BMS, copies the memory pages of the source BMS to the target BMS, and mounts the network disk of the source BMS on the target BMS, so that the target BMS can run the applications of the source BMS.
Specifically, during online migration of memory pages, the memory pages of the source BMS are migrated to the target BMS in real time while the applications of the source BMS continue to run normally. To ensure that the BMS applications remain available during migration, the migration process has only a very short downtime. In the earlier stage of migration, the applications run on the source BMS. When memory page migration reaches a certain stage and the memory pages of the target BMS are completely consistent with those of the source BMS (or very nearly so, for example more than 99% identical), the cloud management platform transfers the tenant's control of the source BMS to the target BMS in a very short switchover (for example, within seconds), and the BMS services continue to run on the target BMS. Because the switchover time is very short, the tenant cannot perceive that the BMS has been replaced, so the migration process is transparent to the tenant. Online migration is therefore suitable for scenarios with high requirements on service continuity.
Virtual machine manager (VMM): a VMM is implemented through the operating system kernel and can manage and maintain the virtual machines created by the operating system.
The system management mode (SMM) and TrustZone concepts involved in this application are described in detail below.
SMM is an execution mode of an x86 processor that has the highest privilege level; therefore, various privileged instructions and input/output (I/O) operations can be performed in SMM. When the processor's SMM interrupt pin is activated or a system management interrupt (SMI) is received from the advanced programmable interrupt controller (APIC), the processor enters SMM. After entering SMM, the processor stops running the current operating system, saves the CPU register state of the current operating system into the secure system management RAM (SMRAM), masks other interrupts and exceptions, and executes the code specified by the SMI interrupt handler in SMRAM. SMM is transparent to the operating system: the operating system does not know when the processor enters SMM, what is done in SMM, or when the processor exits SMM. The SMI is an interrupt with a relatively high priority (for example, priority 3), so most other interrupts can be masked. After receiving the RSM instruction, the processor exits SMM: execution of the SMI-handler code in SMRAM stops, the CPU register state of the current operating system is read back from SMRAM and restored, and other interrupts and exceptions are re-enabled.
The EL3 mode is an execution mode of an Arm processor that has the highest privilege level; therefore, various privileged instructions and I/O operations can be performed in EL3 mode. When the processor receives a secure monitor call (SMC) or a secure interrupt, it enters EL3 mode. After entering EL3 mode, the processor stops running the current operating system, saves the CPU register state of the current operating system into a secure memory region (SMR), masks other interrupts and exceptions, and executes the code specified by the SMC exception handling logic in the SMR. After receiving the ERET instruction, the processor exits EL3 mode: execution of the SMC exception-handler code in the SMR stops, the CPU register state of the current operating system is read back from the SMR and restored, and other interrupts and exceptions are re-enabled. EL3 mode is transparent to the operating system: the operating system does not know when the processor enters EL3 mode, what is done in EL3 mode, or when the processor exits EL3 mode.
Referring to fig. 1, fig. 1 is a schematic structural diagram of a BMS online migration system provided by the present application. As shown in fig. 1, the online migration system of the present application includes a cloud management platform 110 and a plurality of BMS systems, which may include BMS system 120-1, BMS system 120-2, BMS system 120-3, and BMS system 120-4. The BMS system 120-1 may include a BMS 121-1 and a hardware card 122-1; the BMS system 120-2 may include a BMS 121-2 and a hardware card 122-2; the BMS system 120-3 may include a BMS 121-3 and a hardware card 122-3; and the BMS system 120-4 may include a BMS 121-4 and a hardware card 122-4. The cloud management platform 110 may connect to each hardware card through a network, and each hardware card may connect to its BMS through a preset interface, for example a peripheral component interconnect express (PCIe) interface. Different hardware cards may communicate with each other via a network. The online migration system may be disposed in the data center of a public cloud service provider.
The cloud management platform 110 is used to manage a plurality of BMSs.
A BMS is an independent physical server. The tenant purchases the BMS on the cloud management platform 110, and the cloud management platform 110 sends management information, according to the purchase information, to the hardware card inserted in the purchased BMS. The hardware card sets up the BMS according to the management information, for example installing the corresponding operating system according to the tenant's needs and enabling the remote login server, so that the tenant can remotely log in to the BMS and obtain full, exclusive use of the BMS's physical resources (including computing resources, network resources, and storage resources).
In the embodiment of the invention, before and after online migration, the operating system of the BMS can be switched between a bare metal state and a virtualized state through the control of the hardware card.
The left part of fig. 2 shows a schematic structure of the BMS in the bare metal state. The BMS in the bare metal state may include a software layer and a hardware layer: the software layer includes a guest operating system, and the hardware layer includes hardware such as a first processor 210, a first memory 220, and a root complex (RC) chip 230, which are shown for illustration. In other embodiments, the number of processors 210 and the number of virtual machines may be greater or smaller.
The hardware layer may include one or more first processors 210, a first memory 220, and an RC chip 230. The first processor 210 may be a central processing unit (CPU). The CPU may use a complex instruction set computing (CISC) architecture (for example, the x86 architecture), a reduced instruction set computing (RISC) architecture (for example, the MIPS architecture), and so on.
The first memory 220 may store code for an operating system, a virtual machine monitor (VMM), and the like. The guest operating system may be a system that the tenant installs itself. The VMM is optional: the VMM does not run when online migration is not being performed, and runs only during online migration. The RC chip 230 includes one or more PCIe interfaces for connecting the subsystem formed by the first processor 210 and the first memory 220 to the hardware card. The RC chip may be implemented as a separate device or may be integrated in the first processor 210. The first processor 210 may also include a digital signal processor (DSP), a graphics processing unit (GPU), a neural-network processing unit (NPU), and the like. When the first processor 210 contains multiple processors, they may have a homogeneous or heterogeneous structure; common heterogeneous structures are CPU+DSP, CPU+NPU, CPU+GPU, CPU+DSP+GPU, and the like.
The software layer includes the guest operating system. It will be appreciated that the VMM does not need to run before or after the live migration is performed, which effectively reduces the BMS's consumption of resources.
The hardware card may be an application-specific integrated circuit (ASIC) board, a field-programmable gate array (FPGA) board, or the like. As shown on the right in fig. 2 and fig. 3, the hardware card may include one or more second processors 311, an endpoint (EP) chip 313, and a network card 314.
The hardware card includes one or more second processors 311; for example, the second processor 311 may be a digital signal processor (DSP), a central processing unit (CPU), and/or a neural-network processing unit (NPU), among others. The processing power of the second processor 311 may be weaker than that of the first processor 210.
The EP chip is a hardware interface defined in the PCIe specification, and is responsible for sending PCIe messages to the BMS or may also receive PCIe messages sent by the BMS, where the EP chip is a peripheral interface of the hardware card.
The embodiment of the application does not limit the specific implementation of the RC chip and the EP chip, and any RC chip and EP chip which are implemented only by following PCIe specifications can be used.
It should be noted that, the hardware card may also be connected to the network disk through the network card, so that the hardware card forwards the IO request issued in the BMS to the network disk for processing.
The software layer of the hardware card includes an I/O processing module, an intelligent transfer module, and a dynamic configuration module. The I/O processing module can implement device state saving and restoring, and memory dirty page tracking for data written to the first memory 220 by the hardware card. The RC chip of the BMS and the EP chip of the hardware card can be connected through a PCIe interface. The functions of the I/O processing module, the intelligent transfer module, and the dynamic configuration module are described in detail later and are not repeated here.
In the bare metal state, the BMS has its guest operating system running directly on hardware.
The first processor 210 may include one or more physical cores (a physical core is sometimes referred to herein simply as a core). A "physical core" represents the smallest processing unit in this application. Each first processor 210 in this embodiment has two physical cores, core 0 and core 1, and a plurality of registers. The registers are high-speed memory elements of limited capacity that can temporarily store OS state data such as instructions, data, and addresses, for example an instruction register (IR), a program counter (PC), and an accumulator (ACC). In other embodiments, the number of cores included in the first processor 210 may be greater or smaller, and different first processors 210 may include different numbers of cores. The first memory 220 is used to hold instructions or data that the first processor 210 has just used or uses repeatedly, like a cache: if the first processor 210 needs to reuse the instructions or data, it can fetch them directly from the first memory 220, reducing the first processor 210's waiting time and thereby improving system efficiency. The first memory 220 may store code for an operating system, a virtual machine monitor (VMM), and the like; the guest operating system may be a system that the tenant installs itself, and the VMM is optional, running only during online migration. The RC chip 230 includes one or more PCIe interfaces for connecting the subsystem formed by the first processor 210 and the first memory 220 to the hardware card, and may be implemented as a separate device or integrated in the first processor 210. The first processor 210 may also include a digital signal processor (DSP), a graphics processing unit (GPU), a neural-network processing unit (NPU), and the like.
Referring now to fig. 3, the left part of fig. 3 is a schematic structural diagram of a BMS in the virtualized state. Compared with fig. 2, a VMM is added to the software part of the BMS in fig. 3; the VMM is equivalent to a hypervisor or another type of virtual monitoring device in other virtualization architectures. The VMM may be deployed inside the guest operating system or separately from it. The VMM is responsible for managing the virtual machines (of any number) running on it. As the virtual monitoring device, the VMM is responsible for scheduling the virtual processors of the individual virtual machines; for example, a kernel-based virtual machine (KVM) is a typical VMM. The scheduling of virtual processors by the VMM includes swapping virtual processors in and out. First, the VMM creates and initializes an object for a virtual machine, and then creates, for example, three virtual processors for that virtual machine. When a virtual machine includes multiple virtual processors, there is typically one master virtual processor, and the others are slave virtual processors. A virtual processor that has just been created is not yet associated with a physical core. The VMM schedules a virtual processor onto a physical core according to a policy, which is referred to as the virtual processor being swapped in, and suspends the virtual processor or migrates it away from the physical core, which is referred to as the virtual processor being swapped out. In a core-binding scenario, a virtual processor is scheduled onto the same core each time it is swapped in; in a non-binding scenario, the VMM may decide which core to schedule the virtual processor onto, before scheduling, based on the current operating conditions of the system and/or a scheduling algorithm.
It should be noted that a virtual processor may not start running immediately after being swapped in. Before the swapped-in virtual processor starts running, the host (specifically, the VMM) may also perform some configuration for it, after which the virtual processor enters guest mode.
Further, the VMM has a dirty page tracking function. Specifically, after the VMM is started, the guest operating system in the software layer accesses the hardware through the VMM, which is equivalent to the guest operating system being managed by the VMM as the operating system of a virtual machine. When an application in the guest operating system runs, it may cause the first processor 210 to write data into the first memory 220; the VMM can monitor this write action and record the address of the written memory page in the first memory 220, thereby implementing dirty page tracking.
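Dirty page tracking of this kind is commonly implemented by write-protecting guest memory and setting a bit on each write fault. The sketch below shows that idea in outline only; the write_protect_all_guest_pages and fault-handler hooks are hypothetical stand-ins for whatever mechanism (page-table write protection or hardware-assisted dirty logging) the VMM actually uses, and are not taken from the patent.

```c
#include <stdint.h>
#include <stdbool.h>

#define GUEST_PAGES (1u << 20)

static uint8_t vmm_dirty_bitmap[GUEST_PAGES / 8];
static bool    dirty_tracking_on;

/* Assumed VMM primitives: mark every guest page read-only in the
 * second-stage page tables so the next write traps into the VMM,
 * and re-allow the write once it has been recorded. */
extern void write_protect_all_guest_pages(void);
extern void allow_write_and_resume(uint64_t pfn);

void vmm_start_dirty_tracking(void)
{
    dirty_tracking_on = true;
    write_protect_all_guest_pages();
}

/* Called by the VMM when the guest (the first operating system) takes a
 * write-protection fault on guest page 'pfn': record the page as dirty,
 * then let the write proceed. */
void vmm_handle_write_fault(uint64_t pfn)
{
    if (dirty_tracking_on && pfn < GUEST_PAGES)
        vmm_dirty_bitmap[pfn / 8] |= (uint8_t)(1u << (pfn % 8));
    allow_write_and_resume(pfn);
}
```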
To see the connection relationships among the source BMS 121-1, the source hardware card 122-1, the target hardware card 122-4, and the target BMS 121-4 more clearly, refer to fig. 4. Fig. 4 shows that the RC chip of the source BMS 121-1 may be connected to the EP chip of the source hardware card 122-1 through a PCIe interface, and the network card of the source hardware card 122-1 may be connected to the network card of the target hardware card 122-4 through a network. The EP chip of the target hardware card 122-4 is connected to the RC chip of the target BMS 121-4 through PCIe. The network card of the source hardware card 122-1 may also be connected to a storage, which may in turn be connected to the network card of the target hardware card 122-4. With the assistance of the source hardware card 122-1 and the target hardware card 122-4, the above BMS online migration system can implement online migration of the source BMS 121-1 to the target BMS 121-4. Here, online migration includes BMS online migration, online migration of storage resources, and online migration of the network environment.
Online migration between the source BMS and the target BMS must be assisted by the first VMM of the source BMS and the second VMM of the target BMS. However, the first VMM and the second VMM need to be in a non-working state at ordinary times and are activated into the working state only when online migration is performed, so as to reduce the consumption of resources of the source BMS and the target BMS. After the source BMS starts, the first VMM needs to be started for initialization, but it cannot remain in the working state all the time; therefore, the first VMM then needs to be disabled. When online migration is required, the first VMM is activated and enters the working state again. The method shown in fig. 5 below is the method by which the source BMS disables the first VMM.
Referring to fig. 5, fig. 5 is an interaction flow chart of a method for disabling a first VMM by a source BMS according to the present application. As shown in fig. 5, the method for disabling the first VMM by the source BMS includes:
S101: After the source BMS is powered on, it supplies power to the source hardware card, and the BIOS of the source BMS starts.
In a specific embodiment of the present application, the BIOS is preconfigured to boot the source hardware card. After the BIOS of the source BMS starts, it boots each program into the working state in a preset order. During this boot process, if the BIOS successfully boots the first program into the working state, it continues to boot the next program, and so on, until the last program has been booted into the working state. If a program does not successfully enter the working state when the BIOS boots it, the BIOS keeps waiting until that program enters the working state or an error is reported. Therefore, when the BIOS of the source BMS boots the source hardware card, the BIOS enters a waiting process until the configuration of the source hardware card is completed and a start flag is transmitted to the source BMS.
S102: While the BIOS of the source BMS is waiting, the source hardware card performs resource allocation and sets up a first shared memory in the second memory of the source hardware card.
In a specific embodiment of the present application, the first shared memory is disposed in the second memory of the source hardware card and can be accessed by the first operating system in the source BMS and the first VMM in the source BMS. The first shared memory may be used to store the first VMM of the source BMS.
In a specific embodiment of the present application, the resource allocation performed by the source hardware card may include: initializing the hardware (masking all interrupts, disabling the processor's internal instruction/data cache, and so on), preparing RAM space (for example, reading program code into RAM and setting up the first shared memory), setting up the stack, initializing the hardware devices to be used at this stage, detecting the memory map of the first operating system, reading the kernel image and root file system image of the first operating system from flash into RAM, setting boot parameters for the kernel of the first operating system, and invoking the kernel of the first operating system.
In a specific embodiment of the present application, after the resource allocation is completed, the dynamic configuration module in the source hardware card generates a start flag and sends it to the BIOS of the source BMS through the first shared memory or a hardware register, to notify the BIOS that the source hardware card has started, so that the BIOS can continue the boot process.
It should be understood that in the foregoing example the first shared memory is set up in the second memory of the source hardware card. In other embodiments, a second shared memory may also be set up in the network disk, where the second shared memory can likewise be accessed by the first operating system in the source BMS and the first VMM in the source BMS. For bare metal server online migration, only the first shared memory, only the second shared memory, or both may be set up.
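The boot-stage steps listed in S102, and the start-flag hand-off, could be organized roughly as follows. Every function called here is a stand-in for a card-firmware routine that the patent does not name, and the shared-memory size is illustrative; the sketch only mirrors the order of steps described above.

```c
#include <stdint.h>

/* Assumed card-firmware primitives, one per step listed in S102. */
extern void mask_all_interrupts(void);
extern void disable_cpu_caches(void);
extern void *reserve_shared_memory(uint64_t bytes);      /* first shared memory */
extern void setup_stack(void);
extern void init_boot_devices(void);
extern void load_os_kernel_and_rootfs_from_flash(void);
extern void set_kernel_boot_params(void);
extern void start_os_kernel(void);
extern void write_start_flag_for_bms(void *shared_mem);  /* or via a hardware register */

void source_card_boot(void)
{
    mask_all_interrupts();
    disable_cpu_caches();

    void *shared_mem = reserve_shared_memory(64ull << 20); /* size is illustrative */

    setup_stack();
    init_boot_devices();
    load_os_kernel_and_rootfs_from_flash();
    set_kernel_boot_params();
    start_os_kernel();

    /* Tell the waiting BIOS of the source BMS that the card is up. */
    write_start_flag_for_bms(shared_mem);
}
```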
S103: After the source BMS receives the start flag, it confirms that the source hardware card has started normally, and the BIOS boots the first VMM of the source BMS.
In a specific embodiment of the present application, the BIOS also presets that the first VMM needs to be booted. The first VMM may be stored in a first memory of the source BMS.
S104, the first VMM of the source BMS completes initialization of the I/O device and saves the state of the I/O device into the first shared memory.
In a specific embodiment of the present application, the states of the I/O devices may be stored in the registers of the I/O devices. After the first VMM completes initialization of the I/O devices, it can read the I/O device states out of the registers and save them into the first shared memory of the source hardware card, so that when the first VMM is re-enabled, the I/O device states can be fetched from the first shared memory and restored, thereby restoring the working state of the first VMM. It can be appreciated that storing the I/O device states in the first shared memory of the source hardware card does not occupy the storage resources of the source BMS, which reduces the consumption of the source BMS's resources.
In a specific embodiment of the present application, the state of the I/O device is saved to the first shared memory, so that the state of the I/O device may be accessed by the first operating system or by the first VMM.
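One possible shape for the saved I/O device state is sketched below. The record layout and the register-access helpers are assumptions used only to illustrate saving the state into the first shared memory and restoring it when the first VMM is re-enabled; the patent does not specify this layout.

```c
#include <stdint.h>

#define MAX_IO_DEVICES 16
#define MAX_DEV_REGS   32

/* Hypothetical per-device snapshot stored in the first shared memory. */
struct io_device_state {
    uint32_t dev_id;
    uint32_t reg_count;
    uint32_t regs[MAX_DEV_REGS];   /* snapshot of the device's registers */
};

/* Assumed device-register access helpers. */
extern uint32_t read_dev_reg(uint32_t dev_id, uint32_t reg);
extern void     write_dev_reg(uint32_t dev_id, uint32_t reg, uint32_t val);

void save_io_state(struct io_device_state *shared_mem,
                   const uint32_t *dev_ids, uint32_t ndev, uint32_t nregs)
{
    for (uint32_t i = 0; i < ndev && i < MAX_IO_DEVICES; i++) {
        shared_mem[i].dev_id = dev_ids[i];
        shared_mem[i].reg_count = nregs;
        for (uint32_t r = 0; r < nregs && r < MAX_DEV_REGS; r++)
            shared_mem[i].regs[r] = read_dev_reg(dev_ids[i], r);
    }
}

void restore_io_state(const struct io_device_state *shared_mem, uint32_t ndev)
{
    for (uint32_t i = 0; i < ndev && i < MAX_IO_DEVICES; i++)
        for (uint32_t r = 0; r < shared_mem[i].reg_count && r < MAX_DEV_REGS; r++)
            write_dev_reg(shared_mem[i].dev_id, r, shared_mem[i].regs[r]);
}
```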
S105: The source BMS reads the CPU register state of the source BMS from the hardware-assisted virtualization module and stores the CPU register state into the first shared memory.
S106: The source BMS adjusts the page table entries of the first VMM and the first operating system kernel through a switching program.
In a specific embodiment of the present application, the switching program is responsible for switching between the first VMM and the first operating system. When switching from the first VMM to the first operating system is needed, the first VMM calls the switching program; when switching from the first operating system to the first VMM is needed, the first operating system calls the switching program. In a specific embodiment, the switching program may be pre-stored in the first shared memory, thereby reducing the consumption of the source BMS's resources. Here, the source BMS adjusts the page table entries of the first VMM and the first operating system kernel through the switching program so that illegal memory accesses after switching can be avoided.
S107: The source BMS loads the CPU register state into the first processor of the source BMS through the switching program, thereby disabling the first VMM.
In a specific embodiment of the present application, after the source BMS loads the CPU register state into the first processor of the source BMS, the first VMM stops running and the first operating system enters the bare metal operating state.
It should be noted that the above method is also applicable to other BMS (including the target BMS) in the data center, and the embodiments of the present invention will not be repeated.
Referring to fig. 6, fig. 6 is a schematic diagram of transferring online-migrated memory dirty pages between a source hardware card and a target hardware card according to the present application. Specifically, in the initial state, the source hardware card has a network disk mounted (shown in fig. 4) and provides the network disk for the source BMS to use. After the tenant logs in to the source BMS remotely, the tenant may store data in the network disk. It is worth noting that the network disk may itself be a cloud service: the tenant may purchase the network disk on the cloud management platform and mount it in the source BMS.
Specifically, the migration method of the embodiment of the invention comprises the following steps:
S201: The cloud management platform sends migration commands to the source hardware card and the target hardware card respectively. Correspondingly, the source hardware card and the target hardware card each receive the migration command sent by the cloud management platform.
In a specific embodiment of the present application, the migration command is used to instruct the source BMS to online migrate the dirty memory page to the target BMS. The migration command may include, among other things, an IP address of the source BMS, a MAC address of the source BMS, an IP address of the target BMS, a MAC address of the target BMS, or other address information that identifies the source BMS and the target BMS, etc.
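Purely as an illustration, a migration command carrying the address information listed above might look like the record below; the exact fields and their encoding are not specified by the patent.

```c
#include <stdint.h>

/* Hypothetical migration command sent by the cloud management platform
 * to both the source hardware card and the target hardware card. */
struct migration_command {
    uint32_t source_bms_ip;       /* IP address of the source BMS  */
    uint8_t  source_bms_mac[6];   /* MAC address of the source BMS */
    uint32_t target_bms_ip;       /* IP address of the target BMS  */
    uint8_t  target_bms_mac[6];   /* MAC address of the target BMS */
};
```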
In a specific embodiment of the present application, the migration command is issued in the case where the migration condition is satisfied.
The migration condition is, for example, a case that the source BMS needs to perform firmware upgrade, restart, power failure maintenance or other conditions affecting the normal operation of the source BMS, and the cloud management platform may obtain the above case in advance, and send a migration command to the source hardware card and the target hardware card after selecting a target BMS suitable as a migration target in the data center according to the above case.
S202: The source hardware card notifies the source BMS to enable the first VMM, and the target hardware card notifies the target BMS to enable the second VMM.
After the source hardware card notifies the source BMS to enable the first VMM, the source BMS activates the first VMM. During the activation of the first VMM, the source BMS saves the first I/O device state into the first shared memory. The process by which the source BMS activates the first VMM is described with reference to fig. 7 and fig. 8 below.
S203: The source BMS sends the full memory pages to the target BMS through the source hardware card and the target hardware card.
In a specific embodiment of the present application, the first VMM of the source BMS first sends the full memory pages to the source hardware card. Accordingly, the source hardware card receives the full memory pages sent by the source BMS. The source hardware card sends the full memory pages to the target hardware card. Accordingly, the target hardware card receives the full memory pages sent by the source hardware card. The target hardware card sends the full memory pages to the second VMM of the target BMS. Accordingly, the second VMM of the target BMS receives the full memory pages sent by the target hardware card.
The second VMM of the target BMS sets the memory of the target BMS according to the full memory pages, so that the memory of the target BMS is consistent with the memory of the source BMS.
In general, after the target BMS sets the full memory, the purpose of memory page migration is achieved. In the embodiment of the present invention, however, the network resources and storage resources of the target BMS must also be guaranteed to be the same as those of the source BMS. Therefore, after the full memory of the source BMS has been set in the target BMS and before the network resources and storage resources are migrated from the source BMS to the target BMS, a tenant may still access the source BMS: the operating system of the source BMS may continue to perform write operations on the first memory, thereby generating memory dirty pages, and the source hardware card may also perform direct memory access write operations on the first memory, likewise generating memory dirty pages.
The following steps are intended to enable the source hardware card to acquire the memory dirty pages generated in these two cases and send them to the target hardware card, which forwards them to the second VMM of the target BMS. The second VMM updates the full memory according to the memory dirty pages, so that the memory dirty pages generated by the source BMS before the network resources and storage resources are migrated are synchronized to the target BMS.
S204, the target hardware card mounts the network disk which is being used by the source hardware card.
In a specific embodiment of the present application, after the target hardware card mounts the network disk being used by the source hardware card, the target hardware card and the source hardware card share the network disk. At this time, the target BMS may access the network disk, and the target hardware card provides the network disk for the target BMS to use.
Through this step, the migration of the storage resources can be completed.
It is worth noting that, in a subsequent step, after the migration of the network resources is completed, the source hardware card stops mounting the network disk, so that the target hardware card can use the network disk exclusively after the migration is completed, thereby ensuring the data security of the tenant.
S205, the first VMM of the source BMS enables a dirty page tracking function to track the dirty pages generated by the first operating system in the first memory of the source BMS, so as to generate first memory dirty page position information for the first memory of the source BMS.
That the first operating system generates dirty pages in the first memory of the source BMS specifically means that, when the first processor of the source BMS runs the first operating system, data write operations are performed on the first memory, so that data in some memory pages is modified; the first VMM may record which memory pages are modified in this case.
It should be noted that, in the embodiment of the present invention, the memory dirty page location information may be, for example, a memory dirty page bitmap. The memory dirty page bitmap may identify the memory pages of the operating system of the source BMS by 0 and 1: the bitmap value is 1 when a memory page has been written with data, and the bitmap value is 0 when the memory page has not been written with data. The memory dirty page bitmap may record memory page numbers and record 0 or 1 for each memory page number.
Alternatively, the memory dirty page bitmap may mark 0 or 1 for a plurality of consecutive memory pages in sequence, so as to obtain a binary string consisting of 0s and 1s.
The memory dirty page location information may also be implemented in other manners, as long as it can be determined from the memory dirty page location information which memory pages in the source BMS have been modified. A minimal sketch of the bitmap form is given below.
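As an illustration of the bitmap form described above, the following is a minimal sketch in C. The page count, the one-bit-per-page layout, and the function names are assumptions made for this example rather than the actual data structures of the source BMS.

```c
/* Minimal sketch of a memory dirty page bitmap: one bit per memory page,
 * where bit i == 1 means page i has been written since tracking started.
 * Names and sizes are illustrative assumptions, not the patent's layout. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define NUM_PAGES 1024  /* assumed number of tracked memory pages */

static uint8_t dirty_bitmap[NUM_PAGES / 8];

/* Called by the tracking logic whenever a write to page `pfn` is observed. */
static void mark_dirty(uint64_t pfn) {
    dirty_bitmap[pfn / 8] |= (uint8_t)(1u << (pfn % 8));
}

static int is_dirty(uint64_t pfn) {
    return (dirty_bitmap[pfn / 8] >> (pfn % 8)) & 1u;
}

int main(void) {
    memset(dirty_bitmap, 0, sizeof(dirty_bitmap));
    mark_dirty(3);
    mark_dirty(700);
    printf("page 3 dirty: %d, page 4 dirty: %d, page 700 dirty: %d\n",
           is_dirty(3), is_dirty(4), is_dirty(700));
    return 0;
}
```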
S206, the I/O processing module of the source hardware card enables a dirty page tracking function to track the dirty pages generated by the I/O device in the source hardware card, so as to generate second memory dirty page position information in the source hardware card.
The I/O device of the source hardware card refers to a device that the source hardware card provides to the source BMS, for example, a Virtual Function (VF) provided to the source BMS in the network card 314 of the source hardware card in fig. 3.
In other examples, the source hardware card may connect external input devices such as a mouse and keyboard and provide the external input devices to the source BMS, which may also be referred to as I/O devices of the source hardware card.
In the embodiment of the present invention, the I/O device of the source hardware card may write data into the first memory of the source BMS by means of direct memory access (Direct Memory Access, DMA). In this case, because the first memory is written through DMA by the source hardware card acting as an external device, the first VMM cannot observe these writes. Therefore, the source hardware card is required to start the dirty page tracking function to track the dirty pages generated by the source hardware card itself.
S207, the intelligent transfer module of the source hardware card acquires the first memory dirty page position information from the source BMS, and acquires the memory dirty page generated by the first operating system from the source BMS according to the first memory dirty page position information.
In a specific embodiment of the present application, the intelligent transfer module of the source hardware card obtains at least one first memory page that generates a dirty page from the memory according to the first memory dirty page location information.
In a specific embodiment of the present application, the first memory dirty page location information may be mapped by the source BMS to the source hardware card (for example, into a base address register (BAR) space of an I/O device) through a memory mapping mechanism, or may be stored in the first memory of the source BMS. Correspondingly, the intelligent transfer module of the source hardware card may read the first memory dirty page location information and then acquire the memory dirty pages generated by the first operating system from the first memory of the source BMS through DMA transfer. Alternatively, the intelligent transfer module of the source hardware card may read the first memory dirty page location information from the first memory of the source BMS through DMA transfer, and acquire the memory dirty pages generated by the first operating system from the first memory of the source BMS through DMA transfer. A sketch of this bitmap scan is given below.
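The following sketch illustrates how a transfer module might walk such a bitmap and copy out each dirty page. The dma_read_page() helper is a hypothetical stand-in for the hardware card's DMA engine, not a real API, and all names and sizes here are assumptions made for illustration.

```c
/* Sketch of scanning a dirty page bitmap and copying each dirty page out of
 * the source BMS memory. dma_read_page() is a hypothetical placeholder for
 * the hardware card's DMA engine. Sizes and names are illustrative. */
#include <stdint.h>
#include <string.h>

#define PAGE_SIZE 4096
#define NUM_PAGES 1024

/* Hypothetical DMA primitive: copy one guest page into a local buffer. */
static void dma_read_page(uint64_t pfn, uint8_t *dst) {
    (void)pfn;
    memset(dst, 0, PAGE_SIZE); /* placeholder for a real DMA transfer */
}

/* Collect every dirty page listed in the bitmap into `out`; returns count. */
static size_t collect_dirty_pages(const uint8_t *bitmap,
                                  uint8_t out[][PAGE_SIZE], size_t max_out) {
    size_t n = 0;
    for (uint64_t pfn = 0; pfn < NUM_PAGES && n < max_out; pfn++) {
        if ((bitmap[pfn / 8] >> (pfn % 8)) & 1u) {
            dma_read_page(pfn, out[n]);
            n++;
        }
    }
    return n;
}

static uint8_t bitmap[NUM_PAGES / 8];
static uint8_t pages[8][PAGE_SIZE];

int main(void) {
    bitmap[0] = 0x05; /* pages 0 and 2 are marked dirty */
    size_t n = collect_dirty_pages(bitmap, pages, 8);
    return (int)n == 2 ? 0 : 1;
}
```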
S208, the intelligent transfer module of the source hardware card acquires the second memory dirty page position information, and acquires the memory dirty page generated by the I/O equipment according to the second memory dirty page position information.
In a specific embodiment of the present application, the intelligent transfer module of the source hardware card obtains, from the first memory according to the second memory dirty page position information, at least one second memory page in which the I/O device of the source hardware card generated a dirty page.
S209, the source hardware card sends the memory dirty pages generated by the first operating system and the memory dirty pages generated by the I/O device to the target hardware card.
In a specific embodiment of the present application, the intelligent transfer module of the source hardware card sends at least one first memory page and at least one second memory page to the target hardware card.
S210, the target hardware card restores the memory dirty pages generated by the first operating system to the memory of the target BMS.
S211, the target hardware card restores the memory dirty pages generated by the I/O equipment to the target hardware card.
In a specific embodiment of the present application, the target hardware card sets the memory of the target BMS according to the at least one first memory page and the at least one second memory page.
S212, the intelligent transfer module of the source hardware card determines whether the shutdown criterion has been reached. If the shutdown criterion is not met, the procedure returns to step S207; if the shutdown criterion is met, notification 1 is sent to the source hardware card.
In a specific embodiment of the present application, the shutdown criterion is that the total amount of data of the memory dirty pages generated by the first operating system in the source BMS and the memory dirty pages generated by the I/O device in the source hardware card is smaller than the capacity of the current network bandwidth.
If the data amount of the memory dirty pages generated by the first operating system in the source BMS and the memory dirty pages generated by the I/O device in the source hardware card is greater than or equal to the capacity of the current network bandwidth, the source hardware card cannot transmit all of the memory dirty pages to the target hardware card through the current network at one time, and the source BMS does not reach the shutdown criterion. In other words, after the source hardware card acquires these memory dirty pages for the first time, the source BMS may continue to generate new memory dirty pages before it is shut down. The source hardware card therefore needs to repeatedly acquire the new memory dirty pages generated by the first operating system in the source BMS and the new memory dirty pages generated by the I/O device in the source hardware card, and send them to the target hardware card, until the data amount of the new memory dirty pages generated by the first operating system in the source BMS and by the I/O device in the source hardware card is smaller than the current network bandwidth, that is, until the shutdown criterion is reached. A sketch of this iterative pre-copy loop is given after this paragraph.
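The following is a minimal sketch of such an iterative pre-copy loop, under the assumption that the shutdown criterion is simply that the data dirtied in one round is smaller than what the current bandwidth can carry in one round. The helper functions simulate the trackers for demonstration and are not the actual interfaces of the hardware card.

```c
/* Sketch of the iterative pre-copy loop (steps S207-S212). The trackers are
 * simulated so the program converges; real code would read the dirty page
 * bitmaps of the first VMM and of the hardware card's I/O processing module. */
#include <stdio.h>
#include <stddef.h>

static size_t remaining = 1 << 20;           /* 1 MiB of dirty data initially */

/* Simulated steps S207-S208: return how many bytes were dirtied this round. */
static size_t collect_round(void) {
    size_t dirtied = remaining;
    remaining /= 4;                          /* pretend the workload quiets down */
    return dirtied;
}

static void send_to_target_card(size_t bytes) {        /* step S209 */
    printf("sent %zu dirty bytes to target hardware card\n", bytes);
}

static void notify_source_bms_to_stop(void) {           /* "notification 1" path */
    printf("shutdown criterion reached: sending notification 1\n");
}

/* Repeat S207-S209 until the data dirtied in a round is smaller than what
 * one round of the current bandwidth can carry (the shutdown criterion). */
static void precopy_until_shutdown(size_t bandwidth_bytes_per_round) {
    for (;;) {
        size_t dirty_bytes = collect_round();
        send_to_target_card(dirty_bytes);
        if (dirty_bytes < bandwidth_bytes_per_round) {
            notify_source_bms_to_stop();
            return;
        }
    }
}

int main(void) {
    precopy_until_shutdown(64 * 1024);       /* assume 64 KiB per round */
    return 0;
}
```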
The source BMS sends notification 1 to the source hardware card through a channel physically connecting the source BMS and the source hardware card (for example, a PCIE channel). Notification 1 is used to notify the source hardware card that the source BMS can be stopped. The source hardware card then obtains the state of the I/O device of the source BMS (i.e., the first I/O device state) and the state of the I/O device of the source hardware card (i.e., the second I/O device state).
S213, the intelligent transfer module of the source hardware card acquires the first I/O device state in the first shared memory and acquires the second I/O device state in the source hardware card.
Specifically, after it is confirmed in step S212 that the shutdown criterion has been reached, the state of the I/O device of the source BMS is recorded in the first shared memory, and the intelligent transfer module of the source hardware card acquires the first I/O device state from the first shared memory.
The I/O device of the source BMS is, for example, a driver that the operating system of the source BMS calls through the first processor to read and write data in the first memory, and the first VMM of the source BMS may record the state of this I/O device.
S214, the intelligent transfer module of the source hardware card sends the first I/O device state and the second I/O device state to the target hardware card. Accordingly, the target hardware card receives the first I/O device state and the second I/O device state sent by the intelligent transfer module of the source hardware card.
S215, the target hardware card restores the first I/O device state to the I/O device of the target BMS.
Specifically, the target hardware card sends the first I/O device state to a second VMM of the target BMS, which sets the first I/O device state into an I/O device of the target BMS.
The I/O device of the target BMS may be, for example, a driver that the operating system of the target BMS calls through the third processor (shown in fig. 4) to read and write data in the second memory (shown in fig. 4). The second VMM of the target BMS may set the state of the I/O device of the target BMS to the first I/O device state, so that the state of the I/O device of the target BMS is the same as the state of the I/O device of the source BMS.
S216, the target hardware card restores the second I/O device state to the target hardware card.
Specifically, the target hardware card sets the state of its own I/O device to the second I/O device state such that the state of the I/O device of the target hardware card is the same as the state of the I/O device of the source hardware card.
The I/O device of the target hardware card is, for example, a VF provided by the target hardware card to the target BMS. The VF of the target hardware card is set to the second I/O device state, so that the device state of the VF of the target hardware card is the same as that of the VF of the source hardware card.
S217, the source BMS disables the first VMM so that the source BMS operates in the bare metal state. Likewise, the target BMS disables the second VMM and sends notification 3 to the target hardware card, where notification 3 is used to notify the target hardware card that the target BMS has switched to the bare metal state.
S218, the source hardware card stops mounting the network disk.
It will be understood that the data to be migrated includes the data in the network disk (i.e., the storage resource). Therefore, at the beginning of the migration, the network disk is first shared with the target hardware card (step S204), so that the source hardware card and the target hardware card share the network disk, and at the end of the migration, the source hardware card stops mounting the network disk.
S219, the source hardware card sends network configuration information to the target hardware card. Accordingly, the target hardware card receives the network configuration information sent by the source hardware card.
The network configuration information includes information such as the IP address of the source BMS and the bandwidth package configuration, and more specifically, information related to the network resources that the source hardware card provides for the source BMS.
S220, the source hardware card notifies the cloud management platform that migration of the source BMS is complete.
S221, the target hardware card carries out network configuration on the target hardware card according to the network configuration information.
In this step, the target hardware card configures the tenant-related network to be consistent with that of the source hardware card and applies these network configurations to the target BMS. For example, the target hardware card records the IP address and bandwidth package configuration of the source BMS as those of the target BMS.
In this step, the network resources of the source BMS are transferred to the target hardware card through the processing of the source hardware card and the target hardware card.
S222, the target hardware card informs the cloud management platform that the target BMS is ready.
In this step, after the storage resources, computing resources, and network resources of the source BMS have been transferred to the target BMS, the target hardware card notifies the cloud management platform that the target BMS is ready. At this time, when the tenant remotely logs in to the source BMS according to the IP address of the source BMS, the tenant actually logs in to the target BMS. The switchover is embodied in steps S219-S221, which relate to the network configuration information; that is, during steps S219-S221 the tenant may temporarily be unable to log in to either the source BMS or the target BMS. However, the interruption caused by these steps can be kept within seconds and is generally imperceptible to the tenant, so the migration can be performed without the tenant's awareness, and the tenant experience is ensured while the BMS is migrated.
In summary, the embodiment of the present invention can implement migration of a BMS that is imperceptible to the tenant, so that the tenant experience can be greatly improved.
The method shown in fig. 7 below is a method for activating the first VMM of the source BMS when the first processor in the source BMS is an x86 processor, and the method shown in fig. 8 below is a method for activating the first VMM of the source BMS when the first processor in the source BMS is an ARM processor. Referring to fig. 7, fig. 7 is an interaction flowchart of a method for activating the first VMM by the source BMS according to the present application. As shown in fig. 7, the method for activating the first VMM by the source BMS includes:
S301, the dynamic configuration module of the source hardware card sends an SMI interrupt signal to the source BMS. Accordingly, the source BMS receives the SMI interrupt signal sent by the dynamic configuration module of the source hardware card.
In a specific embodiment of the present application, after the source BMS receives the SMI interrupt sent by the dynamic configuration module of the source hardware card, the first processor of the source BMS enters SMM mode. For details, see the description of SMM above, which is not repeated here.
S302, the source BMS executes the SMI interrupt handler to save the CPU register state of the source BMS into the SMRAM, and then saves the CPU register state into the first shared memory.
In a specific embodiment of the present application, the first shared memory may be disposed in a first memory of the source BMS, where the first shared memory may be accessed by a first operating system in the source BMS and a first VMM in the source BMS.
S303, the source BMS loads the first VMM into the first processor of the source BMS.
S304, the first VMM of the source BMS saves the CPU register state into the hardware-assisted virtualization module.
In a specific embodiment of the present application, the hardware-assisted virtualization module may be a virtual machine control structure (VMCS), a virtual machine control block (VMCB), or the like.
S305, the first VMM of the source BMS executes a hardware-assisted virtualization instruction to complete the activation of the first VMM.
In a specific embodiment of the present application, after the first VMM is activated, the first operating system in the source BMS runs on top of a virtual machine, and both CPU virtualization and memory virtualization functions are enabled. Meanwhile, to avoid I/O performance degradation of the source BMS, the I/O devices are presented to the virtual machine in pass-through mode. A rough sketch of the register-state hand-off through the first shared memory is given below.
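As a rough illustration of the register-state hand-off in steps S302-S304, the following sketch shows an SMI-handler-side save into the first shared memory and a VMM-side load. The struct layout and function names are assumptions made for this example; a real implementation would use the architecture-defined SMRAM save-state area and a VMCS/VMCB rather than a plain C struct.

```c
/* Sketch of the CPU register-state hand-off through the first shared memory.
 * Layout and names are illustrative assumptions only. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct cpu_register_state {
    uint64_t gpr[16];        /* general-purpose registers */
    uint64_t rip;            /* instruction pointer at the time of the SMI */
    uint64_t rflags;
    uint64_t cr0, cr3, cr4;  /* control registers */
};

/* First shared memory: accessible to both the first OS and the first VMM. */
static struct cpu_register_state first_shared_memory;

/* Step S302: the SMI handler copies the saved state into shared memory. */
static void smi_handler_save_state(const struct cpu_register_state *smram_state) {
    memcpy(&first_shared_memory, smram_state, sizeof(first_shared_memory));
}

/* Step S304: the first VMM reads the state back before programming the
 * hardware-assisted virtualization module (e.g., a VMCS). */
static void vmm_load_guest_state(struct cpu_register_state *out) {
    memcpy(out, &first_shared_memory, sizeof(*out));
}

int main(void) {
    struct cpu_register_state saved = { .rip = 0x1000, .rflags = 0x2 };
    struct cpu_register_state restored;
    smi_handler_save_state(&saved);
    vmm_load_guest_state(&restored);
    printf("restored rip = 0x%llx\n", (unsigned long long)restored.rip);
    return 0;
}
```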
Referring to fig. 8, fig. 8 is an interaction flow chart of another method for activating the first VMM by the source BMS provided in the present application. As shown in fig. 8, the method for activating the first VMM by the source BMS includes:
S401, the dynamic configuration module of the source hardware card sends an SMC exception to the source BMS. Accordingly, the source BMS receives the SMC exception sent by the dynamic configuration module of the source hardware card.
In a specific embodiment of the present application, after the source BMS receives the SMC exception sent by the dynamic configuration module of the source hardware card, the first processor of the source BMS enters EL3 mode. For details about the EL3 mode, see the description above, which is not repeated here.
S402, the source BMS executes an SMC exception handler to save the CPU register state in the source BMS into the SMR and then save the CPU register state into the first shared memory.
In a specific embodiment of the present application, the first shared memory may be disposed in a first memory of the source BMS, where the first shared memory may be accessed by a first operating system in the source BMS and a first VMM in the source BMS.
S403, the source BMS loads the first VMM into the first processor of the source BMS.
S404, the first VMM of the source BMS saves the CPU register state into the hardware-assisted virtualization module.
S405, the first VMM of the source BMS executes a hardware-assisted virtualization instruction to complete the activation of the first VMM.
In a specific embodiment of the present application, after the first VMM is activated, the first operating system in the source BMS runs on top of a virtual machine, and both CPU virtualization and memory virtualization functions are enabled. Meanwhile, to avoid I/O performance degradation of the source BMS, the I/O devices are presented to the virtual machine in pass-through mode.
Referring to fig. 9, fig. 9 is a schematic structural diagram of a hardware card according to the present application. As shown in FIG. 9, the hardware card of the present application includes a dynamic configuration module 410, an intelligent transfer module 420, and an I/O processing module 430.
The dynamic configuration module 410 is configured to receive a migration command for a first bare metal server, where the first hardware card is inserted into the first bare metal server;
The intelligent transfer module 420 is configured to notify the first bare metal server to start a virtual machine manager in the first bare metal server according to the migration command, where the virtual machine manager records first memory dirty page location information generated by the first bare metal server for a memory of the first bare metal server, and sends the first memory dirty page location information to the first hardware card;
the intelligent transfer module 420 is configured to online transfer the memory dirty page of the first bare metal server to a second bare metal server according to the first memory dirty page location information.
Optionally, the I/O processing module 430 is configured to obtain a first I/O device state of an I/O device of the first bare metal server, obtain a second I/O device state of an I/O device of the first hardware card, and send the first I/O device state and the second I/O device state to the second hardware card.
For brevity, the hardware card is not described in detail here; see fig. 2 and fig. 3 and the related description. In addition, each module in the hardware card may perform the steps performed by the corresponding modules in fig. 5 to 8; refer to fig. 5 to 8 and the related description, which are not repeated here.
The embodiment of the application provides a BMS system. The BMS system of the present embodiment includes a BMS and a hardware card, wherein the hardware card may be inserted on the BMS.
As shown in fig. 10, the BMS includes one or more processors 510, a communication interface 520, and a memory 530. Among them, the processor 510, the communication interface 520, and the memory 530 may be connected by a bus 540. The bus may be a PCIE bus or other high-speed bus.
Processor 510 includes one or more general-purpose processors, which may be any type of device capable of processing electronic instructions, including a central processing unit (CPU), a microprocessor, a microcontroller, a main processor, a controller, an application-specific integrated circuit (ASIC), and the like. The processor 510 executes various types of digitally stored instructions, such as software or firmware programs stored in the memory 530, which enable the BMS to provide a wide variety of services. For example, the processor 510 can execute programs or process data to perform at least a portion of the methods discussed herein.
The communication interface 520 may be a wired interface (e.g., an Ethernet interface) for communicating with clients. When the communication interface 520 is a wired interface, the communication interface 520 may employ a family of protocols over TCP/IP, such as the RAAS protocol, the remote function call (RFC) protocol, the simple object access protocol (SOAP), the simple network management protocol (SNMP), the common object request broker architecture (CORBA) protocol, a distributed protocol, and so forth.
The memory 530 may include volatile memory, such as random access memory (RAM), and non-volatile memory, such as read-only memory (ROM), flash memory, a hard disk drive (HDD), or a solid-state drive (SSD), and may include combinations of the above. The memory may be used to store a guest operating system and a VMM.
It will be appreciated that the above-described BMS may be used to perform steps performed by the source BMS or the target BMS as in fig. 5 to 8, with specific reference to fig. 5 to 8 and the related description.
As shown in fig. 11, the hardware card includes one or more processors 610, a communication interface 620, and a memory 630. Among them, the processor 610, the communication interface 620, and the memory 630 may be connected by a bus 640.
The processor 610 includes one or more general-purpose processors, which may be any type of device capable of processing electronic instructions, including a central processing unit (CPU), a microprocessor, a microcontroller, a main processor, a controller, an application-specific integrated circuit (ASIC), and the like. The processor 610 executes various types of digitally stored instructions, such as software or firmware programs stored in the memory 630, which enable the hardware card to provide a wide variety of services. For example, the processor 610 can execute programs or process data to perform at least a portion of the methods discussed herein.
The communication interface 620 may be a wired interface (e.g., an Ethernet interface) for communicating with a server or user. When the communication interface 620 is a wired interface, the communication interface 620 may employ a family of protocols over TCP/IP, such as the RAAS protocol, the remote function call (RFC) protocol, the simple object access protocol (SOAP), the simple network management protocol (SNMP), the common object request broker architecture (CORBA) protocol, a distributed protocol, and so forth.
The memory 630 may include volatile memory, such as random access memory (RAM), and non-volatile memory, such as read-only memory (ROM), flash memory, a hard disk drive (HDD), or a solid-state drive (SSD), and may include combinations of the above. The memory 630 may be used to store the dynamic configuration module, the intelligent transfer module, and the I/O processing module.
It will be appreciated that the hardware card described above may be used to perform the steps performed by the source hardware card or the target hardware card in fig. 5 to 8; refer specifically to fig. 5 to 8 and the related description.
In the above solution, after receiving the migration command, the source hardware card notifies the source bare metal server to start the virtual machine manager, which records the first memory dirty page location information, generated by the source bare metal server, for the memory of the source bare metal server, so that the memory dirty pages of the source bare metal server are migrated to the target bare metal server online. Online migration of a BMS can thus be achieved, and because the work of migrating the memory dirty pages online according to the first memory dirty page location information is borne by the source hardware card, the burden on the source bare metal server can be effectively reduced.
All or some of the foregoing embodiments may be implemented by software, hardware, firmware, or any combination thereof. When software is used for implementation, all or some of the embodiments may be implemented in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the procedures or functions according to the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired (e.g., coaxial cable, optical fiber, digital subscriber line) or wireless (e.g., infrared, radio, microwave) manner. The computer-readable storage medium may be any usable medium accessible to a computer, or a data storage device, such as a server or a data center, integrating one or more usable media. The usable medium may be a magnetic medium (e.g., a floppy disk, a storage disk, or a magnetic tape), an optical medium (e.g., a DVD), a semiconductor medium (e.g., a solid-state disk (SSD)), or the like.

Claims (18)

1. A bare metal server online migration method, the method comprising:
A first hardware card receives a migration command for a first bare metal server, wherein the first hardware card is inserted into the first bare metal server;
the first hardware card informs the first bare metal server to start a virtual machine manager in the first bare metal server according to the migration command, the virtual machine manager records first memory dirty page position information, which is generated by the first bare metal server and aims at a memory of the first bare metal server, and sends the first memory dirty page position information to the first hardware card;
After the first hardware card receives an online migration command for a first bare metal server, the method comprises:
The first hardware card records second memory dirty page position information which is generated by the first hardware card and aims at a memory of the first bare metal server;
The first hardware card online transfers the memory dirty page of the first bare metal server to a second bare metal server according to the first memory dirty page position information, and the method comprises the following steps:
The first hardware card acquires at least one first memory page generating a dirty page from the memory according to the first memory dirty page position information, acquires at least one second memory page generating the dirty page from the memory according to the second memory dirty page position information, and sends the at least one first memory page and the at least one second memory page to a second hardware card, wherein the second hardware card is connected with the first hardware card through a network;
The second hardware card sets the memory of a second bare metal server according to the at least one first memory page and the at least one second memory page, wherein the second hardware card is inserted into the second bare metal server.
2. The method of claim 1, wherein after sending the at least one first memory page and the at least one second memory page to a second hardware card, the method comprises:
The first hardware card acquires a first I/O device state of the I/O device of the first bare metal server, acquires a second I/O device state of the I/O device of the first hardware card, and sends the first I/O device state and the second I/O device state to the second hardware card;
The second hardware card sets the I/O device of the second hardware card according to the second I/O device state, and sends the first I/O device state to the second bare metal server, so that the second bare metal server sets the I/O device of the second bare metal server according to the first I/O device state.
3. The method of claim 1 or 2, wherein before the virtual machine manager records the first memory dirty page location information for the memory of the first bare metal server generated by the first bare metal server, the method further comprises:
the virtual machine manager sends the full memory pages of the first bare metal server to the first hardware card;
the first hardware card sends the full memory pages to the second hardware card;
and initializing the memory of the second bare metal server by the second hardware card according to the full memory page.
4. The method according to claim 1 or 2, wherein the method further comprises:
The second hardware card receives the migration command;
And the second hardware card mounts the network disk mounted by the first hardware card according to the migration command and notifies the second bare metal server to start a virtual machine manager in the second bare metal server.
5. The method according to claim 1 or 2, wherein the method further comprises:
The first hardware card sends network configuration information of the first bare metal server to the second hardware card;
and the second hardware card performs network configuration according to the network configuration information.
6. The method of claim 5, wherein after the first hardware card sends the network configuration information of the first bare metal server to the second hardware card, the method further comprises:
and the first hardware card informs the cloud management platform that the first bare metal server is completely migrated.
7. The method according to claim 1 or 2, wherein a shared memory is provided in the first hardware card, the shared memory being accessed by a virtual machine manager of the first bare metal server.
8. The method according to claim 1 or 2, wherein the first hardware card starts a virtual machine manager according to the migration command, comprising:
the first hardware card generates an interrupt signal according to the migration command,
And the first bare metal server receives the interrupt signal and starts a virtual machine manager of the first bare metal server according to the interrupt signal.
9. The method of claim 8, wherein
The interrupt signal is a system management interrupt of the x86 processor, or the interrupt signal is a secure monitor call (SMC) or a secure interrupt of the Arm processor.
10. The bare metal server online migration system is characterized by comprising a first bare metal server, a first hardware card, a second bare metal server and a second hardware card,
The first hardware card is used for receiving a migration command for a first bare metal server, wherein the first hardware card is inserted into the first bare metal server;
the first hardware card is used for notifying the first bare metal server to start a virtual machine manager according to the migration command, and the virtual machine manager records first memory dirty page position information, which is generated by the first bare metal server and aims at a memory of the first bare metal server, and sends the first memory dirty page position information to the first hardware card;
The first hardware card is used for online migration of the memory dirty pages of the first bare metal server to a second bare metal server according to the first memory dirty page position information;
the first hardware card is further used for recording second memory dirty page position information, which is generated by the first hardware card and aims at a memory of the first bare metal server;
The first hardware card is further configured to obtain at least one first memory page that generates a dirty page from the memory according to the first memory dirty page location information, obtain at least one second memory page that generates a dirty page from the memory according to the second memory dirty page location information, and send the at least one first memory page and the at least one second memory page to a second hardware card, where the second hardware card is connected to the first hardware card through a network;
The second hardware card is further configured to set a memory of a second bare metal server according to the at least one first memory page and the at least one second memory page, where the second hardware card is inserted into the second bare metal server.
11. The system of claim 10, wherein
The first hardware card is used for acquiring a first I/O equipment state of the I/O equipment of the first bare metal server, acquiring a second I/O equipment state of the I/O equipment of the first hardware card, and sending the first I/O equipment state and the second I/O equipment state to the second hardware card;
The second hardware card is configured to set an I/O device of the second hardware card according to the second I/O device state, and send the first I/O device state to the second bare metal server, so that the second bare metal server sets the I/O device of the second bare metal server according to the first I/O device state.
12. The system according to claim 10 or 11, wherein,
The first bare metal server is used for sending the total memory pages of the first bare metal server to the first hardware card;
the first hardware card is used for sending the full memory pages to the second hardware card;
and the second hardware card is used for initializing the memory of the second bare metal server according to the full memory page.
13. The system according to claim 10 or 11, wherein,
The second hardware card receives the migration command;
And the second hardware card mounts the network disk mounted by the first hardware card according to the migration command and notifies the second bare metal server to start a virtual machine manager in the second bare metal server.
14. The system according to claim 10 or 11, wherein,
The first hardware card sends network configuration information of the first bare metal server to the second hardware card;
and the second hardware card performs network configuration according to the network configuration information.
15. The system according to claim 10 or 11, wherein,
The first hardware card is used for notifying a cloud management platform that the first bare metal server has been completely migrated.
16. The system of claim 10 or 11, wherein a shared memory is disposed within the first hardware card, the shared memory being accessible to a virtual machine manager of the first bare metal server.
17. The system according to claim 10 or 11, wherein,
The first hardware card is used for generating an interrupt signal according to the migration command,
The first bare metal server is used for receiving the interrupt signal and starting a virtual machine manager of the first bare metal server according to the interrupt signal.
18. The system of claim 17, wherein
The interrupt signal is a system management interrupt of the x86 processor, or the interrupt signal is a secure monitor call (SMC) or a secure interrupt of the Arm processor.
CN202011337002.1A 2020-08-29 2020-11-25 Bare metal server online migration method and system Active CN114115703B (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
EP21859684.9A EP4195021A4 (en) 2020-08-29 2021-05-11 Online migration method and system for bare metal server
PCT/CN2021/092962 WO2022041839A1 (en) 2020-08-29 2021-05-11 Online migration method and system for bare metal server
US18/175,853 US20230214245A1 (en) 2020-08-29 2023-02-28 Online Migration Method and System for Bare Metal Server

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2020108908744 2020-08-29
CN202010890874 2020-08-29

Publications (2)

Publication Number Publication Date
CN114115703A CN114115703A (en) 2022-03-01
CN114115703B true CN114115703B (en) 2025-08-22

Family

ID=80360745

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011337002.1A Active CN114115703B (en) 2020-08-29 2020-11-25 Bare metal server online migration method and system

Country Status (1)

Country Link
CN (1) CN114115703B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114697191A (en) * 2022-03-29 2022-07-01 浪潮云信息技术股份公司 Resource migration method, device, equipment and storage medium
CN115269116A (en) * 2022-07-18 2022-11-01 天翼云科技有限公司 A cloud host hot migration method, medium and electronic device
CN119166395A (en) * 2023-06-19 2024-12-20 平头哥(上海)半导体技术有限公司 Weak memory order risk detection method, device, electronic device and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103530167A (en) * 2013-09-30 2014-01-22 华为技术有限公司 Virtual machine memory data migration method and relevant device and cluster system
CN110879741A (en) * 2018-09-06 2020-03-13 阿里巴巴集团控股有限公司 Virtual machine live migration method and device, storage medium and processor

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5951111B2 (en) * 2012-11-09 2016-07-13 株式会社日立製作所 Management computer, computer system, and instance management method
US9565129B2 (en) * 2014-09-30 2017-02-07 International Business Machines Corporation Resource provisioning planning for enterprise migration and automated application discovery
US10996968B2 (en) * 2014-11-24 2021-05-04 Intel Corporation Support for application transparent, high available GPU computing with VM checkpointing
US9619270B2 (en) * 2015-06-27 2017-04-11 Vmware, Inc. Remote-direct-memory-access-based virtual machine live migration
US10162612B2 (en) * 2016-01-04 2018-12-25 Syntel, Inc. Method and apparatus for inventory analysis
CN107515775B (en) * 2016-06-15 2021-11-19 华为技术有限公司 Data transmission method and device
CN110532208B (en) * 2019-07-12 2021-05-28 优刻得科技股份有限公司 Data processing method, interface conversion structure and data processing equipment
CN110532065A (en) * 2019-09-02 2019-12-03 广州市品高软件股份有限公司 Deployment method and apparatus for a bare metal server

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103530167A (en) * 2013-09-30 2014-01-22 华为技术有限公司 Virtual machine memory data migration method and relevant device and cluster system
CN110879741A (en) * 2018-09-06 2020-03-13 阿里巴巴集团控股有限公司 Virtual machine live migration method and device, storage medium and processor

Also Published As

Publication number Publication date
CN114115703A (en) 2022-03-01

Similar Documents

Publication Publication Date Title
US9519795B2 (en) Interconnect partition binding API, allocation and management of application-specific partitions
EP4050477B1 (en) Virtual machine migration techniques
JP5018252B2 (en) How to change device allocation
JP5305848B2 (en) Method and data processing system for managing input/output (I/O) virtualization within a data processing system and computer program product
JP3887314B2 (en) Methods and apparatus for powering down a logical partition in a data processing system and / or rebooting a logical partition
US12405816B2 (en) Virtual machine live migration method and communications device
US10592434B2 (en) Hypervisor-enforced self encrypting memory in computing fabric
US20090260007A1 (en) Provisioning Storage-Optimized Virtual Machines Within a Virtual Desktop Environment
CN114115703B (en) Bare metal server online migration method and system
US10635499B2 (en) Multifunction option virtualization for single root I/O virtualization
US20090125901A1 (en) Providing virtualization of a server management controller
JP6111181B2 (en) Computer control method and computer
JP2009145931A (en) Migration method between virtual computer and physical computer and computer system thereof
TW200817920A (en) Method, apparatus, and computer usable program code for migrating virtual adapters from source physical adapters to destination physical adapters
US20230214245A1 (en) Online Migration Method and System for Bare Metal Server
WO2013024510A2 (en) Storage control apparatus
CN114741233A (en) Quick Start Method
WO2023125482A1 (en) Cluster management method and device, and computing system
CN116069584A (en) Extending monitoring services into trusted cloud operator domains
US20250138864A1 (en) Cloud Computing Technology-Based Server and Cloud System
US20240241728A1 (en) Host and dpu coordination for dpu maintenance events
US20240345857A1 (en) Hypervisor-assisted scalable distributed systems
CN111522692B (en) Multi-operating-system input and output equipment redundancy guarantee system based on virtual machine
JP2004005113A (en) Virtual computer system operated on a plurality of actual computers, and control method thereof
WO2025192724A1 (en) System and migration method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant