
CN110688193A - Disk processing method and device

Publication number: CN110688193A; application number: CN201810724079.0A; granted publication: CN110688193B
Authority: CN (China)
Other languages: Chinese (zh)
Inventors: 田世坤, 吴东, 张渊
Current assignee: Alibaba Cloud Computing Ltd; original assignee: Alibaba Group Holding Ltd (application filed by Alibaba Group Holding Ltd)
Prior art keywords: disk, computing node, processed, node, computing
Legal status: Granted; Active

Classifications

    • G06F9/45558 - Hypervisor-specific management and integration aspects (under G06F9/455, Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines)
    • G06F9/5011 - Allocation of resources, e.g. of the central processing unit [CPU], to service a request, the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F9/5077 - Logical partitioning of resources; Management or configuration of virtualized resources

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application discloses a disk processing method and a disk processing device. The disk processing method comprises the following steps: detecting whether a disk in a computing cluster is occupied by virtual machines running on at least two computing nodes, and if so, taking the disk as a disk to be processed; determining a target computing node and a source computing node of the at least two computing nodes; and closing the occupation of the disk to be processed by the source computing node. The method enables the virtual machine to be quickly recovered on the target computing node, and prevents the source computing node from affecting the disk now occupied by the target computing node.

Description

Disk processing method and device
Technical Field
The application relates to the technical field of cloud computing, in particular to a disk processing method. The application also relates to a disk processing device and an electronic device.
Background
In a cloud computing environment, the computing resources of a data center are divided into a large number of virtual machines through virtualization technology, and users flexibly deploy their own applications, such as Web, social, gaming and financial applications, in these virtual machines. Important user data is often stored by these applications, so they require good data read/write performance, stable operation, uninterrupted 7x24 service and sufficiently high availability. These applications also require sufficiently high data reliability: the data must have multiple redundant backups so that the crash of a single server does not affect use, which is why distributed storage needs to be accessed as the virtual machine disk.
A data center is internally composed of a large number of clusters. To improve the sales rate and reduce resource contention, the clusters are divided into computing clusters and storage clusters: a virtualization platform is deployed on each server (computing node) of a computing cluster and user virtual machines run on these computing nodes, while a storage cluster is deployed with distributed storage and serves as back-end storage that provides data storage services for the virtual machines on the computing nodes.
At present, distributed storage adopts a timeout Session mechanism: the Master node periodically renews the Session of each client and issues data versions; when a disk is opened, the Master checks whether the previous Session has timed out and been released, and refuses to open the disk if it has not; the Server checks the data version to decide whether an I/O request may be committed. However, in existing distributed storage, releasing a Session takes a period of time, so the Sessions of the residual virtual machines must be waited out before the new virtual machine can run normally; the disk cannot be preempted immediately and fast recovery cannot be achieved.
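A minimal sketch of the timeout Session check described above, with an illustrative lease timeout (the names and the duration are assumptions, not the existing system's implementation):

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

const leaseTimeout = 30 * time.Second // illustrative value, not from the patent

var errSessionHeld = errors.New("previous session has not timed out yet")

// sessionLease is assumed state kept by the Master per disk Session.
type sessionLease struct {
	holder      string    // client that currently holds the Session
	lastRenewal time.Time // updated on each periodic Session renewal
}

// tryOpen models the timeout-based check: a new client may open the disk
// only after the previous Session has expired, which is why recovery has to
// wait and the disk cannot be preempted immediately.
func (l *sessionLease) tryOpen(client string, now time.Time) error {
	if l.holder != "" && now.Sub(l.lastRenewal) < leaseTimeout {
		return errSessionHeld
	}
	l.holder = client
	l.lastRenewal = now
	return nil
}

func main() {
	lease := &sessionLease{holder: "old-vm", lastRenewal: time.Now()}
	fmt.Println(lease.tryOpen("recovered-vm", time.Now()))                     // refused: session still held
	fmt.Println(lease.tryOpen("recovered-vm", time.Now().Add(2*leaseTimeout))) // nil: allowed after timeout
}
```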
Disclosure of Invention
The application provides a disk processing method to overcome the defects in the prior art. The application also relates to a disk processing device and an electronic device.
The application provides a disk processing method, which comprises the following steps:
detecting whether a disk in a computing cluster is occupied by virtual machines running on at least two computing nodes, and if so, taking the disk as a disk to be processed;
determining a target compute node and a source compute node of the at least two compute nodes;
and closing the occupation of the source computing node on the disk to be processed.
Optionally, the disk processing method further includes:
and adding the link address corresponding to the source computing node into a link address blacklist for forbidding accessing the disk to be processed.
Optionally, the manner in which the virtual machine occupies the disk includes read-write open and/or read-only open; the disk can be occupied in the read-write open mode by only one computing node in the computing cluster at a time, and can be occupied in the read-only open mode by at least one computing node in the computing cluster.
Optionally, the determining a target computing node and a source computing node in the at least two computing nodes includes:
judging whether at least one abnormal virtual machine with abnormal state exists in the virtual machines running on the computing nodes, if so, taking the computing node where the abnormal virtual machine is located as the source computing node, and taking the computing nodes except the source computing node in the at least two computing nodes as the target computing node.
Optionally, the disk processing method further includes:
and after detecting that the computing node in the computing cluster has a fault, restoring the virtual machine running on the faulty computing node in the computing cluster on a normal computing node except the faulty computing node in the computing cluster.
Optionally, the determining a target computing node and a source computing node in the at least two computing nodes includes:
and taking the failed computing node as the source computing node, and taking computing nodes except the source computing node in the at least two computing nodes as the target computing node.
Optionally, the closing of the occupation of the to-be-processed disk by the source computing node includes:
and occupying the disk to be processed on the source computing node on the target computing node in a read-write open mode.
Optionally, a disk of a virtual machine running on the computing node is configured with a disk management table corresponding to the disk, and when the disk is opened each time, a corresponding record entry is inserted into the disk management table corresponding to the disk;
the record entry records history information of the opened disk, identification information and a link address of a computing node of the opened disk.
Optionally, the closing of the occupation of the to-be-processed disk by the source computing node includes:
searching a record entry corresponding to the disk to be processed in the disk management table;
and judging whether the identification information recorded in the searched record entry is empty, if not, judging whether the link address of the computing node recorded in the record entry is the same as the link address of the source computing node, and if so, deleting the record entry from the disk management table.
Optionally, the closing of the occupation of the to-be-processed disk by the source computing node includes:
judging whether the occupation of the source computing node on the disk to be processed is read-write open or not, if so, searching the record items of the identification information and the link address of the computing node matched with the source computing node in the disk management table;
and deleting the searched record entry from the disk management table.
Optionally, the closing of the occupation of the to-be-processed disk by the source computing node includes:
judging whether the occupation of the source computing node on the disk to be processed is read-only open or not, if so, searching the record items of the identification information and the link address of the computing node matched with the source computing node in the disk management table;
screening the record items which occupy the disk to be processed and are read-only opened from the searched record items;
and deleting the screened record entries from the disk management table.
Optionally, the closing of the occupation of the to-be-processed disk by the source computing node includes:
judging whether the occupation of the source computing node on the disk to be processed is read-write open or not, if so, searching the record items of the identification information and the link address of the computing node matched with the source computing node in the disk management table;
deleting the searched record items from the disk management table;
if not, judging whether the occupation of the source computing node on the disk to be processed is read-only open or not, and if the occupation is read-only open, searching the identification information and the record item matched with the link address of the computing node and the source computing node in the disk management table;
screening the record items which occupy the disk to be processed and are read-only opened from the searched record items;
and deleting the screened record entries from the disk management table.
Optionally, the disk processing method is implemented based on a virtual machine live migration scenario, and before the migration operation is executed, a disk of a virtual machine running on the source computing node is occupied in a read-write open manner;
after the virtual machine running on the source computing node is migrated to the target computing node, occupying a disk of the virtual machine in a read-only open mode;
and after the migration operation is executed, changing the occupation of the target computing node on the disk of the virtual machine from the read-only open mode to read-write open mode, wherein correspondingly, the disk to be processed refers to the disk occupied by the source computing node in the read-write open mode.
The present application also provides a disk processing apparatus, including:
the disk detection unit is used for detecting whether a disk in the computing cluster is occupied by a virtual machine running on at least two computing nodes, and if so, the disk to be processed determining unit is run;
the to-be-processed disk determining unit is used for taking the disk as a to-be-processed disk;
a computing node distinguishing unit for determining a target computing node and a source computing node of the at least two computing nodes;
and the disk occupation closing unit is used for closing the occupation of the source computing node on the disk to be processed.
The present application further provides an electronic device, comprising:
a memory and a processor; the memory is to store computer-executable instructions, and the processor is to execute the computer-executable instructions to:
detecting whether a disk in a computing cluster is occupied by virtual machines running on at least two computing nodes, and if so, taking the disk as a disk to be processed;
determining a target compute node and a source compute node of the at least two compute nodes;
and closing the occupation of the source computing node on the disk to be processed.
The disk processing method provided by the application comprises the following steps: detecting whether a disk in a computing cluster is occupied by virtual machines running on at least two computing nodes, and if so, taking the disk as a disk to be processed; determining a target compute node and a source compute node of the at least two compute nodes; and closing the occupation of the source computing node on the disk to be processed.
In the disk processing method, in the process of processing the disks in the computing cluster, a disk occupied by virtual machines running on a plurality of computing nodes in the computing cluster is detected; among the computing nodes occupying that disk, the source computing node whose occupation needs to be removed and the normal target computing node are then determined; and finally the occupation of the disk to be processed by the source computing node is closed, so that the virtual machine is quickly recovered on the target computing node and the source computing node is prevented from affecting the disk now occupied by the target computing node.
Drawings
FIG. 1 is a flow chart of a disk processing method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a computing cluster provided herein;
FIG. 3 is a schematic diagram of an embodiment of a disk handling device provided herein;
fig. 4 is a schematic diagram of an electronic device provided by the present application.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application. However, the application can be implemented in many ways other than those described herein, and those skilled in the art can make similar generalizations without departing from the spirit of the application; the application is therefore not limited to the specific implementations disclosed below.
The application provides a disk processing method, a disk processing device and electronic equipment. The following detailed description and the description of the steps of the method are individually made with reference to the drawings of the embodiments provided in the present application.
The embodiment of the disk processing method provided by the application is as follows:
referring to fig. 1, a processing flow diagram of an embodiment of a disk processing method provided by the present application is shown, and referring to fig. 2, a schematic diagram of a computing cluster provided by the present application is shown.
Step S101, detecting whether a disk in a computing cluster is occupied by a virtual machine running on at least two computing nodes.
In a cloud computing environment, for a virtual machine (VM) running on a computing node in a computing cluster to run normally, the I/O requests of its disks must be submitted to the storage cluster normally. If the computing node where the virtual machine is located fails, for example due to network disconnection or machine downtime, the virtual machine enters an abnormal state and no longer provides service to the outside. If the behavior of the failed computing node is not controlled, the residual disk objects of the virtual machine may still submit I/O requests to the storage cluster, possibly writing user data out of order or reading illegal data, causing data corruption or data loss. Therefore, to keep the virtual machine highly available, it needs to be recovered on a new, normal computing node, and it must be ensured that the virtual machine originally running on the failed computing node does not affect the virtual machine recovered on the new normal computing node, does not occupy system resources, and does not write disordered disk data.
The disk processing method provided by the application can quickly recover, on a new normal computing node, the virtual machine that was running on the failed computing node, and ensures that the recovered virtual machine is not affected by the disk left behind on the failed computing node: even if the behavior of the residual disk on the failed computing node is not controlled, that disk can no longer occupy system resources, re-occupy the disk, or submit I/O requests to the storage cluster. In addition, the method allows abnormal computing nodes to be flexibly designated so that failed computing nodes can be isolated, and the residual disk can be removed in different designated ways.
The embodiment of the application does not limit the virtualization platform used to build and carry the computing clusters and storage clusters of the data center; it may be Xen, KVM, Docker or another virtualization platform. Based on the virtualization platform, a plurality of virtual machines can be virtualized on one computing node, and the applications deployed by users in the virtual machines (such as websites, games and databases) read and store data on the disks of the virtual machine. A virtual machine has at least one system disk, which stores the operating system, and possibly a plurality of data disks, which store its service data, as shown in fig. 2.
An I/O request of each disk passes through the front-end driver in the virtual machine and then reaches the back-end driver through the virtualization platform. The back-end driver forwards the I/O request to a disk access module (Client), and the Client submits the request to the distributed block storage system in the storage cluster. The distributed block storage system comprises a group of highly available Master nodes and a plurality of Server nodes that process the I/O stream: the Master nodes are responsible for handling creation/deletion, opening/closing and load/unload of disks and for managing the key information in the control flow; the Server nodes are responsible for receiving the I/O requests of the disk access module (Client), assisting in checking that objects are legal, and finally submitting the I/O requests to the underlying distributed file system.
Specifically, the disk access module (Client) is configured to open a disk to establish an I/O link, receive disk I/O requests from the computing node and submit them to the Server nodes of the storage cluster, acquire from the Master node the information necessary for handling faults, report the fault state of storage nodes to the Master node, and respond to the virtual machine after the storage cluster has completed an I/O request. The Master node is a highly available module composed of at least 3 machines in the storage cluster; it is responsible for handling the opening and closing of disks and fast preemptive recovery, for managing information such as the openVersion (history information of disk opening), token (identification information) and computing-node blacklist of each disk, and for detecting the fault state of the Server nodes and initiating their fault recovery. The Server node is used for receiving and processing the I/O requests submitted by the disk access module (Client), processing the check information carried in the I/O requests, submitting the I/O requests to the underlying distributed file storage, and processing the heartbeat checks and fault handling information sent by the Master.
In this scenario, if one or more nodes in the computing cluster fail, it is necessary to recover the virtual machine running on the failed computing node on other normal computing nodes in the computing cluster except the failed computing node, and it is necessary to clean up the residual disk on the failed computing node.
In this step, it is detected whether the disk of a virtual machine running on a computing node in the computing cluster is occupied by virtual machines running on at least two computing nodes. This is the precondition for cleaning up the disk on a computing node: the residual disk on a failed computing node is cleaned up only after the disk has been recovered on a normal computing node, so disk cleaning is performed only when the disk is occupied by virtual machines running on two or more computing nodes. If the current disk is occupied by virtual machines running on two or more computing nodes, the following step S102 is executed, the disk is taken as a disk to be processed, and the subsequent steps S103 to S104 clean up the disk to be processed. If the current disk is not occupied by virtual machines running on two or more computing nodes, that is, it is occupied by a virtual machine running on only one computing node in the computing cluster, it is necessary to wait until the disk has been recovered on at least one other computing node in the computing cluster before performing a cleaning operation on it.
Preferably, in the embodiment of the application, a virtual machine running on a computing node occupies a disk in one of two modes, read-write open and read-only open, where the disk can be occupied in the read-write open mode by only one computing node in the computing cluster at a time, and can be occupied in the read-only open mode by at least one computing node in the computing cluster. Therefore, if the disk is occupied in the read-write open mode by virtual machines running on two or more computing nodes in the computing cluster, a cleaning operation needs to be performed on the disk so that it ends up being occupied in the read-write open mode by only one computing node. Based on this, in this step, detecting whether the disk of a virtual machine running on a computing node in the computing cluster is occupied by virtual machines running on at least two computing nodes means detecting whether that disk is occupied in the read-write open mode by virtual machines running on at least two computing nodes; if so, the following step S102 is executed, the disk is taken as a disk to be processed, and the subsequent steps S103 to S104 clean up the disk to be processed.
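A minimal sketch of this detection step, assuming a per-disk list of open records that includes the opening node's link address and open mode (the record layout is an illustrative assumption, not the patent's data structure):

```go
package main

import "fmt"

// OpenMode models the two occupation modes named in the text.
type OpenMode int

const (
	ReadOnly OpenMode = iota
	ReadWrite
)

// openRecord is an assumed per-open record: which compute node opened the
// disk, and in which mode.
type openRecord struct {
	nodeAddr string
	mode     OpenMode
}

// needsProcessing reports whether a disk is occupied in the read-write open
// mode by virtual machines on at least two distinct compute nodes, i.e. the
// condition under which the text takes it as a "disk to be processed".
func needsProcessing(records []openRecord) bool {
	nodes := map[string]bool{}
	for _, r := range records {
		if r.mode == ReadWrite {
			nodes[r.nodeAddr] = true
		}
	}
	return len(nodes) >= 2
}

func main() {
	records := []openRecord{
		{nodeAddr: "10.0.0.1", mode: ReadWrite}, // source node, possibly failed
		{nodeAddr: "10.0.0.2", mode: ReadWrite}, // target node after recovery
	}
	fmt.Println(needsProcessing(records)) // true: take this disk as the disk to be processed
}
```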
It should be noted that, besides the above example of recovering the disks of a failed computing node on a new normal computing node in the computing cluster, many similar scenarios in a cloud computing environment can also use the disk processing method provided by the application to clean up the residual disk on a computing node. For example, in a virtual machine live migration scenario, before the migration operation is executed, the disk of the virtual machine running on the source computing node is occupied in the read-write open mode; after the virtual machine running on the source computing node has been migrated to the target computing node, the target computing node occupies the disk of the virtual machine in the read-only open mode; and after the migration operation has been executed, the occupation of the disk of the virtual machine on the target computing node is changed from the read-only open mode to the read-write open mode. In this case the current disk is read-write open on both the source computing node and the target computing node, and the disk therefore needs to be cleaned up. None of these scenarios departs from the core of the application, and they are therefore all within its scope of protection.
And step S102, taking the magnetic disk as a magnetic disk to be processed.
Step S103, determining a target computing node and a source computing node in the at least two computing nodes.
In the embodiment of the present application, the target computing node and the source computing node among the at least two computing nodes are determined in order to distinguish, among the computing nodes occupying the disk, which are faulty and which are normal, in preparation for removing the disk from the faulty computing nodes below. Preferably, in the scenario provided by this embodiment of recovering the disks of a failed computing node on a new normal computing node in the computing cluster, the source computing node refers to the failed computing node among the at least two computing nodes, and the target computing node refers to a normal computing node other than the source computing node among the at least two computing nodes.
It should be noted that, in the virtual machine live migration scenario, the source computing node is a computing node where the virtual machine is located before migration, and the target computing node is a computing node where the virtual machine is located after migration.
In addition, in a specific implementation, in addition to the implementation manners for determining the target computing node and the source computing node in the at least two computing nodes provided for the two scenarios, the target computing node and the source computing node in the at least two computing nodes may be determined in other manners, for example, by determining whether there is at least one abnormal virtual machine with abnormal state in virtual machines running on the computing nodes, and if so, taking the computing node where the abnormal virtual machine is located as the source computing node, and taking computing nodes other than the source computing node in the at least two computing nodes as the target computing node.
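A minimal sketch of one way to split the occupying nodes into a source node and target nodes, assuming the caller already knows which node has failed or hosts the abnormal virtual machine (the function and parameter names are illustrative assumptions):

```go
package main

import "fmt"

// splitNodes takes the compute nodes currently occupying the disk and the
// node identified as faulty (or hosting an abnormal virtual machine), and
// returns it as the source node plus the remaining nodes as target nodes.
func splitNodes(occupying []string, faulty string) (source string, targets []string) {
	for _, n := range occupying {
		if n == faulty {
			source = n
			continue
		}
		targets = append(targets, n)
	}
	return source, targets
}

func main() {
	src, tgts := splitNodes([]string{"node-1", "node-2"}, "node-1")
	fmt.Println(src, tgts) // node-1 [node-2]
}
```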
And step S104, closing the occupation of the source computing node on the disk to be processed.
In step S103, the failed source computing node and the normal target computing node for which disk cleaning needs to be performed were determined; in this step, the residual disk on the failed source computing node is cleaned up by closing the occupation of the disk to be processed by the source computing node. Specifically, the disk to be processed on the source computing node is occupied on the target computing node in the read-write open mode, so that the disk to be processed is preempted from the source computing node; that is, the residual disk on the failed computing node is cleaned up. A failed computing node in a computing cluster is likely to be in an uncontrolled state because of the failure, so after the disk to be processed has been preempted on the target computing node, that is, after the preemption of the residual disk on the failed computing node has been completed, the occupation of the disk to be processed by the virtual machine running on the source computing node is closed, that is, that occupation is removed.
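A minimal sketch of this recovery path, assuming a hypothetical Master control-plane interface with a preemptive read-write open and a close call (the interface and function names here are illustrative, not the patent's API):

```go
package recovery

import "fmt"

// masterAPI is a hypothetical stand-in for the Master's control-plane calls
// used during recovery; the real interfaces are not specified at this level.
type masterAPI interface {
	// openReadWrite opens the disk for the given node, preempting any
	// existing read-write occupation held from another link address.
	openReadWrite(diskID, token, nodeAddr string) error
	// closeOccupation closes the occupation identified by disk, token and
	// link address (the role the text later assigns to CloseImageByToken).
	closeOccupation(diskID, token, nodeAddr string) error
}

// recoverOnTarget preempts the disk to be processed on the target node and
// then closes the source node's residual occupation, mirroring step S104.
func recoverOnTarget(m masterAPI, diskID, token, sourceAddr, targetAddr string) error {
	if err := m.openReadWrite(diskID, token, targetAddr); err != nil {
		return fmt.Errorf("preemptive open on target failed: %w", err)
	}
	return m.closeOccupation(diskID, token, sourceAddr)
}
```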
Specifically, the AddDeviceBlackList, RemoveDeviceBlackList and GetDeviceBlackList interfaces are implemented on the Master, that is, the operations of adding to the link address blacklist, deleting from the link address blacklist and obtaining the link address blacklist of a disk. The link address blacklist can store the link addresses of one or more unhealthy computing nodes; if the link address of a computing node has been added to the link address blacklist, a disk open request initiated from that link address is directly rejected with an error, so the abnormal behavior of a residual disk object is prevented from preempting system resources and from affecting the operation of normal disk objects. Therefore, the embodiment of the application realizes the flexible designation of multiple unhealthy computing nodes through the link address blacklist. In addition, a CloseImageByToken interface is implemented on the Master and is used for closing a specified disk so as to remove the residual disk object.
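A minimal sketch of the blacklist check on the Master side, assuming the blacklist is kept per disk as a set of link addresses (the storage layout and method names are illustrative assumptions):

```go
package main

import (
	"errors"
	"fmt"
)

var errBlacklisted = errors.New("open rejected: link address is blacklisted for this disk")

// deviceBlackList is an assumed per-disk blacklist of link addresses.
type deviceBlackList map[string]map[string]bool // diskID -> set of blacklisted link addresses

func (b deviceBlackList) Add(diskID, addr string) {
	if b[diskID] == nil {
		b[diskID] = map[string]bool{}
	}
	b[diskID][addr] = true
}

func (b deviceBlackList) Remove(diskID, addr string) { delete(b[diskID], addr) }

func (b deviceBlackList) Get(diskID string) []string {
	var addrs []string
	for a := range b[diskID] {
		addrs = append(addrs, a)
	}
	return addrs
}

// checkOpen is the check performed when a disk open request arrives: requests
// from a blacklisted link address are rejected immediately with an error.
func (b deviceBlackList) checkOpen(diskID, addr string) error {
	if b[diskID][addr] {
		return errBlacklisted
	}
	return nil
}

func main() {
	bl := deviceBlackList{}
	bl.Add("disk-1", "10.0.0.1")                    // isolate the unhealthy compute node
	fmt.Println(bl.checkOpen("disk-1", "10.0.0.1")) // rejected
	fmt.Println(bl.checkOpen("disk-1", "10.0.0.2")) // nil: allowed
}
```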
As described above, the disk can be occupied in the read-write open mode by only one computing node in the computing cluster at a time, and can be occupied in the read-only open mode by at least one computing node; that is, any disk can be opened in the read-write open mode on only one computing node globally, but can be opened in the read-only open mode on multiple computing nodes. In addition, the same token (identification information) is used for the same disk regardless of whether it is opened read-only or read-write, and the same disk can be opened through different link addresses: for example, while a disk is opened in the read-write open mode on one computing node, it can also be opened in the read-only open mode on that computing node as well as on other computing nodes.
For example, suppose an uncontrolled virtual machine is found on computing node 1 and occupies a disk opened for read-write, denoted c1(id1, token1, ip1, RW), where id1 is the number of computing node 1, token1 is the identification information of computing node 1, ip1 is the link address of computing node 1, and RW indicates that the virtual machine running on computing node 1 occupies the disk in the read-write open mode. When the disk is opened on computing node 2, denoted c2(id1, token1, ip2, RW), the open on computing node 2 preempts the Client on computing node 1, so the virtual machine on computing node 1 can no longer submit I/O requests. Then the CloseImageByToken(id1, token1, ip1) interface is called to remove computing node 1, and AddDeviceBlackList(id1, ip1) is called to add the link address ip1 of computing node 1 to the link address blacklist of the current disk; once ip1 has been added to the blacklist, the current disk can no longer be opened on computing node 1.
Or, in a virtual machine live migration scenario, before the live migration operation the disk of the virtual machine on the source computing node 1 is opened in the read-write open mode, denoted c1(id1, token1, ip1, RW). The virtual machine is live migrated to the destination computing node 2, where the disk is opened in the read-only open mode, denoted c2(id1, token1, ip2, RO). After the live migration operation is completed, the open on the destination computing node 2 is switched to the read-write open mode, thereby preempting the disk that was occupied in the read-write open mode on the source computing node 1.
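A minimal sketch of the open-mode rule these examples rely on, assuming a per-disk state with a single read-write holder and a set of read-only holders (the type and field names are illustrative assumptions):

```go
package main

import "fmt"

// diskState tracks, per the rule in the text, the single read-write holder
// and the set of read-only holders of one disk.
type diskState struct {
	rwHolder  string
	roHolders map[string]bool
}

func (d *diskState) openRO(addr string) { d.roHolders[addr] = true }

// openRW gives the disk's read-write occupation to addr, preempting any
// previous read-write holder, and returns the preempted address (if any).
func (d *diskState) openRW(addr string) (preempted string) {
	preempted = d.rwHolder
	d.rwHolder = addr
	delete(d.roHolders, addr)
	return preempted
}

func main() {
	d := &diskState{roHolders: map[string]bool{}}
	d.openRW("ip1")              // before migration: source ip1 holds the disk read-write
	d.openRO("ip2")              // during migration: destination ip2 opens the disk read-only
	fmt.Println(d.openRW("ip2")) // after migration: ip2 switches to read-write, preempting ip1
}
```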
Preferably, in the embodiment of the application, a disk of a virtual machine running on a computing node is configured with a corresponding disk management table, and each time the disk is opened a corresponding record entry is inserted into the disk management table of that disk; the record entry records the history information of the disk opening, the identification information, and the link address of the computing node that opened the disk. For example, the Master maintains a disk management table for each disk, and each time the disk is opened a record entry (openVersion, clientToken, clientIp) is inserted into that table.
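A minimal sketch of such a per-disk management table on the Master, assuming an in-memory list of record entries and a monotonically increasing openVersion (all names and types are illustrative assumptions):

```go
package main

import "fmt"

// entry mirrors the (openVersion, clientToken, clientIp) record described in
// the text, plus the open mode, which the later matching rules depend on.
type entry struct {
	openVersion int64
	clientToken string
	clientIP    string
	readWrite   bool
}

// diskTable is the assumed per-disk management table kept by the Master.
type diskTable struct {
	nextVersion int64
	entries     []entry
}

// recordOpen inserts a record entry each time the disk is opened and returns
// the openVersion assigned to that open.
func (t *diskTable) recordOpen(token, ip string, readWrite bool) int64 {
	t.nextVersion++
	t.entries = append(t.entries, entry{t.nextVersion, token, ip, readWrite})
	return t.nextVersion
}

func main() {
	var t diskTable
	v1 := t.recordOpen("token1", "ip1", true) // read-write open from the source node
	v2 := t.recordOpen("token1", "ip2", true) // preemptive read-write open from the target node
	fmt.Println(v1, v2, len(t.entries))       // 1 2 2
}
```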
It should be noted that, in the process of preempting the disk to be processed on the target computing node, if the preempted source computing node occupies the disk in the read-write open mode, all record entries whose identification information and link address match the source computing node are removed, covering occupations in both the read-write open mode and the read-only open mode. If the preempted source computing node occupies the disk in the read-only open mode, then for entries whose identification information and link address both match the source computing node, only the read-only occupations are removed; if the identification information matches but the link address does not, no processing is required.
In a preferred implementation manner provided in the embodiment of the present application, the following 4 implementations of closing the occupation of the disk to be processed by the virtual machine running on the source computing node are provided (a sketch of the matching logic appears after the list):
1) searching a record entry corresponding to the disk to be processed in the disk management table; and judging whether the identification information recorded in the searched record entry is empty, if not, judging whether the link address of the computing node recorded in the record entry is the same as the link address of the source computing node, and if so, deleting the record entry from the disk management table.
2) Judging whether the occupation of the source computing node on the disk to be processed is read-write open or not, if so, searching the record items of the identification information and the link address of the computing node matched with the source computing node in the disk management table; and deleting the searched record entry from the disk management table.
3) Judging whether the occupation of the source computing node on the disk to be processed is read-only open or not, if so, searching the record items of the identification information and the link address of the computing node matched with the source computing node in the disk management table; screening the record items which occupy the disk to be processed and are read-only opened from the searched record items; and deleting the screened record entries from the disk management table.
4) Judging whether the occupation of the source computing node on the disk to be processed is read-write open or not, if so, searching the record items of the identification information and the link address of the computing node matched with the source computing node in the disk management table; deleting the searched record items from the disk management table;
if not, judging whether the occupation of the source computing node on the disk to be processed is read-only open or not, and if the occupation is read-only open, searching the identification information and the record item matched with the link address of the computing node and the source computing node in the disk management table; screening the record items which occupy the disk to be processed and are read-only opened from the searched record items; and deleting the screened record entries from the disk management table.
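A minimal sketch of the matching rules in implementations 2) and 3) above, applied to a per-disk management table like the one sketched earlier (the table layout is an assumption, not the patent's data structure):

```go
package master

// entry and diskTable follow the assumed layout of the earlier table sketch.
type entry struct {
	openVersion int64
	clientToken string
	clientIP    string
	readWrite   bool
}

type diskTable struct{ entries []entry }

// closeOccupation removes the source node's record entries from the table.
// If the source occupied the disk read-write, every entry matching its token
// and link address is deleted; if it occupied the disk read-only, only the
// matching read-only entries are deleted.
func (t *diskTable) closeOccupation(token, sourceIP string, sourceWasReadWrite bool) {
	kept := t.entries[:0]
	for _, e := range t.entries {
		match := e.clientToken == token && e.clientIP == sourceIP
		remove := match && (sourceWasReadWrite || !e.readWrite)
		if !remove {
			kept = append(kept, e)
		}
	}
	t.entries = kept
}
```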
Following the above example, the Master maintains a disk management table for each disk, and a record entry storing the disk's (openVersion, clientToken, clientIp) is inserted into the table each time the disk is opened. In the process of removing the occupation of a disk by a virtual machine running on a computing node, the matching record entry is found in the disk management table maintained by the Master, and the occupation is removed by deleting that record entry from the disk management table. On the Server node side, after the Master has performed the above operation it sends a synchronous openVersion notification to the Server node; the Server node receives the request and updates the openVersion maintained in its memory, so that the openVersion of the virtual machine corresponding to the removed disk is deleted, and all I/O requests that carry a removed openVersion are directly rejected with an error, which prevents disordered data from being written.
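A minimal sketch of the Server-side check implied above, assuming the Server keeps the set of currently valid openVersions per disk in memory (names are illustrative assumptions):

```go
package main

import (
	"errors"
	"fmt"
)

var errStaleOpenVersion = errors.New("I/O rejected: openVersion has been removed")

// serverState is an assumed in-memory view on the Server node: for each disk,
// the openVersions that are still valid after the Master's synchronization.
type serverState struct {
	validVersions map[string]map[int64]bool // diskID -> set of valid openVersions
}

// syncOpenVersions replaces the valid set for a disk when the Master sends a
// synchronous openVersion notification.
func (s *serverState) syncOpenVersions(diskID string, versions []int64) {
	set := map[int64]bool{}
	for _, v := range versions {
		set[v] = true
	}
	s.validVersions[diskID] = set
}

// submitIO rejects any I/O request carrying an openVersion that has been
// removed, so a residual disk object can no longer write disordered data.
func (s *serverState) submitIO(diskID string, openVersion int64) error {
	if !s.validVersions[diskID][openVersion] {
		return errStaleOpenVersion
	}
	return nil // hand the request on to the underlying distributed file system
}

func main() {
	srv := &serverState{validVersions: map[string]map[int64]bool{}}
	srv.syncOpenVersions("disk-1", []int64{2}) // version 1 (the source node's open) was removed
	fmt.Println(srv.submitIO("disk-1", 1))     // rejected
	fmt.Println(srv.submitIO("disk-1", 2))     // nil: accepted
}
```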
To sum up, in the disk processing method provided by the embodiment of the application, when processing the disks in a computing cluster, a disk occupied by virtual machines running on a plurality of computing nodes in the cluster is detected; among the computing nodes occupying that disk, the source computing node whose occupation needs to be removed and the normal target computing node are then determined; and finally the occupation of the disk to be processed by the source computing node is closed. The virtual machine is thereby quickly recovered on the target computing node, and the source computing node is prevented from affecting the disk now occupied by the target computing node.
The embodiment of the disk processing device provided by the application is as follows:
in the foregoing embodiment, a disk processing method is provided, and correspondingly, a disk processing apparatus is further provided in the present application, which is described below with reference to the accompanying drawings.
Referring to FIG. 3, a schematic diagram of an embodiment of a disk processing apparatus provided by the present application is shown.
Since the apparatus embodiments are substantially similar to the method embodiments, they are described in a relatively simple manner, and reference may be made to the corresponding description of the method embodiments provided above for relevant portions. The device embodiments described below are merely illustrative.
The application provides a disk processing apparatus, including:
a disk detection unit 301, configured to detect whether a disk in a computing cluster is occupied by a virtual machine running on at least two computing nodes, and if so, run a pending disk determination unit 302;
the to-be-processed disk determining unit 302 is configured to use the disk as a to-be-processed disk;
a computing node distinguishing unit 303, configured to determine a target computing node and a source computing node in the at least two computing nodes;
a disk occupation closing unit 304, configured to close occupation of the to-be-processed disk by the source computing node.
Optionally, the disk processing apparatus includes:
and the link address blacklist maintenance unit is used for adding the link address corresponding to the source computing node into a link address blacklist which forbids accessing the disk to be processed.
Optionally, the manner in which the virtual machine occupies the disk includes read-write open and/or read-only open; the disk can be occupied in the read-write open mode by only one computing node in the computing cluster at a time, and can be occupied in the read-only open mode by at least one computing node in the computing cluster.
Optionally, the computing node distinguishing unit 303 is specifically configured to determine whether at least one abnormal virtual machine with an abnormal state exists in virtual machines running on a computing node, and if so, take the computing node where the abnormal virtual machine is located as the source computing node, and take computing nodes other than the source computing node in the at least two computing nodes as the target computing node.
Optionally, the disk processing apparatus includes:
and the virtual machine recovery unit is used for recovering the virtual machine running on the failed computing node in the computing cluster on a normal computing node except the failed computing node in the computing cluster after detecting that the computing node in the computing cluster fails.
Optionally, the computing node distinguishing unit 303 is specifically configured to use the faulty computing node as the source computing node, and use a computing node other than the source computing node in the at least two computing nodes as the target computing node.
Optionally, the disk occupation closing unit 304 is specifically configured to occupy the to-be-processed disk on the source computing node in a read-write open manner on the target computing node.
Optionally, a disk of a virtual machine running on the computing node is configured with a disk management table corresponding to the disk, and when the disk is opened each time, a corresponding record entry is inserted into the disk management table corresponding to the disk; the record entry records history information of the opened disk, identification information and a link address of a computing node of the opened disk.
Optionally, the disk occupation closing unit 304 includes:
a record item searching subunit, configured to search, in the disk management table, a record item corresponding to the disk to be processed;
the identification information judging subunit is used for judging whether the identification information recorded in the searched record entry is empty or not, and if not, the link address subunit is operated;
and the link address subunit is configured to determine whether the link address of the computing node recorded in the record entry is the same as the link address of the source computing node, and if so, delete the record entry from the disk management table.
Optionally, the disk occupation closing unit 304 includes:
a second disk occupation mode judgment subunit, configured to judge whether the occupation of the to-be-processed disk by the source computing node is read-write open, and if yes, run a second record entry search subunit and a second record entry deletion subunit;
the second record item searching subunit is configured to search, in the disk management table, a record item in which the identification information and the link address of the computing node are matched with the source computing node;
and the second record item deleting subunit is configured to delete the found record item from the disk management table.
Optionally, the disk occupation closing unit 304 includes:
a third disk occupation mode judgment subunit, configured to judge whether the occupation of the source computing node on the disk to be processed is read-only open, and if yes, run a third record entry search subunit, a third record entry screening subunit, and a third record entry deletion subunit;
the third record entry searching subunit is configured to search, in the disk management table, a record entry matching the identification information and the link address of the computing node with the source computing node;
the third record item screening subunit is configured to screen, from the found record items, record items whose occupation of the to-be-processed disk is read-only open;
and the third record item deleting subunit is configured to delete the screened record item from the disk management table.
Optionally, the disk occupation closing unit 304 includes:
a fourth disk occupation mode judgment subunit, configured to judge whether the occupation of the to-be-processed disk by the source computing node is read-write open, and if yes, run a fourth record entry search subunit and a fourth record entry deletion subunit; if not, operating a fourth disk occupation mode judgment subunit;
the fourth disk occupation mode judging subunit is configured to judge whether the source computing node occupies the to-be-processed disk in a read-only mode, and if the source computing node occupies the to-be-processed disk in the read-only mode, run a fifth record item searching subunit, a fifth record item screening subunit, and a fifth record item deleting subunit;
the fourth record entry searching subunit is configured to search, in the disk management table, a record entry matching the identification information and the link address of the computing node with the source computing node;
the fourth record item deleting subunit is configured to delete the found record item from the disk management table;
the fifth record entry searching subunit is configured to search, in the disk management table, a record entry matching the identification information and the link address of the computing node with the source computing node;
the fifth record item screening subunit is configured to screen, from the found record items, record items whose occupation of the to-be-processed disk is read-only open;
and the fifth record item deleting subunit is configured to delete the screened record item from the disk management table.
Optionally, the disk processing apparatus operates based on a live migration scenario of the virtual machine, and before the migration operation is executed, a disk of the virtual machine operating on the source computing node is occupied in a read-write open manner; after the virtual machine running on the source computing node is migrated to the target computing node, occupying a disk of the virtual machine in a read-only open mode; and after the migration operation is executed, changing the occupation of the target computing node on the disk of the virtual machine from the read-only open mode to read-write open mode, wherein correspondingly, the disk to be processed refers to the disk occupied by the source computing node in the read-write open mode.
The embodiment of the electronic equipment provided by the application is as follows:
in the foregoing embodiment, a disk processing method is provided, and in addition, the present application also provides an electronic device for implementing the disk processing method, which is described below with reference to the accompanying drawings.
Referring to fig. 4, a schematic diagram of an electronic device provided in the present embodiment is shown.
The embodiments of the electronic device provided in the present application are described more simply, and for related parts, reference may be made to the corresponding descriptions of the embodiments of the disk processing method provided above. The embodiments described below are merely illustrative.
The application provides an electronic device, including:
a memory 401 and a processor 402; the memory 401 is configured to store computer-executable instructions, and the processor 402 is configured to execute the following computer-executable instructions:
detecting whether a disk in a computing cluster is occupied by virtual machines running on at least two computing nodes, and if so, taking the disk as a disk to be processed;
determining a target compute node and a source compute node of the at least two compute nodes;
and closing the occupation of the source computing node on the disk to be processed.
Optionally, the processor 402 is further configured to execute the following computer-executable instructions:
and adding the link address corresponding to the source computing node into a link address blacklist for forbidding accessing the disk to be processed.
Optionally, the manner in which the virtual machine occupies the disk includes read-write open and/or read-only open; the disk can be occupied in the read-write open mode by only one computing node in the computing cluster at a time, and can be occupied in the read-only open mode by at least one computing node in the computing cluster.
Optionally, the determining a target computing node and a source computing node in the at least two computing nodes includes:
judging whether at least one abnormal virtual machine with abnormal state exists in the virtual machines running on the computing nodes, if so, taking the computing node where the abnormal virtual machine is located as the source computing node, and taking the computing nodes except the source computing node in the at least two computing nodes as the target computing node.
Optionally, the processor 402 is further configured to execute the following computer-executable instructions:
and after detecting that the computing node in the computing cluster has a fault, restoring the virtual machine running on the faulty computing node in the computing cluster on a normal computing node except the faulty computing node in the computing cluster.
Optionally, the determining a target computing node and a source computing node in the at least two computing nodes includes:
and taking the failed computing node as the source computing node, and taking computing nodes except the source computing node in the at least two computing nodes as the target computing node.
Optionally, the closing of the occupation of the to-be-processed disk by the source computing node includes:
and occupying the disk to be processed on the source computing node on the target computing node in a read-write open mode.
Optionally, a disk of a virtual machine running on the computing node is configured with a disk management table corresponding to the disk, and when the disk is opened each time, a corresponding record entry is inserted into the disk management table corresponding to the disk; the record entry records history information of the opened disk, identification information and a link address of a computing node of the opened disk.
Optionally, the closing of the occupation of the to-be-processed disk by the source computing node includes:
searching a record entry corresponding to the disk to be processed in the disk management table;
and judging whether the identification information recorded in the searched record entry is empty, if not, judging whether the link address of the computing node recorded in the record entry is the same as the link address of the source computing node, and if so, deleting the record entry from the disk management table.
Optionally, the closing of the occupation of the to-be-processed disk by the source computing node includes:
judging whether the occupation of the source computing node on the disk to be processed is read-write open or not, if so, searching the record items of the identification information and the link address of the computing node matched with the source computing node in the disk management table;
and deleting the searched record entry from the disk management table.
Optionally, the closing of the occupation of the to-be-processed disk by the source computing node includes:
judging whether the occupation of the source computing node on the disk to be processed is read-only open or not, if so, searching the record items of the identification information and the link address of the computing node matched with the source computing node in the disk management table;
screening the record items which occupy the disk to be processed and are read-only opened from the searched record items;
and deleting the screened record entries from the disk management table.
Optionally, the closing of the occupation of the to-be-processed disk by the source computing node includes:
judging whether the occupation of the source computing node on the disk to be processed is read-write open or not, if so, searching the record items of the identification information and the link address of the computing node matched with the source computing node in the disk management table;
deleting the searched record items from the disk management table;
if not, judging whether the occupation of the source computing node on the disk to be processed is read-only open or not, and if the occupation is read-only open, searching the identification information and the record item matched with the link address of the computing node and the source computing node in the disk management table;
screening the record items which occupy the disk to be processed and are read-only opened from the searched record items;
and deleting the screened record entries from the disk management table.
Optionally, the computer executable instruction is implemented based on a virtual machine live migration scenario, and before the migration operation is executed, a disk of a virtual machine running on the source computing node is occupied in a read-write open manner; after the virtual machine running on the source computing node is migrated to the target computing node, occupying a disk of the virtual machine in a read-only open mode; and after the migration operation is executed, changing the occupation of the target computing node on the disk of the virtual machine from the read-only open mode to read-write open mode, wherein correspondingly, the disk to be processed refers to the disk occupied by the source computing node in the read-write open mode.
Although the present application has been described with reference to the preferred embodiments, it is not intended to limit the present application, and those skilled in the art can make variations and modifications without departing from the spirit and scope of the present application, therefore, the scope of the present application should be determined by the claims that follow.
In a typical configuration, a computing device includes one or more processors, input/output interfaces, network interfaces, and memory.
The memory may include forms of volatile memory in a computer readable medium, Random Access Memory (RAM) and/or non-volatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), electrically erasable programmable read only memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transitory media), such as modulated data signals and carrier waves.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.

Claims (15)

1. A disk processing method, comprising:
detecting whether a disk in a computing cluster is occupied by virtual machines running on at least two computing nodes, and if so, taking the disk as a disk to be processed;
determining a target computing node and a source computing node of the at least two computing nodes;
and closing the occupation of the source computing node on the disk to be processed.
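As a rough, non-normative sketch of the three steps of claim 1 (detect, distinguish, close), assuming a simple mapping from disks to the computing nodes whose virtual machines occupy them; all names below are hypothetical:

```python
# Hypothetical outline of the claimed method; every name and structure is assumed.
from typing import Dict, List, Set, Tuple

def process_disks(occupancy: Dict[str, List[str]],
                  abnormal_nodes: Set[str]) -> List[Tuple[str, str, str]]:
    """occupancy maps a disk id to the computing nodes whose virtual machines occupy it.
    Returns (disk, source, target) tuples for every disk occupied by at least two nodes."""
    actions = []
    for disk, nodes in occupancy.items():
        if len(nodes) < 2:
            continue  # not a disk to be processed
        # Pick the source node (e.g. the one hosting an abnormal virtual machine);
        # any other occupying node can serve as the target.
        source = next((n for n in nodes if n in abnormal_nodes), nodes[0])
        target = next(n for n in nodes if n != source)
        actions.append((disk, source, target))  # the source's occupation is then closed
    return actions

# Tiny usage example with made-up identifiers.
print(process_disks({"vm-disk-1": ["nodeA", "nodeB"]}, abnormal_nodes={"nodeA"}))
```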
2. The disk processing method according to claim 1, further comprising:
and adding the link address corresponding to the source computing node into a link address blacklist for forbidding access to the disk to be processed.
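A link address blacklist of the kind recited in claim 2 could be as small as a per-disk set of forbidden addresses. The sketch below is an assumption made for illustration; the address format shown is invented:

```python
# Hypothetical per-disk blacklist of link addresses forbidden to access the disk.
from collections import defaultdict
from typing import DefaultDict, Set

blacklist: DefaultDict[str, Set[str]] = defaultdict(set)  # disk id -> banned link addresses

def ban_source_link(disk_id: str, source_link_address: str) -> None:
    """After closing the occupation, forbid the source node's link address."""
    blacklist[disk_id].add(source_link_address)

def access_allowed(disk_id: str, link_address: str) -> bool:
    return link_address not in blacklist[disk_id]

ban_source_link("vm-disk-1", "tcp://nodeA:3260")        # made-up address format
print(access_allowed("vm-disk-1", "tcp://nodeA:3260"))  # False
```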
3. The disk processing method according to claim 2, wherein the manner in which the virtual machine occupies the disk includes: read-write open and/or read-only open;
the disk is allowed to be occupied in the read-write open mode by only one computing node in the computing cluster, and is allowed to be occupied in the read-only open mode by at least one computing node in the computing cluster.
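The constraint of claim 3 behaves like a single-writer, multiple-reader rule for each disk. A hypothetical admission check, with mode names assumed:

```python
# Hypothetical admission check: one read-write occupier at most, any number of read-only ones.
from typing import List

def may_open(existing_modes: List[str], requested_mode: str) -> bool:
    """existing_modes: open modes already recorded for this disk."""
    if requested_mode == "read-write":
        return "read-write" not in existing_modes  # only one read-write occupation at a time
    if requested_mode == "read-only":
        return True                                # read-only occupations are always admitted
    return False

print(may_open(["read-only"], "read-write"))   # True
print(may_open(["read-write"], "read-write"))  # False
```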
4. The method of claim 3, wherein the determining a target computing node and a source computing node of the at least two computing nodes comprises:
judging whether at least one abnormal virtual machine, namely a virtual machine in an abnormal state, exists among the virtual machines running on the at least two computing nodes, and if so, taking the computing node where the abnormal virtual machine is located as the source computing node, and taking the computing nodes other than the source computing node among the at least two computing nodes as the target computing node.
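One way to read claim 4 is as a simple selection rule: the node hosting an abnormal virtual machine becomes the source, and the remaining occupying nodes become targets. A minimal sketch, with state names assumed:

```python
# Hypothetical selection of source and target nodes by virtual machine state.
from typing import Dict, List, Optional, Tuple

def split_by_abnormal_vm(nodes: List[str],
                         vm_states: Dict[str, List[str]]
                         ) -> Optional[Tuple[str, List[str]]]:
    """nodes: computing nodes occupying the disk to be processed.
    vm_states: node -> states of the virtual machines running on it (state names assumed).
    Returns (source, targets), or None if no abnormal virtual machine is found."""
    for node in nodes:
        if any(state == "abnormal" for state in vm_states.get(node, [])):
            targets = [n for n in nodes if n != node]
            return node, targets
    return None

print(split_by_abnormal_vm(["nodeA", "nodeB"],
                           {"nodeA": ["abnormal"], "nodeB": ["running"]}))
# ('nodeA', ['nodeB'])
```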
5. The disk processing method according to claim 3 or 4, further comprising:
and after detecting that a computing node in the computing cluster has failed, restoring the virtual machine that was running on the failed computing node on a normal computing node in the computing cluster other than the failed computing node.
6. The method of claim 5, wherein the determining a target computing node and a source computing node of the at least two computing nodes comprises:
and taking the failed computing node as the source computing node, and taking computing nodes except the source computing node in the at least two computing nodes as the target computing node.
7. The disk processing method according to claim 6, wherein the closing of the occupation of the disk to be processed by the source computing node comprises:
and occupying, on the target computing node in the read-write open mode, the disk to be processed that was occupied on the source computing node.
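In the failover reading of claims 5 to 7, closing the failed source's occupation goes hand in hand with recording a read-write occupation for the target. A hypothetical sketch over a list of record entries (field names assumed):

```python
# Hypothetical pairing of "close on the failed source" with "open read-write on the target".
from typing import Dict, List

def take_over_disk(table: List[Dict[str, str]], disk_id: str,
                   failed_node: str, target_node: str, target_link: str) -> None:
    """table: record entries for disk_id (field names assumed)."""
    # Drop every occupation still recorded for the failed (source) computing node.
    table[:] = [e for e in table if e["node_id"] != failed_node]
    # Record the target node's read-write occupation of the recovered virtual machine's disk.
    table.append({"disk_id": disk_id, "node_id": target_node,
                  "link_address": target_link, "open_mode": "read-write"})

entries = [{"disk_id": "vm-disk-1", "node_id": "nodeA",
            "link_address": "tcp://nodeA:3260", "open_mode": "read-write"}]
take_over_disk(entries, "vm-disk-1", "nodeA", "nodeB", "tcp://nodeB:3260")
print(entries)  # only nodeB's read-write entry remains
```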
8. The disk processing method according to claim 7, wherein a disk management table is configured for the disk of the virtual machine running on the computing node, and a corresponding record entry is inserted into the disk management table of the disk each time the disk is opened;
the record entry records history information of the disk being opened, and the identification information and the link address of the computing node that opened the disk.
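Claim 8 can be pictured as an append-only table per disk, where every open operation leaves a record entry. The field names and the timestamp used as "history information" below are assumptions for illustration:

```python
# Hypothetical per-disk management table; field names are assumptions.
import time
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class OpenRecord:
    node_id: str        # identification information of the opening computing node
    link_address: str   # link address of that node
    open_mode: str      # "read-write" or "read-only"
    opened_at: float = field(default_factory=time.time)  # stands in for history information

# One management table per disk: disk id -> list of record entries.
disk_management_tables: Dict[str, List[OpenRecord]] = {}

def record_open(disk_id: str, node_id: str, link_address: str, mode: str) -> None:
    """Insert a corresponding record entry each time the disk is opened."""
    disk_management_tables.setdefault(disk_id, []).append(
        OpenRecord(node_id, link_address, mode))

record_open("vm-disk-1", "nodeA", "tcp://nodeA:3260", "read-write")
```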
9. The method of claim 8, wherein the closing of the occupation of the disk to be processed by the source computing node comprises:
searching the disk management table for a record entry corresponding to the disk to be processed;
and judging whether the identification information recorded in the found record entry is empty; if not, judging whether the link address of the computing node recorded in the record entry is the same as the link address of the source computing node, and if so, deleting the record entry from the disk management table.
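Claim 9's look-up, emptiness check and deletion can be sketched as a simple filter over the record entries found for the disk to be processed; the field names are assumed:

```python
# Hypothetical check-then-delete over the record entries found for the pending disk.
from typing import Dict, List

def close_by_lookup(entries: List[Dict[str, str]], source_link: str) -> List[Dict[str, str]]:
    """Keeps an entry unless its identification information is non-empty and its
    recorded computing-node link address equals the source node's link address."""
    kept = []
    for entry in entries:
        node_id = entry.get("node_id", "")
        if node_id and entry.get("link_address") == source_link:
            continue  # matching entry: delete it from the disk management table
        kept.append(entry)
    return kept

table = [{"node_id": "nodeA", "link_address": "tcp://nodeA:3260", "open_mode": "read-write"},
         {"node_id": "nodeB", "link_address": "tcp://nodeB:3260", "open_mode": "read-only"}]
print(close_by_lookup(table, "tcp://nodeA:3260"))  # only nodeB's entry remains
```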
10. The method of claim 9, wherein the closing of the occupation of the disk to be processed by the source computing node comprises:
judging whether the occupation of the disk to be processed by the source computing node is the read-write open mode, and if so, searching the disk management table for record entries whose computing-node identification information and link address match the source computing node;
and deleting the found record entries from the disk management table.
11. The method of claim 9, wherein the closing of the occupation of the disk to be processed by the source computing node comprises:
judging whether the occupation of the disk to be processed by the source computing node is the read-only open mode, and if so, searching the disk management table for record entries whose computing-node identification information and link address match the source computing node;
screening out, from the found record entries, the record entries in which the disk to be processed is occupied in the read-only open mode;
and deleting the screened-out record entries from the disk management table.
12. The method of claim 9, wherein the closing of the occupation of the disk to be processed by the source computing node comprises:
judging whether the occupation of the disk to be processed by the source computing node is the read-write open mode, and if so, searching the disk management table for record entries whose computing-node identification information and link address match the source computing node;
deleting the found record entries from the disk management table;
if not, judging whether the occupation of the disk to be processed by the source computing node is the read-only open mode, and if so, searching the disk management table for record entries whose computing-node identification information and link address match the source computing node;
screening out, from the found record entries, the record entries in which the disk to be processed is occupied in the read-only open mode;
and deleting the screened-out record entries from the disk management table.
13. The disk processing method according to claim 3 or 4, wherein the disk processing method is implemented based on a virtual machine live migration scenario, and before the migration operation is executed, a disk of a virtual machine running on the source computing node is occupied in the read-write open mode;
after the virtual machine running on the source computing node is migrated to the target computing node, the target computing node occupies the disk of the virtual machine in the read-only open mode;
and after the migration operation is executed, the occupation of the disk of the virtual machine by the target computing node is changed from the read-only open mode to the read-write open mode, wherein, correspondingly, the disk to be processed refers to the disk occupied by the source computing node in the read-write open mode.
14. A disk processing apparatus, comprising:
the disk detection unit is used for detecting whether a disk in the computing cluster is occupied by virtual machines running on at least two computing nodes, and if so, running the to-be-processed disk determining unit;
the to-be-processed disk determining unit is used for taking the disk as a to-be-processed disk;
a computing node distinguishing unit for determining a target computing node and a source computing node of the at least two computing nodes;
and the disk occupation closing unit is used for closing the occupation of the source computing node on the disk to be processed.
15. An electronic device, comprising:
a memory and a processor;
the memory is configured to store computer-executable instructions, and the processor is configured to execute the computer-executable instructions to:
detecting whether a disk in a computing cluster is occupied by virtual machines running on at least two computing nodes, and if so, taking the disk as a disk to be processed;
determining a target computing node and a source computing node of the at least two computing nodes;
and closing the occupation of the source computing node on the disk to be processed.
CN201810724079.0A 2018-07-04 2018-07-04 Disk processing method and device Active CN110688193B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810724079.0A CN110688193B (en) 2018-07-04 2018-07-04 Disk processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810724079.0A CN110688193B (en) 2018-07-04 2018-07-04 Disk processing method and device

Publications (2)

Publication Number Publication Date
CN110688193A true CN110688193A (en) 2020-01-14
CN110688193B CN110688193B (en) 2023-05-09

Family

ID=69106424

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810724079.0A Active CN110688193B (en) 2018-07-04 2018-07-04 Disk processing method and device

Country Status (1)

Country Link
CN (1) CN110688193B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105468429A (en) * 2014-08-19 2016-04-06 西安慧泽知识产权运营管理有限公司 Efficient virtual cluster management method and cluster node
CN107273231A (en) * 2016-04-07 2017-10-20 阿里巴巴集团控股有限公司 Distributed memory system hard disk tangles fault detect, processing method and processing device
US20170364378A1 (en) * 2016-06-15 2017-12-21 Red Hat Israel, Ltd. Live storage domain decommissioning in a virtual environment
CN107547595A (en) * 2016-06-27 2018-01-05 腾讯科技(深圳)有限公司 cloud resource scheduling system, method and device
CN107885576A (en) * 2017-10-16 2018-04-06 北京易讯通信息技术股份有限公司 A kind of virtual machine HA method in private clound based on OpenStack


Also Published As

Publication number Publication date
CN110688193B (en) 2023-05-09

Similar Documents

Publication Publication Date Title
US10353731B2 (en) Efficient suspend and resume of instances
US9524389B1 (en) Forensic instance snapshotting
US9354907B1 (en) Optimized restore of virtual machine and virtual disk data
US11789766B2 (en) System and method of selectively restoring a computer system to an operational state
CN118276783A (en) Data partition switching between storage clusters
US8954398B1 (en) Systems and methods for managing deduplication reference data
KR102844857B1 (en) Live migration of virtual machines to the target host in case of a fatal memory error.
US9606873B2 (en) Apparatus, system and method for temporary copy policy
US11379329B2 (en) Validation of data written via two different bus interfaces to a dual server based storage controller
US20170139637A1 (en) A method of live migration
US11720457B2 (en) Remote direct memory access (RDMA)-based recovery of dirty data in remote memory
US8874956B2 (en) Data re-protection in a distributed replicated data storage system
US9195528B1 (en) Systems and methods for managing failover clusters
US20120151501A1 (en) Configuration registry systems and methods
US8977896B1 (en) Maintaining data integrity in data migration operations using per-migration device error flags
KR20150111608A (en) Method for duplication of virtualization server and Virtualization control apparatus thereof
US10581668B2 (en) Identifying performance-degrading hardware components in computer storage systems
US11226746B2 (en) Automatic data healing by I/O
US12130712B2 (en) Data migration method and apparatus for database
US9575658B2 (en) Collaborative release of a virtual disk
CN110688193B (en) Disk processing method and device
US11226875B2 (en) System halt event recovery
US20140082313A1 (en) Storage class memory evacuation
CN117093325A (en) Virtual machine high availability implementation method, equipment and computer readable medium
CN120491887A (en) Management method of heterogeneous storage system, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20231207

Address after: Room 1-2-A06, Yungu Park, No. 1008 Dengcai Street, Sandun Town, Xihu District, Hangzhou City, Zhejiang Province

Patentee after: Aliyun Computing Co.,Ltd.

Address before: Box 847, four, Grand Cayman capital, Cayman Islands, UK

Patentee before: ALIBABA GROUP HOLDING Ltd.
