CN114816678B - A method, system, device and storage medium for virtual machine scheduling - Google Patents
A method, system, device and storage medium for virtual machine scheduling
- Publication number
- CN114816678B (Application No. CN202210610965.7A)
- Authority
- CN
- China
- Prior art keywords
- lock
- virtual
- cpu thread
- virtual machine
- queue
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
- G06F2009/45575—Starting, stopping, suspending or resuming virtual machine instances
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/48—Program initiating; Program switching, e.g. by interrupt
- G06F9/4806—Task transfer initiation or dispatching
- G06F9/4843—Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
- G06F9/4881—Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
Landscapes
- Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The invention provides a method, a system, a device, and a storage medium for virtual machine scheduling. The method comprises the following steps: creating different lock queues on the host according to the different locks applied for by virtual machines, and placing service processes that wait on the same lock in the same lock queue; for a virtual machine service process, judging on the host whether a virtual CPU thread already exists in the current queue; in response to no virtual CPU thread existing in the current queue, acquiring the lock and executing the service process; and in response to a virtual CPU thread existing in the current queue, creating a new virtual CPU thread from the service process's information, mounting it at the tail of the queue, and suspending the service process. The invention enables the physical machine to perceive whether a virtual machine is stuck waiting on an unacquired spin lock, so that it can coordinate the running of the virtual machines' virtual CPU threads and improve the scheduling and service capability of the physical server.
Description
Technical Field
The present invention relates to the field of servers, and in particular, to a method, a system, an apparatus, and a storage medium for virtual machine scheduling.
Background
Cloud computing has become part of the national "new infrastructure," and virtualization technology is an important technical means and key supporting technology for it. Current virtualization mainly comprises physical server virtualization, storage virtualization, network virtualization, and device virtualization, through which hardware, operating systems, and upper-layer services are virtualized. These techniques effectively guarantee the high reliability of cloud computing and the elastic expansion of computing resources: through elastic management of server resources, server resource utilization is improved and resource waste is reduced.
Depending on the degree of virtualization, virtualization techniques are classified into full virtualization and paravirtualization. The most intuitive manifestation of full virtualization is that any type of operating system can be installed on a virtual server. The operating systems installed in current virtual machines run independently on their respective virtual servers, with no awareness of one another. A spin lock is a lock mechanism used in multiprocessor environments: if the requested resource is unavailable, the CPU busy-waits until it can be acquired. Because of this busy-waiting characteristic, the CPU does not yield its resources to other runnable tasks (processes) on the system during that period. To improve performance, spin locks have evolved through several designs: the raw spinlock, the ticket spinlock, the MCS spinlock, the queued spinlock, and so on. In a virtualized scenario, however, physical CPUs are virtualized into VCPUs, each of which is a thread on the host. Once the guest operating system enters the spinlock (spin lock) state, the corresponding thread waits continuously and wastes CPU resources. This gives rise to Lock Holder Preemption (LHP): the lock-holding thread in the virtual machine is preempted, so the lock-waiting threads busy-wait and cannot acquire the lock until the holder thread is scheduled again and releases it. From the moment the holder is preempted until it runs again, the waiting of the remaining lock-waiting threads is in fact wasted CPU time. Similarly, with Lock Waiter Preemption (LWP), the next lock-waiting thread in the virtual machine is preempted, and until it is scheduled again and acquires the lock, the spinning of the remaining lock-waiting threads likewise wastes CPU time.
Thus, once a spinning virtual CPU (thread) traps to the host, the host can simply suspend that thread and run other tasks. For this case the spin lock has evolved again, into the paravirt spinlock. This lock is designed for the virtualization scenario and optimizes spinlock behavior there: by modifying the guest kernel so that it perceives it is running in a virtualized environment, and by halting the VCPU instead of spinning it, the LHP and LWP problems can be alleviated to some extent.
However, spinlock currently only considers lock usage within the scope of a single operating system, and cannot perceive the problem of service synchronization across different virtual machines.
Disclosure of Invention
In view of this, an object of the embodiments of the present invention is to provide a method, a system, a computer device, and a computer-readable storage medium for virtual machine scheduling, which enable the physical host to perceive whether a virtual machine is stuck waiting on an unacquired spin lock, so as to coordinate the running of the virtual machines' virtual CPU threads and improve the scheduling and service capability of the physical server.
Based on the above objects, an aspect of the embodiments of the present invention provides a method for virtual machine scheduling, comprising the following steps: creating different lock queues on the host according to the different locks applied for by virtual machines, and placing service processes that wait on the same lock in the same lock queue; for a virtual machine service process, judging on the host whether a virtual CPU thread already exists in the current queue; in response to no virtual CPU thread existing in the current queue, acquiring the lock and executing the service process; and in response to a virtual CPU thread existing in the current queue, creating a new virtual CPU thread from the service process's information, mounting it at the tail of the queue, and suspending the service process.
In some embodiments, creating different lock queues on the host according to the different locks applied for by virtual machines comprises: initializing a lock, whose name is used by all virtual machines on the host to access the lock.
In some embodiments, the method further comprises: in response to the lock-holding service process completing its work, releasing the current virtual CPU thread and waking up the next virtual CPU thread in the queue.
In some embodiments, the method further comprises: in response to a virtual machine being shut down, clearing the virtual CPU thread information of all of that virtual machine's locks.
In another aspect of the embodiments of the present invention, a system for virtual machine scheduling is provided, comprising: a creation module configured to create different lock queues on the host according to the different locks applied for by virtual machines, and to place service processes that wait on the same lock in the same lock queue; a judging module configured to judge, on the host and for a virtual machine service process, whether a virtual CPU thread already exists in the current queue; an execution module configured to acquire the lock and execute the service process in response to no virtual CPU thread existing in the current queue; and a mounting module configured to, in response to a virtual CPU thread existing in the current queue, create a new virtual CPU thread from the service process's information, mount it at the tail of the queue, and suspend the service process.
In some embodiments, the creation module is configured to: initialize a lock, whose name is used by all virtual machines on the host to access the lock.
In some embodiments, the system further comprises an unlocking module configured to: in response to the lock-holding service process completing its work, release the current virtual CPU thread and wake up the next virtual CPU thread in the queue.
In some embodiments, the system further comprises a purge module configured to: in response to a virtual machine being shut down, clear the virtual CPU thread information of all of that virtual machine's locks.
In yet another aspect of the embodiments of the present invention, there is also provided a computer device, comprising: at least one processor; and a memory storing computer instructions executable on the processor, which, when executed by the processor, perform the steps of the method described above.
In yet another aspect of the embodiments of the present invention, there is also provided a computer-readable storage medium storing a computer program which, when executed by a processor, implements the method steps as described above.
The invention has the following beneficial technical effects: the physical host can perceive whether a virtual machine is stuck waiting on an unacquired spin lock, so that it can coordinate the running of the virtual machines' virtual CPU threads and improve the scheduling and service capability of the physical server.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed for the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the invention; other embodiments may be obtained from these drawings by a person skilled in the art without inventive effort.
FIG. 1 is a schematic diagram of an embodiment of a method for virtual machine scheduling provided by the present invention;
FIG. 2 is a schematic diagram of a queuing mechanism for a lock provided by the present invention;
FIG. 3 is a schematic diagram of the content of a virtual CPU thread provided by the present invention;
FIG. 4 is a schematic diagram of an embodiment of a system for virtual machine scheduling provided by the present invention;
FIG. 5 is a schematic diagram of the hardware structure of an embodiment of a computer device for virtual machine scheduling according to the present invention;
FIG. 6 is a schematic diagram of an embodiment of a computer storage medium for virtual machine scheduling provided by the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the following embodiments of the present invention will be described in further detail with reference to the accompanying drawings.
It should be noted that, in the embodiments of the present invention, the expressions "first" and "second" are used to distinguish two entities or parameters that share the same name but are not the same. They are used only for convenience of expression and should not be construed as limiting the embodiments of the present invention; subsequent embodiments will not repeat this note.
In a first aspect of the embodiment of the present invention, an embodiment of a method for scheduling virtual machines is provided. Fig. 1 is a schematic diagram of an embodiment of a method for scheduling virtual machines provided by the present invention. As shown in fig. 1, the embodiment of the present invention includes the following steps:
S1. Create different lock queues on the host according to the different locks applied for by virtual machines, and place service processes that wait on the same lock in the same lock queue;
S2. For a virtual machine service process, judge on the host whether a virtual CPU thread already exists in the current queue;
S3. In response to no virtual CPU thread existing in the current queue, acquire the lock and execute the service process; and
S4. In response to a virtual CPU thread existing in the current queue, create a new virtual CPU thread from the service process's information, mount it at the tail of the queue, and suspend the service process.
On the basis of server virtualization, the embodiments of the present invention improve server resource utilization and coordinate server resources so as to provide services externally to the greatest extent. They do so by perceiving the running state (the spin lock) of the operating systems on the virtual server, and thereby coordinating and scheduling the running of the virtual operating systems (virtual machines).
The raw spinlock is the earliest spinlock. Its state is represented by an integer variable whose initial value is 1. When one CPU (say, CPU A) obtains the spinlock, the variable is set to 0; any other CPU that then tries to acquire the spinlock must wait until CPU A releases it and sets the variable back to 1. The raw spinlock is fast, especially in the absence of real contention (which is in fact the common case most of the time), but this approach has a drawback: it is unfair. Once the spinlock is released, the first CPU that successfully performs the spinlock operation becomes the new owner; there is no way to guarantee that the CPU that has waited longest on the spinlock gets it first, so waiting latency cannot be bounded.
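The protocol above can be sketched as follows. This is a minimal single-threaded model of the raw spinlock's state transitions only (real implementations perform the test-and-set atomically in hardware); the class and method names are illustrative, not kernel code:

```python
class RawSpinlock:
    def __init__(self):
        self.val = 1  # integer state: 1 = unlocked, 0 = locked

    def try_acquire(self):
        """Atomic test-and-set on real hardware; modeled sequentially here."""
        if self.val == 1:
            self.val = 0
            return True
        return False  # another CPU holds the lock; the caller would spin

    def release(self):
        self.val = 1

lock = RawSpinlock()
assert lock.try_acquire()       # CPU A takes the lock (1 -> 0)
assert not lock.try_acquire()   # CPU B must spin
lock.release()                  # 0 -> 1
assert lock.try_acquire()       # whichever CPU wins the race goes next: unfair
```

The final pair of calls shows the unfairness: after a release, any CPU that happens to test first wins, regardless of how long others waited.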
To address this unfair, out-of-order contention, another implementation of spinlock uses a queuing scheme: the ticket spinlock. Under this mechanism, each CPU attempting to acquire the lock takes a queuing number (a ticket). When the lock's current number equals a CPU's ticket, that CPU acquires the lock: if, on arrival, the lock's current number already equals the ticket just taken, the queue is empty and the lock is acquired immediately; otherwise the CPU waits. Each time the lock is released, the new value must be propagated to every waiting CPU, because all spinning waiters must read the lock's current owner value in real time. Every time the owner value changes, the CPU cache line holding it is invalidated and refreshed, and the more threads are waiting for the lock, the more often this happens. Yet on each release only one CPU can actually proceed, so the excess cache-line refreshes are unnecessary overhead.
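The ticket discipline just described — take a number on arrival, enter when the owner counter reaches it — can be sketched as follows. The sequential model and names are illustrative; in a real lock the ticket grab is one atomic fetch-and-increment:

```python
class TicketSpinlock:
    def __init__(self):
        self.next_ticket = 0  # next number handed to an arriving CPU
        self.owner = 0        # ticket currently allowed to hold the lock

    def take_ticket(self):
        t = self.next_ticket  # atomic fetch-and-increment in a real lock
        self.next_ticket += 1
        return t

    def can_enter(self, ticket):
        return ticket == self.owner  # every waiter re-reads this shared value

    def release(self):
        self.owner += 1  # one shared write, seen by ALL spinning waiters

lock = TicketSpinlock()
a, b = lock.take_ticket(), lock.take_ticket()
assert lock.can_enter(a) and not lock.can_enter(b)  # strict FIFO order
lock.release()
assert lock.can_enter(b)                            # longest waiter goes next
```

Note that `owner` is a single shared variable: every `release` invalidates the cache line of every waiter, which is exactly the overhead the paragraph above describes.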
The MCS spinlock modifies the ticket spinlock so that the CPUs no longer all wait on the same spinlock variable: each CPU waits on its own per-CPU variable, so in the common case it only needs to poll the local cache line where its variable resides, and it reads memory and refreshes the cache line only when that variable changes. This solves the ticket spinlock's problem. However, the MCS spinlock's structure is not compatible with the kernel's current spinlock and cannot directly replace it by default; code modification is needed to try it.
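The per-CPU waiting described above can be sketched as follows: each waiter spins only on its own node's flag, and a release writes to exactly one waiter's node. The real MCS algorithm links nodes through an atomically swapped tail pointer; this sketch replaces that with a plain queue and illustrative names:

```python
from collections import deque

class McsNode:
    def __init__(self, cpu):
        self.cpu = cpu
        self.must_wait = True  # each CPU spins only on its OWN flag (local cache line)

class McsLock:
    def __init__(self):
        self.queue = deque()  # stands in for the tail-linked node list

    def enqueue(self, node):
        if not self.queue:
            node.must_wait = False  # lock was free; enter immediately
        self.queue.append(node)

    def release(self):
        self.queue.popleft()
        if self.queue:
            self.queue[0].must_wait = False  # write touches ONE waiter's line only

n0, n1 = McsNode(0), McsNode(1)
lock = McsLock()
lock.enqueue(n0); lock.enqueue(n1)
assert not n0.must_wait and n1.must_wait  # n1 polls its own node, nothing shared
lock.release()
assert not n1.must_wait                   # exactly one waiter is disturbed
```

Contrast with the ticket sketch: a release here updates one successor's private flag instead of a counter that every waiter watches.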
The queued spinlock is the default lock mechanism of current Linux systems. It optimizes the MCS lock by compressing its data structure and avoiding unnecessary cache-line refreshes, and thereby replaces the ticket spinlock.
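One of the compressions mentioned above is packing the whole lock state into a single 32-bit word, so the uncontended fast path touches one cache line. The sketch below shows such an encoding; the field layout mirrors the general idea of the Linux qspinlock value (locked byte, pending bit, encoded tail) but is illustrative, not the exact upstream bit assignment:

```python
LOCKED_MASK = 0x000000FF  # bits 0-7: locked byte
PENDING_BIT = 1 << 8      # bit 8: first waiter spins here without a queue node
TAIL_SHIFT  = 16          # bits 16-31: encoded tail (CPU id + node index)

def encode(locked, pending, tail):
    """Pack the three fields into one 32-bit lock word."""
    return (locked & 0xFF) | (PENDING_BIT if pending else 0) | (tail << TAIL_SHIFT)

def decode(word):
    """Unpack a lock word back into (locked, pending, tail)."""
    return word & LOCKED_MASK, bool(word & PENDING_BIT), word >> TAIL_SHIFT

w = encode(1, True, 3)          # held, one pending waiter, queue tail at CPU 3
assert decode(w) == (1, True, 3)
assert encode(0, False, 0) == 0 # a free lock is the all-zero word
```

Because a free lock is the zero word, acquisition in the uncontended case is a single compare-and-swap of 0 to the locked value, with no queue node allocated at all.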
The paravirt spinlock optimizes spinlock in the virtualized scenario: by modifying the guest kernel so that it perceives it is running in a virtualized environment, and by halting the VCPU instead of spinning it, the LHP and LWP problems can be alleviated to some extent.
The implementation is divided into different functions according to whether the current operating system runs on a virtual machine or on a physical machine. The virtual machine side comprises three parts: lock initialization, lock acquisition, and lock release. The host (i.e., the physical machine) side provides the lock's queuing mechanism, scheduling, and lock maintenance. Because services on different virtual machines need to be synchronized with each other, the embodiments of the present invention achieve this synchronization by building a queuing mechanism and a scheduling mechanism for the lock on the host. Communication between the virtual machine and the host may be implemented by means of special instructions, a constructed virtio device, and the like.
Different lock queues are created on the host according to the different locks applied for by virtual machines, and service processes that wait on the same lock are placed in the same lock queue.
In some embodiments, creating different lock queues on the host according to the different locks applied for by virtual machines comprises: initializing a lock, whose name is used by all virtual machines on the host to access the lock. Lock initialization creates a lock on behalf of a virtual machine service process. Its main work is to create the lock's queue on the host; the lock's name is what all virtual machines on the host use to access the lock.
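A minimal sketch of this host-side initialization, with hypothetical names: the lock is just a named FIFO queue in a host registry, and initializing the same name a second time (e.g. from another VM) reuses the existing queue rather than creating a duplicate:

```python
from collections import deque

host_lock_registry = {}  # lock name -> FIFO queue of waiting VCPU entries

def init_lock(name):
    """Called when a VM service process creates a lock; idempotent per name."""
    host_lock_registry.setdefault(name, deque())
    return name

init_lock("db-commit-lock")
init_lock("db-commit-lock")  # a second VM initializing the same name reuses it
assert list(host_lock_registry) == ["db-commit-lock"]
```

The name is the only handle the VMs share, which is what lets otherwise mutually unaware guests contend for the same lock.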
FIG. 2 is a schematic diagram of the queuing mechanism of a lock provided by the present invention. As shown in FIG. 2, while the lock is in use, interrupts are disabled and preemption is forbidden. Each VCPU (virtual CPU thread) can hold only one such lock at a time, so each service process appears at most once in a lock's queue. Each queue represents one lock applied for by the virtual machines, and the VCPUs on a queue represent service processes that are waiting on that same lock.
FIG. 3 is a schematic diagram of the content of a virtual CPU thread provided by the present invention. As shown in FIG. 3, the content of a virtual CPU thread includes the lock state, the virtual machine it belongs to, the service process it belongs to, and a count. For the lock state, 0 indicates still waiting for the lock and 1 indicates currently holding it. The owning virtual machine is the virtual machine currently applying for the lock, and the owning service process identifies the VCPU of the current application. The count records how many times the virtual machine has applied for the lock and is mainly used for lock-usage statistics.
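The per-entry record of FIG. 3 could be represented as follows; only the four pieces of content come from the description, while the field names and types are assumptions for illustration:

```python
from dataclasses import dataclass

@dataclass
class VcpuEntry:
    lock_state: int  # 0 = waiting for the lock, 1 = currently holds it
    vm_id: str       # virtual machine that applied for the lock
    process_id: int  # service process (VCPU) behind the application
    count: int = 0   # how many times this VM has applied; for lock-usage statistics

e = VcpuEntry(lock_state=1, vm_id="vm-1", process_id=42, count=1)
assert e.lock_state == 1 and e.vm_id == "vm-1"
```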
For a virtual machine service process, the host judges whether a virtual CPU thread already exists in the current queue.
In response to no virtual CPU thread existing in the current queue, the lock is acquired and the service process executes. In response to a virtual CPU thread existing in the current queue, a new virtual CPU thread is created from the service process's information, mounted at the tail of the queue, and the service process is suspended. Acquiring the lock means that the virtual machine service process checks the lock's state on the host: if the current queue is empty, i.e., no VCPU is present, the lock is acquired immediately and execution proceeds. If the current queue is not empty, a VCPU carrying the service process's related information is created and mounted at the tail of the queue, and finally the service process is suspended.
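The acquisition rule above — run immediately when the queue is empty, otherwise mount at the tail and suspend — can be sketched as follows. Names are hypothetical, and the sketch keeps the running holder's own entry at the queue head so that a later release can pop it:

```python
from collections import deque

queues = {"lock-a": deque()}  # host-side lock registry; the name is illustrative

def acquire(lock_name, entry):
    """Return True if the process may run now, False if it was enqueued to wait."""
    q = queues[lock_name]
    was_empty = not q
    q.append(entry)   # holder stays at the head; waiters are mounted behind it
    return was_empty  # empty queue: acquire the lock and execute at once

assert acquire("lock-a", "vm1/proc7") is True   # queue was empty: runs immediately
assert acquire("lock-a", "vm2/proc3") is False  # mounted at tail, process suspended
assert list(queues["lock-a"]) == ["vm1/proc7", "vm2/proc3"]
```

A `False` return is where the host would actually suspend the VCPU thread instead of letting it spin.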
In some embodiments, the method further comprises: in response to the lock-holding service process completing its work, releasing the current virtual CPU thread and waking up the next virtual CPU thread in the queue. Unlocking is the process of releasing the lock after the lock-holding service process finishes its related work. It mainly releases the current VCPU and wakes the next VCPU in the queue; waking is done by resuming the running of that service process.
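A sketch of this unlock step under the same queue model: pop the finished holder's entry and resume the process behind the new head. "Waking" is modeled here as recording the entry in a list; a real host would resume the suspended service process:

```python
from collections import deque

q = deque(["holder", "waiter-1", "waiter-2"])  # one lock's queue, holder at the head
woken = []

def release(queue):
    queue.popleft()             # drop the finished holder's VCPU entry
    if queue:
        woken.append(queue[0])  # wake the next suspended service process

release(q)
assert q[0] == "waiter-1" and woken == ["waiter-1"]
```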
Scheduling is responsible for managing the service processes, for example suspending a service process and resuming it.
In some embodiments, the method further comprises: in response to a virtual machine being shut down, clearing the virtual CPU thread information of all of that virtual machine's locks. A maintenance module is responsible for maintaining the whole lock queue, for example clearing the VCPU information of the locks related to a virtual machine when that virtual machine is shut down, destroying locks, and so on.
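This maintenance step can be sketched as a purge over every lock queue on the host. The entry format `(vm_id, pid)` is an assumption, and a full implementation would also wake the next waiter whenever a purged entry was the current holder:

```python
from collections import deque

queues = {
    "lock-a": deque([("vm1", 7), ("vm2", 3)]),  # illustrative queues
    "lock-b": deque([("vm2", 9)]),
}

def purge_vm(vm_id):
    """On VM shutdown, drop that VM's entries from every lock queue."""
    for name, q in queues.items():
        queues[name] = deque(e for e in q if e[0] != vm_id)
        # NOTE: if a removed entry held the lock, the next entry should be woken

purge_vm("vm2")
assert list(queues["lock-a"]) == [("vm1", 7)]
assert len(queues["lock-b"]) == 0
```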
In the embodiments of the present invention, the physical machine can perceive whether a virtual machine is in a state where spinlock has not acquired the lock, schedule the threads (VCPUs) that are in the spinlock state out of the execution state, and schedule the thread that obtains the spinlock so that its execution resumes. Through this lock mechanism the virtual machines can run in coordination with one another: different virtual machines can initialize and compete for the same spinlock, so that the services inside the virtual machines run in a coordinated manner.
It should be noted that, in the above embodiments of the method for virtual machine scheduling, the steps may be interleaved, replaced, added, or removed; such reasonable permutations and combinations of the method also belong to the protection scope of the present invention, which should not be limited to the embodiments described.
Based on the above objects, a second aspect of the embodiments of the present invention proposes a system for virtual machine scheduling. As shown in FIG. 4, the system 200 comprises the following modules: a creation module configured to create different lock queues on the host according to the different locks applied for by virtual machines, and to place service processes that wait on the same lock in the same lock queue; a judging module configured to judge, on the host and for a virtual machine service process, whether a virtual CPU thread already exists in the current queue; an execution module configured to acquire the lock and execute the service process in response to no virtual CPU thread existing in the current queue; and a mounting module configured to, in response to a virtual CPU thread existing in the current queue, create a new virtual CPU thread from the service process's information, mount it at the tail of the queue, and suspend the service process.
In some embodiments, the creation module is configured to: initialize a lock, whose name is used by all virtual machines on the host to access the lock.
In some embodiments, the system further comprises an unlocking module configured to: in response to the lock-holding service process completing its work, release the current virtual CPU thread and wake up the next virtual CPU thread in the queue.
In some embodiments, the system further comprises a purge module configured to: in response to a virtual machine being shut down, clear the virtual CPU thread information of all of that virtual machine's locks.
In view of the above objects, a third aspect of the embodiments of the present invention provides a computer device, comprising: at least one processor; and a memory storing computer instructions executable on the processor, the instructions, when executed by the processor, performing the following steps: S1. Create different lock queues on the host according to the different locks applied for by virtual machines, and place service processes that wait on the same lock in the same lock queue; S2. For a virtual machine service process, judge on the host whether a virtual CPU thread already exists in the current queue; S3. In response to no virtual CPU thread existing in the current queue, acquire the lock and execute the service process; and S4. In response to a virtual CPU thread existing in the current queue, create a new virtual CPU thread from the service process's information, mount it at the tail of the queue, and suspend the service process.
In some embodiments, creating different lock queues on the host according to the different locks applied for by virtual machines comprises: initializing a lock, whose name is used by all virtual machines on the host to access the lock.
In some embodiments, the steps further comprise: in response to the lock-holding service process completing its work, releasing the current virtual CPU thread and waking up the next virtual CPU thread in the queue.
In some embodiments, the steps further comprise: in response to a virtual machine being shut down, clearing the virtual CPU thread information of all of that virtual machine's locks.
FIG. 5 is a schematic diagram of the hardware structure of an embodiment of a computer device for virtual machine scheduling according to the present invention.
Taking the device shown in FIG. 5 as an example, the device includes a processor 301 and a memory 302.
The processor 301 and the memory 302 may be connected by a bus or in another manner; connection by a bus is taken as the example in FIG. 5.
The memory 302, as a non-volatile computer-readable storage medium, is used to store non-volatile software programs, non-volatile computer-executable programs, and modules, such as the program instructions/modules corresponding to the method of virtual machine scheduling in the embodiments of the present application. The processor 301 executes the various functional applications and data processing of the server, i.e., implements the method of virtual machine scheduling, by running the non-volatile software programs, instructions, and modules stored in the memory 302.
Memory 302 may include a storage program area that may store an operating system, at least one application program required for functionality, and a storage data area; the storage data area may store data created according to the use of the virtual machine scheduled method, etc. In addition, memory 302 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some embodiments, memory 302 may optionally include memory located remotely from processor 301, which may be connected to the local module via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
Computer instructions 303 corresponding to one or more virtual machine scheduling methods are stored in memory 302 that, when executed by processor 301, perform the virtual machine scheduling method of any of the method embodiments described above.
Any one embodiment of a computer device that performs the above method for virtual machine scheduling may achieve the same or similar effects as any one of the foregoing method embodiments corresponding thereto.
The present invention also provides a computer readable storage medium storing a computer program which when executed by a processor performs a method of virtual machine scheduling.
FIG. 6 is a schematic diagram of an embodiment of the computer storage medium for virtual machine scheduling according to the present invention. Taking the computer storage medium shown in FIG. 6 as an example, the computer-readable storage medium 401 stores a computer program 402 that, when executed by a processor, performs the above method.
Finally, it should be noted that, as will be understood by those skilled in the art, all or part of the processes in the methods of the above embodiments may be implemented by a computer program instructing the relevant hardware. The program of the method for virtual machine scheduling may be stored in a computer-readable storage medium, and when executed, may include the processes of the embodiments of the methods described above. The storage medium of the program may be a magnetic disk, an optical disk, a read-only memory (ROM), a random-access memory (RAM), or the like. The above computer program embodiments may achieve the same or similar effects as any of the corresponding method embodiments described above.
The foregoing is an exemplary embodiment of the present disclosure, but it should be noted that various changes and modifications could be made herein without departing from the scope of the disclosure as defined by the appended claims. The functions, steps and/or actions of the method claims in accordance with the disclosed embodiments described herein need not be performed in any particular order. Furthermore, although elements of the disclosed embodiments may be described or claimed in the singular, the plural is contemplated unless limitation to the singular is explicitly stated.
It should be understood that as used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly supports the exception. It should also be understood that "and/or" as used herein is meant to include any and all possible combinations of one or more of the associated listed items.
The serial numbers of the foregoing embodiments of the present invention are for description only and do not indicate the relative merits of the embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program for instructing relevant hardware, and the program may be stored in a computer readable storage medium, where the storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
Those of ordinary skill in the art will appreciate that the above discussion of any embodiment is merely exemplary and is not intended to imply that the scope of the disclosure of the embodiments of the invention, including the claims, is limited to these examples. Within the spirit of the embodiments of the invention, features of the above embodiments, or of different embodiments, may also be combined, and many other variations of the different aspects of the embodiments exist as described above, which are not provided in detail for the sake of brevity. Therefore, any omission, modification, equivalent replacement, or improvement made within the spirit and principles of the embodiments should be included in the protection scope of the embodiments of the present invention.
Claims (10)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202210610965.7A CN114816678B (en) | 2022-05-31 | 2022-05-31 | A method, system, device and storage medium for virtual machine scheduling |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN114816678A (en) | 2022-07-29 |
| CN114816678B (en) | 2024-06-11 |
Family
ID=82519573
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202210610965.7A Active CN114816678B (en) | 2022-05-31 | 2022-05-31 | A method, system, device and storage medium for virtual machine scheduling |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN114816678B (en) |
Families Citing this family (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN118349368B (en) * | 2024-05-14 | 2024-11-08 | 哈尔滨工业大学 | Real-time-oriented construction method of interruptible mutual exclusive lock of virtualization platform, electronic equipment and storage medium |
Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN103473135A (en) * | 2013-09-23 | 2013-12-25 | 中国科学技术大学苏州研究院 | Processing method for spinlock LHP (Lock-Holder Preemption) phenomenon under virtual environment |
| KR20180066387A (en) * | 2016-12-08 | 2018-06-19 | 한국전자통신연구원 | Method and system for scalability using paravirtualized opportunistic spinlock algorithm |
| CN113032098A (en) * | 2021-03-25 | 2021-06-25 | 深信服科技股份有限公司 | Virtual machine scheduling method, device, equipment and readable storage medium |
Family Cites Families (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US9489228B2 (en) * | 2012-11-27 | 2016-11-08 | Red Hat Israel, Ltd. | Delivery of events from a virtual machine to a thread executable by multiple host CPUs using memory monitoring instructions |
- 2022-05-31: filed in CN as application CN202210610965.7A (patent CN114816678B, status: Active)
Also Published As
| Publication number | Publication date |
|---|---|
| CN114816678A (en) | 2022-07-29 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN105579961B (en) | Data processing system and method of operation, hardware unit for data processing system | |
| US7698540B2 (en) | Dynamic hardware multithreading and partitioned hardware multithreading | |
| US9003410B2 (en) | Abstracting a multithreaded processor core to a single threaded processor core | |
| US20050081200A1 (en) | Data processing system having multiple processors, a task scheduler for a data processing system having multiple processors and a corresponding method for task scheduling | |
| US11537430B1 (en) | Wait optimizer for recording an order of first entry into a wait mode by a virtual central processing unit | |
| US9256477B2 (en) | Lockless waterfall thread communication | |
| US8443377B2 (en) | Parallel processing system running an OS for single processors and method thereof | |
| CN103473135B (en) | The processing method of spin lock LHP phenomenon under virtualized environment | |
| WO2012016439A1 (en) | Method, device and equipment for service management | |
| CN106062716B (en) | The method, apparatus and single task system of multitask are realized in single task system | |
| CN107562685B (en) | Method for data interaction between multi-core processor cores based on delay compensation | |
| CN110795254A (en) | Method for processing high-concurrency IO based on PHP | |
| CN103744728B (en) | Dynamic PLE (pause loop exit) technology based virtual machine co-scheduling method | |
| WO2021022964A1 (en) | Task processing method, device, and computer-readable storage medium based on multi-core system | |
| CN114816678B (en) | A method, system, device and storage medium for virtual machine scheduling | |
| US10289306B1 (en) | Data storage system with core-affined thread processing of data movement requests | |
| US20200081735A1 (en) | Efficient virtual machine memory monitoring with hyper-threading | |
| Li et al. | Teep: Supporting secure parallel processing in arm trustzone | |
| US20180143828A1 (en) | Efficient scheduling for hyper-threaded cpus using memory monitoring | |
| Torquati et al. | Reducing message latency and CPU utilization in the CAF actor framework | |
| CN110347507A (en) | Multi-level fusion real-time scheduling method based on round-robin | |
| CN101183317A (en) | A Method of Synchronizing Real-time Interrupt and Multiple Process States | |
| Fukuoka et al. | An efficient inter-node communication system with lightweight-thread scheduling | |
| CN114281529A (en) | Distributed virtualization guest operating system scheduling optimization method, system and terminal | |
| Shen et al. | A software framework for efficient preemptive scheduling on gpu |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | | |
| SE01 | Entry into force of request for substantive examination | | |
| GR01 | Patent grant | | |
| CP03 | Change of name, title or address | | |
Address after: 215000 Building 9, No.1 guanpu Road, Guoxiang street, Wuzhong Economic Development Zone, Suzhou City, Jiangsu Province
Patentee after: Suzhou Yuannao Intelligent Technology Co.,Ltd.
Country or region after: China
Address before: 215000 Building 9, No.1 guanpu Road, Guoxiang street, Wuzhong Economic Development Zone, Suzhou City, Jiangsu Province
Patentee before: SUZHOU LANGCHAO INTELLIGENT TECHNOLOGY Co.,Ltd.
Country or region before: China