
CN110442345B - A compiling method, running method and device - Google Patents

A compiling method, running method and device

Info

Publication number
CN110442345B
CN110442345B
Authority
CN
China
Prior art keywords
code block
scheduling
target code
function
information
Prior art date
Legal status
Active
Application number
CN201910543902.2A
Other languages
Chinese (zh)
Other versions
CN110442345A
Inventor
赵俊民
张魁
程帅
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd
Priority to CN201910543902.2A
Publication of CN110442345A
Application granted
Publication of CN110442345B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 8/00 Arrangements for software engineering
    • G06F 8/40 Transformation of program code
    • G06F 8/41 Compilation
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F 9/45533 Hypervisors; Virtual machine monitors
    • G06F 9/45558 Hypervisor-specific management and integration aspects
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806 Task transfer initiation or dispatching
    • G06F 9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F 9/4881 Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G06F 2209/00 Indexing scheme relating to G06F9/00
    • G06F 2209/48 Indexing scheme relating to G06F9/48
    • G06F 2209/484 Precedence

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Devices For Executing Special Programs (AREA)

Abstract



Embodiments of the present application provide a compiling method, a running method, and a device, relating to the field of electronic technology. A code block within the scope of annotation information can be scheduled according to the scheduling mode defined by that annotation information, so that the code block is executed preferentially and efficiently. The specific scheme is as follows: the compiler obtains the annotation information and the action object of the annotation information; the action object is a source code block, and the annotation information indicates a scheduling mode. The compiler compiles the annotation information to generate a first target code block and a second target code block, and compiles the action object to generate a third target code block. The virtual machine runs the first target code block to configure the scheduling mode indicated by the annotation information, schedules and runs the third target code block according to that scheduling mode, and then runs the second target code block to cancel the scheduling mode indicated by the annotation information. The embodiments of the present application are used in a scheduling process.


Description

Compiling method, running method and device
Technical Field
The embodiments of the present application relate to the field of electronic technology, and in particular to a compiling method, a running method, and a device.
Background
In the Java language, service code is generally developed in a multi-threaded manner, and the threads cooperate with one another while the code runs. Inside a Java virtual machine (VM), the prior art provides a scheme for configuring the running priority of threads: the Thread class provides the interface setPriority(int newPriority) for configuring a thread's priority level, with a configurable range of 1-10. The virtual machine then runs the thread at the configured priority level.
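As a concrete illustration of this prior-art interface, the following minimal Java sketch configures a worker thread's priority through Thread.setPriority; the workload itself is a placeholder:

```java
public class PrioritySketch {
    // Runs a placeholder worker thread at the highest configurable priority
    // and reports the priority it was given.
    public static int configuredPriority() throws InterruptedException {
        Thread worker = new Thread(() -> {
            // placeholder business logic
        });
        // Configurable range is Thread.MIN_PRIORITY (1) to Thread.MAX_PRIORITY (10)
        worker.setPriority(Thread.MAX_PRIORITY);
        worker.start();
        worker.join();
        return worker.getPriority();
    }
}
```

Note that this only hints at relative priority within the virtual machine; as described below, it cannot prevent priority inversion between threads.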
However, constrained by the business logic of the code, priority inversion often occurs between threads: a low-priority thread blocks the execution of a high-priority thread. For example, a high-priority user-interface thread may be blocked by a low-priority thread in scenarios involving locks, wait/notify functions, or input/output (I/O) waiting, leading to delayed responses, user-interface stuttering, and a degraded user experience.
Disclosure of Invention
The embodiments of the present application provide a compiling method, a running method, and a device, which can schedule a code block within the scope of annotation information according to the scheduling mode defined by that annotation information, so that the code block is executed preferentially and efficiently.
In order to achieve the above purpose, the embodiment of the present application adopts the following technical solutions:
In one aspect, an embodiment of the present application provides a compiling method, including: the compiler acquires code to be compiled. The code to be compiled comprises annotation information and the action object of the annotation information. The action object is a source code block, and the annotation information indicates a scheduling mode. The compiler compiles the annotation information to generate a first target code block and a second target code block. The first target code block is used to configure the scheduling mode indicated by the annotation information; the second target code block is used to cancel that scheduling mode. The compiler compiles the action object to generate a third target code block.
In this scheme, the compiler compiles the source code block that is the action object, generating a third target code block, and also compiles the annotation information, generating target code blocks that configure and cancel the scheduling mode indicated by the annotation information.
In this way, the virtual machine can run the target code blocks that configure and cancel the scheduling mode indicated by the annotation information, so that the third target code block is scheduled preferentially and efficiently according to that scheduling mode. That is, the function of the source code block serving as the action object can be realized preferentially and efficiently according to the scheduling mode indicated by the annotation information.
In one possible design, the annotation information includes the character @, an annotation name, and scheduling parameters. The scheduling parameters include one or more of: the scheduling priority of the code block, the scheduling policy of the code block, central processing unit (CPU) information, an input/output (I/O) scheduling policy, or an I/O scheduling priority. The scheduling policy of the code block and the I/O scheduling policy include a first-in-first-out (SCHED_FIFO) policy, a round-robin (SCHED_RR) policy, a real-time (SCHED_RT) policy, or another (SCHED_OTHER) policy. The CPU information includes CPU core binding or CPU operating frequency.
In this way, the annotation information can set the scheduling priority of a thread, or of a code block within a thread; it can also set the scheduling policy, CPU information, I/O scheduling policy, I/O scheduling priority, and so on of the thread or code block. Processing performance can thus be controlled in multiple respects, and the annotated code block can be processed preferentially through a variety of scheduling modes, improving its execution efficiency. The scheduling mode indicated by the annotation information therefore covers a wider scheduling range and is more flexible and varied.
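The annotation format just described (the character @, an annotation name, and scheduling parameters) could look like the following hypothetical Java annotation; the name SchedHint and its parameter names are illustrative assumptions, not part of the patent:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical annotation carrying the scheduling parameters listed above.
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.TYPE, ElementType.METHOD})
@interface SchedHint {
    int priority() default 5;              // scheduling priority of the code block
    String policy() default "SCHED_OTHER"; // SCHED_FIFO, SCHED_RR, SCHED_RT, SCHED_OTHER
    int cpuCore() default -1;              // CPU core binding; -1 means unbound
    String ioPolicy() default "";          // I/O scheduling policy
}

class AnnotatedDemo {
    @SchedHint(priority = 10, policy = "SCHED_FIFO")
    static void uiCriticalWork() { /* user-interface-related code block */ }

    // Reads back the policy parameter via reflection.
    static String policyOf(String method) throws NoSuchMethodException {
        return AnnotatedDemo.class.getDeclaredMethod(method)
                .getAnnotation(SchedHint.class).policy();
    }
}
```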
In another possible design, the method may further include: when compiling the annotation information, the compiler also generates a first function entry and a second function entry, where the first function entry is used to call the first target code block and the second function entry is used to call the second target code block. The compiler inserts the first function entry before the third target code block and the second function entry after the third target code block.
In this scheme, the compiler generates the first and second function entries from the annotation information and inserts them before and after the third target code block, respectively. Thus, before running the third target code block, the virtual machine can follow the first function entry to run the code block that configures the scheduling mode indicated by the annotation information; after running the third target code block, it can follow the second function entry to run the code block that cancels that scheduling mode.
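The effect of the two inserted function entries can be sketched as follows; the method names are hypothetical, and a trace list stands in for the actual scheduling configuration:

```java
import java.util.ArrayList;
import java.util.List;

class ScheduledRun {
    static final List<String> trace = new ArrayList<>();

    static void configureScheduling() { trace.add("configure"); } // first target code block
    static void cancelScheduling()    { trace.add("cancel"); }    // second target code block

    // Mirrors the compiled layout: the first function entry runs before the
    // annotated block, the second function entry after it.
    static void runAnnotated(Runnable thirdTargetCodeBlock) {
        configureScheduling();          // called via the first function entry
        try {
            thirdTargetCodeBlock.run(); // the third target code block
        } finally {
            cancelScheduling();         // called via the second function entry
        }
    }
}
```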
In another possible design, if the code following the annotation information is an object, the annotated object of the annotation information is that object and the action object is the source code within the object's scope. If the code following the annotation information is a method, the annotated object is the method and the action object is the source code within the method's scope. If the code following the annotation information is a single statement within an object or method, the annotated object and the action object are both the source code of that statement. If the code following the annotation information is a statement delimited by { }, the annotated object and the action object are both the source code of the statement within the { } scope. If the code following the annotation information is a notify function within a lock object, the annotated object is the notify function, and the action object comprises the source code within the notify function's scope together with the source code within the scope of the matching wait function in the same lock object.
In this way, the compiler can determine the annotated object and the action object (also called the scope) of the annotation information from the specific code content that follows it.
In another possible design, the code following the annotation information is a lock object; the annotated object is the lock object, and the action object is the source code within the lock object's scope.
In this way, the virtual machine can preferentially and efficiently schedule the target code block compiled from the lock object, according to the scheduling mode indicated by the annotation information, so that the function of the lock object is realized preferentially and efficiently.
In another possible design, the source code block serving as the action object is a user-interface-related code block.
The virtual machine can then preferentially and efficiently schedule the target code block compiled from the user-interface-related source code block, according to the scheduling mode indicated by the annotation information, so that user-interface-related functions are realized preferentially and efficiently.
In another possible design, when the compiler recognizes that a parameter in the annotation information indicates a scheduling mode, it compiles the annotation information to generate the first target code block and the second target code block.
That is, the compiler can determine from the definition, format, function, and other characteristics of the parameters in the annotation information whether that information is the annotation information of the embodiments of the present application, and hence whether to compile it.
In another possible design, the source code block to be compiled is written in the Java language.
In another possible design, the scope of the annotation information covers both compile time and runtime.
In another aspect, an embodiment of the present application provides a running method, including: the virtual machine acquires a target file generated by compilation. The target file comprises a first target code block and a second target code block generated by compiling the annotation information, and a third target code block generated by compiling the action object of the annotation information. The first target code block is used to configure the scheduling mode indicated by the annotation information; the second target code block is used to cancel that scheduling mode. The virtual machine runs the first target code block to configure the scheduling mode indicated by the annotation information, schedules and runs the third target code block according to that scheduling mode, and runs the second target code block to cancel the scheduling mode.
In this scheme, the virtual machine runs the target code blocks that configure and cancel the scheduling mode indicated by the annotation information, so that the third target code block, compiled from the source code block of the action object, is scheduled preferentially and efficiently according to that scheduling mode. Accordingly, the function of the source code block of the action object is realized preferentially and efficiently.
In one possible design, the scheduling mode includes one or more of: the scheduling priority of the code block, the scheduling policy of the code block, central processing unit (CPU) information, an input/output (I/O) scheduling policy, or an I/O scheduling priority. The scheduling policy of the code block and the I/O scheduling policy include a first-in-first-out (SCHED_FIFO) policy, a round-robin (SCHED_RR) policy, a real-time (SCHED_RT) policy, or another (SCHED_OTHER) policy. The CPU information includes CPU core binding or CPU operating frequency.
In this way, the annotation information can set the scheduling priority of a thread, or of a code block within a thread, as well as its scheduling policy, CPU information, I/O scheduling policy, I/O scheduling priority, and so on. Processing performance can thus be controlled in multiple respects, and the annotated code block can be processed preferentially through a variety of scheduling modes, improving its execution efficiency. The scheduling mode indicated by the annotation information therefore covers a wider scheduling range and is more flexible and varied.
In another possible design, the target file further includes a first function entry and a second function entry generated by compiling the annotation information. The first function entry is for calling the first target code block, and the second function entry is for calling the second target code block. The first function entry is located before the third target code block, and the second function entry after it. Before the virtual machine runs the first target code block, the method further comprises: the virtual machine links the first function entry with the first target code block; running the first target code block then means running the first target code block linked to the first function entry. Likewise, before running the second target code block, the virtual machine links the second function entry with the second target code block, and running the second target code block means running the second target code block linked to the second function entry.
In this scheme, before running the third target code block, the virtual machine runs the code block that configures the scheduling mode indicated by the annotation information, via the first function entry generated by the compiler; after running the third target code block, it runs the code block that cancels that scheduling mode, via the second function entry generated by the compiler.
In another possible design, before the virtual machine runs the first target code block, the method further includes: the virtual machine generates a first function entry for calling the first target code block, and links the first function entry with the first target code block; running the first target code block then means running the first target code block linked to the first function entry. Before running the second target code block, the virtual machine generates a second function entry for calling the second target code block, and links the second function entry with the second target code block; running the second target code block then means running the second target code block linked to the second function entry.
In this scheme, the virtual machine first runs the first target code block, via the first function entry that the virtual machine itself generated, to configure the scheduling mode indicated by the annotation information; it then schedules and runs the third target code block according to that scheduling mode; finally, it runs the second target code block, via the second function entry it generated, to cancel the scheduling mode indicated by the annotation information.
In another possible design, the action object of the annotation information is a first source code block, and the first source code block and a second source code block belong to the same thread; the scheduling parameters include CPU information.
In this way, the first source code block can be executed efficiently according to the CPU information indicated by the scheduling parameters.
In another possible design, the action object of the annotation information is a first source code block used for requesting a resource, and the first source code block belongs to a first thread; the scheduling priority of the code block indicated by the scheduling parameters is high. The target file further comprises a fourth target code block generated by compiling a second source code block, and a fifth target code block generated by compiling a lock object; the second source code block is also used to request the resource, and the lock object locks the resource. After the virtual machine runs the first target code block to configure the scheduling mode indicated by the annotation information, and before it runs the third target code block, the method further comprises: the virtual machine determines from the scheduling parameters that the scheduling priority of the third target code block is high, and grants the resource locked by the lock object corresponding to the fifth target code block to the third target code block. After the virtual machine runs the second target code block to cancel the scheduling mode indicated by the annotation information, the method further comprises: the virtual machine grants the resource locked by the lock object to the fourth target code block, and runs the fourth target code block.
In this way, the third target code block can acquire the lock preferentially and thus execute preferentially; that is, the function of the first source code block is realized preferentially.
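The priority-aware lock grant described above can be sketched with a queue that always serves the waiter with the highest annotated priority first; the class and method names are assumptions for illustration:

```java
import java.util.PriorityQueue;

class PriorityLockSketch {
    // Each waiter is (blockId, schedulingPriority); higher priority is granted first.
    private final PriorityQueue<int[]> waiters =
            new PriorityQueue<>((a, b) -> Integer.compare(b[1], a[1]));

    void request(int blockId, int priority) {
        waiters.add(new int[]{blockId, priority});
    }

    // Grants the locked resource to the highest-priority waiter, as the VM
    // grants it to the high-priority third target code block before the fourth.
    int grantNext() {
        return waiters.poll()[0];
    }
}
```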
In another possible design, the annotated object of the annotation information is a first source code block that is a notify function within the scope of a lock object; the lock object further includes a wait function, and the action object of the annotation information is the source code block within the notify function's scope together with the source code block within the wait function's scope. The third target code block comprises a fourth target code block generated by compiling the source code block within the wait function's scope, and a fifth target code block generated by compiling the source code block within the notify function's scope. Scheduling and running the third target code block according to the scheduling mode indicated by the annotation information comprises: the virtual machine schedules and runs the fourth target code block, and schedules and runs the fifth target code block, according to that scheduling mode.
In this way, in an asynchronous-waiting scenario, the code block that another code block is waiting on can be executed preferentially and efficiently, so that the waiting code block can in turn be executed efficiently and as soon as possible.
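The wait/notify pairing referred to above follows the standard Java pattern below; under the described scheme, annotating the notify side would cause both this notify block and the matching wait block to run under the indicated scheduling mode (this class is a sketch, not the patent's code):

```java
class HandoffSketch {
    private final Object lock = new Object();
    private boolean ready = false;

    // Notify side: under the scheme above, this would be the annotated
    // first source code block within the lock object.
    void produce() {
        synchronized (lock) {
            ready = true;
            lock.notify(); // wakes the matching wait below
        }
    }

    // Wait side: the matching wait function within the same lock object,
    // also covered by the annotation's action object.
    boolean consume() throws InterruptedException {
        synchronized (lock) {
            while (!ready) {
                lock.wait();
            }
            return ready;
        }
    }
}
```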
In another possible design, the action object of the annotation information is the source code block within the scope of a lock object.
Since a lock object is typically associated with resource contention among multiple code blocks, the code blocks within it are critical. When they execute efficiently, the code blocks competing for the resource can acquire it in time and thus execute as soon as possible. A developer can therefore annotate the lock object so that it is scheduled preferentially and its scheduling completes efficiently.
In another aspect, another embodiment of the present application provides a scheduling method, including: the compiler acquires code to be compiled, which comprises a defined object, a first application programming interface (API) function located before the defined object, and a second API function located after it. The defined object is a source code block. The first API function indicates a scheduling mode, which includes one or more of: the scheduling priority of the code block, the scheduling policy of the code block, central processing unit (CPU) information, an input/output (I/O) scheduling policy, or an I/O scheduling priority. The compiler compiles the first API function to generate a first function entry and a first target code block, where the first function entry is used to call the first target code block and the first target code block is used to configure the scheduling mode indicated by the first API function. The compiler compiles the defined object to generate a third target code block, and compiles the second API function to generate a second function entry and a second target code block, where the second target code block is used to cancel the scheduling mode indicated by the first API function. The virtual machine acquires the target file generated by compilation, which comprises the first target code block, the second target code block, the third target code block, the first function entry located before the third target code block, and the second function entry located after it. The virtual machine links the first function entry with the first target code block and runs the first target code block linked to the first function entry, to configure the scheduling mode indicated by the first API function.
The virtual machine schedules and runs the third target code block according to the scheduling mode indicated by the first API function. The virtual machine then links the second function entry with the second target code block and runs the second target code block linked to the second function entry, to cancel the scheduling mode indicated by the first API function.
In this scheme, the virtual machine can apply controls such as preferential scheduling to the third target code block, generated by compiling the defined object, according to the scheduling mode that the developer configured through the first API function. Code blocks other than the defined object are not subject to preferential scheduling or other controls under the scheduling mode configured by the first API function.
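The API-based variant can be sketched as a pair of explicit calls bracketing the defined object; the names beginScheduling and endScheduling are hypothetical stand-ins for the first and second API functions:

```java
class SchedApiSketch {
    static String currentMode = "default";

    static void beginScheduling(String mode) { currentMode = mode; }      // first API function
    static void endScheduling()              { currentMode = "default"; } // second API function

    // The defined object (placeholder work) runs between the two API calls.
    static String runDefinedObject() {
        beginScheduling("SCHED_FIFO");     // before the defined object
        String observedMode = currentMode; // the defined object itself
        endScheduling();                   // after the defined object
        return observedMode;
    }
}
```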
In another aspect, an embodiment of the present application provides a compiler, which may be configured on an electronic device and may include an acquisition unit and a compiling unit. The acquisition unit is used to acquire code to be compiled, which comprises annotation information and the action object of the annotation information; the action object is a source code block, and the annotation information indicates a scheduling mode. The compiling unit is used to compile the annotation information, generating a first target code block and a second target code block; the first target code block is used to configure the scheduling mode indicated by the annotation information, and the second target code block is used to cancel it. The compiling unit is further used to compile the action object, generating a third target code block.
In one possible design, the annotation information includes the character @, an annotation name, and scheduling parameters. The scheduling parameters include one or more of: the scheduling priority of the code block, the scheduling policy of the code block, central processing unit (CPU) information, an input/output (I/O) scheduling policy, or an I/O scheduling priority. The scheduling policy of the code block and the I/O scheduling policy include a first-in-first-out (SCHED_FIFO) policy, a round-robin (SCHED_RR) policy, a real-time (SCHED_RT) policy, or another (SCHED_OTHER) policy. The CPU information includes CPU core binding or CPU operating frequency.
In another possible design, the compiling unit is further configured to: when compiling the annotation information, also generate a first function entry and a second function entry, where the first function entry is used to call the first target code block and the second function entry is used to call the second target code block; insert the first function entry before the third target code block; and insert the second function entry after the third target code block.
In another possible design, if the code following the annotation information is an object, the annotated object of the annotation information is that object and the action object is the source code within the object's scope. If the code following the annotation information is a method, the annotated object is the method and the action object is the source code within the method's scope. If the code following the annotation information is a single statement within an object or method, the annotated object and the action object are both the source code of that statement. If the code following the annotation information is a statement delimited by { }, the annotated object and the action object are both the source code of the statement within the { } scope. If the code following the annotation information is a notify function within a lock object, the annotated object is the notify function, and the action object comprises the source code within the notify function's scope together with the source code within the scope of the matching wait function in the same lock object.
In another possible design, the code following the annotation information is a lock object; the annotated object is the lock object, and the action object is the source code within the lock object's scope.
In another aspect, an embodiment of the present application provides a virtual machine, where the virtual machine may run on an electronic device. The virtual machine may include: the acquiring unit is used for acquiring a target file generated after compiling, wherein the target file comprises a first target code block and a second target code block generated after compiling according to the labeling information, and a third target code block generated after compiling according to the action object of the labeling information; the first target code block is used for configuring a scheduling mode indicated by the marking information; the second target code block is used for canceling the scheduling mode indicated by the marking information. And the running unit is used for running the first target code block so as to configure the scheduling mode indicated by the marking information. The running unit is further used for scheduling and running the third target code block according to the scheduling mode indicated by the marking information. The running unit is further configured to run the second target code block to cancel the scheduling mode indicated by the tagging information.
In one possible design, the scheduling mode includes one or more of a scheduling priority of the code block, a scheduling policy of the code block, central processing unit CPU information, an input/output I/O scheduling policy, or an I/O scheduling priority. The scheduling strategy and the I/O scheduling strategy of the code block comprise a first-in first-out scheduling SCHED _ FIFO strategy, a round-robin scheduling SCHED _ RR strategy, a real-time scheduling SCHED _ RT strategy or OTHER scheduling SCHED _ OTHER strategies; the CPU information includes CPU core binding or CPU operating frequency.
In another possible design, the target file further includes a first function entry and a second function entry generated after compiling according to the annotation information; the first function entry is used to call the first target code block, and the second function entry is used to call the second target code block; the first function entry is located before the third target code block, and the second function entry is located after the third target code block. The virtual machine further includes a linking unit, configured to link the first function entry with the first target code block before the first target code block is run. The running unit is specifically configured to run the first target code block linked to the first function entry. The linking unit is further configured to link the second function entry with the second target code block before the second target code block is run. The running unit is specifically configured to run the second target code block linked to the second function entry.
In another possible design, the virtual machine further includes: a generating unit, configured to generate a first function entry before the first target code block is run, where the first function entry is used to call the first target code block; and a linking unit, configured to link the first function entry with the first target code block. The running unit is specifically configured to run the first target code block linked to the first function entry. The generating unit is further configured to generate a second function entry before the second target code block is run, where the second function entry is used to call the second target code block. The linking unit is further configured to link the second function entry with the second target code block. The running unit is specifically configured to run the second target code block linked to the second function entry.
In another possible design, the role object of the annotation information is a first source code block, and the first source code block and a second source code block belong to the same thread; the scheduling parameters include CPU information.
In another possible design, the role object of the annotation information is a first source code block, the first source code block is used to request a resource, and the first source code block belongs to a first thread; the scheduling priority of the code block indicated by the scheduling parameter is high. The target file further includes a fourth target code block generated after a second source code block is compiled and a fifth target code block generated after a lock object is compiled; the second source code block is used to request the resource, and the lock object is used to lock the resource. The running unit is further configured to: after the first target code block is run to configure the scheduling mode indicated by the annotation information, and before the third target code block is run, determine according to the scheduling parameter that the scheduling priority of the third target code block is high; authorize the resource locked by the lock object corresponding to the fifth target code block to be used by the third target code block; after the second target code block is run to cancel the scheduling mode indicated by the annotation information, authorize the resource locked by the lock object to be used by the fourth target code block; and run the fourth target code block.
In another possible design, the annotation object of the annotation information is a first source code block, the first source code block is a notify function, the first source code block is a function within the scope of a lock object, the lock object further includes a wait function, and the role object of the annotation information is the source code block within the scope of the notify function and the source code block within the scope of the wait function. The third target code block includes a fourth target code block generated after the source code block within the scope of the wait function is compiled and a fifth target code block generated after the source code block within the scope of the notify function is compiled. The running unit is specifically configured to: schedule and run the fourth target code block according to the scheduling mode indicated by the annotation information; and schedule and run the fifth target code block according to the scheduling mode indicated by the annotation information.
In another possible design, the role object of the annotation information is a source code block within the scope of the lock object.
In another aspect, an embodiment of the present application provides an electronic device including a processor and a memory. The memory stores code that, when executed by the processor, implements the running method or the scheduling method in any one of the above aspects or any one of the possible designs.
In another aspect, an embodiment of the present application provides a system, which includes a compiler and a virtual machine. The compiler is used to execute the compiling method or the scheduling method in any one of the above aspects or any one of the possible designs. The virtual machine is used for executing the running method or the scheduling method in any one of the aspects or any one of the possible designs.
In another aspect, an embodiment of the present application provides a computer storage medium including computer instructions that, when run on an electronic device, cause the electronic device to perform the compiling method or the scheduling method in any one of the above aspects or any one of the possible designs.
In yet another aspect, the present application provides a computer program product, which when run on a computer, causes the computer to execute the compiling method or the scheduling method in any one of the above aspects or any one of the possible designs.
Drawings
Fig. 1 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application;
fig. 2 is a schematic diagram of a software architecture of an electronic device according to an embodiment of the present application;
fig. 3 is a flowchart of a scheduling method according to an embodiment of the present application;
fig. 4 is a schematic diagram of a scheduling process according to an embodiment of the present application;
fig. 5 is a flowchart of another scheduling method according to an embodiment of the present application;
fig. 6 is a schematic diagram of another scheduling process according to an embodiment of the present application;
fig. 7 is a flowchart of another scheduling method according to an embodiment of the present application;
fig. 8 is a schematic diagram of another scheduling process according to an embodiment of the present application;
FIG. 9 is a schematic diagram of a set of user interfaces according to an embodiment of the present application;
fig. 10 is a schematic diagram of another scheduling method according to an embodiment of the present application;
fig. 11 is a schematic structural diagram of a compiler according to an embodiment of the present application;
fig. 12 is a schematic structural diagram of a virtual machine according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application. In the description of the embodiments of the present application, unless otherwise specified, "/" means "or"; for example, A/B may mean A or B. "And/or" herein merely describes an association relationship between associated objects, and indicates that three relationships may exist; for example, A and/or B may mean: only A exists, both A and B exist, or only B exists. In addition, in the description of the embodiments of the present application, "a plurality of" means two or more.
An existing scheme for configuring thread running priority in a Java virtual machine can configure the priority of a thread within a level range of 1 to 10. When executed, the code blocks within a thread involve operations that acquire network resources (e.g., resources used for upload or download), input/output (I/O) resources (e.g., resources used for write/read), or other resources. However, the thread priority configured in the Java virtual machine is not carried into the process in which a code block in the thread competes with the code blocks of other threads and preempts resources. The resources may include network resources (e.g., uplink transmission or downlink reception resources), I/O resources (e.g., read/write resources of a memory card, a storage device, an IC card, etc.), Bluetooth communication resources (e.g., point-to-point Bluetooth direct communication resources), access resources of a common variable, or other resources.
For example, thread 1 needs to access picture 1 in a file of the operating system, and multiple other threads also need to access picture 1. Thread 1 has a higher priority than the other threads. Lock object 1 is used to lock the I/O resource required to access the picture. Code block a in thread 1 requests and contends for the I/O resource, and code blocks b, c, etc. in the other threads also request and contend for the I/O resource. Although thread 1 has a higher priority than the other threads, code block a in thread 1 is in a contention relationship with code blocks b, c, etc. in the other threads; when the I/O resource is acquired, code block a has no higher priority than code blocks b and c.
When a higher-priority thread and a lower-priority thread preempt network resources, I/O resources, or other resources, the lower-priority thread may preempt the resources first because thread priority is not carried into the resource preemption process, while the higher-priority thread fails to obtain them. Thus, a low-priority thread blocks the running of a high-priority thread, causing priority inversion.
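The contention described above can be sketched in plain Java (the class, thread, and lock names are illustrative, not taken from the text): even when a thread's Java-level priority is set to the maximum, entry into a synchronized block is not ordered by that priority.

```java
public class PriorityContention {
    private static final Object ioLock = new Object(); // stands in for "lock object 1"

    // Both threads enter the same synchronized block; the returned string
    // records the order in which they actually acquired the lock.
    static String contend() {
        StringBuilder order = new StringBuilder();
        Thread high = new Thread(() -> { synchronized (ioLock) { order.append("high"); } });
        Thread low  = new Thread(() -> { synchronized (ioLock) { order.append("low");  } });
        high.setPriority(Thread.MAX_PRIORITY); // 10: highest Java-level priority
        low.setPriority(Thread.MIN_PRIORITY);  // 1: lowest Java-level priority
        try {
            low.start();  low.join();   // nothing stops the low-priority thread
            high.start(); high.join();  // from taking the lock first
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return order.toString();
    }

    public static void main(String[] args) {
        System.out.println(contend()); // prints "lowhigh"
    }
}
```

The sketch forces the ordering for determinism, but the point holds in general: synchronized lock acquisition is governed by the monitor, not by Thread.setPriority, which is why the priority cannot be "brought into" the contention.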
Annotation is a language feature provided by the Java language for setting metadata for program elements. An embodiment of the present application provides a scheduling method based on the annotation mechanism, which can schedule code blocks within threads at a fine granularity according to the scheduling mode indicated by annotation information. In particular, the scheduling method provided by the embodiment of the present application can use the scheduling mode indicated by the annotation information to schedule and execute code blocks within threads preferentially and efficiently.
The annotation information can define scheduling modes, such as the scheduling priority, for finer-granularity code blocks in a high-priority thread. The code blocks may be code blocks that preempt network resources, I/O resources, or other resources. In this way, the resource-preempting code blocks in the high-priority thread are scheduled preferentially, so the high-priority thread as a whole can be scheduled preferentially, and the problem of thread priority inversion is avoided.
In the embodiments of the present application, the annotation information may also define the priority of a thread. Moreover, the level range of thread priority definable by the annotation information is the level range of priority supported by the operating system, and is not limited to the level range of priority configurable inside the virtual machine. Thus, the annotation information can define a larger level range of priority. For example, if the level range of priority configurable inside the virtual machine in the prior art is 1 to 10, the level range of priority definable by the annotation information is greater than 1 to 10, for example, 1 to 299. Therefore, the priority of a thread can be set higher, and priority control is more flexible and effective.
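As an arithmetic illustration of the wider range (1 to 299 is the example range from the text; the class name, the constants, and the helper below are hypothetical, chosen only for illustration), a simple check can distinguish priorities expressible via Thread.setPriority from those that would need annotation-level definition:

```java
public class PriorityRange {
    static final int VM_MIN = 1,  VM_MAX = 10;   // range configurable inside the virtual machine
    static final int OS_MIN = 1,  OS_MAX = 299;  // example OS-supported range from the text

    // Returns true if priority p is valid at the OS level but cannot be
    // expressed through the virtual machine's own 1..10 range.
    static boolean needsAnnotation(int p) {
        if (p < OS_MIN || p > OS_MAX) {
            throw new IllegalArgumentException("outside the OS-supported range");
        }
        return p > VM_MAX;
    }

    public static void main(String[] args) {
        System.out.println(needsAnnotation(150)); // true: beyond the VM's 1..10 range
        System.out.println(needsAnnotation(5));   // false: expressible inside the VM
    }
}
```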
Moreover, the granularity of a thread is large, as a thread may include multiple code blocks. If the whole thread is scheduled preferentially by setting the priority of the whole thread, the key code blocks and the non-key code blocks in the thread have the same priority, and the scheduling and execution of the whole thread take longer. By preferentially scheduling a certain finer-granularity key code block in the thread according to the scheduling mode indicated by the annotation information, the scheduling and execution of the key code block can be completed quickly, so that the running of the key code block is guaranteed preferentially and the regulation effect is better.
Specifically, in the embodiments of the present application, a developer may label an annotation object with the annotation information. The annotation object is the content that follows and is modified by the annotation information. The annotation information is used to indicate a scheduling mode. The annotation information and the annotation object can be used to determine the role object of the annotation information. The role object is a source code block written in the Java language. The scope of the role object may be greater than or equal to the scope of the annotation object. The source code blocks in the role object can be scheduled according to the scheduling mode defined by the developer through the annotation information, so that the scheduling priority, execution efficiency, etc. of the source code blocks in the role object can be improved, the source code blocks in the role object are prevented from being blocked by other threads or code blocks, and the problem of priority inversion is avoided.
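A minimal sketch of such a labeling mechanism as a runtime-retained Java annotation (the @Schedule name and its priority/policy elements are hypothetical, chosen for illustration; the text does not fix a concrete annotation API). A compiler or virtual machine could read the metadata back reflectively:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

public class ScheduleDemo {
    // Hypothetical annotation carrying the scheduling mode (the "annotation information").
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface Schedule {
        int priority() default 0;          // scheduling priority of the code block
        String policy() default "OTHER";   // e.g. "FIFO", "RR", "RT", "OTHER"
    }

    // The annotated method is the annotation object; the source code within
    // its scope is the role object, scheduled per the metadata above it.
    @Schedule(priority = 120, policy = "RR")
    static void criticalBlock() { /* code block that preempts I/O resources */ }

    // Reads the metadata back, as a compiler or virtual machine could.
    static String readMetadata() {
        try {
            Schedule s = ScheduleDemo.class.getDeclaredMethod("criticalBlock")
                                           .getAnnotation(Schedule.class);
            return s.policy() + ":" + s.priority();
        } catch (NoSuchMethodException e) {
            return "none";
        }
    }

    public static void main(String[] args) {
        System.out.println(readMetadata()); // prints "RR:120"
    }
}
```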
In the embodiments of the present application, a code block may include an object or a statement. The object may be a thread, a method (also called a function), or another piece of code defined by the user. The statement may be a line of code or a collection of lines of code. The statement may or may not be inside an object. For example, the statement may be one or more lines of code defined between "{" and "}" outside the object.
The source code block serving as the annotation object may be a thread, a method, a statement, or the like. The source code block serving as the role object may also be a thread, a method, a statement, or the like.
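The statement granularity can be illustrated with a bare block inside a method (class and variable names are illustrative):

```java
public class CodeBlockGranularity {
    static int demo() {
        int total = 0;
        // A statement block: one or more lines of code grouped between
        // "{" and "}"; in the terms above, this block could serve as a
        // role object and be scheduled independently of the rest of demo().
        {
            total += 1;
            total += 2;
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints 3
    }
}
```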
If the code following the annotation information is an object, the annotation object and the role object are both the source code within the scope of the object. For example, if the code following the annotation information is a lock object (i.e., an object defined by synchronized), the annotation object and the role object are both the source code within the scope of the lock object. If the code following the annotation information is a method, the annotation object and the role object are the source code within the scope of the method. If the code following the annotation information is a line of statement in an object or method, the annotation object and the role object are the source code of that line of statement. If the code following the annotation information is a statement delimited by { }, the annotation object and the role object are the source code of the statement within the scope of { }. If the code following the annotation information is a notify function in a lock object, the annotation object is the notify function, and the role object is the source code within the scope of the notify function and the source code within the scope of the wait function that matches the notify function in the lock object.
Specifically, if the code following the annotation information is an object, the annotation object of the annotation information is the object, and the role object of the annotation information is the source code within the scope of the object; the role object of the annotation information is the scope of action of the annotation information. For example, if the code following the annotation information is a lock object (i.e., an object defined by synchronized), the annotation object is the lock object, and the role object is the source code within the scope of the lock object. If the code following the annotation information is a method, the annotation object is the method, and the role object is the source code within the scope of the method. If the code following the annotation information is a line of statement in an object or method, the annotation object and the role object are both the source code of that line of statement. If the code following the annotation information is a statement delimited by { }, the annotation object and the role object are both the source code of the statement included in { }. If the code following the annotation information is a notify function in a lock object, the annotation object is the notify function, and the role object includes the source code within the scope of the notify function and the source code within the scope of the wait function that matches the notify function in the lock object.
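A minimal sketch of the notify/wait case (class and variable names are illustrative): the notifying block and the matching waiting block cooperate through the same lock object, which is why the role object of an annotation placed before the notify function would also cover the matching wait block.

```java
public class NotifyWaitDemo {
    private final Object lock = new Object(); // the lock object
    private boolean ready = false;

    // Source code within the wait-function range: blocks until notified.
    String await() {
        synchronized (lock) {
            try {
                while (!ready) {
                    lock.wait(); // releases the lock until notify() is called
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return "interrupted";
            }
            return "resumed";
        }
    }

    // Source code within the notify-function range; per the text, an
    // annotation placed here would govern this block and the wait block above.
    void signal() {
        synchronized (lock) {
            ready = true;
            lock.notify(); // wakes the waiting thread
        }
    }

    public static void main(String[] args) {
        NotifyWaitDemo demo = new NotifyWaitDemo();
        Thread waiter = new Thread(() -> System.out.println(demo.await()));
        waiter.start();
        demo.signal();
        try { waiter.join(); } catch (InterruptedException ignored) {}
    }
}
```

Because the waiter only resumes after the notifier runs, scheduling the notify block alone would be incomplete: the waiting block's progress depends on it, which motivates covering both with one annotation.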
That is to say, the method provided by the embodiments of the present application can schedule a thread according to the scheduling mode indicated by the annotation information, and can also perform fine-grained scheduling on code blocks such as objects, methods (functions), or statements in the thread according to that scheduling mode. Since a thread may include many code blocks, it may be only some code block or blocks in the thread that affect the thread's running efficiency. Configuring the priority of the whole thread cannot control and schedule the different parts within the thread, and thus cannot effectively solve the problem of thread running efficiency. By performing fine-grained scheduling on the objects, methods, or statements in a thread according to the scheduling mode indicated by the annotation information, each part of the thread can be effectively controlled and scheduled, improving the flexibility and maneuverability of the processing.
In some embodiments, the role object may be a key code block. For example, the key code block may be a code block related to the user interface, such as a code block related to the input interface for the user name and password when the user logs in. Scheduling the code blocks related to the user interface in the scheduling mode defined by the annotation information allows them to be executed preferentially and efficiently, so that user interaction is smoother, user-interface stalling is avoided, and user experience is improved.
As another example, the key code block may be a code block that realizes important performance. For example, when a video is played, the code block related to playing the video picture realizes important performance compared with the code blocks that display bullet screens, advertisements, and the like. Scheduling the code blocks that realize important performance in the scheduling mode defined by the annotation information allows them to be processed quickly and preferentially, so that the important performance is realized preferentially.
As another example, the key code block may be a code block that affects overall running efficiency. For example, in a thread, the running of multiple code blocks may depend on the running result of a certain code block, so the running efficiency of that code block affects the running efficiency of the whole thread or process. For another example, in a thread, the computational complexity of a certain code block may be large, so its running efficiency affects the overall running efficiency of the thread. Scheduling the code blocks that affect overall running efficiency in the scheduling mode defined by the annotation information allows them to be executed preferentially and quickly, so that overall running efficiency is improved.
Generally, the lifecycle of running code may include several phases: compile time, link time, load time, and run time. The compile-time phase may be completed by a compiler, and the link, load, and run-time phases may be completed by a virtual machine.
The scheduling method provided by the embodiment of the application may include a compiling method executed by a compiler and an operating method executed by a virtual machine. The compiler may be configured on a server or other electronic device. The virtual machine may run on a terminal or other electronic device. The compiler and the virtual machine may be on the same electronic device or on different electronic devices.
For example, the electronic device running the virtual machine may be a mobile phone, a tablet computer, a wearable device, an in-vehicle device, an Augmented Reality (AR)/Virtual Reality (VR) device, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a Personal Digital Assistant (PDA), a smart home device, and the like. The device type of the electronic device is not particularly limited in the embodiments of the present application.
Fig. 1 shows a schematic structural diagram of an electronic device 100. A compiler and/or a virtual machine may be configured on the electronic device 100. The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a Universal Serial Bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a key 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a Subscriber Identification Module (SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
Processor 110 may include one or more processing units, such as: the processor 110 may include an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a memory, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), etc. The different processing units may be separate devices or may be integrated into one or more processors.
The controller may be, among other things, a neural center and a command center of the electronic device 100. The controller can generate an operation control signal according to the instruction operation code and the timing signal to complete the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instructions or data again, it can call them directly from the memory. This avoids repeated accesses and reduces the waiting time of the processor 110, thereby improving system efficiency.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
The charging management module 140 is configured to receive charging input from a charger. The charger may be a wireless charger or a wired charger.
The power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 and provides power to the processor 110, the internal memory 121, the external memory, the display 194, the camera 193, the wireless communication module 160, and the like.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution including 2G/3G/4G/5G wireless communication applied to the electronic device 100. The mobile communication module 150 may include at least one filter, a switch, a power amplifier, a Low Noise Amplifier (LNA), and the like. The mobile communication module 150 may receive the electromagnetic wave from the antenna 1, filter, amplify, etc. the received electromagnetic wave, and transmit the electromagnetic wave to the modem processor for demodulation. The mobile communication module 150 may also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic wave through the antenna 1 to radiate the electromagnetic wave.
The wireless communication module 160 may provide a solution for wireless communication applied to the electronic device 100, including Wireless Local Area Networks (WLANs) (e.g., wireless fidelity (Wi-Fi) networks), bluetooth (bluetooth, BT), Global Navigation Satellite System (GNSS), Frequency Modulation (FM), Near Field Communication (NFC), Infrared (IR), and the like.
In some embodiments, the antenna 1 of the electronic device 100 is coupled to the mobile communication module 150, and the antenna 2 is coupled to the wireless communication module 160, so that the electronic device 100 can communicate with networks and other devices through wireless communication technologies. The wireless communication technologies may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies, among others. The GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite based augmentation systems (SBAS).
The electronic device 100 implements display functions via the GPU, the display screen 194, and the application processor. The GPU is a microprocessor for image processing, and is connected to the display screen 194 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
The display screen 194 is used to display images, video, and the like. The display screen 194 includes a display panel. The display panel may adopt a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, where N is a positive integer greater than 1.
The electronic device 100 may implement a shooting function through the ISP, the camera 193, the video codec, the GPU, the display 194, the application processor, and the like.
The camera 193 is used to capture still images or video. An object generates an optical image through the lens, and the image is projected onto the photosensitive element. The photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then passed to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing, and the DSP converts the digital image signal into an image signal in a standard format such as RGB or YUV. In some embodiments, the electronic device 100 may include 1 or N cameras 193, where N is a positive integer greater than 1.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to extend the memory capability of the electronic device 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. For example, files such as music, video, etc. are saved in an external memory card.
The internal memory 121 may be used to store computer-executable program code, which includes instructions. The processor 110 executes various functional applications of the electronic device 100 and data processing by executing instructions stored in the internal memory 121. The internal memory 121 may include a program storage area and a data storage area. The storage program area may store an operating system, an application program (such as a sound playing function, an image playing function, etc.) required by at least one function, and the like. The storage data area may store data (such as audio data, phone book, etc.) created during use of the electronic device 100, and the like. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (UFS), and the like.
In embodiments of the present application, the processor 110 may execute instructions stored in the internal memory 121 to implement the functionality of a compiler or a virtual machine.
The electronic device 100 may implement audio functions via the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headphone interface 170D, and the application processor. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or some functional modules of the audio module 170 may be disposed in the processor 110.
The speaker 170A, also called a "horn", is used to convert the audio electrical signal into an acoustic signal. The electronic apparatus 100 can listen to music through the speaker 170A or listen to a handsfree call.
The receiver 170B, also called "earpiece", is used to convert the electrical audio signal into an acoustic signal. When the electronic apparatus 100 receives a call or voice information, it can receive voice by placing the receiver 170B close to the ear of the person.
The microphone 170C, also referred to as a "mike" or "mic", is used to convert a sound signal into an electrical signal. When making a call or sending voice information, the user can input a sound signal by speaking with his or her mouth near the microphone 170C. The electronic device 100 may be provided with at least one microphone 170C. In other embodiments, the electronic device 100 may be provided with two microphones 170C to achieve a noise reduction function in addition to collecting sound signals. In other embodiments, the electronic device 100 may further include three, four, or more microphones 170C to collect sound signals, reduce noise, identify sound sources, perform directional recording, and so on.
The headphone interface 170D is used to connect a wired headphone. The headphone interface 170D may be the USB interface 130, or may be a 3.5mm open mobile terminal platform (OMTP) standard interface or a Cellular Telecommunications Industry Association of the USA (CTIA) standard interface.
The touch sensor 180K is also referred to as a "touch panel". The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, also called a "touch-controlled screen". The touch sensor 180K is used to detect a touch operation applied to or near it. The touch sensor can pass the detected touch operation to the application processor to determine the touch event type. Visual output associated with the touch operation may be provided through the display screen 194. In other embodiments, the touch sensor 180K may be disposed on a surface of the electronic device 100 at a position different from that of the display screen 194.
The keys 190 include a power key, a volume key, and the like. The keys 190 may be mechanical keys or touch keys. The electronic device 100 may receive a key input and generate a key signal input related to user settings and function control of the electronic device 100.
The motor 191 may generate a vibration cue. The motor 191 may be used for incoming-call vibration cues as well as for touch vibration feedback. For example, touch operations applied to different applications (e.g., photographing, audio playing) may correspond to different vibration feedback effects. The motor 191 may also produce different vibration feedback effects for touch operations applied to different areas of the display screen 194. Different application scenarios (such as time reminders, receiving messages, alarm clocks, and games) may also correspond to different vibration feedback effects. The touch vibration feedback effect may also support customization.
Indicator 192 may be an indicator light that may be used to indicate a state of charge, a change in charge, or a message, missed call, notification, etc.
It is to be understood that the illustrated structure of the embodiment of the present application does not specifically limit the electronic device 100. In other embodiments of the present application, electronic device 100 may include more or fewer components than shown, or some components may be combined, some components may be split, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
For example, when the electronic device 100 is a mobile phone running a virtual machine, the electronic device 100 may include the components shown in fig. 1. When the electronic device 100 is a server configured with a compiler, the electronic device 100 may include a memory and a processor, and may further include a bus for connecting the memory, the processor, and other components; and may not include components such as a display screen, audio module, etc. The bus may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc.
In embodiments of the present application, the processor 110 may execute instructions stored in the internal memory 121 to implement the functionality of a compiler or a virtual machine. After the compiler identifies annotation information used for indicating a scheduling mode, it compiles the annotation information to generate code blocks for configuring and canceling the scheduling mode indicated by the annotation information. The compiler can also compile the action object of the annotation information to generate a target code block. When the virtual machine runs, it first configures the scheduling mode indicated by the annotation information, then schedules and runs the target code block according to that scheduling mode, and finally cancels the scheduling mode indicated by the annotation information. Scheduling control of code blocks such as threads, objects, methods, or statements can thereby be realized.
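The configure, schedule, and cancel sequence described above can be sketched in Java as follows. This is a simplified illustration only: the class and method names are hypothetical, and the Java thread priority stands in for whatever scheduling mode the annotation information indicates.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical sketch of the virtual machine's behavior around a marked code
// block: configure the annotated scheduling mode, run the compiled target
// code block, then cancel (restore) the previous mode.
class SchedulingSketch {
    private final Deque<Integer> savedPriorities = new ArrayDeque<>();

    // Corresponds to the first target code block: configure the scheduling
    // mode (modeled here as the Java thread priority).
    void configure(int priority) {
        savedPriorities.push(Thread.currentThread().getPriority());
        Thread.currentThread().setPriority(priority);
    }

    // Corresponds to the second target code block: cancel the scheduling mode
    // by restoring the priority saved before configuration.
    void cancel() {
        Thread.currentThread().setPriority(savedPriorities.pop());
    }

    // Run the third target code block under the annotated scheduling mode.
    void runScheduled(int priority, Runnable targetCodeBlock) {
        configure(priority);
        try {
            targetCodeBlock.run();
        } finally {
            cancel(); // other code blocks are not scheduled in this mode
        }
    }
}
```

Because cancellation happens in a finally block, the scheduling mode is withdrawn even if the target code block terminates abnormally.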
The software system of the electronic device 100 may employ a layered architecture, an event-driven architecture, a micro-core architecture, a micro-service architecture, or a cloud architecture. The embodiment of the present application takes a software system with a layered architecture as an example, and exemplifies a software structure of the electronic device 100.
Fig. 2 is a block diagram of a software structure of the electronic device 100 according to the embodiment of the present application. The layered architecture divides the software into several layers, each layer having a clear role and division of labor. The layers communicate with each other through a software interface. In some embodiments, the software system is divided into four layers, an application layer, an application framework layer, a runtime and system library, and a kernel layer from top to bottom. The application layer may include a series of application packages.
As shown in fig. 2, the application package may include applications such as camera, gallery, calendar, phone call, map, navigation, WLAN, bluetooth, music, video, short message, etc.
The application framework layer provides an Application Programming Interface (API) and a programming framework for the application program of the application layer. The application framework layer includes a number of predefined functions.
As shown in FIG. 2, the application framework layers may include a window manager, content provider, view system, phone manager, resource manager, notification manager, and the like.
The window manager is used for managing window programs. The window manager can obtain the size of the display screen, determine whether there is a status bar, lock the screen, capture the screen, and the like.
The content provider is used to store and retrieve data and make it accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phone books, etc.
The view system includes visual controls such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, the display interface including the short message notification icon may include a view for displaying text and a view for displaying pictures.
The phone manager is used to provide communication functions of the electronic device 100. Such as management of call status (including on, off, etc.).
The resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and the like.
The notification manager enables an application to display notification information in the status bar. It can be used to convey notification-type messages that disappear automatically after a short stay without requiring user interaction, for example notifications of download completion or message alerts. The notification manager may also present notifications in the form of a chart or scroll-bar text in the status bar at the top of the system, such as notifications of applications running in the background, or notifications that appear on the screen in the form of a dialog window. Examples include prompting text information in the status bar, sounding a prompt tone, vibrating the electronic device, and flashing an indicator light.
The runtime includes a core library and a virtual machine. The runtime is responsible for scheduling and management of the system. During operation, the virtual machine may configure the scheduling mode indicated by the annotation information, schedule the target code block generated by compiling a source code block such as an object, a method, or a statement according to that scheduling mode, and then cancel the scheduling mode indicated by the annotation information, thereby implementing scheduling control on the code block.
The core library comprises two parts: one part is the functions that the Java language needs to call, and the other part is the core library of the system.
The application layer and the application framework layer run in the virtual machine. The virtual machine executes the Java files of the application layer and the application framework layer as binary files. The virtual machine is used to perform functions such as object lifecycle management, stack management, thread management, security and exception management, and garbage collection.
The system library may include a plurality of functional modules, for example: a surface manager, media libraries, three-dimensional graphics processing libraries (e.g., OpenGL ES), and 2D graphics engines (e.g., SGL).
The surface manager is used to manage the display subsystem and provide fusion of 2D and 3D layers for multiple applications.
The media library supports playback and recording in a variety of commonly used audio and video formats, as well as still image files and the like. The media library may support a variety of audio and video encoding formats, such as MPEG4, H.264, MP3, AAC, AMR, JPG, and PNG.
The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is a layer between hardware and software. The kernel layer at least comprises a display driver, a camera driver, an audio driver, and a sensor driver.
The scheduling method provided by the embodiments of the present application is explained below from the perspective of the compiler and the virtual machine. It will be understood that the functions and operations performed by the compiler are in fact performed by the electronic device on which the compiler is configured, and that the functions and operations performed by the virtual machine are performed by the electronic device running the virtual machine.
As shown in fig. 3, an embodiment of the present application provides a scheduling method, which may include:
301. The compiler acquires code to be compiled. The code to be compiled comprises annotation information and an action object of the annotation information; the action object is a source code block, and the annotation information is used for indicating a scheduling mode.
The source code block within the scope of the action object of the annotation information is an uncompiled code segment written in the Java language. The source code blocks within the scope of the action object are code blocks that the developer has specified as requiring scheduling control. For example, the source code blocks within the scope of the action object may be the key code blocks described above.
The annotation information may specifically include the character "@" and scheduling parameters, where the scheduling parameters are used to indicate the scheduling mode. In addition, the annotation information may also include an annotation name. For example, the format of the annotation information may be: @ + annotation name + (at least one scheduling parameter). The code following the annotation information is the annotated object.
Illustratively, see the following pseudo code:
@task_pri("priority=15, cpu=2, io=HIGH")
void F() {
    ... // source code acted on by the annotation information
}
Here, @task_pri("priority=15, cpu=2, io=HIGH") denotes the annotation information. The "task_pri" following the character "@" is the annotation name, and ("priority=15, cpu=2, io=HIGH") denotes the scheduling parameters. The annotated object is the void F() function, and the action object is the source code within the void F() function.
According to the content of the annotation information, the compiler can determine that the scheduling parameters in the parentheses after "@" define the scheduling mode. The compiler may then identify the annotated object of the annotation information from the code following the annotation information, in the manner described in the above embodiment, and determine the action object (or scope of action) of the annotation information, which is not described here again.
The retention scope of the annotation information may include compile time (CLASS) and run time (RUNTIME).
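In Java, annotation information of this shape could be declared as an annotation type. The following sketch is an assumption for illustration only: the name TaskPri and its elements are not the patent's actual definitions. RetentionPolicy.CLASS would keep the annotation visible to compile-time tooling only, while RetentionPolicy.RUNTIME exposes it to the virtual machine.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical declaration of an annotation carrying scheduling parameters.
// Elements left unset fall back to their declared defaults, matching the
// described handling of unconfigured scheduling parameters.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface TaskPri {
    int priority() default -1;    // scheduling priority of the code block; -1 = unset
    int cpu() default -1;         // CPU core binding (e.g., 2 = big core); -1 = unbound
    String io() default "NORMAL"; // I/O scheduling priority
}
```

A method could then be marked, for example, as @TaskPri(priority = 15, cpu = 2, io = "HIGH") void f() { ... }.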
302. The compiler compiles the annotation information to generate a first target code block and a second target code block, wherein the first target code block is used for configuring the scheduling mode indicated by the annotation information, and the second target code block is used for canceling the scheduling mode indicated by the annotation information.
In the embodiments of the present application, a code block before compiling may be referred to as a source code block, and a code block obtained after compiling may be referred to as a target code block. After the compiler acquires the code to be compiled, if it identifies that the scheduling parameters after "@" in the annotation information are used for defining a scheduling mode, it can treat the annotation information as metadata, compile the annotation information, and generate the first target code block and the second target code block. In the prior art, by contrast, the compiler generally does not compile annotation information as metadata.
303. The compiler compiles the action object to generate a third target code block.
That is, the compiler obtains the first target code block and the second target code block generated by compiling the annotation information, and the third target code block generated by compiling the action object.
The method described in steps 301-303 above may be referred to as a compiling method. Steps 302 and 303 have no fixed order: the compiler may generate the first target code block and the second target code block first and then the third target code block, or generate the third target code block first and then the first and second target code blocks.
304. The virtual machine acquires a target file generated after compiling, where the target file comprises the first target code block, the second target code block, and the third target code block.
The first target code block and the second target code block are generated by compiling the annotation information, and the third target code block is generated by compiling the action object of the annotation information.
305. The virtual machine runs the first target code block to configure the scheduling mode indicated by the annotation information.
Because the first target code block is used for configuring the scheduling mode indicated by the annotation information, running it configures that scheduling mode.
306. The virtual machine schedules and runs the third target code block according to the scheduling mode indicated by the annotation information.
307. The virtual machine runs the second target code block to cancel the scheduling mode indicated by the annotation information.
The second target code block is used for canceling the scheduling mode indicated by the annotation information. After the scheduling of the third target code block is completed, the virtual machine may run the second target code block to cancel that scheduling mode, so that the virtual machine does not schedule other target code blocks in the scheduling mode indicated by the annotation information.
The method described in steps 304-307 may be referred to as a running method. Before running the first target code block, the second target code block, and the third target code block, the virtual machine may load them into memory.
Through steps 305-307 above, the virtual machine can apply control such as priority scheduling to the third target code block, generated by compiling the action object, according to the scheduling mode indicated by the annotation information, that is, according to the scheduling mode configured by the developer. Code blocks other than the action object are not subjected to priority scheduling or other control in the scheduling mode indicated by the annotation information.
Illustratively, FIG. 4 shows a schematic diagram of a thread lifecycle of an application. The thread includes five code blocks: A, B, C, D, and E. Code block C has been marked by the developer with annotation information, so it can be executed according to the scheduling mode indicated by the annotation information, that is, according to the priority and other scheduling settings configured by the developer. After code block C finishes executing, scheduling according to the scheduling mode indicated by the annotation information stops, the thread's original scheduling mode is restored, and the remaining code blocks continue to execute.
In the scheduling methods shown in fig. 3 and 4, the virtual machine performs fine-grained scheduling of code blocks such as objects, methods, or statements within a thread according to the scheduling mode indicated by the annotation information. Moreover, this fine-grained scheduling can effectively control, direct, and schedule different parts within a thread, improving the flexibility and maneuverability of the processing.
When the action object is a code block inside a high-priority thread, the annotation information can define scheduling modes, such as the scheduling priority, for a finer-grained code block inside the high-priority thread. Such a code block may be one that occupies network resources, I/O resources, or other resources. By preferentially scheduling the resource-occupying code blocks inside the high-priority thread, the high-priority thread itself can be scheduled preferentially, the thread's priority is ensured, and the thread priority inversion problem is avoided.
In some embodiments, the scheduling parameters may include one or more of: the scheduling priority of the code block, the scheduling policy of the code block, CPU information, an I/O scheduling policy, or an I/O scheduling priority. That is, the virtual machine may schedule and run the third target code block in the manner specified by these scheduling parameters.
The scheduling priority parameter of the code block is used to define the priority level at which the code blocks within the action object are executed.
The scheduling policy parameter of the code block is used to define the scheduling policy of the code blocks within the action object. For example, the scheduling policy may include a first-in-first-out (SCHED_FIFO) policy, a round-robin (SCHED_RR) policy, a real-time (SCHED_RT) policy, or another (SCHED_OTHER) policy.
The scheduling priority parameter and the scheduling policy parameter of the code block defined in the annotation information may indicate that the code block is to be scheduled preferentially.
The CPU information is used to define information such as the CPU core and the CPU operating frequency used when executing the code blocks within the action object. The CPU cores may include big cores and little cores. For example, the electronic device may contain 8 CPU cores: 4 big cores and 4 little cores. A big CPU core has a higher clock frequency and a faster processing speed. If a code block within the action object is bound to a big CPU core and the CPU operating frequency is higher, the processing efficiency of the CPU is higher and the code block can be scheduled more efficiently.
The I/O scheduling policy is used to define the input/output scheduling policy used when executing the code blocks within the action object. Similarly, this scheduling policy may include a first-in-first-out (SCHED_FIFO) policy, a round-robin (SCHED_RR) policy, a real-time (SCHED_RT) policy, or another (SCHED_OTHER) policy.
The I/O scheduling priority is used to define the scheduling priority of the I/O operations involved in executing the code blocks within the action object.
The I/O scheduling priority parameter and the I/O scheduling policy parameter defined in the annotation information may indicate that the I/O operations involved in the code block are performed preferentially.
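How these parameter categories might translate into concrete actions can be illustrated as follows. The mapping below is an assumption for illustration only: it records intended actions as strings so that the correspondence is visible, whereas a real runtime would issue system calls such as sched_setscheduler or ioprio_set on Linux.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical mapping from annotation scheduling parameters to the low-level
// actions a runtime might take. Unset parameters (negative or null) are
// skipped, so their defaults apply.
class SchedulingActions {
    static List<String> plan(int priority, String policy, int cpu, String ioPriority) {
        List<String> actions = new ArrayList<>();
        if (priority >= 0)      actions.add("set code-block priority to " + priority);
        if (policy != null)     actions.add("apply scheduling policy " + policy); // e.g. SCHED_FIFO, SCHED_RR
        if (cpu >= 0)           actions.add("bind to CPU core " + cpu);           // big core vs. little core
        if (ioPriority != null) actions.add("set I/O scheduling priority " + ioPriority);
        return actions;
    }
}
```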
That is to say, in the scheduling method provided in the embodiments of the present application, the annotation information can set not only the scheduling priority of a thread or of a code block within a thread, but also the scheduling policy, CPU information, I/O scheduling policy, I/O scheduling priority, and so on of the thread or code block. Processing performance can thus be controlled in multiple respects, the marked code block can be processed preferentially through multiple scheduling means, and the execution efficiency of the marked code block is improved. The scheduling modes indicated by the annotation information therefore cover a wider scheduling range and are more flexible and varied, whereas the prior art cannot control scheduling parameters or scheduling modes other than the thread scheduling priority.
For example, the annotation information may be: @task_roi(scheduling priority of the code block, scheduling policy of the code block, CPU information, I/O scheduling policy, I/O scheduling priority). If the developer does not want to configure some scheduling parameter, in one technical solution the parameter value of the unconfigured scheduling parameter in the annotation information may be a null value. For example, the annotation information may be @task_roi(scheduling priority of the code block, scheduling policy of the code block, CPU information, null, null), or @task_roi(scheduling priority of the code block, scheduling policy of the code block, CPU information). After identifying the annotation information in the embodiments of the present application, if the compiler finds that a certain scheduling parameter is null, it may compile according to the default value corresponding to that scheduling parameter, and the virtual machine may schedule according to the scheduling mode indicated by that default value. In another technical solution, the developer may set the scheduling parameter to a default value directly.
Exemplarily, in the above annotation information @task_pri("priority=15, cpu=2, io=HIGH"), "priority" is used to define the scheduling priority of the code block among the scheduling parameters. Illustratively, the priority levels supported by the operating system range from 1 to 100, and the priority level of the code blocks within the action object is 15. "cpu" is used to define the CPU information among the scheduling parameters: cpu=2 indicates that the code blocks within the action object are bound to a big CPU core, while cpu=1 would indicate binding to a little CPU core. "io" is used to define the I/O scheduling priority among the scheduling parameters: io=HIGH indicates that the I/O scheduling priority is high, and io=NORMAL would indicate a normal I/O scheduling priority. This annotation information does not define the other two kinds of parameters, namely the scheduling policy of the code block and the I/O scheduling policy.
It should be noted that the way the scheduling parameters are defined in the above annotation information is only an exemplary illustration; any definition manner supported by the operating system, the compiler, and the virtual machine may be used, which is not limited in the embodiments of the present application. For example, if the scheduling priorities of code blocks supported by the operating system are 1-5, annotation information including priority=5 means that the code blocks within the action object have priority level 5. As another example, the I/O scheduling priorities supported by the operating system may range from 1 to 99; annotation information including io=20 indicates that the I/O operations involved in the action object have scheduling priority level 20.
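The parameter strings shown above, such as "priority=15, cpu=2, io=HIGH", lend themselves to simple key/value parsing. The sketch below is an assumption about how a compiler might read them; the patent does not specify a parsing routine.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical parser splitting a scheduling-parameter string into key/value
// pairs; parameters absent from the string keep their default values.
class SchedulingParams {
    static Map<String, String> parse(String params) {
        Map<String, String> result = new HashMap<>();
        for (String pair : params.split(",")) {
            String[] kv = pair.split("=", 2); // split on the first '=' only
            if (kv.length == 2) {
                result.put(kv[0].trim(), kv[1].trim());
            }
        }
        return result;
    }
}
```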
In some embodiments, referring to fig. 5, step 302 may further include: when compiling the annotation information, the compiler also generates a first function entry and a second function entry. The first function entry is used for calling the first target code block, and the second function entry is used for calling the second target code block. For example, the first and second function entries may be stub functions.
That is, by compiling the annotation information, the compiler may generate not only the first target code block and the second target code block, but also the first function entry for calling the first target code block and the second function entry for calling the second target code block.
After step 302, the method may further comprise:
308. The compiler inserts the first function entry before the third target code block.
309. The compiler inserts the second function entry after the third target code block.
Before step 305, the method may further comprise:
310. The virtual machine links the first function entry with the first target code block.
Step 305 may then specifically include: the virtual machine runs the first target code block linked to the first function entry, so as to configure the scheduling mode indicated by the annotation information.
Before step 307, the method may further comprise:
311. The virtual machine links the second function entry with the second target code block.
Step 307 may then specifically include: the virtual machine runs the second target code block linked to the second function entry, so as to cancel the scheduling mode indicated by the annotation information.
A schematic diagram of this scheduling method is also shown in fig. 6. In the scheduling methods shown in fig. 5 and fig. 6, the compiler may generate the first function entry and the second function entry according to the annotation information, and insert them before and after the third target code block, respectively. The virtual machine may link the first function entry with the first target code block and link the second function entry with the second target code block. The virtual machine can thus first run the first target code block through the first function entry to configure the scheduling mode indicated by the annotation information, then schedule and run the third target code block according to that scheduling mode, and finally run the second target code block through the second function entry to cancel the scheduling mode indicated by the annotation information.
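The shape of the instrumented code produced by steps 308 and 309 can be pictured in source form as follows. This is only an illustration: a real compiler would emit the stubs into the compiled target file rather than Java source, and the names used here are hypothetical.

```java
// Illustrative shape of a method after instrumentation: a first function
// entry (stub) inserted before the third target code block, and a second one
// inserted after it. The trace records the execution order.
class InstrumentedMethod {
    static final StringBuilder trace = new StringBuilder();

    // Linked by the virtual machine to the first target code block.
    static void firstFunctionEntry()  { trace.append("configure;"); }

    // Linked by the virtual machine to the second target code block.
    static void secondFunctionEntry() { trace.append("cancel;"); }

    static void f() {
        firstFunctionEntry();   // inserted before the third target code block
        trace.append("body;");  // stands in for the third target code block
        secondFunctionEntry();  // inserted after the third target code block
    }
}
```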
In other embodiments, referring to fig. 7, before step 305 the method may further comprise:
312. The virtual machine generates a first function entry, the first function entry being used for calling the first target code block.
313. The virtual machine links the first function entry with the first target code block.
Step 305 may then specifically include: the virtual machine runs the first target code block linked to the first function entry, so as to configure the scheduling mode indicated by the annotation information.
Before step 307, the method may further comprise:
314. The virtual machine generates a second function entry, the second function entry being used for calling the second target code block.
315. The virtual machine links the second function entry with the second target code block.
Step 307 may then specifically include: the virtual machine runs the second target code block linked to the second function entry, so as to cancel the scheduling mode indicated by the annotation information.
A schematic diagram of this scheduling method is also shown in fig. 8. In the scheduling methods shown in fig. 7 and 8, the virtual machine may generate the first function entry and the second function entry according to the first target code block and the second target code block, link the first function entry with the first target code block, and link the second function entry with the second target code block. The virtual machine may then run the first target code block through the first function entry to configure the scheduling mode indicated by the annotation information, schedule and run the third target code block according to that scheduling mode, and finally run the second target code block through the second function entry to cancel the scheduling mode indicated by the annotation information.
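When the function entries are generated by the virtual machine at run time rather than by the compiler, the effect is akin to wrapping the already-compiled target code block. The sketch below is a rough analogy in plain Java; the wrapping API is an assumption, not the patent's mechanism.

```java
// Rough analogy: the runtime itself generates the entry/exit wrappers and
// links them around the compiled target code block.
class RuntimeWrapper {
    static Runnable wrap(Runnable configure, Runnable target, Runnable cancel) {
        return () -> {
            configure.run();  // first function entry -> first target code block
            try {
                target.run(); // third target code block
            } finally {
                cancel.run(); // second function entry -> second target code block
            }
        };
    }
}
```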
The scheduling method provided by the embodiments of the present application is explained below by taking specific scenarios as examples.
Scenario one: the annotated object and the action object of the annotation information are arbitrary code blocks. In this way, any marked code block can be executed efficiently according to the scheduling mode indicated by the annotation information. For example, the annotated object is an arbitrary function, and the action object is the code block within that function.
For example, the action object of the annotation information is a first source code block, the first source code block and a second source code block belong to the same thread, and the scheduling parameters include CPU information. The first source code block can then be executed efficiently according to the CPU information indicated by the scheduling parameters.
Illustratively, an execution thread of a gallery application includes a code block a, a code block b, and a code block c. The code block a is used for opening a picture, the code block b is used for optimizing the picture, and the code block c is used for storing the picture. Since the code block b, which performs the picture optimization, has relatively high complexity, a developer can mark it with annotation information; that is, the action object is the code block b. Parameters such as the CPU information among the scheduling parameters can thus be configured through the annotation information to improve the execution efficiency of the code block b, so that the user experiences fast picture optimization. For example, the CPU information in the annotation information may indicate that the code block b is bound to a big CPU core and that the operating frequency of the CPU is high.
Exemplary, the pseudo code in this scenario may be as follows:
@task_roi("cpu=2,1400000")
void F() {
    ... // e.g., the picture-optimization code block b
}
Here, @task_roi("cpu=2,1400000") denotes the annotation information, in which "cpu" is used to define the CPU information. The "2" in cpu=2,1400000 indicates that the code block is bound to a big CPU core, and "1400000" indicates that the operating frequency of the CPU is 1400000 kHz, i.e., 1.4 GHz. The annotation information does not define other scheduling parameters. The action object of the annotation information is the void F() function, and the void F() function may be the code block b described above.
Scene two: the annotation object and the action object are code blocks, outside the lock object, that request the resource locked by the lock object.
For example, the action object of the annotation information is a first source code block, the first source code block is used for requesting a resource, and the first source code block belongs to a first thread; the scheduling parameter indicates that the scheduling priority of the code block is high. The target file further includes a fourth target code block generated after a second source code block is compiled and a fifth target code block generated after a lock object is compiled; the second source code block is also used for requesting the resource, and the lock object is used for locking the resource. After the virtual machine runs the first target code block to configure the scheduling mode indicated by the annotation information, and before it runs the third target code block, the method further includes: the virtual machine determines, according to the scheduling parameter, that the scheduling priority of the third target code block is high; and the virtual machine grants the resource locked by the lock object corresponding to the fifth target code block to the third target code block for use. After the virtual machine runs the second target code block to cancel the scheduling mode indicated by the annotation information, the method further includes: the virtual machine grants the resource locked by the lock object to the fourth target code block for use, and the virtual machine runs the fourth target code block. In this way, the third target code block can preferentially acquire the lock and thus can be preferentially executed; that is, the function of the first source code block can be preferentially implemented.
Illustratively, thread 1 needs to access picture 1 in a file of the operating system. Multiple other threads (e.g., tens of them) also need to access picture 1. Thread 1 has a higher priority than the other threads. Lock object 1 is used to lock the I/O resource required to access the picture; that is, lock object 1 is an I/O lock.
Code block a in thread 1 contends for the I/O resource, and code blocks b, c, etc. in the other threads also contend for it. Although thread 1 has a higher priority than the other threads, code block a in thread 1 is in a competing relationship with code blocks b, c, etc. in the other threads.
If code blocks b and c in the other threads win the contention for the I/O resource first, the virtual machine grants the I/O lock to the code blocks of those threads, and code block a cannot acquire the I/O lock, so thread 1 is blocked by the other threads. That is, the priority of thread 1 is inverted.
The developer can mark code block a with the annotation information to indicate that code block a is scheduled with high priority. In this way, when the virtual machine runs the first target code block, it determines according to the high priority configured by the annotation information that code block a has high priority, and preferentially grants the lock to code block a. Code block a can thus be executed preferentially, and so can thread 1, avoiding the priority inversion problem for thread 1.
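The excerpt gives no pseudo-code figure for scene two; by analogy with the other scenes, annotating only code block a might look like the following sketch (the annotation name follows the later scenes, and the method and lock names are hypothetical):

```java
// Hypothetical sketch for scene two: annotate only code block a, the block in
// thread 1 that contends for the I/O resource guarded by lock object 1.
@task_pri("priority=15, io=HIGH")
void readPicture1() {
    synchronized (ioLock1) {      // lock object 1: the I/O lock for picture 1
        // code block a: access picture 1
    }
}
```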
Scene three: the annotation object is a notify function in a lock object, and the action object includes the source code in the range of the notify function and the source code in the range of the wait function matched with the notify function in the lock object. In this way, in an asynchronous waiting scenario, the code block that another code block waits on can be executed preferentially and efficiently, so that the waiting code block can in turn be executed as soon as possible.
For example, the annotation object of the annotation information is a first source code block, the first source code block is a notify function, the first source code block is a function in the range of a lock object, the lock object further includes a wait function, and the action object of the annotation information is the source code block in the range of the notify function and the source code block in the range of the wait function. The third target code block includes a fourth target code block generated after the source code block in the wait function range is compiled and a fifth target code block generated after the source code block in the notify function range is compiled. That the virtual machine schedules and runs the third target code block according to the scheduling mode indicated by the annotation information includes: the virtual machine schedules and runs the fourth target code block according to the scheduling mode indicated by the annotation information; and the virtual machine schedules and runs the fifth target code block according to the scheduling mode indicated by the annotation information.
Illustratively, one task of the camera application includes thread 1 and thread 2. Thread 1 is used to implement picture processing. Thread 1 includes code block a, code block b, and code block c. Thread 2 is used to implement the backup of the picture to the cloud. Thread 2 includes code block d and code block e. Thread 1 has a higher priority than thread 2. The code block a and the code block d need to compete for the same resource, or there is a conflict in the execution of the code block a and the code block d.
The execution of code block b and code block c depends on code block a; that is, code block b and code block c can be executed only after code block a has been executed. In other words, in the lock object there is the relationship: wait (code block a), notify (code block b, code block c).
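The dependency just described can be demonstrated with standard Java monitors; the mapping of blocks to wait/notify below is illustrative rather than the patent's exact arrangement, and all names are assumptions.

```java
public class WaitNotifyDemo {
    // Minimal illustration of the dependency in the text: code block a must run
    // before code blocks b and c, coordinated through a lock object's wait/notify.
    static final Object mLock = new Object();
    static final StringBuilder order = new StringBuilder();
    static boolean aDone = false;

    public static void main(String[] args) throws InterruptedException {
        Thread waiter = new Thread(() -> {
            synchronized (mLock) {
                while (!aDone) {                   // guard against spurious wakeups
                    try { mLock.wait(); } catch (InterruptedException ignored) {}
                }
                order.append("b;").append("c;");   // code blocks b and c
            }
        });
        waiter.start();
        synchronized (mLock) {
            order.append("a;");                    // code block a
            aDone = true;
            mLock.notify();                        // release b and c
        }
        waiter.join();
        System.out.println(order);                 // a;b;c;
    }
}
```

The `aDone` flag makes the ordering deterministic even if the waiter is scheduled late: blocks b and c cannot run until block a has appended its marker and called notify.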
If code block d competes for the resource before code block a, code block a is blocked. Code blocks b and c, which are in the asynchronous wait state, are also blocked. Thread 1 is blocked by thread 2 and a priority inversion occurs. For example, referring to (a) in fig. 9, after the mobile phone detects that the user taps the "picture processing" control, thread 1 for implementing picture processing cannot be executed promptly by the virtual machine running on the mobile phone; as shown in (b) in fig. 9, the user interface for picture processing freezes.
The developer can annotate the notify function through the annotation information to indicate that code block b and code block c in the notify function are scheduled with high priority. The compiler determines the annotation range as the notify function and the wait function in the lock object. The virtual machine schedules the notify function and the wait function according to the priority indicated by the annotation information. Because of the dependency relationship, the virtual machine preferentially schedules code block a, on which the wait function depends, and then schedules code block b and code block c in the notify function. Therefore, thread 1 is scheduled in preference to thread 2, and the priority inversion problem is avoided. For example, referring to (a) in fig. 9, after the mobile phone detects that the user taps the "picture processing" control, thread 1 for implementing picture processing can be executed as soon as possible according to the scheduling manner indicated by the annotation information; referring to (c) in fig. 9, the user interface for picture processing is smooth, giving the user a better experience of quickly processed pictures.
For example, the pseudo-code in this scenario may be as follows:
(Figures BDA0002103381960000191 and BDA0002103381960000201: pseudo-code listing)
where @task_pri("priority=15, cpu=2, io=HIGH") denotes the annotation information. The annotation object is the notify function in the lock object mLock. The action object is the code in the range of the wait function and the notify function in the lock object mLock. The comment "// compiler automatically inserts the adjust priority algorithm code", scheduled_begin(priority, cpu, io), and scheduled_end(priority, cpu, io) are not included in the pre-compilation code; they replace the annotation information in the compiled pseudo-code. During compiling, before the wait function code and the notify function code in the mLock range, a code block for configuring the scheduling mode indicated by the annotation information, namely scheduled_begin(priority, cpu, io), is inserted; after the wait function code and the notify function code in the mLock range, a code block for canceling the scheduling mode indicated by the annotation information, namely scheduled_end(priority, cpu, io), is inserted.
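Putting the pieces of the description together, the post-compilation listing may be sketched roughly as follows; everything other than mLock, the wait/notify calls, and the scheduled_begin/scheduled_end names described above is assumed.

```java
// Wait side: code block a is in the wait function range and must execute first.
synchronized (mLock) {
    // compiler automatically inserts the adjust priority algorithm code
    scheduled_begin(priority, cpu, io);  // inserted before the wait function code
    // code block a (wait function range)
    mLock.wait();
    scheduled_end(priority, cpu, io);    // inserted after the wait function code
}

// Notify side: code blocks b and c are in the notify function range.
synchronized (mLock) {
    scheduled_begin(priority, cpu, io);  // inserted before the notify function code
    // code block b, code block c (notify function range)
    mLock.notify();                      // annotated @task_pri("priority=15, cpu=2, io=HIGH") before compilation
    scheduled_end(priority, cpu, io);    // inserted after the notify function code
}
```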
Scene four: the annotation object is a lock object, and the action object is the source code block in the range of the lock object.
A lock object typically involves resource contention among multiple code blocks, and compared with other code blocks, the code blocks in the lock object are key code blocks. If the code blocks in the lock object are executed efficiently, the code blocks contending for the resource can acquire the resource in time and thus be executed as soon as possible. Therefore, a developer can annotate the lock object and, by configuring the priority, CPU information, and the like of the lock object, have it scheduled preferentially and executed efficiently.
For example, the pseudo-code in this scenario may be as follows:
(Figure BDA0002103381960000202: pseudo-code listing)
where @task_pri("priority=15, cpu=2, io=HIGH") denotes the annotation information. The annotation object is the lock object mLock, and the action object is the code in the mLock range. The comment "// compiler automatically inserts the adjust priority algorithm code", scheduled_begin(priority, cpu, io), and scheduled_end(priority, cpu, io) are not included in the pre-compilation code; they replace the annotation information in the compiled pseudo-code. During compiling, before the code in the mLock range, a code block for configuring the scheduling mode indicated by the annotation information, namely scheduled_begin(priority, cpu, io), is inserted; after the code in the mLock range, a code block for canceling the scheduling mode indicated by the annotation information, namely scheduled_end(priority, cpu, io), is inserted.
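Based on the description above, the scene-four listing may be reconstructed roughly as the following before/after sketch; annotating a synchronized block this way is pseudo-code, and all names other than mLock and the scheduled_begin/scheduled_end insertions are assumed.

```java
// Pre-compilation: the lock object itself is annotated.
@task_pri("priority=15, cpu=2, io=HIGH")
synchronized (mLock) {
    // code in the mLock range (action object)
}

// Post-compilation: the annotation is replaced by inserted code blocks.
synchronized (mLock) {
    // compiler automatically inserts the adjust priority algorithm code
    scheduled_begin(priority, cpu, io);  // configure the scheduling mode
    // code in the mLock range (action object)
    scheduled_end(priority, cpu, io);    // cancel the scheduling mode
}
```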
In some cases, the lock object in scene four includes a wait function and a notify function. The annotation object is then the lock object, and the action object is the code block of the whole lock object, including the wait function and the notify function. Illustratively, the pseudo-code may be as follows:
(Figure BDA0002103381960000211: pseudo-code listing)
where @task_pri("priority=15, cpu=2, io=HIGH") denotes the annotation information. The annotation object is the lock object mLock, and the action object is the code block in the mLock range, including the wait function and the notify function. The comment "// compiler automatically inserts the adjust priority algorithm code", scheduled_begin(priority, cpu, io), and scheduled_end(priority, cpu, io) are not included in the pre-compilation code; they replace the annotation information in the compiled pseudo-code. During compiling, before the wait function code and the notify function code in the mLock range, a code block for configuring the scheduling mode indicated by the annotation information, namely scheduled_begin(priority, cpu, io), is inserted; after the wait function code and the notify function code in the mLock range, a code block for canceling the scheduling mode indicated by the annotation information, namely scheduled_end(priority, cpu, io), is inserted.
Other embodiments of the present application further provide a scheduling method based on an API function. In this method, the annotation information is replaced with API functions. Referring to fig. 10, the method may include:
1001. A compiler acquires code to be compiled, where the code to be compiled includes a defined object, a first API function located before the defined object, and a second API function located after the defined object; the defined object is a source code block; the first API function is used to indicate a scheduling mode; and the scheduling mode includes one or more of a scheduling priority of the code block, a scheduling policy of the code block, CPU information, an input/output (I/O) scheduling policy, or an I/O scheduling priority.
A developer can insert a first API function for configuring a scheduling mode before a source code block whose scheduling mode needs to be defined, and insert a second API function for canceling the scheduling mode after that source code block.
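The begin/end bracketing just described can be sketched as runnable Java; the excerpt names no concrete API functions, so schedBegin/schedEnd and their parameters are assumptions, and the trace log stands in for the actual scheduling-mode changes.

```java
public class ApiScheduling {
    // Hypothetical scheduling API: a "first API function" that configures a
    // scheduling mode and a "second API function" that cancels it.
    static final StringBuilder trace = new StringBuilder();

    static void schedBegin(int priority, int cpu, String io) {
        trace.append("begin;");   // would set priority, CPU affinity, I/O class
    }

    static void schedEnd() {
        trace.append("end;");     // would restore the previous scheduling mode
    }

    public static void main(String[] args) {
        schedBegin(15, 2, "HIGH");   // first API function, before the defined object
        trace.append("work;");       // the defined object: the source code block
        schedEnd();                  // second API function, after the defined object
        System.out.println(trace);   // begin;work;end;
    }
}
```

Everything between the two calls runs under the configured scheduling mode; code outside the bracket is unaffected, which mirrors the per-code-block scope described in the text.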
After detecting the first API function, if it determines that the first API function is used for defining a scheduling mode, the compiler may determine the defined object of the API function and the action object (or action scope) of the API function. The defined object of the API function may be the source code block between the first API function and the second API function.
Similar to the annotation object and the action object of the annotation information: if the code block between the first API function and the second API function is an object, the defined object of the API function is that object, and the action object (or action scope) of the API function is the source code of the object. For example, if the code block between the first API function and the second API function is a lock object, the defined object is the lock object and the action object is the source code within the scope of the lock object. If the code block between the first API function and the second API function is a method, the defined object is the method and the action object is the source code of the method. If the code block between the first API function and the second API function is a single statement in an object or method, the defined object and the action object are both the source code of that statement. If the code block between the first API function and the second API function is a statement delimited by { }, the defined object and the action object are both the source code of the statement enclosed in the { }. If the code block between the first API function and the second API function is a notify function in a lock object, the defined object is the notify function, and the action object includes the source code in the range of the notify function and the source code in the range of the wait function matched with the notify function in the lock object.
1002. The compiler compiles the first API function to generate a first function entry and a first target code block, where the first function entry is used to call the first target code block, and the first target code block is used to configure the scheduling mode indicated by the first API function.
1003. The compiler compiles the action object to generate a third target code block.
1004. The compiler compiles the second API function to generate a second function entry and a second target code block, where the second target code block is used to cancel the scheduling mode indicated by the first API function.
In this way, the compiler obtains the first function entry, the second function entry, the first target code block and the second target code block generated after the API functions are compiled, and the third target code block generated after the source code block is compiled.
1005. The virtual machine acquires a target file generated after compiling, where the target file includes the first target code block, the second target code block, the third target code block, the first function entry located before the third target code block, and the second function entry located after the third target code block.
1006. The virtual machine links the first function entry and the first target code block.
1007. The virtual machine runs the first target code block linked to the first function entry, to configure the scheduling mode indicated by the first API function.
1008. The virtual machine schedules and runs the third target code block according to the scheduling mode indicated by the first API function.
1009. The virtual machine links the second function entry and the second target code block.
1010. The virtual machine runs the second target code block linked to the second function entry, to cancel the scheduling mode indicated by the first API function.
Through the above steps 1005 to 1010, the virtual machine can perform control such as priority scheduling on the third target code block, generated after the action object is compiled, according to the scheduling mode indicated by the first API function, i.e., the scheduling mode configured by the developer through the first API function. For code blocks other than the action object, control such as priority scheduling is performed without adopting the scheduling mode configured through the first API function.
When the action object is a finer-granularity code block (e.g., a key code block) in a high-priority thread, the first API function can define a scheduling mode, such as a priority, for that finer-granularity code block. Such code blocks may be code blocks that contend for network resources, I/O resources, or other resources. In this way, the code blocks inside the high-priority thread that contend for network resources, I/O resources, or other resources are scheduled preferentially, so the high-priority thread itself can be scheduled preferentially, its priority is guaranteed, and the thread priority inversion problem is avoided.
The scheduling manner defined by the first API function may include one or more of the scheduling priority of the code block, the scheduling policy of the code block, CPU information, an input/output (I/O) scheduling policy, or an I/O scheduling priority. Therefore, by using the API function, not only can the scheduling priority of a thread or of the code blocks in a thread be set, but the scheduling policy, CPU information, I/O scheduling policy, or I/O scheduling priority of the thread or of the code blocks in the thread can also be set, so that processing performance can be controlled in multiple respects and the marked code blocks can be processed preferentially in multiple ways, improving their execution efficiency. The scheduling range is thus wider, and the scheduling manner is more flexible and diverse, whereas the prior art cannot control any aspect other than the execution priority of a thread.
It will be appreciated that, to implement the above functions, the compiler and the virtual machine contain corresponding hardware and/or software modules that perform the respective functions. In combination with the example algorithm steps described for the embodiments disclosed herein, the present application can be implemented in hardware or in a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the particular application and the design constraints of the technical solution. Skilled artisans may use different methods to implement the described functions for each particular application, but such implementations should not be considered to go beyond the scope of the present application.
In the embodiment of the present application, the compiler and the virtual machine may be divided into functional modules according to the above method example, for example, each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module. The integrated module may be implemented in the form of hardware. It should be noted that, in the embodiment of the present application, the division of the module is schematic, and is only one logic function division, and there may be another division manner in actual implementation.
In the case of dividing each functional module by corresponding functions, fig. 11 shows a possible composition diagram of the compiler 1100 involved in the above embodiment, and as shown in fig. 11, the compiler 1100 may include: an acquisition unit 1101, a compiling unit 1102, and the like.
Among other things, the obtaining unit 1101 may be used to support the compiler 1100 to perform the above step 301, and/or other processes for the techniques described herein.
Compiling unit 1102 may be used to enable compiler 1100 to perform steps 302, 303, 308, 309, etc., described above, and/or other processes for the techniques described herein.
It should be noted that all relevant contents of each step related to the above method embodiment may be referred to the functional description of the corresponding functional module, and are not described herein again.
The compiler provided by the embodiment of the application is used for executing the compiling method, so that the same effect as the effect of the implementation method can be achieved.
In the case of dividing each functional module by corresponding functions, fig. 12 shows a possible composition diagram of the virtual machine 1200 involved in the foregoing embodiment, as shown in fig. 12, the virtual machine 1200 may include: an acquisition unit 1201, an execution unit 1202, a linking unit 1203, a generation unit 1204, and the like.
Among other things, the obtaining unit 1201 may be used to support the virtual machine 1200 to perform the above-described step 304, and/or other processes for the techniques described herein.
Execution unit 1202 may be configured to support virtual machine 1200 in performing steps 305, 306, 307, etc., described above, and/or other processes for the techniques described herein.
Linking unit 1203 may be used to enable virtual machine 1200 to perform steps 310, 311, 313, 315, etc. described above, and/or other processes for the techniques described herein.
The generation unit 1204 may be used to support the virtual machine 1200 in performing the above-described steps 312, 314, etc., and/or other processes for the techniques described herein.
It should be noted that all relevant contents of each step related to the above method embodiment may be referred to the functional description of the corresponding functional module, and are not described herein again.
The virtual machine provided by the embodiment of the application is used for executing the operation method, so that the same effect as the effect of the implementation method can be achieved.
The embodiment of the application provides an electronic device, and a compiler and/or a virtual machine can be configured on the electronic device. The electronic device can execute the related method steps to realize the scheduling method.
In the case where an integrated unit is employed, the electronic device may include a processing module and a storage module. The processing module may be configured to control and manage the actions of the electronic device; for example, it may be configured to support the electronic device in performing the steps performed by the obtaining unit 1101 and the compiling unit 1102 of the compiler, or the steps performed by the obtaining unit 1201, the execution unit 1202, the linking unit 1203, and the generation unit 1204 of the virtual machine. The storage module may be used to store code.
The processing module may be a processor or a controller. It may implement or execute the various illustrative logical blocks, modules, and circuits described in connection with this disclosure. The processor may also be a combination that implements computing functions, e.g., a combination of one or more microprocessors, or a combination of a digital signal processor (DSP) and a microprocessor. The storage module may be a memory. The electronic device may also have a communication module, which may be, for example, a radio frequency circuit, a Bluetooth chip, or a Wi-Fi chip that interacts with other devices.
In one embodiment, when the processing module is a processor and the storage module is a memory, the compiler according to the embodiment of the present application may be configured on an electronic device having the structure shown in fig. 1. Specifically, the internal memory 121 shown in fig. 1 may store codes, which when executed by the processor 110, enable the compiler and the virtual machine to perform the above-described compiling method and running method.
The embodiment of the present application further provides a computer storage medium, where a computer instruction is stored in the computer storage medium, and when the computer instruction runs on a compiler or a virtual machine, the compiler or the virtual machine is enabled to execute the relevant method steps to implement the scheduling method.
The embodiments of the present application further provide a computer program product, which when running on a computer, causes the computer to execute the above related steps to implement the scheduling method in the above embodiments.
In addition, embodiments of the present application also provide an apparatus, which may be specifically a chip, a component or a module, and may include a processor and a memory connected to each other; the memory is used for storing computer execution instructions, and when the device runs, the processor can execute the computer execution instructions stored in the memory, so that the chip can execute the scheduling method in the above-mentioned method embodiments.
The compiler, the virtual machine, the electronic device, the computer storage medium, the computer program product, or the chip provided in the embodiments of the present application are all configured to execute the corresponding method provided above, and therefore, the beneficial effects achieved by the compiler, the virtual machine, the electronic device, the computer storage medium, the computer program product, or the chip may refer to the beneficial effects in the corresponding method provided above, and are not described herein again.
Through the description of the above embodiments, those skilled in the art will understand that, for convenience and simplicity of description, only the division of the above functional modules is used as an example, and in practical applications, the above function distribution may be completed by different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the above described functions.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described device embodiments are merely illustrative, and for example, the division of the modules or units is only one logical functional division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another device, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may be one physical unit or a plurality of physical units, that is, may be located in one place, or may be distributed in a plurality of different places. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as a stand-alone product, it may be stored in a readable storage medium. Based on such an understanding, the technical solutions of the embodiments of the present application essentially, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (29)

1. A compilation method, comprising:
a compiler acquires a code to be compiled, wherein the code to be compiled comprises annotation information and an action object of the annotation information; the action object is a source code block, and the marking information is used for indicating a scheduling mode;
the compiler compiles the annotation information to generate a first target code block and a second target code block; the first target code block is used for configuring the scheduling mode indicated by the annotation information; the second target code block is used for canceling the scheduling mode indicated by the annotation information;
and the compiler compiles the action object to generate a third target code block.
2. The method of claim 1, wherein the annotation information comprises a character @, an annotation name, and a scheduling parameter, wherein the scheduling parameter comprises one or more of a scheduling priority of a code block, a scheduling policy of a code block, Central Processing Unit (CPU) information, an input/output (I/O) scheduling policy, or an I/O scheduling priority;
the scheduling policy and the I/O scheduling policy of the code block include a first-in first-out scheduling SCHED_FIFO policy, a round-robin scheduling SCHED_RR policy, a real-time scheduling SCHED_RT policy, or another scheduling SCHED_OTHER policy; the CPU information comprises CPU binding or CPU operating frequency.
3. The method according to claim 1 or 2, characterized in that the method further comprises:
the compiler compiles the annotation information and also generates a first function entry and a second function entry, wherein the first function entry is used for calling the first target code block, and the second function entry is used for calling the second target code block;
the compiler inserts the first function entry before the third target code block;
the compiler inserts the second function entry after the third target code block.
4. The method according to claim 1 or 2, wherein if the code following the annotation information is an object, the annotation object of the annotation information is the object, and the action object is the source code within the scope of the object;
if the code following the annotation information is a method, the annotation object is the method, and the action object is the source code within the scope of the method;
if the code following the annotation information is a single statement within the object or the method, the annotation object and the action object are both the source code of that statement;
if the code following the annotation information is a statement block delimited by { }, the annotation object and the action object are both the source code of the statements within the { } scope;
if the code following the annotation information is a notify function within a lock object, the annotation object is the notify function, and the action object comprises the source code within the scope of the notify function and the source code within the scope of the wait function that matches the notify function in the lock object.
5. The method of claim 4, wherein the code following the annotation information is a lock object, the annotation object is the lock object, and the action object is the source code within the scope of the lock object.
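The placement rules of claims 4 and 5 can be illustrated with the hypothetical `@Sched`-style annotation used above (again a sketch, not the patent's syntax). Standard Java annotations can only be attached to declarations, so the single-statement, { }-block, and notify-function placements would have to be recognized by the patent's own compiler; the comments mark where each rule applies.

```java
// Hypothetical annotation; defaults only, for illustration.
@interface Sched { String policy() default "SCHED_OTHER"; }

// Rule 1: annotation object = the object (class); action object = all source code inside it.
@Sched(policy = "SCHED_RR")
class Whole { }

public class PlacementDemo {
    // Rule 2: annotation object = the method; action object = the method body.
    @Sched(policy = "SCHED_FIFO")
    static int work() {
        return 42;
    }

    public static void main(String[] args) {
        // Rules 3-5 (a single statement, a { }-delimited block, or a notify
        // function inside a lock object) have no standard Java annotation
        // target; the patent's compiler would parse those placements itself.
        System.out.println(work()); // prints 42
    }
}
```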
6. A machine language running method, comprising:
a virtual machine acquires a target file generated after compiling, wherein the target file comprises a first target code block and a second target code block that are generated by compiling annotation information, and a third target code block that is generated by compiling an action object of the annotation information; the first target code block is used for configuring a scheduling mode indicated by the annotation information; the second target code block is used for canceling the scheduling mode indicated by the annotation information;
the virtual machine runs the first target code block to configure the scheduling mode indicated by the annotation information;
the virtual machine schedules and runs the third target code block according to the scheduling mode indicated by the annotation information;
and the virtual machine runs the second target code block to cancel the scheduling mode indicated by the annotation information.
7. The method of claim 6, wherein the annotation information comprises a scheduling parameter, and the scheduling parameter comprises one or more of a scheduling priority of a code block, a scheduling policy of a code block, central processing unit (CPU) information, an input/output (I/O) scheduling policy, or an I/O scheduling priority;
the scheduling policy of the code block and the I/O scheduling policy include a first-in first-out scheduling SCHED_FIFO policy, a round-robin scheduling SCHED_RR policy, a real-time scheduling SCHED_RT policy, or another scheduling SCHED_OTHER policy; and the CPU information comprises CPU binding or a CPU operating frequency.
8. The method of claim 6, wherein the target file further comprises a first function entry and a second function entry that are generated by compiling the annotation information; the first function entry is used for calling the first target code block, and the second function entry is used for calling the second target code block; the first function entry is located before the third target code block, and the second function entry is located after the third target code block;
before the virtual machine runs the first target code block, the method further comprises:
the virtual machine links the first function entry with the first target code block;
the virtual machine running the first target code block comprises:
the virtual machine runs the first target code block linked with the first function entry;
before the virtual machine runs the second target code block, the method further comprises:
the virtual machine links the second function entry with the second target code block;
the virtual machine running the second target code block comprises:
and the virtual machine runs the second target code block linked with the second function entry.
9. The method of claim 6, wherein before the virtual machine runs the first target code block, the method further comprises:
the virtual machine generates a first function entry, wherein the first function entry is used for calling the first target code block;
the virtual machine links the first function entry with the first target code block;
the virtual machine running the first target code block comprises:
the virtual machine runs the first target code block linked with the first function entry;
before the virtual machine runs the second target code block, the method further comprises:
the virtual machine generates a second function entry, wherein the second function entry is used for calling the second target code block;
the virtual machine links the second function entry with the second target code block;
the virtual machine running the second target code block comprises:
and the virtual machine runs the second target code block linked with the second function entry.
10. The machine language running method according to any one of claims 7 to 9, wherein the action object of the annotation information is a first source code block, and the first source code block and a second source code block belong to the same thread; and the scheduling parameter comprises the CPU information.
11. The machine language running method according to any one of claims 7 to 9, wherein the action object of the annotation information is a first source code block, the first source code block is used for requesting a resource, and the first source code block belongs to a first thread; the scheduling priority of the code block indicated by the scheduling parameter is high;
the target file further comprises a fourth target code block generated by compiling a second source code block and a fifth target code block generated by compiling a lock object; the second source code block is used for requesting the resource, and the lock object is used for locking the resource;
after the virtual machine runs the first target code block and configures the scheduling mode indicated by the annotation information, and before the virtual machine runs the third target code block, the method further comprises:
the virtual machine determines, according to the scheduling parameter, that the scheduling priority of the third target code block is high;
the virtual machine grants the resource locked by the lock object corresponding to the fifth target code block to the third target code block for use;
after the virtual machine runs the second target code block to cancel the scheduling mode indicated by the annotation information, the method further comprises:
the virtual machine grants the resource locked by the lock object to the fourth target code block for use;
and the virtual machine runs the fourth target code block.
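Claim 11 describes a hand-off in which the lock-protected resource is granted first to the annotated, high-priority block and only afterwards to the ordinary one. The queue discipline behind that ordering can be sketched in a few lines of Java; this models only the grant-order decision, not the real virtual machine mechanism, and all names are illustrative.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

public class LockHandoff {
    // A pending requester of the lock-protected resource.
    static final class Waiter {
        final String name;
        final int priority; // scheduling priority taken from the scheduling parameter
        Waiter(String name, int priority) { this.name = name; this.priority = priority; }
    }

    // Returns the order in which the resource is granted: higher scheduling
    // priority first, mirroring how the high-priority third target code block
    // receives the lock's resource before the fourth target code block.
    static List<String> grantOrder(List<Waiter> waiters) {
        PriorityQueue<Waiter> queue =
                new PriorityQueue<>(Comparator.comparingInt((Waiter w) -> -w.priority));
        queue.addAll(waiters);
        List<String> order = new ArrayList<>();
        while (!queue.isEmpty()) {
            order.add(queue.poll().name);
        }
        return order;
    }

    public static void main(String[] args) {
        List<String> order = grantOrder(List.of(
                new Waiter("fourthBlock", 0),   // ordinary request from the second source code block
                new Waiter("thirdBlock", 10))); // annotated block with high scheduling priority
        System.out.println(order); // prints [thirdBlock, fourthBlock]
    }
}
```

In the claim, the cancellation step (the second target code block) is what releases the resource to the lower-priority waiter, so the priority boost is scoped exactly to the annotated block.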
12. The machine language running method according to any one of claims 6 to 9, wherein the annotation object of the annotation information is a first source code block, the first source code block is a notify function, the first source code block is a function within the scope of a lock object, the lock object further includes a wait function, and the action object of the annotation information comprises a source code block within the scope of the notify function and a source code block within the scope of the wait function;
the third target code block comprises a fourth target code block generated by compiling the source code block within the scope of the wait function and a fifth target code block generated by compiling the source code block within the scope of the notify function;
the virtual machine scheduling and running the third target code block according to the scheduling mode indicated by the annotation information comprises:
the virtual machine schedules and runs the fourth target code block according to the scheduling mode indicated by the annotation information;
and the virtual machine schedules and runs the fifth target code block according to the scheduling mode indicated by the annotation information.
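Claim 12 applies a single annotation to both ends of a notify/wait pair inside one lock object. In plain Java the pair looks like the following; the scheduling configuration that the patent would apply around both the wait-side and the notify-side blocks is indicated only by comments, since this sketch uses the standard `Object.wait`/`Object.notify` API.

```java
public class NotifyWaitPair {
    private static final Object LOCK = new Object(); // the lock object
    private static boolean ready = false;
    static volatile int observed = -1;

    static void run() {
        Thread waiter = new Thread(() -> {
            synchronized (LOCK) {
                // Source code within the scope of the wait function (fourth target
                // code block); per claim 12 it is scheduled according to the
                // annotation placed on the matching notify function.
                while (!ready) {
                    try { LOCK.wait(); } catch (InterruptedException e) { return; }
                }
                observed = 42;
            }
        });
        waiter.start();
        synchronized (LOCK) {
            // Source code within the scope of the notify function (fifth target code block).
            ready = true;
            LOCK.notify();
        }
        try { waiter.join(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }

    public static void main(String[] args) {
        run();
        System.out.println(observed); // prints 42
    }
}
```

Annotating only the notify side but compiling a scheduling wrapper for both sides keeps the producer and the consumer of the condition under the same scheduling mode, which is why the claim pairs the fourth and fifth target code blocks.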
13. The machine language running method according to any one of claims 6 to 9, wherein the action object of the annotation information is a source code block within the scope of a lock object.
14. A compiler, comprising:
an acquisition unit, configured to acquire code to be compiled, wherein the code to be compiled comprises annotation information and an action object of the annotation information; the action object is a source code block, and the annotation information is used for indicating a scheduling mode;
a compiling unit, configured to compile the annotation information to generate a first target code block and a second target code block, wherein the first target code block is used for configuring the scheduling mode indicated by the annotation information, and the second target code block is used for canceling the scheduling mode indicated by the annotation information;
the compiling unit is further configured to compile the action object to generate a third target code block.
15. The compiler of claim 14, wherein the annotation information comprises the character @, an annotation name, and a scheduling parameter, the scheduling parameter comprising one or more of a scheduling priority of a code block, a scheduling policy of a code block, central processing unit (CPU) information, an input/output (I/O) scheduling policy, or an I/O scheduling priority;
the scheduling policy of the code block and the I/O scheduling policy include a first-in first-out scheduling SCHED_FIFO policy, a round-robin scheduling SCHED_RR policy, a real-time scheduling SCHED_RT policy, or another scheduling SCHED_OTHER policy; and the CPU information comprises CPU binding or a CPU operating frequency.
16. The compiler of claim 14 or 15, wherein the compiling unit is further configured to:
compile the annotation information and further generate a first function entry and a second function entry, wherein the first function entry is used for calling the first target code block, and the second function entry is used for calling the second target code block;
insert the first function entry before the third target code block;
and insert the second function entry after the third target code block.
17. The compiler of claim 14 or 15, wherein if the code following the annotation information is an object, the annotation object of the annotation information is the object, and the action object is the source code within the scope of the object;
if the code following the annotation information is a method, the annotation object is the method, and the action object is the source code within the scope of the method;
if the code following the annotation information is a single statement within the object or the method, the annotation object and the action object are both the source code of that statement;
if the code following the annotation information is a statement block delimited by { }, the annotation object and the action object are both the source code of the statements within the { } scope;
if the code following the annotation information is a notify function within a lock object, the annotation object is the notify function, and the action object comprises the source code within the scope of the notify function and the source code within the scope of the wait function that matches the notify function in the lock object.
18. The compiler of claim 17, wherein the code following the annotation information is a lock object, the annotation object is the lock object, and the action object is the source code within the scope of the lock object.
19. A virtual machine, comprising:
an acquisition unit, configured to acquire a target file generated after compiling, wherein the target file comprises a first target code block and a second target code block that are generated by compiling annotation information, and a third target code block that is generated by compiling an action object of the annotation information; the first target code block is used for configuring a scheduling mode indicated by the annotation information; the second target code block is used for canceling the scheduling mode indicated by the annotation information;
a running unit, configured to run the first target code block to configure the scheduling mode indicated by the annotation information;
wherein the running unit is further configured to schedule and run the third target code block according to the scheduling mode indicated by the annotation information;
and the running unit is further configured to run the second target code block to cancel the scheduling mode indicated by the annotation information.
20. The virtual machine of claim 19, wherein the annotation information comprises a scheduling parameter, and the scheduling parameter comprises one or more of a scheduling priority of a code block, a scheduling policy of a code block, central processing unit (CPU) information, an input/output (I/O) scheduling policy, or an I/O scheduling priority;
the scheduling policy of the code block and the I/O scheduling policy include a first-in first-out scheduling SCHED_FIFO policy, a round-robin scheduling SCHED_RR policy, a real-time scheduling SCHED_RT policy, or another scheduling SCHED_OTHER policy; and the CPU information comprises CPU binding or a CPU operating frequency.
21. The virtual machine according to claim 19, wherein the target file further comprises a first function entry and a second function entry that are generated by compiling the annotation information; the first function entry is used for calling the first target code block, and the second function entry is used for calling the second target code block; the first function entry is located before the third target code block, and the second function entry is located after the third target code block;
the virtual machine further comprises a linking unit, configured to:
link the first function entry with the first target code block before the first target code block is run;
the running unit is specifically configured to run the first target code block linked with the first function entry;
the linking unit is further configured to link the second function entry with the second target code block before the second target code block is run;
and the running unit is specifically configured to run the second target code block linked with the second function entry.
22. The virtual machine of claim 19, further comprising:
a generating unit, configured to generate a first function entry before the first target code block is run, where the first function entry is used to call the first target code block;
a linking unit, configured to link the first function entry and the first target code block;
the running unit is specifically configured to run the first target code block linked to the first function entry;
the generating unit is further configured to generate a second function entry before the second target code block is run, where the second function entry is used to call the second target code block;
the linking unit is further configured to link the second function entry with the second target code block;
the running unit is specifically configured to run the second target code block linked to the second function entry.
23. The virtual machine according to any one of claims 20 to 22, wherein the action object of the annotation information is a first source code block, and the first source code block and a second source code block belong to the same thread; and the scheduling parameter comprises the CPU information.
24. The virtual machine according to any one of claims 20 to 22, wherein the action object of the annotation information is a first source code block, the first source code block is used for requesting a resource, and the first source code block belongs to a first thread; the scheduling priority of the code block indicated by the scheduling parameter is high;
the target file further comprises a fourth target code block generated by compiling a second source code block and a fifth target code block generated by compiling a lock object; the second source code block is used for requesting the resource, and the lock object is used for locking the resource;
the running unit is further configured to: after the first target code block is run and the scheduling mode indicated by the annotation information is configured, and before the third target code block is run, determine, according to the scheduling parameter, that the scheduling priority of the third target code block is high;
grant the resource locked by the lock object corresponding to the fifth target code block to the third target code block for use;
after the second target code block is run to cancel the scheduling mode indicated by the annotation information, grant the resource locked by the lock object to the fourth target code block for use;
and run the fourth target code block.
25. The virtual machine according to any one of claims 19 to 22, wherein the annotation object of the annotation information is a first source code block, the first source code block is a notify function, the first source code block is a function within the scope of a lock object, the lock object further includes a wait function, and the action object of the annotation information comprises a source code block within the scope of the notify function and a source code block within the scope of the wait function;
the third target code block comprises a fourth target code block generated by compiling the source code block within the scope of the wait function and a fifth target code block generated by compiling the source code block within the scope of the notify function;
the running unit is specifically configured to:
schedule and run the fourth target code block according to the scheduling mode indicated by the annotation information;
and schedule and run the fifth target code block according to the scheduling mode indicated by the annotation information.
26. The virtual machine according to any one of claims 19 to 22, wherein the action object of the annotation information is a source code block within the scope of a lock object.
27. A system, comprising a compiler and a virtual machine, wherein the compiler is configured to perform the compilation method according to any one of claims 1 to 5, and the virtual machine is configured to perform the machine language running method according to any one of claims 6 to 13.
28. A computer storage medium comprising computer instructions that, when executed on an electronic device, cause the electronic device to perform the method of any of claims 1-13.
29. A computer program product which, when run on a computer, causes the computer to perform the method according to any one of claims 1 to 13.
CN201910543902.2A 2019-06-21 2019-06-21 A compiling method, running method and device Active CN110442345B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910543902.2A CN110442345B (en) 2019-06-21 2019-06-21 A compiling method, running method and device

Publications (2)

Publication Number Publication Date
CN110442345A CN110442345A (en) 2019-11-12
CN110442345B true CN110442345B (en) 2021-01-29

Family

ID=68428904

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910543902.2A Active CN110442345B (en) 2019-06-21 2019-06-21 A compiling method, running method and device

Country Status (1)

Country Link
CN (1) CN110442345B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114168532A (en) * 2021-12-14 2022-03-11 平安养老保险股份有限公司 Migration script construction method, apparatus, computer device and readable storage medium

Citations (1)

Publication number Priority date Publication date Assignee Title
US9747086B2 (en) * 2012-07-17 2017-08-29 Microsoft Technology Licensing, Llc Transmission point pattern extraction from executable code in message passing environments

Family Cites Families (6)

Publication number Priority date Publication date Assignee Title
EP2915045B1 (en) * 2012-11-02 2019-01-02 Hewlett-Packard Enterprise Development LP Selective error correcting code and memory access granularity switching
US9563585B2 (en) * 2014-02-19 2017-02-07 Futurewei Technologies, Inc. System and method for isolating I/O execution via compiler and OS support
CN107301098B (en) * 2017-06-15 2020-09-08 搜易贷(北京)金融信息服务有限公司 Remote procedure calling device, method and system based on Thrift protocol
US10474478B2 (en) * 2017-10-27 2019-11-12 Intuit Inc. Methods, systems, and computer program product for implementing software applications with dynamic conditions and dynamic actions
CN109144374A (en) * 2018-09-27 2019-01-04 范若愚 Method for processing business, system and relevant device based on visualization regulation engine
CN109857529B (en) * 2019-01-15 2023-06-27 深圳业拓讯通信科技有限公司 Method and device for dynamically loading and scheduling timing tasks


Non-Patent Citations (1)

Title
Essential Java fundamentals ("java必备基础知识点"); 老肖2017; https://www.cnblogs.com/dongrilaoxiao/p/6668451.html; 2017-04-05; entire document *

Similar Documents

Publication Publication Date Title
US11947974B2 (en) Application start method and electronic device
CN111813536B (en) Task processing method, device, terminal and computer readable storage medium
CN112527476B (en) Resource scheduling method and electronic equipment
WO2021115112A1 (en) Installation package downloading method, installation package distribution method, terminal device, server, and system
WO2021238376A1 (en) Function pack loading method and apparatus, and server and electronic device
EP4407421A1 (en) Device collaboration method and related apparatus
WO2021073337A1 (en) Method and apparatus for installing plug-in, and storage medium
WO2022100141A1 (en) Plug-in management method, system and apparatus
CN115314591A (en) Device interaction method, electronic device, and computer-readable storage medium
WO2022222715A1 (en) Control method of vehicle-mounted electronic device and vehicle-mounted electronic device
CN116483734A (en) Compiler-based stub insertion method, system and related electronic equipment
CN114816973A (en) Method and device for debugging codes, electronic equipment and readable storage medium
CN113971034A (en) Method for installing application and electronic equipment
CN110442345B (en) A compiling method, running method and device
WO2025036044A1 (en) Resource scheduling method and apparatus, and electronic device
CN112286596A (en) Message display method and electronic equipment
CN114828098B (en) Data transmission method and electronic device
CN118689482A (en) Compiling method and electronic device
CN115269167A (en) Resource scheduling method and electronic equipment
CN114168115A (en) Communication system, application downloading method and device
CN116709609B (en) Message delivery method, electronic device and storage medium
CN117995137B (en) Method for adjusting color temperature of display screen, electronic equipment and related medium
CN117707720B (en) Process scheduling method and device and electronic equipment
CN117689796B (en) Rendering processing method and electronic equipment
CN117687770B (en) Memory application method and related device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant