HK40002154B - A method of executing instructions in a CPU
- Publication number
- HK40002154B (application HK19125461.4A)
- Authority
- HK
- Hong Kong
- Prior art keywords
- instruction
- thread queue
- instructions
- target
- cpu
Description
Technical Field
One or more embodiments of the present description relate to the field of computer hardware chips, and more particularly, to a method of executing instructions in a CPU.
Background
In today's big-data cloud environments, massive volumes of data must be stored and processed, placing ever higher demands on computation speed. It is well known that a determining factor of computation speed is the performance of the central processing unit (CPU). To achieve faster operation, CPUs are continually being improved in many respects, from physical processes to logic control.
For example, to improve parallel processing capability, CPU hyper-threading technology was proposed: special hardware instructions are used to simulate two logical cores on one physical chip, so that a single processor can exploit thread-level parallelism and support multi-threaded parallel computing. In other words, a hyper-threaded CPU can run two or more threads in parallel on a single physical core, obtaining more instructions that can execute in parallel and improving overall running performance.
On the other hand, to make more effective use of CPU clock cycles and avoid pipeline stalls and waits, instruction prediction schemes are used for instruction prefetching and pre-execution.
These schemes improve CPU execution efficiency to a certain extent. However, instruction prediction is not always accurate, and when a prediction misses, CPU execution efficiency drops sharply.
Further improvements are therefore desirable to increase CPU efficiency.
Disclosure of Invention
One or more embodiments of the present disclosure describe a method for executing instructions in a CPU which, while building on existing instruction prediction, prevents mispredicted instructions from being executed, further improving CPU execution efficiency.
According to a first aspect, there is provided a method of executing instructions in a CPU, comprising:
sequentially extracting instructions from a current thread queue to form an instruction block and sending the instruction block to a CPU execution unit for execution, wherein the instruction block comprises a single jump instruction, and the jump instruction is the last instruction in the instruction block;
supplementing at least one instruction to a current thread queue to form a to-be-executed thread queue;
determining a target instruction of the jump instruction according to an execution result of a CPU execution unit;
judging whether the to-be-executed thread queue contains the target instruction or not;
and under the condition that the target instruction is not contained in the to-be-executed thread queue, clearing the to-be-executed thread queue, acquiring the target instruction, and adding the target instruction into the to-be-executed thread queue.
According to one embodiment, the instruction block is formed by:
reading a predetermined threshold number of instructions from a current thread queue, the predetermined threshold number depending on the number of CPU execution units; judging whether the predetermined threshold number of instructions contain a jump instruction; and if they contain a jump instruction, truncating the sequence at the jump instruction so that the jump instruction is the last instruction, and taking the truncated instructions as the instruction block.
According to another embodiment, the instruction block is formed by:
reading a first instruction from a current thread queue; adding the first instruction to a current instruction block if the number of instructions in the current instruction block does not reach a predetermined threshold, wherein the predetermined threshold depends on the number of CPU execution units; judging whether the first instruction is a jump instruction; and taking the current instruction block as the instruction block when the first instruction is a jump instruction.
In one possible design, instructions are supplemented to the current thread queue by: supplementing at least one instruction corresponding to a predicted branch to the current thread queue, according to the branch given by instruction prediction.
In one possible scheme, corresponding instructions are read from a decoded cache to supplement a current thread queue so as to form a to-be-executed thread queue, wherein the decoded cache stores a plurality of prefetched and decoded instructions.
In one possible embodiment, the jump instruction is a register operation instruction, and the instruction block further includes at least one memory operation instruction.
Further, in one embodiment, before the at least one memory operation instruction is completely executed, a target instruction of the jump instruction may be determined.
According to one possible design, the target instruction is fetched by:
judging whether a decoded cache contains the target instruction or not, wherein the decoded cache stores a plurality of prefetched and decoded instructions;
if so, fetching the target instruction from the decoded cache;
and if not, acquiring the target instruction from memory.
According to a second aspect, there is provided a CPU controller comprising:
the instruction extraction logic is used for sequentially extracting instructions from the current thread queue to form an instruction block and sending the instruction block to a CPU execution unit for execution, wherein the instruction block comprises a single jump instruction, and the jump instruction is the last instruction in the instruction block;
the instruction supplementing logic is used for supplementing at least one instruction to the current thread queue to form a to-be-executed thread queue;
the target determining logic is used for determining a target instruction of the jump instruction according to an execution result of the CPU execution unit;
the judging logic is used for judging whether the to-be-executed thread queue contains the target instruction or not;
and the queue operation logic is used for, when the to-be-executed thread queue does not contain the target instruction, clearing the to-be-executed thread queue, acquiring the target instruction, and adding the target instruction to the to-be-executed thread queue.
According to a third aspect, there is provided a central processing unit, CPU, comprising the controller of the second aspect.
According to the scheme provided by the embodiments of this specification, instruction prefetching is performed in the original manner and instructions are placed into the decoded cache and the thread queue. At execution time, however, it is ensured that a block of simultaneously executed code contains at most one jump instruction, and renaming and executable-resource allocation are not performed for subsequent instructions in the thread queue until the jump instruction's target instruction is determined. After the target instruction is determined, the instructions in the thread queue are compared against it for a match, ensuring that only the correct branch is executed. In this way, on top of the advantages of the original instruction prediction scheme, the rollback cost of executing mispredicted instructions is avoided, and overall CPU execution efficiency is improved.
Drawings
To more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for describing the embodiments are briefly introduced below. It is apparent that the drawings described below show only some embodiments of the present invention, and that those skilled in the art may obtain other drawings from them without creative effort.
FIG. 1 illustrates a CPU execution process according to one embodiment;
FIG. 2 illustrates a method of executing instructions in a CPU according to one embodiment;
FIG. 3 shows a functional block diagram of a CPU controller according to one embodiment.
Detailed Description
The scheme provided by the specification is described below with reference to the accompanying drawings.
FIG. 1 illustrates a CPU execution process according to one embodiment. As shown in FIG. 1, the overall execution process is divided into multiple stages. First is the instruction fetch stage; current mainstream CPUs can fetch 16 bytes per cycle, roughly 4 instructions at a time. Next comes instruction pre-decoding, whose main task is to determine instruction lengths and mark jump instructions. Mainstream CPUs typically achieve a throughput of 5 instructions per cycle at this stage.
The pre-decoding is followed by a decoding stage. The decode stage essentially converts complex instructions into fixed-length reduced instructions (micro-operations) while specifying the operation type. This stage typically also has a throughput of 5 instructions per cycle. Decoded instructions are placed into the decoded cache.
The decoded cache serves as an instruction cache pool in which a number of decoded instructions can be stored for reading by the next stage, the thread queue. Throughput from the decoded cache to the next stage can reach 6 instructions per cycle.
As previously described, a hyper-threaded CPU may have multiple threads executing in parallel. During execution, each thread reads the next instructions to be executed into its own thread cache queue, also referred to simply as a thread queue. If an instruction to be executed is present in the decoded cache, the cached instruction is used; otherwise, the corresponding instruction is fetched from the front end (memory) and added to the queue. FIG. 1 illustrates the respective thread queues of thread A and thread B, but it will be appreciated that a hyper-threaded CPU may support parallel execution of more threads.
From the thread queue, execution proceeds to the next stage: renaming and allocation of executable resources. Throughput from the thread queue to this stage can reach 5 instructions per cycle. The main work of the renaming and resource-allocation stage is to resolve register read/write dependencies, remove unnecessary dependencies to expose more instruction-level parallelism, and allocate the various resources needed during execution.
Once the resources needed for execution are allocated, instructions are sent to the CPU execution units. A CPU has multiple execution units; the most common CPUs today have 8 pipelines that can execute in parallel, i.e., 8 micro-operations per cycle. Although execution may proceed out of order, instructions are ultimately committed in program order.
As mentioned above, to avoid pipeline stalls or waits caused by instruction misses, almost all current CPUs use instruction prediction, also known as branch prediction, for instruction prediction and prefetching. At the end of each cycle, the prediction unit predicts the instructions to prefetch based on the table of historical execution state it maintains. If there is no jump, the fetch stage fetches the next 16-byte instruction block following the current fetch address. If there is a jump, the instructions of the predicted branch are fetched according to the prediction result.
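As a rough illustration, next-fetch-address selection under branch prediction can be sketched as follows. This is a minimal sketch with assumed types and a toy one-entry-per-branch history table; real predictors are considerably more elaborate.

```cpp
// Sketch of predicted instruction fetch: follow the history table on a
// predicted-taken branch, otherwise fetch the next sequential 16 bytes.
#include <cstdint>
#include <unordered_map>

struct Prediction { bool taken; std::uint64_t target; };

// Toy history table keyed by the branch instruction's address.
std::unordered_map<std::uint64_t, Prediction> historyTable;

constexpr std::uint64_t kFetchBytes = 16;  // bytes fetched per cycle

std::uint64_t nextFetchAddress(std::uint64_t fetchPC, bool isBranch) {
    if (isBranch) {
        auto it = historyTable.find(fetchPC);
        if (it != historyTable.end() && it->second.taken)
            return it->second.target;  // fetch the predicted branch
    }
    return fetchPC + kFetchBytes;      // no jump: next sequential block
}
```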
With continuous improvement, the accuracy of current instruction prediction schemes can exceed 90%, and some schemes even reach 98%. Mispredictions nevertheless remain possible, and when one occurs, a wrong instruction block is very likely fed into the execution units.
For example, assume instructions L1, L2, L3, L4, and L5, where L2 is a jump instruction that jumps to instruction L5 when some condition is satisfied and otherwise falls through to L3 and L4 in sequence. If instruction prediction predicts that the branch taken by L2 is L3, then L3 and subsequent instructions are fetched during the fetch stage, and L1, L2, L3, and L4 may all be loaded into the CPU execution units in the subsequent execution stage. If the execution result of L2 in fact indicates a jump to L5, then L3 and L4 are executed erroneously. In that case the CPU must flush the entire pipeline, roll back to the branch, warm-restart, and execute the other branch. Although the probability of a misprediction is low, this recovery is required whenever one occurs, and it is time-consuming, limiting maximum CPU efficiency to only about 75%.
To this end, embodiments of the present specification improve on this further, preserving and exploiting the advantages of high-accuracy instruction prediction as much as possible while preventing erroneous instructions from being executed when a prediction fails. According to one or more embodiments, instruction fetching is still performed in the original manner, with instructions placed into both the decoded cache and the thread queue. However, until the jump instruction obtains an effective target address, that is, until the target instruction is determined, renaming and executable-resource allocation are not performed for the subsequent code, guaranteeing that later execution completes correctly and avoiding the efficiency loss caused by prediction failure. Implementations of this idea are described below.
FIG. 2 illustrates a flow diagram of a method of executing instructions in a CPU. As shown in FIG. 2, the method comprises: step 21, extracting instructions from the current thread queue to form an instruction block and sending the instruction block to a CPU execution unit for execution, wherein the instruction block comprises a single jump instruction and the jump instruction is the last instruction in the instruction block; step 22, supplementing at least one instruction to the current thread queue to form a to-be-executed thread queue; step 23, determining the target instruction of the jump instruction according to the execution result of the CPU execution unit; step 24, judging whether the to-be-executed thread queue contains the target instruction; and, if the target instruction is not included, step 25, clearing the to-be-executed thread queue, acquiring the target instruction, and adding the target instruction to the to-be-executed thread queue. Specific ways of executing these steps are described below.
As previously described, according to embodiments of the present specification, instruction fetching is still performed in the original manner in order to retain the advantages of the existing instruction prediction scheme, with instructions placed into the decoded cache and the thread queue. That is, the instruction fetch, pre-decode, and decode stages of FIG. 1 execute as before, and decoded instructions are placed into the decoded cache, from which each thread reads instructions to form its thread queue. It is therefore assumed that the thread queue has been formed in the existing manner prior to step 21.
At step 21, instructions are fetched from the current thread queue to form instruction blocks for execution by the CPU execution units.
If the instructions fetched from the current thread queue contain no jump instruction, an instruction block of the maximum length matching the hardware's processing capacity is formed in the normal manner. In general, the maximum processing capacity of the CPU hardware depends on the number of execution units, and a predetermined threshold can be set as the maximum instruction block length accordingly. For example, the most common CPUs today have 8 pipelines that can execute in parallel, so the predetermined threshold may be set to 8, making the maximum block length 8. When the fetched instructions contain no jump instruction, 8 instructions can still be taken as one instruction block in the normal manner.
Unlike the conventional scheme, when the instructions to be fetched include a jump instruction, it is ensured that each instruction block sent to the CPU execution unit contains only one jump instruction and that this jump instruction is the last instruction in the block. That is, when instructions are transferred from the thread queue to the execution units, their types are examined, and instruction blocks are divided at jump-instruction boundaries, so that a jump instruction is always the last of each group of instructions sent to the CPU execution unit.
The instruction block described above may be formed in a variety of ways. In one embodiment, a predetermined threshold number of instructions is read from the current thread queue at a time, the predetermined threshold number corresponding to the CPU's maximum processing capacity, i.e., depending on the number of CPU execution units. It is then judged whether these instructions contain a jump instruction. If not, the fetched instructions are taken as the instruction block described above. If they do contain a jump instruction, the sequence is truncated at the jump instruction, which becomes its last instruction, and the truncated sequence is taken as the instruction block.
For example, assuming the predetermined threshold number is 8, 8 instructions are read from the current thread queue at a time. If these 8 instructions contain no jump instruction, all 8 are sent directly to the CPU execution unit as the instruction block. If they do contain a jump instruction, the sequence is cut at the jump instruction's position to form the instruction block: if the 5th instruction is a jump instruction, the sequence is truncated after it, and the 1st through 5th instructions form the instruction block.
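As an illustration only, the truncation rule can be sketched as follows, assuming a threshold of 8 and a toy instruction record; this is a behavioral sketch, not the patent's hardware logic.

```cpp
// Sketch of step 21: read up to kThreshold instructions from the thread
// queue and cut the block after the first jump instruction, so that a
// jump is always the last instruction of the block. Illustrative types.
#include <cstddef>
#include <deque>
#include <vector>

struct Instr { int id; bool isJump; };

constexpr std::size_t kThreshold = 8;  // derived from 8 parallel pipelines

std::vector<Instr> formInstructionBlock(std::deque<Instr>& threadQueue) {
    std::vector<Instr> block;
    while (!threadQueue.empty() && block.size() < kThreshold) {
        block.push_back(threadQueue.front());
        threadQueue.pop_front();
        if (block.back().isJump) break;  // truncate: jump ends the block
    }
    return block;
}
```

Given a queue holding instructions 1 through 8 with instruction 5 as a jump, formInstructionBlock returns instructions 1 through 5, matching the example above.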
This method ensures that a jump instruction is always the last of any group of instructions sent to the CPU execution units. It will be appreciated that the target instruction, i.e., the next instruction to execute, cannot be determined with certainty before the jump instruction executes; an instruction prediction scheme merely prefetches the predicted target instruction into the thread queue. Guaranteeing that the jump instruction is the last instruction of each group sent to the execution units effectively establishes an isolation point between the jump instruction and the predicted target instructions that follow it, ensuring that predicted target instructions are not sent into the execution units together with the jump instruction. This creates the opportunity to identify the true target instruction and, in turn, to correct a wrong target instruction when a misprediction occurs.
For example, in the foregoing example, L2 in instructions L1, L2, L3, L4, and L5 is a jump instruction. Even if the target branch of the jump instruction L2 is incorrectly predicted as L3, according to the above embodiment, only L1 and L2 are fed to the CPU execution unit as one instruction block for execution, and L1, L2, L3, and L4 are not executed together at the same time. In executing L1 and L2, opportunities are provided for determining the exact target branch of L2 and correcting the mispredicted branch.
As described above, once some instructions are fetched from the thread queue and sent to the CPU execution units, the number of instructions waiting in the thread queue temporarily decreases. Thus, while or after the instruction block is formed and sent to the execution units, the thread queue is replenished to maintain its length. That is, in step 22, at least one instruction is supplemented into the current thread queue, forming the to-be-executed thread queue. It will be appreciated that the to-be-executed thread queue is used to form the next instruction block to be sent to the CPU execution units.
According to one embodiment, in this step the thread queue may be supplemented in the conventional manner with instructions from the branch predicted by instruction prediction, forming the to-be-executed thread queue. In one embodiment, based on the prediction result, the corresponding instructions are read from the decoded cache, which stores a number of prefetched and decoded instructions, to supplement the current thread queue. In rare cases, instructions may instead be requested from the front end (e.g., memory), decoded, and then added to the thread queue.
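As an illustration, the replenishment step can be sketched as follows; the decoded-cache and decode interfaces are assumptions for illustration, with the predicted next instruction id standing in for the predicted branch target.

```cpp
// Sketch of step 22: refill the thread queue along the predicted branch,
// preferring the decoded cache and falling back to fetch-and-decode from
// memory in the rare miss case.
#include <deque>
#include <unordered_map>

struct Instr { int id; };

std::unordered_map<int, Instr> decodedCache;  // prefetched, decoded instrs

Instr fetchAndDecodeFromMemory(int id) { return Instr{id}; }  // slow path

void refillQueue(std::deque<Instr>& threadQueue, int predictedNextId) {
    auto it = decodedCache.find(predictedNextId);
    if (it != decodedCache.end())
        threadQueue.push_back(it->second);  // common case: cache hit
    else
        threadQueue.push_back(fetchAndDecodeFromMemory(predictedNextId));
}
```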
On the other hand, after the instruction block formed in step 21 is sent to the CPU execution unit, the execution unit adds these instructions to its pipelines for execution. In particular, the last instruction of the block is a jump instruction, and executing it determines the jump target address, i.e., the target instruction, with certainty. That is, in step 23, the target instruction of the jump instruction is determined based on the execution result of the CPU execution unit.
Next, in step 24, it is determined whether the to-be-executed thread queue formed in step 22 contains the target instruction. If it does, the target instruction to be executed next is already in the to-be-executed thread queue, the instruction prediction was correct, and no additional operation is required; after the current instruction block finishes executing, the method of FIG. 2 can be repeated to fetch the next instruction block from the thread queue for execution.
However, if the to-be-executed thread queue does not contain the target instruction, the instruction that should execute next has not been placed in the queue; in other words, the sequence of instructions currently in the to-be-executed queue is not what should execute next. The likely cause is an instruction prediction miss that prefetched the wrong branch into the thread queue. In that case, at step 25, the current to-be-executed thread queue is cleared (flushed), the target instruction is fetched, and the target instruction is added to the to-be-executed thread queue.
Specifically, in step 25, since the current to-be-executed thread queue contains erroneous instructions that should not be executed, a flush operation is first performed on it. A flush is a CPU operation that clears all data held by its target; clearing the current to-be-executed thread queue means removing all instructions from the queue and emptying it.
Along with clearing the erroneous instructions, the correct target instruction is acquired and added to the to-be-executed thread queue.
In one embodiment, it is first determined whether the decoded cache contains the correct target instruction; if so, the target instruction is fetched from the decoded cache. It will be appreciated that although the wrong branch was added to the thread queue, the error is usually only in the order of execution: instruction prefetching under the prediction scheme will have continued to prefetch and decode many instructions into the decoded cache. In most cases, therefore, the correct target instruction can be fetched from the decoded cache and added to the to-be-executed thread queue. Along with the target instruction itself, subsequent instructions of the branch it belongs to are correspondingly added to the queue.
In the very rare case that the decoded cache does not contain the target instruction, the target instruction can be requested from memory, decoded, and then added to the to-be-executed thread queue.
After the operation of step 25, the to-be-executed thread queue is guaranteed to contain the correct instruction branch, so that instruction blocks subsequently sent to the CPU execution units are also correct, and mispredicted instructions are never actually executed by the execution units.
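Steps 24 and 25 can likewise be sketched in a few lines, using the same illustrative types as above; in hardware this would be realized with comparators and queue-control circuitry rather than software.

```cpp
// Sketch of steps 24-25: once the jump resolves its true target, check
// whether the pending queue already holds it; if not, flush the queue and
// fetch the target, decoded cache first, then memory.
#include <algorithm>
#include <deque>
#include <unordered_map>

struct Instr { int id; };

std::unordered_map<int, Instr> decodedCache;                 // decoded pool
Instr fetchAndDecodeFromMemory(int id) { return Instr{id}; } // slow path

void confirmOrFlush(std::deque<Instr>& pendingQueue, int targetId) {
    // Step 24: does the pending queue already hold the true target?
    bool hit = std::any_of(pendingQueue.begin(), pendingQueue.end(),
                           [&](const Instr& i) { return i.id == targetId; });
    if (hit) return;                 // prediction correct, nothing to do
    // Step 25: flush the mispredicted branch, then fetch the real target.
    pendingQueue.clear();
    auto it = decodedCache.find(targetId);
    pendingQueue.push_back(it != decodedCache.end()
                               ? it->second
                               : fetchAndDecodeFromMemory(targetId));
}
```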
The above process is illustrated below with a specific example. Assume the following instruction sequence (the /* ... */ comments explain each instruction):
Loop: 1. fld    f0, 0(x1)      /* load memory contents at the address stored in x1 into register f0 */
      2. fadd.d f4, f0, f2     /* store the result of f0 + f2 in register f4 */
      3. fsd    f4, 0(x1)      /* store the data in f4 to memory at the address stored in x1 */
      4. addi   x1, x1, -8     /* subtract 8 from x1 and store the result in register x1 */
      5. bne    x1, x2, Loop   /* if x1 is not equal to x2, jump to Loop (instruction 1); otherwise proceed to instruction 6 */
      6. addi   x2, x2, 1      /* add 1 to x2 and store the result in register x2 */
      7. ...
      8. ...
In this instruction sequence, instruction 5 is a jump instruction that branches depending on whether x1 equals x2: the target instruction of the first branch is instruction 1, and the target instruction of the second branch is instruction 6. Assume these instructions have been prefetched, decoded, and placed into the decoded cache in the conventional manner. Further assume that, based on the instruction prediction result, instruction 5 is predicted to take the second branch, so instructions 1 through 8 are placed in the thread queue in order.
Conventionally, instructions 1 through 6 might be sent to the CPU execution units as a single instruction block. If the prediction is wrong, that is, if instruction 5 should have taken the first branch to instruction 1, then instruction 6 is executed erroneously, triggering a time-consuming flush and rollback that sharply reduces CPU efficiency.
According to the scheme of the embodiments of this specification, in step 21, since instruction 5 is a jump instruction, only instructions 1 through 5 are formed into an instruction block and sent to the CPU execution unit for execution.
At step 22, instructions are replenished from the decoded cache into the thread queue, forming the to-be-executed thread queue. Since the predicted branch is instruction 6, the to-be-executed thread queue may now include instruction 6, instruction 7, and further supplemented instructions.
In step 23, it can be determined that the target instruction of the jump instruction 5 should actually be instruction 1 in the first branch, based on the execution results of the CPU execution unit.
Next, at step 24, it can be determined that the to-be-executed thread queue does not contain target instruction 1. At this point, in step 25, the to-be-executed thread queue is cleared, and instruction 1 is fetched from the decoded cache and added to the queue, reforming the to-be-executed thread queue along the correct branch.
Thus, in the above embodiment, instruction prefetching is performed in the original manner and instructions are placed into the decoded cache and the thread queue; at execution time, however, it is ensured that a simultaneously executed code block contains at most one jump instruction, and the renaming and executable-resource allocation stages are not performed for subsequent instructions in the thread queue until the jump instruction's target instruction is determined. After the target instruction is determined, the instructions in the thread queue are checked against it for a match, so that only the correct branch is executed and the efficiency loss caused by prediction failure is avoided.
It will be appreciated that improving CPU execution efficiency requires using every execution cycle as fully as possible, reducing pipeline stalls and waits, and avoiding idle cycles with no instructions to run. To prevent the time-consuming rollback caused by instruction prediction errors, the scheme above checks and compares the thread queue after the jump instruction's target is determined, to ensure the correctness of subsequent operations. Whether these "additional" operations themselves introduce pipeline waits or idle cycles, and thus hurt CPU execution efficiency, deserves consideration. The inventor's research and analysis show that the operations above do not waste CPU execution cycles and do not affect execution efficiency, as demonstrated next.
First, the optimization scheme of this specification does not introduce intermediate cycles with no instructions to execute. Typically, a CPU fetches multiple instructions per cycle, for example 5. Statistically, 5 instructions contain on average 1 jump instruction, 1.4 memory reads, 1 memory write, and 1.6 computation operations. Statistics also show that most jump instructions depend only on simple register operations, so most jump instructions complete within 1 cycle.
Any memory operation, by contrast, incurs a longer delay. Specifically, a memory operation that hits the L1 cache takes 4 cycles; one that hits the L2 cache takes 12 cycles; and a cache miss requires reading from memory, costing at least 50 cycles. The jump instruction therefore finishes executing first, so the target address can be determined as early as possible without causing any wait.
For example, in the block formed from instructions 1 through 5 above, instruction 1 must fetch data from memory, which takes 4 cycles even on an L1 cache hit. Because of data dependencies, instructions 2 and 3 must then execute in turn, adding 2 more cycles. Instructions 4 and 5, however, are register operations that complete in one cycle each, so whether instruction 5 jumps is known well before instructions 1, 2, and 3 have finished.
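The arithmetic can be made explicit with a small worked computation, purely illustrative and using the latency assumptions stated above: 4 cycles for an L1-hit load, 1 cycle per dependent ALU operation, 1 cycle per register-only operation.

```cpp
// Worked timing for instructions 1-5: the register-only chain resolves the
// branch long before the memory chain finishes.
#include <cstdio>

int main() {
    int fld  = 4;         // instr 1: load completes at cycle 4 (L1 hit)
    int fadd = fld + 1;   // instr 2 depends on instr 1 -> cycle 5
    int fsd  = fadd + 1;  // instr 3 depends on instr 2 -> cycle 6
    int addi = 1;         // instr 4: register op -> cycle 1
    int bne  = addi + 1;  // instr 5 depends on instr 4 -> cycle 2
    std::printf("branch resolved at cycle %d, memory chain done at cycle %d\n",
                bne, fsd);  // prints 2 and 6
    return 0;
}
```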
In addition, mainstream CPUs today run multiple threads. With two threads, for example, the CPU fetches instructions for each thread in turn: first thread A, then thread B, then thread A again. Thread A thus already incurs a one-cycle wait, and the target instruction of the jump instruction can be determined within that existing wait cycle.
In fact, in today's big-data cloud server environments, most performance problems center on memory access latency, which often reaches hundreds of instruction cycles. That time is more than sufficient to determine the jump instruction's target and confirm the correctness of the thread queue without affecting CPU operation. By the analysis above, the scheme therefore introduces no idle cycles and no waits.
The interaction of the above optimization with the instruction prefetching and data prefetching behavior of the original instruction prediction scheme must also be considered. The original prediction method runs not-yet-confirmed instructions in the pre-execution stage, helping the CPU prefetch much of the code to be executed and thereby reducing delays caused by instruction misses. That behavior can continue unchanged here; the only difference is that valid instructions are executed only after the jump instruction's target address is determined. As for data prefetching, the original speculative execution likewise moves data that will be needed from memory into the CPU cache in advance. These existing mechanisms can also continue in the embodiments of this specification, with valid instructions executed, and the required data read, only after the target address is known.
The scheme of the embodiments of this specification can therefore fully exploit the advantages of existing prediction methods and, combined with the characteristics and typical operating environments of current CPUs, substantially improve CPU utilization and the throughput of cloud computing clusters.
As is known to those skilled in the art, the execution of instructions in a CPU is governed by a controller. The controller is the command and control center of the entire CPU and coordinates operation among its components. A controller generally comprises instruction control logic, timing control logic, bus control logic, interrupt control logic, and so on. The instruction control logic performs the operations of fetching, parsing, and executing instructions.
According to the solution of the above-described embodiment, the original instruction control process is optimized and adjusted, so that the controller circuit, in particular, the instruction control logic therein, can be modified on a hardware level accordingly to complete the control process described in the above-described embodiment.
FIG. 3 shows a functional block diagram of a CPU controller according to one embodiment. As shown in FIG. 3, the controller 300 may include: instruction extraction logic 31, configured to extract instructions from the current thread queue to form an instruction block and send it to the CPU execution unit for execution, wherein the instruction block comprises a single jump instruction and the jump instruction is the last instruction in the instruction block; instruction supplement logic 32, configured to supplement at least one instruction to the current thread queue to form a to-be-executed thread queue; target determination logic 33, configured to determine the target instruction of the jump instruction according to the execution result of the CPU execution unit; judging logic 34, configured to judge whether the to-be-executed thread queue contains the target instruction; and queue operation logic 35, configured to clear the to-be-executed thread queue, acquire the target instruction, and add the target instruction to the to-be-executed thread queue when the to-be-executed thread queue does not contain the target instruction.
The above logic blocks can be implemented with various circuit elements as required; for example, the judging logic can be implemented with a set of comparators.
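For orientation only, the division of responsibilities among the five logic blocks can be summarized as an interface sketch; the patent describes hardware circuit logic, not software, and all names here are hypothetical.

```cpp
// Organizational sketch mirroring FIG. 3: one method per logic block.
struct CpuControllerLogic {
    virtual ~CpuControllerLogic() = default;
    virtual void fetchBlockToExecutionUnits() = 0;  // extraction logic 31
    virtual void replenishThreadQueue()       = 0;  // supplement logic 32
    virtual int  resolveJumpTarget()          = 0;  // target logic 33
    virtual bool queueContainsTarget(int id)  = 0;  // judging logic 34
    virtual void flushAndRefill(int id)       = 0;  // queue op logic 35
};
```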
Through the controller, the control process shown in fig. 2 can be realized, so that efficiency reduction caused by prediction errors is prevented and avoided on the basis of utilizing the advantages of instruction prediction and prefetching, and the execution efficiency of the CPU is comprehensively improved.
It should be understood by those skilled in the art that the above-mentioned embodiments are only exemplary embodiments of the present invention, and are not intended to limit the scope of the present invention, and any modifications, equivalent substitutions, improvements, etc. made on the basis of the technical solutions of the present invention should be included in the scope of the present invention.
Claims (8)
1. A method of executing instructions in a CPU, comprising:
reading instructions from the decoded cache to form a current thread queue;
extracting an instruction from a current thread queue to form an instruction block and sending the instruction block to a CPU execution unit for execution, wherein the instruction block comprises a single jump instruction, and the jump instruction is the last instruction in the instruction block;
supplementing at least one instruction to a current thread queue to form a to-be-executed thread queue; wherein supplementing at least one instruction to the current thread queue comprises: supplementing at least one instruction corresponding to the predicted branch to a current thread queue according to the predicted branch predicted by the instruction;
determining a target instruction of the jump instruction according to an execution result of a CPU execution unit;
judging whether the to-be-executed thread queue contains the target instruction or not;
and under the condition that the target instruction is not contained in the to-be-executed thread queue, clearing the to-be-executed thread queue, acquiring the target instruction, and adding the target instruction into the to-be-executed thread queue.
2. The method of claim 1, wherein fetching instructions from a current thread queue to form an instruction block comprises:
reading a predetermined threshold number of instructions from a current thread queue, the predetermined threshold number being dependent on the number of CPU execution units;
judging whether the predetermined threshold number of instructions contain a jump instruction;
and if the instructions contain a jump instruction, truncating the sequence at the jump instruction so that the jump instruction is the last instruction, and taking the truncated instructions as the instruction block.
3. The method of claim 1, wherein supplementing at least one instruction to a current thread queue, forming a pending execution thread queue comprises:
and reading a corresponding instruction from a decoded cache to supplement the instruction to a current thread queue, wherein the decoded cache stores a plurality of prefetched and decoded instructions.
4. The method of claim 1, wherein the jump instruction is a register operation instruction, the instruction block further comprising at least one memory operation instruction.
5. The method of claim 4, wherein determining a target instruction of the jump instruction from the results of the execution by the CPU execution unit comprises:
and determining a target instruction of the jump instruction before the execution of the at least one memory operation instruction is finished.
6. The method of claim 1, wherein fetching the target instruction comprises:
judging whether a decoded cache contains the target instruction or not, wherein the decoded cache stores a plurality of prefetched and decoded instructions;
if so, fetching the target instruction from the decoded cache;
and if not, acquiring the target instruction from memory.
7. A CPU controller comprising:
instruction fetch logic to fetch instructions from the decoded cache to form a current thread queue, and to sequentially extract instructions from the current thread queue to form an instruction block and send the instruction block to a CPU execution unit for execution, wherein the instruction block comprises a single jump instruction, and the jump instruction is the last instruction in the instruction block;
the instruction supplementing logic is used for supplementing at least one instruction to the current thread queue to form a to-be-executed thread queue; wherein supplementing at least one instruction to the current thread queue comprises: supplementing at least one instruction corresponding to the predicted branch to a current thread queue according to the predicted branch predicted by the instruction;
the target determining logic is used for determining a target instruction of the jump instruction according to an execution result of the CPU execution unit;
the judging logic is used for judging whether the to-be-executed thread queue contains the target instruction or not;
and the queue operation logic is used for clearing the thread queue to be executed, acquiring the target instruction and adding the target instruction into the thread queue to be executed under the condition that the thread queue to be executed does not contain the target instruction.
8. A central processing unit, CPU, comprising the controller of claim 7.
Publications (2)
| Publication Number | Publication Date |
|---|---|
| HK40002154A (en) | 2020-03-13 |
| HK40002154B (en) | 2021-03-12 |