Detailed Description
The following description of the embodiments of the present application is made clearly and completely with reference to the accompanying drawings. It is evident that the described embodiments are some, but not all, of the embodiments of the application. All other embodiments obtained by those skilled in the art based on the embodiments of the application without inventive effort fall within the scope of the application.
The flow diagrams depicted in the figures are merely illustrative; not all of the elements and operations/steps are necessarily included, nor must they be performed in the order described. For example, some operations/steps may be further divided, combined, or partially merged, so that the actual order of execution may change according to the actual situation.
The embodiments of the application provide a task scheduling method, a handheld financial terminal, and a readable storage medium.
Some embodiments of the present application are described in detail below with reference to the accompanying drawings. The following embodiments and features of the embodiments may be combined with each other without conflict.
Referring to fig. 1, fig. 1 is a flow chart of a task scheduling method according to an embodiment of the application. The task scheduling method is applied to handheld financial terminals with limited hardware resources, such as personal digital assistants (PDAs) and POS machines, in order to schedule tasks in the internal memory of the handheld financial terminal. It will be appreciated that the task scheduling method may also be applied to other terminal devices with limited hardware resources, which is not limited herein.
As shown in fig. 1, in some embodiments, the task scheduling method includes steps S101-S105.
Step S101, obtaining a load memory of an internal memory in the handheld financial terminal.
In some embodiments, the step S101 of obtaining the load memory of the internal memory of the handheld financial terminal includes determining the load memory according to a linked list that records the memory occupied by the tasks whose task state in the internal memory is the running state.
Illustratively, the linked list includes at least one node, each node including a data field for holding data members and a pointer field for pointing to the address of the next node.
The load memory of the internal memory is managed through the linked list. For example, a node of the linked list is stored at the head of the memory space of each running task; the data field of the node records the size of the memory space occupied by the current task, and the pointer field of the node points to the next running task. The size of the load memory is determined by traversing this linked list.
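As a minimal illustrative sketch only (the node layout, the field names, and the helper load_memory are assumptions, not the claimed implementation), the linked list and the load-memory calculation of step S101 could be expressed in C as follows:

```c
#include <stddef.h>

/* Hypothetical node layout: one node is stored at the head of the memory
 * space of each running task; names are assumptions for illustration. */
struct task_node {
    size_t mem_size;          /* data field: memory occupied by this running task */
    struct task_node *next;   /* pointer field: node of the next running task, or NULL */
};

/* Traverse the linked list and sum the memory occupied by the running tasks,
 * which gives the load memory of the internal memory (step S101). */
size_t load_memory(const struct task_node *head)
{
    size_t total = 0;
    for (const struct task_node *n = head; n != NULL; n = n->next)
        total += n->mem_size;
    return total;
}
```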
Illustratively, a task is said to be in the running state when it is interacting with the processor.
By way of example, obtaining the load memory through the linked list allows dynamic changes of the storage space in the internal memory to be reflected in time, so that the memory space occupied by the tasks whose task state is the running state can be accurately determined.
Step S102, when the load memory is greater than or equal to an early warning memory threshold, determining a task that is not running in the task queue as a first target task.
The task queue is stored in the internal memory and includes running tasks and non-running tasks, where a running task is a task in the running state and a non-running task is a task that is temporarily not running or cannot run.
The non-running tasks include, but are not limited to, tasks in a ready state and tasks in a blocking state. Specifically, a task is said to be in the ready state when it is waiting to interact with the processor, and is said to be in the blocking state when it is performing a time-consuming operation such as input or output, or is waiting for a certain condition to be triggered.
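For illustration in the sketches that follow, a task in the task queue could be represented by a descriptor such as the one below; the structure and its field names are assumptions rather than part of the embodiment:

```c
#include <stddef.h>

/* Task states referred to in the embodiments. */
enum task_state { TASK_RUNNING, TASK_READY, TASK_BLOCKED };

/* Hypothetical task descriptor used by the later sketches. */
struct task {
    int             id;        /* task identifier */
    enum task_state state;     /* running, ready, or blocking (blocked) state */
    int             run_prio;  /* running priority of the task */
    size_t          mem;       /* task memory: memory space occupied by the task */
};
```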
In some embodiments, the step S102 of determining the task that is not running in the task queue as the first target task when the load memory is greater than or equal to the early warning memory threshold includes determining the task in the task queue in a ready state and the task in the task queue in a blocking state as the first target task when the load memory is greater than or equal to the early warning memory threshold.
For example, when the memory occupied by the tasks whose task state is the running state is greater than or equal to the early warning memory threshold, that is, when the load memory is greater than or equal to the early warning memory threshold, the tasks in the ready state and the blocking state in the task queue, which are not running, are determined as the first target tasks and scheduled to an external memory, so that enough memory space is provided for the tasks in the running state.
For example, the early warning memory threshold may be set according to actual requirements, for example, the early warning memory threshold may be set to 90% of the storage space of the internal memory, and when the memory occupied by the task in the task queue in the running state is greater than or equal to 90% of the storage space of the internal memory, all the tasks that are not running in the task queue are scheduled to the external memory, that is, all the tasks in the ready state and the blocking state in the task queue are scheduled to the external memory. Of course, the early warning memory threshold may be other values, which are not limited herein.
When the memory occupied by the running tasks in the task queue is large, more space is needed in the internal memory to store the data generated as the processor executes those tasks. Meanwhile, in order to reduce the frequent calculation and frequent scheduling of tasks in the prior art, when the memory occupied by the running tasks, that is, the load memory, is greater than or equal to the early warning memory threshold, the tasks that are not running in the task queue are determined as first target tasks. Enough space is thereby reserved in the internal memory to store the running tasks and the data they generate, the storage space of the internal memory is used reasonably, and the user experience is improved.
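As a hedged sketch of this decision, using the task descriptor introduced above (the 90% ratio follows the example above; the helper name and parameters are illustrative assumptions):

```c
#include <stddef.h>

#define WARN_RATIO 0.90   /* early warning memory threshold, e.g. 90% of the internal memory */

/* Step S102 sketch: when the load memory reaches the early warning threshold,
 * any task that is not running is treated as a first target task. */
int is_first_target_when_overloaded(const struct task *t,
                                    size_t load_mem, size_t internal_mem)
{
    size_t warn_threshold = (size_t)((double)internal_mem * WARN_RATIO);
    return load_mem >= warn_threshold && t->state != TASK_RUNNING;
}
```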
Step S103, when the load memory is smaller than the early warning memory threshold, sequentially obtaining the task memory occupied by each task in the current task queue according to a preset task arrangement order, and calculating a task accumulation memory according to the task memories.
For example, when the load memory is smaller than the early warning memory threshold, the tasks whose task state is the running state occupy less memory and the data generated by the processor executing those tasks is also smaller, so at least some of the non-running tasks can remain stored in the internal memory.
Illustratively, the tasks in the task queue are ordered so as to more reasonably determine which non-running tasks remain in the internal memory and which non-running tasks need to be moved out of the internal memory.
Referring to fig. 2, fig. 2 is a flowchart of a task scheduling method according to another embodiment of the present application. In some embodiments, as shown in fig. 2, before the step S103 of sequentially obtaining the task memory occupied by each task in the current task queue according to the preset task arrangement order and calculating the task accumulation memory according to the task memories, the method further includes: step S106, determining an arrangement priority of each task in the task queue according to the task state priority and/or the running priority of each task in the task queue; and step S107, determining the preset task arrangement order according to the arrangement priority corresponding to each task in the task queue, thereby determining the arrangement order of the tasks in the sorted current task queue.
Each task in the task queue is assigned a respective running priority according to actual demands, where the running priority indicates the importance of the task. The arrangement priority of each task in the task queue is determined according to the task state priority and/or the running priority of each task, so that tasks with a lower arrangement priority are determined as the first target tasks, which improves the rationality of task scheduling.
Referring to fig. 3, fig. 3 is a flowchart of a task scheduling method according to another embodiment of the present application. In some embodiments, as shown in fig. 3, the step S106 of determining the arrangement priority of each task in the task queue according to the task state priority and/or the running priority of each task includes: step S1061, sorting the tasks in the task queue by task state priority according to the task state of each task, where, in the task queue, the task state priority corresponding to a task in the ready state is higher than the task state priority corresponding to a task in the blocking state; and step S1062, sorting tasks in the same task state by running priority according to the importance of each task, thereby determining the arrangement priority of each task in the task queue, where, when tasks with the same task state exist in the task queue, the arrangement priority corresponding to the task with the higher running priority is higher than the arrangement priority corresponding to the task with the lower running priority.
For example, since the task in the blocking state needs to wait for a period of time to enter the ready state to wait for running, the task in the blocking state corresponds to a lower task state priority, and when the space in the internal memory is insufficient, the task in the blocking state in the internal memory can be preferentially determined as the first target task and is scheduled to the external memory, so that the internal memory has enough memory space to store the task in the running state and the ready state.
For example, when there are tasks with the same task state in the task queue, such as two or more tasks in the ready state, the processor preferentially executes the task with the higher running priority. Therefore, when tasks with the same task state exist in the task queue, the task with the higher running priority corresponds to the higher arrangement priority; for example, of two tasks in the blocking state, the one with the higher running priority has the higher arrangement priority, so that the task with the lower running priority is scheduled to the external memory and enough space is reserved in the internal memory for the task with the higher running priority.
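One possible way to realize this arrangement priority, assuming the current task queue is held as an array of pointers to the task descriptor sketched earlier and that a smaller priority value means a higher running priority (as in fig. 4a below), is a standard qsort() comparator; this is an illustrative sketch, not the prescribed ordering code:

```c
#include <stdlib.h>

/* Rank task states for the arrangement order: running tasks stay at the
 * front of the queue, ready tasks rank above blocking (blocked) tasks. */
static int state_rank(enum task_state s)
{
    switch (s) {
    case TASK_RUNNING: return 0;
    case TASK_READY:   return 1;
    default:           return 2;   /* TASK_BLOCKED */
    }
}

/* qsort() comparator over an array of struct task pointers: order first by
 * task state priority, then by running priority within the same state. */
int cmp_arrangement(const void *a, const void *b)
{
    const struct task *ta = *(const struct task *const *)a;
    const struct task *tb = *(const struct task *const *)b;
    if (state_rank(ta->state) != state_rank(tb->state))
        return state_rank(ta->state) - state_rank(tb->state);
    return ta->run_prio - tb->run_prio;   /* smaller value = higher running priority */
}
```

Calling qsort(queue, n, sizeof *queue, cmp_arrangement) on the queue of fig. 4a would then produce an order consistent with the sorted queue of fig. 4b.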
Referring to fig. 4a, fig. 4a is a schematic view of a task queue according to an embodiment of the application. As shown in fig. 4a, the internal memory includes tasks 1 and 2 in the running state, tasks 3 and 5 in the ready state, and tasks 4 and 6 in the blocking state. The running priority of each task in fig. 4a is shown as the priority value in brackets, where a smaller priority value indicates a higher running priority. For example, the priority value of task 3 in the ready state is 2 and the priority value of task 5 is 4, so the running priority of task 3 is higher than that of task 5.
When the load memory of the internal memory is smaller than the early warning memory threshold, the arrangement priority of each task in the task queue is determined according to the task state priority and/or the running priority of each task, and the preset task arrangement order is determined according to the arrangement priority corresponding to each task. The task state priority corresponding to a task in the ready state is higher than that corresponding to a task in the blocking state; for example, in fig. 4a, tasks 3 and 5 rank higher than tasks 4 and 6. Tasks in the same task state are ranked by running priority, with the task of higher running priority ranked first; for example, in fig. 4a, task 3 ranks higher than task 5 because task 3 has the higher running priority.
The preset task arrangement order is determined according to the arrangement priority corresponding to each task in the task queue; for example, the tasks in the task queue are arranged from high to low arrangement priority to obtain the current task queue. Referring to fig. 4b, fig. 4b is a schematic view of a task queue according to another embodiment of the application. Arranging the tasks of the task queue in fig. 4a from high to low arrangement priority yields the sorted current task queue shown in fig. 4b.
By determining the arrangement priority of each task in the task queue according to the task state priority and/or the running priority of each task, and arranging the tasks according to the arrangement priority, the non-running tasks that would be executed with higher priority are counted first when the task memories are accumulated and are therefore retained in the internal memory.
In some embodiments, when the load memory is smaller than the early warning memory threshold, the method further includes: sequentially obtaining the task memory occupied by each task in the current task queue according to the preset task arrangement order, and adding each obtained task memory to the current task accumulation memory; stopping the calculation of the task accumulation memory when the task accumulation memory is within a preset memory occupation range; and determining the tasks that have not been accumulated in the current task queue as the first target tasks.
Illustratively, the task memory occupied by each task is obtained sequentially according to the arrangement order of the tasks in the task queue. The value of the task accumulation memory may be preset, for example to 0, and the task memory corresponding to each obtained task is then added to the task accumulation memory in turn. Taking fig. 4b as an example, the task memory a corresponding to task 1 is obtained first and added to the current task accumulation memory (for example, 0), giving a task accumulation memory of a; next, the task memory b corresponding to task 2 is obtained and added to the current task accumulation memory (that is, a), giving a task accumulation memory of a+b; and so on, until the value of the task accumulation memory is within the preset memory occupation range or the task memories of all tasks in the current task queue have been accumulated. When the task accumulation memory is within the preset memory occupation range, the calculation of the task accumulation memory is stopped and the tasks that have not been accumulated in the current task queue are determined as the first target tasks, so that after the first target tasks are scheduled to an external memory, the total memory occupied by the tasks in the current task queue remains within the preset memory occupation range.
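A possible realization of this accumulation, operating on the sorted queue and the task descriptor sketched earlier (the per-task output flags and the stop condition are illustrative assumptions), is:

```c
#include <stddef.h>

/* Steps S103/S104 sketch: accumulate task memories in the preset arrangement
 * order; the tasks that are not accumulated become first target tasks. */
size_t accumulate_and_mark(struct task **queue, int n,
                           size_t range_lo, size_t range_hi,
                           int *is_first_target /* out: one flag per task */)
{
    size_t acc = 0;   /* task accumulation memory, preset to 0 */
    int i = 0;

    for (; i < n; i++) {
        acc += queue[i]->mem;      /* add this task memory to the accumulation */
        is_first_target[i] = 0;    /* accumulated tasks remain in the internal memory */
        if (acc >= range_lo && acc <= range_hi) {
            i++;                   /* stop once the accumulation is inside the range */
            break;
        }
    }
    for (; i < n; i++)
        is_first_target[i] = 1;    /* not accumulated: to be scheduled to external memory */
    return acc;
}
```

With range_lo and range_hi set, for example, to 95% and 98% of the storage space of the internal memory, this corresponds to the memory occupation range described below.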
Step S104, when the task accumulation memory is within the preset memory occupation range, determining a task that has not been accumulated in the current task queue as the first target task.
For example, in order to keep the total memory occupied by the tasks in the task queue of the internal memory within the preset memory occupation range, so that the handheld financial terminal makes maximum use of the storage space of the internal memory and the running tasks can run smoothly, the tasks that have not been accumulated in the current task queue are determined as the first target tasks when the task accumulation memory is within the preset memory occupation range, and the tasks exceeding the preset memory occupation range are thus scheduled to the external memory.
Taking fig. 4b as an example, the task memories of the tasks are accumulated in the order shown in fig. 4b to obtain the task accumulation memory. If, after the task memory corresponding to task 5 is counted into the task accumulation memory, the value of the task accumulation memory is within the preset memory occupation range, the calculation of the task accumulation memory is stopped, and tasks 4 and 6, which have not been accumulated, are determined as the first target tasks.
Specifically, the preset memory occupation range may be set according to actual requirements; for example, it may be set to 95%-98% of the storage space of the internal memory. When the calculated task accumulation memory reaches 95%-98% of the storage space of the internal memory, the accumulation is stopped and the tasks that have not been accumulated are determined as the first target tasks, so that the total memory occupied by the tasks in the task queue is kept at 95%-98% of the storage space of the internal memory. Of course, the memory occupation range is not limited to this and may be another preset range, which is not limited herein.
Step S105, scheduling the first target task to an external memory.
Illustratively, when the first target task is scheduled to the external memory, the whole address space of the first target task may be scheduled to the external memory, or only part of the address space of the first target task may be scheduled to the external memory according to the actual situation, which is not limited herein.
By way of example, the external memory may be a hard disk, a floppy disk, an optical disc, a USB flash drive, or the like, without limitation.
Referring to fig. 4c, fig. 4c is a schematic view of a task scheduling method according to an embodiment of the application. As shown in fig. 4c, if, after the task memory corresponding to task 5 is counted into the task accumulation memory, the value of the task accumulation memory is within the preset memory occupation range, the calculation of the task accumulation memory is stopped, and tasks 4 and 6, which have not been accumulated, are determined as the first target tasks and scheduled to an external memory.
Referring to fig. 5, fig. 5 is a flowchart of a task scheduling method according to another embodiment of the present application. As shown in fig. 5, in some embodiments, step S105 includes: step S1051, determining a target external memory for storing the first target task according to the memory space occupied by the first target task; and step S1052, scheduling at least part of the address space of the first target task from the internal memory to the target external memory.
If the external memory in the present application includes a plurality of external memories, before the first target task is scheduled to the external memory, the target external memory for storing the first target task is determined according to the memory space occupied by the first target task, and at least part of the address space of the first target task is scheduled from the internal memory to the target external memory. For example, if the space occupied by the first target task is large, the external memory with the larger available space is determined as the target external memory. It may be appreciated that the number of target external memories may be one or more; that is, the first target tasks may be scheduled to the same external memory or to different external memories, which is not limited herein.
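A sketch of step S1051 under the assumptions that each external memory is described by a simple descriptor and that the device with the most free space that can hold the task is preferred (consistent with the example above, but not the only possible policy):

```c
#include <stddef.h>

/* Hypothetical descriptor of one external memory (hard disk, USB flash drive, ...). */
struct ext_mem {
    size_t free_space;   /* available storage space of this external memory */
    /* ... device handle or identifier omitted ... */
};

/* Choose the target external memory for a first target task occupying
 * task_mem bytes of memory space. */
struct ext_mem *choose_target_external(struct ext_mem *devs, int n_devs,
                                       size_t task_mem)
{
    struct ext_mem *best = NULL;
    for (int i = 0; i < n_devs; i++) {
        if (devs[i].free_space >= task_mem &&
            (best == NULL || devs[i].free_space > best->free_space))
            best = &devs[i];   /* prefer the external memory with the larger free space */
    }
    return best;               /* NULL if no external memory has enough space */
}
```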
Determining the target external memory for storing the first target task according to the memory space occupied by the first target task improves the flexibility of scheduling the first target task to the external memory, so that the memory space of the external memory can be utilized more reasonably.
In some embodiments, the arrangement order of the first target tasks in the external memory is determined according to the arrangement priority corresponding to each first target task. Taking fig. 4c as an example, tasks 4 and 6 are scheduled to the external memory, and their arrangement order in the external memory is determined according to their respective arrangement priorities; for example, the first target tasks in the external memory are arranged from high to low arrangement priority, that is, tasks 4 and 6 are stored in the external memory in descending order of arrangement priority.
In some embodiments, the task scheduling method further includes: when the total memory occupied by the tasks in the internal memory is smaller than the minimum value of the memory occupation range, determining at least one task in the external memory as a second target task according to the arrangement order of the first target tasks in the external memory; and scheduling the second target task to the internal memory.
Illustratively, the second target task in the external memory is scheduled to the internal memory when the total memory occupied by the tasks in the task queue of the internal memory is less than the minimum value of the memory occupation range. For example, when the memory occupation range is 95%-98% of the storage space of the internal memory, the second target task is scheduled to the internal memory once the total memory occupied by the tasks in the task queue falls below 95% of the storage space of the internal memory, so that the memory occupied by the tasks in the internal memory is kept within the memory occupation range and the storage space of the internal memory is fully utilized.
For example, when a second target task in the external memory is scheduled to the internal memory, the task with the higher arrangement priority in the external memory is scheduled first; that is, the second target task with the higher arrangement priority is moved back to the internal memory before those with lower arrangement priority.
In some embodiments, the arrangement order of the second target tasks is determined according to the arrangement priority corresponding to each second target task in the external memory, so that when the total memory occupied by the tasks in the task queue of the internal memory is smaller than the minimum value of the memory occupation range, the second target task with the higher arrangement priority is scheduled to the internal memory, thereby improving the rationality of task scheduling.
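The scheduling back of second target tasks could be sketched as follows, assuming the first target tasks are kept in the external memory as an array ordered by arrangement priority (highest first); the actual transfer of the address space is omitted and all names are illustrative:

```c
#include <stddef.h>

/* When the total task memory of the internal memory falls below the lower
 * bound of the memory occupation range, move the highest-priority tasks in
 * the external memory (second target tasks) back to the internal memory. */
void reclaim_second_targets(struct task **ext_queue, int *ext_count,
                            size_t *internal_total, size_t range_lo)
{
    while (*internal_total < range_lo && *ext_count > 0) {
        struct task *t = ext_queue[0];          /* highest arrangement priority */
        /* ... schedule t's address space back to the internal memory ... */
        *internal_total += t->mem;
        for (int i = 1; i < *ext_count; i++)    /* preserve the remaining order */
            ext_queue[i - 1] = ext_queue[i];
        (*ext_count)--;
    }
}
```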
After the available memory in the internal memory increases, the second target task is determined and scheduled back to the internal memory, so that once the processor finishes executing the running tasks, the tasks waiting to be executed in the internal memory can be executed directly, which improves task execution efficiency.
The task scheduling method described above obtains the load memory of the internal memory in the handheld financial terminal; when the load memory is greater than or equal to the early warning memory threshold, determines the tasks that are not running in the task queue as first target tasks; when the load memory is smaller than the early warning memory threshold, sequentially obtains the task memory occupied by each task in the current task queue according to the preset task arrangement order and calculates the task accumulation memory according to the task memories; when the task accumulation memory is within the preset memory occupation range, determines the tasks that have not been accumulated in the current task queue as the first target tasks; and schedules the first target tasks to the external memory. By transferring at least some of the non-running tasks to the external memory, the memory space of the handheld financial terminal can be used flexibly and reasonably, the utilization efficiency of the memory space is improved, and the user experience is improved.
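Putting the steps together, an end-to-end sketch that relies on the illustrative helpers above and on the 90% and 95%-98% figures from the examples (none of which are the claimed implementation) might read:

```c
#include <stdlib.h>
#include <stddef.h>

void schedule_tasks(struct task **queue, int n,             /* current task queue */
                    const struct task_node *running_list,   /* linked list of running tasks */
                    size_t internal_mem,
                    struct ext_mem *devs, int n_devs)
{
    size_t load = load_memory(running_list);                 /* step S101 */
    size_t warn = (size_t)((double)internal_mem * 0.90);
    int is_first_target[n > 0 ? n : 1];

    if (load >= warn) {                                      /* step S102 */
        for (int i = 0; i < n; i++)
            is_first_target[i] = (queue[i]->state != TASK_RUNNING);
    } else {                                                 /* steps S103-S104 */
        qsort(queue, n, sizeof *queue, cmp_arrangement);
        accumulate_and_mark(queue, n,
                            (size_t)((double)internal_mem * 0.95),
                            (size_t)((double)internal_mem * 0.98),
                            is_first_target);
    }

    for (int i = 0; i < n; i++) {                            /* step S105 */
        if (!is_first_target[i])
            continue;
        struct ext_mem *dst = choose_target_external(devs, n_devs, queue[i]->mem);
        (void)dst;   /* ... schedule the task's address space to dst ... */
    }
}
```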
By way of example, the task scheduling method described above may be implemented in the form of a computer program that can be run on a handheld financial terminal as shown in fig. 6.
Referring to fig. 6, fig. 6 is a schematic block diagram of a handheld financial terminal according to an embodiment of the present application.
As shown in fig. 6, the handheld financial terminal includes a processor, a memory, and a network interface connected by a system bus, wherein the memory may include a storage medium and an internal memory.
The storage medium may store an operating system and a computer program. The computer program includes program instructions that, when executed, cause the processor to perform any of the task scheduling methods described herein.
The processor is configured to provide computing and control capabilities to support the operation of the entire handheld financial terminal.
The internal memory provides an environment for running the computer program in the storage medium; when the computer program is executed by the processor, the processor performs any of the task scheduling methods described herein.
The network interface is used for network communication, such as transmitting assigned tasks. It will be appreciated by those skilled in the art that the structure shown in fig. 6 is merely a block diagram of a portion of the structure related to the present application and does not limit the handheld financial terminal to which the present application is applied; a particular handheld financial terminal may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
It should be appreciated that the processor may be a central processing unit (CPU), or may be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
In one embodiment, the processor is configured to run a computer program stored in the memory to implement the following steps:
acquiring a load memory of an internal memory in the handheld financial terminal;
when the load memory is greater than or equal to an early warning memory threshold, determining a task that is not running in the task queue as a first target task;
when the load memory is smaller than the early warning memory threshold, sequentially acquiring the task memory occupied by each task in the current task queue according to a preset task arrangement order, and calculating a task accumulation memory according to the task memories;
when the task accumulation memory is within a preset memory occupation range, determining a task that has not been accumulated in the current task queue as the first target task;
and scheduling the first target task to an external memory.
In one embodiment, when implementing the step of sequentially obtaining, when the load memory is smaller than the early warning memory threshold, the task memory occupied by each task in the current task queue according to the preset task arrangement order and calculating the task accumulation memory according to the task memories, the processor is configured to implement:
sequentially acquiring the task memory occupied by each task in the current task queue according to the preset task arrangement order, and adding each acquired task memory to the current task accumulation memory;
stopping the calculation of the task accumulation memory when the task accumulation memory is within a preset memory occupation range;
and determining a task that has not been accumulated in the current task queue as the first target task.
In one embodiment, before implementing the step of sequentially obtaining the task memory occupied by each task in the current task queue according to the preset task arrangement order and calculating the task accumulation memory according to the task memories, the processor is configured to implement:
Determining the arrangement priority of each task in the task queue according to the task state priority and/or the operation priority of each task in the task queue;
and determining the preset task arrangement order according to the arrangement priority corresponding to each task in the task queue.
In one embodiment, the processor is configured to, when implementing the determining the arrangement priority of each task in the task queue according to the task state priority and/or the running priority of each task in the task queue, implement:
According to the task state of each task, carrying out task state priority sequencing on each task in the task queue, wherein the task state priority corresponding to the task with the task state being the ready state is higher than the task state priority corresponding to the task with the task state being the blocking state in the task queue;
And sorting the tasks with the same task state by running priority according to the importance of each task, thereby determining the arrangement priority of each task, where, when tasks with the same task state exist in the task queue, the arrangement priority corresponding to the task with the higher running priority is higher than the arrangement priority corresponding to the task with the lower running priority.
In one embodiment, the processor, when implementing the scheduling of the first target task to the external memory, is configured to implement:
Determining a target external memory for storing the first target task according to the memory space occupied by the first target task;
and scheduling at least part of the address space of the first target task from the internal memory to the target external memory.
In one embodiment, the processor, after implementing the scheduling of the first target task to the external memory, is configured to implement:
And determining the arrangement order of the first target task in the external memory according to the arrangement priority corresponding to the first target task.
In one embodiment, when implementing the task scheduling method, the processor is further configured to implement:
when the total memory occupied by the tasks in the internal memory is smaller than the minimum value of the memory occupation range, determining at least one task in the external memory as a second target task according to the arrangement order of the first target tasks in the external memory;
And dispatching the second target task to the internal memory.
In one embodiment, when implementing the obtaining the load memory of the internal memory of the handheld financial terminal, the processor is configured to implement:
and determining the load memory according to a linked list that records the memory occupied by the tasks whose task state in the internal memory is the running state.
It should be noted that, for convenience and brevity of description, for the specific working process of the task scheduling described above, reference may be made to the corresponding process in the foregoing task scheduling method embodiments, which is not repeated here.
The embodiments of the present application also provide a computer-readable storage medium storing a computer program, where the computer program includes program instructions, and for the method implemented when the program instructions are executed, reference may be made to the embodiments of the task scheduling method of the present application.
It is to be understood that the terminology used in the description of the application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should also be understood that the term "and/or" as used in the present specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations. It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or system that comprises the element.
The foregoing embodiment numbers of the present application are merely for description and do not represent the relative merits of the embodiments. While the application has been described with reference to certain preferred embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted without departing from the scope of the application. Therefore, the protection scope of the application is subject to the protection scope of the claims.