
GB2500434A - Scheduling actions based on the state of the resources needed to execute the actions - Google Patents


Info

Publication number
GB2500434A
GB2500434A GB1205155.3A GB201205155A GB2500434A GB 2500434 A GB2500434 A GB 2500434A GB 201205155 A GB201205155 A GB 201205155A GB 2500434 A GB2500434 A GB 2500434A
Authority
GB
United Kingdom
Prior art keywords
action
resources
invocation
execute
priority
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB1205155.3A
Other versions
GB201205155D0 (en)
Inventor
Erkut Uygun
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
COGNOVO Ltd
Original Assignee
COGNOVO Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by COGNOVO Ltd filed Critical COGNOVO Ltd
Priority to GB1205155.3A priority Critical patent/GB2500434A/en
Publication of GB201205155D0 publication Critical patent/GB201205155D0/en
Publication of GB2500434A publication Critical patent/GB2500434A/en
Withdrawn legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F9/505Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the load
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F15/00Digital computers in general; Data processing equipment in general
    • G06F15/16Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5094Allocation of resources, e.g. of the central processing unit [CPU] where the allocation takes into account power or heat criteria
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/50Indexing scheme relating to G06F9/50
    • G06F2209/5021Priority
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Multi Processors (AREA)

Abstract

A multiprocessor data processing system comprises a number of resources, such as processing elements, memory space and electrical power. The resources may be in a number of states, such as off, idle or busy. A scheduler receives actions to be executed by the resources. The scheduler allocates resources to the actions based on the state of the resources. The states are then updated to take account of the allocated actions. The actions are passed to resources using FIFO buffers. The buffers may be arranged by priority and associated resource. An action may be interrupted to execute a higher priority action if the same resource is required or if there is insufficient power to execute both at the same time. The scheduler may be a separate schedule processor, which executes a Unified Modeling Language (UML) activity diagram.

Description

Task scheduler apparatus and method
Technical field
The present invention relates to multiprocessor systems and methods for scheduling tasks in multiprocessor systems.
Background
An earlier application by the present Applicant (GB 2482141A) has described a system and method of controlling the execution of tasks in a system with multiple processors where computation has to meet hard real-time constraints. The system described has the following elements. First, there are a number of processing elements, each of which may be for example a signal processor (including a vector signal processor), a standard microcontroller, a direct memory access (DMA) controller, or a special-purpose processing element dedicated to a particular mathematical algorithm such as decoding error-correction codes. Typically (but not necessarily) the processing elements have the characteristic that they are non-preemptable: that is, the element is provided at a particular time with input data and (if necessary) program code and executes its function to completion without provision for interruption, whereupon it signals task completion and provides the results of the task. A particular problem in such a system is to define a complex processing sequence consisting of a number of such tasks, some of which must execute in parallel on different elements, and to control its execution in a way that ensures deterministic operation and optimises the usage of the available resources.
In that application an apparatus was described which consisted of a Sequence Processor (SP): a programmable processor with an instruction set that implements primitives of the Unified Modelling Language (UML); which can be programmed with code that represents a sequence of operations defined in UML and generates control signals to trigger operations in the various processing elements; and where signals from the processing elements indicating task completion are further used in conjunction with control primitives to control the execution of subsequent tasks. The system also allows the use of time events generated by a system clock to be included
as trigger signals so that hard real-time constraints can be applied to the system. An advantage of the apparatus and method described is that task execution in a set of multiple processors can be very effectively controlled; and furthermore that a sequence of tasks defined in UML can be automatically compiled to microcode to control the operation of the SP. However, whilst the method described has significant advantages, there are further problems in the implementation of processing in such a system. One of these is created by the fact that in many practical applications of such a system, the time taken by a task may be (within certain bounds) indeterminate. For example, in a radio modem implementation, a processor may be given the task of decoding a set of samples of a received signal acquired during a time slot of the radio protocol. There are a number of reasons why the decoding time may vary: the protocol may for instance define different data interleaving depths, and one of the steps in the decoding task may reveal the depth used, which information is then used to decode the rest of the slot data; the time taken will depend on this depth. Another example may be that an iterative "turbo" decoding process is used, where the results of one pass through the data are used to improve the quality of the input data samples and then fed back through the decoder, the process being repeated until a specified quality level is reached. In such a case the time taken will increase as the quality of the transmission channel decreases.
A second problem is created by the desire to be able to re-use sequence programs in different configurations of a multiprocessor system designed according to the methodology. For example, a program may be created to implement a radio protocol such as WCDMA using a particular configuration of processors, and then at a later stage it becomes desirable to run the same program as a mode on a more powerful configuration with more, or fewer, and perhaps faster, processors. It would be preferred if the same code could run without re-compilation, so that code could be portable between instantiations of the same processor architecture.
These examples result from the fact that one of the steps in creating the sequence program has to be the scheduling of tasks, at design time, to the set of resources available. To solve the first problem, it becomes necessary to consider all the cases where task completion times can vary and provide specific control flows to ensure that errors cannot arise as a result, thus making the sequence more complex than might otherwise be necessary. In the second case the design has to be re-compiled for a different set of resources.
Summary of invention
According to a first aspect of the present invention, there is provided a method of scheduling actions in a multiprocessor system, as set out in the claims appended hereto. According to further aspects of the present invention, a scheduler is disclosed, configured to execute the method according to the first aspect, and a computing device comprising a plurality of resources and such a scheduler.
Brief description of the drawings
For a better understanding of the present invention, and to show more clearly how it may be carried into effect, reference will now be made, by way of example, to the following drawings, in which: Figure 1 is a schematic illustration of a scheduler according to embodiments of the present invention; and Figure 2 shows a multiprocessor system according to embodiments of the present invention.
Detailed description
Figure 1 is a schematic diagram of a scheduler 10 according to embodiments of the present invention, and its connection to various other components of a multiprocessor system.
A sequence processor 30 generates Action Invocations (AIs) 11 according to the control flow defined in a microcoded version of a Unified Modelling Language (UML) activity diagram. For a full understanding of UML and the actions carried out by the sequence processor 30, the reader is directed to the earlier application of the present Applicant referred to above, GB 2482141, which is incorporated herein by reference. However, it suffices for a description of the present invention to repeat the description of the sequence processor 30 provided above. That is, the sequence processor 30 is a programmable processor with an instruction set that implements primitives of UML. The sequence processor 30 can be programmed with code that represents a sequence of operations defined in UML, and generates Action Invocations to trigger operations in the various processing elements. Signals 13 (also called Action Indications) from the processing elements indicating task completion are further used in conjunction with control primitives to control the execution of subsequent tasks. Thus a sequence of tasks defined in UML can be automatically compiled to microcode to control the operation of the sequence processor 30, which generates Action Invocations (AIs) to trigger operations in the processing elements.
Each AI is a message including some or all of the following: a definition of the resources required to execute the Action, including the type of processing element required, any memory requirements and/or any power or current requirements; an indication of whether the required processing element can be pre-empted; pointers to the code (if required) to program the processing element; pointers to the input data and output data buffers; data on the smallest and largest number of execution cycles required to carry out the Action, if data dependent; a definition of the priority of the Action; a definition of any earliest start or latest finish times (i.e. the Action must not be started before a certain time, or must be completed by a certain time); and a definition of the position in the sequence of operations to which control should be handed back on completion of the Action.
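The fields listed above can be sketched as a simple record. This is an illustrative model only: the patent specifies the information content of an AI, not a layout, so all field names here are assumptions.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative sketch of an Action Invocation (AI) message.
# Field names are assumptions, not the patent's actual message format.
@dataclass
class ActionInvocation:
    resource_type: str                       # type of processing element required
    priority: int                            # priority of the Action
    preemptable: bool = True                 # can the processing element be pre-empted?
    code_ptr: Optional[int] = None           # pointer to program code, if required
    input_ptr: Optional[int] = None          # pointer to input data buffer
    output_ptr: Optional[int] = None         # pointer to output data buffer
    min_cycles: int = 0                      # smallest number of execution cycles
    max_cycles: int = 0                      # largest number of execution cycles
    earliest_start: Optional[float] = None   # must not start before this time
    latest_finish: Optional[float] = None    # must be completed by this time
    peak_current_ma: int = 0                 # required supply current
    return_position: int = 0                 # where control is handed back

ai = ActionInvocation(resource_type="VSP", priority=2,
                      min_cycles=100, max_cycles=400)
```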
The AIs are passed to a scheduler 10, which acts to sort, queue, prioritize and distribute the AIs to the various processing elements and other resources in the system according to a program 20 which defines a scheduling policy. According to embodiments of the present invention, the scheduler 10 comprises a component for queuing the AIs received from the sequence processor 30 (in the illustrated embodiment a combination of an AI sorter 12 and a plurality of buffers 14), a component 16 for routing the AIs to one or more of the various processing elements and resources of the system, and a schedule processor 18 coupled to each of the former components, which oversees the scheduling process.
The resources of the system are represented in Figure 1 by the components under reference numeral 50. These include processing resources of the system, represented by a set of Data Engines (DEs). Data engines may include any processor or specialized processor. For example, a Data Engine may be a Vector Signal Processor capable of executing matrix-vector arithmetic operations on every element of a data array in the same processor cycle; or it may be a hardware block dedicated to a single type of operation such as direct memory access or turbo-decoding; or one of many other types of processing block. The state of a Data Engine may be "off", i.e. with no electrical power applied; or "idle", with power applied but not executing any operation; or it may be "busy", executing a previously triggered operation which has not completed.
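The three Data Engine states named above can be modelled as a small enumeration; the transition sequence shown is illustrative only.

```python
from enum import Enum

# Sketch of the three Data Engine states described in the text.
class DEState(Enum):
    OFF = "off"    # no electrical power applied
    IDLE = "idle"  # power applied but not executing any operation
    BUSY = "busy"  # executing a previously triggered, uncompleted operation

# Illustrative transitions as the scheduler brings a Data Engine into use.
state = DEState.OFF
state = DEState.IDLE   # power applied
state = DEState.BUSY   # operation triggered
```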
It will also be appreciated that in a complete system there are other resources that need to be considered, for example memory space and available electrical power or current.
Thus, memory buffers may also be considered as resources to which tasks can be mapped dynamically. In that case, AIs may contain metadata on the memory requirements of the task they represent, and the scheduler 10 would allocate buffers from the available pool of resources as tasks require them.
In a mobile system such as a wireless mobile device, another important and limited resource is available electrical power. A particular constraint, for example, is the peak current that the battery can provide. Both the radio circuitry and the applications processor (which often includes a high resolution display) require high peak current from time to time. So for example, when a mobile device is near the edge of a cell and/or needs to transmit a data packet at high data rate, it will need to use peak power from the radio power amplifier, and this will be associated with running the algorithms for processing transmitted data. The sequencer 30 can have knowledge of the demanded power output and could therefore include in the appropriate AIs data on the required supply current.
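As a rough illustration of how such a peak-current constraint might be enforced, the sketch below admits an AI only if its declared supply current fits within the remaining budget. The budget figures and function name are invented for the example; the patent only states that AIs may carry supply-current data.

```python
# Hypothetical peak-current admission check (all numbers illustrative).
PEAK_BUDGET_MA = 2000     # peak current the battery can provide, in mA
allocated_ma = 1500       # e.g. radio power amplifier already drawing peak current

def can_dispatch(ai_current_ma: int) -> bool:
    """Admit an AI only if its current demand fits the remaining budget."""
    return allocated_ma + ai_current_ma <= PEAK_BUDGET_MA

ok = can_dispatch(400)       # fits: 1500 + 400 <= 2000
blocked = can_dispatch(700)  # exceeds: 1500 + 700 > 2000
```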
The scheduler 10 thus serves to receive Action Invocations from the sequencer 30, store and evaluate them, and then pass them to the resources 50 required to execute the actions. Once an action is completed, the scheduler 10 passes control back to the sequencer 30.
In one embodiment the buffers 14 comprise an array of n separate memories, where n is a positive integer. However, it will be appreciated that other equivalent implementation methods can be used. For example, an area of memory, such as random-access memory (RAM), may be provided, where the AI sorter 12 or another controller is operated to treat the memory as an array of separate buffers. In one embodiment, the memories (whether separate or not) are operated on a First-In-First-Out (FIFO) basis.
On receipt of an AI from the sequence processor 30, the AI is routed to an appropriate buffer by the AI sorter 12. In one embodiment, AIs are sorted into buffers 14 according to the task priority assigned to the AI by the sequence processor 30. That is, each buffer is assigned to AIs having a particular value of priority. For example, AIs defining Actions with the highest value of priority (i.e. the most urgent tasks) may be assigned to FIFO 1, those defining Actions with the next highest value of priority to FIFO 2, etc. The AIs are then stored in the appropriate buffer for a period of time which will vary according to their priority; those with higher priority will generally be executed before those with lower priority. In a further embodiment, the buffers 14 may also be grouped such that each group of buffers is assigned to AIs defining Actions requiring similar resources (e.g. the same type of data engines). Each group of buffers may include a buffer for each priority value.
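The priority-based sorting described above can be sketched as follows, with one FIFO per priority value. Priority 1 is taken here as the most urgent (mirroring the text's mapping of the highest priority to FIFO 1); the dictionary-of-deques layout is an assumption for illustration, not the hardware implementation.

```python
from collections import deque

NUM_PRIORITIES = 3

# One FIFO per priority value; priority 1 is taken as most urgent.
fifos = {p: deque() for p in range(1, NUM_PRIORITIES + 1)}

def sort_ai(ai: dict) -> None:
    """Route an AI (modelled here as a dict) to the FIFO for its priority."""
    fifos[ai["priority"]].append(ai)

sort_ai({"name": "decode_slot", "priority": 1})
sort_ai({"name": "dma_copy", "priority": 3})
sort_ai({"name": "fft", "priority": 1})
```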
The Schedule Processor 18 is a programmable hardware element for managing the allocation of AIs to resources. The Schedule Processor 18 has knowledge of the resources in the system, such as the Data Engines referred to above, their capabilities, and their current state, and also the available memory and power. Such data can be stored in a memory within the schedule processor 18, or in a system memory to which the schedule processor has access. For a data engine such as a processing element, the state may be off, idle or busy, as described above; for memory, the current state may be the available memory space and whether or not it is allocated; for current or power, the current state may be the available peak current, or the current state of battery charge.
The Schedule Processor 18 operates by monitoring the queued AIs and determining which AI needs to be executed with highest priority for which resources are available (e.g. a data engine with the required capability). The AI is then taken off its queue and sent via the AI Router 16 to the appropriate resources, which are thereafter triggered to perform the Action. The Schedule Processor 18 keeps a copy of the AI, and retains knowledge of which resources have been allocated to the task; the state indications of those resources are then set internally in the Schedule Processor 18. For example, the current state indication may be changed from an "off" or "idle" state to an "active" state when the resources are sent a particular AI. In this way, each resource may not be selected for any subsequent task until its currently allocated task is completed.
In one embodiment, the Schedule Processor 18 comprises a central processing unit (CPU) operating under the control of a program 20 which causes it to apply a particular scheduling policy. The precise scheduling policy which is applied may be any policy as dictated by the circumstances, and the invention is therefore not limited to any particular one. The Schedule Processor 18 also comprises a plurality of AI Register Files 22 (AI_1, AI_2, ..., AI_m, where m is a positive integer) for storing the contents of the AIs which have been assigned to resources; and a plurality of status register files 24 (Status_1, Status_2, ..., Status_x, where x is the number of resources) that record the current status of each of the resources 50.
The Schedule Processor 18 monitors the outputs of the memories 14 in priority order, assesses the type of resource required to execute the Action, and uses information from the Status Register files 24 to identify the next available resource (e.g. the next available data engine with the right capability). The Schedule Processor 18 then assigns the Action to those resources, takes the AI from the queue (thereby erasing it from the queue) and writes it into one of the AI Register files 22. It then changes the status of the resources to "busy" in the appropriate Status Register files 24, and sends the AI to the resources via the AI router 16. When the Action has been completed by the resources, its completion is signalled back to the Schedule Processor 18, which then sends an Action Indication to the Sequencer 30, and deletes the AI from the AI Register File 22.
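The monitor/assign/complete cycle just described can be sketched in simplified form. The status table, register-file dictionary and tuple layout below are assumptions made for the example; the patent describes hardware register files, not Python structures.

```python
from collections import deque

# Simplified status registers: resource name -> (type, state).
status = {"VSP_0": ("VSP", "idle"),
          "VSP_1": ("VSP", "busy"),
          "TURBO_0": ("TURBO", "idle")}
ai_registers = {}   # AI register files: resource -> copy of the dispatched AI

def dispatch(fifos: dict):
    """Scan queued AIs in priority order; assign the first one whose
    required resource type has an idle instance."""
    for prio in sorted(fifos):
        queue = fifos[prio]
        if not queue:
            continue
        ai = queue[0]
        for res, (rtype, rstate) in status.items():
            if rtype == ai["type"] and rstate == "idle":
                queue.popleft()                # erase the AI from its queue
                ai_registers[res] = ai         # keep a copy in the register file
                status[res] = (rtype, "busy")  # mark the resource busy
                return res, ai
    return None

def complete(res: str) -> dict:
    """On completion: free the resource and delete the stored AI copy."""
    rtype, _ = status[res]
    status[res] = (rtype, "idle")
    return ai_registers.pop(res)

fifos = {1: deque([{"type": "VSP", "name": "fft"}]),
         2: deque([{"type": "TURBO", "name": "decode"}])}
assigned = dispatch(fifos)   # the priority-1 AI goes to the idle VSP
```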
In some instances, an AI for one type of resource (e.g. one type of data engine) placed in a buffer 14 may be held up by an AI at the output of the same buffer for another resource which is currently active in processing an Action, even though the resource required for the first-mentioned AI is available. To avoid this problem, in a further embodiment of the invention, the buffers 14 may be arranged into subgroups, each subgroup being associated with a particular set of resources, or a particular type of resources (e.g. a particular type of data engine). Each subgroup of buffers 14 includes at least one buffer for each value of priority associated with its respective resource type. For example, FIFOs 1 to 3 may be associated with a first type of resource, for which there are three possible values of priority; FIFOs 4 and 5 may be associated with a second type of resource, for which there are two possible values of priority; and so on for other types of resources and other buffers. It may not be necessary to define subgroups for each type of resources, however, and thus one or more subgroups of buffers 14 may be associated generally with types of resources for which no specific subgroup of buffers is defined.
On receiving an AI, the AI sorter 12 can determine the type of resources required, thus directing the AI to the appropriate subgroup, and the priority of the Action defined, directing the AI to the appropriate buffer within the subgroup. The schedule processor 18 may then apply similar or different scheduling policies to the different types of resource, according to the program 20 that it executes.
The Data Engines may be of various types. For example, they may be fixed-function hardware units performing operations such as direct memory access (DMA); signal processing hardware for turbo decoding or cryptographic processing; programmable vector signal processing devices which may or may not be preemptable; or conventional CPU cores which may have interrupt support.
Circumstances may arise where a data engine needs to be re-assigned to a new task before it has completed a previous one. For example, in a modem system which is processing two asynchronous protocols, a high-priority task in one may be required because of a request issued by a higher layer in the protocol stack, and there may not be a suitable processing resource available to undertake the task. In these circumstances it may be desirable to re-allocate one of the other resources to it.
The Schedule Processor 18 has knowledge of the capabilities of each of the data engines, including whether or not they are preemptable (i.e. whether they can be interrupted), and of the priority of AIs being executed. If they are preemptable, the act of pre-emption may result in the task being abandoned, or, if the data engine in question supports interrupts, task context data may be saved so that the task can resume once the pre-empting task is complete. A method of dealing with such abandoned tasks is as follows.
The AI for each Action assigned to resources is stored in one of the plurality of AI Register Files 22. If the Schedule Processor 18 assigns the same resources to a new task, the AI may be returned to the buffer memories 14, either at the same priority as it was originally or with a higher priority to reflect the fact that it has been pre-empted and is therefore presumably more urgent. If the buffers 14 are arranged into subgroups according to the type of resources required, the AI is returned to the appropriate (i.e. the same) subgroup. In a further embodiment, the Schedule Processor 18 may use any deadline (i.e. the time by which the Action must be finished) and/or processing time information in the AI to estimate if the Action either is or will be beyond its useful deadline by the time the Action has been executed. If so, the Action may be erased, and/or an exception raised so that other measures may be taken in the system to deal with any consequences.
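The pre-emption handling described above, returning a displaced AI to the buffers at the same or a promoted priority and erasing it when its deadline can no longer be met, might be sketched as follows. The field names and the single-level promotion policy are assumptions; the patent leaves the exact policy to the program 20.

```python
from collections import deque

def preempt(ai: dict, fifos: dict, now: float = 0.0) -> str:
    """Return a displaced AI to the buffers, promoted one priority level
    (1 = most urgent), or erase it if its deadline can no longer be met."""
    deadline = ai.get("latest_finish")
    remaining = ai.get("remaining_time", 0.0)
    if deadline is not None and now + remaining > deadline:
        return "erased"          # beyond its useful deadline: drop the Action
    prio = ai["priority"]
    if prio > 1:
        prio -= 1                # re-queue at a higher priority
    fifos[prio].appendleft({**ai, "priority": prio})
    return "requeued"

fifos = {1: deque(), 2: deque()}
r1 = preempt({"priority": 2, "remaining_time": 1.0, "latest_finish": 10.0}, fifos)
r2 = preempt({"priority": 2, "remaining_time": 5.0, "latest_finish": 3.0}, fifos)
```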
If the resources required for a particular AI include a certain amount of memory or available current, the Schedule Processor 18 may be given a measure of control to suspend operations in the system which generate high current demand or high memory usage. For example, the display backlight might be momentarily turned off; the processing of a graphics data element could be temporarily stopped; and/or the processing of Actions by Data Engines may be temporarily halted.
Figure 2 shows the schematic hardware architecture of a multiprocessor system 100 according to embodiments of the present invention, in which the Scheduler 10 shown in Figure 1 is embedded. The system 100 further comprises various Data Engines, including: a Control Processor; Vector Signal Processors (VSPs); an Interrupt Controller; a Timer; a Turbo Decoder; DMA Controllers; System Memory; a Sequencer; and External Interrupts. The External Interrupt may be routed, for example, to an application processor system in a cellular UE, to control peak current usage at times when high transmit power is required. The data engines, resources and scheduler are all connected via a System Interconnect such as a bus. Those skilled in the art will appreciate that any or all of these resources may be included in a multiprocessor system according to embodiments of the invention.
The present invention thus provides a scheduler and associated method for use in a multiprocessor system, which enables tasks to be scheduled dynamically to the most appropriate resources in the system. By maintaining information on the current state of each resource in the system, the scheduler can ensure that tasks are allocated at appropriate times and to appropriate resources. In embodiments of the invention, the tasks may be given different priority values, such that higher-priority tasks are executed before lower-priority tasks. If necessary, this may involve the pre-emption of a lower-priority task running on resources.
Those skilled in the art will appreciate that various amendments and alterations can be made to the embodiments described above without departing from the scope of the invention as defined in the claims appended hereto.

Claims (15)

  1. A method of scheduling actions in a multiprocessor system, the system comprising a plurality of resources, the method comprising: storing an indication of the current state of each resource; receiving an action invocation, the action invocation defining an action to be processed by the multiprocessor system; sending the action invocation to one or more of the resources on the basis of the stored indications for execution of the action; and updating the stored indications for said one or more resources.
  2. The method as claimed in claim 1, further comprising: storing the action invocation in one of a plurality of memory buffers, prior to sending the action invocation to one or more of the resources.
  3. The method as claimed in claim 2, wherein the action invocation is stored in the plurality of memory buffers according to a priority assigned to the action defined in the action invocation, such that each memory buffer contains action invocations defining actions of a given priority value.
  4. The method as claimed in claim 2 or 3, wherein the action invocation is stored in the plurality of memory buffers according to the type of resources required to execute the action defined in the action invocation.
  5. The method as claimed in claim 4, wherein the plurality of memory buffers are arranged into subgroups of memory buffers, each subgroup of memory buffers being assigned to a respective type of resources.
  6. The method as claimed in claim 5, wherein each subgroup of memory buffers comprises at least one memory buffer for each priority value.
  7. The method as claimed in any one of claims 2 to 6, further comprising: transferring the action invocation from the memory buffer when a resource of the type required to execute the action defined in the action invocation becomes available; and writing the action invocation to one of a plurality of register files, the plurality of register files storing action invocations which are assigned to one or more resources.
  8. The method as claimed in claim 7, further comprising: receiving an indication that the action assigned to the one or more resources has been completed; and deleting the action invocation from the register file.
  9. The method as claimed in claim 7 or 8, further comprising: interrupting the execution of the action at the one or more resources; and returning the action invocation from the one of the plurality of register files to the memory buffers with the same or a higher value of priority.
  10. The method as claimed in any one of the preceding claims, further comprising: determining that the action defined in the action invocation requires an amount of current or power; comparing said amount of current or power to available system resources; and suspending or halting operation of one or more system processes while said action is executed.
  11. The method as claimed in any one of the preceding claims, wherein the action invocation comprises any of: an indication of the type or types of resources required to execute the action; an indication of whether or not the type or types of resources required to execute the action are preemptable; an indication of the earliest start time and/or the latest finish time for execution of the action; and/or an indication of the priority of the action.
  12. The method as claimed in any one of the preceding claims, wherein each resource can be in an off state, an idle state or a busy state.
  13. The method as claimed in any one of the preceding claims, wherein the resources comprise one or more of processing elements, memory space and available current or power.
  14. A scheduler for a multiprocessor system, configured to execute the method according to any one of the preceding claims.
  15. A computing device comprising a plurality of resources and a scheduler according to claim 14.
GB1205155.3A 2012-03-23 2012-03-23 Scheduling actions based on the state of the resources needed to execute the actions Withdrawn GB2500434A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB1205155.3A GB2500434A (en) 2012-03-23 2012-03-23 Scheduling actions based on the state of the resources needed to execute the actions


Publications (2)

Publication Number Publication Date
GB201205155D0 GB201205155D0 (en) 2012-05-09
GB2500434A true GB2500434A (en) 2013-09-25

Family

ID=46087024

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1205155.3A Withdrawn GB2500434A (en) 2012-03-23 2012-03-23 Scheduling actions based on the state of the resources needed to execute the actions

Country Status (1)

Country Link
GB (1) GB2500434A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104714785A (en) * 2015-03-31 2015-06-17 中芯睿智(北京)微电子科技有限公司 Task scheduling device, task scheduling method and data parallel processing device

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0293616A2 (en) * 1987-06-04 1988-12-07 International Business Machines Corporation Dynamic switch with task allocation capability
US20070143514A1 (en) * 2002-12-26 2007-06-21 Kaushik Shivnandan D Mechanism for processor power state aware distribution of lowest priority interrupts
US20080189701A1 (en) * 2007-01-29 2008-08-07 Norimitsu Hayakawa Computer system
US20100083015A1 (en) * 2008-10-01 2010-04-01 Hitachi, Ltd. Virtual pc management method, virtual pc management system, and virtual pc management program
US20100106876A1 (en) * 2008-10-24 2010-04-29 Fujitsu Microelectronics Limited Multiprocessor system configured as system lsi
GB2473015A (en) * 2009-08-26 2011-03-02 Dell Products Lp Accessing main processor resources to process events with a second processor when the main processor is in a low power mode
US20110239016A1 (en) * 2010-03-25 2011-09-29 International Business Machines Corporation Power Management in a Multi-Processor Computer System
WO2011147777A2 (en) * 2010-05-26 2011-12-01 International Business Machines Corporation Optimizing energy consumption and application performance in a multi-core multi-threaded processor system



Also Published As

Publication number Publication date
GB201205155D0 (en) 2012-05-09

Similar Documents

Publication Publication Date Title
US10733032B2 (en) Migrating operating system interference events to a second set of logical processors along with a set of timers that are synchronized using a global clock
US8397235B2 (en) User tolerance based scheduling method for aperiodic real-time tasks
Audsley et al. Real-time system scheduling
Huang et al. ShuffleDog: characterizing and adapting user-perceived latency of android apps
US8239869B2 (en) Method, system and apparatus for scheduling computer micro-jobs to execute at non-disruptive times and modifying a minimum wait time between the utilization windows for monitoring the resources
US20120054770A1 (en) High throughput computing in a hybrid computing environment
Zhao et al. Design optimization for AUTOSAR models with preemption thresholds and mixed-criticality scheduling
CN105224886A (en) A kind of mobile terminal safety partition method, device and mobile terminal
CN114840318A (en) A scheduling method for multiple processes to preempt hardware key encryption and decryption resources
Agung et al. Preemptive parallel job scheduling for heterogeneous systems supporting urgent computing
KR20130051076A (en) Method and apparatus for scheduling application program
Abeni et al. EDF scheduling of real-time tasks on multiple cores: adaptive partitioning vs. global scheduling
Parikh et al. Performance parameters of RTOSs; comparison of open source RTOSs and benchmarking techniques
Erickson Managing tardiness bounds and overload in soft real-time systems
Yang et al. Improved blocking time analysis and evaluation for the multiprocessor priority ceiling protocol
GB2500434A (en) Scheduling actions based on the state of the resources needed to execute the actions
Wellings et al. Asynchronous event handling and real-time threads in the real-time specification for Java
Nicolau Specification and analysis of weakly hard real-time systems
Ngolah et al. The real-time task scheduling algorithm of RTOS+
Phavorin et al. Complexity of scheduling real-time tasks subjected to cache-related preemption delays
Li et al. Towards Virtualization-Agnostic Latency for Time-Sensitive Applications
Dubey et al. Operating System–Principles and Applications
TRIVEDI Real time operating system (RTOS) with its effective scheduling techniques
Liu et al. A server-based approach for overrun management in multi-core real-time systems
Lyu et al. SledgeScale: Load-Aware Dispatch and Deadline-Driven Scheduling for Scalable, Dense Serverless Computing in Edge Data Centers

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)