GB2633031A - A computer-implemented method for optimizing task allocation and scheduling of a set of computational tasks, a computer program product, a non-transitory - Google Patents
- Publication number
- GB2633031A
- Authority
- GB
- United Kingdom
- Prior art keywords
- computational
- task
- computational tasks
- computer
- computer hardware
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F8/00—Arrangements for software engineering
- G06F8/40—Transformation of program code
- G06F8/41—Compilation
- G06F8/44—Encoding
- G06F8/443—Optimisation
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W60/00—Drive control systems specially adapted for autonomous road vehicles
- B60W60/001—Planning or execution of driving tasks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5061—Partitioning or combining of resources
- G06F9/5066—Algorithms for mapping a plurality of inter-dependent sub-tasks onto a plurality of physical CPUs
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Automation & Control Theory (AREA)
- Human Computer Interaction (AREA)
- Transportation (AREA)
- Mechanical Engineering (AREA)
- Debugging And Monitoring (AREA)
Abstract
A computer-implemented method for optimizing task allocation and scheduling of a set of computational tasks 24, such as autonomous driving tasks, to be executed by an electronic computing device 10 comprising a set of computer hardware elements, comprises the steps of: storing, in a memory resource, a description of a computer hardware topology of the computer hardware and a description of a compute graph; and performing a task allocation optimization to: assign each computational task to one or more hardware elements based on the description of the computer hardware topology; generate a schedule 26 for executing each computational task; generate the compute graph for execution by the hardware elements, optionally based on a directed acyclic graph, the compute graph comprising the set of computational tasks, the assignments of each computational task to the one or more hardware elements, and the schedule for executing each computational task; and generate a table 28 containing addresses of data in the memory for executing the set of computational tasks by the set of computer hardware elements, which may be an optimal set of memory addresses. Risk of failure to meet a deadline, total execution time, and/or communication cost may be minimised.
Description
A computer-implemented method for optimizing task allocation and scheduling of a set of computational tasks, a computer program product, a non-transitory computer-readable storage medium, as well as an electronic computing device
FIELD OF THE INVENTION
[0001] The present invention relates to the field of automobiles. More specifically, the present invention relates to a method for optimizing task allocation and scheduling of a set of computational tasks. Furthermore, the present invention relates to a corresponding computer program product, a corresponding non-transitory computer-readable storage medium, as well as to a corresponding electronic computing device.
BACKGROUND INFORMATION
[0002] Methods for optimizing task allocation and scheduling of a set of computational tasks are already known in the state of the art.
SUMMARY OF THE INVENTION
[0003] It is an object of the present invention to provide a method, a corresponding computer program product, a corresponding non-transitory computer-readable storage medium, as well as a corresponding electronic computing device, by which task allocation and scheduling of a set of computational tasks can be optimized in an improved manner.
[0004] This object is achieved by a method, a corresponding computer program product, a corresponding non-transitory computer-readable storage medium, as well as a corresponding electronic computing device according to the independent claims. Advantageous embodiments are presented in the dependent claims.
[0005] Given a computer hardware topology, optimizing the allocation of computational tasks to specified hardware components can minimize total execution time while also minimizing the risk that any computational task misses its corresponding deadline when executed in the computer hardware. Such an optimization may be beneficial for computer systems that are limited in space and/or available bandwidth for performing the computational tasks. For example, the computing systems on board semi-autonomous or fully autonomous vehicles may be limited in space, where various computational tasks may compete for limited computing resources. The computational tasks can include any semi-autonomous or fully autonomous driving tasks, such as sensor data acquisition (e.g., from LIDAR sensors, image sensors, radar sensors, ultrasonic sensors, etc.), sensor data processing (e.g., object detection, object classification, occupancy determination, and/or scene understanding tasks), motion prediction, and autonomous vehicle control tasks. The computational tasks can further include advanced driver assistance system (ADAS) tasks, such as hazard warning, collision warning, assisted emergency braking, collision avoidance, lane keeping, lane following, automatic parking, and other semi-autonomous tasks.
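Purely by way of illustration and not as part of the claimed subject matter, such a combined objective could be expressed as a weighted sum of total execution time and deadline-miss risk. In the following sketch, the weights and the linear risk model are assumptions, not taken from the disclosure:

```python
# Illustrative sketch only: scoring a candidate allocation by total execution
# time (makespan) and deadline-miss risk. The weights and the linear risk
# model are assumptions, not taken from the disclosure.
def allocation_cost(finish_times, deadlines, w_time=1.0, w_risk=10.0):
    """Lower is better. Both arguments map task name -> time in seconds."""
    makespan = max(finish_times.values())
    # Any predicted overrun of a deadline counts as risk, weighted heavily so
    # that deadline safety dominates raw speed.
    risk = sum(max(0.0, finish_times[t] - deadlines[t]) for t in deadlines)
    return w_time * makespan + w_risk * risk
```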
[0006] A computing system can optimize task allocation and scheduling of a set of computational tasks to be executed on computer hardware. In various examples, the computer hardware can comprise a set of computer hardware elements and a memory that stores an instruction set. In the examples described herein, the computer hardware that performs the computational tasks can be included in an on-board computing system of a vehicle. In various examples, the computing system that performs the task allocation and scheduling optimization can include a memory resource storing a description of the computer hardware topology of the computer hardware, a description of a compute graph, and a set of execution instructions. Based on executing the instructions, the computing system can perform the task allocation optimization based on the description of the computer hardware topology.
[0007] In various examples, the computing system can assign each computational task in the set of computational tasks to one or more hardware elements of the set of computer hardware elements, and generate a schedule for executing each computational task in the set of computational tasks. The computing system can further generate the compute graph for execution by the set of computer hardware elements. The compute graph can comprise the set of computational tasks, assignments for each computational task to the one or more hardware elements, and the schedule for executing each computational task in the set of computational tasks. The computing system then generates a table containing addresses of data in the memory for executing the set of computational tasks by the set of computer hardware elements.
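As an illustrative sketch only, the flow above might be realized with a greedy earliest-start heuristic standing in for the optimizer, which the disclosure leaves unspecified; all class, field, and element names below are hypothetical, and execution-time estimates are simplified to be element-independent:

```python
# Illustrative sketch only: a greedy earliest-start allocator standing in for
# the (unspecified) optimizer. All names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    cost: float                                 # estimated execution time (s)
    deps: list = field(default_factory=list)    # names of prerequisite tasks

def allocate(tasks, elements):
    """Assign each task (given in dependency order) to the element on which
    it can start earliest; return the assignments and resulting schedule."""
    free_at = {hw: 0.0 for hw in elements}      # when each element next frees up
    done_at = {}                                # task name -> finish time
    assignment, schedule = {}, []
    for task in tasks:
        deps_done = max((done_at[d] for d in task.deps), default=0.0)
        hw = min(elements, key=lambda h: max(free_at[h], deps_done))
        start = max(free_at[hw], deps_done)
        free_at[hw] = done_at[task.name] = start + task.cost
        assignment[task.name] = hw
        schedule.append((start, task.name, hw))
    return assignment, sorted(schedule)

tasks = [Task("acquire", 2.0), Task("detect", 5.0, ["acquire"]),
         Task("predict", 3.0, ["detect"]), Task("control", 1.0, ["predict"])]
assignment, schedule = allocate(tasks, ["cpu0", "cpu1", "accel0"])
```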
[0008] In various examples, the computing system can further generate a task schedule to optimize communication for performing the set of computational tasks based on the task allocation optimization. In further examples, the computing system can generate an optimal set of memory addresses for accessing the data in the memory, where the optimal set of memory addresses is used by the set of computer hardware elements for performing the set of computational tasks in accordance with the task schedule (e.g., when the instruction set triggers execution of each computational task). As provided herein, the instruction set can comprise an autonomous drive instruction set, in which sensor data from a sensor system of the vehicle is processed by the computer hardware elements to perform the autonomous driving tasks described herein. In various examples, the computer hardware elements can be included on a system-on-chip (SoC), such as one described by the Universal Chiplet Interconnect Express (UCIe) specification, which standardizes the interconnects and serial buses between chiplets.
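For illustration, the communication term of such an optimization could be estimated from per-link bandwidths in the stored topology. The link table and cost formula in the following sketch are assumptions and are not taken from the UCIe specification or the claims:

```python
# Illustrative sketch (assumed link table and cost model): estimate the cost
# of transferring a task's output between hardware elements from link bandwidth.
LINK_BANDWIDTH_GBPS = {          # hypothetical point-to-point link speeds
    ("cpu0", "cpu1"): 32.0,
    ("cpu0", "accel0"): 16.0,
    ("cpu1", "accel0"): 16.0,
}

def comm_cost_seconds(src, dst, payload_bytes):
    """Transfer time in seconds; zero when producer and consumer share an element."""
    if src == dst:
        return 0.0
    gbps = LINK_BANDWIDTH_GBPS.get((src, dst)) or LINK_BANDWIDTH_GBPS[(dst, src)]
    return payload_bytes * 8 / (gbps * 1e9)

# e.g. shipping a 4 MB feature map from cpu0 to accel0:
cost = comm_cost_seconds("cpu0", "accel0", 4 * 1024 * 1024)
```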
[0009] In various implementations, the task allocation optimization results in the extraction of a directed acyclic graph (DAG) based on the schedule for executing each computational task in the set of computational tasks. In such implementations, the compute graph is generated based on the extracted DAG. In further examples, the compute graph can be utilized by the computer hardware elements to execute the set of computational tasks in a deterministic manner, such as through the execution of workloads in independent pipelines. In accordance with examples described herein, the task allocation optimization is performed by the computing system to both minimize total execution time of the set of computational tasks and to minimize cost of communication for executing the compute graph. In still further examples, the task allocation optimization and the memory addressing are performed to minimize each of error or failure risk, total execution time, and cost of communication for executing the set of computational tasks in the computer hardware.
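A minimal sketch of one way the DAG extraction and deterministic replay could work, assuming that tasks sharing a hardware element are serialized in schedule order (an assumption, since the disclosure does not fix the edge rule):

```python
# Sketch only: deriving a DAG from the schedule and replaying it
# deterministically. The rule that tasks sharing a hardware element are
# serialized in schedule order is an assumption.
from collections import defaultdict

def extract_dag(schedule, deps):
    """schedule: [(start, task, hw_element)]; deps: task -> prerequisite tasks."""
    edges = defaultdict(set)
    for task, prereqs in deps.items():
        for p in prereqs:
            edges[p].add(task)                    # data-dependency edges
    per_element = defaultdict(list)
    for _start, task, hw in sorted(schedule):
        per_element[hw].append(task)
    for order in per_element.values():            # serialize same-element tasks
        for a, b in zip(order, order[1:]):
            edges[a].add(b)
    return edges

def execute_deterministically(edges, tasks, run):
    """Kahn's algorithm with sorted tie-breaking, so replay order is fixed."""
    indegree = {t: 0 for t in tasks}
    for dsts in edges.values():
        for d in dsts:
            indegree[d] += 1
    ready = sorted(t for t in tasks if indegree[t] == 0)
    while ready:
        t = ready.pop(0)
        run(t)
        for d in sorted(edges.get(t, ())):
            indegree[d] -= 1
            if indegree[d] == 0:
                ready.append(d)
```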
[0010] In certain implementations, the computing system can perform one or more functions described herein using a learning-based approach, such as by executing an artificial neural network (e.g., a recurrent neural network, convolutional neural network, etc.) or one or more machine-learning models. Such learning-based approaches can further correspond to the computing system storing or including one or more machine-learned models. In an embodiment, the machine-learned models may include an unsupervised learning model. In an embodiment, the machine-learned models may include neural networks (e.g., deep neural networks) or other types of machine-learned models, including non-linear models and/or linear models. Neural networks may include feed-forward neural networks, recurrent neural networks (e.g., long short-term memory recurrent neural networks), convolutional neural networks or other forms of neural networks. Some example machine-learned models may leverage an attention mechanism such as self-attention. For example, some example machine-learned models may include multi-headed self-attention models (e.g., transformer models).
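Since the disclosure does not specify a model, the following is only one conceivable shape for such a learned component: a small feed-forward network that predicts a task's execution time on a candidate hardware element, which could then guide the allocator. The input features and layer sizes are assumptions:

```python
# Sketch of a learning-based component (assumed, not specified by the patent):
# a small feed-forward network predicting a task's execution time on a given
# hardware element from simple features, to guide the allocator.
import torch
import torch.nn as nn

class RuntimePredictor(nn.Module):
    def __init__(self, n_features=6):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 32), nn.ReLU(),
            nn.Linear(32, 32), nn.ReLU(),
            nn.Linear(32, 1),
        )

    def forward(self, x):
        # x: (batch, n_features), e.g. [task FLOPs, input bytes, output bytes,
        # element clock rate, element bandwidth, element memory] (assumed features)
        return self.net(x).squeeze(-1)

model = RuntimePredictor()
features = torch.rand(4, 6)          # four candidate (task, element) pairings
predicted_runtime = model(features)  # lower predicted time -> preferred pairing
```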
[0011] As provided herein, a "network" or "one or more networks" can comprise any type of network or combination of networks that allows for communication between devices. In an embodiment, the network may include one or more of a local area network, wide area network, the Internet, secure network, cellular network, mesh network, peer-to-peer communication link, or some combination thereof, and may include any number of wired or wireless links. Communication over the network(s) may be accomplished, for instance, via a network interface using any type of protocol, protection scheme, encoding, format, packaging, etc.

[0012] One or more examples described herein provide that methods, techniques, and actions performed by a computing device are performed programmatically, or as a computer-implemented method. Programmatically, as used herein, means through the use of code or computer-executable instructions. These instructions can be stored in one or more memory resources of the computing device. A programmatically performed step may or may not be automatic.
[0013] One or more examples described herein can be implemented using programmatic modules, engines, or components. A programmatic module, engine, or component can include a program, a sub-routine, a portion of a program, or a software component or a hardware component capable of performing one or more stated tasks or functions. As used herein, a module or component can exist on a hardware component independently of other modules or components. Alternatively, a module or component can be a shared element or process of other modules, programs or machines.
[0014] Some examples described herein can generally require the use of computing systems and devices, including processing and memory resources. For example, one or more examples described herein may be implemented, in whole or in part, on computing devices such as servers and/or personal computers using network equipment (e.g., routers). Memory, processing, and network resources may all be used in connection with the establishment, use, or performance of any example described herein (including with the performance of any method or with the implementation of any system).
[0015] Furthermore, one or more examples described herein may be implemented through the use of instructions that are executable by one or more processors. These instructions may be carried on a non-transitory computer-readable medium. Machines shown or described with figures below provide examples of processing resources and computer-readable mediums on which instructions for implementing examples disclosed herein can be carried and/or executed. In particular, the numerous machines shown with examples of the invention include processors and various forms of memory for holding data and instructions. Examples of non-transitory computer-readable mediums include permanent memory storage devices, such as hard drives on personal computers or servers. Other examples of computer storage mediums include portable storage units, such as flash memory or magnetic memory. Computers, terminals, and network-enabled devices are all examples of machines and devices that utilize processors, memory, and instructions stored on computer-readable mediums. Additionally, examples may be implemented in the form of computer programs, or a computer-usable carrier medium capable of carrying such a program.
[0016] Further advantages, features, and details of the invention derive from the following description of preferred embodiments as well as from the drawings. The features and feature combinations previously mentioned in the description as well as the features and feature combinations mentioned in the following description of the figures and/or shown in the figures alone can be employed not only in the respectively indicated combination but also in any other combination or taken alone without leaving the scope of the invention.
BRIEF DESCRIPTION OF THE DRAWINGS
[0017] The novel features and characteristics of the disclosure are set forth in the appended claims. The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate exemplary embodiments and together with the description, serve to explain the disclosed principles. The same numbers are used throughout the figures to reference like features and components. Some embodiments of systems and/or methods in accordance with embodiments of the present subject matter are now described below, by way of example only, and with reference to the accompanying figures.
[0018] The drawings show in:
[0019] Fig. 1 a schematic block diagram according to an embodiment of an electronic computing device; and
[0020] Fig. 2 another schematic block diagram according to an embodiment of an electronic computing device.
[0021] In the figures the same elements or elements having the same function are indicated by the same reference signs.
DETAILED DESCRIPTION
[0022] The word "exemplary" is used herein to mean "serving as an example, instance, or illustration". Any embodiment or implementation of the present subject matter described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
[0023] While the disclosure is susceptible to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawing and will be described in detail below. It should be understood, however, that it is not intended to limit the disclosure to the particular forms disclosed, but on the contrary, the disclosure is to cover all modifications, equivalents, and alternatives falling within the scope of the disclosure.
[0024] The terms "comprises", "comprising", or any other variations thereof, are intended to cover a non-exclusive inclusion so that a setup, device or method that comprises a list of components or steps does not include only those components or steps but may include other components or steps not expressly listed or inherent to such setup or device or method. In other words, one or more elements in a system or apparatus preceded by "comprises" or "comprise" does not or do not, without more constraints, preclude the existence of other elements or additional elements in the system or method.
[0025] In the following detailed description of the embodiment of the disclosure, reference is made to the accompanying drawing that forms part hereof, and in which is shown by way of illustration a specific embodiment in which the disclosure may be practiced. This embodiment is described in sufficient detail to enable those skilled in the art to practice the disclosure, and it is to be understood that other embodiments may be utilized and that changes may be made without departing from the scope of the present disclosure. The following description is, therefore, not to be taken in a limiting sense.
[0026] FIG. 1 shows a block diagram depicting an example of an electronic computing device 10 in which embodiments described herein may be implemented, in accordance with examples described herein. In an embodiment, the electronic computing device 10 can include one or more control circuits 12 that may include one or more processors (e.g., microprocessors), one or more processing cores, a programmable logic circuit (PLC) or a programmable logic/gate array (PLA/PGA), a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), systems on chip (SoCs), or any other control circuit.
[0027] In an embodiment, the control circuit(s) 12 may be programmed by one or more computer-readable or computer-executable instructions stored on the non-transitory computer-readable medium 14. The non-transitory computer-readable medium 14 may be a memory device, also referred to as a data storage device, which may include an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination thereof. The non-transitory computer-readable medium 14 may form, for example, a computer diskette, a hard disk drive (HDD), a solid state drive (SSD) or solid state integrated memory, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), dynamic random access memory (DRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), and/or a memory stick. In some cases, the non-transitory computer-readable medium 14 may store computer-executable instructions or computer-readable instructions.
[0028] In various embodiments, the terms "computer-readable instructions" and "computer-executable instructions" are used to describe software instructions or computer code configured to carry out various tasks and operations. In various embodiments, if the computer-readable or computer-executable instructions form modules, the term "module" refers broadly to a collection of software instructions or code configured to cause the control circuit 12 to perform one or more functional tasks. The modules and computer-readable/executable instructions may be described as performing various operations or tasks when the control circuit(s) 12 or other hardware components execute the modules or computer-readable instructions.
[0029] In further embodiments, the electronic computing device 10 can include a communication interface 16 that enables communications over one or more networks 18 to transmit and receive data. In various examples, the electronic computing device 10 can communicate, over the one or more networks 18, with fleet vehicles using the communication interface 16 to receive sensor data and implement the task allocation and scheduling methods described throughout the present disclosure. In certain embodiments, the communication interface 16 may be used to communicate with one or more other systems. The communication interface 16 may include any circuits, components, software, etc. for communicating via one or more networks 18 (e.g., a local area network, wide area network, the Internet, secure network, cellular network, mesh network, and/or peer-to-peer communication link). In some implementations, the communication interface 16 may include, for example, one or more of a communications controller, receiver, transceiver, transmitter, port, conductors, software and/or hardware for communicating data/information.
[0030] FIG. 2 shows a block diagram depicting an example electronic computing device 10 including modules for performing task allocation and scheduling optimization, according to examples described herein. Referring to FIG. 2, the electronic computing device 10 includes a communication interface 16 that enables the electronic computing device 10 to communicate over one or more networks 18. The electronic computing device 10 includes a task allocation optimizer 20 that receives, as input, a set of computational tasks 24 and a computer hardware topology for a set of computer hardware elements. The hardware topology can describe the specifications of the computer hardware components, such as processing power or speed, bandwidth, clock rate, memory specifications, and the like.
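One hypothetical shape for such a stored topology description is sketched below; the field names and units are illustrative assumptions, not taken from the disclosure:

```python
# Hypothetical schema for the stored hardware-topology description; all field
# names and units are assumptions chosen for illustration.
hardware_topology = {
    "elements": {
        "cpu0":   {"type": "cpu", "cores": 8, "clock_ghz": 2.4, "memory_mb": 4096},
        "cpu1":   {"type": "cpu", "cores": 8, "clock_ghz": 2.4, "memory_mb": 4096},
        "accel0": {"type": "npu", "clock_ghz": 1.0, "memory_mb": 8192},
    },
    # Point-to-point link bandwidths, usable by a communication-cost model
    # like the one sketched earlier.
    "links": [
        {"between": ["cpu0", "cpu1"],   "bandwidth_gbps": 32},
        {"between": ["cpu0", "accel0"], "bandwidth_gbps": 16},
        {"between": ["cpu1", "accel0"], "bandwidth_gbps": 16},
    ],
}
```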
[0031] As provided herein, the task allocation optimizer 20 can assign each computational task in the set of computational tasks to one or more hardware elements of the set of computer hardware elements based on the hardware topology. The task allocation optimizer 20 can generate a task schedule 26 for executing each computational task 24 in the set of computational tasks 24. In various examples, the task schedule 26 optimizes communication for performing the set of computational tasks 24 based on the task allocation optimization.
[0032] In various examples, the electronic computing device 10 can include a graph generator 22 that generates a compute graph, based on the task schedule 26, for execution by the set of computer hardware elements. The compute graph can comprise the set of computational tasks 24, assignments for each computational task 24 to the one or more hardware elements, and the schedule for executing each computational task 24 in the set of computational tasks 24. The graph generator 22 can further generate an address table 28 (e.g., a workload or task reservation table) containing addresses of data in memory for executing the set of computational tasks 24 by the set of computer hardware elements. In certain examples, the address table 28 includes an optimal set of memory addresses for accessing the data in the memory, where the optimal set of memory addresses is used by the set of computer hardware elements for performing the set of computational tasks 24 in accordance with the task schedule 26 (e.g., when the instruction set triggers execution of each computational task).
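As an illustrative sketch only, such a table could be produced with a simple aligned bump allocator over the scheduled tasks' output buffers; the layout strategy, buffer sizes, base address, and schedule below are assumptions:

```python
# Sketch (assumed layout strategy): give each task's output buffer a fixed,
# aligned address so every hardware element can locate its inputs up front.
def build_address_table(schedule, output_sizes, base=0x1000, align=64):
    table, offset = {}, base
    for _start, task, _hw in sorted(schedule):
        table[task] = offset                    # address of the task's output
        size = output_sizes[task]
        offset += -(-size // align) * align     # round size up to alignment
    return table

example_schedule = [(0.0, "acquire", "cpu0"), (2.0, "detect", "accel0"),
                    (7.0, "predict", "cpu1"), (10.0, "control", "cpu0")]
table = build_address_table(
    example_schedule,
    {"acquire": 1 << 20, "detect": 4096, "predict": 2048, "control": 256})
```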
[0033] In certain implementations, the graph generator 22 can extract a directed acyclic graph (DAG) based on the task schedule 26 output by the task allocation optimizer 20. In such implementations, the graph generator 22 generates the compute graph based on the extracted DAG. In further examples, the compute graph can be utilized by the computer hardware elements to execute the set of computational tasks 24 in a deterministic manner, such as through the execution of workloads in independent pipelines. In accordance with examples described herein, the task allocation optimization is performed by the electronic computing device 10 to minimize total execution time of the set of computational tasks 24, minimize cost of communication for executing the compute graph, and minimize the risk of error or failure of completing a computational task 24 within a particular time constraint.
[0034] It is contemplated for examples described herein to extend to individual elements and concepts described herein, independently of other concepts, ideas or systems, as well as for examples to include combinations of elements recited anywhere in this application. Although examples are described in detail herein with reference to the accompanying drawings, it is to be understood that the concepts are not limited to those precise examples. As such, many modifications and variations will be apparent to practitioners skilled in this art. Accordingly, it is intended that the scope of the concepts be defined by the following claims and their equivalents. Furthermore, it is contemplated that a particular feature described either individually or as part of an example can be combined with other individually described features, or parts of other examples, even if the other features and examples make no mention of the particular feature.
Reference signs:
10 Electronic computing device
12 Control circuits
14 Computer-readable storage medium
16 Communication interface
18 Network
20 Task allocation optimizer
22 Graph generator
24 Computational task
26 Schedule
28 Table
Claims (10)
- CLAIMS
1. A computer-implemented method for optimizing task allocation and scheduling of a set of computational tasks (24) to be executed by an electronic computing device (10) comprising a set of computer hardware elements and a memory storing an instruction set, comprising the steps of:
- storing, in a memory resource, a description of a computer hardware topology of the computer hardware, and a description of a compute graph; and
- performing a task allocation optimization to:
- based on the description of the computer hardware topology, assigning each computational task (24) in the set of computational tasks (24) to one or more hardware elements of the set of computer hardware elements;
- generating a schedule (26) for executing each computational task (24) in the set of computational tasks (24);
- generating the compute graph for execution by the set of computer hardware elements, the compute graph comprising the set of computational tasks (24), assignments for each computational task (24) to the one or more hardware elements, and the schedule (26) for executing each computational task (24) in the set of computational tasks (24); and
- generating a table (28) containing addresses of data in the memory for executing the set of computational tasks (24) by the set of computer hardware elements.
- 2. The method according to claim 1, characterized in that based on the task allocation optimization a task schedule (26) is generated to optimize communication for performing the set of computational tasks (24) and an optimal set of memory addresses for accessing the data in the memory is generated, wherein the optimal set of memory addresses is used by the set of computer hardware elements for performing the set of computational tasks (24) in accordance with the task schedule (26) when the instruction set triggers execution of each computational task (24) in the set of computational tasks (24).
- 3. The method according to claim 1 or 2, characterized in that the task allocation optimization results in extraction of a directed acyclic graph based on the schedule (26) for executing each computational task (24) in the set of computational tasks (24), wherein the compute graph is generated based on the extracted directed acyclic graph.
- 4. The method according to any one of claims 1 to 3, characterized in that the set of computational tasks (24) correspond to autonomous driving tasks.
- 5. The method according to any one of claims 1 to 4, characterized in that the task allocation optimization is performed to minimize risk that a computational task (24) in the set of computational tasks (24) does not meet a corresponding deadline when executed in the computer hardware.
- 6. The method according to any one of claims 1 to 5, characterized in that the task allocation optimization is performed to minimize total execution time of the set of computational tasks (24).
- 7. The method according to any one of claims 1 to 6, characterized in that the task allocation optimization and the memory addressing are performed to minimize risk, total execution time, and cost of communication for executing the set of computational tasks (24) in the computer hardware.
- 8. A computer program product comprising program code means for performing a method according to any one of claims 1 to 7.
- 9. A non-transitory computer-readable storage medium (14) comprising at least the computer program product according to claim 8.
- 10. An electronic computing device (10) for optimizing task allocation and scheduling of a set of computational tasks (24), comprising at least a set of computer hardware elements, and a memory storing an instruction set, wherein the electronic computing device (10) is configured for performing a method according to any one of claims 1 to 7.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| GB2313127.9A | 2023-08-30 | 2023-08-30 | A computer-implemented method for optimizing task allocation and scheduling of a set of computational tasks, a computer program product, a non-transitory |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| GB202313127D0 GB202313127D0 (en) | 2023-10-11 |
| GB2633031A (en) | 2025-03-05 |
Family
ID=88237263
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| GB2313127.9A | GB2633031A (en), pending | 2023-08-30 | 2023-08-30 |
Country Status (1)
| Country | Link |
|---|---|
| GB (1) | GB2633031A (en) |
Citations (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20170308411A1 (en) * | 2016-04-20 | 2017-10-26 | Samsung Electronics Co., Ltd | Optimal task scheduler |
| US20190391796A1 (en) * | 2019-06-28 | 2019-12-26 | Intel Corporation | Control of scheduling dependencies by a neural network compiler |
| US20200249998A1 (en) * | 2019-02-01 | 2020-08-06 | Alibaba Group Holding Limited | Scheduling computation graph heterogeneous computer system |
| US20210304066A1 (en) * | 2020-03-30 | 2021-09-30 | Microsoft Technology Licensing, Llc | Partitioning for an execution pipeline |
| US20220004430A1 (en) * | 2020-07-01 | 2022-01-06 | International Business Machines Corporation | Heterogeneous system on a chip scheduler with learning agent |
| WO2023049287A1 (en) * | 2021-09-23 | 2023-03-30 | Callisto Design Solutions Llc | Intelligent scheduler |
| US20230121986A1 (en) * | 2017-09-21 | 2023-04-20 | Groq, Inc. | Processor compiler for scheduling instructions to reduce execution delay due to dependencies |
| US20230162032A1 (en) * | 2021-11-22 | 2023-05-25 | SambaNova Systems, Inc. | Estimating Throughput for Placement Graphs for a Reconfigurable Dataflow Computing System |
Also Published As
| Publication number | Publication date |
|---|---|
| GB202313127D0 (en) | 2023-10-11 |
Similar Documents
| Publication | Title |
|---|---|
| US11962664B1 (en) | Context-based data valuation and transmission |
| WO2020207504A1 (en) | Distributed centralized automatic driving system |
| Jiang | Vehicle E/E architecture and its adaptation to new technical trends |
| US20240378090A1 (en) | Out-of-order workload execution |
| US20190050732A1 (en) | Dynamic responsiveness prediction |
| US20240375670A1 (en) | Autonomous vehicle system on chip |
| CN110959145A (en) | Application priority based power management for computer devices |
| CN111818189B (en) | Vehicle road cooperative control system, method and medium |
| Kenjić et al. | Connectivity challenges in automotive solutions |
| US11849225B2 (en) | Throughput reduction in autonomous vehicle camera sensors |
| Ernst | Automated driving: The cyber-physical perspective |
| CN112346450A (en) | Robust Autonomous Driving Design |
| US20210312729A1 (en) | Distributed autonomous vehicle data logger |
| GB2633031A (en) | A computer-implemented method for optimizing task allocation and scheduling of a set of computational tasks, a computer program product, a non-transitory |
| CN110377272B (en) | Method and device for realizing SDK based on TBOX |
| CN115509726B (en) | Sensor data access system |
| US12397814B2 (en) | Training of a perception model on edge of a vehicle |
| US20230322241A1 (en) | Implementing degraded performance modes in an autonomous vehicle |
| US20220214924A1 (en) | Scheduled data transfer |
| WO2023077018A1 (en) | Data flow management for computational loads |
| US12522228B2 (en) | Workload execution in deterministic pipelines |
| US20250291748A1 (en) | Interconnect providing freedom from interference |
| Mohamed | Deep learning for autonomous driving |
| Yang et al. | Cames: enabling centralized automotive embedded systems with Time-Sensitive Network |
| CN115016317A (en) | A distributed heterogeneous autonomous driving simulation test method, system and device |