GB2583745A - Neural network for processing sensor data - Google Patents
Neural network for processing sensor data
- Publication number
- GB2583745A GB2583745A GB1906476.5A GB201906476A GB2583745A GB 2583745 A GB2583745 A GB 2583745A GB 201906476 A GB201906476 A GB 201906476A GB 2583745 A GB2583745 A GB 2583745A
- Authority
- GB
- United Kingdom
- Prior art keywords
- sensor
- data
- output
- neural network
- node
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/06—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
- G06N3/063—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/049—Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
Abstract
Neural network engine 101 is coupled to input 103 to receive event-based output sensor data from sensor elements 102a-c. The received output sensor data indicates a change detected by a first sensor element 102a and is generated independently of the output sensor data of second sensor elements 102b-c. Neural network engine 101 generates analysis data 104 from the output sensor data of the first element. Neuromorphic engine 101 processes events instantaneously in a localised, asynchronous and parallel manner, in contrast to conventional artificial neural networks. Nodes of engine 101 store state data that is maintained after transmitting node output and updated using the difference between the current and last received output sensor data, thus processing change events instantaneously without having to wait for data from all input connections. Parameters of a neural network trained on tensors can be transferred to event-based neural engine 101 without modification. Event-based neural networks provide fast-response, accurate sensor data processing for sensors such as Dynamic Vision Sensors (DVS) or a silicon cochlea.
Description
Neural network for processing sensor data
Technical Field
This specification relates to methods and systems for processing event-based sensor data to generate analysis data, in particular a computer-implemented neural network system for generating analysis data.
Background
Artificial neural networks are increasingly being deployed in real-world applications, for example, in image processing systems performing tasks such as face recognition or object tracking, in speech processing systems performing tasks such as speech recognition and speaker recognition amongst many other applications.
Certain applications however require a fast response. For example, autonomous emergency car braking systems may use image processing in order to detect potential imminent collisions, and may require an almost instant response to engage the braking system to avoid a collision. In addition, certain environments, such as mobile environments, may impose limitations on available power. Therefore, there is a need for fast sensor data processing systems, capable of providing accurate analysis data with minimal latency and minimal power requirements.
Summary
According to a first aspect, there is provided a computer-implemented neural network system for generating analysis data comprising: an input configured to receive event-based sensor data from a sensor comprising a plurality of sensor elements, wherein the received event-based sensor data comprises output sensor data indicative of a change detected by a first sensor element of the plurality of sensor elements, wherein the output sensor data indicative of a change detected by the first sensor element is generated by the sensor independent of output sensor data indicative of one or more changes detected by second sensor elements of the plurality of sensor elements; and a neural network engine coupled to the input, wherein the neural network engine is configured to process the output sensor data indicative of a change detected by the first sensor element responsive to the change indicated by the output sensor data to generate image analysis data.
Some recent improvements in artificial neural network processing methods have been due to the ability to train and deploy large scale artificial neural networks comprising millions of trainable parameters. These artificial neural networks are trained based upon conventional sensor data that is captured either periodically or on demand from all sensor elements of the sensor. For example, in image processing, an artificial neural network may be trained using image data captured using conventional frame-based image sensors.
An alternative type of sensor is an event-based sensor. An event-based sensor comprises a plurality of sensor elements configured to operate independently and asynchronously and to detect local changes in a sensed property. That is, each time an individual sensor element detects a change in a sensed property in its locality, sensor data indicative of the change is generated and output by the sensor instantaneously. As such, event-based sensors are particularly suitable for use in timing critical systems where decisions need to be made in a short period of time, such as autonomous emergency braking systems. For these types of systems, having access to as much data as possible as quickly as possible is advantageous. By contrast, a typical frame-based camera operating at 24 frames per second has a frame delay of approximately 41.7 ms, whilst an event-based camera is capable of outputting image data relating to an event with less than a millisecond of latency. In addition, the volume of data produced by an event-based sensor may be far smaller than that of a full capture sensor and therefore reduces bandwidth and storage requirements and enables faster processing of the sensor data in a power efficient manner.
Part of the success of large-scale artificial neural networks has been due to successes in implementing neural networks in specialist tensor processing hardware, such as graphical processing units (GPUs) and tensor processing units (TPUs), and in large scale distributed systems. However, such neural network systems are designed and optimized for processing tensor data, for example, a full frame of image data, and hence are not suited for processing event-based sensor data. One possibility for processing event-based data in conventional tensor-based artificial neural networks is to simply embed data from a single event into a tensor for processing. However, processing of such a sparse tensor is not efficient, especially given that it is possible that thousands or millions of events may be generated per second. Another possibility is to accumulate events over a period of time to generate more densely populated tensors for processing. However, this would nullify the low latency advantage that an event-based sensor has over conventional full capture sensors.
To overcome the above, there is provided a neural network engine that is capable of directly processing event-based sensor data in a localised, asynchronous and parallel manner. The neural network engine is configured to process the output sensor data indicative of a change detected by the first sensor element (i.e. an event) responsive to the change indicated by the output sensor data to generate analysis data. That is, a received event is processed and an output generated instantaneously without delay. As such, the neural network engine processes the output sensor data responsive to the change indicated by the output sensor data to generate analysis data. The neural network engine need not accumulate event data over time and is capable of directly processing event-based sensor data efficiently. In particular, the local and asynchronous processing can greatly reduce power requirements for processing sensor data.
As noted above, the event-based sensor may be an image sensor such as a Dynamic Vision Sensor, or an audio sensor such as a silicon cochlea. However, it will be appreciated that the system is not limited to image and audio processing, and other applications may include network sensors for the detection of threats to a computer network and sensors for monitoring a manufacturing process and control of high speed machinery, amongst other applications.
The neural network engine may comprise a plurality of parameters initialised based upon the parameters of a neural network trained using tensor data. That is, the parameter values from an artificial neural network trained using conventional tensor data may be transferred to the neural network engine. For example, the neural network engine may comprise a plurality of parameters initialised based upon the parameters of a neural network trained using full frame image data or data of an appropriate modality.
A portion of the plurality of parameters of the neural network engine may be initialised to have the same values as a corresponding portion of the parameters of the neural network trained using tensor data.
The transfer of parameter values means that the neural network engine may have the same or a substantially similar architecture, that is, the neural network engine may have the same or similar number of nodes, layers and connection topology. For example, where parameter values are below a particular threshold, the associated connection may be pruned and not transferred to the neural network engine.
Thus, the inventors have realised that it is possible to transfer the parameters of an artificial neural network trained on conventional tensor data to a neural network engine adapted to process event-based sensor data of the same modality. It is not required to develop special datasets of event-driven sensor data in order to train the neural network engine. As datasets of event-based sensor data are relatively rare compared to datasets of conventional tensor data, and large scale artificial neural networks require large amounts of training data to achieve good performance on tasks, the ability to transfer the parameters of conventionally trained neural networks to an event-based neural network engine is therefore particularly advantageous. Thus, any advances made in the field of conventional artificial neural networks may also be applied to the event-based neural network engine through parameter and architecture transfer.
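As a minimal sketch of the parameter transfer with threshold-based pruning described above (the `transfer_parameters` helper, the layer naming and the threshold value are illustrative assumptions, not part of the specification):

```python
import numpy as np

def transfer_parameters(trained_weights, prune_threshold=0.0):
    """Hypothetical helper: copy parameter values from a tensor-trained
    artificial neural network, pruning (here, zeroing) connections whose
    weight magnitude falls below prune_threshold."""
    transferred = {}
    for layer_name, weights in trained_weights.items():
        weights = np.array(weights, dtype=float)
        # Small-valued parameters contribute little to the analysis data,
        # so their connections are pruned and not transferred.
        weights[np.abs(weights) < prune_threshold] = 0.0
        transferred[layer_name] = weights
    return transferred
```

Here pruned connections are represented by zeroed weights for simplicity; an implementation might instead remove those connections from the engine's topology entirely.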
It will be appreciated that the term tensor refers to an n-dimensional array. For example, a one dimensional array is commonly referred to as a vector and a two dimensional array is commonly referred to as a matrix. A tensor may be provided as an input to an artificial neural network which in turn performs an operation on all of the data values of the tensor at the same time for efficiency. In this way, a conventional artificial neural network implementation operates in parallel but is synchronous rather than asynchronous.
The tensor may comprise sensor data obtained from all sensor elements of the sensor at one point in time. For example, a conventional frame-based image sensor comprises a two dimensional array of sensor elements. The frame-based image sensor provides tensor data comprising values for each sensor element. In general, a tensor comprising conventional sensor data may comprise spatially linked data captured at a single point in time, or temporally linked data captured for a location or a spatio-temporal pattern. Where such data is available, it is also possible that the tensor comprises an accumulation of events from an event-based sensor over a particular period of time such that a conventional artificial neural network may be efficiently trained using event-based sensor data.
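To make the contrast concrete, a brief sketch of the two data representations (the array shape, field names and timestamp unit are illustrative assumptions):

```python
import numpy as np

# A frame-based sensor emits a dense tensor: one value per sensor element,
# all captured at a single point in time (here a 4x4 single-channel frame).
frame = np.zeros((4, 4))
frame[2, 3] = 0.8  # every element has a value, even the unchanged ones

# An event-based sensor instead emits only the change: a single record
# identifying which element fired, its new value, and when.
event = {"row": 2, "col": 3, "value": 0.8, "t_us": 1042}
```

The dense frame carries sixteen values regardless of activity, whilst the event carries a single value, illustrating the bandwidth difference noted above.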
It will be further appreciated that the term artificial neural network refers to any type of artificial neural network architecture, for example, a feed-forward multi-layer perceptron, a convolutional neural network, a recurrent neural network, an artificial neural network having connections that skip layers, or an artificial neural network having shared parameters, amongst others.
The neural network engine may further comprise a first node configured to process the output sensor data indicative of a change detected by the first sensor element to update a state value associated with the first node; generate node output data based upon the state value; transmit the node output data through one or more output connections of the first node; and maintain the state value after transmitting the node output data. By contrast, a neuron in a conventional artificial neural network model recalculates its state value upon receipt of new tensor data. By maintaining the state value, the first node is adapted to respond quickly to change events.
In addition, once a current change event has been processed, the first node can immediately process a next received change event and generate and transmit output data without delay. Outputs may be generated instantaneously and there is no requirement for a predetermined delay period between transmissions of output data.
Processing the output sensor data indicative of a change detected by the first sensor element may comprise updating the state value based upon a difference between the currently received output sensor data indicative of a change detected by the first sensor element and a last received output sensor data indicative of a change detected by the first sensor element. The first node may comprise a plurality of inputs, each input of the plurality of inputs configured to receive output sensor data indicative of a change detected by a respective sensor element. The neural network engine may further comprise a plurality of first nodes configured to process the output sensor data indicative of a change detected by the first sensor element asynchronously and in parallel.
As discussed above, in an artificial neural network, each neuron requires input data to be received from all of its input connections before processing to generate output data may take place. In conventional feed-forward neural network implementations, neurons of one layer wait until processing of all previous layers has been completed. In this way, nodes in the same layer are synchronized. By contrast, configuring the first node of the neural network engine to update its state value based upon a difference between the currently received output sensor data indicative of a change detected by the first sensor element and a last received output sensor data indicative of a change detected by the first sensor element enables the first node to operate asynchronously with respect to other nodes in the same layer of the neural network engine. That is, the first node can generate output data based only on receipt of data from one of its input connections without having to wait for receipt of data from all of its input connections as in a standard artificial neuron. In this way, the first node is adapted to be responsive to change events and can be considered as an adaptation of an artificial neuron to operate asynchronously on an event-driven basis. This enables processing of change events instantaneously, with data being processed and propagated through the relevant node connections within the neural network engine asynchronously and in parallel. The neural network engine is therefore capable of responding to change events faster than an artificial neural network model or other types of neural network model such as a spiking neural network model. In addition, localised processing in the neural network engine is enabled and therefore power requirements may be reduced as only nodes that are in receipt of data need be active.
The difference between the currently received output sensor data indicative of a change detected by the first sensor element and a last received output sensor data indicative of a change detected by the first sensor element may be a weighted difference or a function may be applied to the currently received output sensor data and the last received output sensor data prior to taking the difference.
The first node may be further configured to store the last received output sensor data value received from each respective input of the first node.
Generating the node output data based upon the state value may comprise applying an activation function to the state value. For example, the activation function may be a linear function or a non-linear function such as a sigmoid function, hyperbolic tangent, linear rectification or maximum function amongst others.
The first node may be configured to transmit the node output data through the one or more output connections to one or more second nodes of the neural network engine.
For example, the first node may be a node in a first hidden layer in the neural network engine whilst the second nodes may be nodes in a higher layer of the neural network engine. It will be appreciated that the neural network engine may comprise further nodes arranged in further layers and may comprise a final layer coupled to an output of the neural network system to provide the generated analysis data.
Alternatively, the first node may be configured to transmit the node output data through the one or more output connections to an output coupled to the neural network engine. As such, the analysis data may comprise the node output data of the first node.
The first node may be configured to transmit the node output data through the one or more output connections based upon a conditional function. The conditional function may be based upon a timing signal. For example, it may be that transmittal is to occur at a certain time or transmittal may be delayed in order to simulate connections that skip one or more layers of the artificial neural network. In another example, the conditional function may be based upon a received number of output sensor data packets exceeding a threshold or the conditional function may determine whether the node output data is sufficiently different to previously generated node output data to warrant transmittal.
The system may further comprise a plurality of conditional functions, wherein each respective conditional function is associated with at least one of the one or more output connections of the first node. For example, one subset of the one or more output connections may be conditioned on a first timing signal and a different subset of the one or more output connections may be conditioned on a second timing signal.
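As a minimal sketch of one such conditional function, assuming a policy under which output is transmitted only when it differs sufficiently from the previously transmitted output (the `make_change_gate` name and closure-based design are illustrative assumptions):

```python
def make_change_gate(min_delta):
    """Build a conditional function for an output connection: transmit only
    when the new node output differs from the last transmitted output by at
    least min_delta (an assumed transmission policy)."""
    last_sent = {"value": None}  # state of the last transmitted output

    def should_transmit(output):
        if last_sent["value"] is None or abs(output - last_sent["value"]) >= min_delta:
            last_sent["value"] = output  # record what was transmitted
            return True
        return False  # change too small to warrant transmittal

    return should_transmit

gate = make_change_gate(min_delta=0.1)
```

A separate gate could be attached to each subset of output connections, mirroring the per-connection conditional functions described above.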
The neural network engine may be based upon a neuromorphic architecture and may be implemented by a neuromorphic computer. Neuromorphic computers comprise a plurality of processor cores, each core configured to operate independently, in parallel and asynchronously. The processor cores typically operate on an event-driven/interrupt basis. That is, the receipt of an event triggers an interrupt to wake up the core; the core processes the event, which may result in an output event being transmitted along a communication network for receipt by another core, which triggers the receiving core to wake up and process the output event, and so on. After processing of an event, a core returns to a sleep state until a next interrupt is received.
Whereas typically artificial neural networks would not be implemented using neuromorphic computers, the above described adaptations of the neural network engine to process event-based sensor data enables the neural network engine to be implemented efficiently using a neuromorphic computer. In particular, the localised and asynchronous processing greatly reduces the required power consumption.
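A simplified, illustrative model of this event-driven core behaviour using a blocking queue (the worker structure and the `None` shutdown convention are assumptions for the sketch, not a description of any particular neuromorphic hardware):

```python
import queue
import threading

def core_worker(inbox, outbox, process):
    """Model of a neuromorphic core: the thread blocks (sleeps) until an
    event arrives, processes it, and may emit an event that wakes a
    downstream core. A None event is used here as a shutdown signal."""
    while True:
        event = inbox.get()      # core sleeps until an event arrives
        if event is None:
            break                # shutdown signal received
        result = process(event)
        if result is not None:
            outbox.put(result)   # transmit: wakes the downstream core

inbox, outbox = queue.Queue(), queue.Queue()
worker = threading.Thread(target=core_worker,
                          args=(inbox, outbox, lambda e: e * 2))
worker.start()
inbox.put(3)       # a change event arrives and wakes the core
inbox.put(None)    # shut the core down after processing
worker.join()
```

Because the core consumes no processing time while blocked on the queue, only cores in receipt of events are active, mirroring the power saving described above.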
The system may comprise the event-based sensor. For example, the event-based sensor may be an image sensor, an audio sensor, a sensor associated with a computer network, or a sensor associated with a manufacturing process. The output sensor data may comprise an intensity value.
The analysis data may comprise data indicative of an object classification and/or data indicative of a change in state of an object. For example, where the sensor is an image sensor, the analysis data may be indicative of a classification of an object present in a scene. Where the sensor is an audio sensor, the analysis data may be used for automatic speech recognition or speaker detection, or the audio analysis data may indicate that a particular trigger sound has occurred such that a system performs an action responsive to the trigger sound, or that a sound indicative of a fault or the like has been detected. Where the sensor is a sensor associated with a computer network, the analysis data may comprise network sensor analysis data which may indicate the detection of a threat to a computer network. Where the sensor is associated with a manufacturing process, the analysis data may be indicative of a state of a machine used in the manufacturing process or may be indicative of a state of manufactured items.
In other examples, the neural network system may be used in a system for control of autonomous vehicles. For example, the neural network system may be used in an autonomous braking system where a critical decision to apply a brake may be required in as little time as possible. The neural network system may be used in other time sensitive systems.
According to a second aspect, there is provided a method of generating analysis data comprising the operations implemented by the system of the first aspect. That is, the method comprises the operations the input of the neural network system is configured to perform and the operations the neural network engine of the neural network system is configured to perform. The method may further comprise the optional operations that the neural network system of the first aspect is further configured to perform.
Aspects can be combined and it will be readily appreciated that features described in the context of one aspect can be combined with other aspects.
It will be appreciated that aspects can be implemented in any convenient form. For example, aspects may be implemented by appropriate computer programs which may be carried on appropriate carrier media which may be tangible carrier media (e.g. disks) or intangible carrier media (e.g. communications signals). Aspects may also be implemented using suitable apparatus which may take the form of programmable computers running computer programs.
Brief Description of the Figures
Embodiments will now be described, by way of example, with reference to the accompanying drawings, in which: Figure 1 is a schematic illustration of a neural network system according to an embodiment.
Figure 2 is a schematic illustration of a first node of the neural network system according to an embodiment.
Figure 3 is a schematic illustration of a data structure for storing state data of the first node according to an embodiment.
Figure 4A is a schematic illustration of a computer for implementing the neural network system according to an embodiment.
Figure 4B is a schematic illustration of a neuromorphic computer for implementing the neural network system according to an embodiment.
Figure 5 is a flowchart showing processing carried out for generating image analysis data.
Figure 6 is a flowchart showing processing carried out by an image sensor.
Figure 7 is a flowchart showing processing carried out by the first node.
Detailed Description
Referring to Figure 1, a computer-implemented neural network system 100 comprises a neural network engine 101 coupled to an input 103. The input 103 is configured to receive event-based sensor data from a sensor 102. The sensor 102 comprises a plurality of sensor elements 102a-c. Figure 1 depicts a cut-away portion of the sensor 102 with three sensor elements 102a-c shown. It will be appreciated that the sensor 102 may have more sensor elements than is shown in Figure 1. The sensor 102 may be external to the neural network system 100 and hence an optional component of the system 100.
The input 103 is configured to receive event-based sensor data from the sensor 102. The received event-based sensor data comprises output sensor data indicative of a change detected by a first sensor element 102a of the plurality of sensor elements 102a-c. The output sensor data indicative of a change detected by the first sensor element 102a is generated by the sensor 102 independent of output sensor data indicative of one or more changes detected by second sensor elements 102b, 102c of the plurality of sensor elements 102a-c.
That is, the sensor 102 is configured to generate output sensor data indicative of a change detected by a respective sensor element independently of changes detected by other sensor elements. For example, the sensor 102 may generate output sensor data corresponding to a change detected by the first sensor element 102a separately to any changes detected by second sensor elements 102b and 102c. This is in contrast to a conventional sensor where output sensor data for all sensor elements are collected into and transmitted as a single data structure such as a frame either on demand or periodically according to a refresh rate of the sensor. Further details with respect to the sensor 102 are provided below.
The neural network engine 101 is coupled to the input 103. The input 103 provides an interface for the neural network engine 101 with incoming sensor data. The input 103 may be external to the neural network engine 101. Alternatively, the input 103 may be part of the neural network engine 101 and in such cases, the input 103 may form part of an input layer of the neural network engine 101.
The neural network engine 101 is configured to process the output sensor data indicative of a change detected by the first sensor element 102a responsive to the change indicated by the output sensor data to generate analysis data 104. That is, neural network engine 101 is configured to operate on an event driven basis with each event corresponding to a change detected by a respective sensor element as indicated by the received output sensor data. Thus, the neural network engine 101 is configured to process events instantaneously to generate analysis data 104 once the output sensor data indicative of the change event is received by neural network engine 101.
This is in contrast to conventional artificial neural network implementations which are optimized for processing of tensor data.
The neural network engine 101 may comprise a plurality of parameters as is typical with any type of neural network. For example, the parameters of a conventional artificial neural network may include the weights and biases associated with the nodes of the artificial neural network. These are typically set using some form of training procedure such as backpropagation of error values computed through gradient descent methods.
As noted above, the parameters of the neural network engine 101 and therefore the architecture of the neural network engine 101 may be based upon the parameters and architecture of an artificial neural network trained using conventional tensor data. For example, a portion of the parameter values of the neural network engine 101 may be initialised to the same values as a portion of the parameters of the artificial neural network trained using tensor data. The portion may include all of the parameter values or may include a set of values that are above a particular threshold such that it is determined a respective parameter provides some contribution for generating analysis data whilst small valued parameters and associated connections may be pruned.
Thus, the inventors have realised that it is possible to transfer the parameters of an artificial neural network trained on conventional tensor data to a neural network engine 101 configured to process event-based sensor data of the same modality without requiring modification of the transferred parameter values. As such, the neural network engine 101 can benefit from the same advances that have been made in the field of artificial neural networks trained on conventional tensor data.
Furthermore, given that the parameters of the neural network engine 101 can be obtained from a conventionally trained artificial neural network without modification, the neural network engine 101 can effectively be trained using conventional tensor data without requiring the creation of a training data set comprising event-based image data. In addition, the inventors have also realised that it is not required to develop a special training set of event-based sensor data converted to tensor data in order to train an artificial neural network for transfer of parameters to the neural network engine 101.
This is particularly advantageous given that there is a limited amount of available event-based image data and that conventional tensor data is widely available.
The neural network engine 101 may further comprise a first node 201. An exemplary first node 201 is shown in the schematic illustration of Figure 2. The first node 201 may be configured to process the output sensor data indicative of a change detected by the first sensor element 102a to update a state value 203 associated with the first node 201. The first node 201 may be further configured to generate node output data based upon the state value 203. The first node 201 may additionally be configured to transmit the node output data through one or more output connections 204 and to maintain the state value 203 after transmitting the node output data. This is in contrast to a neuron in a conventional artificial neural network model, whereby a neuron's state value is recalculated upon receipt of new tensor data. By maintaining the state value, the first node 201 is adapted to respond quickly to change events.
Once a current change event has been processed, the first node 201 can immediately process a next received change event and generate and transmit output data without delay. Outputs are generated instantaneously and there is no requirement for a predetermined delay period between transmissions of output data.
Further details with respect to the configuration of the first node 201 will now be described. The processing of output sensor data indicative of a change detected by the first sensor element 102a may comprise updating the state value 203 based upon a difference between the currently received output sensor data, that is, the change event currently being processed, and a last received output sensor data indicative of a change detected by the first sensor element 102a. In this regard, the first node 201 may be configured to store the last received output sensor data value associated with the first element 102a and to update the stored value to the current output sensor data value at a point after processing has been completed.
As discussed above, in an artificial neural network, each neuron requires input data to be received from all of its input connections before processing to generate output data may take place. In conventional feed-forward neural network implementations, neurons of one layer wait until processing of all previous layers has been completed. In this way, nodes in the same layer are synchronized. By contrast, configuring the first node 201 of the neural network engine 101 to update its state value based upon a difference between the currently received output sensor data indicative of a change detected by the first sensor element 102a and the last received output sensor data indicative of a change detected by the first sensor element 102a enables the first node 201 to operate asynchronously with respect to other nodes in the same layer of the neural network engine 101. That is, the first node 201 can generate output data based only on receipt of data from one of its input connections without having to wait for receipt of data from all of its input connections as in a standard artificial neuron. In this way, the first node 201 is adapted to be responsive to change events and can be considered as an adaptation of an artificial neuron to operate asynchronously on an event-driven basis.
This enables processing of change events instantaneously, with data being processed and propagated through the relevant node connections within the neural network engine 101 asynchronously and in parallel. The neural network engine 101 is therefore capable of responding to change events faster than a spiking neural network model or an artificial neural network model.
The first node 201 may be configured to process the output sensor data as described in more detail below. The first node 201 may comprise a plurality of input connections 202a-c as shown in Figure 2. Each respective input connection 202a-c may be associated with one of the sensor elements 102a-c. For example, input connection 202a may receive output sensor data indicative of a change detected by the first sensor element 102a, input connection 202b may receive output sensor data indicative of a change detected by sensor element 102b and input connection 202c may receive output sensor data indicative of a change detected by sensor element 102c. When the first node 201 receives output sensor data indicative of a change detected by a sensor element, for example the first sensor element 102a via input connection 202a, processing to update the state value of the first node 201 may be carried out according to the following equation:

ŷ = y + fᵢ(x̂ᵢ) − fᵢ(xᵢ)    (1)

where ŷ is the updated state value of the first node 201, y is the current state value of the first node 201 prior to updating, x̂ᵢ is the value of the output sensor data indicative of a change detected by the first sensor element 102a, xᵢ is the value of the last received output sensor data indicative of a change detected by the first sensor element 102a, and fᵢ(·) is a function applied to the input data values which may, for example, be a multiplication by a weight wᵢ associated with the input connection 202a. As discussed above, weight values may be set based upon the weight values of a conventional artificial neural network trained on tensor data. The state value of the first node 201 may be initialised to 0 or another pre-determined value such as a bias value as appropriate. The input value xᵢ may be initialised to 0 or another pre-determined value as appropriate. After updating the state value, the value of the last received output sensor data may be updated to the currently received value, that is: xᵢ ← x̂ᵢ.
It will be appreciated that the state value, y, need not be a scalar value and may be a vector.
The output sensor data, x̂ᵢ, may also be a vector or may be a scalar value from which a vector is generated by the function fᵢ(·). For example, the generated vector may be a vector of zero values in every position except the i-th position, which has the value x̂ᵢ.
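The event-driven update of Equation (1) can be sketched as follows. This is an illustrative simulation only, not the claimed hardware implementation; the class and method names are invented for the example, and fᵢ is assumed to be a simple multiplication by a weight wᵢ, as the text suggests.

```python
class EventNode:
    """Sketch of the event-driven state update of Equation (1).

    Each input connection i stores a weight w_i (the assumed form of
    f_i) and the last received value x_i. On a change event the state
    is adjusted by the difference f_i(x_hat_i) - f_i(x_i), so the node
    need not wait for data on its other input connections.
    """

    def __init__(self, weights, bias=0.0):
        self.weights = list(weights)             # w_i per input connection
        self.last_inputs = [0.0] * len(weights)  # x_i, initialised to 0
        self.state = bias                        # y, initialised to a bias value

    def on_event(self, i, x_hat):
        """Process a change event with value x_hat on input connection i."""
        w = self.weights[i]
        # y' = y + f_i(x_hat_i) - f_i(x_i)       -- Equation (1)
        self.state += w * x_hat - w * self.last_inputs[i]
        # After updating the state, remember the current value: x_i <- x_hat_i
        self.last_inputs[i] = x_hat
        return self.state
```

Because the state is maintained between events rather than recalculated from all inputs, repeated calls to `on_event` on different connections accumulate asynchronously.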
As discussed above, the first node 201 may be configured to generate node output data based upon applying an activation function to the state value 203. For example, the generation may be performed according to the following equation:

z = a(y)    (2)

where z is the node output data, y is the updated state value 203, and a is an activation function. The activation function may be a linear function or a non-linear function such as a sigmoid function, hyperbolic tangent, linear rectification, a maximum function where the state value is a vector, or any other function as deemed appropriate by a person skilled in the art. A check may be performed to determine whether the node output data is different from the last generated node output data. If the node output data is not different, it is possible that the node output data does not need to be transmitted. In this way, sensor data is represented in the neural network model by a set of data values.
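The output generation of Equation (2), together with the unchanged-output check, can be sketched as below. This is a hedged illustration: the function names are invented, and a sigmoid is chosen as one of the activation functions the text lists.

```python
import math

def sigmoid(y):
    """One possible activation function a(y) for Equation (2)."""
    return 1.0 / (1.0 + math.exp(-y))

def generate_output(state, last_output, activation=sigmoid):
    """Apply Equation (2), z = a(y), then check whether the result
    differs from the last generated node output data; if it does not,
    transmission may be suppressed (returning None here stands for
    'do not transmit')."""
    z = activation(state)
    if z == last_output:
        return None  # unchanged output: no need to transmit
    return z
```

Suppressing unchanged outputs keeps the downstream traffic event-driven: only genuine changes propagate through the output connections.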
The one or more output connections 204 may be coupled to one or more second nodes of the neural network engine 101 and thus the output of the first node 201 may provide an input to the one or more second nodes. Alternatively, the first node 201 may be configured to transmit the node output data through the one or more output connections to an output coupled to the neural network engine 101.
The first node 201 may be configured to transmit the node output data through the one or more output connections 204 based upon a conditional function. The conditional function may be based upon a timing signal. For example, it may be that transmittal is to occur at a certain time, or transmittal may be delayed in order to simulate connections that skip one or more layers of the neural network. In another example, the conditional function may be based upon a received number of output sensor data packets exceeding a threshold, or the conditional function may determine whether the node output data is sufficiently different to previously generated node output data to warrant transmittal. There may also be a plurality of conditional functions with each conditional function associated with at least one of the one or more output connections 204 of the first node. For example, one subset of the one or more output connections may be conditioned on a first timing signal and a different subset of the one or more output connections may be conditioned on a second timing signal.
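Two of the conditional functions described above, and per-connection transmittal, can be sketched as follows. The helper names and the representation of connections as (condition, destination) pairs are assumptions for illustration only.

```python
def make_time_condition(fire_at):
    """Condition based upon a timing signal: transmit only once the
    simulated time reaches fire_at."""
    def cond(node_output, t):
        return t >= fire_at
    return cond

def make_change_condition(threshold):
    """Condition based upon difference: transmit only when the output
    differs sufficiently from the last transmitted value."""
    state = {"last": None}
    def cond(node_output, t):
        last = state["last"]
        if last is None or abs(node_output - last) >= threshold:
            state["last"] = node_output
            return True
        return False
    return cond

def transmit(node_output, t, connections):
    """Send node_output on every output connection whose conditional
    function holds. Each connection is a (condition, destinations) pair."""
    for cond, destinations in connections:
        if cond(node_output, t):
            destinations.append(node_output)
```

This mirrors the described arrangement in which different subsets of output connections are gated by different conditional functions.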
To facilitate the processing performed by the first node 201, the first node 201 may be provided with a local memory. The first node 201 may be configured to store appropriate data in a data structure 300 in the local memory. An exemplary data structure is shown in Figure 3. The data structure 300 may comprise an entry for storing a node index value to identify the node. The data structure 300 may further comprise entries for the state value of the node 302, the activation function of the node 303, data associated with the node's input connections 304 and data associated with the node's output connections 305. It is also possible that the actual data values themselves are stored in a shared memory, the shared memory being accessible to a plurality of nodes of the neural network engine 101. In such a case, the data structure 300 of the first node 201 may comprise pointers to the locations in shared memory where the corresponding values are stored.
The data associated with the node's input connections 304 may comprise a set of tuples, with each respective tuple comprising an index value identifying the incoming node or sensor element associated with the input connection, an associated function for the input connection as shown in Equation (1), and the last received value associated with the input connection. The set of indices identifying the input connections are denoted by T in Figure 3.
The data associated with the node's output connections 305 may comprise a set of index values indicating the identity of the second nodes that the first node is connected to and thus where the output node data should be transmitted. This set of indices identifying the output nodes is denoted by D in Figure 3.
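The data structure 300 described above can be sketched as follows. The class and field names are invented for illustration; the input-connection tuples carry the source index, the per-connection function fᵢ of Equation (1), and the last received value, while the output entry holds the set D of destination node indices.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, Set

@dataclass
class InputConnection:
    """One input-connection tuple: the index of the incoming node or
    sensor element, the associated function f_i, and the last
    received value x_i."""
    source: int
    fn: Callable[[float], float]
    last_value: float = 0.0

@dataclass
class NodeRecord:
    """Sketch of data structure 300: node index 301, state value 302,
    activation function 303, input-connection data 304 (keyed by the
    set T of source indices) and output indices 305 (the set D)."""
    index: int
    state: float = 0.0
    activation: Callable[[float], float] = lambda y: y
    inputs: Dict[int, InputConnection] = field(default_factory=dict)
    outputs: Set[int] = field(default_factory=set)
```

In a shared-memory variant, the `state` and `last_value` fields would instead hold references into memory accessible to a plurality of nodes.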
As will be appreciated, the neural network engine 101 may comprise a plurality of first nodes, second nodes and further nodes. The nodes may be arranged in layers following a conventional artificial neural network architecture such as a multilayer perceptron, a convolutional neural network, a recurrent neural network or other type of node arrangement as deemed appropriate. It will also be appreciated that the second nodes and further nodes of the neural network engine 101 may be configured to perform processing in a similar manner to that of the first node 201. However, instead of processing output sensor data indicating a change detected by a respective sensor element, the second or further nodes are configured to process a change in the data received through one of their input connections. As such, the nodes of the neural network engine 101 are all configured to process data instantaneously, asynchronously and in parallel to generate the analysis data. Each node may have its own corresponding activation function and/or set of input functions as deemed appropriate by a person skilled in the art.
The final layer of the neural network engine 101 may be coupled to an output of the neural network system 100 to provide the generated analysis data. The analysis data may comprise data indicative of an object classification such as a likelihood of the presence of a particular type of object. In this way, the neural network engine 101 may act as a classifier, or the neural network engine 101 may provide the analysis data as input data to an external image classifier. In another example, the analysis data may comprise data indicative of a change in state of an object. For example, the analysis data may be image analysis data and may indicate that an object is moving in a particular direction or has stopped moving.
It will be appreciated that the analysis data may take various forms other than image analysis data, such as audio analysis data, network sensor analysis data, analysis data associated with real-world entities such as manufactured products or machinery under the control of a process, or any other analysis data indicative of a change detected by a sensor. The analysis data may be used for the performance of a particular technical task. For example, where the analysis data comprises audio analysis data, the analysis data may indicate that a particular trigger sound has occurred such that a system performs an action responsive to the trigger sound, or that a sound indicative of a fault or the like has been detected. Where the analysis data comprises network sensor analysis data, the analysis data may indicate the detection of a threat to a computer network. Other event-based data suitable for processing will be apparent to a person skilled in the art.
The neural network system 100 may be used in a system for control of autonomous vehicles. For example, the neural network system 100 may be used in an autonomous braking system where a critical decision to apply a brake may be required in as short a time as possible. The neural network system 100 may be used in other time-sensitive systems.
As discussed above, the neural network system 100 is configured to receive event-based sensor data from a sensor 102 via input 103. For example, the sensor may be an event-based image sensor and may comprise a plurality of sensor elements 102a-c as shown in Figure 1. A sensor element may be a photosensor for converting light incident on the sensor element to an electrical signal and may indicate the intensity of the light incident on the sensor element. Alternatively, the sensor element may detect a different property of light.
The plurality of sensor elements 102a-c may be arranged in a two-dimensional array and may correspond to pixel locations of a full frame image. However, unlike frame-based image sensors which output a full image based upon data collected from all sensor elements in the two-dimensional array either periodically or on demand, each sensor element of the present system is configured to operate and output sensor data independently. That is, output sensor data is generated each time an individual sensor element detects a change in a property of the light incident on the particular sensor element. As such, the output of the image sensor 102 is a series of change detection events corresponding to changes detected by individual sensor elements in real-time. It will be appreciated that a change detection event may be generated by the image sensor 102 when a detected change in incident light by a sensor element is above a threshold, to avoid spurious detection events being generated.
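The per-element, thresholded event generation described above can be sketched as below. Representing successive intensity readings as dictionaries keyed by element co-ordinates is purely a simulation convenience; a real event-based sensor emits an asynchronous stream rather than comparing frames.

```python
def detect_events(prev_readings, new_readings, threshold):
    """Sketch of per-element change detection: each sensor element
    emits an event only when its own change in intensity exceeds the
    threshold, entirely independently of the other elements.
    Readings map (row, col) co-ordinates to intensity values."""
    events = []
    for coord, new_val in new_readings.items():
        old_val = prev_readings.get(coord, 0)
        if abs(new_val - old_val) > threshold:
            # Output sensor data: element identifier plus intensity value
            events.append({"coord": coord, "intensity": new_val})
    return events
```

Elements whose intensity is unchanged (or changed below the threshold) produce no output at all, which is the contrast drawn with frame-based sensors.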
The output sensor data may comprise an intensity value and an identifier of the sensor element, such as the co-ordinates of the sensor element in the array. The output sensor data may be transmitted to the input 103 of the neural network system 100 via a high-speed, low-latency interconnect. The image sensor 102 and the neural network system 100 may be provided together as an integrated device or may be provided as separate components to be connected together.
An exemplary type of image sensor 102 suitable for use with the neural network system 100 is a Dynamic Vision Sensor. One suitable Dynamic Vision Sensor is a Celex-4 DVS sensor manufactured by CelePixel, Shanghai, China. The sensor has a resolution of 640 x 768 pixels and each (pixel) sensor element is capable of producing events at a latency of a few microseconds. Various other sensors suitable for generating event based sensor data will be understood by one skilled in the art.
The neural network engine 101 may be based upon a neuromorphic architecture and, in order to take full advantage of the configuration of the neural network engine 101, may be implemented in hardware using a neuromorphic computer.
In general, neuromorphic computers are a class of specialist computing devices for implementing spiking neural network type models in hardware. Spiking neural networks are a very different type of neural network model to artificial neural networks. A spiking neuron is a closer model of a biological neuron as compared to a neuron of an artificial neural network. Spiking neural networks operate based upon propagating impulses known as "spikes" through the network. Data is modelled by the spiking neural network based upon a timing/frequency of these spikes and thus requires the system to track time. By contrast, artificial neural networks operate based upon the propagation of data values and do not require explicit knowledge of time. Whereas typically artificial neural networks would not be implemented using neuromorphic computers, the adaptations of the neural network engine 101 to process event-based sensor data enables the neural network engine 101 to be implemented efficiently using a neuromorphic computer.
Neuromorphic computers typically comprise multiple parallel processor cores that are connected by a communication network, and a distributed memory system with each core having its own local memory that is not accessible by any other core. The processor cores typically operate on an event-driven/interrupt basis. That is, the receipt of an event triggers an interrupt to wake up the core; the core processes the event, which may result in an output event being transmitted along the communication network for receipt by another core. After processing has been completed, the core returns to a sleep state until a next interrupt is received.
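The wake-process-sleep cycle of such a core can be sketched as follows. This is a software simulation only: a thread-safe queue stands in for the hardware interrupt mechanism, and the function names are invented for the example.

```python
import queue

def run_core(inbox, process, outbox, timeout=0.01):
    """Simulate one event-driven core: block ('sleep') until an event
    arrives in the inbox, process it, and place any resulting output
    event on the communication network (outbox). Returns once the
    inbox stays empty, standing in for the core remaining asleep."""
    while True:
        try:
            event = inbox.get(timeout=timeout)  # wake on 'interrupt'
        except queue.Empty:
            return                              # no more events: sleep
        result = process(event)
        if result is not None:
            outbox.put(result)                  # route to another core
```

Each simulated core touches only its own inbox and outbox, mirroring the local-memory, message-passing organisation described above.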
Neuromorphic computers are therefore highly parallel with each core operating asynchronously based upon a message-passing, event-driven paradigm. Given the event-driven nature of the neural network system 100 and the asynchronous and parallel operation of its nodes, the neural network system 100 is particularly suitable for implementation on a neuromorphic computer.
Referring now to Figure 4B, a neuromorphic computer suitable for implementing the neural network system 100 will now be described. Figure 4B is a schematic illustration of an exemplary neuromorphic chip 402a. A neuromorphic computer may comprise one or more such neuromorphic chips 402a connected through a data communication network.
The neuromorphic chip 402a comprises a plurality of processor cores 402b. The plurality of processor cores 402b are connected to a communication network on the chip 402a by a plurality of bi-directional data communication links 402c in order to enable the plurality of processor cores 402b to send and receive messages between them. Each processor core 402b has its own dedicated local memory 402d that may store data such as data received from other processor cores, data transmitted to other processor cores, internal state data, and other functional parameters.
The neuromorphic chip 402a further comprises a central router 402g that is responsible for directing network traffic on the communication network between the plurality of processor cores 402b. Whilst Figure 4B depicts a central router 402g, it will be appreciated that the neuromorphic chip 402a may operate a distributed routing system.
For example, each processor core 402b may have a dedicated routing function.
Messages transmitted on the communication network 402c may be indicative of events generated by the plurality of processor cores 402b. The messages may be source routed. That is, a message may identify the sender with routing tables indicating where to transmit a message based upon the identity of the sender.
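Source routing of this kind can be sketched as below. The routing table keyed on sender identity and the helper names are illustrative assumptions, not the chip's actual routing scheme.

```python
def next_hops(sender, routing_table):
    """Source-routing sketch: a message carries only the identity of
    its sender; the routing table, keyed on that sender identity,
    determines where the message should be forwarded."""
    return routing_table.get(sender, [])

def deliver(message, routing_table, mailboxes):
    """Forward a message to the mailbox of every core listed for the
    message's sender."""
    for core in next_hops(message["sender"], routing_table):
        mailboxes[core].append(message)
```

A core thus needs no per-message destination field; the network topology is encoded entirely in the routing tables.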
The chip 402a may receive external input data via an input 402e for processing by the plurality of computing cores 402b. The processing may generate output data, which may be transmitted to an external output component connected to an output 402f of the chip 402a.
As noted above, the neuromorphic computer may comprise a plurality of neuromorphic chips 402a coupled together to expand the number of computing cores 402b available to the neuromorphic computer for carrying out processing. The plurality of chips 402a may be connected together to form a network via the externally facing input 402e and output 402f. The router 402g may be configured appropriately to handle routing of messages to the processor cores of other chips.
It will be appreciated that neuromorphic computers may have lower power requirements when implementing neural network systems as compared to implementations of neural network systems on a general purpose computing system or on a computing system where the neural network system is implemented using a graphics processing unit. For example, the processor cores of neuromorphic computers are typically in a low-powered sleep state until the receipt of data to be processed. After processing the data, the processor core returns to a low-powered state awaiting further data to be processed. As such, the neuromorphic computer operates with minimal power requirements and may therefore perform a task using less power than other systems. Such lower power processing may be particularly advantageous in computing systems where processing may be required to be performed using battery power.
In order to implement a neural network engine 101 on a neuromorphic computer, each node 201 of neural network engine 101 may be allocated to a particular processor core 402b on a neuromorphic chip 402a. That is, a particular computing core 402b may be dedicated to performing the operations of the allocated node of the neural network engine 101. If the number of nodes of the neural network engine 101 is greater than the number of computing cores available, groups of nodes may be allocated to individual computing cores 402b. The router 402g may be configured to route messages in accordance with the connection topology between nodes of the neural network engine 101.
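The allocation of nodes to cores described above can be sketched as follows. The round-robin grouping shown is one simple assumed policy for the case where nodes outnumber cores; the function name is invented for the example.

```python
def allocate_nodes(node_ids, num_cores):
    """Sketch of node-to-core allocation: with enough cores each node
    gets its own core; otherwise nodes are grouped round-robin so
    every core carries roughly the same number of nodes."""
    allocation = {core: [] for core in range(num_cores)}
    for i, node in enumerate(node_ids):
        allocation[i % num_cores].append(node)
    return allocation
```

The resulting mapping would then drive the router configuration, so that messages follow the connection topology between the nodes hosted on each core.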
Sensor event data from the image sensor 102 may be received at the input 402e of the neuromorphic chip 402a and transmitted through the router 402g to the appropriate processor core(s) 402b allocated to the node(s) 201 tasked with processing the sensor event data. Generated image analysis data may be provided at the output 402f of the neuromorphic chip 402a for further processing by external components.
An exemplary neuromorphic chip suitable for use with the present invention is the SpiNNaker chip produced by the University of Manchester, UK. Further details in relation to the SpiNNaker chip may be found in Furber et al., "Overview of the SpiNNaker System Architecture", IEEE Transactions on Computers, December 2013, which is hereby incorporated by reference in its entirety.
It will be appreciated that the implementation of the neural network engine 101 on a neuromorphic computer provides an efficient, low-power system for processing event-based sensor data to generate image analysis data at a low latency. Thus, a neural network engine having similar performance to that of a large-scale artificial neural network may be deployed in a more efficient manner.
Alternatively, the neural network engine 101 may be simulated in software where a neuromorphic computer is not available. The simulation may be performed on a computing device such as a computer or mobile device such as that shown in Figure 4A.
Figure 4A shows a computing device 401 comprising a CPU 401a which is configured to read and execute instructions stored in a volatile memory 401b which takes the form of a random access memory. The volatile memory 401b stores instructions for execution by the CPU 401a and data used by those instructions. For example, in use, data corresponding to the simulated neural network system may be stored in volatile memory 401b.
The computing device 401 further comprises non-volatile storage in the form of a hard disc drive 401c or a suitable alternative such as flash memory. The computing device 401 further comprises an I/O interface 401d to which are connected peripheral devices used in connection with the computing device 401. More particularly, a display 401e is configured so as to display output from the computer 401. The display 401e may, for example, display a visual representation of the image analysis data at a particular display resolution. Input devices are also connected to the I/O interface 401d. Such input devices could include a keyboard 401f and a mouse 401g for a computer. Other input devices may also include gesture-based input devices such as touch-screens for a mobile device. A network interface 401h allows the computing device 401 to be connected to an appropriate computer network so as to receive and transmit data from and to other computing devices. For example, the simulation may be distributed across a plurality of computing devices rather than being run on a single computing device. The CPU 401a, volatile memory 401b, hard disc drive 401c, I/O interface 401d, and network interface 401h are connected together by a bus 401i.
Referring now to Figure 5, an exemplary method of generating analysis data will now be described. It will be appreciated that the method may be implemented by the neural network system 100 of Figure 1.
At step S501, event-based sensor data is received by an input 103 of a neural network system 100 from a sensor 102 comprising a plurality of sensor elements 102a-c. The received event-based sensor data comprises output sensor data indicative of a change detected by a first sensor element 102a of the plurality of sensor elements 102a-c. The output sensor data indicative of a change detected by the first sensor element 102a is generated by the image sensor independent of output sensor data indicative of one or more changes detected by second sensor elements 102b-c of the plurality of sensor elements 102a-c. As discussed above, the sensor 102 is configured to generate output sensor data indicative of a change detected by a respective sensor element independently of changes detected by other sensor elements. This is in contrast to a conventional image sensor where output sensor data for all sensor elements are collected into and transmitted as a single data structure either on demand or periodically according to a refresh rate of the image sensor.
At step S502, the output sensor data indicative of a change detected by the first sensor element 102a is processed responsive to the change indicated by the output sensor data by a neural network engine 101 to generate analysis data 104. That is, as discussed above, the processing of the output sensor data is performed on an event driven basis with each event corresponding to a change detected by a respective sensor element as indicated by the received output sensor data.
The processing of Figure 5 may optionally comprise processing performed by the sensor 102 prior to the receipt of event-based sensor data by the input 103 at step S501. For example, as shown in Figure 6, the processing performed at the sensor may comprise detecting a change in a sensed property by a first sensor element 102a of the image sensor 102 as shown at step S601. As an example, for an image sensor, the sensed property may be an intensity of light incident on the sensor element.
Various other sensors and sensed properties will be understood by one skilled in the art, as described above.
At step S602, output sensor data indicative of the change detected by the first sensor element 102a is generated. As noted above, the output sensor data indicative of the change detected by the first sensor element is generated independently of one or more changes detected by second sensor elements 102b-c of the plurality of sensor elements 102a-c. At step S603, the output sensor data indicative of the change detected by the first sensor element is transmitted from the sensor 102 to the input 103 of the neural network system 100. Processing may then continue at the neural network system 100 starting from step S501 and continuing to step S502.
The processing of step S502 may comprise the processing of Figure 7 which may be implemented by a first node 201 of the neural network engine 101.
At step S701, the output sensor data indicative of a change detected by the first sensor element 102a received at step S501, is processed to update a state value associated with the first node 201. For example, the state value may be updated based upon a difference between the currently received output sensor data indicative of a change detected by the first sensor element and a last received output sensor data indicative of a change detected by the first sensor element as described above with respect to the configuration of the first node 201.
At step S702, node output data is generated based upon the state value. For example, the generating may comprise applying an activation function to the state value as described above with respect to the configuration of the first node 201.
At step S703, the node output data is transmitted through one or more output connections of the first node. The transmittal may be based upon a conditional function as described above with respect to the configuration of the first node 201.
At step S704, after transmittal of the node output data, the state value is maintained. That is, the updated state value computed at step S701 is not reset. The updated state value may be stored in a local memory of the first node 201 in a data structure 300 as described above with reference to Figure 3.
It will be appreciated that the processing of Figure 7 may be performed by a plurality of first nodes 201 of the neural network engine 101 asynchronously and in parallel as described above. It will be further appreciated that the neural network engine 101 may comprise one or more second nodes coupled to the one or more first nodes with the one or more second nodes processing the output of the one or more first nodes in a similar manner to that of the processing of Figure 7.
In addition, it will be appreciated the configuration of the neural network system 100 and the sensor 102 as described above with reference to Figures 1 to 4A and 4B may be combined with the processing methods described with reference to Figures 5 to 7.
Although specific embodiments of the invention have been described above, it will be appreciated that various modifications can be made to the described embodiments without departing from the spirit and scope of the present invention. That is, the described embodiments are to be considered in all respects exemplary and non-limiting. In particular, where a particular form has been described for particular processing, it will be appreciated that such processing may be carried out in any suitable form arranged to provide suitable output.
Claims (22)
- CLAIMS: 1. A computer-implemented neural network system for generating analysis data comprising: an input configured to receive event-based sensor data from a sensor comprising a plurality of sensor elements, wherein the received event-based sensor data comprises output sensor data indicative of a change detected by a first sensor element of the plurality of sensor elements, wherein the output sensor data indicative of a change detected by the first sensor element is generated by the sensor independent of output sensor data indicative of one or more changes detected by second sensor elements of the plurality of sensor elements; and a neural network engine coupled to the input, wherein the neural network engine is configured to process the output sensor data indicative of a change detected by the first sensor element responsive to the change indicated by the output sensor data to generate analysis data.
- 2. The system of claim 1, wherein the neural network engine comprises a plurality of parameters initialised based upon the parameters of a neural network trained using tensor data.
- 3. The system of claim 2, wherein a portion of the plurality of parameters of the neural network engine is initialised to have the same values of a portion of the parameters of the neural network trained using tensor data.
- 4. The system of any preceding claim, wherein the neural network engine further comprises a first node configured to: process the output sensor data indicative of a change detected by the first sensor element to update a state value associated with the first node; generate node output data based upon the state value; transmit the node output data through one or more output connections of the first node; and maintain the state value after transmitting the node output data.
- 5. The system of claim 4, wherein processing the output sensor data indicative of a change detected by the first sensor element comprises updating the state value based upon a difference between the currently received output sensor data indicative of a change detected by the first sensor element and a last received output sensor data indicative of a change detected by the first sensor element.
- 6. The system of claim 5, wherein the first node comprises a plurality of inputs, each input of the plurality of inputs configured to receive output sensor data indicative of a change detected by a respective sensor element.
- 7. The system of claim 5 or 6, wherein the first node is further configured to store the last received output sensor data value received from each respective input of the first node.
- 8. The system of any one of claims 4 to 7, wherein the neural network engine further comprises a plurality of first nodes configured to process the output sensor data indicative of a change detected by the first sensor element asynchronously and in parallel.
- 9. The system of any one of claims 4 to 8, wherein generating the node output data based upon the state value comprises applying an activation function to the state value.
- 10. The system of any one of claims 4 to 9, wherein the first node is configured to transmit the node output data through the one or more output connections to one or more second nodes of the neural network engine.
- 11. The system of any one of claims 4 to 10, wherein the first node is configured to transmit the node output data through the one or more output connections to an output coupled to the neural network engine.
- 12. The system of any one of claims 4 to 11, wherein the first node is configured to transmit the node output data through the one or more output connections based upon a conditional function.
- 13. The system of claim 12, wherein the conditional function is based upon a timing signal.
- 14. The system of any one of claims 12 or 13, further comprising a plurality of conditional functions, wherein each respective conditional function is associated with at least one of the one or more output connections of the first node.
- 15. The system of any one of claims 8 to 14, wherein each respective first node is associated with a node grouping.
- 16. The system of any preceding claim, wherein the neural network engine is based upon a neuromorphic architecture.
- 17. The system of claim 16, wherein the neural network engine is implemented by a neuromorphic computer.
- 18. The system of any preceding claim, wherein the neural network engine comprises a plurality of parameters initialised based upon the parameters of a neural network trained using full frame image data.
- 19. The system of any preceding claim, further comprising the event-based sensor.
- 20. The system of any preceding claim, wherein the sensor is one of the following: an image sensor, an audio sensor, a sensor associated with a computer network, or a sensor associated with a manufacturing process.
- 21. The system of any preceding claim, wherein the output sensor data further comprises an intensity value.
- 22. The system of any preceding claim, wherein the analysis data comprises data indicative of an object classification and/or data indicative of a change in state of an object.
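Claims 4 to 13 describe an event-driven node that keeps a persistent state value, updates it from the difference between the current and last received sensor values on each input, applies an activation function to produce its output, and transmits that output only when a conditional function (for example, one gated by a timing signal) is satisfied. The following sketch is illustrative only and is not taken from the patent text; all class and parameter names (`EventNode`, `threshold`, `timing_signal_active`) are hypothetical, and `tanh` is merely one possible activation function for claim 9.

```python
import math

class EventNode:
    """Illustrative event-driven node per claims 4-13 (hypothetical names)."""

    def __init__(self, weights, threshold=0.5):
        self.weights = dict(weights)  # one weight per input connection (claim 6)
        self.threshold = threshold    # used by the conditional function (claim 12)
        self.state = 0.0              # persistent state value (claim 4)
        self.last_value = {}          # last received value per input (claim 7)
        self.outputs = []             # output connections (claims 10-11)

    def connect(self, callback):
        # Register a downstream node or output as a callable.
        self.outputs.append(callback)

    def receive(self, input_id, value):
        # Claim 5: update the state from the difference between the
        # currently received and last received sensor values.
        delta = value - self.last_value.get(input_id, 0.0)
        self.last_value[input_id] = value
        self.state += self.weights[input_id] * delta

    def maybe_transmit(self, timing_signal_active=True):
        # Claim 9: apply an activation function to the state value.
        out = math.tanh(self.state)
        # Claims 12-13: transmit only if the conditional function holds,
        # here a timing signal combined with a magnitude threshold.
        if timing_signal_active and abs(out) > self.threshold:
            for send in self.outputs:
                send(out)
            # Claim 4: the state value is maintained, not reset, after
            # transmitting the node output data.
            return out
        return None
```

As a usage sketch, an intensity-change event arriving from a sensor element drives the state, and transmission occurs only once the conditional function is met:

```python
received = []
node = EventNode(weights={"px0": 1.0})
node.connect(received.append)
node.receive("px0", 0.9)                       # event from sensor element "px0"
out = node.maybe_transmit(timing_signal_active=True)
```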
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| GB1906476.5A GB2583745A (en) | 2019-05-08 | 2019-05-08 | Neural network for processing sensor data |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| GB201906476D0 GB201906476D0 (en) | 2019-06-19 |
| GB2583745A true GB2583745A (en) | 2020-11-11 |
Family
ID=67385014
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| GB1906476.5A Withdrawn GB2583745A (en) | 2019-05-08 | 2019-05-08 | Neural network for processing sensor data |
Country Status (1)
| Country | Link |
|---|---|
| GB (1) | GB2583745A (en) |
Families Citing this family (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| DE102020214123B4 (en) * | 2020-11-10 | 2023-08-03 | Robert Bosch Gesellschaft mit beschränkter Haftung | Method for detecting an environment of a first sensor system |
Citations (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN105760930A (en) * | 2016-02-18 | 2016-07-13 | 天津大学 | Multilayer spiking neural network recognition system for AER |
2019
- 2019-05-08: GB application GB1906476.5A filed (published as GB2583745A); status: withdrawn
Patent Citations (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN105760930A (en) * | 2016-02-18 | 2016-07-13 | 天津大学 | Multilayer spiking neural network recognition system for AER |
Non-Patent Citations (3)
| Title |
|---|
| IEEE International Symposium on Circuits and Systems (ISCAS) May 2018, Camunas-Mesa et al "Event-Driven Configurable Module with Refractory Mechanism for ConvNets on FPGA", doi:10.1109/ISCAS.2018.8351570 * |
| Intelligent Virtual Agent, IVA 2015, Springer, 6 October 2014, pp 171-182, Tschechne S et al, "Bio-Inspired Optic Flow from Event-Based Neuromorphic Sensor Input", doi:10.1007/978-3-319-11656-3_16 * |
| Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2017, pp 7388-7397, Amir A et al, "A Low Power, Fully Event-Based Gesture Recognition System", doi:10.1109/CVPR.2017.781 * |
Also Published As
| Publication number | Publication date |
|---|---|
| GB201906476D0 (en) | 2019-06-19 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| WAP | Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1) |