US20230393897A1 - Signal processing device, signal processing method, and imaging apparatus - Google Patents
- Publication number
- US20230393897A1 (application US 18/033,745)
- Authority
- US
- United States
- Prior art keywords
- signal processing
- data
- processing device
- additional information
- units
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
- G06F9/5038—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/10—Flow control; Congestion control
- H04L47/31—Flow control; Congestion control by tagging of packets, e.g. using discard eligibility [DE] bits
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/50—Queue scheduling
- H04L47/62—Queue scheduling characterised by scheduling criteria
- H04L47/625—Queue scheduling characterised by scheduling criteria for service slots or service orders
- H04L47/6275—Queue scheduling characterised by scheduling criteria for service slots or service orders based on priority
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L49/00—Packet switching elements
- H04L49/90—Buffering arrangements
- H04L49/9057—Arrangements for supporting packet reassembly or resequencing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/45—Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from two or more image sensors being of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F15/00—Digital computers in general; Data processing equipment in general
- G06F15/76—Architectures of general purpose stored program computers
- G06F15/82—Architectures of general purpose stored program computers data or demand driven
- G06F15/825—Dataflow computers
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2209/00—Indexing scheme relating to G06F9/00
- G06F2209/50—Indexing scheme relating to G06F9/50
- G06F2209/5021—Priority
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2209/00—Indexing scheme relating to G06F9/00
- G06F2209/50—Indexing scheme relating to G06F9/50
- G06F2209/509—Offload
Definitions
- the present disclosure relates to a signal processing device, a signal processing method, and an imaging apparatus that perform signal processing on each of multiple pieces of data.
- camera-mounted equipment such as a smartphone has come to be equipped with multiple kinds of sensors including, for example, an RGB sensor, a monochrome sensor, a range sensor, and a polarization sensor.
- in some cases, different signal processing is necessary for each kind of the multiple sensors; in other cases, it is possible to perform signal processing common to the multiple sensors.
- typically, hardware that performs signal processing is prepared as a dedicated block for each sensor. Therefore, an increase in the number of sensors results in an increase in hardware scale.
- to address this, there is a method in which multiple sensors share a signal processing system block by using a CPU (Central Processing Unit) or a DFP (Data Flow Processor) dedicated to data flow (see PTL 1).
- A signal processing device includes: multiple input units that add additional information necessary for signal processing to each of multiple pieces of data inputted from respective multiple external devices, and output the multiple pieces of data; and multiple stages of processing units each configured to perform common signal processing on each of the multiple pieces of data, on the basis of the additional information.
- A signal processing method includes: adding additional information necessary for signal processing to each of multiple pieces of data inputted from respective multiple external devices, and outputting the multiple pieces of data; and performing, in each of multiple stages of processing units, common signal processing on each of the multiple pieces of data, on the basis of the additional information.
- An imaging apparatus includes: multiple sensors; multiple input units that add additional information necessary for signal processing to each of multiple pieces of data inputted from the respective multiple sensors, and output the multiple pieces of data; and multiple stages of processing units each configured to perform common signal processing on each of the multiple pieces of data, on the basis of the additional information.
- FIG. 1 is a block diagram illustrating a configuration example of a signal processing device according to a comparative example.
- FIG. 2 is a block diagram illustrating a configuration example of a signal processing device according to a first embodiment of the present disclosure.
- FIG. 3 is a block diagram illustrating a configuration example of an input unit in the signal processing device according to the first embodiment.
- FIG. 4 is a block diagram illustrating a first configuration example of a processing unit in the signal processing device according to the first embodiment.
- FIG. 5 is a block diagram illustrating a second configuration example of the processing unit in the signal processing device according to the first embodiment.
- FIG. 6 is a block diagram illustrating a first specific example of a configuration of the signal processing device according to the first embodiment.
- FIG. 7 is a block diagram illustrating a second specific example of the configuration of the signal processing device according to the first embodiment.
- FIG. 8 is an explanatory diagram schematically illustrating operation timing of signal processing by a signal processing device according to a comparative example.
- FIG. 9 is an explanatory diagram schematically illustrating operation timing of signal processing by the signal processing device according to the second specific example.
- FIG. 10 is a block diagram illustrating a third specific example of the configuration of the signal processing device according to the first embodiment.
- FIG. 11 is a block diagram illustrating a fourth specific example of the configuration of the signal processing device according to the first embodiment.
- FIG. 12 is a block diagram illustrating a specific example of queue processing to be performed by a queue processor in the signal processing device according to the first embodiment.
- FIG. 13 is a block diagram illustrating a fifth specific example of the configuration of the signal processing device according to the first embodiment.
- FIG. 14 is a block diagram illustrating a specific example of queue processing to be performed by the queue processor in the signal processing device according to the first embodiment.
- FIG. 15 is a block diagram illustrating a sixth specific example of the configuration of the signal processing device according to the first embodiment.
- FIG. 16 is an explanatory diagram illustrating an example of additional information to be added by the signal processing device according to the first embodiment.
- FIG. 17 is a block diagram illustrating a seventh specific example of the configuration of the signal processing device according to the first embodiment.
- FIG. 18 is an explanatory diagram illustrating an example of the additional information to be added by the signal processing device according to the seventh specific example.
- FIG. 19 is an explanatory diagram illustrating an example of an amount of electric power consumed by the processing unit in the signal processing device according to the comparative example.
- FIG. 20 is an explanatory diagram illustrating an example of an amount of electric power consumed by the processing unit in the signal processing device according to the first embodiment.
- FIG. 21 is a flowchart illustrating an example of operation related to each of the multiple input units in the signal processing device according to the first embodiment.
- FIG. 22 is a flowchart illustrating an example of operation related to each of the multiple processing units in the signal processing device according to the first embodiment.
- FIG. 1 illustrates a configuration example of a signal processing device 100 according to a comparative example.
- FIG. 1 illustrates a configuration example in a case of processing data outputted from multiple sensors 10 A, 10 B, and 10 C serving as multiple external devices.
- the multiple sensors 10 A, 10 B, and 10 C each output, for example, image data.
- the multiple external devices and the signal processing device 100 may configure an imaging apparatus as a whole.
- the signal processing device 100 includes multiple image input sections 21 A, 21 B, and 21 C provided to correspond respectively to the multiple sensors 10 A, 10 B, and 10 C. To the multiple image input sections 21 A, 21 B, and 21 C, pieces of data from the multiple sensors 10 A, 10 B, and 10 C are inputted respectively.
- the signal processing device 100 further includes a CPU 2 , a DFP 3 , multiple stages of signal processing system hardware, and multiple SW processing units (software) 40 A, 40 B, and 40 C.
- the multiple SW processing units 40 A, 40 B, and 40 C are provided to correspond respectively to the multiple sensors 10 A, 10 B, and 10 C. To each of the multiple SW processing units 40 A, 40 B, and 40 C, data after signal processing by the multiple stages of signal processing system hardware is inputted.
- FIG. 1 illustrates a configuration example in a case where multiple stages of ISPs (Image Signal Processors) 31 A, 31 B, and 31 C that perform image processing are present, as the multiple stages of signal processing system hardware.
- the multiple stages of ISPs 31 A, 31 B, and 31 C are each shared by the multiple sensors 10 A, 10 B, and 10 C, and able to perform common signal processing (processing A, processing B, and processing C) on each of multiple pieces of data.
- the DFP 3 is controlled by the CPU 2 .
- the DFP 3 controls a setting value and operation timing (data flow) of each of the multiple image input sections 21 A, 21 B, and 21 C and the multiple stages of ISPs 31 A, 31 B, and 31 C.
- the multiple stages of ISPs 31 A, 31 B, and 31 C perform signal processing time-divisionally, under the control of the DFP 3 , on the respective pieces of data from the multiple sensors 10 A, 10 B, and 10 C inputted via the multiple image input sections 21 A, 21 B, and 21 C.
- FIG. 2 schematically illustrates a configuration example of a signal processing device 1 according to a first embodiment of the present disclosure. Note that description is omitted as appropriate regarding portions, in FIG. 2 , having configurations and operations similar to those of the signal processing device 100 according to the comparative example in FIG. 1 .
- FIG. 2 illustrates a configuration example in a case of processing data outputted from the multiple sensors 10 A, 10 B, and 10 C serving as the multiple external devices, as with the signal processing device 100 according to the comparative example.
- the multiple sensors 10 A, 10 B, and 10 C each output, for example, image data.
- the multiple external devices and the signal processing device 1 may configure an imaging apparatus as a whole.
- the signal processing device 1 includes multiple input units 20 A, 20 B, and 20 C provided to correspond respectively to the multiple sensors 10 A, 10 B, and 10 C. To the multiple input units 20 A, 20 B, and 20 C, pieces of data from the multiple sensors 10 A, 10 B, and 10 C are inputted respectively.
- the signal processing device 1 further includes the CPU 2 , multiple stages of signal processing system hardware, and the multiple SW processing units 40 A, 40 B, and 40 C.
- the multiple external devices may be two, or four or more, external devices.
- the multiple input units 20 A, 20 B, and 20 C may likewise be two, or four or more, input units.
- the multiple SW processing units 40 A, 40 B, and 40 C may likewise be two, or four or more, SW processing units.
- FIG. 2 illustrates a configuration example in a case where multiple stages of processing units 30 A, 30 B, and 30 C are present, as the multiple stages of signal processing system hardware.
- the multiple stages of processing units 30 A, 30 B, and 30 C are each shared by the multiple sensors 10 A, 10 B, and 10 C, and able to perform common signal processing (processing A, processing B, and processing C) on each of multiple pieces of data.
- the multiple stages of signal processing system hardware may be two, or four or more, stages of signal processing system hardware.
- the multiple input units 20 A, 20 B, and 20 C include the multiple image input sections 21 A, 21 B, and 21 C respectively.
- the multiple stages of processing units 30 A, 30 B, and 30 C include the multiple stages of ISPs 31 A, 31 B, and 31 C respectively.
- FIG. 3 illustrates a configuration example of an input unit 20 x in the signal processing device 1 .
- the input unit 20 x represents any one of the multiple input units 20 A, 20 B, and 20 C.
- An image input section 21 x represents any one of the multiple image input sections 21 A, 21 B, and 21 C.
- the multiple input units 20 A, 20 B, and 20 C generate and output data obtained by adding additional information necessary for signal processing to each of multiple pieces of data inputted from the respective multiple sensors 10 A, 10 B, and 10 C.
- the multiple input units 20 A, 20 B, and 20 C each include a packet generator 22 serving as a first packet generator, in an output stage of the corresponding one of the multiple image input sections 21 A, 21 B, and 21 C.
- the packet generator 22 generates a packet of each of the multiple pieces of data inputted from the respective multiple sensors 10 A, 10 B, and 10 C, adds the additional information as a header to the packet, and outputs the packet.
- the additional information may include instruction information indicating a routing instruction as to which processing unit, out of the multiple stages of processing units 30 A, 30 B, and 30 C, is to perform signal processing on each of the multiple pieces of data.
- the additional information may include setting information indicating a setting value to be used for signal processing in each of the multiple stages of processing units 30 A, 30 B, and 30 C.
- the CPU 2 serves as a controller that instructs each of the multiple input units 20 A, 20 B, and 20 C as to which processing unit, out of the multiple stages of processing units 30 A, 30 B, and 30 C, is to perform signal processing.
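To make the header concrete, the following is a minimal Python sketch of a packet carrying the additional information described above (routing instruction, per-unit setting values, and a priority). All class, field, and unit names here are illustrative assumptions, not taken from the patent.

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class Header:
    route: list          # ordered IDs of the processing units to visit
    settings: dict       # setting values keyed by processing-unit ID
    priority: int = 0    # larger value = higher processing priority

@dataclass
class Packet:
    header: Header
    payload: Any         # the sensor data itself

def input_unit(sensor_data, route, settings, priority=0):
    """Role of the first packet generator 22: wrap raw sensor data in a
    packet whose header holds the additional information."""
    return Packet(Header(list(route), dict(settings), priority), sensor_data)

# hypothetical frame routed through three stages, with a setting for "30A"
pkt = input_unit([1, 2, 3], route=["30A", "30B", "30C"],
                 settings={"30A": {"gain": 2}}, priority=1)
```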
- FIG. 4 illustrates a first configuration example of a processing unit 30 x in the signal processing device 1 .
- the processing unit 30 x represents any one of the multiple stages of processing units 30 A, 30 B, and 30 C.
- An ISP 31 x represents any one of the multiple stages of ISPs 31 A, 31 B, and 31 C.
- the multiple stages of processing units 30 A, 30 B, and 30 C are each configured to perform common signal processing on each of the multiple pieces of data, on the basis of the additional information.
- the multiple stages of processing units 30 A, 30 B, and 30 C each include a packet analyzer 32 , and a packet generator 33 serving as a second packet generator.
- the packet analyzer 32 is provided in an input stage of each of the multiple stages of ISPs 31 A, 31 B, and 31 C.
- the packet generator 33 is provided in an output stage of each of the multiple stages of ISPs 31 A, 31 B, and 31 C.
- the packet analyzer 32 analyzes the header added to the packet, and determines the setting value to be used for signal processing in the ISP 31 x.
- the packet generator 33 generates a packet in which information to be used for signal processing of the processing unit in the next stage, out of the multiple stages of ISPs 31 A, 31 B, and 31 C, is added as additional information to a header.
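Under the same illustrative assumptions (names are not from the patent), one stage can be sketched as: the packet analyzer checks the routing and looks up the unit's own setting value, the ISP applies the common processing, and the second packet generator re-emits a packet whose header addresses the next stage.

```python
def process(packet, unit_id, op):
    """One processing stage: analyze the header, apply the common signal
    processing with this unit's setting value, and generate the packet
    for the next stage."""
    header = packet["header"]
    # packet analyzer: this unit must be the next hop in the route
    assert header["route"][0] == unit_id
    settings = header["settings"].get(unit_id, {})
    result = op(packet["payload"], **settings)
    # packet generator: header for the processing unit in the next stage
    return {"header": {"route": header["route"][1:],
                       "settings": header["settings"]},
            "payload": result}

pkt = {"header": {"route": ["30A", "30B"],
                  "settings": {"30A": {"gain": 2}}},
       "payload": [1, 2, 3]}
out = process(pkt, "30A", lambda data, gain=1: [v * gain for v in data])
# out carries the processed payload and a header routed to "30B"
```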
- FIG. 5 illustrates a second configuration example of the processing unit 30 x in the signal processing device 1 .
- the additional information to be added by each of the multiple input units 20 A, 20 B, and 20 C may include information indicating a priority for signal processing of each of the multiple pieces of data, in each of the multiple stages of processing units 30 A, 30 B, and 30 C.
- the multiple stages of processing units 30 A, 30 B, and 30 C may each include a queue processor 34 in an input stage of the packet analyzer 32 .
- the queue processor 34 performs queue processing on each of the multiple pieces of data, on the basis of the information indicating the priority, as will be described later. In a case where a queue overflow occurs, the queue processor 34 may discard data of which the priority is relatively low, out of the multiple pieces of data, as will be described later.
- the CPU 2 may be a controller that is able to adjust the setting value of each of the multiple sensors 10 A, 10 B, and 10 C.
- the queue processor 34 may provide the CPU 2 with a notification that the queue overflow has occurred, as will be described later.
- the CPU 2 may adjust the setting value of each of the multiple sensors 10 A, 10 B, and 10 C, on the basis of the notification from the queue processor 34 .
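A hedged sketch of the queue processing described above (the class and callback names are assumptions): packets are queued by priority, the relatively low-priority packet is discarded on overflow, and a notification callback, standing in for the CPU 2, is invoked.

```python
import bisect

class QueueProcessor:
    """Priority queue: dequeue serves the highest priority first; on
    overflow the relatively low-priority entry is discarded and the
    controller is notified."""

    def __init__(self, capacity, notify_overflow):
        self.capacity = capacity
        self.notify_overflow = notify_overflow
        self._queue = []  # kept sorted, highest priority at the front

    def enqueue(self, priority, packet):
        # negated keys keep the key list ascending for bisect while the
        # queue itself stays ordered from high to low priority
        keys = [-p for p, _ in self._queue]
        self._queue.insert(bisect.bisect_right(keys, -priority),
                           (priority, packet))
        if len(self._queue) > self.capacity:
            dropped = self._queue.pop()        # lowest-priority packet
            self.notify_overflow(dropped[1])   # e.g. notify the CPU

    def dequeue(self):
        return self._queue.pop(0)[1] if self._queue else None

dropped = []
q = QueueProcessor(capacity=2, notify_overflow=dropped.append)
q.enqueue(1, "low")
q.enqueue(3, "high")
q.enqueue(2, "mid")   # overflow: "low" is discarded and reported
```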
- the CPU 2 gives, to each of the multiple input units 20 A, 20 B, and 20 C, the additional information including, for example, a routing instruction as to which processing unit, out of the multiple stages of processing units 30 A, 30 B, and 30 C, is to perform signal processing.
- This allows each processing unit to route and process data automatically, without being controlled by the CPU 2 .
- making it possible to perform data flow processing in the signal processing system hardware makes it unnecessary for data flow to be controlled by the CPU 2 or the DFP 3 ( FIG. 1 ). This allows for processing regardless of a limitation imposed by the CPU 2 or the DFP 3 , even in a case where the number of external devices is increased, for example, to 10 and further to 20.
- FIG. 6 illustrates a first specific example of a configuration of the signal processing device 1 according to the first embodiment.
- a signal processing device 1 A according to the first specific example illustrated in FIG. 6 represents a configuration example in a case of processing data outputted from an RGB sensor 110 A and a monochrome sensor 110 B as the multiple external devices.
- the multiple external devices and the signal processing device 1 A may configure an imaging apparatus as a whole.
- the signal processing device 1 A includes the multiple input units 20 A and 20 B provided to correspond to the RGB sensor 110 A and the monochrome sensor 110 B.
- the data from the RGB sensor 110 A is inputted to the input unit 20 A.
- the data from the monochrome sensor 110 B is inputted to the input unit 20 B.
- the signal processing device 1 A further includes the CPU 2 , multiple stages of signal processing system hardware, and the multiple SW processing units 40 A and 40 B.
- FIG. 6 illustrates a configuration example in a case where the multiple stages of processing units 30 A, 30 B, and 30 C provided to correspond to the RGB sensor 110 A are present, as the multiple stages of signal processing system hardware.
- FIG. 6 illustrates a configuration example in a case where one processing unit 30 A provided to correspond to the monochrome sensor 110 B is present, as the signal processing system hardware.
- On the basis of an instruction from the CPU 2 , the input unit 20 A generates a packet in which additional information including, for example, a routing instruction is added to a header Hd of data Da from the RGB sensor 110 A, and outputs the packet to the processing unit 30 A provided to correspond to the RGB sensor 110 A.
- On the basis of an instruction from the CPU 2 , the input unit 20 B generates a packet in which additional information including, for example, a routing instruction is added to a header Hd of data Db from the monochrome sensor 110 B, and outputs the packet to the processing unit 30 A provided to correspond to the monochrome sensor 110 B.
- To the processing unit 30 B, the data Da from the processing unit 30 A provided to correspond to the RGB sensor 110 A, and the data Db from the processing unit 30 A provided to correspond to the monochrome sensor 110 B are inputted in common.
- the processing unit 30 B performs signal processing of the data Da and the data Db time-divisionally, on the basis of the additional information indicated by the header Hd.
- the processing unit 30 B outputs the data Db after signal processing to the SW processing unit 40 B.
- the processing unit 30 B outputs the data Da after signal processing to the processing unit 30 C in the next stage.
- the processing unit 30 C outputs the data Da after signal processing to the SW processing unit 40 A.
- the multiple stages of processing units 30 A, 30 B, and 30 C may each repeat signal processing multiple times, depending on the contents of the signal processing. For example, noise reduction processing or the like may be executed multiple times as signal processing.
- FIG. 6 illustrates an example in which the processing unit 30 C repeats multiple times of signal processing.
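The routing in this first specific example can be sketched as follows (unit IDs and helper names are illustrative): each packet's route decides which stages it visits, so a shared stage processes both streams without per-packet control by the CPU.

```python
def run(packet, units):
    """Drive a packet through the processing units named in its route."""
    for unit_id in packet["route"]:
        packet["payload"] = units[unit_id](packet["payload"])
    return packet["payload"]

# toy stages: each just tags the data so the path taken is visible
units = {
    "30A": lambda d: d + ["A"],   # per-sensor front stage
    "30B": lambda d: d + ["B"],   # stage shared by both sensors
    "30C": lambda d: d + ["C"],   # extra stage on the RGB path only
}

da = {"route": ["30A", "30B", "30C"], "payload": ["Da"]}  # RGB data
db = {"route": ["30A", "30B"], "payload": ["Db"]}         # monochrome data
```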
- FIG. 7 illustrates a second specific example of the configuration of the signal processing device 1 according to the first embodiment.
- a signal processing device 1 B according to the second specific example illustrated in FIG. 7 represents a configuration example in a case of processing data outputted from an RGB sensor 210 A and an RGB sensor 210 B having different pixel sizes from each other, as the multiple external devices.
- the multiple external devices and the signal processing device 1 B may configure an imaging apparatus as a whole. Note that illustration of the CPU 2 is omitted in FIG. 7 .
- the RGB sensor 210 A is an image sensor having a higher resolution than the RGB sensor 210 B.
- FIG. 7 illustrates an example in which the pixel size of the RGB sensor 210 A is, for example, 12 Mpix, and the pixel size of the RGB sensor 210 B is, for example, 4 Mpix.
- the signal processing device 1 B includes the multiple input units 20 A and 20 B provided to correspond to the RGB sensor 210 A and the RGB sensor 210 B.
- the data from the RGB sensor 210 A is inputted to the input unit 20 A.
- the data from the RGB sensor 210 B is inputted to the input unit 20 B.
- the signal processing device 1 B further includes the unillustrated CPU 2 , multiple stages of signal processing system hardware, and the multiple SW processing units 40 A and 40 B.
- FIG. 7 illustrates a configuration example in a case where multiple stages of processing units 30 A, 30 B, 30 C, and 30 D provided in common to each of the RGB sensor 210 A and the RGB sensor 210 B are present, as the multiple stages of signal processing system hardware.
- the processing unit 30 A includes a preprocessing section 51 A.
- the processing unit 30 B includes a demosaic processing section 51 B.
- the processing unit 30 C includes a Y (luminance) C (chroma) processing section 51 C.
- the processing unit 30 D includes a color adjuster 51 D.
- On the basis of an instruction from the CPU 2 , the input unit 20 A generates a packet in which additional information including, for example, a routing instruction is added to the header Hd of the data Da from the RGB sensor 210 A, and outputs the packet to the processing unit 30 A.
- On the basis of an instruction from the CPU 2 , the input unit 20 B generates a packet in which additional information including, for example, a routing instruction is added to the header Hd of the data Db from the RGB sensor 210 B, and outputs the packet to the processing unit 30 A.
- the multiple stages of processing units 30 A, 30 B, 30 C, and 30 D each perform processing of the data Da and the data Db time-divisionally, on the basis of the additional information indicated by the header Hd.
- the processing unit 30 D outputs the data Da after signal processing to the SW processing unit 40 A, and outputs the data Db after signal processing to the SW processing unit 40 B.
- FIG. 8 schematically illustrates operation timing of signal processing by a signal processing device according to a comparative example.
- FIG. 8 illustrates an example case where the multiple stages of processing units 30 A, 30 B, 30 C, and 30 D are controlled by the DFP 3 , as the signal processing device according to the comparative example.
- FIG. 9 schematically illustrates operation timing of signal processing by the signal processing device 1 B according to the second specific example.
- FIGS. 8 and 9 illustrate, in an upper stage, a timing example of data output from each of the RGB sensors 210 A and 210 B.
- data is continuously outputted from each of the RGB sensors 210 A and 210 B.
- FIG. 8 illustrates, in a lower stage, a timing example of signal processing by the DFP 3 and the preprocessing section 51 A in the processing unit 30 A.
- FIG. 9 illustrates, in a lower stage, a timing example of signal processing by the CPU 2 and the preprocessing section 51 A in the processing unit 30 A.
- in the comparative example, control (a kick) by the DFP 3 is necessary each time the preprocessing section 51 A processes the data from each of the RGB sensors 210 A and 210 B time-divisionally. Therefore, processing is limited by the performance of the DFP 3 .
- the preprocessing section 51 A operates autonomously on the basis of the additional information indicated by the header Hd and performs signal processing. Therefore, when processing the data from each of the RGB sensors 210 A and 210 B time-divisionally, control by the CPU 2 is unnecessary, and the processing is not limited by performance of the CPU 2 .
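This autonomous, time-divisional behavior can be sketched as a stage that simply drains its input queue, looking up the per-sensor setting value for each frame as it arrives, with no external kick per frame (all names and values here are illustrative assumptions):

```python
from collections import deque

def preprocessing_stage(inbox, settings):
    """Process frames in arrival order; the per-sensor setting value is
    looked up per frame, so no controller kick is needed per frame."""
    served = []
    while inbox:
        frame = inbox.popleft()
        gain = settings[frame["sensor"]]["gain"]
        frame["payload"] = [v * gain for v in frame["payload"]]
        served.append(frame["sensor"])
    return served

frames = [
    {"sensor": "210A", "payload": [1, 2]},  # higher-resolution RGB frame
    {"sensor": "210B", "payload": [3]},     # lower-resolution RGB frame
    {"sensor": "210A", "payload": [4, 5]},
]
inbox = deque(frames)
settings = {"210A": {"gain": 2}, "210B": {"gain": 3}}
served = preprocessing_stage(inbox, settings)
```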
- FIG. 10 illustrates a third specific example of the configuration of the signal processing device 1 according to the first embodiment.
- a signal processing device 1 C according to the third specific example illustrated in FIG. 10 represents a configuration example in a case of processing data outputted from an RGB sensor 310 A and a monochrome sensor 310 B as the multiple external devices.
- the multiple external devices and the signal processing device 1 C may configure an imaging apparatus as a whole. Note that illustration of the CPU 2 is omitted in FIG. 10 .
- FIG. 10 illustrates an example in which the RGB sensor 310 A and the monochrome sensor 310 B have the same pixel size, for example, 12 Mpix.
- the signal processing device 1 C includes the multiple input units 20 A and 20 B provided to correspond to the RGB sensor 310 A and the monochrome sensor 310 B.
- the data from the RGB sensor 310 A is inputted to the input unit 20 A.
- the data from the monochrome sensor 310 B is inputted to the input unit 20 B.
- the signal processing device 1 C further includes the unillustrated CPU 2 , multiple stages of signal processing system hardware, and the multiple SW processing units 40 A and 40 B.
- FIG. 10 illustrates a configuration example in a case where the multiple stages of processing units 30 A, 30 B, 30 C, and 30 D provided to correspond to the RGB sensor 310 A are present, as the multiple stages of signal processing system hardware.
- FIG. 10 illustrates a configuration example in a case where the multiple stages of processing units 30 A and 30 B provided to correspond to the monochrome sensor 310 B are present, as the signal processing system hardware.
- the processing unit 30 A includes the preprocessing section 51 A.
- the processing unit 30 B includes the demosaic processing section 51 B.
- the processing unit 30 C includes the Y (luminance) C (chroma) processing section 51 C.
- the processing unit 30 D includes the color adjuster 51 D.
- On the basis of an instruction from the CPU 2 , the input unit 20 A generates a packet in which additional information including, for example, a routing instruction is added to the header Hd of the data Da from the RGB sensor 310 A, and outputs the packet to the processing unit 30 A provided to correspond to the RGB sensor 310 A.
- the data Da from the processing unit 30 A provided to correspond to the RGB sensor 310 A is inputted to the processing unit 30 B provided to correspond to the RGB sensor 310 A.
- On the basis of an instruction from the CPU 2 , the input unit 20 B generates a packet in which additional information including, for example, a routing instruction is added to the header Hd of the data Db from the monochrome sensor 310 B, and outputs the packet to the processing unit 30 A provided to correspond to the monochrome sensor 310 B.
- the data Db from the processing unit 30 A provided to correspond to the monochrome sensor 310 B is inputted to the processing unit 30 B provided to correspond to the monochrome sensor 310 B.
- To the processing unit 30 C, the data Da from the processing unit 30 B provided to correspond to the RGB sensor 310 A, and the data Db from the processing unit 30 B provided to correspond to the monochrome sensor 310 B are inputted in common.
- the processing unit 30 C performs signal processing of the data Da and the data Db time-divisionally, on the basis of the additional information indicated by the header Hd, and outputs the data Da and the data Db after signal processing to the processing unit 30 D in the next stage.
- the processing unit 30 D performs processing of the data Da and the data Db time-divisionally, on the basis of the additional information indicated by the header Hd.
- the processing unit 30 D outputs the data Da after signal processing to the SW processing unit 40 A, and outputs the data Db after signal processing to the SW processing unit 40 B.
- the multiple external devices are a combination of the RGB sensor 310 A and the monochrome sensor 310 B, it is possible to share a portion where common signal processing is possible, out of the multiple stages of signal processing system hardware.
- control by the CPU 2 is unnecessary when processing data time-divisionally, and the processing is not limited by performance of the CPU 2 .
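- The sharing described above can be sketched as follows; the stage labels, header fields, and string payloads are illustrative assumptions, not identifiers from the disclosure. Each packet's header carries its routing instruction, so a stage shared by both streams serves them time-divisionally, in arrival order, with no per-packet CPU control:

```python
from dataclasses import dataclass

@dataclass
class Packet:
    header: dict    # additional information: routing instruction, settings, ...
    payload: str    # stands in for image data

def stage(name, packets):
    """A processing unit: handle packets in arrival order, touching only
    those whose route names this stage."""
    out = []
    for p in packets:
        if name in p.header["route"]:
            p = Packet(p.header, p.payload + "->" + name)
        out.append(p)
    return out

# Da (RGB) and Db (monochrome) each have dedicated front-end stages, then
# share the YC-processing and color-adjustment stages ("C" and "D").
da = Packet({"route": ["A_rgb", "B_rgb", "C", "D"]}, "Da")
db = Packet({"route": ["A_mono", "B_mono", "C", "D"]}, "Db")

stream = [da, db]
for name in ["A_rgb", "B_rgb", "A_mono", "B_mono", "C", "D"]:
    stream = stage(name, stream)

print(stream[0].payload, stream[1].payload)
```

- Because the decision is made from the header alone, the shared stages "C" and "D" in this sketch need no knowledge of which sensor a packet came from.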
- FIG. 11 illustrates a fourth specific example of the configuration of the signal processing device 1 according to the first embodiment.
- a signal processing device 1 D according to the fourth specific example illustrated in FIG. 11 represents a configuration example in a case of processing data outputted from a sensor 410 A and a sensor 410 B as the multiple external devices.
- the multiple external devices and the signal processing device 1 D may configure an imaging apparatus as a whole. Note that illustration of the CPU 2 is omitted in FIG. 11 .
- the above specific examples represent examples in which a combination of an RGB sensor and an RGB sensor or a combination of an RGB sensor and a monochrome sensor is used as the multiple external devices.
- sensors to be used as the multiple external devices are not limited to these combinations.
- the sensor 410 A and the sensor 410 B may be, for example, a combination of any multiple sensors of the same kind or different kinds, out of an RGB sensor, a monochrome sensor, a polarization sensor, a multispectral sensor, a ToF (Time of Flight) sensor, a DVS (Dynamic Vision Sensor) sensor, and the like.
- the signal processing device 1 D includes the multiple input units 20 A and 20 B provided to correspond to the sensor 410 A and the sensor 410 B.
- the data from the sensor 410 A is inputted to the input unit 20 A.
- the data from the sensor 410 B is inputted to the input unit 20 B.
- the signal processing device 1 D further includes the unillustrated CPU 2 , multiple stages of signal processing system hardware, and the multiple SW processing units 40 A and 40 B.
- FIG. 11 illustrates a configuration example in a case where the multiple stages of processing units 30 A, 30 B, 30 C, and 30 D provided to correspond to the sensor 410 A are present, as the multiple stages of signal processing system hardware.
- FIG. 11 illustrates a configuration example in a case where the multiple stages of processing units 30 A and 30 B provided to correspond to the sensor 410 B are present, as the signal processing system hardware.
- the multiple stages of processing units 30 A, 30 B, 30 C, and 30 D include multiple stages of ISPs 31 A, 31 B, 31 C, and 31 D, respectively.
- the multiple stages of ISPs 31 A, 31 B, 31 C, and 31 D perform processing A, processing B, processing C, and processing D, respectively, as signal processing.
- On the basis of an instruction from the CPU 2 , the input unit 20 A generates a packet in which additional information including, for example, a routing instruction is added to the header Hd of the data Da from the sensor 410 A, and outputs the packet to the processing unit 30 A provided to correspond to the sensor 410 A.
- the data Da from the processing unit 30 A provided to correspond to the sensor 410 A is inputted to the processing unit 30 B provided to correspond to the sensor 410 A.
- On the basis of an instruction from the CPU 2 , the input unit 20 B generates a packet in which additional information including, for example, a routing instruction is added to the header Hd of the data Db from the sensor 410 B, and outputs the packet to the processing unit 30 A provided to correspond to the sensor 410 B.
- the data Db from the processing unit 30 A provided to correspond to the sensor 410 B is inputted to the processing unit 30 B provided to correspond to the sensor 410 B.
- To the processing unit 30 C, the data Da from the processing unit 30 B provided to correspond to the sensor 410 A, and the data Db from the processing unit 30 B provided to correspond to the sensor 410 B are inputted in common.
- the processing unit 30 C performs signal processing of the data Da and the data Db time-divisionally, on the basis of the additional information indicated by the header Hd, and outputs the data Da and the data Db after signal processing to the processing unit 30 D in the next stage.
- the processing unit 30 D performs processing of the data Da and the data Db time-divisionally, on the basis of the additional information indicated by the header Hd.
- the processing unit 30 D outputs the data Da after signal processing to the SW processing unit 40 A, and outputs the data Db after signal processing to the SW processing unit 40 B.
- the multiple external devices are a combination of the sensor 410 A and the sensor 410 B, it is possible to share a portion where common signal processing is possible, out of the multiple stages of signal processing system hardware.
- control by the CPU 2 is unnecessary when processing data time-divisionally, and the processing is not limited by performance of the CPU 2 .
- FIG. 12 illustrates a specific example of queue processing to be performed by the queue processor 34 in the signal processing device 1 according to the first embodiment.
- Any processing unit 30 x in the signal processing device 1 may include the queue processor 34 in the input stage of the packet analyzer 32 .
- the queue processor 34 performs queue processing on each of multiple pieces of data, on the basis of information indicating a priority added to the header Hd of the packet.
- the processing unit 30 x performs signal processing in the order of the priority. In this case, as illustrated in FIG. 12 , in a case where there are multiple packets with the same priority, the processing unit 30 x performs signal processing in the order of, for example, a time stamp, and outputs the packets to the queue processor 34 in the next stage in the order in which signal processing is performed.
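- The ordering just described — highest priority first, earlier time stamp first among equal priorities — can be sketched with a heap; the field layout is an assumption for illustration:

```python
import heapq

def push(q, priority, timestamp, payload):
    # heapq is a min-heap, so the priority is negated to pop the largest
    # first; the time stamp then breaks ties in ascending order.
    heapq.heappush(q, (-priority, timestamp, payload))

def pop(q):
    return heapq.heappop(q)[2]

q = []
push(q, 1, 100, "low/early")
push(q, 2, 300, "high/late")
push(q, 2, 200, "high/early")
order = [pop(q) for _ in range(3)]
print(order)
```

- Encoding the rule as a single sort key keeps the pop operation O(log n) regardless of how many streams feed the queue.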
- FIG. 13 illustrates a fifth specific example of the configuration of the signal processing device 1 according to the first embodiment.
- a signal processing device 1 E according to the fifth specific example may have a configuration substantially similar to that of the signal processing device 1 B according to the second specific example illustrated in FIG. 7 , except for a configuration related to the priority.
- the signal processing device 1 E represents a configuration example in a case of processing data outputted from the RGB sensor 210 A and the RGB sensor 210 B having different pixel sizes from each other, as the multiple external devices.
- the multiple external devices and the signal processing device 1 E may configure an imaging apparatus as a whole. Note that illustration of the CPU 2 is omitted in FIG. 13 .
- the RGB sensor 210 A is an image sensor having a higher resolution than the RGB sensor 210 B.
- FIG. 13 illustrates an example in which the pixel size of the RGB sensor 210 A is, for example, 12 Mpix, and the pixel size of the RGB sensor 210 B is, for example, 4 Mpix.
- On the basis of an instruction from the CPU 2 , the input unit 20 A generates a packet in which additional information including, for example, a routing instruction is added to the header Hd of the data Da from the RGB sensor 210 A, and outputs the packet to the processing unit 30 A. In addition, on the basis of an instruction from the CPU 2 , the input unit 20 A includes information indicating a priority as additional information in the header Hd of the data Da.
- On the basis of an instruction from the CPU 2 , the input unit 20 B generates a packet in which additional information including, for example, a routing instruction is added to the header Hd of the data Db from the RGB sensor 210 B, and outputs the packet to the processing unit 30 A. In addition, on the basis of an instruction from the CPU 2 , the input unit 20 B includes information indicating a priority as additional information in the header Hd of the data Db.
- the signal processing device 1 E may, for example, set a higher priority for the data Da from the RGB sensor 210 A which is an image sensor with a higher resolution, and set a lower priority for the data Db from the RGB sensor 210 B which is an image sensor with a lower resolution, to preferentially process the data Da with a higher resolution.
- FIG. 14 illustrates a specific example of queue processing to be performed by the queue processor 34 in the signal processing device 1 according to the first embodiment.
- the queue processor 34 may discard data of which the priority is relatively low, out of the multiple pieces of data.
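- One possible shape of this discard policy (the capacity, entry format, and choice of victim are assumptions): on overflow, the entry with the relatively lowest priority is dropped, whether it is the newcomer or one already queued.

```python
def push_bounded(queue, capacity, priority, payload):
    """Insert (priority, payload); on overflow drop the lowest-priority
    entry and return its payload, else return None."""
    queue.append((priority, payload))
    if len(queue) <= capacity:
        return None
    victim = min(queue, key=lambda e: e[0])
    queue.remove(victim)
    return victim[1]

q = []
push_bounded(q, 2, 2, "Da-frame0")   # higher-priority stream
push_bounded(q, 2, 1, "Db-frame0")   # lower-priority stream
dropped = push_bounded(q, 2, 2, "Da-frame1")
print(dropped)
```
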
- FIG. 15 illustrates a sixth specific example of the configuration of the signal processing device 1 according to the first embodiment.
- a signal processing device 1 F according to the sixth specific example may have a configuration substantially similar to that of the signal processing device 1 E according to the fifth specific example illustrated in FIG. 13 , except for a configuration related to processing in a case where a queue overflow occurs.
- the queue processor 34 may provide the CPU 2 with a notification that the queue overflow has occurred.
- the CPU 2 may adjust the setting value of each of the RGB sensor 210 A and the RGB sensor 210 B serving as the multiple external devices.
- the CPU 2 may adjust, for example, setting values of a resolution and a frame rate, for the RGB sensor 210 A and the RGB sensor 210 B.
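- This feedback path can be sketched as follows; the sensor table, setting names, and the halving policy are all illustrative assumptions, not values from the disclosure:

```python
sensors = {"210A": {"resolution_mpix": 12, "fps": 60},
           "210B": {"resolution_mpix": 4, "fps": 60}}

def on_queue_overflow(sensors):
    """Controller reaction to an overflow notification: lower the output
    rate of the external devices so the queues stop overflowing."""
    for cfg in sensors.values():
        cfg["fps"] //= 2

on_queue_overflow(sensors)
print(sensors["210A"]["fps"], sensors["210B"]["fps"])
```
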
- FIG. 16 is an explanatory diagram illustrating an example of the additional information (header information) to be added to the packet by the signal processing device 1 according to the first embodiment.
- items of the header information to be added by the signal processing device 1 may include, for example, Lens information, an image size, an infrared filter, a format, the number of bits, a gamma characteristic, an NR characteristic, a shutter time, a Gain amount, a time stamp, route information, and information on a priority.
- the route information may include “next route information” indicating information on the processing unit in the next stage.
- FIG. 17 illustrates a seventh specific example of the configuration of the signal processing device 1 according to the first embodiment.
- a signal processing device 1 G according to the seventh specific example may have a configuration substantially similar to that of the signal processing device 1 C according to the third specific example illustrated in FIG. 10 , except for a route of signal processing.
- the multiple stages of processing units 30 A, 30 B, 30 C, and 30 D are present as the multiple stages of signal processing system hardware.
- the processing unit 30 A includes the preprocessing section 51 A.
- the processing unit 30 B includes the demosaic processing section 51 B.
- the processing unit 30 C includes the Y (luminance) C (chroma) processing section 51 C.
- the processing unit 30 D includes the color adjuster 51 D.
- FIG. 18 illustrates an example of the additional information (header information) to be added by the signal processing device 1 G according to the seventh specific example.
- FIG. 18 illustrates an example of the header information (Header 1) added to the packet outputted from the input unit 20 A and the header information (Header 3) added to the packet outputted from the processing unit 30 B.
- the “next route information” indicating information on the processing unit in the next stage is updated, by going through each processing unit.
- the header information does not contain the actual value of a large-size parameter such as a filter parameter.
- the value of each parameter may be held separately in a memory, instead of the header containing the value itself of each parameter.
- the header information may contain information indicating where in the memory the value of the parameter is held.
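- A sketch combining the two points above, with all names assumed: each processing unit rewrites the "next route information" as the packet advances, and a large parameter (e.g. filter coefficients) stays in a separate memory, the header holding only its location.

```python
param_memory = {0x1000: [1, 4, 6, 4, 1]}   # filter taps held outside the header

header = {
    "route": ["30B", "30C", "30D"],        # remaining processing units
    "filter_param_addr": 0x1000,           # where the values live, not the values
}

def advance(header):
    """Consume the current hop and update the next-route information."""
    current = header["route"].pop(0)
    header["next"] = header["route"][0] if header["route"] else "SW"
    return current

hop = advance(header)   # the packet has now passed through one unit
taps = param_memory[header["filter_param_addr"]]
print(hop, header["next"], taps)
```

- Keeping only a reference in the header keeps the per-packet overhead constant no matter how large the parameter set grows.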
- FIG. 19 illustrates an example of an amount of electric power consumed by the processing unit in the signal processing device 100 according to the comparative example.
- FIG. 20 illustrates an example of an amount of electric power consumed by the processing unit in the signal processing device 1 according to the first embodiment.
- each processing unit that performs signal processing has to operate at all times in a period from start to stop of data input (streaming) from the external device, and consumes a large amount of electric power.
- each processing unit that performs signal processing needs to operate only in the period from reception of a packet on which signal processing is to be performed to transmission of a packet to the processing unit in the next stage, and thus consumes a small amount of electric power.
- FIG. 21 is a flowchart illustrating an example of operation related to each of the multiple input units 20 A, 20 B, and 20 C in the signal processing device 1 .
- the CPU 2 sets, for each of the multiple input units 20 A, 20 B, and 20 C, data flow to be a basis for additional information necessary for signal processing (step S 11 ).
- streaming from each of the multiple sensors 10 A, 10 B, and 10 C is started (step S 12 ).
- the multiple input units 20 A, 20 B, and 20 C wait for input from the multiple sensors 10 A, 10 B, and 10 C respectively (step S 13 ).
- the multiple input units 20 A, 20 B, and 20 C each end processing in a case where streaming of each of the multiple sensors 10 A, 10 B, and 10 C ends.
- the multiple input units 20 A, 20 B, and 20 C start processing of the pieces of data (step S 14 ).
- the multiple input units 20 A, 20 B, and 20 C each perform a packet generation process for the processing unit in the subsequent stage (step S 15 ).
- the multiple input units 20 A, 20 B, and 20 C each transmit data packetized and including the additional information added as the header information, to the processing unit in the subsequent stage (step S 16 ).
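- The input-unit loop above (steps S 11 to S 16) can be sketched as follows; the data-flow table, frame representation, and header fields are illustrative assumptions:

```python
def input_unit(sensor_frames, data_flow):
    """S13: wait for input; S14: start processing each piece of data;
    S15: generate a packet whose header carries the additional information
    preset by the CPU (S11); S16: transmit it to the next stage."""
    packets = []
    for frame in sensor_frames:          # runs until streaming ends
        header = dict(data_flow)         # additional information set in S11
        header["timestamp"] = frame["t"]
        packets.append({"header": header, "payload": frame["pixels"]})
    return packets

flow = {"route": ["30A", "30B", "30C"], "priority": 2}
frames = [{"t": 0, "pixels": "f0"}, {"t": 1, "pixels": "f1"}]
out = input_unit(frames, flow)
print(len(out), out[0]["header"]["route"][0], out[1]["header"]["timestamp"])
```
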
- FIG. 22 is a flowchart illustrating an example of operation related to each of the multiple stages of processing units 30 A, 30 B, and 30 C in the signal processing device 1 .
- streaming from each of the multiple sensors 10 A, 10 B, and 10 C is started (step S 21 ).
- the multiple stages of processing units 30 A, 30 B, and 30 C each wait for data reception in the queue processor 34 (step S 22 ).
- the multiple stages of processing units 30 A, 30 B, and 30 C each end processing in a case where streaming of each of the multiple sensors 10 A, 10 B, and 10 C ends.
- the multiple stages of processing units 30 A, 30 B, and 30 C each determine the data to be processed on the basis of the information indicating the priority indicated by the header information (step S 23 ).
- the multiple stages of processing units 30 A, 30 B, and 30 C each determine the setting value for processing, on the basis of the header information (step S 24 ).
- the multiple stages of processing units 30 A, 30 B, and 30 C each start processing of the data (step S 25 ).
- the multiple stages of processing units 30 A, 30 B, and 30 C each perform a packet generation process for the processing unit in the subsequent stage (step S 26 ).
- the multiple stages of processing units 30 A, 30 B, and 30 C each transmit packetized data to the processing unit in the subsequent stage (step S 27 ).
- the multiple stages of processing units 30 A, 30 B, and 30 C each determine whether or not there is data in the queue processor 34 (step S 28 ). In a case where determination is made that there is no data in the queue processor 34 (step S 28 ; N), the process returns to step S 22 . In a case where determination is made that there is data in the queue processor 34 (step S 28 ; Y), the process returns to step S 23 .
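- The processing-unit loop above (steps S 22 to S 28) can be sketched as follows; the queue contents, the "gain" setting, and the string payloads are illustrative assumptions:

```python
def processing_unit(queue, stage_name):
    """S23: pick the highest-priority packet; S24: derive the setting value
    from the header; S25: process; S26/S27: re-packetize and transmit;
    S28: repeat while data remains in the queue."""
    sent = []
    while queue:                                             # S28
        queue.sort(key=lambda p: -p["header"]["priority"])   # S23
        pkt = queue.pop(0)
        gain = pkt["header"].get("gain", 1)                  # S24
        processed = f'{pkt["payload"]}*{gain}@{stage_name}'  # S25
        sent.append({"header": pkt["header"],                # S26
                     "payload": processed})                  # S27
    return sent

q = [{"header": {"priority": 1, "gain": 2}, "payload": "Db"},
     {"header": {"priority": 2, "gain": 4}, "payload": "Da"}]
out = processing_unit(q, "30A")
print([p["payload"] for p in out])
```
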
- In the signal processing device 1 , in each of the multiple stages of signal processing system hardware, common signal processing is performed on each of the multiple pieces of data, on the basis of the additional information added to each of the multiple pieces of data. This makes it possible to perform signal processing on the multiple pieces of data, while suppressing a circuit scale and electric power consumption.
- the multiple stages of signal processing system hardware are shared by the multiple external devices, which makes it possible to reduce a hardware scale.
- data flow of the signal processing system hardware proceeds without intervention of the CPU 2 or the DSP 3 , which allows the CPU 2 to concentrate on processing other than signal processing.
- routing for the multiple stages of signal processing system hardware is performed in units of packets, which makes it possible to suppress electric power consumption during standby for signal processing.
- the present technology may have the following configurations.
- a signal processing device including:
- multiple input units that add additional information necessary for signal processing to each of multiple pieces of data inputted from respective multiple external devices, and output the multiple pieces of data; and multiple stages of processing units each configured to perform common signal processing on each of the multiple pieces of data, on the basis of the additional information.
- instruction information indicating an instruction as to signal processing using which processing unit, out of the multiple stages of processing units, is to be performed on each of the multiple pieces of data
- setting information indicating a setting value to be used for signal processing in each of the multiple stages of processing units.
- the signal processing device in which the additional information includes information indicating a priority for signal processing of each of the multiple pieces of data in each of the multiple stages of processing units.
- the signal processing device according to any one of (1) to (3), further including a controller that instructs each of the multiple input units as to signal processing using which processing unit, out of the multiple stages of processing units, is to be performed.
- the signal processing device according to any one of (1) to (4), in which the multiple input units each include a first packet generator that generates a packet of each of the multiple pieces of data, adds the additional information as a header to the packet, and outputs the packet.
- a packet analyzer that analyzes the header added to the packet, and determines a setting value to be used for signal processing
- a second packet generator that generates a packet in which information to be used for signal processing of the processing unit in the next stage is added as the additional information to a header.
- the signal processing device in which the multiple stages of processing units each further include a queue processor that performs queue processing on each of the multiple pieces of data, on the basis of the information indicating the priority.
- the signal processing device in which, in a case where a queue overflow occurs, the queue processor discards data of which the priority is relatively low, out of the multiple pieces of data.
- the signal processing device further including a controller that is able to adjust setting values of the multiple external devices, in which,
- the queue processor provides the controller with a notification that the queue overflow has occurred
- the controller adjusts the setting values of the multiple external devices, on the basis of the notification from the queue processor.
- a signal processing method including:
- An imaging apparatus including:
- multiple stages of processing units each configured to perform common signal processing on each of the multiple pieces of data, on the basis of the additional information.
Abstract
A signal processing device according to the present disclosure includes: multiple input units that add additional information necessary for signal processing to each of multiple pieces of data inputted from respective multiple external devices, and output the multiple pieces of data; and multiple stages of processing units each configured to perform common signal processing on each of the multiple pieces of data, on the basis of the additional information.
Description
- The present disclosure relates to a signal processing device, a signal processing method, and an imaging apparatus that perform signal processing on each of multiple pieces of data.
- In recent years, camera-mounted equipment such as a smartphone has come to be mounted with multiple kinds of sensors including, for example, an RGB sensor, a monochrome sensor, a range sensor, and a deflection sensor. In such camera-mounted equipment, different signal processing is necessary for each kind of sensor in some cases, while in other cases it is possible to perform signal processing in common for the multiple sensors. In general, hardware that performs signal processing is prepared as a dedicated block for each sensor. Therefore, an increase in the number of sensors results in an increase in hardware scale. On the other hand, there is a method of sharing a signal processing system block among multiple sensors, by using a CPU (Central Processing Unit) or a DFP (Data Flow Processor) dedicated to data flow (see PTL 1).
- PTL 1: Japanese Unexamined Patent Application Publication No. H5-265880
- In a case of sharing a signal processing system block by multiple sensors by using a CPU or a DFP, a limitation is imposed due to performance of the CPU or the DFP. In addition, using the DFP brings about an increase in circuit scale and electric power consumption.
- It is desirable to provide a signal processing device, a signal processing method, and an imaging apparatus that are able to perform signal processing on multiple pieces of data, while suppressing a circuit scale and electric power consumption.
- A signal processing device according to an embodiment of the present disclosure includes: multiple input units that add additional information necessary for signal processing to each of multiple pieces of data inputted from respective multiple external devices, and output the multiple pieces of data; and multiple stages of processing units each configured to perform common signal processing on each of the multiple pieces of data, on the basis of the additional information.
- A signal processing method according to an embodiment of the present disclosure includes: adding additional information necessary for signal processing to each of multiple pieces of data inputted from respective multiple external devices, and outputting the multiple pieces of data; and performing, in each of multiple stages of processing units, common signal processing on each of the multiple pieces of data, on the basis of the additional information.
- An imaging apparatus according to an embodiment of the present disclosure includes: multiple sensors; multiple input units that add additional information necessary for signal processing to each of multiple pieces of data inputted from the respective multiple sensors, and output the multiple pieces of data; and multiple stages of processing units each configured to perform common signal processing on each of the multiple pieces of data, on the basis of the additional information.
- In the signal processing device, the signal processing method, or the imaging apparatus according to the embodiment of the present disclosure, in each of the multiple stages of processing units, common signal processing is performed on each of the multiple pieces of data, on the basis of the additional information added to each of the multiple pieces of data.
- FIG. 1 is a block diagram illustrating a configuration example of a signal processing device according to a comparative example.
- FIG. 2 is a block diagram illustrating a configuration example of a signal processing device according to a first embodiment of the present disclosure.
- FIG. 3 is a block diagram illustrating a configuration example of an input unit in the signal processing device according to the first embodiment.
- FIG. 4 is a block diagram illustrating a first configuration example of a processing unit in the signal processing device according to the first embodiment.
- FIG. 5 is a block diagram illustrating a second configuration example of the processing unit in the signal processing device according to the first embodiment.
- FIG. 6 is a block diagram illustrating a first specific example of a configuration of the signal processing device according to the first embodiment.
- FIG. 7 is a block diagram illustrating a second specific example of the configuration of the signal processing device according to the first embodiment.
- FIG. 8 is an explanatory diagram schematically illustrating operation timing of signal processing by a signal processing device according to a comparative example.
- FIG. 9 is an explanatory diagram schematically illustrating operation timing of signal processing by the signal processing device according to the second specific example.
- FIG. 10 is a block diagram illustrating a third specific example of the configuration of the signal processing device according to the first embodiment.
- FIG. 11 is a block diagram illustrating a fourth specific example of the configuration of the signal processing device according to the first embodiment.
- FIG. 12 is a block diagram illustrating a specific example of queue processing to be performed by a queue processor in the signal processing device according to the first embodiment.
- FIG. 13 is a block diagram illustrating a fifth specific example of the configuration of the signal processing device according to the first embodiment.
- FIG. 14 is a block diagram illustrating a specific example of queue processing to be performed by the queue processor in the signal processing device according to the first embodiment.
- FIG. 15 is a block diagram illustrating a sixth specific example of the configuration of the signal processing device according to the first embodiment.
- FIG. 16 is an explanatory diagram illustrating an example of additional information to be added by the signal processing device according to the first embodiment.
- FIG. 17 is a block diagram illustrating a seventh specific example of the configuration of the signal processing device according to the first embodiment.
- FIG. 18 is an explanatory diagram illustrating an example of the additional information to be added by the signal processing device according to the seventh specific example.
- FIG. 19 is an explanatory diagram illustrating an example of an amount of electric power consumed by the processing unit in the signal processing device according to the comparative example.
- FIG. 20 is an explanatory diagram illustrating an example of an amount of electric power consumed by the processing unit in the signal processing device according to the first embodiment.
- FIG. 21 is a flowchart illustrating an example of operation related to each of multiple input units in the signal processing device according to the first embodiment.
- FIG. 22 is a flowchart illustrating an example of operation related to each of multiple processing units in the signal processing device according to the first embodiment.
- In the following, description is given of embodiments of the present disclosure in detail with reference to the drawings. It is to be noted that the description is given in the following order.
- 0. Comparative Example (FIG. 1)
- 1. First Embodiment (FIGS. 2 to 22)
  - 1.1 Overview
  - 1.2 Specific Examples
  - 1.3 Operation
  - 1.4 Effects
- 2. Other Embodiments
FIG. 1 illustrates a configuration example of asignal processing device 100 according to a comparative example. -
FIG. 1 illustrates a configuration example in a case of processing data. outputted from 10A, 10B, and 10C serving as multiple external devices. Themultiple sensors 10A, 10B, and 10C each output, for example, image data. The multiple external devices and themultiple sensors signal processing device 100 may configure an imaging apparatus as a whole. - The
signal processing device 100 according to the comparative example includes multiple 21A, 21B, and 21C provided to correspond respectively to - theimage input sections 10A, 10B, and 10C. To the multiplemultiple sensors 21A, 21B, and 21C, pieces of data from theimage input sections 10A, 10B, and 10C are inputted respectively.multiple sensors - The
signal processing device 100 further includes aCPU 2, aDFP 3, multiple stages of signal processing system hardware, and multiple SW processing units (software) 40A, 40B, and 40C. - The multiple
40A, 40B, and 40C are provided to correspond respectively to theSW processing units 10A, 10B, and 10C. To each of the multiplemultiple sensors 40A, 40B, and 40C, data after signal processing by the multiple stages of signal processing system hardware is inputted.SW processing units -
FIG. 1 illustrates a configuration example in a case where multiple stages of ISPs (Image Signal Processors) 31A, 31B, and 31C that perform image processing are present, as the multiple stages of signal processing system hardware. The multiple stages of 31A, 31B, and 31C are each shared by theISPs 10A, 10B, and 10C, and able to perform common signal processing (processing A, processing B, and processing C) on each of multiple pieces of data.multiple sensors - The
DFP 3 is controlled by theCPU 2. TheDFP 3 controls a setting value and operation timing (data flow) of each of the multiple 21A, 21B, and 21C and the multiple stages ofimage input sections 31A, 31B, and 31C. The multiple stages ofISPs 31A, 31B, and 31C perform signal processing time-divisionally, under the control of theISPs DFP 3, on the respective pieces of data from the 10A, 10B, and inputted via themultiple sensors 10A, 10B, and 10C.multiple sensors - In the
signal processing device 100 according to the comparative example, if the multiple stages of signal processing system hardware are subjected to data flow control by theCPU 2, performance of theCPU 2 imposes a limitation. Hence, performing data flow control by using theDFP 3 dedicated to data flow allows signal processing to be performed without being limited by the performance of theCPU 2. Here, even though theDFP 3 is dedicated to data flow and thus has high throughput, signal processing executable with performance of theDFP 3 is limited. - Hence, it is desired to develop a technique that makes it possible to perform signal processing not limited by the performance of the
CPU 2 or theDFP 3. In addition, it is desired to develop a technique that makes it possible to perform signal processing on multiple pieces of data. While suppressing a circuit scale and electric power consumption. -
FIG. 2 schematically illustrates a configuration example of asignal processing device 1 according to a first embodiment of the present disclosure. Note that description is omitted as appropriate regarding portions, inFIG. 2 , having configurations and operations similar to those of thesignal processing device 100 according to the comparative example inFIG. 1 . -
FIG. 2 illustrates a configuration example in a case of processing data outputted from the multiple sensors 10A, 10B, and 10C serving as the multiple external devices, as with the signal processing device 100 according to the comparative example. The multiple sensors 10A, 10B, and 10C each output, for example, image data. The multiple external devices and the signal processing device 1 may configure an imaging apparatus as a whole. - The
signal processing device 1 according to the first embodiment includes multiple input units 20A, 20B, and 20C provided to correspond respectively to the multiple sensors 10A, 10B, and 10C. To the multiple input units 20A, 20B, and 20C, pieces of data from the multiple sensors 10A, 10B, and 10C are inputted respectively. - The
signal processing device 1 further includes the CPU 2, multiple stages of signal processing system hardware, and the multiple SW processing units 40A, 40B, and 40C. - Note that the multiple external devices may be two or four or more external devices. In addition, in accordance with the number of the multiple external devices, the
multiple input units 20A, 20B, and 20C may also be two or four or more input units. Similarly, in accordance with the number of the multiple external devices, the multiple SW processing units 40A, 40B, and 40C may also be two or four or more SW processing units. -
FIG. 2 illustrates a configuration example in a case where multiple stages of processing units 30A, 30B, and 30C are present, as the multiple stages of signal processing system hardware. The multiple stages of processing units 30A, 30B, and 30C are each shared by the multiple sensors 10A, 10B, and 10C, and able to perform common signal processing (processing A, processing B, and processing C) on each of multiple pieces of data. Note that the multiple stages of signal processing system hardware may be two or four or more stages of signal processing system hardware. - The
multiple input units 20A, 20B, and 20C include the multiple image input sections 21A, 21B, and 21C respectively. The multiple stages of processing units 30A, 30B, and 30C include the multiple stages of ISPs 31A, 31B, and 31C respectively. -
FIG. 3 illustrates a configuration example of an input unit 20 x in the signal processing device 1. Here, the input unit 20 x represents any one of the multiple input units 20A, 20B, and 20C. An image input section 21 x represents any one of the multiple image input sections 21A, 21B, and 21C. - The
multiple input units 20A, 20B, and 20C generate and output data obtained by adding additional information necessary for signal processing to each of multiple pieces of data inputted from the respective multiple sensors 10A, 10B, and 10C. - The
multiple input units 20A, 20B, and 20C each include a packet generator 22 serving as a first packet generator, in an output stage of a corresponding one of the multiple image input sections 21A, 21B, and 21C. The packet generator 22 generates a packet of each of the multiple pieces of data inputted from the respective multiple sensors 10A, 10B, and 10C, adds the additional information as a header to the packet, and outputs the packet. - Here, the additional information may include instruction information indicating a routing instruction as to signal processing using which processing unit, out of the multiple stages of
processing units 30A, 30B, and 30C, is to be performed on each of the multiple pieces of data. In addition, the additional information may include setting information indicating a setting value to be used for signal processing in each of the multiple stages of processing units 30A, 30B, and 30C. - The
CPU 2 serves as a controller that instructs each of the multiple input units 20A, 20B, and 20C as to signal processing using which processing unit, out of the multiple stages of processing units 30A, 30B, and 30C, is to be performed. -
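As a rough Python sketch of the behavior described above — the dict-based header and the field names `route` and `settings` are illustrative assumptions, since the patent does not fix a concrete packet encoding:

```python
from dataclasses import dataclass

@dataclass
class Packet:
    """A piece of data with the additional information attached as a header."""
    header: dict
    payload: bytes

def generate_packet(payload, route, settings):
    """Sketch of the packet generator 22: wrap one piece of sensor data in a
    packet whose header carries the routing instruction (which processing
    units to go through, in order) and the setting values for each unit."""
    return Packet(header={"route": list(route), "settings": dict(settings)},
                  payload=payload)

# The CPU 2 (controller) would decide the route and settings per input unit.
pkt = generate_packet(b"\x01\x02",
                      route=["30A", "30B", "30C"],
                      settings={"30A": {"gain": 2.0}, "30B": {"nr": "strong"}})
```

Because the header travels with the data, each downstream unit can act on the packet without a per-packet instruction from the controller.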
FIG. 4 illustrates a first configuration example of a processing unit 30 x in the signal processing device 1. Here, the processing unit 30 x represents any one of the multiple stages of processing units 30A, 30B, and 30C. An ISP 31 x represents any one of the multiple stages of ISPs 31A, 31B, and 31C. - The multiple stages of
processing units 30A, 30B, and 30C are each configured to perform common signal processing on each of the multiple pieces of data, on the basis of the additional information. - The multiple stages of
processing units 30A, 30B, and 30C each include a packet analyzer 32, and a packet generator 33 serving as a second packet generator. - The
packet analyzer 32 is provided in an input stage of each of the multiple stages of ISPs 31A, 31B, and 31C. The packet generator 33 is provided in an output stage of each of the multiple stages of ISPs 31A, 31B, and 31C. - The
packet analyzer 32 analyzes the header added to the packet, and determines the setting value to be used for signal processing in the ISP 31 x. - The
packet generator 33 generates a packet in which information to be used for signal processing of the processing unit in the next stage, out of the multiple stages of ISPs 31A, 31B, and 31C, is added as additional information to a header. -
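A minimal sketch of how a processing unit 30 x could consume and regenerate such a packet, assuming an illustrative header layout in which a route list's head names the current unit; the consumed hop is dropped so the regenerated header always carries the information for the next stage:

```python
def process_at_unit(packet, isp):
    """Sketch of one processing unit 30x: the packet analyzer 32 reads the
    header to pick this unit's setting value, the ISP 31x performs the signal
    processing, and the packet generator 33 emits a new packet whose header
    now carries the information for the processing unit in the next stage."""
    route = packet["header"]["route"]
    unit, rest = route[0], route[1:]
    settings = packet["header"]["settings"].get(unit, {})
    payload = isp(packet["payload"], settings)   # signal processing by ISP 31x
    header = dict(packet["header"], route=rest)  # next stage now heads the route
    return {"header": header, "payload": payload}

# A toy "ISP" standing in for processing A: apply a gain to each sample.
result = process_at_unit(
    {"header": {"route": ["30A", "30B"], "settings": {"30A": {"gain": 2}}},
     "payload": [1, 2, 3]},
    isp=lambda data, s: [v * s.get("gain", 1) for v in data])
```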
FIG. 5 illustrates a second configuration example of the processing unit 30 x in the signal processing device 1. - The additional information to be added by each of the
multiple input units 20A, 20B, and 20C may include information indicating a priority for signal processing of each of the multiple pieces of data, in each of the multiple stages of processing units 30A, 30B, and 30C. - The multiple stages of
processing units 30A, 30B, and 30C may each include a queue processor 34 in an input stage of the packet analyzer 32. - The
queue processor 34 performs queue processing on each of the multiple pieces of data, on the basis of the information indicating the priority, as will be described later. In a case where a queue overflow occurs, the queue processor 34 may discard data of which the priority is relatively low, out of the multiple pieces of data, as will be described later. - The
CPU 2 may be a controller that is able to adjust the setting value of each of the multiple sensors 10A, 10B, and 10C. In a case where a queue overflow occurs, the queue processor 34 may provide the CPU 2 with a notification that the queue overflow has occurred, as will be described later. The CPU 2 may adjust the setting value of each of the multiple sensors 10A, 10B, and 10C, on the basis of the notification from the queue processor 34. - In the
signal processing device 1 configured as described above, for example, before startup of the device, the CPU 2 gives, to each of the multiple input units 20A, 20B, and 20C, the additional information including, for example, a routing instruction as to signal processing using which processing unit, out of the multiple stages of processing units 30A, 30B, and 30C, is to be performed. This allows each processing unit to route and process data automatically, without being controlled by the CPU 2. In the signal processing device 1, making it possible to perform data flow processing in the signal processing system hardware makes it unnecessary for data flow to be controlled by the CPU 2 or the DFP 3 (FIG. 1). This allows for processing regardless of a limitation imposed by the CPU 2 or the DFP 3, even in a case where the number of external devices is increased, for example, to 10 and further to 20. - Next, more specific configuration examples of the
signal processing device 1 according to the first embodiment are described. Note that description is omitted as appropriate regarding portions having configurations and operations similar to those in FIG. 2. -
FIG. 6 illustrates a first specific example of a configuration of the signal processing device 1 according to the first embodiment. - A
signal processing device 1A according to the first specific example illustrated in FIG. 6 represents a configuration example in a case of processing data outputted from an RGB sensor 110A and a monochrome sensor 110B as the multiple external devices. The multiple external devices and the signal processing device 1A may configure an imaging apparatus as a whole. - The
signal processing device 1A includes the multiple input units 20A and 20B provided to correspond to the RGB sensor 110A and the monochrome sensor 110B. The data from the RGB sensor 110A is inputted to the input unit 20A. The data from the monochrome sensor 110B is inputted to the input unit 20B. - The
signal processing device 1A further includes the CPU 2, multiple stages of signal processing system hardware, and the multiple SW processing units 40A and 40B. -
FIG. 6 illustrates a configuration example in a case where the multiple stages of processing units 30A, 30B, and 30C provided to correspond to the RGB sensor 110A are present, as the multiple stages of signal processing system hardware. In addition, FIG. 6 illustrates a configuration example in a case where one processing unit 30A provided to correspond to the monochrome sensor 110B is present, as the signal processing system hardware. - On the basis of an instruction from the
CPU 2, the input unit 20A generates a packet in which additional information including, for example, a routing instruction is added to a header Hd of data Da from the RGB sensor 110A, and outputs the packet to the processing unit 30A provided to correspond to the RGB sensor 110A. - On the basis of an instruction from the
CPU 2, the input unit 20B generates a packet in which additional information including, for example, a routing instruction is added to a header Hd of data Db from the monochrome sensor 110B, and outputs the packet to the processing unit 30A provided to correspond to the monochrome sensor 110B. - To the
processing unit 30B, the data Da from the processing unit 30A provided to correspond to the RGB sensor 110A, and the data Db from the processing unit 30A provided to correspond to the monochrome sensor 110B are inputted in common. - The
processing unit 30B performs signal processing of the data Da and the data Db time-divisionally, on the basis of the additional information indicated by the header Hd. The processing unit 30B outputs the data Db after signal processing to the SW processing unit 40B. In contrast, the processing unit 30B outputs the data Da after signal processing to the processing unit 30C in the next stage. The processing unit 30C outputs the data Da after signal processing to the SW processing unit 40A. - Note that the multiple stages of
processing units 30A, 30B, and 30C may each repeat signal processing multiple times, depending on the contents of the signal processing. For example, noise reduction processing or the like may be executed multiple times as signal processing. FIG. 6 illustrates an example in which the processing unit 30C repeats signal processing multiple times. -
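Under the illustrative assumption that the routing instruction is a list of processing-unit names, repeating a stage such as noise reduction amounts to naming that unit more than once in the route:

```python
def run_route(route, stages, data):
    """Sketch: if the routing instruction names the same processing unit more
    than once, that unit's signal processing is simply executed each time it
    is named, which is one way a stage could be repeated multiple times."""
    for name in route:
        data = stages[name](data)
    return data

# A toy noise-reduction-like stage, named twice in the route.
smoothed = run_route(["30C", "30C"], {"30C": lambda x: x // 2}, 8)
```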
FIG. 7 illustrates a second specific example of the configuration of the signal processing device 1 according to the first embodiment. - A signal processing device 1B according to the second specific example illustrated in
FIG. 7 represents a configuration example in a case of processing data outputted from an RGB sensor 210A and an RGB sensor 210B having different pixel sizes from each other, as the multiple external devices. The multiple external devices and the signal processing device 1B may configure an imaging apparatus as a whole. Note that illustration of the CPU 2 is omitted in FIG. 7. - The
RGB sensor 210A is an image sensor having a higher resolution than the RGB sensor 210B. FIG. 7 illustrates an example in which the pixel size of the RGB sensor 210A is, for example, 12 Mpix, and the pixel size of the RGB sensor 210B is, for example, 4 Mpix. - The signal processing device 1B includes the
multiple input units 20A and 20B provided to correspond to the RGB sensor 210A and the RGB sensor 210B. The data from the RGB sensor 210A is inputted to the input unit 20A. The data from the RGB sensor 210B is inputted to the input unit 20B. - The signal processing device 1B further includes the
unillustrated CPU 2, multiple stages of signal processing system hardware, and the multiple SW processing units 40A and 40B. -
FIG. 7 illustrates a configuration example in a case where multiple stages of processing units 30A, 30B, 30C, and 30D provided in common to each of the RGB sensor 210A and the RGB sensor 210B are present, as the multiple stages of signal processing system hardware. - The
processing unit 30A includes a preprocessing section 51A. The processing unit 30B includes a demosaic processing section 51B. The processing unit 30C includes a Y (luminance) C (chroma) processing section 51C. The processing unit 30D includes a color adjuster 51D. - On the basis of an instruction from the
CPU 2, the input unit 20A generates a packet in which additional information including, for example, a routing instruction is added to the header Hd of the data Da from the RGB sensor 210A, and outputs the packet to the processing unit 30A. - On the basis of an instruction from the
CPU 2, the input unit 20B generates a packet in which additional information including, for example, a routing instruction is added to the header Hd of the data Db from the RGB sensor 210B, and outputs the packet to the processing unit 30A. - The multiple stages of
processing units 30A, 30B, 30C, and 30D each perform processing of the data Da and the data Db time-divisionally, on the basis of the additional information indicated by the header Hd. The processing unit 30D outputs the data Da after signal processing to the SW processing unit 40A, and outputs the data Db after signal processing to the SW processing unit 40B. -
FIG. 8 schematically illustrates operation timing of signal processing by a signal processing device according to a comparative example. FIG. 8 illustrates an example case where the multiple stages of processing units 30A, 30B, 30C, and 30D are controlled by the DFP 3, as the signal processing device according to the comparative example. FIG. 9 schematically illustrates operation timing of signal processing by the signal processing device 1B according to the second specific example. -
FIGS. 8 and 9 illustrate, in an upper stage, a timing example of data output from each of the RGB sensors 210A and 210B. In each of the signal processing device according to the comparative example and the signal processing device 1B according to the second specific example, data is continuously outputted from each of the RGB sensors 210A and 210B. -
FIG. 8 illustrates, in a lower stage, a timing example of signal processing by the DFP 3 and the preprocessing section 51A in the processing unit 30A. FIG. 9 illustrates, in a lower stage, a timing example of signal processing by the CPU 2 and the preprocessing section 51A in the processing unit 30A. - In the signal processing device according to the comparative example, control (kick) by the
DFP 3 is necessary each time the preprocessing section 51A processes the data from each of the RGB sensors 210A and 210B time-divisionally. Therefore, processing is limited by the performance of the DFP 3. In contrast, in the signal processing device 1B according to the second specific example, the preprocessing section 51A operates autonomously on the basis of the additional information indicated by the header Hd and performs signal processing. Therefore, when processing the data from each of the RGB sensors 210A and 210B time-divisionally, control by the CPU 2 is unnecessary, and the processing is not limited by the performance of the CPU 2. -
FIG. 10 illustrates a third specific example of the configuration of the signal processing device 1 according to the first embodiment. - A signal processing device 1C according to the third specific example illustrated in
FIG. 10 represents a configuration example in a case of processing data outputted from an RGB sensor 310A and a monochrome sensor 310B as the multiple external devices. The multiple external devices and the signal processing device 1C may configure an imaging apparatus as a whole. Note that illustration of the CPU 2 is omitted in FIG. 10. -
FIG. 10 illustrates an example in which the RGB sensor 310A and the monochrome sensor 310B have the same pixel size, for example, 12 Mpix. - The signal processing device 1C includes the
multiple input units 20A and 20B provided to correspond to the RGB sensor 310A and the monochrome sensor 310B. The data from the RGB sensor 310A is inputted to the input unit 20A. The data from the monochrome sensor 310B is inputted to the input unit 20B. - The signal processing device 1C further includes the
unillustrated CPU 2, multiple stages of signal processing system hardware, and the multiple SW processing units 40A and 40B. -
FIG. 10 illustrates a configuration example in a case where the multiple stages of processing units 30A, 30B, 30C, and 30D provided to correspond to the RGB sensor 310A are present, as the multiple stages of signal processing system hardware. In addition, FIG. 10 illustrates a configuration example in a case where the multiple stages of processing units 30A and 30B provided to correspond to the monochrome sensor 310B are present, as the signal processing system hardware. - The
processing unit 30A includes the preprocessing section 51A. The processing unit 30B includes the demosaic processing section 51B. The processing unit 30C includes the Y (luminance) C (chroma) processing section 51C. The processing unit 30D includes the color adjuster 51D. - On the basis of an instruction from the
CPU 2, the input unit 20A generates a packet in which additional information including, for example, a routing instruction is added to the header Hd of the data Da from the RGB sensor 310A, and outputs the packet to the processing unit 30A provided to correspond to the RGB sensor 310A. The data Da from the processing unit 30A provided to correspond to the RGB sensor 310A is inputted to the processing unit 30B provided to correspond to the RGB sensor 310A. - On the basis of an instruction from the
CPU 2, the input unit 20B generates a packet in which additional information including, for example, a routing instruction is added to the header Hd of the data Db from the monochrome sensor 310B, and outputs the packet to the processing unit 30A provided to correspond to the monochrome sensor 310B. The data Db from the processing unit 30A provided to correspond to the monochrome sensor 310B is inputted to the processing unit 30B provided to correspond to the monochrome sensor 310B. - To the
processing unit 30C, the data Da from the processing unit 30B provided to correspond to the RGB sensor 310A, and the data Db from the processing unit 30B provided to correspond to the monochrome sensor 310B are inputted in common. - The
processing unit 30C performs signal processing of the data Da and the data Db time-divisionally, on the basis of the additional information indicated by the header Hd, and outputs the data Da and the data Db after signal processing to the processing unit 30D in the next stage. - The
processing unit 30D performs processing of the data Da and the data Db time-divisionally, on the basis of the additional information indicated by the header Hd. The processing unit 30D outputs the data Da after signal processing to the SW processing unit 40A, and outputs the data Db after signal processing to the SW processing unit 40B. - Thus, even if the multiple external devices are a combination of the
RGB sensor 310A and the monochrome sensor 310B, it is possible to share a portion where common signal processing is possible, out of the multiple stages of signal processing system hardware. In this case, in shared hardware, control by the CPU 2 is unnecessary when processing data time-divisionally, and the processing is not limited by the performance of the CPU 2. -
FIG. 11 illustrates a fourth specific example of the configuration of the signal processing device 1 according to the first embodiment. - A
signal processing device 1D according to the fourth specific example illustrated in FIG. 11 represents a configuration example in a case of processing data outputted from a sensor 410A and a sensor 410B as the multiple external devices. The multiple external devices and the signal processing device 1D may configure an imaging apparatus as a whole. Note that illustration of the CPU 2 is omitted in FIG. 11. - The above specific examples represent examples in which a combination of an RGB sensor and an RGB sensor or a combination of an RGB sensor and a monochrome sensor is used as the multiple external devices. However, sensors to be used as the multiple external devices are not limited to these combinations. The
sensor 410A and the sensor 410B may be, for example, a combination of any multiple sensors of the same kind or different kinds, out of an RGB sensor, a monochrome sensor, a polarization sensor, a multispectral sensor, a ToF (Time of Flight) sensor, a DVS (Dynamic Vision Sensor) sensor, and the like. - The
signal processing device 1D includes the multiple input units 20A and 20B provided to correspond to the sensor 410A and the sensor 410B. The data from the sensor 410A is inputted to the input unit 20A. The data from the sensor 410B is inputted to the input unit 20B. - The
signal processing device 1D further includes the unillustrated CPU 2, multiple stages of signal processing system hardware, and the multiple SW processing units 40A and 40B. -
FIG. 11 illustrates a configuration example in a case where the multiple stages of processing units 30A, 30B, 30C, and 30D provided to correspond to the sensor 410A are present, as the multiple stages of signal processing system hardware. In addition, FIG. 11 illustrates a configuration example in a case where the multiple stages of processing units 30A and 30B provided to correspond to the sensor 410B are present, as the signal processing system hardware. - The multiple stages of
processing units 30A, 30B, 30C, and 30D include multiple stages of ISPs 31A, 31B, 31C, and 31D respectively. The multiple stages of ISPs 31A, 31B, 31C, and 31D perform processing A, processing B, processing C, and processing D respectively as signal processing. - On the basis of an instruction from the
CPU 2, the input unit 20A generates a packet in which additional information including, for example, a routing instruction is added to the header Hd of the data Da from the sensor 410A, and outputs the packet to the processing unit 30A provided to correspond to the sensor 410A. The data Da from the processing unit 30A provided to correspond to the sensor 410A is inputted to the processing unit 30B provided to correspond to the sensor 410A. - On the basis of an instruction from the
CPU 2, the input unit 20B generates a packet in which additional information including, for example, a routing instruction is added to the header Hd of the data Db from the sensor 410B, and outputs the packet to the processing unit 30A provided to correspond to the sensor 410B. The data Db from the processing unit 30A provided to correspond to the sensor 410B is inputted to the processing unit 30B provided to correspond to the sensor 410B. - To the
processing unit 30C, the data Da from the processing unit 30B provided to correspond to the sensor 410A, and the data Db from the processing unit 30B provided to correspond to the sensor 410B are inputted in common. The processing unit 30C performs signal processing of the data Da and the data Db time-divisionally, on the basis of the additional information indicated by the header Hd, and outputs the data Da and the data Db after signal processing to the processing unit 30D in the next stage. - The
processing unit 30D performs processing of the data Da and the data Db time-divisionally, on the basis of the additional information indicated by the header Hd. The processing unit 30D outputs the data Da after signal processing to the SW processing unit 40A, and outputs the data Db after signal processing to the SW processing unit 40B. - Thus, even if the multiple external devices are a combination of the
sensor 410A and the sensor 410B, it is possible to share a portion where common signal processing is possible, out of the multiple stages of signal processing system hardware. In this case, in shared hardware, control by the CPU 2 is unnecessary when processing data time-divisionally, and the processing is not limited by the performance of the CPU 2. -
FIG. 12 illustrates a specific example of queue processing to be performed by the queue processor 34 in the signal processing device 1 according to the first embodiment. - Any
processing unit 30 x in the signal processing device 1 may include the queue processor 34 in the input stage of the packet analyzer 32. The queue processor 34 performs queue processing on each of multiple pieces of data, on the basis of information indicating a priority added to the header Hd of the packet. - The
processing unit 30 x performs signal processing in the order of the priority. In this case, as illustrated in FIG. 12, in a case where there are multiple packets with the same priority, the processing unit 30 x performs signal processing in the order of, for example, a time stamp, and outputs the packets to the queue processor 34 in the next stage in the order in which signal processing is performed. -
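This ordering rule can be sketched with Python's `heapq`, under the assumptions of integer priorities (higher served first) and a numeric time stamp (older first among equal priorities); the class and field names are illustrative:

```python
import heapq

class PacketQueue:
    """Sketch of the queue processor 34: packets are served highest priority
    first; among packets of equal priority, the older time stamp goes first."""
    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker so heapq never compares payloads

    def push(self, priority, timestamp, payload):
        # heapq is a min-heap, so the priority is negated to pop highest first
        heapq.heappush(self._heap, (-priority, timestamp, self._seq, payload))
        self._seq += 1

    def pop(self):
        _, _, _, payload = heapq.heappop(self._heap)
        return payload

q = PacketQueue()
q.push(priority=1, timestamp=2.0, payload="Db")    # lower priority
q.push(priority=2, timestamp=1.0, payload="Da-1")  # higher priority, older
q.push(priority=2, timestamp=1.5, payload="Da-2")  # higher priority, newer
# pop order: "Da-1", "Da-2", "Db"
```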
FIG. 13 illustrates a fifth specific example of the configuration of the signal processing device 1 according to the first embodiment. - A
signal processing device 1E according to the fifth specific example may have a configuration substantially similar to that of the signal processing device 1B according to the second specific example illustrated in FIG. 7, except for a configuration related to the priority. - The
signal processing device 1E represents a configuration example in a case of processing data outputted from the RGB sensor 210A and the RGB sensor 210B having different pixel sizes from each other, as the multiple external devices. The multiple external devices and the signal processing device 1E may configure an imaging apparatus as a whole. Note that illustration of the CPU 2 is omitted in FIG. 13. - The
RGB sensor 210A is an image sensor having a higher resolution than the RGB sensor 210B. FIG. 13 illustrates an example in which the pixel size of the RGB sensor 210A is, for example, 12 Mpix, and the pixel size of the RGB sensor 210B is, for example, 4 Mpix. - On the basis of an instruction from the
CPU 2, the input unit 20A generates a packet in which additional information including, for example, a routing instruction is added to the header Hd of the data Da from the RGB sensor 210A, and outputs the packet to the processing unit 30A. In addition, on the basis of an instruction from the CPU 2, the input unit 20A includes information indicating a priority as additional information in the header Hd of the data Da. - On the basis of an instruction from the
CPU 2, the input unit 20B generates a packet in which additional information including, for example, a routing instruction is added to the header Hd of the data Db from the RGB sensor 210B, and outputs the packet to the processing unit 30A. In addition, on the basis of an instruction from the CPU 2, the input unit 20B includes information indicating a priority as additional information in the header Hd of the data Db. - The
signal processing device 1E may, for example, set a higher priority for the data Da from the RGB sensor 210A, which is an image sensor with a higher resolution, and set a lower priority for the data Db from the RGB sensor 210B, which is an image sensor with a lower resolution, to preferentially process the data Da with a higher resolution. -
FIG. 14 illustrates a specific example of queue processing to be performed by the queue processor 34 in the signal processing device 1 according to the first embodiment. - In a case where a queue overflow occurs, the
queue processor 34 may discard data of which the priority is relatively low, out of the multiple pieces of data. -
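One possible sketch of this discard policy in Python, assuming a bounded list-based queue and an integer priority field (both are assumptions, not the patent's concrete implementation):

```python
def enqueue_with_overflow(queue, packet, capacity):
    """Sketch of the overflow policy of the queue processor 34: when the queue
    exceeds its capacity, keep the `capacity` packets with the highest
    priority and discard the relatively low-priority ones. Returns the
    discarded packets so a notification could be raised to the CPU 2."""
    queue.append(packet)
    if len(queue) <= capacity:
        return []
    queue.sort(key=lambda p: p["priority"], reverse=True)  # stable sort
    dropped = queue[capacity:]
    del queue[capacity:]
    return dropped

queue = [{"id": "Da-1", "priority": 2}]
assert enqueue_with_overflow(queue, {"id": "Db", "priority": 1}, capacity=2) == []
# The third packet overflows the queue; the low-priority "Db" is discarded.
dropped = enqueue_with_overflow(queue, {"id": "Da-2", "priority": 2}, capacity=2)
```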
FIG. 15 illustrates a sixth specific example of the configuration of the signal processing device 1 according to the first embodiment. - A
signal processing device 1F according to the sixth specific example may have a configuration substantially similar to that of the signal processing device 1E according to the fifth specific example illustrated in FIG. 13, except for a configuration related to processing in a case where a queue overflow occurs. - In a case where a queue overflow occurs, the
queue processor 34 may provide the CPU 2 with a notification that the queue overflow has occurred. On the basis of the notification from the queue processor 34, the CPU 2 may adjust the setting value of each of the RGB sensor 210A and the RGB sensor 210B serving as the multiple external devices. The CPU 2 may adjust, for example, setting values of a resolution and a frame rate, for the RGB sensor 210A and the RGB sensor 210B. -
FIG. 16 is an explanatory diagram illustrating an example of the additional information (header information) to be added to the packet by the signal processing device 1 according to the first embodiment. - As illustrated in
FIG. 16, items of the header information to be added by the signal processing device 1 may include, for example, Lens information, an image size, an infrared filter, a format, the number of bits, a gamma characteristic, an NR characteristic, a shutter time, a Gain amount, a time stamp, route information, and information on a priority. The route information may include “next route information” indicating information on the processing unit in the next stage. -
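Under the assumption of a simple key-value encoding (the patent lists the items of FIG. 16 but not their binary layout, so all field names and values below are illustrative), the header information might look like:

```python
# Illustrative layout of the FIG. 16 header items; names/values are assumptions.
header = {
    "lens_info": "wide",         # Lens information
    "image_size": (4000, 3000),  # image size
    "ir_filter": True,           # infrared filter
    "format": "RAW",             # format
    "bits": 12,                  # the number of bits
    "gamma": 2.2,                # gamma characteristic
    "nr": "strong",              # NR characteristic
    "shutter_time_s": 1 / 60,    # shutter time
    "gain_db": 6.0,              # Gain amount
    "timestamp": 0.0,            # time stamp
    "route": {"next": "30B", "remaining": ["30C", "30D"]},  # route information
    "priority": 2,               # priority
}
```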
FIG. 17 illustrates a seventh specific example of the configuration of the signal processing device 1 according to the first embodiment. - A signal processing device 1G according to the seventh specific example may have a configuration substantially similar to that of the signal processing device 1C according to the third specific example illustrated in
FIG. 10, except for a route of signal processing. - In the signal processing device 1G, the multiple stages of processing
units 30B, 30C, and 30D are present as the multiple stages of signal processing system hardware. - The
processing unit 30A includes the preprocessing section 51A. The processing unit 30B includes the demosaic processing section 51B. The processing unit 30C includes the Y (luminance) C (chroma) processing section 51C. The processing unit 30D includes the color adjuster 51D. -
FIG. 18 illustrates an example of the additional information (header information) to be added by the signal processing device 1G according to the seventh specific example. FIG. 18 illustrates an example of the header information (Header 1) added to the packet outputted from the input unit 20A and the header information (Header 3) added to the packet outputted from the processing unit 30B. - For example, the “next route information” indicating information on the processing unit in the next stage is updated by going through each processing unit.
- Note that it is desirable that the header information not contain the actual value of a large-size parameter such as a filter parameter. Instead of containing the value itself of each parameter, the header information may have the value of each parameter held separately in a memory. In that case, the header information may contain information indicating where the value of the parameter is located in the memory.
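This indirection can be sketched as follows; the dict standing in for the shared memory, the key standing in for an address/offset, and the field name are all assumptions:

```python
# A large parameter (for example a filter parameter) is held once in a shared
# memory table; the header carries only the location, so the header stays small.
parameter_memory = {0x100: [1, 2, 1]}   # filter parameter stored once in memory
header = {"filter_param_addr": 0x100}   # header records only where it is

def resolve_parameter(header, memory):
    """Look up the parameter value at the location recorded in the header."""
    return memory[header["filter_param_addr"]]
```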
-
FIG. 19 illustrates an example of an amount of electric power consumed by the processing unit in the signal processing device 100 according to the comparative example. FIG. 20 illustrates an example of an amount of electric power consumed by the processing unit in the signal processing device 1 according to the first embodiment. - In the
signal processing device 100 according to the comparative example (FIG. 1), as illustrated in FIG. 19, each processing unit that performs signal processing has to operate at all times in a period from start to stop of data input (streaming) from the external device, and consumes a large amount of electric power. - In contrast, in the
signal processing device 1 according to the first embodiment, as illustrated in FIG. 20, each processing unit that performs signal processing needs to operate only during the period from reception of a packet on which signal processing is to be performed to transmission of a packet to the processing unit in the next stage, and thus consumes only a small amount of electric power. - Referring to
FIGS. 21 and 22, operation of the signal processing device 1 illustrated in FIG. 2 is described below. -
FIG. 21 is a flowchart illustrating an example of operation related to each of the multiple input units 20A, 20B, and 20C in the signal processing device 1. - First, before data input (streaming) from each of the
multiple sensors 10A, 10B, and 10C is started, the CPU 2 sets, for each of the multiple input units 20A, 20B, and 20C, the data flow that serves as a basis for the additional information necessary for signal processing (step S11). Next, streaming from each of the multiple sensors 10A, 10B, and 10C is started (step S12). - The
multiple input units 20A, 20B, and 20C wait for input from the multiple sensors 10A, 10B, and 10C, respectively (step S13). The multiple input units 20A, 20B, and 20C each end processing in a case where streaming of the corresponding one of the multiple sensors 10A, 10B, and 10C ends. - In a case where pieces of data from the
multiple sensors 10A, 10B, and 10C are received, respectively, the multiple input units 20A, 20B, and 20C start processing of the pieces of data (step S14). The multiple input units 20A, 20B, and 20C each perform a packet generation process for the processing unit in the subsequent stage (step S15). Next, the multiple input units 20A, 20B, and 20C each transmit the packetized data, including the additional information added as the header information, to the processing unit in the subsequent stage (step S16). -
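The input-unit flow of steps S13 to S16 described above can be sketched as follows (a minimal illustration; the header fields and the queue-based hand-off are assumptions, not the patent's actual implementation):

```python
from queue import Queue

def input_unit_loop(sensor_data, header_template, next_stage):
    """Sketch of steps S13-S16: receive data from a sensor, packetize it
    with the additional information as a header, and forward it to the
    processing unit in the subsequent stage (here modeled as a queue)."""
    for payload in sensor_data:                    # S13/S14: data received
        header = dict(header_template)             # S15: generate the packet
        packet = {"header": header, "payload": payload}
        next_stage.put(packet)                     # S16: transmit downstream

stage_queue = Queue()
input_unit_loop([b"frame0", b"frame1"],
                {"route": "30A", "priority": 1},
                stage_queue)
print(stage_queue.qsize())  # 2 packets queued for the next stage
```

Because the additional information travels inside each packet, the downstream processing units need no per-stream configuration of their own.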
FIG. 22 is a flowchart illustrating an example of operation related to each of the multiple stages of processing units 30A, 30B, and 30C in the signal processing device 1. - First, streaming from each of the
multiple sensors 10A, 10B, and 10C is started (step S21). The multiple stages of processing units 30A, 30B, and 30C each wait for data reception in the queue processor 34 (step S22). The multiple stages of processing units 30A, 30B, and 30C each end processing in a case where streaming of each of the multiple sensors 10A, 10B, and 10C ends. - In a case where data is received in the
queue processor 34, the multiple stages of processing units 30A, 30B, and 30C each determine the data to be processed, on the basis of the priority indicated by the header information (step S23). Next, the multiple stages of processing units 30A, 30B, and 30C each determine the setting value for processing, on the basis of the header information (step S24). Next, the multiple stages of processing units 30A, 30B, and 30C each start processing of the data (step S25). - The multiple stages of
processing units 30A, 30B, and 30C each perform a packet generation process for the processing unit in the subsequent stage (step S26). Next, the multiple stages of processing units 30A, 30B, and 30C each transmit the packetized data to the processing unit in the subsequent stage (step S27). Next, the multiple stages of processing units 30A, 30B, and 30C each determine whether or not there is data in the queue processor 34 (step S28). In a case where determination is made that there is no data in the queue processor 34 (step S28; N), the process returns to step S22. In a case where determination is made that there is data in the queue processor 34 (step S28; Y), the process returns to step S23. - As described above, in the
signal processing device 1 according to the first embodiment, in each of the multiple stages of signal processing system hardware, common signal processing is performed on each of the multiple pieces of data, on the basis of the additional information added to each of the multiple pieces of data. This makes it possible to perform signal processing on the multiple pieces of data, while suppressing a circuit scale and electric power consumption. - In the
signal processing device 1 according to the first embodiment, the multiple stages of signal processing system hardware are shared by the multiple external devices, which makes it possible to reduce a hardware scale. In the signal processing device 1 according to the first embodiment, data flow of the signal processing system hardware proceeds without intervention of the CPU 2 or the DSP 3, which allows the CPU 2 to concentrate on processing other than signal processing. In the signal processing device 1 according to the first embodiment, routing for the multiple stages of signal processing system hardware is performed in units of packets, which makes it possible to suppress electric power consumption during standby for signal processing. - It is to be noted that the effects described in the present specification are merely examples and not limitative, and other effects may be achieved. The same applies to effects of the following other embodiments.
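The per-packet processing-unit flow of steps S22 to S28 described above can be sketched as follows (a minimal illustration; the header fields and the lower-number-is-higher-priority convention are assumptions, not the patent's actual implementation):

```python
import heapq

def processing_unit_loop(in_packets, process, next_stage):
    """Sketch of steps S22-S28: queue received packets, repeatedly pick the
    highest-priority one (S23), read its setting value from the header (S24),
    run the common processing (S25), and repacketize and forward the result
    (S26-S27) until the queue is empty (S28)."""
    queue = []
    for seq, pkt in enumerate(in_packets):
        # Lower number = higher priority; seq keeps the ordering stable.
        heapq.heappush(queue, (pkt["header"]["priority"], seq, pkt))
    while queue:                                    # S28: data remains -> S23
        _, _, pkt = heapq.heappop(queue)            # S23: select by priority
        setting = pkt["header"]["setting"]          # S24: setting from header
        result = process(pkt["payload"], setting)   # S25: common processing
        next_stage.append({"header": pkt["header"], "payload": result})  # S26/S27

downstream = []
processing_unit_loop(
    [{"header": {"priority": 2, "setting": 10}, "payload": 1},
     {"header": {"priority": 1, "setting": 10}, "payload": 2}],
    lambda data, setting: data * setting,
    downstream)
print([p["payload"] for p in downstream])  # [20, 10]: priority 1 ran first
```

Since each packet brings its own setting value, the same loop body serves every stream, which is the sense in which the processing is "common".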
- The technology according to the present disclosure is not limited to the description of the embodiment described above, and various modifications may be made.
- For example, the present technology may have the following configurations.
- According to the present technology having the following configurations, in each of the multiple stages of processing units, common signal processing is performed on each of the multiple pieces of data, on the basis of the additional information added to each of the multiple pieces of data. This makes it possible to perform signal processing on the multiple pieces of data, while suppressing a circuit scale and electric power consumption.
- (1)
- A signal processing device including:
- multiple input units that add additional information necessary for signal processing to each of multiple pieces of data inputted from respective multiple external devices, and output the multiple pieces of data; and multiple stages of processing units each configured to perform common signal
- processing on each of the multiple pieces of data, on the basis of the additional information.
- (2)
- The signal processing device according to (1), in which the additional information includes
- instruction information indicating an instruction as to signal processing using which processing unit, out of the multiple stages of processing units, is to be performed on each of the multiple pieces of data, and
- setting information indicating a setting value to be used for signal processing in each of the multiple stages of processing units.
- (3)
- The signal processing device according to (1) or (2), in which the additional information includes information indicating a priority for signal processing of each of the multiple pieces of data in each of the multiple stages of processing units.
- (4)
- The signal processing device according to any one of (1) to (3), further including a controller that instructs each of the multiple input units as to signal processing using which processing unit, out of the multiple stages of processing units, is to be performed.
- (5)
- The signal processing device according to any one of (1) to (4), in which the multiple input units each include a first packet generator that generates a packet of each of the multiple pieces of data, adds the additional information as a header to the packet, and outputs the packet.
- (6)
- The signal processing device according to (5), in which the multiple stages of processing units each include
- a packet analyzer that analyzes the header added to the packet, and determines a setting value to be used for signal processing, and
- a second packet generator that generates a packet in which information to be used for signal processing of the processing unit in the next stage is added as the additional information to a header.
- (7)
- The signal processing device according to (3), in which the multiple stages of processing units each further include a queue processor that performs queue processing on each of the multiple pieces of data, on the basis of the information indicating the priority.
- (8)
- The signal processing device according to (7), in which, in a case where a queue overflow occurs, the queue processor discards data of which the priority is relatively low, out of the multiple pieces of data.
- (9)
- The signal processing device according to (8), further including a controller that is able to adjust setting values of the multiple external devices, in which,
- in a case where a queue overflow occurs, the queue processor provides the controller with a notification that the queue overflow has occurred, and
- the controller adjusts the setting values of the multiple external devices, on the basis of the notification from the queue processor.
- (10)
- A signal processing method including:
- adding additional information necessary for signal processing to each of multiple pieces of data inputted from respective multiple external devices, and outputting the multiple pieces of data; and
- performing, in each of multiple stages of processing units, common signal processing on each of the multiple pieces of data, on the basis of the additional information.
- (11)
- An imaging apparatus including:
- multiple sensors;
- multiple input units that add additional information necessary for signal processing to each of multiple pieces of data inputted from the respective multiple sensors, and output the multiple pieces of data; and
- multiple stages of processing units each configured to perform common signal processing on each of the multiple pieces of data, on the basis of the additional information.
- This application claims the benefit of Japanese Priority Patent Application JP2020-186819 filed with the Japan Patent Office on Nov. 9, 2020, the entire contents of which are incorporated herein by reference.
- It should be understood by those skilled in the art that various modifications, combinations, sub-combinations, and alterations may occur depending on design requirements and other factors insofar as they are within the scope of the appended claims or the equivalents thereof.
Claims (11)
1. A signal processing device comprising:
multiple input units that add additional information necessary for signal processing to each of multiple pieces of data inputted from respective multiple external devices, and output the multiple pieces of data; and
multiple stages of processing units each configured to perform common signal processing on each of the multiple pieces of data, on a basis of the additional information.
2. The signal processing device according to claim 1, wherein the additional information includes
instruction information indicating an instruction as to signal processing using which processing unit, out of the multiple stages of processing units, is to be performed on each of the multiple pieces of data, and
setting information indicating a setting value to be used for signal processing in each of the multiple stages of processing units.
3. The signal processing device according to claim 1, wherein the additional information includes information indicating a priority for signal processing of each of the multiple pieces of data in each of the multiple stages of processing units.
4. The signal processing device according to claim 1, further comprising a controller that instructs each of the multiple input units as to signal processing using which processing unit, out of the multiple stages of processing units, is to be performed.
5. The signal processing device according to claim 1, wherein the multiple input units each include a first packet generator that generates a packet of each of the multiple pieces of data, adds the additional information as a header to the packet, and outputs the packet.
6. The signal processing device according to claim 5, wherein the multiple stages of processing units each include
a packet analyzer that analyzes the header added to the packet, and determines a setting value to be used for signal processing, and
a second packet generator that generates a packet in which information to be used for signal processing of the processing unit in the next stage is added as the additional information to a header.
7. The signal processing device according to claim 3, wherein the multiple stages of processing units each further include a queue processor that performs queue processing on each of the multiple pieces of data, on a basis of the information indicating the priority.
8. The signal processing device according to claim 7, wherein, in a case where a queue overflow occurs, the queue processor discards data of which the priority is relatively low, out of the multiple pieces of data.
9. The signal processing device according to claim 8, further comprising a controller that is able to adjust setting values of the multiple external devices, wherein,
in a case where a queue overflow occurs, the queue processor provides the controller with a notification that the queue overflow has occurred, and
the controller adjusts the setting values of the multiple external devices, on a basis of the notification from the queue processor.
10. A signal processing method comprising:
adding additional information necessary for signal processing to each of multiple pieces of data inputted from respective multiple external devices, and outputting the multiple pieces of data; and
performing, in each of multiple stages of processing units, common signal processing on each of the multiple pieces of data, on a basis of the additional information.
11. An imaging apparatus comprising:
multiple sensors;
multiple input units that add additional information necessary for signal processing to each of multiple pieces of data inputted from the respective multiple sensors, and output the multiple pieces of data; and
multiple stages of processing units each configured to perform common signal processing on each of the multiple pieces of data, on a basis of the additional information.
Applications Claiming Priority (3)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2020186819A JP2022076407A (en) | 2020-11-09 | 2020-11-09 | Signal processing apparatus, signal processing method, and imaging apparatus |
| JP2020-186819 | 2020-11-09 | ||
| PCT/JP2021/037795 WO2022097434A1 (en) | 2020-11-09 | 2021-10-12 | Signal processing apparatus, signal processing method, and imaging apparatus |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| US20230393897A1 true US20230393897A1 (en) | 2023-12-07 |
Family
ID=81457142
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US18/033,745 Pending US20230393897A1 (en) | 2020-11-09 | 2021-10-12 | Signal processing device, signal processing method, and imaging apparatus |
Country Status (3)
| Country | Link |
|---|---|
| US (1) | US20230393897A1 (en) |
| JP (1) | JP2022076407A (en) |
| WO (1) | WO2022097434A1 (en) |
Cited By (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| EP4543029A1 (en) * | 2023-10-20 | 2025-04-23 | Samsung Electronics Co., Ltd | Method and apparatus with machine-perspective signal processing |
Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US8595352B2 (en) * | 2006-03-22 | 2013-11-26 | Brocade Communications Systems, Inc. | Protocols for connecting intelligent service modules in a storage area network |
| US20190130533A1 (en) * | 2017-11-01 | 2019-05-02 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Method for image-processing and mobile terminal using dual cameras |
| US11307900B2 (en) * | 2017-08-29 | 2022-04-19 | International Business Machines Corporation | Adjustment of the number of central processing units to meet performance requirements of an I/O resource |
Family Cites Families (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2002247509A (en) * | 2001-02-21 | 2002-08-30 | Sanyo Electric Co Ltd | Camera device |
| JP4305055B2 (en) * | 2003-05-22 | 2009-07-29 | 株式会社日立製作所 | Image recording device |
| JP4415978B2 (en) * | 2006-08-02 | 2010-02-17 | ソニー株式会社 | Image signal processing apparatus and image signal processing method |
| JP5266076B2 (en) * | 2009-01-30 | 2013-08-21 | パナソニック株式会社 | Intercom system |
| US20120320751A1 (en) * | 2011-06-17 | 2012-12-20 | Jing Zhu | Method and system for communicating data packets |
-
2020
- 2020-11-09 JP JP2020186819A patent/JP2022076407A/en active Pending
-
2021
- 2021-10-12 WO PCT/JP2021/037795 patent/WO2022097434A1/en not_active Ceased
- 2021-10-12 US US18/033,745 patent/US20230393897A1/en active Pending
Patent Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US8595352B2 (en) * | 2006-03-22 | 2013-11-26 | Brocade Communications Systems, Inc. | Protocols for connecting intelligent service modules in a storage area network |
| US11307900B2 (en) * | 2017-08-29 | 2022-04-19 | International Business Machines Corporation | Adjustment of the number of central processing units to meet performance requirements of an I/O resource |
| US20190130533A1 (en) * | 2017-11-01 | 2019-05-02 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Method for image-processing and mobile terminal using dual cameras |
Cited By (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| EP4543029A1 (en) * | 2023-10-20 | 2025-04-23 | Samsung Electronics Co., Ltd | Method and apparatus with machine-perspective signal processing |
Also Published As
| Publication number | Publication date |
|---|---|
| JP2022076407A (en) | 2022-05-19 |
| WO2022097434A1 (en) | 2022-05-12 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN103873781B (en) | A kind of wide dynamic camera implementation method and device | |
| US8508625B2 (en) | Image processing apparatus | |
| TWI519151B (en) | Image processing method and image processing device | |
| US10348969B2 (en) | Camera controller, image processing module, and semiconductor system | |
| US10255704B2 (en) | Video delivery terminal, non-transitory computer-readable medium, and video delivery method | |
| WO2016098641A1 (en) | Image pickup device, image pickup method, and program | |
| US20080170131A1 (en) | Display apparatus and video adjusting method thereof | |
| EP2945368A1 (en) | Apparatus and method for obtaining vital sign of subject | |
| JP2018137567A (en) | Video transmitter and video receiver | |
| US20230393897A1 (en) | Signal processing device, signal processing method, and imaging apparatus | |
| US20170257548A1 (en) | Image signal processor apparatus and image signal processing method | |
| JP2008124928A (en) | Auto white balance system | |
| US10469727B2 (en) | Imaging apparatus, imaging method, and imaging system | |
| US9338414B2 (en) | Imaging apparatus and imaging method for performing interpolation processing | |
| JP5033702B2 (en) | Imaging device | |
| JP2025505127A (en) | Local generation of commands for vehicle sensors | |
| JP6676948B2 (en) | Image processing apparatus, imaging apparatus, and image processing program | |
| JP6065934B2 (en) | Video signal processing apparatus and imaging system | |
| WO2018170918A1 (en) | Multi-camera system for low light | |
| US20240121519A1 (en) | Information processing device, information processing method, and program | |
| US9940488B2 (en) | Dual video pipe with overlap filtering | |
| JP4123642B2 (en) | SIGNAL PROCESSING CIRCUIT FOR SOLID-STATE IMAGING DEVICE, CAMERA, VIDEO SIGNAL COMMUNICATION SYSTEM, AND VIDEO SIGNAL COMMUNICATION METHOD USING SOLID-STATE IMAGING DEVICE | |
| JP2010147756A (en) | Video image display apparatus | |
| US20140071314A1 (en) | Image processing apparatus and control method thereof | |
| JP6292870B2 (en) | Image processing apparatus, image processing method, and program |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| AS | Assignment |
Owner name: SONY SEMICONDUCTOR SOLUTIONS CORPORATION, JAPAN Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:WATANABE, GOSHI;OKUIKE, KAZUYUKI;KATAYAMA, HIROSHI;SIGNING DATES FROM 20230315 TO 20230424;REEL/FRAME:063436/0063 |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION COUNTED, NOT YET MAILED |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |