HK1202009B - Systems and methods for generating a panoramic image - Google Patents
Systems and methods for generating a panoramic image
- Publication number
- HK1202009B (application HK15102380.7A)
- Authority
- HK
- Hong Kong
- Prior art keywords
- camera
- image data
- cameras
- line
- downstream
Abstract
Systems and methods for generating a panoramic image include capturing image data from a plurality of cameras and storing the image data within a memory buffer of the respective camera; transmitting image data from an upstream camera to a downstream camera; and combining, within the downstream camera, the image data from the upstream camera with the image data of the downstream camera as combined image data. Each of the plurality of cameras may include an imaging array for capturing image data of a scene; a receiver for receiving image data from an upstream camera of the plurality of cameras; a memory buffer for combining the image data received from the upstream camera with image data captured by the imaging array to form combined image data, and for storing the combined image data; and a transmitter for transmitting the stored combined image data to a downstream camera of the plurality of cameras.
Description
RELATED APPLICATIONS
This application claims priority to U.S. Provisional Application Serial No. 61/835,492, filed June 14, 2013, which is incorporated herein by reference in its entirety.
Technical Field
The present disclosure relates to an image sensor for generating a panoramic view. In particular, a Mobile Industry Processor Interface (MIPI) daisy chain is constructed from a plurality of CMOS image sensors, thereby creating a panoramic view.
Background
Panoramic images acquired by smartphones are typically generated using a single camera: the user sweeps the camera's field of view to acquire multiple images. This operation requires considerable effort from the user, and fusing the multiple fields of view together imposes substantial processing requirements.
Disclosure of Invention
In a first aspect, a method for generating a panoramic image is disclosed, comprising: capturing image data from a plurality of cameras and storing the image data in memory buffers of the respective cameras; transmitting image data from an upstream camera of the plurality of cameras to a downstream camera of the plurality of cameras; and combining the image data from the upstream camera with the image data of the downstream camera within the downstream camera as combined image data.
In a second aspect, a system for generating a panoramic image is disclosed, comprising: a plurality of cameras, each camera having an imaging array for capturing image data of a scene, a receiver for receiving image data from an upstream camera of the plurality of cameras, a memory buffer for combining the image data received from the upstream camera with the image data captured by the imaging array to form combined image data and for storing the combined image data, and a transmitter for transmitting the stored combined image data to a downstream camera of the plurality of cameras.
Drawings
FIG. 1 illustrates a block diagram of an exemplary daisy chain of multiple imaging arrays for generating panoramic images in one embodiment.
Fig. 2 shows a block diagram illustrating exemplary processing of multiple rows of data received by multiple cameras, e.g., the daisy chain of fig. 1, in one embodiment.
Fig. 3 shows an exemplary configuration of multiple cameras within a daisy chain.
FIG. 4 illustrates an exemplary method for generating an image using a daisy chain of cameras in one embodiment.
Fig. 5 illustrates an exemplary address identifier for each device within the daisy chain of fig. 1.
FIG. 6 illustrates an exemplary data transfer diagram showing a master device writing to a slave device in one embodiment.
FIG. 7 illustrates an exemplary data transfer diagram showing a master device reading from a slave device in one embodiment.
FIG. 8 shows an exemplary I2C general call on the bus line of FIG. 1 in one embodiment.
Detailed Description
Applications of the embodiments disclosed herein include panoramic or surround-view imaging, 3D imaging, and gesture recognition. The cameras described herein may be daisy-chained rather than coupled together via lengths of fiber-optic cable.
Several cameras (or image sensors) are used to achieve a panoramic view. In some embodiments, the several cameras need not be aligned along any one axis, because camera alignment is achieved by pre-calibration. Although the fields of view of the multiple cameras overlap, One-Time Programmable (OTP) memory can be programmed so that the read-out regions produce no overlap in the resulting panoramic images.
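As a hedged illustration of this pre-calibrated read-out, the sketch below loads a per-camera crop window from OTP memory; the C names, struct layout, and OTP offsets are assumptions for illustration only, as the disclosure does not define an API.

```c
#include <stdint.h>

struct crop_window {
    uint16_t x_start;  /* first column read out */
    uint16_t x_end;    /* last column read out */
};

/* otp_read_u8() stands in for whatever OTP access the sensor exposes. */
extern uint8_t otp_read_u8(uint16_t offset);

/* Load the pre-calibrated crop window; offsets 0x10..0x13 are assumed. */
static struct crop_window load_crop_from_otp(void)
{
    struct crop_window w;
    w.x_start = (uint16_t)((otp_read_u8(0x10) << 8) | otp_read_u8(0x11));
    w.x_end   = (uint16_t)((otp_read_u8(0x12) << 8) | otp_read_u8(0x13));
    return w;
}
```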
In one embodiment, there is no frame memory, only line buffer memory. In another embodiment, there may be additional frame memory so that the concepts discussed below are processed for each individual frame and then combined together.
In some embodiments, the line buffer shares data with a picture-in-picture (PIP) main buffer.
The horizontal blanking time (H blanking time) may be defined by the time required to transmit the image line data collected by each sensor to the host. For example, the H blanking time may be proportional to the number of image sensors in the system. Data rates (i.e., transmission and reception rates) may be limited by the MIPI transmit (Tx) and receive (Rx) lanes.
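To make the proportionality concrete, the following sketch computes a per-line blanking budget; all names, units, and the linear model are assumptions for illustration, not values from this disclosure.

```c
#include <stdint.h>

/* Time (ns) to forward one line over a single MIPI Tx->Rx hop. */
static uint32_t line_hop_ns(uint32_t bytes_per_line, uint64_t lane_bps,
                            uint32_t lanes)
{
    return (uint32_t)(((uint64_t)bytes_per_line * 8u * 1000000000u) /
                      (lane_bps * lanes));
}

/* Assumed model: H blanking grows linearly with the number of sensors,
 * since each line is forwarded once per camera in the chain. */
static uint32_t h_blank_ns(uint32_t n_cameras, uint32_t bytes_per_line,
                           uint64_t lane_bps, uint32_t lanes)
{
    return n_cameras * line_hop_ns(bytes_per_line, lane_bps, lanes);
}
```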
In one embodiment, a global inter-integrated circuit identifier (I2C ID) identifies the camera (or image sensor) chain, and an individual I2C ID identifies each particular camera (or image sensor). The individual I2C ID of each camera may be loaded from OTP memory during initial setup. The I2C protocol is discussed in further detail in the additional examples below. In one embodiment, some "intermediate" sensors may simply pass data through, as described below. Exemplary camera arrangements are shown in FIG. 3. For example, arrangement 302 is shown to include four cameras positioned on a phone for generating panoramic images. Arrangement 304 illustrates an exemplary gesture-recognition embodiment. Arrangement 306 illustrates an exemplary automobile monitoring system from a top view.
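A minimal sketch of this two-level addressing, assuming a shared chain address plus a per-sensor address read from OTP at initial setup; the address value, OTP offset, and function names are hypothetical.

```c
#include <stdint.h>
#include <stdbool.h>

#define CHAIN_GLOBAL_ID 0x30u   /* assumed 7-bit address for the whole chain */

struct camera_ids {
    uint8_t global_id;      /* identifies the daisy chain as a group */
    uint8_t individual_id;  /* identifies this particular sensor */
};

/* otp_read_u8() stands in for whatever OTP access the sensor provides. */
extern uint8_t otp_read_u8(uint16_t offset);

static struct camera_ids load_ids_from_otp(void)
{
    struct camera_ids ids;
    ids.global_id     = CHAIN_GLOBAL_ID;
    ids.individual_id = otp_read_u8(0x00); /* assumed OTP offset */
    return ids;
}

/* A sensor responds when either its chain or its individual ID is called. */
static bool is_addressed(const struct camera_ids *ids, uint8_t addr7)
{
    return addr7 == ids->global_id || addr7 == ids->individual_id;
}
```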
FIG. 1 illustrates a block diagram of an exemplary daisy chain 100 of multiple imaging arrays for generating panoramic images in one embodiment. The daisy chain 100 is, for example, a MIPI daisy chain comprising an input interface and an output interface. Control signals from the host processor are input to each camera via the I2C line. In addition, each camera includes a synchronization signal (FSIN) input so that all cameras can acquire pictures simultaneously. An advantage of this synchronization is that there is no motion blur between frames.
The MIPI daisy chain 100 includes three imaging cameras 102, 104, and 106, coupled together in series and coupled to host 108 using I2C data line 110. Host 108 may be a processor; alternatively, host 108 may be the most-downstream camera, including a processor that controls upstream camera timing via the I2C data line 110. The present application is not intended to be limited to the I2C bus protocol. That is, other connection protocols may be used to couple each of the cameras 102, 104, and 106 and the host 108. Further, the daisy chain 100 may include more or fewer cameras than shown without departing from its scope.
Each camera 102, 104, and 106 includes an associated camera subassembly that includes, for example, an imaging array (i.e., arrays 112, 114, 116 within the cameras), an associated buffer (i.e., buffers 118, 120, and 122), an associated MIPI receiver (i.e., MIPI receivers 124, 126, and 128), and an associated MIPI transmitter (i.e., MIPI transmitters 130, 132, and 134). The imaging arrays 112, 114, 116 are, for example, CMOS, CCD, NMOS, Live MOS, or other arrays of photosensitive pixels capable of generating image data of a panoramic scene. Buffers 118, 120, and 122 are memory buffers capable of storing image data from their respective imaging arrays as well as image data received from upstream cameras within daisy chain 100. For example, the buffer 120 of the second camera 104 stores image data from the array 114, as well as image data received by the MIPI Rx 126 from the MIPI Tx 130 of the camera 102. Each MIPI receiver receives image data from an upstream camera (i.e., MIPI Rx 126 of camera 104 receives image data from MIPI Tx 130 of camera 102). Each MIPI transmitter transmits image data to a downstream camera (i.e., MIPI Tx 132 transmits data to MIPI Rx 128).
In FIG. 1, upstream is toward the left and downstream is toward the right, with camera 102 being the first camera in MIPI daisy chain 100. All of the cameras 102, 104, and 106 (and potentially the host 108, if the host is a camera) acquire image data of the image scene simultaneously. For example, cameras 102, 104, and 106 may acquire one or more lines of image data and store them in each respective buffer 118, 120, and 122. Arrows (1), (4), and (9) within FIG. 1 indicate image data transferred from each array to the respective buffer. Beginning with the first camera 102, the image data stored in buffer 118 is transferred from the first camera 102 to the second camera 104, where it is combined with the image data stored in buffer 120. For example, MIPI Tx 130 transmits the image data stored within buffer 118 to MIPI Rx 126 of camera 104, as indicated by arrow (3). The MIPI Rx 126 of the intermediate camera 104 then transfers the image data received from the immediately upstream first camera 102, as indicated by arrow (5), and saves it to line buffer 120, where the image data generated by array 112 is combined in series with the image data generated by array 114. The data transfers indicated by arrows (2), (3), and (5) may begin after the first line of image data is transferred to buffer 118, but before the entire array of image data from array 112 has been collected. That is, while camera 102 captures image data via array 112, camera 102 may simultaneously transmit image data to camera 104.
Once combined within buffer 120, the combined image data is transferred to MIPI Tx 132, as indicated by arrow (6), which in turn transfers the image data to the MIPI Rx of the next adjacent downstream camera (i.e., MIPI Rx 128 of camera 106). Although described above as capturing image data simultaneously, it should be understood that the cameras need not capture image data simultaneously. In such embodiments, the camera 104 may image the scene after, or just before, receiving image data from the MIPI Tx 130 of the camera 102. Thus, the order of the data transfers indicated by arrows (4) and (5) may be interchanged.
The data transfers indicated by arrows (3) through (6) are repeated for each additional downstream camera in the MIPI daisy chain 100, i.e., camera 106. For example, in data transfer (7), the MIPI Tx 132 of the middle camera 104 transfers the combined data to the MIPI Rx 128 of the downstream right-side camera 106. The same process continues in the rightmost camera 106, as represented by arrows (8)-(10). The last camera in the daisy chain 100 transfers (11) the total combined data to the host 108.
The time taken to transmit a particular row of image data from the most upstream camera (i.e., camera 102) to host 108 defines the H blanking time of the overall daisy chain 100. This H blanking time is significantly less than the processing time that would be required to transmit all of the image data from each camera and have a processor fuse the image data together. For example, some cameras capture image data by "scanning" a first row of pixels, then scanning a second row of pixels, and so on, until all rows are scanned. Advantageously, in the daisy chain 100 configuration, the first row of image data from the most upstream camera may be transferred to the next downstream camera before the scan of the first camera's entire imaging array is complete. Therefore, the processing time to generate the panoramic image is significantly reduced. Furthermore, because the OTP defines the fusing characteristic between each pair of adjacent cameras in the daisy chain, processing time is reduced because complex image fusion techniques requiring a large amount of processing time are not required.
In one embodiment, the architectural features within each camera of FIG. 1 can be summarized as follows. First, each camera reads out the same timing line and saves it to its own line buffer. Second, the MIPI Tx receives MIPI line data from the line buffer and delivers it to the next camera in sequence, while each camera simultaneously starts reading out the next timing line. Third, the MIPI Rx receives MIPI line data from the upstream camera and delivers it to the line buffer; the last camera's MIPI line data is delivered to the host.
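The per-line loop below sketches these three steps for one middle camera under stated assumptions: the array_read_line, mipi_rx_line, and mipi_tx_line primitives, the line width, and the chain position are all hypothetical, since the disclosure describes behavior rather than an API.

```c
#include <stddef.h>
#include <stdint.h>

#define LINE_BYTES    1920u  /* assumed bytes per sensor line */
#define UPSTREAM_CAMS 1u     /* cameras upstream of this one */

extern void array_read_line(uint8_t *dst, size_t n);     /* own sensor row */
extern void mipi_rx_line(uint8_t *dst, size_t n);        /* from upstream */
extern void mipi_tx_line(const uint8_t *src, size_t n);  /* to downstream */

static void forward_one_line(void)
{
    /* Line buffer holds upstream data followed by this camera's data. */
    static uint8_t line_buf[(UPSTREAM_CAMS + 1u) * LINE_BYTES];

    /* 1. Read out the current timing line from the local imaging array. */
    array_read_line(line_buf + UPSTREAM_CAMS * LINE_BYTES, LINE_BYTES);

    /* 2. Receive the combined line from the adjacent upstream camera. */
    mipi_rx_line(line_buf, UPSTREAM_CAMS * LINE_BYTES);

    /* 3. Transmit the concatenated line to the next camera downstream. */
    mipi_tx_line(line_buf, sizeof line_buf);
}
```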
As described above, an image is composed of a plurality of rows; the following describes an example of how each line of image data is generated, combined, and transmitted. FIG. 2 shows a block diagram illustrating exemplary processing of multiple rows of data received by multiple cameras, such as the daisy chain 100 of FIG. 1, in one embodiment. Cameras 1, 2, and 3 (not shown) are similar to cameras 102, 104, 106 of FIG. 1, and each produces a respective row 1 of data; each row 1 is transferred through a line buffer, combined, and finally sent to a processor (or host camera) to form the final row 1 as shown. The same procedure applies to rows 2, 3, and so on.
Thus, each current camera in the daisy chain receives the combined image data from each upstream camera, adds image data rows from the current camera to the combined image data, and then sends the updated combined image data downstream to the next camera in the chain. This is illustrated in fig. 1 via the circles representing data from each camera. For example, the data 113 for the camera 102 is initially stored in the buffer 118, and then passes through each of the downstream buffers 120 and 122, where the data 113 is combined with additional data from each downstream camera. Similarly, data 115 is initially stored within buffer 120, combined with data 113 in buffer 120, and combined data 113 and 115 then passes through buffer 122, combining data 113 and 115 with data 117 in buffer 122. This process iterates until all the combined data reaches host 108. As disclosed above, camera alignment is achieved by pre-calibration of the OTP. Although there is overlap between multiple cameras, the OTP memory can be programmed to read out correctly so that there is no overlap in the panoramic view.
FIG. 3 shows exemplary configurations of multiple cameras within a daisy chain. Each of the cameras may be located on a respective side of an object. For example, configuration 300 shows a smartphone having four cameras 302A-302D (indicated by dashed circles), each located on a respective side 302A-302D of the smartphone. As shown, camera 302A is located on front side 302A, camera 302B on back side 302B, camera 302C on left side 302C, and camera 302D on right side 302D. The cameras are configured as a daisy chain such that the panoramic images produced by the smartphone incorporate the concepts and structures discussed above with respect to FIGS. 1-2.
Configuration 304 illustrates a gesture recognition configuration in which each camera is located on a single surface of a device such as a smartphone. Importantly, utilizing a daisy chain configuration provides significant advantages for gesture recognition. As discussed above, processing time is significantly reduced with daisy chaining, so fast gestures can be recognized more accurately.
Configuration 306 shows a vehicle monitoring system in a top view, with each camera located on a different side of the automobile. It should be understood that more or fewer cameras may be placed on each side of the automobile. The reduced processing time of the daisy-chained cameras within configuration 306 is advantageous because the automobile may travel at high speed. Faster processing times thus allow for more frequent and accurate imaging of the surrounding environment in which the automobile is traveling. Within configuration 306, cameras 307a-307d may be configured as a daisy chain, as discussed above. Further, CPU 308 may operate in a manner similar to host 108, as discussed above. The CPU 308 may be a host vehicle computer, or a separate CPU for providing visual recognition assistance, such as visual lane assistance, automatic headlamp control, wiper assistance control, parking assistance control, brake assistance control, or other vehicle-operation assistance that utilizes visual image data. The CPU may receive and analyze image data received from each of the cameras 307a-307d and utilize such information to assist in vehicle operation.
Fig. 4 illustrates an exemplary method 400 for generating an image using a daisy chain of cameras in one embodiment. The method 400 is performed, for example, using the daisy chain 100 of fig. 1.
In optional step 402, method 400 calibrates each camera and stores the calibration settings in OTP memory to coordinate image fusion between the multiple cameras. For example, step 402 is performed by the processor or host 108 and calibrates each of the cameras 102, 104, 106 such that overlap blur is not present in the image resulting from the combination of image data from each of the cameras.
In step 404, a plurality of cameras capture image data. In one example of step 404, each camera 102, 104, 106 begins capture of image data 113, 115, 117 of the external scene and stores the image data in its respective line buffer 118, 120, 122. For example, to capture image data, a timing signal is sent from host 108 via the I2C bus line 110 so that the rows of pixels of each array 112, 114, 116 are read out simultaneously and stored in the respective buffers.
In step 406, the image data of the first camera stored in the first camera buffer in step 404 is combined with the image data of the second camera. For example, image data of a first camera is transmitted from the first camera to a second camera downstream of the first camera, and the image data from the first camera is combined with the image data of the second camera at the second camera. In one example of step 406, a first line of image data stored in buffer 118 is transferred to MIPI Tx 130. Image data is transmitted via the MIPI Tx 130 to the MIPI Rx 126 of the camera 104. The image data 113 is then stored and combined with the first line of image data 115 captured by the second camera 104 in the buffer 120. Step 406 may be performed after the first row of image data 113 is captured by camera 102, but before the image data for the entire array 112 is captured.
In step 408, the combined image data of the first and second cameras, stored in the second camera's buffer in step 406, is combined with the image data of an additional camera. For example, the combined first line of image data (the first line from array 112 combined with the first line from array 114) is transmitted to, and combined with, a first line of image data 117 from array 116 of a third camera. In one example of operation at step 408, the combined image data 113+115 from the second camera 104 is transferred via the MIPI Tx 132 to the MIPI Rx 128 of the third camera 106. The combined image data 113+115 is then combined with the first row of image data 117 of the additional camera (i.e., camera 106) within the additional camera's buffer (i.e., buffer 122). Step 408 iterates for each additional downstream camera until the combined image data is transmitted to host 108.
Steps 406-408 are repeated for each row of image data captured by the daisy chain camera. In step 410, the method 400 combines the current line of combined image data with the previous line of combined image data. In one example of operation, host 108 receives a combined second line of image data from camera 106 (i.e., via MIPI Tx 134) and combines the second line of image data with the first line of image data.
In step 412, the method 400 outputs a panoramic image including all combined lines of combined image data from each camera. For example, host 108 outputs a panoramic image including a combined row of combined image data received by each of cameras 102, 104, and 106.
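A hedged host-side sketch of steps 410-412: because each row arriving at the host is already the side-by-side combination of every camera's row, the host only stacks rows into a frame. The dimensions and the host_receive_row primitive are assumptions for illustration.

```c
#include <stddef.h>
#include <stdint.h>

#define ROW_BYTES (3u * 1920u)  /* assumed: 3 cameras x bytes per line */
#define N_ROWS    1080u         /* assumed rows per frame */

/* Receives one combined row from the last camera's MIPI Tx. */
extern void host_receive_row(uint8_t *dst, size_t n);

/* Stack the combined rows, one per line period, to form the panorama. */
static void build_panorama(uint8_t frame[N_ROWS][ROW_BYTES])
{
    for (uint32_t row = 0; row < N_ROWS; ++row)
        host_receive_row(frame[row], ROW_BYTES);
}
```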
The above examples use the I2C protocol to control the timing of the image-capture sequencing. I2C allows communication between two I2C devices over just two wires, thereby reducing the physical footprint required for the imaging system's daisy chain. For example, each device is connected to serial clock and data lines, which carry timing and data information, respectively. Within the I2C protocol, each device is either a master or a slave. The master is responsible for the bus at the current time: it controls the clock, generates start and stop signals, and can transmit data to, or receive data from, the slave devices. A slave listens to the bus and acts on the control signals and data it receives. Slave devices do not usually transmit data between themselves.
FIG. 5 illustrates an exemplary address identifier 500 for each device within the daisy chain of FIG. 1. For example, each of the array, MIPI Tx, and MIPI Rx may have an identifier associated therewith, and each identifier is unique to a particular device. When a master device sends a signal to one or more slave devices, the signal is broadcast to all slave devices simultaneously. Each device compares the first 7 bits after the START bit to its own address to determine whether the master is "talking" to that particular slave device. If the first 7 bits match its own address identifier, the slave considers itself addressed by the master. The bit following the address identifier is the read/write bit. It should be understood that the address identifier may include a fixed portion and a programmable portion. Further, the address identifier may have more or fewer bits associated therewith than shown in FIG. 5 without departing from the scope hereof.
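A small sketch of decoding that first byte, assuming the conventional 7-bit-address-plus-R/W layout described above; the struct and function names are illustrative only.

```c
#include <stdint.h>
#include <stdbool.h>

struct addr_phase {
    uint8_t addr7;   /* bits 7..1 of the first byte after START */
    bool    is_read; /* bit 0: 1 = master reads, 0 = master writes */
};

static struct addr_phase decode_first_byte(uint8_t first_byte)
{
    struct addr_phase p;
    p.addr7   = first_byte >> 1;
    p.is_read = (first_byte & 0x01u) != 0u;
    return p;
}
```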
FIG. 6 illustrates an exemplary data transfer diagram 600 showing a master device writing to a slave device in one embodiment. In FIG. 6, data transmitted from the master to the slave is shown without padding, and data transmitted from the slave to the master is shown with dotted padding. In data transfer 600, the start of the transfer includes identifier 500 and write bit 601. The master then waits for an acknowledgement 602 from the slave. After receiving the acknowledgement 602, the master begins data transmission. The slave may periodically acknowledge the data transmission throughout the transfer until the master sends a stop command.
FIG. 7 illustrates an exemplary data transfer diagram 700 showing a master device reading from a slave device in one embodiment. In FIG. 7, data transmitted from the master to the slave is shown without padding, and data transmitted from the slave to the master is shown with dotted padding. In data transfer 700, the start of the transfer includes identifier 500 and read bit 701. The master then waits for an acknowledgement 702 from the slave. After receiving the acknowledgement 702, the slave begins data transmission to the master. The master periodically sends an acknowledgement 704 indicating receipt of data from the slave until the master sends a stop command.
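The two transfers of FIGS. 6 and 7 can be sketched as the byte sequences a master would drive; i2c_start, i2c_stop, i2c_write_byte, and i2c_read_byte are assumed low-level primitives, not an API defined by this disclosure.

```c
#include <stddef.h>
#include <stdint.h>
#include <stdbool.h>

extern void i2c_start(void);
extern void i2c_stop(void);
extern bool i2c_write_byte(uint8_t b);   /* returns true on slave ACK */
extern uint8_t i2c_read_byte(bool ack);  /* master ACKs all but last byte */

/* FIG. 6: address + W(0), then data bytes, each ACKed by the slave. */
static bool master_write(uint8_t addr7, const uint8_t *data, size_t n)
{
    i2c_start();
    if (!i2c_write_byte((uint8_t)(addr7 << 1))) { i2c_stop(); return false; }
    for (size_t i = 0; i < n; ++i)
        if (!i2c_write_byte(data[i])) { i2c_stop(); return false; }
    i2c_stop();
    return true;
}

/* FIG. 7: address + R(1), then the slave sends data; the master ACKs
 * each byte and NACKs the last one before STOP. */
static bool master_read(uint8_t addr7, uint8_t *data, size_t n)
{
    i2c_start();
    if (!i2c_write_byte((uint8_t)((addr7 << 1) | 1u))) {
        i2c_stop();
        return false;
    }
    for (size_t i = 0; i < n; ++i)
        data[i] = i2c_read_byte(i + 1 < n);
    i2c_stop();
    return true;
}
```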
FIG. 8 shows an I2C general call 800 on the bus line in one embodiment. The general call 800 includes a first byte 802 indicating which slaves the call addresses, and a second byte 804 indicating the action to take. For example, the first byte 802 may indicate that all arrays are addressed, and the second byte 804 may indicate the start of capturing image data. In one example of operation, host 108 of FIG. 1 conveys a general call 800 to each of the cameras 102, 104, 106 via the I2C bus line 110. As discussed above, a master device may address more than one slave device at a time.
In some embodiments, the second byte may indicate which devices are to utilize the programmable portion of their identifiers (i.e., identifier 500). In such an embodiment, the last bit of the second byte may be either 0 or 1. When the last bit is 0, all slave devices either reset and latch in the programmable portion of their addresses, or latch in the programmable portion of their addresses without resetting. When the last bit of the second byte is 1, the call is made by a master device that has no a priori information about the connected slaves' addresses. Here, the master makes the call using its own address so that the slaves can identify the source of the message.
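A hedged sketch of this general-call framing; the specific target and action codes are assumptions, since FIG. 8 is described only at the byte level.

```c
#include <stdint.h>

#define GC_TARGET_ALL_ARRAYS    0x00u  /* assumed: address every imaging array */
#define GC_ACTION_START_CAPTURE 0x06u  /* assumed action code, last bit = 0 */

struct general_call {
    uint8_t target; /* first byte: which slaves the call addresses */
    uint8_t action; /* second byte: what the addressed slaves should do */
};

/* Per the text above, the last bit of the second byte selects between
 * programmable-address behavior (0) and a call that carries the master's
 * own address (1). */
static int action_uses_master_address(const struct general_call *gc)
{
    return gc->action & 0x01u;
}
```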
The I2C protocol used in the imaging system's daisy chain provides significant advantages. For example, the I2C protocol assists in camera initialization and synchronization, significantly reducing blurring between the fused portions that produce the panoramic image. For example, within the daisy chain 100 of FIG. 1, the image data collected by each array 112, 114, 116 may be synchronized based on a general call, as discussed above with respect to FIG. 8.
Variations in the above methods and systems may be made without departing from the scope of the invention. It is therefore to be noted that the subject matter contained in the above description or shown in the accompanying drawings is to be interpreted as illustrative and not in a limiting sense. The following claims are intended to cover all generic and specific features described herein, as well as all statements of the scope of the present method and system, which, as a matter of language, might be said to fall therebetween.
Claims (17)
1. A method for generating a panoramic image, the method comprising:
capturing image data from at least three cameras, the image data of each camera being a series of lines captured by respective lines of the camera's imaging array, the cameras being coupled in series such that each camera other than a first camera of the series is a downstream camera coupled to an adjacent upstream camera;
for each camera, storing the image data in a memory buffer of the respective camera, one row at a time;
transmitting image data from each adjacent upstream camera to a downstream camera coupled thereto, one row at a time; and
combining, within each of the downstream cameras, each line of image data from the adjacent upstream camera with a corresponding line of image data of the downstream camera to form a line of combined image data.
2. The method of claim 1, the capturing step comprising simultaneously capturing image data from the cameras.
3. The method of claim 1, wherein the transmitting step occurs after a first row of the series of rows is captured by the adjacent upstream camera, but before a last row of the series of rows is captured by the adjacent upstream camera.
4. The method of claim 1, wherein:
the transmitting step includes accessing image data from a memory buffer of the adjacent upstream camera using a transmitter within the adjacent upstream camera and transmitting the accessed data to a receiver of the downstream camera; and
the combining step includes storing image data from the adjacent upstream camera from a receiver of the downstream camera into a memory buffer of the downstream camera.
5. The method of claim 1, further comprising generating a call signal from a host to control data transfer between cameras.
6. The method of claim 5, further comprising transmitting the call signal on an inter-integrated circuit (I2C) bus.
7. The method of claim 5, the call signal comprising a first byte indicating an address identifier identifying one or more of the cameras and camera subassemblies, the camera subassemblies including a memory buffer within each camera and a transmitter within each camera, and a second byte indicating an action to be taken by the addressed one or more of the cameras and camera subassemblies.
8. A system for generating a panoramic image, the system comprising at least three serially coupled cameras, each camera comprising:
an imaging array for capturing image data of a scene, the image data being in the form of a series of rows;
a receiver for receiving, one line at a time, the series of lines of image data captured by an immediately upstream one of the serially coupled cameras;
a memory buffer to:
combining a line of image data received from the adjacent upstream camera with a line of image data captured by the imaging array to form a line of combined image data, and
storing the line of combined image data; and
a transmitter for (a) transmitting a line of the combined image data to a downstream camera of the serially coupled cameras, before the memory buffer completes storing a next line of said combined image data, when the camera is not the first of the serially coupled cameras, and (b) transmitting a line of image data captured by the imaging array of the camera when the camera is the first of the serially coupled cameras.
9. The system of claim 8, further comprising a host for coordinating data transfer between serially coupled cameras.
10. The system of claim 8, wherein each of the serially coupled cameras is adapted to capture image data simultaneously.
11. The system of claim 8, wherein the serially coupled cameras are coupled together via an inter-integrated circuit (I2C) bus.
12. The system of claim 9, wherein the host comprises a master one of the serially coupled cameras and the remaining serially coupled cameras are slaves.
13. The system of claim 9, wherein the host comprises an external processor.
14. The system of claim 8, wherein the immediately upstream camera begins transmitting image data to the downstream camera after a first row of the series of rows is captured by the immediately upstream camera, but before a last row of the series of rows is captured by the immediately upstream camera.
15. The system of claim 8, wherein the serially coupled cameras are located on respective sides of an object.
16. The system of claim 8, further comprising:
a respective address identifier for each of the imaging array, the memory buffer, the receiver, and the transmitter of each of the serially coupled cameras; and
an inter-integrated circuit (I2C) bus coupling a host to each of the cameras;
wherein a call signal, transmitted by the host over the inter-integrated circuit (I2C) bus and indicating which address identifier is addressed by the host, coordinates the data transfer between each of the cameras.
17. The system of claim 16, the call signal comprising a first byte indicating which address identifiers are addressed and a second byte indicating an action taken by at least one camera subassembly.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US201361835492P | 2013-06-14 | 2013-06-14 | |
| US61/835,492 | 2013-06-14 |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| HK1202009A1 (en) | 2015-09-11 |
| HK1202009B (en) | 2018-08-10 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US9681045B2 (en) | Systems and methods for generating a panoramic image | |
| KR101571942B1 (en) | Method and apparatus for 3d capture syncronization | |
| US20170085755A1 (en) | Bidirectional synchronizing camera, camera system including the same and method of operating the camera | |
| CN109155814B (en) | Processing device, image sensor and system | |
| JP7028857B2 (en) | Image sensor and control system | |
| CN108432228B (en) | Frame synchronization method of image data, image signal processing unit, device and terminal | |
| CN107948463B (en) | A camera synchronization method, device and system | |
| KR20240090196A (en) | Image sensors, data processing devices, and image sensor systems | |
| HK1201393A1 (en) | Multi-band image sensor for providing three-dimensional color images | |
| CN111479078B (en) | Image sensor chip, electronic device and method for operating image sensor chip | |
| CN115190254A (en) | Synchronous exposure circuit, exposure method and exposure device for multiple vision sensors | |
| US20180191940A1 (en) | Image capturing device and control method thereof | |
| CN109309784B (en) | Mobile terminal | |
| CN117156073B (en) | Video data transmission device and system | |
| KR20190126287A (en) | Image sensor and transmission system | |
| HK1202009B (en) | Systems and methods for generating a panoramic image | |
| KR102176447B1 (en) | PCIe FPGA Frame Grabber based DisplayPort standard | |
| CN108683866B (en) | Image processing and transmitting method, image processor, and related storage medium and system | |
| KR101230397B1 (en) | High speed data transmission / reception method and apparatus | |
| CN102231839B (en) | Inter-integrated circuit (I2C) control device and method for double sensors | |
| CN102497514A (en) | Three-channel video forwarding equipment and forwarding method | |
| US9152242B2 (en) | Optical navigator device and its transmission interface including quick burst motion readout mechanism | |
| CN108270960A (en) | Image capturing apparatus and control method thereof | |
| JP2020188323A (en) | Imaging device and imaging method | |
| CN103841039B (en) | The method and apparatus of network streaming |