
WO2008152647A2 - Three-dimensional imaging method and apparatus - Google Patents


Info

Publication number
WO2008152647A2
WO2008152647A2 (PCT/IL2008/000812)
Authority
WO
WIPO (PCT)
Prior art keywords
distance
pixel
equals
laser pulse
output level
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/IL2008/000812
Other languages
French (fr)
Other versions
WO2008152647A3 (en)
Inventor
Orly Yadid-Pecht
Alexander Belenky
Shay Hamami
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ben Gurion University of the Negev Research and Development Authority Ltd
Ben Gurion University of the Negev BGU
Original Assignee
Ben Gurion University of the Negev Research and Development Authority Ltd
Ben Gurion University of the Negev BGU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ben Gurion University of the Negev Research and Development Authority Ltd and Ben Gurion University of the Negev (BGU)
Publication of WO2008152647A2
Anticipated expiration
Publication of WO2008152647A3
Legal status: Ceased

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/483 Details of pulse systems
    • G01S7/486 Receivers
    • G01S7/4865 Time delay measurement, e.g. time-of-flight measurement, time of arrival measurement or determining the exact position of a peak
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B11/24 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02 Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/06 Systems determining position data of a target
    • G01S17/08 Systems determining position data of a target for measuring distance only
    • G01S17/10 Systems determining position data of a target for measuring distance only using transmission of interrupted, pulse-modulated waves
    • G01S17/18 Systems determining position data of a target for measuring distance only using transmission of interrupted, pulse-modulated waves wherein range gates are used
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 Lidar systems specially adapted for specific applications
    • G01S17/89 Lidar systems specially adapted for specific applications for mapping or imaging
    • G01S17/894 3D imaging with simultaneous measurement of time-of-flight at a 2D array of receiver pixels, e.g. time-of-flight cameras or flash lidar
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/483 Details of pulse systems
    • G01S7/486 Receivers
    • G01S7/4861 Circuits for detection, sampling, integration or read-out
    • G01S7/4863 Detector arrays, e.g. charge-transfer gates
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/521 Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light

Definitions

  • the present invention in some embodiments thereof, relates to time-of-flight three-dimensional imaging and, more particularly, but not exclusively, to a two-stage time-of-flight imaging technique.
  • Three-dimensional (3D) imaging is concerned with extracting visual information from the geometry of visible surfaces and analyzing the 3D coordinate data thus obtained.
  • the 3D data may be used to detect, track the position, and reconstruct the profile of an object, often in real time.
  • 3D data analysis is utilized in a variety of industrial applications, including ground surveys, automated process control, target recognition, autonomous machinery guidance and collision avoidance.
  • TOF: time-of-flight
  • LADAR: Laser Detection and Ranging
  • In general, since real-time TOF applications are concerned with very fast events (i.e., occurring at the speed of light), two main conditions are necessary for appropriate sensor operation. Firstly, very fast and low-noise operation of the readout electronics (i.e., very high operational frequency) is required. Secondly, the sensor (or the sensing element) should have the ability to detect and distinguish (e.g., separate from the background and handle unknown object reflectivity) the light signal, which might be very weak (e.g., the light pulse reflected from objects which are relatively close to one another), and/or should have the ability to integrate the detected signal in order to achieve a reasonable output in time.
  • Fig. 1 shows a simplified block diagram of a pulsed-based TOF system.
  • a short laser pulse is generated by laser pulser 100 and transmitted towards an optically-visible target 110.
  • a signal is taken from the transmitter to serve as a start pulse for time interval measurement circuitry 120 (e.g. a time-to-digital converter).
  • the back-reflected pulse is detected by photodetector 130, and amplified by amplifier 140.
  • a stop pulse for time interval measurement circuitry 120 is generated from the amplified signal by timing detection element 150.
  • the time interval between the start and stop pulses (i.e. the time of flight) is employed to extract the distance information.
  • the distance to the target is calculated by multiplying the TOF by the velocity of the signal in the application medium, as shown in Eqn. 1: d = (c/2)·TOF, where c is the speed of light (3×10^8 m/s).
  • One approach for determining the time of arrival of the reflected light pulse uses comparators to sense the moment in which the photodetector output signal exceeds a certain threshold.
  • the time intervals to be measured are typically very short and the required timing accuracy and stability are very high (e.g., picoseconds in time correspond to 1 cm in distance).
  • fast and complex readout electronics are required.
  • Some complex non-CMOS technologies may meet the required performance (e.g., BiCMOS) [4], but the need for very high bandwidth pixels makes it difficult to perform two-dimensional array integration.
  • Fig. 2 is a simplified timing diagram for CMOS pulsed-based TOF, as performed in [5]-[10].
  • In the first cycle (S1), a short laser pulse with duration Tp is emitted.
  • the shutter is triggered when the laser pulse is transmitted, and remains open for the duration of the laser pulse, Tp.
  • In the second cycle the measurement is repeated, but now with a shutter window greatly exceeding Tp.
  • Vs1 denotes the signal that results when the shutter is open for substantially the time duration Tp (i.e. measurement cycle S1), and Vs2 the signal that results when the shutter window exceeds Tp (i.e. measurement cycle S2).
  • the round trip TOF is calculated as: TOF = Tp·(1 − Vs1/Vs2) (Eqn. 2)
  • the distance d is calculated as: d = (c·Tp/2)·(1 − Vs1/Vs2) (Eqn. 3), where c is the speed of light.
  • the maximum TOF which may be measured is Tp. Therefore, for typical Tp values in the 30 ns–200 ns range, the maximum distance which may be measured is about 4.5 m–30 m.
  • Fig. 3 is a simplified timing diagram illustrating an alternate approach to TOF determination, denoted pulsed indirect TOF [11]. This approach increases the distance information gathered by generating successive laser pulses which are integrated with associated delayed shutters S1–SK. The distance, dk, is calculated from the integrated shutter signals.
  • The accuracy of pulsed indirect TOF depends on the precise measurement of the voltage difference (Vs2 − Vs1), which may be on the order of about 5 μV.
  • ADC: Analog-to-Digital Converter
  • SNR: signal-to-noise ratio
  • a distance image sensor determines the signals of two charge storage nodes which depend on the delay time of the modulated light.
  • a signal due to the background light is obtained from a third charge storage node and is subtracted from the signals which depend on the delay time of the two charge storage nodes, so as to remove the influence of the background.
  • TOF imaging determines the distance to objects in the field of view by exposing an optical sensor to back-reflections of laser pulses. The time between the transmission of the laser pulse to its return to the optical sensor is used to calculate the distance to the object from which the laser pulse was reflected. A 3D image may then be constructed of the scene.
  • Some TOF imaging embodiments presented herein are based on a two-stage data collection and processing approach.
  • First a coarse ranging phase is performed by dividing the distance range of interest into sub-ranges. Each sub-range is checked to determine whether an object is or is not present for each sensor array pixel. Since the actual distance is not calculated at this stage, there is no need to gather data with a high SNR. There is also no need to perform a complex calculation of the object distance, merely to make a Yes/No decision whether an object is present in the sub-range currently being checked.
  • At the end of the coarse ranging phase, it is known which pixels have objects in their field of view and in which sub-range.
  • In the fine imaging phase, data which permits a more accurate determination of the distance to the objects in the range of interest is gathered and the distances calculated.
  • Although this stage may require collecting a larger amount of data per pixel, the data is collected only for those pixels for which an object has been detected, and only in the sub-range in which the object was detected.
  • the dual-stage approach permits efficient 3D imaging by focusing the data collection process and distance calculations on the significant regions for the relevant pixels, and does not require collecting complete data over the entire distance range for all of the pixels.
  • TOF data is collected using a pixel which includes two optical sensing elements, where the two sensing elements may be exposed with different timing.
  • the two sensing elements are exposed in successive time intervals, and the TOF of the laser pulse is calculated from the output levels of both sensing elements.
  • a method for determining a distance to an object is performed as follows. First, a first optical sensing element of an optical pixel is exposed to a back-reflected laser pulse for an initial time interval to obtain a first output level. Then, a second optical sensing element of the optical pixel is exposed to the back-reflected laser pulse at a successive time interval to obtain a second output level. Finally, the distance to the object is calculated from the first and second output levels.
  • the calculating is in accordance with a ratio of the first and second output levels.
  • the method includes the further step of determining a background noise level and subtracting the background noise level from the first and second output levels prior to the calculating.
  • the calculating is performed as: d = (c/2)·(Ti + Tp·V2/(V1 + V2)), where
  • V1 equals the first output level minus a background noise level
  • V2 equals the second output level minus a background noise level
  • the calculating is performed as: d = (c/2)·(Ti + Tp·V2/(V1 + V2)), where
  • Ti equals an initial exposure time after transmission of the laser pulse
  • Tp equals the duration of the laser pulse
  • V1 equals the first output level
  • V2 equals the second output level
  • the initial and successive time intervals are of the duration of the laser pulse.
  • the method includes the further step of comparing the first and second output levels to a threshold to determine if an object is present.
  • a method for performing three-dimensional imaging is performed as follows.
  • each pixel of an optical sensor array is exposed to a back-reflected laser pulse.
  • Each of the pixels is exposed with a shutter timing corresponding to a respective distance sub-range.
  • the pixel's output level is then used to determine whether an object is present in the pixel's respective distance sub-range.
  • the distance to the object is determined from the respective pixel output level.
  • the determining if an object is present in the respective distance sub-range includes comparing the respective pixel output level to a threshold.
  • the method includes the further step of outputting an array of the determined distances.
  • the method includes the further step of outputting a three-dimensional image generated in accordance with the determined distances.
  • the method includes the further step of selecting the duration of a pixel exposure time in accordance with a required length of the distance sub-range.
  • the method includes the further step of transmitting a laser pulse with a specified pulse length.
  • the duration of the pixel exposure equals the laser pulse length.
  • the pixels of a row of the array are exposed with the same shutter timing for each frame.
  • the pixels of successive rows of the array are exposed with a shutter timing corresponding to successive distance sub-ranges.
  • the pixels of a row of the array are exposed with a shutter timing corresponding to successive distance sub-ranges.
  • determining a distance from a pixel to the object includes: exposing the pixel to a plurality of back-reflected laser pulses, with a shutter timing corresponding to the respective distance sub-range; accumulating a pixel output level to laser pulses back-reflected from the respective distance sub-range; and calculating, from the accumulated pixel output level, a distance to an object within the respective distance sub-range.
  • the accumulating is repeated to obtain a desired signal to noise ratio.
  • the method includes beginning the exposure of a pixel in accordance with an initial distance of the respective distance sub-range.
  • the method includes the further steps of: providing the pixel as a first and second optical sensing element; obtaining a first and second output level by exposing each of the first and second sensing elements to a back-reflected laser pulse for a respective time interval; and calculating a distance from the optical pixel to the object from the first and second output levels.
  • the method includes the further step of determining a background noise level and subtracting the background noise level from the first and second output levels prior to the calculating.
  • the calculating a distance from the optical pixel to the object from the first and second output levels is in accordance with a ratio of the first and second output levels.
  • the calculating a distance from the optical pixel to the object from the first and second output levels is performed as: d = (c/2)·(Ti + Tp·V2/(V1 + V2)), where
  • d equals the distance to the object
  • c equals the speed of light
  • Ti equals an initial exposure time after transmission of the laser pulse
  • Tp equals the duration of the laser pulse
  • V1 equals the first output level minus a background noise level
  • V2 equals the second output level minus a background noise level
  • an optical pixel which includes: a first optical sensing element configured for providing a first output level in accordance with exposure to a back-reflected laser pulse over a first exposure period; a second optical sensing element configured for providing a second output level in accordance with exposure to a back-reflected laser pulse over a successive exposure period; and a distance calculator configured for calculating a distance from the optical pixel to an object from the first and second output levels.
  • the distance calculator is further configured for subtracting a background noise level from the first and second output levels prior to calculating the distance.
  • the distance calculator is configured to calculate the distance in accordance with a ratio of the first and second output levels.
  • the distance calculator is configured to calculate the distance as: d = (c/2)·(Ti + Tp·V2/(V1 + V2)), where
  • d equals the distance to the object
  • c equals the speed of light
  • Ti equals an initial exposure time after transmission of the laser pulse
  • Tp equals the duration of the laser pulse
  • V1 equals the first output level minus a background noise level
  • V2 equals the second output level minus a background noise level
  • the distance calculator is configured to calculate the distance as: d = (c/2)·(Ti + Tp·V2/(V1 + V2)), where
  • d equals the distance to the object
  • c equals the speed of light
  • Ti equals an initial exposure time after transmission of the laser pulse
  • Tp equals the duration of the laser pulse
  • V1 equals the first output level
  • V2 equals the second output level
  • a three-dimensional imaging apparatus which includes: a sensor array which includes a plurality of optical pixels configured for exposure to a back-reflected laser pulse; an exposure controller associated with the sensor array, configured for controlling a respective exposure time of each of the pixels so as to expose each of the pixels to back-reflection from a respective distance sub-range; and a distance calculator associated with the sensor array, configured for calculating, from a pixel's respective output level, a distance to an object within the respective distance sub-range.
  • the distance calculator includes an object detector configured for determining if an object is present in a pixel's respective distance sub-range from a respective pixel output level.
  • the object detector is configured to determine if the object is present by comparing the respective pixel output level to a threshold.
  • the imaging apparatus further includes an image generator for outputting a three-dimensional image generated from the calculated distances.
  • the imaging apparatus further includes a laser for generating laser pulses for back-reflection.
  • the exposure controller is configured for selecting the initial time of the exposure in accordance with an initial distance of the respective distance sub-range and the duration of the exposure in accordance with a length of the respective distance sub-range.
  • the exposure controller is configured for exposing successive rows of the array with a shutter timing corresponding to successive distance sub-ranges.
  • the exposure controller is configured for exposing the pixels of a row of the array with a shutter timing corresponding to successive distance sub-ranges.
  • the distance calculator includes an output accumulator configured for accumulating a pixel output level from a plurality of exposures, and wherein the distance calculator is configured for calculating the distance from the accumulated pixel output level.
  • each of the optical pixels includes a first optical sensing element configured for providing a first output level in accordance with exposure to a back-reflected laser pulse for a first time interval, and a second optical sensing element configured for providing a second output level in accordance with exposure to a back-reflected laser pulse for a successive time interval, and wherein the distance calculator is configured for calculating the distance from the first and second output levels.
  • the distance calculator is configured to calculate the distance in accordance with a ratio of the first and second output levels.
  • Implementation of the method and/or system of embodiments of the invention can involve performing or completing selected tasks manually, automatically, or a combination thereof.
  • several selected tasks could be implemented by hardware, by software or by firmware or by a combination thereof using an operating system.
  • hardware for performing selected tasks according to embodiments of the invention could be implemented as a chip or a circuit.
  • selected tasks according to embodiments of the invention could be implemented as a plurality of software instructions being executed by a computer using any suitable operating system.
  • a data processor such as a computing platform for executing a plurality of instructions.
  • the data processor includes a volatile memory for storing instructions and/or data and/or a non-volatile storage, for example, a magnetic hard-disk and/or removable media, for storing instructions and/or data.
  • a network connection is provided as well.
  • a display and/or a user input device such as a keyboard or mouse are optionally provided as well.
  • Fig. 1 is a block diagram of a prior art embodiment of a pulsed-based TOF system
  • Fig. 2 is a simplified timing diagram for a prior art pulsed-based TOF technique
  • Fig. 3 is a simplified timing diagram for a prior art pulsed indirect TOF technique
  • Fig. 4 is a simplified diagram of a 3D imaging system, in accordance with an embodiment of the present invention.
  • Fig. 5 is a simplified flowchart of a method for performing three-dimensional imaging, according to a preferred embodiment of the present invention
  • Fig. 6a illustrates partitioning an overall distance into N coarse sub-ranges Δd
  • Fig. 6b shows the distance sub-ranges captured by each row of the sensor array during the first frame t0–t1, in an exemplary embodiment of the present invention
  • Fig. 7 shows sub-ranges captured by each row at the end of the coarse ranging phase, in an exemplary embodiment of the present invention
  • Fig. 8 is an example of a distance map for an N×M pixel array
  • Fig. 9 is a simplified flowchart of a method for determining a distance to an object, according to a first preferred embodiment of the present invention
  • Fig. 10 is a simplified flowchart of a method for determining a distance to an object, according to a second preferred embodiment of the present invention.
  • Fig. 11 is a simplified exposure timing diagram for TOF data acquisition by a pixel having two sensing elements, according to a preferred embodiment of the present invention.
  • Fig. 12 is a simplified exposure timing diagram for pixels in a pixel array during the fine ranging phase, in an exemplary embodiment of the present invention
  • Fig. 13 is an exemplary timing diagram for collecting data in the third distance sub-range over multiple frames during the fine ranging phase;
  • Fig. 14 is a simplified block diagram of an imaging apparatus, according to a preferred embodiment of the present invention
  • Fig. 15 is a simplified block diagram of an optical pixel for TOF imaging, according to a preferred embodiment of the present invention.
  • Fig. 16 is a simplified block diagram of a 3D imager with column-parallel readout, according to an exemplary embodiment of the present invention.
  • the present invention in some embodiments thereof, relates to time-of-flight three-dimensional imaging and, more particularly, but not exclusively, to a two-stage time-of-flight three-dimensional imaging technique.
  • a sequence of short laser pulses is transmitted towards the field of view.
  • the back-reflected light is focused on a two-dimensional array of optical sensors.
  • the overall distance range of interest is partitioned into coarse sub-ranges d1 to dN.
  • First a coarse ranging phase is performed to detect 3D objects in the various sub-ranges.
  • a fine sub-ranging phase is triggered for high-resolution 3D imaging within the sub-range, by accumulating pixel output levels over multiple exposure intervals to improve the SNR.
  • a sequence of rolling-shutter readouts is performed on the sensor array.
  • each row of the sensor array images one of the distance sub-ranges.
  • the sub-range imaged by each of the rows is shifted, so that after N readout cycles all the rows of the pixel array have imaged each of the d1 to dN sub-ranges.
  • each pixel of the sensor array includes two sensing elements, as described in more detail below.
  • the two sensing elements are exposed during successive intervals, and the TOF is calculated from the ratio of the signals detected by the sensing elements.
  • the distance may be obtained directly from the TOF according to Eqn. 1.
  • Controller 10 generates trigger pulses, which cause laser 20 to emit a sequence of laser pulses towards the field of view.
  • the laser pulses are reflected towards sensor array 40 (in this case a two-dimensional CMOS camera).
  • Controller 10 reads out sensor array 40, and processes the sensor array output signal in order to determine for each pixel if an object is or is not present in the currently imaged sub-range di. During the fine ranging phase, a similar process is performed using the accumulated levels of both sensors in order to determine where a detected object is located within the sub-range. Controller 10 analyzes the collected data to obtain a 3D image 50 of the object(s) within the field of view.
  • the optical sensing elements in sensor array 40 are compatible with Active Pixel Sensor (APS) CMOS fabrication.
  • a CMOS sensor array may allow integration of some or all of the functions required for timing, exposure control, color processing, image enhancement, image compression and/or ADC on the same die.
  • Other possible advantages of utilizing a CMOS 3D sensor array include low- power, low-voltage and monolithic integration.
  • the laser is a near-infrared (NIR) laser which operates in the Si responsivity spectrum.
  • a sensor array formed as an N×M pixel array with column-parallel readout is presented. It is to be understood that the invention is capable of other embodiments of sensor array configurations and/or readout techniques.
  • the sensor array may be a single pixel.
  • Embodiments presented below utilize a constant pulse duration of Tp for each laser pulse. Other embodiments may utilize varying laser pulse lengths, thereby enabling dividing the overall distance range of interest into unequal sub-ranges.
  • Embodiments presented below utilize a pixel exposure time of Tp. Other embodiments may utilize varying pixel exposure times.
  • FIG. 5 is a simplified flowchart of a method for performing three-dimensional imaging, according to a preferred embodiment of the present invention.
  • the optical data is gathered by an optical sensor array, having an array of pixels.
  • each of the array pixels is exposed to a back-reflected laser pulse.
  • Each of the array pixels is exposed with a shutter timing corresponding to a respective distance sub-range.
  • phrases relating to exposure of an optical sensor to back-reflection of a laser pulse mean that the optical sensor is exposed in the direction towards which the laser pulse was transmitted, so that reflections of the laser pulse back from objects will arrive at the optical sensor.
  • the phrase does not imply that the shutter timing is set so that the optical sensor is necessarily exposed at the time that the reflections arrive at the optical sensor.
  • a determination of whether an object is present is made from the pixel output level, preferably by comparing the pixel output to a threshold. If the pixel output exceeds the threshold, the pixel is designated as having an object in the current distance range.
  • the threshold may differ for different sub-ranges and/or specific array pixels.
  • the determination made in 510 is whether an object is or is not present for the given pixel, and does not necessarily indicate that the object is the same for multiple pixels.
  • the distance from the pixel to the object is calculated for each pixel for which an object was detected in 510. The details of how the distance calculation is made are presented in more detail below.
  • the method may further include outputting an array of the distances obtained in 520 and/or a three-dimensional image generated in accordance with the determined distances.
  • the method may further include transmitting a laser pulse of duration Tp.
  • the timing of the pixel exposure is preferably determined relative to the time of transmission and duration of the laser pulse whose back-reflection is being collected.
  • the duration of the pixel exposure period equals the laser pulse duration for some or all of the sensor array pixels.
  • each row in the array may be individually exposed to back-reflected laser pulses from a predefined sub-range.
  • the beginning of the exposure period is preferably selected in accordance with the initial distance of the respective distance sub-range.
  • the duration of the exposure period for a given pixel is selected in accordance with the required length of the respective distance sub-range.
  • the overall distance may be partitioned into N coarse sub-ranges Δd, as shown in Fig. 6a.
  • the length of the sub-ranges Δd is equivalent to the travel time of a laser pulse with duration Tp.
  • the maximum TOF may be expressed as a function of Tp as follows:
  • TOF_max = T0 + N·Tp (6)
  • T0 denotes a fixed Time of Interest (TOI) from which data should be recorded.
  • the pixels of each row i are preferably exposed with the same shutter timing.
  • the pixels of successive rows of the array are exposed with a shutter timing corresponding to successive distance ranges.
  • a pulse length of Tp corresponds to a sub-range of length Δd, where Δd is equal to the distance travelled by the laser pulse during the exposure time interval (a short sketch of this arithmetic follows).
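  • As a concrete illustration, the following minimal sketch works through the partitioning arithmetic of Eqn. 6 and the sub-range length above; the function names and sample values are illustrative, not taken from the patent.

```python
# Minimal sketch of the coarse partitioning arithmetic (Eqn. 6 and the
# sub-range length Delta-d); names and sample values are illustrative.

C = 3.0e8  # speed of light, m/s

def subrange_length(tp: float) -> float:
    """Delta-d: one-way distance covered by a pulse/exposure of duration Tp."""
    return 0.5 * C * tp

def max_tof(t0: float, n: int, tp: float) -> float:
    """Eqn. 6: TOFmax = T0 + N*Tp."""
    return t0 + n * tp

tp = 100e-9  # 100 ns laser pulse and exposure window
print(subrange_length(tp))  # 15.0 m per sub-range
print(max_tof(0.0, 8, tp))  # 8e-07 s: round-trip TOF covered by 8 sub-ranges
```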
  • Fig. 6b illustrates the distance sub-ranges captured by each row of the sensor array during the first frame t0–t1, in an exemplary embodiment of the present invention.
  • the row shutter timing is set to capture reflections from successive distance sub-ranges in successive frames.
  • the exposure process may be repeated N times, once for each distance sub-range.
  • the shutter timing for each row may be shifted by one sub-range in each successive frame.
  • Fig. 7 illustrates the sub-ranges captured by each row at the end of the coarse ranging phase, in an exemplary embodiment of the present invention.
  • the first row of the sensor array collects data from the j-th sub-range
  • the second row of the sensor array collects data from the subsequent sub-range, and so forth.
  • the i-th row of the sensor array collects data from sub-range [i + j − 1]N during the j-th frame.
  • the quantity [i + j − 1]N is the result of a modulo-N operation, which wraps the sub-range index back into the range 1 to N (one plausible realization is sketched below).
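  • The cyclic row-to-sub-range assignment can be made concrete with a short sketch. The 1-based indexing convention below is an assumption; only the wrap-around behaviour is taken from the text.

```python
# One plausible realization of the cyclic row-to-sub-range assignment;
# the 1-based indexing convention is an assumption.

def subrange_for_row(i: int, j: int, n: int) -> int:
    """Sub-range (1..n) imaged by row i (1-based) during frame j (1-based)."""
    return ((i + j - 2) % n) + 1

# Frame 1: row i images sub-range i; each later frame shifts rows by one.
n = 4
print([subrange_for_row(i, 1, n) for i in range(1, 5)])  # [1, 2, 3, 4]
print([subrange_for_row(i, 2, n) for i in range(1, 5)])  # [2, 3, 4, 1]
```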
  • the sub-range coordinates (or a sub-range identifier) are stored for the given pixel.
  • the location of the 3D objects within the FOV is coarsely known.
  • all of the array pixels are exposed with the same shutter timing for each frame. That is, all of the array pixels collect data for the same sub-range. Data for different sub-ranges is collected in subsequent frames.
Fine ranging phase
  • a map of approximate distances to objects within the range of interest is known.
  • An example of a distance map for N x M pixel array is shown in Fig. 8.
  • an additional fine ranging phase is performed to improve distance measurement accuracy within the sub-range (520 of Fig. 5).
  • the fine-ranging is performed only for pixels (or rows) for which objects were found in the coarse ranging phase, and/or only for sub-ranges in which the object was found.
  • the coarse range map may first be analyzed to identify regions of interest, for which fine-ranging is performed.
  • the value of v_i1 or v_i2 may be very small (e.g., 5 μV).
  • the measurement process is repeated, with laser pulses re-emitted for signal accumulation.
  • the SNR improvement is on the order of the square root of m, where m is the number of times the process is repeated. Improved SNR provides a longer distance measurement range and increases the maximum unambiguous range. A quick numerical check of this behaviour follows.
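  • The following sketch is a minimal simulation, assuming independent zero-mean noise of equal variance in each exposure; it only illustrates the square-root-of-m gain and is not taken from the patent.

```python
# Numerical check of the ~sqrt(m) SNR gain from accumulating m exposures,
# assuming independent zero-mean noise of equal variance per exposure.
import numpy as np

rng = np.random.default_rng(0)
signal, sigma, m = 1.0, 0.5, 100

one_shot = signal + sigma * rng.standard_normal(10_000)
averaged = signal + sigma * rng.standard_normal((10_000, m)).mean(axis=1)

print(signal / one_shot.std())  # SNR ~ 2
print(signal / averaged.std())  # SNR ~ 20, i.e. 2 * sqrt(100)
```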
  • Fig. 9 is a simplified flowchart of a method for determining a distance from a pixel to an object, according to a first preferred embodiment of the present invention.
  • the pixel is exposed to a plurality of back-reflected laser pulses.
  • the pixel shutter timing relative to the respective laser pulse, corresponds to the distance sub-range in which the object was detected.
  • the pixel level is accumulated over the multiple exposures. Due to the shutter timing, the pixel level is accumulated for pulses reflected from the distance sub-range of interest.
  • the distance from the pixel to the object is calculated from the accumulated pixel output level. The number of exposures for which the pixel output is accumulated may be selected in order to obtain the desired signal to noise ratio.
  • the distance information from the coarse and/or fine ranging phases may be stored for post-processing and 3D object reconstruction.
TOF Acquisition
  • the TOF (and consequently the distance) is determined using data collected by a pixel consisting of two optical sensing elements (e.g. pinned photodiodes) which may be exposed independently.
  • the sensing elements are of a type suitable for CMOS fabrication.
  • Fig. 10 is a simplified flowchart of a method for determining a distance to an object, according to a second preferred embodiment of the present invention.
  • the optical pixel being exposed to the back-reflected laser pulse includes two sensing elements.
  • the first sensing element is exposed for an initial time interval to obtain a first output level.
  • the second sensing element is exposed for a successive time interval to obtain a second output level.
  • the distance from the optical pixel to the object is calculated from the first and second output levels.
  • the output levels may first be compared to a threshold to determine if an object is present, and the distance calculated only if there is an object.
  • the length of the two exposure durations may be the laser pulse length, in order to obtain the best sub-range distance resolution. As shown below, the distance may be calculated from the ratio of the two output levels.
  • Fig. 11 is a simplified exposure timing diagram for TOF data acquisition by a pixel having two sensing elements, according to a preferred embodiment of the present invention.
  • the exposure of the pixel consists of two successive intervals with Tp duration.
  • a laser pulse of duration Tp is emitted at time t_init.
  • the back-reflected light arrives at the pixel at t_init + TOF.
  • Sensing element S1 is exposed from time Ti to Ti + Tp, while sensing element S2 is exposed from time Ti + Tp to Ti + 2·Tp.
  • the signals v_i1 and v_i2, obtained respectively from sensing elements S1 and S2, are proportional to the magnitude of the reflected pulse, φL, and the background illumination, φB, as follows: v_i1 ∝ φL·(Tp − Δt) + φB·Tp, and v_i2 ∝ φL·Δt + φB·Tp, where Δt = TOF − Ti.
  • the TOF is calculated as: TOF = Ti + Tp·V2/(V1 + V2), where V1 and V2 are the background-subtracted signals.
  • the background illumination φB is previously measured, without sending a laser pulse, under known exposure conditions, and stored.
  • the TOF is calculated after subtracting the background illumination φB from the signals v_i1 and v_i2.
  • the calculated TOF does not depend on the responsivity of the sensor or on the reflectivity of the target object.
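  • A minimal sketch of this calculation follows, assuming both exposure windows have duration Tp and using the reconstructed form TOF = Ti + Tp·V2/(V1 + V2) given above; the function and argument names are illustrative, not taken from the patent.

```python
# Minimal sketch of the two-element TOF/distance calculation. The per-window
# background level v_b is assumed to have been measured beforehand with the
# laser off, as described above.

C = 3.0e8  # speed of light, m/s

def two_element_distance(v1: float, v2: float, v_b: float,
                         t_i: float, tp: float) -> float:
    v1, v2 = v1 - v_b, v2 - v_b      # subtract the background contribution
    tof = t_i + tp * v2 / (v1 + v2)  # responsivity and target reflectivity
                                     # cancel in the ratio
    return 0.5 * C * tof

# Pulse arriving 30 ns into a 100 ns window that starts at Ti = 200 ns:
print(two_element_distance(v1=0.7, v2=0.3, v_b=0.0, t_i=200e-9, tp=100e-9))
# -> 34.5 m
```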
  • Fig. 12 is a simplified exposure timing diagram for pixels in a pixel array during the fine ranging phase, for an exemplary embodiment of the present invention.
  • Each pixel includes two sensing elements with independent shutter timing.
  • S_i1 and S_i2 denote the shutter timing applied to the first and second sensing element, respectively, for a pixel within the i-th row of the sensor array.
  • the shaded regions represent time intervals during which the pixel is not exposed, in order to eliminate interference from near and far objects outside the current sub-range.
  • the output levels of both sensing elements are preferably accumulated for m iterations prior to calculating the object distance (a sketch of this accumulation loop is given below). For example, consider the distance map acquired for an N×M pixel array at the end of the coarse ranging phase shown in Fig. 8. An object in the third sub-range has been detected by certain pixels in rows i and i+1. For the subsequent m iterations laser pulses continue to be emitted, and the pixels which imaged the object are exposed for the third sub-range and read, as exemplarily shown in Fig. 13. The readouts from these m iterations are accumulated to determine the location of the object within the third sub-range, and improve the distance measurement precision.
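  • The per-pixel accumulation loop might look as follows. This reuses two_element_distance from the earlier sketch; expose_pair is a hypothetical stand-in for one pulse-emission/exposure/readout cycle, and the background term is assumed to scale linearly with m.

```python
# Sketch of the fine-phase accumulation for one pixel: both sensing-element
# outputs are summed over m re-emitted pulses before the distance is
# computed. expose_pair() is a hypothetical stand-in for one cycle.

def accumulate_and_range(expose_pair, m: int, v_b: float,
                         t_i: float, tp: float) -> float:
    acc1 = acc2 = 0.0
    for _ in range(m):          # m iterations give roughly sqrt(m) SNR gain
        v1, v2 = expose_pair()  # shutter timed for the detected sub-range
        acc1, acc2 = acc1 + v1, acc2 + v2
    # Background accumulates once per exposure, hence m * v_b.
    return two_element_distance(acc1, acc2, m * v_b, t_i, tp)
```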
  • the SNR of some embodiments is dependent on the noise sources affecting the detected sensor output signal.
  • the total noise on the sensing element (e.g., photodiode) may be expressed as the sum of five independent components:
  • CDS: Correlated Double Sampling
  • the charge (including noise) on the sensing element associated with v_i1 and v_i2 may thus be expressed as:
  • Q1 = (1/q)·[IL·(Tp − Δt) + IB·Tp + Q_shot + Q_readout] electrons
  • Q2 = (1/q)·[IL·Δt + IB·Tp + Q_shot + Q_readout] electrons, provided Q(Tp) ≤ Qmax (Qmax is the saturation charge, also referred to as well capacity).
  • IL and IB are the laser- and background-induced photoelectric currents, respectively.
  • the generated shot noise charge Q_shot has zero mean, with a PSD given by the standard shot-noise relation 2·q·I (I being the total photocurrent).
  • the time resolution (absolute accuracy) is inversely proportional to the square root of the laser photocurrent (i.e., optical power), as is expected from Poisson statistics.
  • the result of Eqn. 17 may be reduced by the square root of the number of measurement iterations, as mentioned above.
  • the final range measurement is determined after the completion of the coarse ranging and the fine ranging phases.
  • the durations for completing these phases combine to give the total frame time:
T_frame = T_coarse + T_fine
  • M denotes the number of columns in the sensor (with column-parallel readout).
  • Imager 1400 includes sensor array 1410, exposure controller 1420 and distance calculator 1430.
  • Sensor array 1410 includes multiple optical pixels for absorbing back-reflection of a laser pulse, plus any other required circuitry (e.g. for sensor array readout).
  • the pixels may be organized as an NxM array.
  • Exposure controller 1420 controls the exposure time of each of the pixels so as to expose each of the pixels to back-reflection from the pixel's current distance sub-range. Exposure controller 1420 adjusts the initial time and duration of exposure, which are based on the initial distance and length of the current distance sub-range.
  • Exposure controller 1420 may also control the pulse timing of laser 1440, in order to accurately synchronize the timing between the laser pulse transmission and the pixel exposure.
  • Exposure controller 1420 adjusts the pixel exposure according to the desired method for scanning the different sub-ranges of interest. For example, each row of pixels may be exposed to a different sub-range during a single frame (see Fig. 7). Additionally or alternately, a given row of pixels may be exposed to different subranges in different frames.
  • Distance calculator 1430 calculates the distance to an object from a pixel output level, within the distance sub-range in which an object was detected for the given pixel.
  • Distance calculator 1430 may include object detector 1450, which determines if an object is present in a pixel's respective distance sub-range. In one embodiment, the presence of an object is determined by comparing the pixel output level to a threshold. An object is considered to be present only if the threshold is exceeded.
  • Imager 1400 may include output accumulator 1460 which accumulates the pixel output level over multiple exposures. Distance calculator 1430 may then calculate the distance to the object from the accumulated pixel output level.
  • Imager 1400 may further include image generator 1470, which generates a three- dimensional image from the distances provided by distance calculator 1430, and outputs the image.
  • each pixel includes two optical sensing elements, which have separate shutter controls.
  • the sensing elements may be shuttered by independent trigger signals, or may be shuttered with an automatic offset from a single trigger signal.
  • the distance/TOF for an object in a given sub-range is preferably calculated as a function of both of the sensing element output signals.
  • Fig. 15 is a simplified block diagram of an optical pixel for TOF imaging, according to a preferred embodiment of the present invention.
  • Pixel 1510 includes two optical sensing elements, 1515.1 and 1515.2, and distance calculator 1530.
  • the two sensing elements 1515.1 and 1515.2 are exposed for successive exposure periods, where the initial time and duration of the exposure period are preferably selected in accordance with the initial distance and length of the current distance sub-range.
  • Distance calculator 1530 calculates the distance from pixel 1510 to an object from the first and second output levels. The distance may be calculated only after distance calculator 1530 determines that an object is present in the current sub-range. Distance calculator 1530 may subtract the background noise level from the sensing element output levels prior to calculating the distance.
  • distance calculator 1530 calculates the distance from the pixel to the object according to Eqn. 11a or 11b above.
  • 3D sensor 1600 includes a CMOS image sensor (CIS) 1610, with an NxM array of CMOS pixels.
  • Sensor 1600 further includes controller 1620, analog peripheral circuits 1630, and two channels of readout circuit (1640.1 and 1640.2 respectively).
  • Each pixel consists of two sensing elements (e.g., photodiodes), and allows individual exposure of each of the sensing elements. Additional circuits (not shown) may be integrated at the periphery of the pixel array for row and column fixed-pattern noise reduction.
  • Controller 1620 manages the system synchronicity and generates the control signals (e.g., exposure and readout timing signals).
  • Analog peripheral circuits 1630 provide reference voltages and currents, and low-jitter clocks for proper operation of the imager and the readout circuits.
  • Readout circuits 1640.1 and 1640.2 consist of two channels, for even and odd rows respectively, to relax readout timing constraints. The output data may be steered to an off-chip digital computation circuit.
  • Each readout circuit comprises a column decoder (not shown), Sample & Hold (S/H) circuits (1650.1/1650.2), column ADCs (1660.1/1660.2), and a RAM block (shown as ping-pong memory 1670.1/1670.2). The signal path is now described.
  • the low-noise S/H circuits 1650.1/1650.2 maintain a fixed pixel output voltage, while the high performance (i.e. high conversion rate and accuracy) column ADC 1660.1/1660.2 provides the corresponding digital coding representation.
  • the role of RAM block 1670.1/1670.2 is twofold: (1) storing the digital value at the end of the A/D conversion, and (2) enabling readout of the stored data to an off-chip digital computation circuit.
  • the RAM architecture is dual-port ping-pong RAM.
  • the ping-pong RAM enables exchanging blocks of data between processors rather than individual words.
  • the RAM block in each channel may be partitioned into two sub-blocks for two successive rows (i.e., 2i and 2i + 2, or 2i − 1 and 2i + 1).
  • after a mode (e.g., write for the 2i-th row, or read out of the (2i + 2)-th row) completes, the two sub-blocks are exchanged. That is, the mode is switched to read out of the 2i-th row or to write to the (2i + 2)-th row.
  • This approach allows a dual-port function with performance equal to that of individual RAM.
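  • A toy software model of this ping-pong arrangement is sketched below; it captures only the role-swapping behaviour, not the timing or the dual-port hardware details, and the class and method names are illustrative.

```python
# Toy model of ping-pong buffering: one sub-block is written (A/D results
# for one row) while the other is read out; the roles swap when both
# operations complete.

class PingPongRAM:
    def __init__(self, depth: int):
        self.blocks = [[0] * depth, [0] * depth]
        self.write_idx = 0  # sub-block currently being written

    def write(self, data):
        self.blocks[self.write_idx][:len(data)] = data

    def read(self):
        return list(self.blocks[1 - self.write_idx])

    def swap(self):
        """Exchange the write and read roles of the two sub-blocks."""
        self.write_idx = 1 - self.write_idx

ram = PingPongRAM(depth=4)
ram.write([1, 2, 3, 4])
ram.swap()
print(ram.read())  # [1, 2, 3, 4]
```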
  • the 3D imaging techniques described above provide a high-performance TOF- based 3D imaging method and system.
  • the system may be constructed using standard CMOS technology.
  • compositions, methods or structure may include additional ingredients, steps and/or parts, but only if the additional ingredients, steps and/or parts do not materially alter the basic and novel characteristics of the claimed composition, method or structure.
  • a compound or “at least one compound” may include a plurality of compounds, including mixtures thereof.
  • various embodiments of this invention may be presented in a range format. It should be understood that the description in range format is merely for convenience and brevity and should not be construed as an inflexible limitation on the scope of the invention. Accordingly, the description of a range should be considered to have specifically disclosed all the possible subranges as well as individual numerical values within that range.
  • a range such as from 1 to 6 should be considered to have specifically disclosed subranges such as from 1 to 3, from 1 to 4, from 1 to 5, from 2 to 4, from 2 to 6, from 3 to 6 etc., as well as individual numbers within that range, for example, 1, 2, 3, 4, 5, and 6. This applies regardless of the breadth of the range.
  • a numerical range is indicated herein, it is meant to include any cited numeral (fractional or integral) within the indicated range.
  • the phrases "ranging/ranges between" a first indicated number and a second indicated number and "ranging/ranges from" a first indicated number "to" a second indicated number are used herein interchangeably and are meant to include the first and second indicated numbers and all the fractional and integral numerals therebetween.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Electromagnetism (AREA)
  • Optics & Photonics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

A method for determining a distance to an object is performed as follows. First, a first optical sensing element of an optical pixel is exposed to a back-reflected laser pulse for an initial time interval to obtain a first output level. Then, a second optical sensing element of the optical pixel is exposed to the back-reflected laser pulse at a successive time interval to obtain a second output level. Finally, the distance to the object is calculated from the first and second output levels. Three-dimensional imaging may be performed for a sensor array in a two-stage process, by first determining whether an object is present in a pixel's distance sub-range. In some embodiments, the distance to the object is calculated from a pixel output level accumulated over multiple laser pulses.

Description

THREE-DIMENSIONAL IMAGING METHOD AND APPARATUS
FIELD AND BACKGROUND OF THE INVENTION
The present invention, in some embodiments thereof, relates to time-of-flight three-dimensional imaging and, more particularly, but not exclusively, to a two-stage time-of-flight imaging technique.
Three-dimensional (3D) imaging is concerned with extracting visual information from the geometry of visible surfaces and analyzing the 3D coordinate data thus obtained. The 3D data may be used to detect, track the position, and reconstruct the profile of an object, often in real time. 3D data analysis is utilized in a variety of industrial applications, including ground surveys, automated process control, target recognition, autonomous machinery guidance and collision avoidance.
The time-of-flight (TOF) technique is a method for the fast optical acquisition of distances and 3D vision acquisition. TOF-based range finders are suitable for long range measurements, ranging from several centimeters to tens of kilometers. TOF systems have been used in radar and Laser Detection and Ranging (LADAR) applications [1], [2], [3].
In general, since real-time TOF applications are concerned with very fast events (i.e., occurring at the speed of light), two main conditions are necessary for appropriate sensor operation. Firstly, very fast and low-noise operation of the readout electronics (i.e., very high operational frequency) is required. Secondly, the sensor (or the sensing element) should have the ability to detect and distinguish (e.g., separate from the background and handle unknown object reflectivity) the light signal, which might be very weak (e.g., the light pulse reflected from objects which are relatively close to one another), and/or should have the ability to integrate the detected signal in order to achieve a reasonable output in time.
Pulsed-based TOF
One technique for TOF imaging is pulsed-based TOF. Fig. 1 shows a simplified block diagram of a pulsed-based TOF system. A short laser pulse is generated by laser pulser 100 and transmitted towards an optically-visible target 110. A signal is taken from the transmitter to serve as a start pulse for time interval measurement circuitry 120 (e.g. a time-to-digital converter). The back-reflected pulse is detected by photodetector 130, and amplified by amplifier 140. A stop pulse for time interval measurement circuitry 120 is generated from the amplified signal by timing detection element 150. The time interval between the start and stop pulses (i.e. the time of flight) is employed to extract the distance information. The distance to the target is calculated by multiplying the TOF by the velocity of the signal in the application medium, as shown in Eqn. 1:
d = (c/2) · TOF (1)
where c is the speed of light (3×10^8 m/s). One approach for determining the time of arrival of the reflected light pulse uses comparators to sense the moment in which the photodetector output signal exceeds a certain threshold. However, the time intervals to be measured are typically very short and the required timing accuracy and stability are very high (e.g., picoseconds in time correspond to 1 cm in distance). In order to achieve high accuracy and stability of pulse arrival timing based on the pulse amplitude, fast and complex readout electronics are required. Some complex non-CMOS technologies may meet the required performance (e.g., BiCMOS) [4], but the need for very high bandwidth pixels makes it difficult to perform two-dimensional array integration.
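To make the scale concrete, the following minimal sketch applies Eqn. 1 to a measured round-trip TOF; the function name and sample values are illustrative, not taken from the patent.

```python
# Minimal sketch of Eqn. 1: one-way distance from a round-trip TOF.

C = 3.0e8  # speed of light, m/s

def tof_to_distance(tof_s: float) -> float:
    """Convert a round-trip time of flight (seconds) to a distance (m)."""
    return 0.5 * C * tof_s

# ~67 ps of round-trip time corresponds to about 1 cm of distance, which
# is why picosecond-scale timing accuracy and stability are required.
print(tof_to_distance(66.7e-12))  # ~0.01 m
print(tof_to_distance(100e-9))    # 15.0 m
```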
Recently, pulsed-based TOF measurement principles have been implemented in CMOS technology [5]–[11]. These methods are based on an electronic shutter and photo-generated charge integration.
Fig. 2 is a simplified timing diagram for CMOS pulsed-based TOF, as performed in [5]-[10]. In the first cycle (S1) a short laser pulse with duration of Tp is emitted. The shutter is triggered when the laser pulse is transmitted, and remains open for the duration of the laser pulse Tp. In the second cycle the measurement is repeated, but now with a shutter window greatly exceeding Tp. We shall denote Vs1 as the signal that results when the shutter is open for substantially the time duration Tp (i.e. measurement cycle S1), and Vs2 as the signal that results when the shutter windowing exceeds Tp (i.e. measurement cycle S2). The round trip TOF is calculated as:
TOF = Tp · (1 − Vs1/Vs2) (2)
Based on Eqn. 1 the distance d is calculated as:
d = (c · Tp/2) · (1 − Vs1/Vs2) (3)
where c is the speed of light.
From Eqn. 2 it follows that the maximum TOF which may be measured is Tp. Therefore, for typical Tp values in the 30 ns–200 ns range, the maximum distance which may be measured is about 4.5 m–30 m.
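A minimal sketch of the shutter-ratio calculation, using Eqns. 2 and 3 as reconstructed above, follows; the function names and sample values are illustrative.

```python
# Sketch of the shutter-ratio measurement. Vs1: signal with the shutter
# open for Tp; Vs2: signal with the wide shutter window. Valid only while
# 0 <= TOF <= Tp.

C = 3.0e8  # speed of light, m/s

def shutter_ratio_tof(vs1: float, vs2: float, tp: float) -> float:
    """Eqn. 2: TOF = Tp * (1 - Vs1/Vs2)."""
    return tp * (1.0 - vs1 / vs2)

def shutter_ratio_distance(vs1: float, vs2: float, tp: float) -> float:
    """Eqn. 3: d = (c*Tp/2) * (1 - Vs1/Vs2)."""
    return 0.5 * C * shutter_ratio_tof(vs1, vs2, tp)

tp = 100e-9  # 100 ns pulse
print(shutter_ratio_distance(0.6, 1.0, tp))  # pulse 40% delayed -> 6.0 m
```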
Fig. 3 is a simplified timing diagram illustrating an alternate approach to TOF determination, denoted pulsed indirect TOF [11]. This approach increases the distance information gathered by generating successive laser pulses which are integrated with associated delayed shutters S1–SK. The distance, dk, is calculated as:
[equation not recoverable from the source image]
It should be noted that in the above approaches the influence of the target object's reflectivity on the obtained signal is removed.
The accuracy of pulsed indirect TOF depends on the precise measurement of the voltage difference (Vs2 − Vs1), which may be on the order of about 5 μV. A 5 μV variation is equivalent to the detection of relatively few electrons, which typically cannot be resolved by a conventional Analog-to-Digital Converter (ADC). In some embodiments, in order to enhance the signal-to-noise ratio (SNR), the measurements are accumulated by repeating the two cycles m times, emitting multiple laser pulses.
In US Pat. Appl. Publ. 2006/0192938 by Kawahito, a distance image sensor determines the signals of two charge storage nodes which depend on the delay time of the modulated light. A signal due to the background light is obtained from a third charge storage node and is subtracted from the signals which depend on the delay time of the two charge storage nodes, so as to remove the influence of the background. Additional background art includes:
1) D. Stoppa, L. Pancheri, M. Scandiuzzo, L. Gonzo, G.-F. Dalla Betta, and A. Simoni, "A CMOS 3-D Imager based on Single Photon Avalanche Diode," IEEE Trans. on Circuits and Systems-I, TCAS-I, vol. 54, no. 1, January 2007. 2) C. Niclass, A. Rochas, P. Besse, and E. Charbon, "Design and Characterization of a CMOS 3-D Image Sensor Based on Single Photon Avalanche Diodes," IEEE Journal of Solid-State Circuits, vol. 40, no. 9, September 2005.
3) R. Jeremias, W. Brockherde, G. Doemens, B. Hosticka, L. Listl, and P. Mengel, "A CMOS photosensor array for 3D imaging using pulsed lasers," IEEE International Solid-State Circuits Conference, ISSCC, 2001.
4) S. Hsu, S. Acharya, A. Rafii, and R. New, "Performance of a Time-of-Flight Range Camera for Intelligent Vehicle Safety Applications," in Advanced Microsystems for Automotive Applications, Canesta Inc., 2006.
5) "Methods and Devices for Charge Management for Three-Dimensional Sensing", United States Patent 6,906,793, Canesta Inc., June 2005.
6) "Systems for CMO S -compatible three-dimensional image sensing using quantum efficiency modulation," United States Patent 6,580,496, Canesta Inc, June 2003.
7) US Pat. Appl. Publ. 2007/0158770 by Kawahito
SUMMARY OF THE INVENTION
TOF imaging determines the distance to objects in the field of view by exposing an optical sensor to back-reflections of laser pulses. The time between the transmission of the laser pulse to its return to the optical sensor is used to calculate the distance to the object from which the laser pulse was reflected. A 3D image may then be constructed of the scene.
Some TOF imaging embodiments presented herein are based on a two-stage data collection and processing approach. First a coarse ranging phase is performed by dividing the distance range of interest into sub-ranges. Each sub-range is checked to determine whether an object is or is not present for each sensor array pixel. Since the actual distance is not calculated at this stage, there is no need to gather data with a high SNR. There is also no need to perform a complex calculation of the object distance, merely to make a Yes/No decision whether an object is present in the sub-range currently being checked.
At the end of the coarse ranging phase, it is known which pixels have objects in their field of view and in which sub-range. In the fine imaging phase, data which permits a more accurate determination of the distance to the objects in the range of interest is gathered and the distances calculated. Although this stage may require collecting a larger amount of data per pixel, the data is collected only for those pixels for which an object has been detected, and only in the sub-range in which the object was detected. The dual-stage approach permits efficient 3D imaging by focusing the data collection process and distance calculations on the significant regions for the relevant pixels, and does not require collecting complete data over the entire distance range for all of the pixels.
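The two-stage flow may be summarized in code. The following is a minimal Python sketch under stated assumptions: expose_pixel, accumulate_pixel and calc_distance are hypothetical stand-ins for the sensor exposure, readout and distance-calculation operations described herein, and are not part of the disclosure.

def two_stage_ranging(pixels, sub_ranges, threshold, m,
                      expose_pixel, accumulate_pixel, calc_distance):
    # Coarse phase: one low-SNR exposure per (pixel, sub-range) pair,
    # reduced to a Yes/No presence decision against a threshold.
    detections = {}
    for p in pixels:
        for k in range(len(sub_ranges)):
            if expose_pixel(p, sub_ranges[k]) > threshold:
                detections[p] = k  # coarse location of the object
                break
    # Fine phase: accumulate m exposures, only for detected pixels and
    # only within the sub-range in which the object was found.
    return {p: calc_distance(accumulate_pixel(p, sub_ranges[k], m),
                             sub_ranges[k])
            for p, k in detections.items()}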
In some embodiments, TOF data is collected using a pixel which includes two optical sensing elements, where the two sensing elements may be exposed with different timing. The two sensing elements are exposed in successive time intervals, and the TOF of the laser pulse is calculated from the output levels of both sensing elements.
According to an aspect of some embodiments of the present invention there is provided a method for determining a distance to an object, performed as follows. First, a first optical sensing element of an optical pixel is exposed to a back-reflected laser pulse for an initial time interval to obtain a first output level. Then a second optical sensing element of the optical pixel is exposed to the back-reflected laser pulse at a successive time interval to obtain a second output level. Finally, the distance to the object is calculated from the first and second output levels.
According to some embodiments of the invention, the calculating is in accordance with a ratio of the first and second output levels. According to some embodiments of the invention, the method includes the further step of determining a background noise level and subtracting the background noise level from the first and second output levels prior to the calculating. According to some embodiments of the invention, the calculating is performed as:
d = (c/2)·[Ti + Tp·V2/(V1 + V2)]
where d equals the distance to the object, c equals the speed of light, Ti equals an initial exposure time after transmission of the laser pulse, Tp equals the duration of the laser pulse, V1 equals the first output level minus a background noise level, and V2 equals the second output level minus a background noise level.
According to some embodiments of the invention, the calculating is performed as:
d = (c/2)·[Ti + Tp·V2/(V1 + V2)]
where d equals the distance to the object, c equals the speed of light, Ti equals an initial exposure time after transmission of the laser pulse, Tp equals the duration of the laser pulse, V1 equals the first output level, and V2 equals the second output level.
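The calculation above is straightforward to express in code. The following Python sketch assumes the closed form d = (c/2)·[Ti + Tp·V2/(V1 + V2)] as reconstructed from the defined quantities; the optional background argument implements the subtraction variant, and the function name is illustrative only.

C = 299_792_458.0  # speed of light [m/s]

def distance_from_levels(v1, v2, t_i, t_p, background=0.0):
    # Optionally subtract a previously measured background level
    # from both outputs before forming the ratio.
    v1, v2 = v1 - background, v2 - background
    # d = (c/2) * (Ti + Tp * V2 / (V1 + V2))
    return 0.5 * C * (t_i + t_p * v2 / (v1 + v2))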
According to some embodiments of the invention, the initial and successive time intervals are of the duration of the laser pulse.
According to some embodiments of the invention, the method includes the further step of comparing the first and second output levels to a threshold to determine if an object is present.
According to an aspect of some embodiments of the present invention there is provided a method for performing three-dimensional imaging, performed as follows.
First each pixel of an optical sensor array is exposed to a back-reflected laser pulse.
Each of the pixels is exposed with a shutter timing corresponding to a respective distance sub-range. For each of the pixels, the pixel's output level is then used to determine whether an object is present in the pixel's respective distance sub-range. For each pixel having an object present, the distance to the object is determined from the respective pixel output level. According to some embodiments of the invention, the determining if an object is present in the respective distance sub-range includes comparing the respective pixel output level to a threshold.
According to some embodiments of the invention, the method includes the further step of outputting an array of the determined distances.
According to some embodiments of the invention, the method includes the further step of outputting a three-dimensional image generated in accordance with the determined distances.
According to some embodiments of the invention, the method includes the further step of selecting the duration of a pixel exposure time in accordance with a required length of the distance sub-range.
According to some embodiments of the invention, the method includes the further step of transmitting a laser pulse with a specified pulse length.
According to some embodiments of the invention, the duration of the pixel exposure equals the laser pulse length.
According to some embodiments of the invention, the pixels of a row of the array are exposed with the same shutter timing for each frame.
According to some embodiments of the invention, the pixels of successive rows of the array are exposed with a shutter timing corresponding to successive distance sub-ranges.
According to some embodiments of the invention, in successive frames, the pixels of a row of the array are exposed with a shutter timing corresponding to successive distance sub-ranges.
According to some embodiments of the invention, determining a distance from a pixel to the object includes: exposing the pixel to a plurality of back-reflected laser pulses, with a shutter timing corresponding to the respective distance sub-range; accumulating a pixel output level from laser pulses back-reflected from the respective distance sub-range; and calculating, from the accumulated pixel output level, a distance to an object within the respective distance sub-range. According to some embodiments of the invention, the accumulating is repeated to obtain a desired signal to noise ratio. According to some embodiments of the invention, the method includes beginning the exposure of a pixel in accordance with an initial distance of the respective distance sub-range.
According to some embodiments of the invention, the method includes the further steps of: providing the pixel as a first and second optical sensing element; obtaining a first and second output level by exposing each of the first and second sensing elements to a back-reflected laser pulse for a respective time interval; and calculating a distance from the optical pixel to the object from the first and second output levels. According to some embodiments of the invention, the method includes the further step of determining a background noise level and subtracting the background noise level from the first and second output levels prior to the calculating.
According to some embodiments of the invention, the calculating a distance from the optical pixel to the object from the first and second output levels is in accordance with a ratio of the first and second output levels.
According to some embodiments of the invention, the calculating a distance from the optical pixel to the object from the first and second output levels is performed as:
d = (c/2)·[Ti + Tp·V2/(V1 + V2)]
where d equals the distance to the object, c equals the speed of light, Ti equals an initial exposure time after transmission of the laser pulse, Tp equals the duration of the laser pulse, V1 equals the first output level minus a background noise level, and V2 equals the second output level minus a background noise level.
According to an aspect of some embodiments of the present invention there is provided an optical pixel which includes: a first optical sensing element configured for providing a first output level in accordance with exposure to a back-reflected laser pulse over a first exposure period; a second optical sensing element configured for providing a second output level in accordance with exposure to a back-reflected laser pulse over a successive exposure period; and a distance calculator configured for calculating a distance from the optical pixel to an object from the first and second output levels.
According to some embodiments of the invention, the distance calculator is further configured for subtracting a background noise level from the first and second output levels prior to calculating the distance.
According to some embodiments of the invention, the distance calculator is configured to calculate the distance in accordance with a ratio of the first and second output levels.
According to some embodiments of the invention, the distance calculator is configured to calculate the distance as:
d = (c/2)·[Ti + Tp·V2/(V1 + V2)]
where d equals the distance to the object, c equals the speed of light, Ti equals an initial exposure time after transmission of the laser pulse, Tp equals the duration of the laser pulse, V1 equals the first output level minus a background noise level, and V2 equals the second output level minus a background noise level.
According to some embodiments of the invention, the distance calculator is configured to calculate the distance as:
d = (c/2)·[Ti + Tp·V2/(V1 + V2)]
where d equals the distance to the object, c equals the speed of light, Ti equals an initial exposure time after transmission of the laser pulse, Tp equals the duration of the laser pulse, V1 equals the first output level, and V2 equals the second output level.
According to an aspect of some embodiments of the present invention there is provided a three-dimensional imaging apparatus which includes: a sensor array which includes a plurality of optical pixels configured for exposure to a back-reflected laser pulse; an exposure controller associated with the sensor array, configured for controlling a respective exposure time of each of the pixels so as to expose each of the pixels to back-reflection from a respective distance sub-range; and a distance calculator associated with the sensor array, configured for calculating, from a pixel's respective output level, a distance to an object within the respective distance sub-range.
According to some embodiments of the invention, the distance calculator includes an object detector configured for determining if an object is present in a pixel's respective distance sub-range from a respective pixel output level.
According to some embodiments of the invention, the object detector is configured to determine if the object is present by comparing the respective pixel output level to a threshold. According to some embodiments of the invention, the imaging apparatus further includes an image generator for outputting a three-dimensional image generated from the calculated distances.
According to some embodiments of the invention, the imaging apparatus further includes a laser for generating laser pulses for back-reflection. According to some embodiments of the invention, the exposure controller is configured for selecting the initial time of the exposure in accordance with an initial distance of the respective distance sub-range and the duration of the exposure in accordance with a length of the respective distance sub-range.
According to some embodiments of the invention, the exposure controller is configured for exposing successive rows of the array with a shutter timing corresponding to successive distance sub-ranges.
According to some embodiments of the invention, in successive frames, the exposure controller is configured for exposing the pixels of a row of the array with a shutter timing corresponding to successive distance sub-ranges. According to some embodiments of the invention, the distance calculator includes an output accumulator configured for accumulating a pixel output level from a plurality of exposures, and wherein the distance calculator is configured for calculating the distance from the accumulated pixel output level.
According to some embodiments of the invention, each of the optical pixels includes a first optical sensing element configured for providing a first output level in accordance with exposure to a back-reflected laser pulse for a first time interval, and a second optical sensing element configured for providing a second output level in accordance with exposure to a back-reflected laser pulse for a successive time interval, and wherein the distance calculator is configured for calculating the distance from the first and second output levels.
According to some embodiments of the invention, the distance calculator is configured to calculate the distance in accordance with a ratio of the first and second output levels.
Unless otherwise defined, all technical and/or scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which the invention pertains. Although methods and materials similar or equivalent to those described herein can be used in the practice or testing of embodiments of the invention, exemplary methods and/or materials are described below. In case of conflict, the patent specification, including definitions, will control. In addition, the materials, methods, and examples are illustrative only and are not intended to be necessarily limiting.
Implementation of the method and/or system of embodiments of the invention can involve performing or completing selected tasks manually, automatically, or a combination thereof. Moreover, according to actual instrumentation and equipment of embodiments of the method and/or system of the invention, several selected tasks could be implemented by hardware, by software or by firmware or by a combination thereof using an operating system. For example, hardware for performing selected tasks according to embodiments of the invention could be implemented as a chip or a circuit. As software, selected tasks according to embodiments of the invention could be implemented as a plurality of software instructions being executed by a computer using any suitable operating system. In an exemplary embodiment of the invention, one or more tasks according to exemplary embodiments of method and/or system as described herein are performed by a data processor, such as a computing platform for executing a plurality of instructions. Optionally, the data processor includes a volatile memory for storing instructions and/or data and/or a non-volatile storage, for example, a magnetic hard-disk and/or removable media, for storing instructions and/or data. Optionally, a network connection is provided as well. A display and/or a user input device such as a keyboard or mouse are optionally provided as well.

BRIEF DESCRIPTION OF THE DRAWINGS
Some embodiments of the invention are herein described, by way of example only, with reference to the accompanying drawings. With specific reference now to the drawings in detail, it is stressed that the particulars shown are by way of example and for purposes of illustrative discussion of embodiments of the invention. In this regard, the description taken with the drawings makes apparent to those skilled in the art how embodiments of the invention may be practiced. In the drawings:
Fig. 1 is a block diagram of a prior art embodiment of a pulsed-based TOF system;
Fig. 2 is a simplified timing diagram for a prior art pulsed-based TOF technique;
Fig. 3 is a simplified timing diagram for a prior art pulsed indirect TOF technique;
Fig. 4 is a simplified system concept of a 3D imaging system, in accordance with an embodiment of the present invention;
Fig. 5 is a simplified flowchart of a method for performing three-dimensional imaging, according to a preferred embodiment of the present invention;
Fig. 6a illustrates partitioning an overall distance into N coarse sub-ranges Δd;
Fig. 6b shows the distance sub-ranges captured by each row of the sensor array during the first frame t0–t1, in an exemplary embodiment of the present invention;
Fig. 7 shows sub-ranges captured by each row at the end of the coarse ranging phase, in an exemplary embodiment of the present invention;
Fig. 8 is an example of a distance map for an N x M pixel array;
Fig. 9 is a simplified flowchart of a method for determining a distance to an object, according to a first preferred embodiment of the present invention;
Fig. 10 is a simplified flowchart of a method for determining a distance to an object, according to a second preferred embodiment of the present invention;
Fig. 11 is a simplified exposure timing diagram for TOF data acquisition by a pixel having two sensing elements, according to a preferred embodiment of the present invention;
Fig. 12 is a simplified exposure timing diagram for pixels in a pixel array during the fine ranging phase, in an exemplary embodiment of the present invention;
Fig. 13 is an exemplary timing diagram for collecting data in the third distance sub-range over multiple frames during the fine ranging phase;
Fig. 14 is a simplified block diagram of an imaging apparatus, according to a preferred embodiment of the present invention;
Fig. 15 is a simplified block diagram of an optical pixel for TOF imaging, according to a preferred embodiment of the present invention; and
Fig. 16 is a simplified block diagram of a 3D imager with column-parallel readout, according to an exemplary embodiment of the present invention.
DESCRIPTION OF SPECIFIC EMBODIMENTS OF THE INVENTION
The present invention, in some embodiments thereof, relates to time-of-flight three-dimensional imaging and, more particularly, but not exclusively, to a two-stage time-of-flight three-dimensional imaging technique.
In the 3D imaging described herein, a sequence of short laser pulses is transmitted towards the field of view. The back-reflected light is focused on a two-dimensional array of optical sensors. As described below, the overall distance range of interest is partitioned into coarse sub-ranges d1 to dN. First a coarse ranging phase is performed to detect 3D objects in the various sub-ranges. In some embodiments, if an object is detected in a specific sub-range, a fine ranging phase is triggered for high-resolution 3D imaging within the sub-range, by accumulating pixel output levels over multiple exposure intervals to improve the SNR.
During the coarse ranging phase, a sequence of rolling-shutter readouts is performed on the sensor array. During a given readout cycle, each row of the sensor array images one of the distance sub-ranges. During the following readout cycle the sub-range imaged by each of the rows is shifted, so that after N readout cycles all the rows of the pixel array have imaged each of the d1 to dN sub-ranges.
In some embodiments, each pixel of the sensor array includes two sensing elements, as described in more detail below. The two sensing elements are exposed during successive intervals, and the TOF is calculated from the ratio of the signals detected by the sensing elements. The distance may be obtained directly from the TOF according to Eqn. 1.

Referring now to the drawings, Fig. 4 shows a simplified system concept of a 3D imaging system, in accordance with an embodiment of the present invention. Controller 10 generates trigger pulses, which cause laser 20 to emit a sequence of laser pulses towards the field of view. When an object 30 is present in the field of view, the laser pulses are reflected towards sensor array 40 (in this case a two-dimensional CMOS camera). Controller 10 reads out sensor array 40, and processes the sensor array output signal in order to determine for each pixel if an object is or is not present in the currently imaged sub-range di. During the fine ranging phase, a similar process is performed using the accumulated levels of both sensing elements in order to determine where a detected object is located within the sub-range. Controller 10 analyzes the collected data to obtain a 3D image 50 of the object(s) within the field of view.
In some embodiments the optical sensing elements in sensor array 40 are compatible with Active Pixel Sensor (APS) CMOS fabrication. A CMOS sensor array may allow integration of some or all of the functions required for timing, exposure control, color processing, image enhancement, image compression and/or ADC on the same die. Other possible advantages of utilizing a CMOS 3D sensor array include low- power, low-voltage and monolithic integration.
The features of the laser are determined by the system requirements. Possibly, the laser is a near-infrared (NIR) laser which operates in the Si responsivity spectrum.

Before explaining at least one embodiment of the invention in detail, it is to be understood that the invention is not necessarily limited in its application to the details of construction and the arrangement of the components and/or methods set forth in the following description and/or illustrated in the drawings and/or the Examples. The invention is capable of other embodiments or of being practiced or carried out in various ways.
In the following, an embodiment of a sensor array formed as an N x M pixel array with column-parallel readout is presented. It is to be understood that the invention is capable of other embodiments of sensor array configurations and/or readout techniques. The sensor array may be a single pixel. Embodiments presented below utilize a constant pulse duration of Tp for each laser pulse. Other embodiments may utilize varying laser pulse lengths, thereby enabling division of the overall distance range of interest into unequal sub-ranges.
Embodiments presented below utilize a pixel exposure time of Tp. Other embodiments may utilize varying pixel exposure times.
I. Coarse ranging phase
Reference is now made to Fig. 5, which is a simplified flowchart of a method for performing three-dimensional imaging, according to a preferred embodiment of the present invention. The optical data is gathered by an optical sensor array, having an array of pixels.
In 500, each of the array pixels is exposed to a back-reflected laser pulse. Each of the array pixels is exposed with a shutter timing corresponding to a respective distance sub-range. As used herein, phrases relating to exposure of an optical sensor to back-reflection of a laser pulse mean that the optical sensor is exposed in the direction towards which the laser pulse was transmitted, so that reflections of the laser pulse back from objects will arrive at the optical sensor. As used herein, the phrase does not imply that the shutter timing is set so that the optical sensor is necessarily exposed at the time that the reflections arrive at the optical sensor.
In 510 it is determined, for each of the pixels, if an object is present in the respective distance sub-range. The presence or absence of a reflecting object is determined from the pixel output level, preferably by comparing the pixel output to a threshold. If the pixel output exceeds the threshold, the pixel is designated as having an object in the current distance range. The threshold may differ for different sub-ranges and/or specific array pixels.
Note that different pixels may not be absorbing reflectance from the same object. The determination made in 510 is whether an object is or is not present for the given pixel, and does not necessarily indicate that the object is the same for multiple pixels. In 520, the distance from the pixel to the object is calculated for each pixel for which an object was detected in 510. The details of how the distance calculation is made are presented in more detail below. The method may further include outputting an array of the distances obtained in 520 and/or a three-dimensional image generated in accordance with the determined distances.
The method may further include transmitting a laser pulse of duration Tp. The timing of the pixel exposure is preferably determined relative to the time of transmission and duration of the laser pulse whose back-reflection is being collected. In some embodiments, the duration of the pixel exposure period equals the laser pulse duration for some or all of the sensor array pixels.
By applying the appropriate shutter timing relative to the laser pulse, each row in the array may be individually exposed to back-reflected laser pulses from a predefined sub-range. For a given pixel, the beginning of the exposure period is preferably selected in accordance with the initial distance of the respective distance sub-range. Preferably, the duration of the exposure period for a given pixel is selected in accordance with the required length of the respective distance sub-range. The overall distance may be partitioned into N coarse sub-ranges Δd, as shown in Fig. 6a. The length Δd of each sub-range corresponds to the travel of a laser pulse with duration Tp. The maximum TOF may be expressed as a function of Tp as follows:
TOFmax = T0 + N·Tp    (6)

where T0 denotes a fixed Time of Interest (TOI) from which data should be recorded. The travel time from the first sub-range to the last one is defined as Tμ-frame = N·Tp, as depicted in Fig. 6a.
The pixels of each row i are preferably exposed with the same shutter timing. In some embodiments, the pixels of successive rows of the array are exposed with a shutter timing corresponding to successive distance ranges.
For example, each row i may be exposed for a time period of duration Tp beginning at time ti, where:

ti = T0 + Tp·i ;  i = 1, 2, ..., N.    (7)
A pulse length of Tp corresponds to a sub-range of length Δd, where Δd is equal to the distance traveled by the laser pulse during the exposure time interval. Assume that the first row of the sensor array collects data from sub-range D0 to D0 + Δd, and that successive sensor array rows collect data from successive sub-ranges. In this case, during the first frame the i-th row of the sensor array collects data from a sub-range beginning at:
di = (c/2)·(T0 + Tp·i) = D0 + Δd·i ;  i = 1, 2, ..., N.    (8)
where D0 denotes a fixed Distance of Interest (DOI) from which data should be captured. Note that the DOI is determined by the value of T0. Fig. 6b illustrates the distance sub-ranges captured by each row of the sensor array during the first frame t0–t1, in an exemplary embodiment of the present invention.
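Eqns. 7 and 8 translate directly into code. The following Python sketch computes the shutter-start times and sub-range start distances for the first frame; the function names and the numeric values in the example are illustrative only.

C = 299_792_458.0  # speed of light [m/s]

def exposure_start(i, t0, tp):
    # Eqn. 7: ti = T0 + Tp * i, for row i = 1..N.
    return t0 + tp * i

def subrange_start(i, t0, tp):
    # Eqn. 8: di = (c/2) * (T0 + Tp * i) = D0 + dd * i.
    return 0.5 * C * exposure_start(i, t0, tp)

# Example: four rows, 30 ns pulses, T0 = 100 ns.
for row in range(1, 5):
    print(row, exposure_start(row, 100e-9, 30e-9),
          subrange_start(row, 100e-9, 30e-9))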
In the preferred embodiment, the row shutter timing is set to capture reflections from successive distance sub-ranges in successive frames. To capture the total range for each of the sensor array rows, the exposure process may be repeated N times, once for each distance sub-range. In each frame, the shutter timing for each row may be shifted by one sub-range.
Fig. 7 illustrates the sub-ranges captured by each row at the end of the coarse ranging phase, in an exemplary embodiment of the present invention. In the present example, after transmission of the j-th laser pulse the first row of the sensor array collects data from the j-th sub-range, the second row of the sensor array collects data from the subsequent sub-range, and so forth. In more general terms, the i-th row of the sensor array collects data from sub-range [i − (j−1)]N. The quantity [i − (j−1)]N is the result of a modulo operation, given by:
[i − (j−1)]N = { i − (j−1),       i ≥ j
              { i − (j−1) + N,   i < j    (9)
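As a minimal illustration, the modulo assignment of Eqn. 9 may be written in Python as follows (the function name is illustrative):

def subrange_for_row(i, j, n):
    # Eqn. 9: [i - (j-1)]_N with 1-based row index i and frame index j;
    # equivalent to ((i - j) % n) + 1.
    k = i - (j - 1)
    return k if i >= j else k + n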
If a back-reflected laser pulse from the currently tested sub-range is sensed by a pixel in row i, the sub-range coordinates (or a sub-range identifier) are stored for the given pixel. By the end of this phase, the location of the 3D objects within the FOV is coarsely known. In other embodiments, all of the array pixels are exposed with the same shutter timing for each frame. That is, all of the array pixels collect data for the same sub-range. Data for different sub-ranges is collected in subsequent frames.

II. Fine ranging phase
At the end of the coarse ranging phase (500 and 510 of Fig. 5) a map of approximate distances to objects within the range of interest is known. An example of a distance map for N x M pixel array is shown in Fig. 8. After the coarse sub-range distances are obtained, an additional fine ranging phase is performed to improve distance measurement accuracy within the sub-range (520 of Fig. 5).
Preferably the fine-ranging is performed only for pixels (or rows) for which objects were found in the coarse ranging phase, and/or only for sub-ranges in which the object was found. The coarse range map may first be analyzed to identify regions of interest, for which fine-ranging is performed.
For high resolution TOF (e.g., 1 cm) the value of vi1 or vi2 may be very small (e.g., 5μV). In order to improve the SNR during fine-ranging, the measurement process is repeated using laser pulse re-emission for signal accumulation. Typically, the SNR improvement is on the order of the square root of m, where m is the number of times the process is repeated. Improved SNR provides a longer distance measurement range and increases the maximum unambiguous range.
Reference is now made to Fig. 9, which is a simplified flowchart of a method for determining a distance from a pixel to an object, according to a first preferred embodiment of the present invention. In 900 the pixel is exposed to a plurality of back- reflected laser pulses. The pixel shutter timing, relative to the respective laser pulse, corresponds to the distance sub-range in which the object was detected. In 910 the pixel level is accumulated over the multiple exposures. Due to the shutter timing, the pixel level is accumulated for pulses reflected from the distance sub-range of interest. In 920 the distance from the pixel to the object is calculated from the accumulated pixel output level. The number of exposures for which the pixel output is accumulated may be selected in order to obtain the desired signal to noise ratio.
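Since shot-noise-limited SNR grows roughly as the square root of the number of accumulated exposures, the number of iterations needed for a target SNR can be estimated as in the Python sketch below; this is a simple consequence of the √m scaling noted above, and the function name is illustrative.

import math

def iterations_for_target_snr(snr_single, snr_target):
    # SNR scales ~ sqrt(m), so m >= (snr_target / snr_single)^2.
    return math.ceil((snr_target / snr_single) ** 2)

# Example: a single exposure at SNR 2 needs ~25 exposures to reach SNR 10.
print(iterations_for_target_snr(2.0, 10.0))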
The distance information from the coarse and/or fine ranging phases may be stored for post-processing and 3D object reconstruction.

TOF Acquisition
In some embodiments of the present invention, the TOF (and consequently the distance) is determined using data collected by a pixel consisting of two optical sensing elements (e.g. pinned photodiodes) which may be exposed independently. In some embodiments the sensing elements are of a type suitable for CMOS APS fabrication.
Reference is now made to Fig. 10, which is a simplified flowchart of a method for determining a distance to an object, according to a second preferred embodiment of the present invention. In the present embodiment the optical pixel being exposed to the back-reflected laser pulse includes two sensing elements.
In 1000 the first sensing element is exposed for an initial time interval to obtain a first output level. In 1010 the second sensing element is exposed for a successive time interval to obtain a second output level. In 1020 the distance from the optical pixel to the object is calculated from the first and second output levels. The output levels may first be compared to a threshold to determine if an object is present, and the distance calculated only if there is an object. The length of the two exposure durations may be the laser pulse length, in order to obtain the best sub-range distance resolution. As shown below, the distance may be calculated from the ratio of the two output levels.

Fig. 11 is a simplified exposure timing diagram for TOF data acquisition by a pixel having two sensing elements, according to a preferred embodiment of the present invention. The exposure of the pixel consists of two successive intervals of Tp duration. A laser pulse of duration Tp is emitted at time tinit. The back-reflected light arrives at the pixel at tinit + TOF. Sensing element S1 is exposed from time Ti to Ti+Tp, while sensing element S2 is exposed from time Ti+Tp to Ti+2·Tp. Consider a back-reflected laser pulse with Tp duration which is detected by the pixel during its exposure period. The signals vi1 and vi2, obtained respectively from sensing elements S1 and S2, are proportional to the magnitude of the reflected pulse, ΦL, and the background illumination, ΦB, as follows:
vi1 = ΦL·(Tp − Δt) + ΦB·Tp
vi2 = ΦL·Δt + ΦB·Tp    (10)

where Δt = TOF − Ti. In some embodiments, the TOF is calculated as:
TOF = Ti + Tp/(1 + vi1/vi2)    (11a)
In other embodiments, the background illumination ΦB is measured previously, without sending a laser pulse, under known exposure conditions, and stored. The TOF is calculated after subtracting the background illumination ΦB from signals vi1 and vi2:
TOF = Ti + Tp/(1 + V1/V2)    (11b)
where V1 = vi1 − ΦB·Tp and V2 = vi2 − ΦB·Tp.
Since the TOF is calculated from a ratio of the sensing element output signals, the calculated TOF does not depend on the responsivity of the sensor or on the reflectivity of the target object.
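A minimal Python sketch of Eqns. 10 and 11b follows; the variable names are illustrative, and the pulse is assumed to arrive within the two exposure windows so that V2 is nonzero.

def tof_from_levels(v_i1, v_i2, t_i, t_p, phi_b=0.0):
    # Eqn. 11b: subtract the stored background contribution, then use the
    # ratio form, which cancels sensor responsivity and target reflectivity.
    V1 = v_i1 - phi_b * t_p
    V2 = v_i2 - phi_b * t_p
    return t_i + t_p / (1.0 + V1 / V2)

# Self-check against the forward model of Eqn. 10.
phi_l, phi_b, t_i, t_p, tof_true = 4.0, 0.5, 100e-9, 30e-9, 112e-9
dt = tof_true - t_i
v1 = phi_l * (t_p - dt) + phi_b * t_p
v2 = phi_l * dt + phi_b * t_p
assert abs(tof_from_levels(v1, v2, t_i, t_p, phi_b) - tof_true) < 1e-12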
From Eqns. 11a and 11b it follows that the maximum value of Tp/[1 + (v1/v2)], and consequently of the measurable TOF, is limited by the Tp pulse width. Thus the laser pulse length defines the maximum sub-range length Δd.

Fig. 12 is a simplified exposure timing diagram for pixels in a pixel array during the fine ranging phase, for an exemplary embodiment of the present invention. Each pixel includes two sensing elements with independent shutter timing. Si1 and Si2 denote the shutter timing applied to the first and second sensing element, respectively, for a pixel within the i-th row of the sensor array. The shaded regions represent time intervals which are necessary to avoid, during which the pixel is not exposed in order to eliminate interference from near and far objects outside the current sub-range.
During the fine-ranging phase, the output levels of both sensing elements are preferably accumulated for m iterations prior to calculating the object distance. For example, consider the distance map acquired for an N x M pixel array at the end of the coarse ranging phase, shown in Fig. 8. An object in the third sub-range has been detected by certain pixels in rows i and i+1. For the subsequent m iterations laser pulses continue to be emitted, and the pixels which detected reflections from the object are exposed at the third sub-range and read, as exemplarily shown in Fig. 13. The readouts from these m iterations are accumulated to determine the location of the object within the third sub-range, and improve the distance measurement precision.
Performance Analysis

I) Noise Analysis
In order to achieve high range resolution, low sensor noise is required. Eqn. 12 shows the minimum range resolution achievable:
δR = (c/2)·dt    (12)
where dt denotes the time resolution. The SNR of some embodiments is dependent on the noise sources affecting the detected sensor output signal. The total noise on the sensing element (e.g., photodiode) may be expressed as the sum of five independent components:
(1) Generated shot noise
(2) Readout circuit noise with zero mean and average power σ²readout,
(3) Offset and gain Fixed Pattern Noise (FPN),
(4) Background illumination and dark current
(5) Electronic jitter.
The following assumptions are made in the noise analysis:
1) Correlated Double Sampling (CDS) is performed in the pixel in order to eliminate the offset component of the fixed pattern noise.
2) Sensor calibration reduces the gain component of the fixed pattern noise to negligible levels.
3) Background illumination is minimized by optical filtering.
4) Dark count rates are assumed to be in the kHz range and are neglected.
5) Electronic jitter is reduced to negligible levels by well-designed components.
The charge (including noise) on the sensing element associated with v1 and v2 may thus be expressed as:

Q1 = (1/q)·[IL·(Tp − Δt) + IB·Tp + Qshot + Qreadout]  electrons
Q2 = (1/q)·[IL·Δt + IB·Tp + Qshot + Qreadout]  electrons    (13)

provided Q(Tp) < Qmax (Qmax is the saturation charge, also referred to as well capacity). IL and IB are the laser and background induced photoelectric currents, respectively.
The generated shot noise charge Qshot has zero mean and an average power given by:

Q²shot,1 = (1/q)·[IL·(Tp − Δt) + IB·Tp]  electrons
Q²shot,2 = (1/q)·[IL·Δt + IB·Tp]  electrons    (14)
Defining the parameter KV so that V1 = KV·Q1, the noise dv for vi1 and vi2 is therefore:
dv1 = KV·√(q·IL·(Tp − Δt) + q·IB·Tp + q²·σ²readout)
dv2 = KV·√(q·IL·Δt + q·IB·Tp + q²·σ²readout)    (15)
Rearranging Eqn. 11 and discarding the term Ti gives:
TOF − Ti = Δt = Tp·v2/(v1 + v2)    (16)
Ignoring readout noise, background illumination and the associated shot noise, and applying error propagation to Eqn. 16, dt is obtained as:
dt = √(q·Δt/IL)    (17)
The time resolution (absolute accuracy) is inversely proportional to the square root of the laser photocurrent (i.e., optical power), as is expected from Poisson statistics. The maximum value of dt is achieved for Δt = Tp, and the minimum value at Δt = 0 is determined by the background shot noise and the readout noise. The result of Eqn. 17 may be reduced by the square root of the number of measurement iterations, as mentioned above.
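A small numeric sketch of this shot-noise-limited resolution follows, using the Eqn. 17 form as reconstructed above together with the √m averaging gain and Eqn. 12; the numeric values are illustrative assumptions only.

import math

Q = 1.602e-19  # electron charge [C]

def time_resolution(i_laser, delta_t, m=1):
    # Shot-noise-limited dt per Eqn. 17, improved by sqrt(m) accumulations.
    return math.sqrt(Q * delta_t / i_laser) / math.sqrt(m)

def range_resolution(i_laser, delta_t, m=1, c=299_792_458.0):
    # Eqn. 12: dR = (c/2) * dt.
    return 0.5 * c * time_resolution(i_laser, delta_t, m)

# Example: 1 nA laser photocurrent, delta_t = 30 ns, 100 accumulations.
print(range_resolution(1e-9, 30e-9, 100))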
II) Frame rate
The following provides an estimate of the frame rate under the assumption of full-range measurements and multiple target acquisition. In the present embodiment the final range measurement is determined after the completion of the coarse ranging and the fine ranging phases. The durations for completing these phases are:
Tcoarse = N·M·(T0 + N·Tμ-frame)
Tfine = m·M·(T0 + N·Tμ-frame)    (18)
Tframe = Tcoarse + Tfine

where M denotes the number of columns in the sensor (with column-parallel readout).
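For a rough feel of the achievable frame rate, Eqn. 18 may be evaluated numerically as in the sketch below; the Tcoarse factor follows the reconstruction above and, like the numeric values, should be treated as an assumption.

def frame_time(n, m_cols, m_iter, t0, t_mu_frame):
    # Reconstructed Eqn. 18: both phases scale with (T0 + N*T_mu-frame).
    per_cycle = t0 + n * t_mu_frame
    t_coarse = n * m_cols * per_cycle
    t_fine = m_iter * m_cols * per_cycle
    return t_coarse + t_fine

# Example: N = 64 sub-ranges, M = 256 columns, m = 100 accumulations,
# T0 = 100 ns, T_mu-frame = N * Tp with Tp = 30 ns.
print(1.0 / frame_time(64, 256, 100, 100e-9, 64 * 30e-9))  # frames/s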
Imagers
Reference is now made to Fig. 14, which is a simplified block diagram of an imaging apparatus, according to a preferred embodiment of the present invention. Imager 1400 includes sensor array 1410, exposure controller 1420 and distance calculator 1430.
Sensor array 1410 includes multiple optical pixels for absorbing back-reflection of a laser pulse, plus any other required circuitry (e.g. for sensor array readout). The pixels may be organized as an NxM array. Exposure controller 1420 controls the exposure time of each of the pixels so as to expose each of the pixels to back-reflection from the pixel's current distance sub-range. Exposure controller 1420 adjusts the initial time and duration of exposure, which are based on the initial distance and length of the current distance sub-range.
Exposure controller 1420 may also control the pulse timing of laser 1440, in order to accurately synchronize the timing between the laser pulse transmission and the pixel exposure.
Exposure controller 1420 adjusts the pixel exposure according to the desired method for scanning the different sub-ranges of interest. For example, each row of pixels may be exposed to a different sub-range during a single frame (see Fig. 7). Additionally or alternately, a given row of pixels may be exposed to different sub-ranges in different frames.
Distance calculator 1430 calculates the distance to an object from a pixel output level, within the distance sub-range in which an object was detected for the given pixel. Distance calculator 1430 may include object detector 1450, which determines if an object is present in a pixel's respective distance sub-range. In one embodiment, the presence of an object is determined by comparing the pixel output level to a threshold. An object is considered to be present only if the threshold is exceeded.
Imager 1400 may include output accumulator 1460 which accumulates the pixel output level over multiple exposures. Distance calculator 1430 may then calculate the distance to the object from the accumulated pixel output level.
Imager 1400 may further include image generator 1470, which generates a three-dimensional image from the distances provided by distance calculator 1430, and outputs the image. In some embodiments each pixel includes two optical sensing elements, which have separate shutter controls. The sensing elements may be shuttered by independent trigger signals, or may be shuttered with an automatic offset from a single trigger signal. The distance/TOF for an object in a given sub-range is preferably calculated as a function of both of the sensing element output signals.

Reference is now made to Fig. 15, which is a simplified block diagram of an optical pixel for TOF imaging, according to a preferred embodiment of the present invention. Pixel 1510 includes two optical sensing elements, 1515.1 and 1515.2, and distance calculator 1530. The two sensing elements 1515.1 and 1515.2 are exposed for successive exposure periods, where the initial time and duration of the exposure period is preferably selected in accordance with the distance and length of the current distance sub-range. Distance calculator 1530 calculates the distance from pixel 1510 to an object from the first and second output levels. The distance may be calculated only after distance calculator 1530 determines that an object is present in the current sub-range. Distance calculator 1530 may subtract the background noise level from the sensing element output levels prior to calculating the distance.
In some embodiments, distance calculator 1530 calculates the distance from the pixel to the object according to Eqn. 11a or 11b above.
System Architecture

Fig. 16 is a simplified block diagram of a 3D imager with column-parallel readout, according to an exemplary embodiment of the present invention. 3D sensor 1600 includes a CMOS image sensor (CIS) 1610, with an N x M array of CMOS pixels. Sensor 1600 further includes controller 1620, analog peripheral circuits 1630, and two channels of readout circuit (1640.1 and 1640.2 respectively). Each pixel consists of two sensing elements (e.g., photodiodes), and allows individual exposure for each of the sensing elements. Additional circuits (not shown) may be integrated at the periphery of the pixel array for row and column fixed pattern noise reduction. Controller 1620 manages the system synchronicity and generates the control signals (e.g. exposure timing for each pixel). Analog peripheral circuits 1630 (e.g. bandgap reference and phase lock loop) provide reference voltages and currents, and low-jitter clocks for proper operation of the imager and the readout circuits. Readout circuits 1640.1 and 1640.2 consist of two channels, for even and odd rows, for relaxed readout timing constraints. The output data may be steered to an off-chip digital computation circuit. Each readout circuit comprises a column decoder (not shown), Sample & Hold (S/H) circuits (1650.1/1650.2), column ADCs (1660.1/1660.2), and a RAM block (shown as ping-pong memory 1670.1/1670.2).

The signal path is now described. The low-noise S/H circuits 1650.1/1650.2 maintain a fixed pixel output voltage, while the high-performance (i.e. high conversion rate and accuracy) column ADCs 1660.1/1660.2 provide the corresponding digital coding representation. The role of RAM block 1670.1/1670.2 is twofold: (1) storing the digital value at the end of the A/D conversion, and (2) enabling readout of the stored data to an off-chip digital computation circuit. Preferably, the RAM architecture is a dual-port ping-pong RAM. The ping-pong RAM enables exchanging blocks of data between processors rather than individual words. In the context of the CIS, the RAM block in each channel may be partitioned into two sub-blocks for two successive rows (i.e., 2i and 2i + 2, or 2i − 1 and 2i + 1). Once a mode (e.g., write for the 2i-th row or read out of the 2i + 2-th row) is completed, the two sub-blocks are exchanged. That is, the mode is switched to read out of the 2i-th row or to write to the 2i + 2-th row. This approach allows a dual-port function with performance equal to that of individual RAM.
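The ping-pong exchange can be illustrated with a small software analogue; this is only a behavioral sketch of double buffering, not a description of the actual RAM circuit, and the class and method names are illustrative.

class PingPongRAM:
    # Two banks: one accepts ADC writes while the other is read out;
    # swap() exchanges the roles, emulating the ping-pong mode switch.
    def __init__(self, row_length):
        self.banks = [[0] * row_length, [0] * row_length]
        self.write_bank = 0

    def write_row(self, samples):
        self.banks[self.write_bank][:len(samples)] = samples

    def read_row(self):
        return list(self.banks[1 - self.write_bank])

    def swap(self):
        self.write_bank = 1 - self.write_bank

# Usage: write one row while the previous row is read out, then swap.
ram = PingPongRAM(4)
ram.write_row([1, 2, 3, 4])
ram.swap()
assert ram.read_row() == [1, 2, 3, 4]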
The 3D imaging techniques described above provide a high-performance TOF- based 3D imaging method and system. The system may be constructed using standard CMOS technology.
It is expected that during the life of a patent maturing from this application many relevant optical sensors, lasers and 3D imaging systems will be developed and the scope of the term optical sensor or sensing element, laser and imaging system is intended to include all such new technologies a priori.
As used herein the term "about" refers to ± 10 %.
The terms "comprises", "comprising", "includes", "including", "having" and their conjugates mean "including but not limited to".
The term "consisting of means "including and limited to". The term "consisting essentially of means that the composition, method or structure may include additional ingredients, steps and/or parts, but only if the additional ingredients, steps and/or parts do not materially alter the basic and novel characteristics of the claimed composition, method or structure.
As used herein, the singular forms "a", "an" and "the" include plural references unless the context clearly dictates otherwise. For example, the term "a compound" or "at least one compound" may include a plurality of compounds, including mixtures thereof.

Throughout this application, various embodiments of this invention may be presented in a range format. It should be understood that the description in range format is merely for convenience and brevity and should not be construed as an inflexible limitation on the scope of the invention. Accordingly, the description of a range should be considered to have specifically disclosed all the possible subranges as well as individual numerical values within that range. For example, description of a range such as from 1 to 6 should be considered to have specifically disclosed subranges such as from 1 to 3, from 1 to 4, from 1 to 5, from 2 to 4, from 2 to 6, from 3 to 6 etc., as well as individual numbers within that range, for example, 1, 2, 3, 4, 5, and 6. This applies regardless of the breadth of the range. Whenever a numerical range is indicated herein, it is meant to include any cited numeral (fractional or integral) within the indicated range. The phrases "ranging/ranges between" a first indicated number and a second indicated number and "ranging/ranges from" a first indicated number "to" a second indicated number are used herein interchangeably and are meant to include the first and second indicated numbers and all the fractional and integral numerals therebetween.
It is appreciated that certain features of the invention, which are, for clarity, described in the context of separate embodiments, may also be provided in combination in a single embodiment. Conversely, various features of the invention, which are, for brevity, described in the context of a single embodiment, may also be provided separately or in any suitable subcombination or as suitable in any other described embodiment of the invention. Certain features described in the context of various embodiments are not to be considered essential features of those embodiments, unless the embodiment is inoperative without those elements.
Although the invention has been described in conjunction with specific embodiments thereof, it is evident that many alternatives, modifications and variations will be apparent to those skilled in the art. Accordingly, it is intended to embrace all such alternatives, modifications and variations that fall within the spirit and broad scope of the appended claims.
All publications, patents and patent applications mentioned in this specification are herein incorporated in their entirety by reference into the specification, to the same extent as if each individual publication, patent or patent application was specifically and individually indicated to be incorporated herein by reference. In addition, citation or identification of any reference in this application shall not be construed as an admission that such reference is available as prior art to the present invention. To the extent that section headings are used, they should not be construed as necessarily limiting.
REFERENCES CITED BY NUMERALS
[1] F. Blais, "Review of 20 Years of Range Sensor Development," Journal of Electronic Imaging, vol. 13, pp: 231-240, January 2004.
[2] M. Adams, "Coaxial range measurement - current trends for mobile robotic applications," IEEE Sensors Journal, vol. 2, pp: 2-13, Feb., 2002.
[3] M. Hebert and E. Krotkov, "3-D measurements from imaging laser radars: how good are they?," International Workshop on Intelligent Robots and Systems, pp: 359-364, Nov. 1991.
[4] P. Palojarvi, T. Ruotsalainen, and J. Kostamovaara, "A 250-MHz BiCMOS Receiver Channel With Leading Edge Timing Discriminator for a Pulsed Time-of-Flight Laser Rangefinder," IEEE Journal of Solid-State Circuits, vol. 40, no. 6, June 2005.
[5] O. Elkhalili, O. M. Schrey, P. Mengel, M. Petermann, W. Brockherde, B. J. Hosticka, "A 4x64 Pixel CMOS Image Sensor for 3-D Measurement Applications," IEEE Journal of Solid-State Circuits, vol. 39, no. 7, July 2004.
[6] W. Brockherde, B. J. Hosticka, P. Mengel, et al., "3D Time-of-Flight Sensor Principle," in Fraunhofer IMS annual report 2005.
[7] B. J. Hosticka, et al., "Modeling of a 3D-CMOS sensor for time-of-flight measurement," in Fraunhofer IMS annual report 2004.
[8] L. Viarani, D. Stoppa, L. Gonzo, M. Gottardi, and A. Simoni, "A CMOS Smart Pixel for Active 3D Vision Applications," IEEE Sensors Journal, vol. 4, pp: 145-152, Sept. 2004.
[9] L. Viarani, D. Stoppa, L. Gonzo, M. Gottardi and A. Simoni, "A CMOS test chip for 3D vision applications with high dynamic range," European Conference on Circuit Theory and Design, Krakow, Poland , Sept. 2003.
[10] N. Massari, L. Gonzo, M. Gottardi, and A. Simoni, "A Fast CMOS Optical Position Sensor with High Sub-Pixel Resolution," IEEE Instrumentations and Measurements, Feb. 2004.
[11] D. Stoppa, L. Viarani, A. Simoni, L. Gonzo, and M. Malfatti, "A New Architecture for TOF-based Range-finding Sensor," in IEEE Proceedings of Sensors, vol. 1, pp: 481-484, Oct. 2004.

Claims

WHAT IS CLAIMED IS:
1. A method for determining a distance to an object, comprising: exposing a first optical sensing element of an optical pixel to a back-reflected laser pulse for an initial time interval to obtain a first output level; exposing a second optical sensing element of said optical pixel to said back- reflected laser pulse at a successive time interval to obtain a second output level; and calculating a distance from said optical pixel to said object from said first and second output levels.
2. A method according to claim 1, wherein said calculating is in accordance with a ratio of said first and second output levels.
3. A method according to claim 1, further comprising determining a background noise level and subtracting said background noise level from said first and second output levels prior to said calculating.
4. A method according to claim 1, wherein said calculating is performed as:
d = (c/2)·[Ti + Tp·V2/(V1 + V2)]
where d equals said distance to said object, c equals the speed of light, Ti equals an initial exposure time after transmission of said laser pulse, Tp equals the duration of the laser pulse, V1 equals said first output level minus a background noise level, and V2 equals said second output level minus a background noise level.
5. A method according to claim 1, wherein said calculating is performed as:
d = (c/2)·[Ti + Tp·V2/(V1 + V2)]
where d equals said distance to said object, c equals the speed of light, Ti equals an initial exposure time after transmission of said laser pulse, Tp equals the duration of the laser pulse, V1 equals said first output level, and V2 equals said second output level.
6. A method according to claim 1, wherein said initial and successive time intervals are of a duration of said laser pulse.
7. A method according to claim 1, further comprising comparing said first and second output levels to a threshold to determine if an object is present.
8. A method for performing three-dimensional imaging, comprising: exposing each pixel of an optical sensor array to a back-reflected laser pulse, wherein each of said pixels is exposed with a shutter timing corresponding to a respective distance sub-range; for each of said pixels, determining if an object is present in said respective distance sub-range from a respective pixel output level; and for each pixel having an object present in said respective distance sub-range, determining, from an output level of said respective pixel, a distance to said object within said respective distance sub-range.
9. A method according to claim 8, wherein said determining if an object is present in said respective distance sub-range comprises comparing said respective pixel output level to a threshold.
10. A method according to claim 8, further comprising outputting an array of said determined distances.
11. A method according to claim 8, further comprising outputting a three- dimensional image generated in accordance with said determined distances.
12. A method according to claim 8, further comprising selecting a duration of a pixel exposure time in accordance with a required length of said distance sub-range.
13. A method according to claim 8, further comprising transmitting a laser pulse with a specified pulse length.
14. A method according to claim 13, wherein a duration of said pixel exposure equals said laser pulse length.
15. A method according to claim 8, wherein the pixels of a row of said array are exposed with a same shutter timing for each frame.
16. A method according to claim 8, wherein the pixels of successive rows of said array are exposed with a shutter timing corresponding to successive distance sub-ranges.
17. A method according to claim 8, wherein, in successive frames, the pixels of a row of said array are exposed with a shutter timing corresponding to successive distance sub-ranges.
18. A method according to claim 8, wherein said determining a distance from a pixel to said object, comprises: exposing said pixel to a plurality of back-reflected laser pulses, with a shutter timing corresponding to said respective distance sub-range; accumulating a pixel output level to laser pulses back-reflected from said respective distance sub-range; and calculating, from said accumulated pixel output level, a distance to an object within said respective distance sub-range.
19. A method according to claim 18, wherein said accumulating is repeated to obtain a desired signal to noise ratio.
20. A method according to claim 8, further comprising beginning said exposure of a pixel in accordance with an initial distance of said respective distance sub-range.
21. A method according to claim 8, further comprising: providing said pixel as a first and second optical sensing element; obtaining a first and second output level by exposing each of said first and second sensing elements to a back-reflected laser pulse for a respective time interval; and calculating a distance from said optical pixel to said object from said first and second output levels.
22. A method according to claim 21, further comprising determining a background noise level and subtracting said background noise level from said first and second output levels prior to said calculating.
23. A method according to claim 21, wherein said calculating is in accordance with a ratio of said first and second output levels.
24. A method according to claim 22, wherein said calculating is performed as:
d = (c/2)·[Ti + Tp·V2/(V1 + V2)]
where d equals said distance to said object, c equals the speed of light, Ti equals an initial exposure time after transmission of said laser pulse, Tp equals the duration of the laser pulse, V1 equals said first output level minus a background noise level, and V2 equals said second output level minus a background noise level.
25. An optical pixel, comprising: a first optical sensing element configured for providing a first output level in accordance with exposure to a back-reflected laser pulse over a first exposure period; a second optical sensing element configured for providing a second output level in accordance with exposure to a back-reflected laser pulse over a successive exposure period; and a distance calculator configured for calculating a distance from said optical pixel to an object from said first and second output levels.
26. An optical pixel according to claim 25, wherein said distance calculator is further configured for subtracting a background noise level from said first and second output levels prior to said calculating.
27. An optical pixel according to claim 25, wherein said distance calculator is configured to calculate said distance in accordance with a ratio of said first and second output levels.
28. An optical pixel according to claim 25, wherein said distance calculator is configured to calculate said distance as:
d = (c/2)·[Ti + Tp·V2/(V1 + V2)]
where d equals said distance to said object, c equals the speed of light, Ti equals an initial exposure time after transmission of said laser pulse, Tp equals the duration of the laser pulse, V1 equals said first output level minus a background noise level, and V2 equals said second output level minus a background noise level.
29. An optical pixel according to claim 25, wherein said distance calculator is configured to calculate said distance as:
d = (c/2)·[Ti + Tp·V2/(V1 + V2)]
where d equals said distance to said object, c equals the speed of light, Ti equals an initial exposure time after transmission of said laser pulse, Tp equals the duration of the laser pulse, V1 equals said first output level, and V2 equals said second output level.
30. A three-dimensional imaging apparatus, comprising: a sensor array comprising a plurality of optical pixels configured for exposure to a back-reflected laser pulse; an exposure controller associated with said sensor array, configured for controlling a respective exposure time of each of said pixels so as to expose each of said pixels to back-reflection from a respective distance sub-range; and a distance calculator associated with said sensor array, configured for calculating, from a pixel's respective output level, a distance to an object within said respective distance sub-range.
31. An apparatus according to claim 30, wherein said distance calculator comprises an object detector configured for determining if an object is present in a pixel's respective distance sub-range from a respective pixel output level.
32. An apparatus according to claim 31, wherein said object detector is configured to determine if said object is present by comparing said respective pixel output level to a threshold.
33. An apparatus according to claim 30, further comprising an image generator for outputting a three-dimensional image generated from said calculated distances.
34. An apparatus according to claim 30, further comprising a laser for generating laser pulses for back-reflection.
35. An apparatus according to claim 30, wherein said exposure controller is configured for selecting an initial time of said exposure in accordance with an initial distance of said respective distance sub-range and a duration of said exposure in accordance with a length of said respective distance sub-range.
36. An apparatus according to claim 30, wherein said exposure controller is configured for exposing successive rows of said array with a shutter timing corresponding to successive distance sub-ranges.
37. An apparatus according to claim 30, wherein, in successive frames, said exposure controller is configured for exposing the pixels of a row of said array with a shutter timing corresponding to successive distance sub-ranges.
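[Editorial illustration of claims 35-37, not part of the claimed subject matter: the shutter timing for a distance sub-range follows from the round-trip relation t = 2d/c, and the two scanning modes differ only in whether sub-ranges are stepped across rows within one frame (claim 36) or across frames for a given row (claim 37). All names below are illustrative.]

```python
# Editorial sketch of exposure scheduling per claims 35-37, assuming
# the round-trip relation t = 2d / c.
C = 299_792_458.0  # speed of light [m/s]

def shutter_window(d_start, length):
    """(start_time, duration) [s] exposing a pixel to back-reflections
    from the sub-range [d_start, d_start + length] (claim 35)."""
    return 2.0 * d_start / C, 2.0 * length / C

def row_schedule(num_rows, length):
    """Claim 36: within one frame, row k covers sub-range k."""
    return [shutter_window(k * length, length) for k in range(num_rows)]

def frame_schedule(num_frames, length):
    """Claim 37: in frame n, a given row covers sub-range n."""
    return [shutter_window(n * length, length) for n in range(num_frames)]
```

[For instance, a 15 m sub-range starting at 30 m opens the shutter roughly 200 ns after the pulse, for roughly 100 ns.]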
38. An apparatus according to claim 30, wherein said distance calculator comprises an output accumulator configured for accumulating a pixel output level from a plurality of exposures, and wherein said distance calculator is configured for calculating said distance from said accumulated pixel output level.
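[The accumulation of claim 38, combined with the thresholded object detection of claims 31 and 32, might look like the following editorial sketch; the decision rule and names are illustrative assumptions, not part of the claims.]

```python
# Editorial sketch of claims 31-32 and 38: accumulate a pixel's output
# over repeated exposures of the same sub-range, then decide whether an
# object is present by comparing the accumulated level to a threshold.
def accumulate_and_detect(exposure_levels, threshold):
    accumulated = sum(exposure_levels)   # claim 38: output accumulator
    present = accumulated > threshold    # claim 32: threshold comparison
    return accumulated, present
```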
39. An apparatus according to claim 30, wherein each of said optical pixels comprises a first optical sensing element configured for providing a first output level in accordance with exposure to a back-reflected laser pulse for a first time interval, and a second optical sensing element configured for providing a second output level in accordance with exposure to a back-reflected laser pulse for a successive time interval, and wherein said distance calculator is configured for calculating said distance from said first and second output levels.
40. An apparatus according to claim 39, wherein said distance calculator is configured to calculate said distance in accordance with a ratio of said first and second output levels.
PCT/IL2008/000812 2007-06-15 2008-06-15 Three-dimensional imaging method and apparatus Ceased WO2008152647A2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US92917307P 2007-06-15 2007-06-15
US60/929,173 2007-06-15

Publications (2)

Publication Number Publication Date
WO2008152647A2 true WO2008152647A2 (en) 2008-12-18
WO2008152647A3 WO2008152647A3 (en) 2010-02-25

Family

ID=40130290

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IL2008/000812 Ceased WO2008152647A2 (en) 2007-06-15 2008-06-15 Three-dimensional imaging method and apparatus

Country Status (1)

Country Link
WO (1) WO2008152647A2 (en)

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5988862A (en) * 1996-04-24 1999-11-23 Cyra Technologies, Inc. Integrated system for quickly and accurately imaging and modeling three dimensional objects
US6088099A (en) * 1996-10-30 2000-07-11 Applied Spectral Imaging Ltd. Method for interferometer based spectral imaging of moving objects
US6512838B1 (en) * 1999-09-22 2003-01-28 Canesta, Inc. Methods for enhancing performance and data acquired from three-dimensional image systems
ATE367587T1 (en) * 2003-10-29 2007-08-15 Fraunhofer Ges Forschung DISTANCE SENSOR AND METHOD FOR DISTANCE DETECTION
US7609875B2 (en) * 2005-05-27 2009-10-27 Orametrix, Inc. Scanner system and method for mapping surface of three-dimensional object

Cited By (85)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10962867B2 (en) 2007-10-10 2021-03-30 Gerard Dirk Smits Method, apparatus, and manufacture for a tracking camera or detector with fast asynchronous triggering
EP2322953A4 (en) * 2008-07-30 2012-01-25 Univ Shizuoka Nat Univ Corp DISTANCE IMAGE SENSOR AND METHOD FOR GENERATING IMAGE SIGNAL BY THE FLIGHT TIME METHOD
US8537218B2 (en) 2008-07-30 2013-09-17 National University Corporation Shizuoka University Distance image sensor and method for generating image signal by time-of-flight method
US8964083B2 (en) 2010-06-04 2015-02-24 Shenzhen Taishan Online Technology Co., Ltd. CMOS image sensor, timing control method and exposure method thereof
CN102907084A (en) * 2010-06-04 2013-01-30 深圳泰山在线科技有限公司 CMOS image sensor, timing control method and exposure method thereof
WO2011150574A1 (en) * 2010-06-04 2011-12-08 深圳泰山在线科技有限公司 Cmos image sensor, timing control method and exposure method thereof
CN102907084B (en) * 2010-06-04 2015-09-23 深圳泰山在线科技有限公司 Cmos image sensor and sequential control method thereof and exposure method
US12025807B2 (en) 2010-10-04 2024-07-02 Gerard Dirk Smits System and method for 3-D projection and enhancements for interactivity
FR2998666A1 (en) * 2012-11-27 2014-05-30 E2V Semiconductors METHOD FOR PRODUCING IMAGES WITH DEPTH INFORMATION AND IMAGE SENSOR
EP2735886A3 (en) * 2012-11-27 2014-08-13 I.S.L. Institut Franco-Allemand de Recherches de Saint-Louis 3D imaging method
WO2014082864A1 (en) * 2012-11-27 2014-06-05 E2V Semiconductors Method for producing images with depth information and image sensor
US9699442B2 (en) 2012-11-27 2017-07-04 E2V Semiconductors Method for producing images with depth information and image sensor
FR2998683A1 (en) * 2012-11-27 2014-05-30 Saint Louis Inst 3D IMAGING PROCESS
WO2015057535A1 (en) * 2013-10-17 2015-04-23 Microsoft Corporation Probabilistic time of flight imaging
US20150109414A1 (en) * 2013-10-17 2015-04-23 Amit Adam Probabilistic time of flight imaging
KR102233419B1 (en) 2013-10-17 2021-03-26 마이크로소프트 테크놀로지 라이센싱, 엘엘씨 Probabilistic time of flight imaging
KR20160071390A (en) * 2013-10-17 2016-06-21 마이크로소프트 테크놀로지 라이센싱, 엘엘씨 Probabilistic time of flight imaging
CN105723238A (en) * 2013-10-17 2016-06-29 微软技术许可有限责任公司 Probabilistic time of flight imaging
US10063844B2 (en) 2013-10-17 2018-08-28 Microsoft Technology Licensing, Llc. Determining distances by probabilistic time of flight imaging
JP2015210176A (en) * 2014-04-25 2015-11-24 キヤノン株式会社 Imaging apparatus and driving method of imaging apparatus
US9696424B2 (en) 2014-05-19 2017-07-04 Rockwell Automation Technologies, Inc. Optical area monitoring with spot matrix illumination
US9921300B2 (en) 2014-05-19 2018-03-20 Rockwell Automation Technologies, Inc. Waveform reconstruction in a time-of-flight sensor
US9256944B2 (en) 2014-05-19 2016-02-09 Rockwell Automation Technologies, Inc. Integration of optical area monitoring with industrial machine control
US11243294B2 (en) 2014-05-19 2022-02-08 Rockwell Automation Technologies, Inc. Waveform reconstruction in a time-of-flight sensor
CN105093206A (en) * 2014-05-19 2015-11-25 洛克威尔自动控制技术股份有限公司 Waveform reconstruction in a time-of-flight sensor
US9477907B2 (en) 2014-05-19 2016-10-25 Rockwell Automation Technologies, Inc. Integration of optical area monitoring with industrial machine control
EP2947477A3 (en) * 2014-05-19 2015-12-16 Rockwell Automation Technologies, Inc. Waveform reconstruction in a time-of-flight sensor
US11137497B2 (en) 2014-08-11 2021-10-05 Gerard Dirk Smits Three-dimensional triangulation and time-of-flight based tracking systems and methods
WO2016034408A1 (en) * 2014-09-03 2016-03-10 Basler Ag Method and device for the simplified detection of a depth image
US9625108B2 (en) 2014-10-08 2017-04-18 Rockwell Automation Technologies, Inc. Auxiliary light source associated with an industrial application
US10062201B2 (en) 2015-04-21 2018-08-28 Microsoft Technology Licensing, Llc Time-of-flight simulation of multipath light phenomena
WO2016171913A1 (en) * 2015-04-21 2016-10-27 Microsoft Technology Licensing, Llc Time-of-flight simulation of multipath light phenomena
US9864048B2 (en) 2015-05-17 2018-01-09 Microsoft Technology Licensing, Llc. Gated time of flight camera
WO2016186775A1 (en) * 2015-05-17 2016-11-24 Microsoft Technology Licensing, Llc Gated time of flight camera
US9989641B2 (en) 2015-09-28 2018-06-05 Sick Ag Method of detecting an object
JP2017106894A (en) * 2015-09-28 2017-06-15 ジック アーゲー Method of detecting objects
DE102015116368A1 (en) 2015-09-28 2017-03-30 Sick Ag Method for detecting an object
EP3147689A1 (en) * 2015-09-28 2017-03-29 Sick Ag Method for detecting an object
US11714170B2 (en) 2015-12-18 2023-08-01 Samsung Semiconductor, Inc. Real time position sensing of objects
JP2017151062A (en) * 2016-02-26 2017-08-31 株式会社東京精密 Surface shape measurement device and surface shape measurement method
US10311378B2 (en) 2016-03-13 2019-06-04 Microsoft Technology Licensing, Llc Depth from time-of-flight using machine learning
US11300666B2 (en) 2016-04-13 2022-04-12 Oulun Yliopisto Distance measuring device and transmitter, receiver and method thereof
WO2017178711A1 (en) * 2016-04-13 2017-10-19 Oulun Yliopisto Distance measuring device and transmitter, receiver and method thereof
CN110073243A (en) * 2016-10-31 2019-07-30 杰拉德·迪尔克·施密茨 Fast-scanning lidar with dynamic voxel detection
EP3532863A4 (en) * 2016-10-31 2020-06-03 Gerard Dirk Smits FAST SCAN LIDAR WITH DYNAMIC VOXEL PROBE
CN110073243B (en) * 2016-10-31 2023-08-04 杰拉德·迪尔克·施密茨 Fast-scanning lidar with dynamic voxel detection
US10935659B2 (en) 2016-10-31 2021-03-02 Gerard Dirk Smits Fast scanning lidar with dynamic voxel probing
US11709236B2 (en) 2016-12-27 2023-07-25 Samsung Semiconductor, Inc. Systems and methods for machine perception
JPWO2018131514A1 (en) * 2017-01-13 2019-11-07 ソニー株式会社 Signal processing apparatus, signal processing method, and program
EP3570066A4 (en) * 2017-01-13 2019-12-25 Sony Corporation SIGNAL PROCESSING DEVICE, SIGNAL PROCESSING METHOD, AND PROGRAM
US11585898B2 (en) 2017-01-13 2023-02-21 Sony Group Corporation Signal processing device, signal processing method, and program
JP7172603B2 (en) 2017-01-13 2022-11-16 ソニーグループ株式会社 SIGNAL PROCESSING DEVICE, SIGNAL PROCESSING METHOD, AND PROGRAM
JPWO2018159289A1 (en) * 2017-02-28 2019-12-19 ソニーセミコンダクタソリューションズ株式会社 Distance measuring device, distance measuring method, and distance measuring system
EP3591437A4 (en) * 2017-02-28 2020-07-22 Sony Semiconductor Solutions Corporation DISTANCE MEASURING DEVICE, DISTANCE MEASURING METHOD AND DISTANCE MEASURING SYSTEM
CN110168403A (en) * 2017-02-28 2019-08-23 索尼半导体解决方案公司 Distance-measuring device, distance measurement method and Range Measurement System
JP7027403B2 (en) 2017-02-28 2022-03-01 ソニーセミコンダクタソリューションズ株式会社 Distance measuring device and distance measuring method
CN110168403B (en) * 2017-02-28 2023-12-01 索尼半导体解决方案公司 Distance measuring device, distance measuring method and distance measuring system
US11067794B2 (en) 2017-05-10 2021-07-20 Gerard Dirk Smits Scan mirror systems and methods
US10935658B2 (en) 2017-09-20 2021-03-02 Industry-Academic Cooperation Foundation, Yonsei University Lidar sensor for vehicles and method of operating the same
EP3460517A1 (en) * 2017-09-20 2019-03-27 Industry-Academic Cooperation Foundation, Yonsei University Lidar sensor for vehicles and method of operating the same
US10935989B2 (en) 2017-10-19 2021-03-02 Gerard Dirk Smits Methods and systems for navigating a vehicle including a novel fiducial marker system
EP3477340A1 (en) * 2017-10-27 2019-05-01 Omron Corporation Displacement sensor
US11294057B2 (en) 2017-10-27 2022-04-05 Omron Corporation Displacement sensor
CN111758047B (en) * 2017-12-26 2024-01-19 罗伯特·博世有限公司 Single-chip RGB-D camera
US11240445B2 (en) * 2017-12-26 2022-02-01 Robert Bosch Gmbh Single-chip RGB-D camera
CN111758047A (en) * 2017-12-26 2020-10-09 罗伯特·博世有限公司 Single-chip RGB-D camera
US10725177B2 (en) 2018-01-29 2020-07-28 Gerard Dirk Smits Hyper-resolved, high bandwidth scanned LIDAR systems
US11002836B2 (en) 2018-05-14 2021-05-11 Rockwell Automation Technologies, Inc. Permutation of measuring capacitors in a time-of-flight sensor
US10996324B2 (en) 2018-05-14 2021-05-04 Rockwell Automation Technologies, Inc. Time of flight system and method using multiple measuring sequences
US10969476B2 (en) 2018-07-10 2021-04-06 Rockwell Automation Technologies, Inc. High dynamic range for sensing systems and methods
US10789506B2 (en) 2018-09-24 2020-09-29 Rockwell Automation Technologies, Inc. Object intrusion detection system and method
EP3627466A1 (en) * 2018-09-24 2020-03-25 Rockwell Automation Technologies, Inc. Object intrusion detection system and method
CN112888958A (en) * 2018-10-16 2021-06-01 布鲁克曼科技株式会社 Distance measuring device, camera and driving adjustment method of distance measuring device
EP3839555A4 (en) * 2018-10-16 2022-07-06 Brookman Technology, Inc. DISTANCE MEASURING DEVICE, CAMERA AND METHOD FOR ADJUSTING THE DRIVE OF A DISTANCE MEASURING DEVICE
US12181609B2 (en) 2018-10-16 2024-12-31 Toppan Holdings Inc. Distance measuring device, camera, and method for adjusting drive of distance measuring device
JP7259525B2 (en) 2019-04-26 2023-04-18 株式会社デンソー Optical ranging device and method
JP2020180941A (en) * 2019-04-26 2020-11-05 株式会社デンソー Optical distance measuring device and method therefor
CN112034471A (en) * 2019-06-04 2020-12-04 精準基因生物科技股份有限公司 Time-of-flight ranging device and time-of-flight ranging method
JP2021025833A (en) * 2019-08-01 2021-02-22 株式会社ブルックマンテクノロジ Distance image imaging device, and distance image imaging method
JP7463671B2 (en) 2019-08-01 2024-04-09 Toppanホールディングス株式会社 Distance image capturing device and distance image capturing method
WO2021020496A1 (en) * 2019-08-01 2021-02-04 株式会社ブルックマンテクノロジ Distance-image capturing apparatus and distance-image capturing method
US11829059B2 (en) 2020-02-27 2023-11-28 Gerard Dirk Smits High resolution scanning of remote objects with fast sweeping laser beams and signal recovery by twitchy pixel array
FR3119462A1 (en) * 2021-02-01 2022-08-05 Keopsys Industries Process for acquiring 3D images by line scanning and temporal gate detection
WO2022162328A1 (en) * 2021-02-01 2022-08-04 Keopsys Industries Method for acquiring 3d images by line scanning and time-gate detection
US12542890B2 (en) 2024-06-28 2026-02-03 Voxelsensors Srl System and method for 3-D projection and enhancements for interactivity

Also Published As

Publication number Publication date
WO2008152647A3 (en) 2010-02-25

Similar Documents

Publication Publication Date Title
WO2008152647A2 (en) Three-dimensional imaging method and apparatus
US11769775B2 (en) Distance-measuring imaging device, distance measuring method of distance-measuring imaging device, and solid-state imaging device
Perenzoni et al. A 64×64-Pixels digital silicon photomultiplier direct TOF sensor with 100-MPhotons/s/pixel background rejection and imaging/altimeter mode with 0.14% precision up to 6 km for spacecraft navigation and landing
CN108061603B (en) Time-of-flight optical sensor
US10681295B2 (en) Time of flight camera with photon correlation successive approximation
US10838066B2 (en) Solid-state imaging device, distance measurement device, and distance measurement method
US12481042B2 (en) Time of flight sensor
KR102471540B1 (en) Method for subtracting background light from exposure values of pixels in an imaging array and pixels using the same
US8829408B2 (en) Sensor pixel array and separated array of storage and accumulation with parallel acquisition and readout wherein each pixel includes storage sites and readout nodes
US9140795B2 (en) Time of flight sensor with subframe compression and method
JP6665873B2 (en) Photo detector
US10073164B2 (en) Distance-measuring/imaging apparatus, distance measuring method of the same, and solid imaging element
US11119196B2 (en) First photon correlated time-of-flight sensor
EP3881098A2 (en) High dynamic range direct time of flight sensor with signal-dependent effective readout rate
JP6709335B2 (en) Optical sensor, electronic device, arithmetic unit, and method for measuring distance between optical sensor and detection target
EP3370079B1 (en) Range and parameter extraction using processed histograms generated from a time of flight sensor - pulse detection
CN110221273A (en) Time flight depth camera and the distance measurement method of single-frequency modulation /demodulation
US10520590B2 (en) System and method for ranging a target with a digital-pixel focal plane array
US12153140B2 (en) Enhanced depth mapping using visual inertial odometry
WO2018181013A1 (en) Light detector
Bellisai et al. Low-power 20-meter 3D ranging SPAD camera based on continuous-wave indirect time-of-flight
Piron et al. An 8-Windows Continuous-Wave Indirect Time-of-Flight Method for High-Frequency SPAD-Based 3-D Imagers in 0.18 μm CMOS
Bronzi et al. 3D Sensor for indirect ranging with pulsed laser source
JP2021025810A (en) Distance image sensor, and distance image measurement device
US20250172675A1 (en) Time-of-flight light event detection circuitry and time-of-flight light event detection method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 08763570

Country of ref document: EP

Kind code of ref document: A2

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 08763570

Country of ref document: EP

Kind code of ref document: A2