
TWI696841B - Computation apparatus, sensing apparatus and processing method based on time of flight - Google Patents

Computation apparatus, sensing apparatus and processing method based on time of flight

Info

Publication number
TWI696841B
Authority
TW
Taiwan
Prior art keywords
phases
pixel
intensity information
difference
time
Prior art date
Application number
TW108121698A
Other languages
Chinese (zh)
Other versions
TW202032155A (en)
Inventor
魏守德
陳韋志
吳峻豪
Original Assignee
大陸商光寶電子(廣州)有限公司
光寶科技股份有限公司
Priority date
Filing date
Publication date
Application filed by 大陸商光寶電子(廣州)有限公司, 光寶科技股份有限公司
Application granted
Publication of TWI696841B
Publication of TW202032155A

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02 Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/06 Systems determining position data of a target
    • G01S17/08 Systems determining position data of a target for measuring distance only
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/497 Means for monitoring or calibrating

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Electromagnetism (AREA)
  • Optical Radar Systems And Details Thereof (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Transforming Light Signals Into Electric Signals (AREA)
  • Solid State Image Pick-Up Elements (AREA)
  • Measurement Of Optical Distance (AREA)

Abstract

A computation apparatus, a sensing apparatus, and a processing method based on time of flight (ToF) are provided. In the method, intensity information of at least two phases corresponding to at least one pixel is obtained. The intensity information is generated by sensing modulated light with the time delays of those phases. Whether to discard the intensity information of the at least two phases of the pixel is determined according to the difference between the intensity information of the at least two phases. Accordingly, the influence of motion blur on depth information estimation can be reduced.

Description

Computation apparatus, sensing apparatus and processing method based on time-of-flight ranging

The present invention relates to optical measurement technology, and particularly to a computation apparatus, a sensing apparatus, and a processing method based on time-of-flight (ToF) ranging.

With the development of technology, optical three-dimensional measurement has gradually matured, and time-of-flight ranging is currently a common active depth-sensing technique. The basic principle of ToF ranging is that modulated light (for example, infrared light or laser light) is reflected when it hits an object; the distance to the object is then derived from the travel-time difference or phase difference of the reflected modulated light, thereby producing depth information relative to the object.

Notably, referring to the timing diagram of FIG. 1A, the period during which a ToF system senses the modulated light is called the exposure time, which is analogous to a camera's shutter time. For example, logic 1 represents exposure/sensing and logic 0 represents stopping exposure. As the exposure time grows, the amount of data received from the modulated light also increases. However, a longer exposure time tends to cause motion blur. For example, FIG. 1B shows the afterimage caused by movement of the object under measurement, and FIG. 1C shows the light trail caused by moving vehicle lights. When depth information is computed with ToF ranging under motion blur, the resulting depth is inaccurate or the image is blurred. Therefore, providing a simple method that effectively reduces the influence of motion blur is one of the goals in this field.

In view of this, embodiments of the present invention provide a computation apparatus, a sensing apparatus, and a processing method based on time-of-flight ranging, which can effectively avoid invalid depth calculations caused by motion blur.

A computation apparatus based on time-of-flight ranging according to an embodiment of the present invention includes a memory and a processor. The memory records intensity information of at least two phases corresponding to at least one pixel, as well as the program code of a processing method for the computation apparatus; the intensity information is obtained by sensing modulated light with the time delays of those phases. The processor is coupled to the memory and configured to execute the program code. The processing method includes the following steps: obtain the intensity information of the at least two phases; according to the difference between the intensity information of those phases, decide whether to discard the intensity information of the at least two phases corresponding to the pixel.

A sensing apparatus based on time-of-flight ranging according to an embodiment of the present invention includes a modulated-light emitting circuit, a modulated-light receiving circuit, a memory, and a processor. The modulated-light emitting circuit emits modulated light. The modulated-light receiving circuit receives the modulated light with the time delays of at least two phases. The memory records intensity information of the at least two phases corresponding to at least one pixel, as well as the program code of a processing method for the sensing apparatus. The processor is coupled to the modulated-light receiving circuit and the memory and is configured to execute the program code. The processing method includes the following steps: obtain the intensity information of the at least two phases, which is obtained by sensing the modulated light with the time delays of those phases; according to the difference between the intensity information of those phases, decide whether to discard the intensity information of those phases corresponding to the pixel.

In another aspect, a processing method based on time-of-flight ranging according to an embodiment of the present invention includes the following steps: obtain intensity information of at least two phases corresponding to at least one pixel, where the intensity information is obtained by sensing modulated light with the time delays of those phases; according to the difference between the intensity information of those phases, decide whether to discard the intensity information of those phases corresponding to the pixel.

Based on the above, the computation apparatus, sensing apparatus, and processing method based on time-of-flight ranging of the embodiments of the present invention evaluate whether motion blur has occurred according to the difference between the intensity information of two phases, discard the pixels affected by motion blur accordingly, and either re-capture or use only the valid pixels. In this way, the influence of motion blur on depth information estimation can be effectively reduced.

To make the above features and advantages of the present invention more comprehensible, embodiments are described in detail below with reference to the accompanying drawings.

10: ranging system
100: sensing apparatus
110: modulated-light emitting circuit
120: modulated-light receiving circuit
122: photoelectric sensor
130: processor
140: signal processing circuit
150: memory
160: computation apparatus
170: attitude sensor
CA, CB: capacitors
QA, QB: changed charge amounts
CS: control signal
CSB: inverted control signal
DS: sensing signal
EM: modulated light
MS: modulation signal
NA, NB: nodes
REM: reflected modulated light
SW1, SW2: switches
VA, VB: voltage signals
TA: target object
S410~S430, S710~S790, S810~S890, S910~S950: steps

FIG. 1A is an example timing diagram of the exposure time and the modulated light signal.
FIGS. 1B and 1C are two examples illustrating motion blur.
FIG. 2 is a schematic diagram of a ranging system according to an embodiment of the invention.
FIG. 3A is a circuit schematic of a modulated-light receiving circuit according to an embodiment of the invention.
FIG. 3B is a signal waveform diagram according to the embodiment of FIG. 3A.
FIG. 4 is a flowchart of a processing method based on time-of-flight ranging according to an embodiment of the invention.
FIGS. 5A~5D illustrate an example of local motion blur.
FIGS. 6A~6D illustrate an example of global motion blur.
FIG. 7 is a flowchart of a processing method based on time-of-flight ranging according to the first embodiment of the invention.
FIG. 8 is a flowchart of a processing method based on time-of-flight ranging according to the second embodiment of the invention.
FIG. 9 is a flowchart of a processing method based on time-of-flight ranging according to the third embodiment of the invention.
FIGS. 10A and 10B illustrate an example of sensing with invalid data discarded.

FIG. 2 is a schematic diagram of a ranging system 10 according to an embodiment of the invention. Referring to FIG. 2, the ranging system 10 includes a ToF-based sensing apparatus 100 and a target object TA.

The sensing apparatus 100 includes, but is not limited to, a modulated-light emitting circuit 110, a modulated-light receiving circuit 120, a processor 130, a signal processing circuit 140, a memory 150, and an attitude sensor 170. The sensing apparatus 100 can be applied in fields such as three-dimensional modeling, object recognition, vehicle assistance systems, positioning, production-line testing, and error correction. The sensing apparatus 100 may be a stand-alone device or may be modularized and built into other devices; this is not intended to limit the scope of the invention.

The modulated-light emitting circuit 110 is, for example, a laser diode or a collimated-light generator, and the modulated-light receiving circuit 120 is, for example, an image capture device or a light sensing device (including at least a light sensor and a readout circuit). The signal processing circuit 140 is coupled to the modulated-light emitting circuit 110 and the modulated-light receiving circuit 120. The signal processing circuit 140 provides a modulation signal MS to the modulated-light emitting circuit 110 and a control signal CS to the modulated-light receiving circuit 120. The modulated-light emitting circuit 110 emits modulated light EM according to the modulation signal MS; the modulated light EM is, for example, infrared light, laser light, or collimated light in another band. For example, the modulation signal MS is a pulse signal, and the rising edge of the modulation signal MS corresponds to the trigger time of the modulated light EM. The modulated light EM is reflected after hitting the target object TA, and the modulated-light receiving circuit 120 receives the reflected modulated light REM. The modulated-light receiving circuit 120 demodulates the reflected modulated light REM according to the control signal CS to generate a sensing signal DS.

More specifically, FIG. 3A is a circuit schematic of the modulated-light receiving circuit 120 according to an embodiment of the invention. Referring to FIG. 3A, for ease of explanation the figure takes the circuit of a unit/single pixel as an example. The circuit corresponding to a unit/single pixel in the modulated-light receiving circuit 120 includes a photoelectric sensing element 122, a capacitor CA, a capacitor CB, a switch SW1, and a switch SW2. The photoelectric sensor 122 is, for example, a photodiode or another light-sensing element with a similar function for sensing the reflected modulated light REM. One end of the photoelectric sensor 122 receives a common reference voltage (for example, ground GND), and the other end is coupled to one end of the switch SW1 and one end of the switch SW2. The other end of the switch SW1 is coupled to the capacitor CA through a node NA and is controlled by the inverted signal CSB of the control signal CS. The other end of the switch SW2 is coupled to the capacitor CB through a node NB and is controlled by the control signal CS. The modulated-light receiving circuit 120 outputs the voltage (or current) signal VA at the node NA and the voltage (or current) signal VB at the node NB as the sensing signal DS. In another embodiment, the modulated-light receiving circuit 120 may instead output the difference between the voltage signal VA and the voltage signal VB as the sensing signal DS (which can serve as intensity information).

The embodiment of FIG. 3A is only an example; the circuit architecture of the modulated-light receiving circuit 120 is not limited thereto. The modulated-light receiving circuit 120 may have multiple photoelectric sensors 122, or more capacitors or switches. Those skilled in the art may make appropriate adjustments according to common knowledge and actual needs.

FIG. 3B is a signal waveform diagram according to the embodiment of FIG. 3A. Referring to FIGS. 3A and 3B, when the inverted control signal CSB is at a low level (for example, logic 0), the switch SW1 conducts; at this time the control signal CS is at a high level (for example, logic 1) and the switch SW2 does not conduct. Conversely, when the control signal CS is at a low level (for example, logic 0), the switch SW2 conducts; at this time the inverted control signal CSB is at a high level (for example, logic 1) and the switch SW1 does not conduct. In addition, when the photoelectric sensor 122 is conducting, it receives the reflected modulated light REM. When both the photoelectric sensor 122 and the switch SW1 conduct, the capacitor CA discharges (or charges); QA in FIG. 3B denotes the amount of charge changed on the capacitor CA, and the voltage signal VA at the node NA changes accordingly. When both the photoelectric sensor 122 and the switch SW2 conduct, the capacitor CB discharges (or charges); QB in FIG. 3B denotes the amount of charge changed on the capacitor CB, and the voltage signal VB at the node NB changes accordingly.

The processor 130 is coupled to the modulated-light receiving circuit 120. The processor 130 may be a central processing unit (CPU), another programmable general-purpose or special-purpose microprocessor, a digital signal processor (DSP), a programmable controller, an application-specific integrated circuit (ASIC), a similar element, or a combination of these elements. In the embodiments of the invention, the processor 130 can calculate the phase difference between the control signal CS and the reflected modulated light REM according to the sensing signal DS, and perform distance measurement based on this phase difference. For example, referring to FIG. 3B, from the difference between the voltage signal VA and the voltage signal VB, the processor 130 can calculate the phase difference between the control signal CS and the reflected modulated light REM. It should be noted that, in some embodiments, the processor 130 may have a built-in or electrically connected analog-to-digital converter (ADC) through which the sensing signal DS is converted into digital form.
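For illustration, the following Python sketch simulates the two-tap sampling just described for a single pixel over one modulation period and returns the two accumulated charges together with their difference, which the text notes may serve as the intensity information. The square-wave model, the 50% duty cycle, and all parameter names are assumptions made for this example and are not taken from the patent.

```python
import numpy as np

def two_tap_sample(delay_fraction, n=10_000):
    """Simulate one pixel of the two-tap demodulator for one modulation period.

    delay_fraction: round-trip delay of the reflected modulated light REM,
    as a fraction of one modulation period (square wave, 50% duty assumed).
    Returns (QA, QB, QA - QB).
    """
    t = np.arange(n) / n                          # one normalized modulation period
    window_a = t < 0.5                            # one tap accumulates in the first half-period
    window_b = ~window_a                          # the other tap in the second half-period
    rem = np.mod(t - delay_fraction, 1.0) < 0.5   # delayed reflected light (on/off)
    qa = np.sum(rem & window_a) / n
    qb = np.sum(rem & window_b) / n
    return qa, qb, qa - qb

print(two_tap_sample(0.10))   # small delay: most charge collects during the first half-period
print(two_tap_sample(0.40))   # larger delay: charge shifts toward the second half-period
```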

The memory 150 is coupled to the processor 130. The memory 150 may be any type of fixed or removable random access memory (RAM), flash memory, conventional hard disk drive (HDD), solid-state drive (SSD), non-volatile memory, a similar element, or a combination of these elements. In this embodiment, the memory 150 stores buffered or permanent data (for example, the intensity information corresponding to the sensing signal DS, threshold values, and so on), program code, software modules, operating systems, applications, drivers, and other data or files, whose details are described in the following embodiments. It is worth noting that the program code recorded in the memory 150 is for the processing method of the sensing apparatus 100, and this processing method is described in detail in the following embodiments.

The attitude sensor 170 is coupled to the processor 130. The attitude sensor 170 may be a gravity sensor (G-sensor)/accelerometer, an inertial sensor, a gyroscope, a magnetometer, or a combination thereof, and is used to detect motion or attitude such as acceleration, angular velocity, and orientation, and to generate attitude information accordingly (for example, data recording the three-axis gravitational acceleration, angular velocity, or magnetic force).

It should be noted that, in some embodiments, the processor 130 and the memory 150 may be separated out as a computation apparatus 160. The computation apparatus 160 may be a desktop computer, a notebook computer, a server, a smartphone, or a tablet. The computation apparatus 160 and the sensing apparatus 100 further have communication transceivers that can communicate with each other (for example, transceivers supporting Wi-Fi, Bluetooth, Ethernet, or other communication technologies), so that the computation apparatus 160 can obtain the sensing signal DS or the corresponding intensity information from the sensing apparatus 100 (which can be recorded in the memory 150 for the processor 130 to access).

To facilitate understanding of the operation flow of the embodiments of the invention, several embodiments are described in detail below to explain the operation of the sensing apparatus 100 and/or the computation apparatus 160. Hereinafter, the methods of the embodiments of the invention are described with reference to the elements and modules of the sensing apparatus 100 and the computation apparatus 160. Each flow of the method may be adjusted according to the implementation and is not limited thereto.

FIG. 4 is a flowchart of a processing method based on time-of-flight ranging according to an embodiment of the invention. Referring to FIG. 4, the processor 130 obtains intensity information of at least two phases corresponding to at least one pixel (step S410). Specifically, in the embodiment of FIG. 3B the modulation signal MS and the control signal CS are synchronized, but the signal processing circuit 140 may also make the modulation signal MS and the control signal CS asynchronous. That is, there may be a reference phase between the control signal CS and the modulation signal MS. The signal processing circuit 140 delays or advances the phase of the modulation signal MS or the control signal CS according to different reference phases, so that the modulation signal MS and the control signal CS have a phase difference/phase delay.

In a continuous-wave (CW) measurement scheme, the phase differences are, for example, 0, 90, 180, and 270 degrees, i.e., the four-phase method. Different phases correspond to charge-accumulation time intervals with different start and end points. In other words, the modulated-light receiving circuit 120 receives the reflected modulated light REM with the time delays of the four phases. By sensing the reflected modulated light REM with these time-delayed phases, sensing signals DS corresponding to the different phases are obtained, and the sensing signals DS can further serve as intensity information. This intensity information may record the amount of charge accumulated by a single pixel (one pixel corresponds to the circuit of FIG. 3A) or may be further converted into an intensity value. That is, the intensity information of each pixel is obtained by sensing the reflected modulated light REM with those time-delayed phases.
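As a rough model of the four-phase acquisition described above, the sketch below generates the four intensity samples for one pixel under an assumed ideal sinusoidal correlation between the return and the phase-delayed reference; the modulation frequency, amplitude, and offset are illustrative values, not taken from the patent.

```python
import numpy as np

C = 299_792_458.0   # speed of light (m/s)
F_MOD = 20e6        # assumed modulation frequency (Hz); not specified in the text

def simulate_four_phase(distance_m, amplitude=1.0, offset=0.5):
    """Generate the four intensity samples (0/90/180/270 degrees) for one pixel."""
    phi = 4.0 * np.pi * F_MOD * distance_m / C      # phase accumulated over the round trip 2*d
    thetas = np.deg2rad([0.0, 90.0, 180.0, 270.0])  # the four reference phases
    return offset + amplitude * np.cos(phi - thetas)

print(simulate_four_phase(1.5))   # four intensity values for a target at 1.5 m
```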

Next, the processor 130 decides, according to the difference between the intensity information of the at least two phases, whether to discard the intensity information of the at least two phases corresponding to the pixel (step S430). Specifically, experiments show that motion blur causes differences between the intensity information of different phases. For example, FIGS. 5A~5D illustrate local motion blur. Referring to FIG. 5A, the figure shows an image generated from the sensing signal DS when both the target object TA (a chair, for example) and the sensing apparatus 100 are stationary (for example, no shaking or jitter), with a resolution of 240×180 as an example. Referring to FIG. 5B, the figure shows the image produced by subtracting the intensity values of any two phases captured at the same time point (i.e., the difference of the intensity information). FIG. 5C is the image of FIG. 5B after scaling, and it can be observed that the intensity differences of all pixels are roughly the same and equal to or close to zero. Next, suppose the target object TA moves. Referring to FIG. 5D, the figure shows the image produced by subtracting the intensity values of any two phases captured at the same time point; compared with FIG. 5C, noticeably larger intensity differences can be observed for some pixels.

FIGS. 6A~6D illustrate global motion blur. Referring to FIG. 6A, the figure shows an image generated from the sensing signal DS when both the target object TA (a chair, for example) and the sensing apparatus 100 are stationary (with a resolution of 240×180 as an example). Referring to FIG. 6B, the figure shows the image produced by subtracting the intensity values of any two phases captured at the same time point (i.e., the difference of the intensity information). FIG. 6C is the image of FIG. 6B after scaling, and it can be observed that the intensity differences of all pixels are roughly the same and equal to or close to zero. Next, suppose the sensing apparatus 100 moves. Referring to FIG. 6D, the figure shows the image produced by subtracting the intensity values of any two phases captured at the same time point; compared with FIG. 6C, noticeably larger intensity differences can be observed for some pixels.

It follows that, whether the motion blur is local or global, the difference between the intensity information of two phases increases. Conversely, if there is no motion blur, the difference between the intensity information of the two phases is equal to or close to zero. Therefore, the difference between the intensity information of two phases can be used to evaluate whether motion blur has occurred.

In one embodiment, for each pixel the processor 130 determines whether the difference between the intensity information of at least two phases is greater than a difference threshold, and if the difference is greater than the difference threshold, the processor 130 discards the intensity information of those phases corresponding to the pixel. Specifically, the difference between the intensity information of two phases will inevitably not be exactly zero. The embodiments of the invention therefore introduce a tolerance: the processor 130 may have a preset or user-configured difference threshold (for example, 10, 20, or 40). If the difference is smaller than the difference threshold, the processor 130 considers that no motion blur has occurred. Conversely, if the difference is greater than the difference threshold, the processor 130 may directly consider that motion blur has occurred, or may further evaluate using other information. It is worth noting that the intensity information of pixels whose intensity difference is greater than the difference threshold may affect the result of the subsequent depth information estimation. Therefore, under certain conditions, the embodiments of the invention discard the intensity information of the four phases of any pixel whose difference exceeds the difference threshold. If the processor 130 discards the intensity information of those phases, it then determines whether to use the intensity information of those phases corresponding to the pixel at a different time point, or to use the intensity information of the different phases of other pixels that were not discarded. If the intensity information of at least two phases corresponding to any pixel at the current time point is discarded, the modulated-light receiving circuit 120 must sense again to re-obtain the intensity information of those phases for that pixel at a different (subsequent) time point, and then re-evaluate whether to use it. Alternatively, if only some pixels have the intensity information of those phases at the current time point discarded, the intensity information of the different phases of the pixels that were not discarded (i.e., the retained pixels) is used. In addition, the processor 130 calculates depth information according to the intensity information that is finally used.

It should be noted that, for any pixel, the processor 130 may compare the difference between any two phases (for example, 0 and 180 degrees, or 180 and 270 degrees) with the difference threshold (whose value is adjusted accordingly). In other embodiments, the processor 130 may instead pick the values of the two phases with the largest difference and compare them with the difference threshold (whose value is adjusted accordingly). Alternatively, the processor 130 may randomly pick the intensity information of more phases for comparison. If more than two differences are obtained, they may be further averaged or combined in a specific linear combination before being compared with the difference threshold.
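A minimal sketch of the per-pixel discard test described in the last two paragraphs, using the option of comparing the two phases with the largest difference; the array shapes, the use of NumPy, and the stand-in data are assumptions for illustration.

```python
import numpy as np

def blur_mask(phase_frames, diff_threshold):
    """Mark pixels whose phase data should be discarded.

    phase_frames: float array of shape (num_phases, H, W), one intensity image
    per reference phase from the same capture.
    Returns a boolean (H, W) mask that is True where the difference between
    the two phases with the largest difference exceeds diff_threshold.
    """
    max_pairwise_diff = phase_frames.max(axis=0) - phase_frames.min(axis=0)
    return max_pairwise_diff > diff_threshold

# Usage with stand-in data at the 240x180 resolution mentioned in the text.
frames = np.random.rand(4, 180, 240) * 100
mask = blur_mask(frames, diff_threshold=40.0)   # 40 is one of the example thresholds
print(int(mask.sum()), "pixels flagged as motion-blurred")
```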

The discard conditions and the corresponding handling are described in detail below. FIG. 7 is a flowchart of a processing method based on time-of-flight ranging according to the first embodiment of the invention. Referring to FIG. 7, for each pixel the processor 130 obtains the intensity information of at least two phases (step S710) and determines whether the difference between the intensity information is greater than the difference threshold (step S730); the details can be found in the descriptions of steps S410 and S430 and are not repeated here. Next, if the difference is not greater than the difference threshold, the processor 130 calculates depth information based on the intensity information of all phases corresponding to the pixel at the current time point (step S735). For example, the difference between the 0-degree and 180-degree intensities serves as the real part and the difference between the 90-degree and 270-degree intensities serves as the imaginary part; the angle formed by the real and imaginary parts is the phase difference φ, and the distance (i.e., the depth information) is (1/2)*c*φ/(2π*f), where c is the speed of light and f is the sampling frequency.
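The depth formula spelled out above can be sketched as follows; the function and variable names are illustrative, and the sample input values are assumed to correspond to a 20 MHz frequency and a target at roughly 1.5 m.

```python
import numpy as np

C = 299_792_458.0   # speed of light (m/s)

def depth_from_phases(i0, i90, i180, i270, freq):
    """Depth from the four phase intensities, following the formula above:
    real part = I0 - I180, imaginary part = I90 - I270,
    phi = angle(real, imag), distance = 1/2 * c * phi / (2 * pi * f)."""
    real = i0 - i180
    imag = i90 - i270
    phi = np.arctan2(imag, real) % (2.0 * np.pi)   # wrap the phase into [0, 2*pi)
    return 0.5 * C * phi / (2.0 * np.pi * freq)

# Assumed sample values for a target at about 1.5 m with a 20 MHz frequency.
print(depth_from_phases(0.809, 1.451, 0.191, -0.451, 20e6))   # ~1.5
```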

On the other hand, if the difference is greater than the difference threshold, the processor 130 may discard/cancel/not use the intensity information of the four phases of the pixel at the current time point (step S780), i.e., not use the intensity information of the pixel from the current charge-accumulation time interval. The processor 130 can adaptively adjust the exposure time for detecting the modulated light of those phases the next time sensing is performed through the modulated-light receiving circuit 120. Since shortening the exposure time mitigates motion blur, the processor 130 may further notify the modulated-light receiving circuit 120 to reduce the exposure time and re-capture/sense/receive the reflected modulated light REM (step S790), thereby obtaining the intensity information of at least two phases of the pixel in response to the modulated light REM at a different time point.

FIG. 8 is a flowchart of a processing method based on time-of-flight ranging according to the second embodiment of the invention. Referring to FIG. 8, the details of steps S810, S830, S835, S880, and S890 can be found in the descriptions of steps S710, S730, S735, S780, and S790 and are not repeated here. The difference from the first embodiment is that, if the difference between the intensity information of the two phases is greater than the difference threshold, the processor 130 determines, according to the attitude information obtained by the attitude sensor 170, whether the motion blur causing the difference is global or local (step S850). Taking the three-axis acceleration readings Xout, Yout, and Zout as an example, if

√(Xout² + Yout² + Zout²)

equals 1 g, the sensing apparatus 100 is stationary and the difference is caused by local motion blur (for example, the target object TA moved); if the value is not 1 g, the sensing apparatus 100 is not stationary and the difference is caused by global motion blur. It should be noted that, depending on the type of the attitude sensor 170, the condition for determining the stationary state may differ; those applying the embodiments of the invention may adjust the corresponding parameters, which is not intended to limit the scope of the invention.

If the motion blur is determined to be global, the processor 130 directly discards the intensity information of those phases corresponding to the pixel at the current time point, and captures again through the modulated-light receiving circuit 120 to re-obtain the intensity information of at least two phases of the pixel in the next charge-accumulation time interval (step S855). On the other hand, if the motion blur is determined to be local, the processor 130 may further decide, according to a blurred-pixel count, whether to re-obtain through the modulated-light receiving circuit 120 the intensity information of at least two phases corresponding to the pixel at a different time point. The blurred-pixel count is a number accumulated whenever a pixel is determined to have motion blur. In other words, if the difference between the intensity information corresponding to a certain pixel is greater than the difference threshold, the blurred-pixel count is incremented, and after all pixels have been evaluated the final blurred-pixel count is obtained.
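A hedged sketch of the global/local classification based on the 1 g criterion described above; the tolerance around 1 g and the unit convention (readings expressed in g) are assumptions, since the text states only the exact 1 g condition.

```python
import math

GRAVITY_TOL = 0.05   # assumed tolerance around 1 g; not specified in the text

def blur_is_global(x_out, y_out, z_out):
    """Classify the cause of the intensity difference from the attitude reading:
    a magnitude of about 1 g means the apparatus is stationary, so the blur is
    local (the target moved); otherwise the apparatus moved and the blur is global."""
    magnitude = math.sqrt(x_out ** 2 + y_out ** 2 + z_out ** 2)
    return abs(magnitude - 1.0) > GRAVITY_TOL

print(blur_is_global(0.01, 0.02, 1.00))   # False: apparatus stationary, blur is local
print(blur_is_global(0.30, 0.10, 1.20))   # True: apparatus moving, blur is global
```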

It is worth noting that experiments show that different difference thresholds correspond to different blurred-pixel counts. In other words, for different difference thresholds, the proportion of pixels in a sensed image that are judged to have motion blur may differ. Taking a 240×180 resolution as an example, Table (1) lists the blurred-pixel counts and their proportions corresponding to different difference thresholds:

[Table (1) is rendered only as an image in the original publication and is not reproduced here.]

The blurred-pixel counts obtained experimentally in Table (1) can, for example, be used as the count threshold for comparison, but this count threshold may be adjusted according to different difference thresholds, different resolutions, or other conditions; the embodiments of the invention are not limited in this respect. The processor 130 determines whether the number of blurred pixels obtained at the current time point (or within the current sampling interval) is greater than the set count threshold (step S870). If the blurred-pixel count is greater than the set count threshold, the processor 130 discards the intensity information of those phases corresponding to the pixel at the current time point (step S880), and captures again through the modulated-light receiving circuit 120 to re-obtain the intensity information of at least two phases corresponding to the pixel at a different time point (for example, the next sampling time point or a subsequent sampling interval) (step S890). Conversely, if the blurred-pixel count obtained in the current time interval is not greater than the set count threshold, the processor 130 directly calculates the depth information according to the intensity information of the plural phases of the pixel obtained at the current time point (step S835), i.e., the intensity information corresponding to the pixel is retained.
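The count-based decision of the second embodiment could look roughly like the following sketch; the specific difference threshold, count threshold, and stand-in data are assumed values for illustration.

```python
import numpy as np

def keep_or_recapture(phase_frames, diff_threshold, count_threshold):
    """Local-blur handling in the style of the second embodiment: count the pixels
    whose phase-intensity difference exceeds diff_threshold; if the count exceeds
    count_threshold, the capture is discarded (and re-taken, e.g. with a shorter
    exposure), otherwise it is kept for the depth calculation."""
    max_pairwise_diff = phase_frames.max(axis=0) - phase_frames.min(axis=0)
    blur_pixels = int(np.count_nonzero(max_pairwise_diff > diff_threshold))
    return blur_pixels <= count_threshold, blur_pixels

frames = np.random.rand(4, 180, 240) * 100     # stand-in data, 240x180 resolution
keep, blur_pixels = keep_or_recapture(frames, diff_threshold=40.0, count_threshold=2000)
print("keep capture:", keep, "| blurred pixels:", blur_pixels)
```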

In one embodiment, the length of the exposure time adjusted in step S890 may be related to the difference between the blurred-pixel count and the count threshold. For example, the adjusted exposure time can be obtained according to equations (1) and (2):

[Equations (1) and (2) are rendered only as images in the original publication and are not reproduced here.]

where exposure_time' is the adjusted exposure time, exposure_time is the original exposure time, blur_pixels is the blurred-pixel count, and threshold is the count threshold.

It should be noted that, in other embodiments, the modulated-light receiving circuit 120 may also directly reduce the exposure time by a specific length or by a random length.

FIG. 9 is a flowchart of a processing method based on time-of-flight ranging according to the third embodiment of the invention. Referring to FIG. 9, the details of steps S910, S930, S935, and S880 can be found in the descriptions of steps S710, S730, S735, and S780 and are not repeated here. The difference from the first embodiment is that, if the difference between the intensity information of at least two phases of any pixel is greater than the difference threshold, the processor 130 may discard/cancel/not use only the intensity information of those phases corresponding to that pixel at the current time point (step S950), where a discarded pixel is one judged to have motion blur (for example, the intensity difference between two phases of the pixel is greater than the difference threshold). The processor 130 may record the positions, indices, or codes of these discarded pixels in the sensed image. In step S935, the processor 130 calculates the depth information according to the intensity information of the pixels that were not discarded. It should be noted that the pixels remaining after the discarded pixels are excluded are the non-discarded pixels. Excluding the intensity information of the discarded pixels reduces the influence of motion blur, and the intensity information of pixels judged not to be affected by motion blur can still be used for the subsequent depth information calculation. In this way, repeated re-capturing can be avoided and efficiency improved.

FIGS. 10A and 10B illustrate an example of sensing with invalid data discarded. Referring first to FIG. 10A, the figure shows an image in which all pixels are retained. Referring next to FIG. 10B, assuming the difference threshold is 40, the pixels whose difference exceeds 40 are discarded, and the processor 130 may set the intensity of these discarded pixels to zero or simply ignore them.
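A minimal sketch of the third-embodiment handling and the FIG. 10B example: blurred pixels are discarded (set to zero here) and depth is computed only from the remaining pixels. The depth formula and the zeroing convention follow the description above, while the function names and the reuse of the largest pairwise difference as the blur test are illustrative choices.

```python
import numpy as np

def masked_depth(i0, i90, i180, i270, freq, diff_threshold):
    """Keep the capture, discard only the pixels flagged as motion-blurred
    (their depth is set to zero, as in FIG. 10B), and compute depth from the
    remaining pixels. i0..i270 are (H, W) intensity images for the four phases."""
    c = 299_792_458.0
    frames = np.stack([i0, i90, i180, i270])
    blurred = (frames.max(axis=0) - frames.min(axis=0)) > diff_threshold  # per-pixel blur test

    phi = np.arctan2(i90 - i270, i0 - i180) % (2.0 * np.pi)
    depth = 0.5 * c * phi / (2.0 * np.pi * freq)
    depth[blurred] = 0.0     # discarded pixels contribute nothing to the depth map
    return depth, blurred
```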

It should be noted that the steps of the foregoing three embodiments may be interchanged, added, or modified according to actual needs. For example, the mechanism of step S870 for evaluating the blurred-pixel count may further be added to step S730 of the first embodiment.

In summary, the computation apparatus, sensing apparatus, and processing method based on time-of-flight ranging of the embodiments of the invention can determine whether motion blur has occurred based on the difference between the intensity information of any two phases, the blurred-pixel count, the attitude information, or a combination thereof. If motion blur is detected, the scene can be re-captured, or the intensity information of the pixels with motion blur can be discarded. In this way, the influence of motion blur on the subsequent depth information estimation can be reduced in a simple manner.

Although the invention has been disclosed above by way of embodiments, they are not intended to limit the invention. Anyone with ordinary skill in the art may make some changes and modifications without departing from the spirit and scope of the invention; therefore, the protection scope of the invention shall be defined by the appended claims.

S410~S430: steps

Claims (13)

一種基於飛行時間測距的運算裝置,包括:一記憶體,記錄至少一畫素所對應至少二相位的強度資訊以及用於該運算裝置的處理方法所對應的程式碼,其中該強度資訊係利用該至少二相位在時間上延遲感測一調變光所得;以及一處理器,耦接該記憶體,並經配置用以執行該程式碼,該處理方法包括:取得該至少二相位的強度資訊;以及依據該至少二相位的強度資訊之間的差異,決定是否放棄該畫素所對應該至少二相位的強度資訊,其中決定是否放棄該畫素所對應該至少二相位的強度資訊的步驟包括:判斷該差異是否大於一差異門檻值;反應於該差異大於該差異門檻值,放棄該畫素所對應該至少二相位的強度資訊;反應於放棄該畫素所對應該至少二相位的強度資訊,判斷是否重新取得該畫素於不同時間點所對應至少二相位的強度資訊;以及依據複數個該畫素所對應不同相位的強度資訊來計算一深度資訊。 A computing device based on time-of-flight ranging includes: a memory that records intensity information corresponding to at least two phases of at least one pixel and a program code corresponding to a processing method for the computing device, wherein the intensity information is utilized The at least two phases are delayed in time by sensing a dimming light; and a processor coupled to the memory and configured to execute the program code, the processing method includes: obtaining intensity information of the at least two phases And according to the difference between the intensity information of the at least two phases, determine whether to discard the intensity information corresponding to the at least two phases of the pixel, wherein the step of determining whether to discard the intensity information corresponding to the at least two phases of the pixel includes : Determine whether the difference is greater than a difference threshold; respond to the difference greater than the difference threshold, discard the intensity information corresponding to at least two phases of the pixel; respond to discard the intensity information corresponding to at least two phases of the pixel To determine whether to regain the intensity information of at least two phases corresponding to the pixel at different time points; and to calculate depth information based on the intensity information of different phases corresponding to the pixel. 如申請專利範圍第1項所述基於飛行時間測距的運算裝置,其中該處理方法還包括:適性調整偵測該至少二相位的調變光的一曝光時間,以重新 取得該畫素於不同時間點所對應至少二相位的強度資訊。 The computing device based on time-of-flight ranging as described in item 1 of the scope of the patent application, wherein the processing method further includes: adaptively adjusting an exposure time for detecting the at least two phases of modulated light to restart Obtain intensity information of at least two phases corresponding to the pixel at different time points. 
如申請專利範圍第1項所述基於飛行時間測距的運算裝置,其中該記憶體更記錄一姿態資訊,該姿態資訊係對應於感測該至少二相位的調變光的裝置,且該處理方法還包括:反應於該差異大於該差異門檻值,依據該姿態資訊判斷造成該差異的動態模糊為全域(global)或本域(local);反應於全域的動態模糊,放棄該畫素所對應該至少二相位的強度資訊,並於重新取得該畫素於不同時間點所對應至少二相位的強度資訊;以及反應於本域的動態模糊,依據一模糊畫素數量來決定是否重新取得該畫素於不同時間點所對應至少二相位的強度資訊,其中該模糊畫素數量是反應於該畫素經判斷有動態模糊而累計的一數量。 The computing device based on time-of-flight ranging as described in item 1 of the patent scope, wherein the memory further records a posture information corresponding to the device for sensing the at least two-phase dimming light, and the processing The method further includes: in response to the difference being greater than the difference threshold, judging from the posture information that the motion blur caused by the difference is global or local; in response to the global motion blur, discarding the pixel The intensity information of at least two phases should be obtained, and the intensity information of at least two phases corresponding to the pixel at different time points should be re-acquired; and the dynamic blur in the local area can be used to decide whether to re-acquire the image according to the number of fuzzy pixels Intensity information corresponding to at least two phases corresponding to different time points, wherein the number of blurred pixels is an amount that is accumulated in response to the pixel being judged to have motion blur. 如申請專利範圍第3項所述基於飛行時間測距的運算裝置,其中該處理方法還包括:反應於該模糊畫素數量大於一數量門檻值,放棄該畫素所對應該至少二相位的強度資訊,並重新取得該畫素於不同時間點所對應至少二相位的強度資訊。 An arithmetic device based on time-of-flight ranging as described in item 3 of the patent application scope, wherein the processing method further includes: abandoning the intensity corresponding to at least two phases of the pixel in response to the number of blurred pixels being greater than a threshold value Information, and regain the intensity information of at least two phases corresponding to the pixel at different time points. 如申請專利範圍第1項所述基於飛行時間測距的運算裝置,其中該處理方法還包括:反應於該差異大於該差異門檻值,放棄至少一該畫素所對應不同相位的強度資訊,其中受放棄的該至少一畫素是判斷有動態 模糊;以及依據未放棄的複數個該畫素所對應不同相位的強度資訊來計算該深度資訊。 The computing device based on time-of-flight ranging as described in item 1 of the patent application scope, wherein the processing method further comprises: in response to the difference being greater than the difference threshold, discarding at least one intensity information corresponding to different phases of the pixel, wherein The abandoned at least one pixel is judged to be dynamic Blur; and calculate the depth information according to the intensity information of different phases corresponding to the pixels that have not been abandoned. 
一種基於飛行時間測距的感測裝置,包括:一調變光發射電路,發射一調變光;一調變光接收電路,利用至少二相位在時間延遲上以接收該調變光;一記憶體,記錄至少一畫素所對應至少二相位的強度資訊以及用於該感測裝置的處理方法所對應的程式碼;以及一處理器,耦接該調變光接收電路及該記憶體,並經配置用以執行該程式碼,該處理方法包括:取得該至少二相位的強度資訊,其中該強度資訊係利用該至少二相位在時間上延遲感測該調變光所得;以及依據該至少二相位的強度資訊之間的差異,決定是否放棄該畫素所對應該至少二相位的強度資訊,其中決定是否放棄該畫素所對應該至少二相位的強度資訊的操作包括:判斷該差異是否大於一差異門檻值;反應於該差異大於該差異門檻值,放棄該畫素所對應該至少二相位的強度資訊;反應於放棄該畫素所對應該至少二相位的強度資訊,判斷是否重新取得該畫素於不同時點所對應至少二相位的強度資訊;以及 依據複數個該畫素所對應不同相位的強度資訊來計算一深度資訊。 A sensing device based on time-of-flight ranging includes: a dimming transmitting circuit that emits a dimming light; a dimming receiving circuit that uses at least two phases to receive the dimming light on a time delay; a memory Body, recording the intensity information of at least two phases corresponding to at least one pixel and the program code corresponding to the processing method for the sensing device; and a processor coupled to the dimming light receiving circuit and the memory, and Configured to execute the program code, the processing method includes: obtaining intensity information of the at least two phases, wherein the intensity information is obtained by using the at least two phases to delay sense the dimming in time; and based on the at least two phases The difference between the phase intensity information determines whether to discard the intensity information corresponding to at least two phases of the pixel, and the operation to determine whether to discard the intensity information corresponding to at least two phases of the pixel includes: determining whether the difference is greater than A difference threshold; in response to the difference being greater than the difference threshold, the intensity information corresponding to at least two phases of the pixel is discarded; in response to discarding the intensity information corresponding to at least two phases of the pixel, it is judged whether to reacquire the Intensity information of at least two phases corresponding to pixels at different time points; and A depth information is calculated according to the intensity information of different phases corresponding to the pixels. 如申請專利範圍第6項所述基於飛行時間測距的感測裝置,其中該處理方法還包括:適性調整該調變光接收電路偵測該至少二相位的調變光的一曝光時間,以重新取得該畫素於不同時間點所對應至少二相位的強度資訊。 The sensing device based on time-of-flight ranging as described in item 6 of the patent application scope, wherein the processing method further comprises: adaptively adjusting an exposure time of the modulated light receiving circuit to detect the modulated light of the at least two phases, to Regain the intensity information of at least two phases corresponding to the pixel at different time points. 
如申請專利範圍第6項所述基於飛行時間測距的感測裝置,更包括:一姿態感測器,感測該感測裝置的姿態,並據以產生一姿態資訊,且該處理方法還包括:反應於該差異大於一差異門檻值,依據該姿態資訊判斷造成該差異的動態模糊為全域或本域;反應於全域的動態模糊,放棄該畫素所對應該至少二相位的強度資訊,並透過該調變光接收電路重新取得該畫素於不同時間點所對應至少二相位的強度資訊;以及反應於本域的動態模糊,依據一模糊畫素數量來決定是否重新透過該調變光接收電路重新取得該畫素於不同時間點所對應至少二相位的強度資訊,其中該模糊畫素數量是反應於經判斷有動態模糊而累計的一數量。 The sensing device based on time-of-flight ranging as described in item 6 of the scope of the patent application further includes: a posture sensor that senses the posture of the sensing device and generates a posture information accordingly, and the processing method also Including: in response to the difference being greater than a difference threshold, the motion blur caused by the difference is determined to be global or local based on the posture information; in response to the global motion blur, the intensity information corresponding to at least two phases of the pixel is discarded, And obtain the intensity information of at least two phases corresponding to the pixel at different time points through the dimming light receiving circuit; and reflect the dynamic blur in the local area, and decide whether to re-transmit the dimming light according to the number of a blur pixel The receiving circuit retrieves the intensity information of at least two phases corresponding to the pixel at different time points, wherein the number of blurred pixels is an amount accumulated in response to the determination of dynamic blur. 一種基於飛行時間測距的處理方法,包括:取得至少一畫素所對應至少二相位的強度資訊,其中該強度 資訊係利用該至少二相位在時間上延遲感測一調變光所得;以及依據該至少二相位的強度資訊之間的差異,決定是否放棄該畫素所對應該至少二相位的強度資訊,其中決定是否放棄該畫素所對應該至少二相位的強度資訊的步驟包括:判斷該差異是否大於一差異門檻值;反應於該差異大於該差異門檻值,放棄該畫素所對應該至少二相位的強度資訊;反應於放棄該畫素所對應該至少二相位的強度資訊,判斷是否重新取得該畫素於不同時點所對應至少二相位的強度資訊;以及依據複數個該畫素所對應不同相位的強度資訊來計算一深度資訊。 A processing method based on time-of-flight ranging includes: obtaining intensity information of at least two phases corresponding to at least one pixel, wherein the intensity The information is obtained by using the at least two phases to delay sense a dimming light in time; and according to the difference between the intensity information of the at least two phases, it is determined whether to discard the intensity information corresponding to the at least two phases of the pixel, wherein The step of deciding whether to discard the intensity information corresponding to at least two phases of the pixel includes: determining whether the difference is greater than a difference threshold; in response to the difference being greater than the difference threshold, discarding the pixel corresponding to at least two phases Intensity information; in response to discarding the intensity information corresponding to at least two phases of the pixel, to determine whether to regain the intensity information of at least two phases corresponding to the pixel at different points in time; and according to the number of different phases corresponding to the pixel Strength information to calculate a depth of information. 如申請專利範圍第9項所述基於飛行時間測距的處理方法,其中決定是否放棄該畫素所對應該至少二相位的強度資訊的步驟之後,更包括:適性調整偵測該至少二相位的調變光的一曝光時間,以重新取得該畫素於不同時間點所對應至少二相位的強度資訊。 The processing method based on time-of-flight ranging as described in item 9 of the patent application scope, wherein after the step of deciding whether to discard the intensity information corresponding to at least two phases of the pixel, it further includes: adaptively adjusting the detection of the at least two phases Adjusting an exposure time of light to regain the intensity information of at least two phases corresponding to the pixel at different time points. 
The processing method based on time-of-flight ranging according to claim 9, wherein after the step of determining whether the difference is greater than the difference threshold, the method further comprises: obtaining posture information, the posture information corresponding to a device that senses the modulated light of the at least two phases; in response to the difference being greater than the difference threshold, determining, according to the posture information, whether the motion blur causing the difference is global or local; in response to global motion blur, discarding the intensity information of the at least two phases corresponding to the pixel and re-obtaining intensity information of at least two phases corresponding to the pixel at different time points; and in response to local motion blur, determining, according to a number of blurred pixels, whether to re-obtain intensity information of at least two phases corresponding to the pixel at different time points, wherein the number of blurred pixels is a count accumulated in response to the pixel being determined to have motion blur.

The processing method based on time-of-flight ranging according to claim 11, wherein after the step of determining whether the difference is greater than the difference threshold, the method further comprises: in response to the number of blurred pixels being greater than a count threshold, discarding the intensity information of the at least two phases corresponding to the pixel, and re-obtaining intensity information of at least two phases corresponding to the pixel at different time points.

The processing method based on time-of-flight ranging according to claim 9, wherein after the step of determining whether the difference is greater than the difference threshold, the method further comprises: in response to the difference being greater than the difference threshold, discarding intensity information of different phases corresponding to at least one of the pixels, wherein the discarded at least one pixel is determined to have motion blur; and calculating the depth information according to intensity information of different phases corresponding to a plurality of the pixels that are not discarded.
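The per-pixel decision in the apparatus and method claims above (claims 6 and 9) can be pictured with a minimal sketch. It assumes the common four-phase sampling scheme (intensity samples at 0°, 90°, 180° and 270° phase delay) and one conventional arctangent depth formula; the consistency metric, the function and variable names, and the threshold handling are illustrative assumptions, not the claimed implementation, which only requires at least two phases.

```python
import numpy as np

C = 299_792_458.0  # speed of light in m/s


def depth_from_four_phases(a0, a90, a180, a270, f_mod, diff_threshold):
    """Return (depth_in_meters, keep_flag) for one pixel.

    a0..a270 are intensity samples at phase delays of 0, 90, 180 and 270
    degrees; f_mod is the modulation frequency in Hz.
    """
    # In the absence of motion blur the two sample pairs integrate the same
    # total energy, so a large gap between them flags an unreliable pixel.
    difference = abs((a0 + a180) - (a90 + a270))
    if difference > diff_threshold:
        return None, False  # discard; the caller may re-acquire this pixel later

    # One common continuous-wave ToF convention: phase offset -> distance.
    phase = np.arctan2(a90 - a270, a0 - a180) % (2.0 * np.pi)
    depth = C * phase / (4.0 * np.pi * f_mod)
    return depth, True
```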
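The posture-dependent branch (claims 8, 11 and 12) distinguishes global from local motion blur and gates re-acquisition on an accumulated blurred-pixel count. A sketch of that decision follows; the angular-rate input, both threshold values and the returned labels are assumptions chosen only to make the flow explicit.

```python
def classify_motion_blur(angular_rate_dps, blurred_pixel_count,
                         rate_threshold_dps=2.0, count_threshold=1000):
    """Decide how to react to blur in the current frame.

    angular_rate_dps    -- rotation rate reported by the posture (IMU) sensor
    blurred_pixel_count -- running count of pixels whose phase difference
                           exceeded the difference threshold
    """
    if angular_rate_dps > rate_threshold_dps:
        # The camera itself moved: treat the blur as global and re-acquire
        # all phase images at new time points.
        return "global_blur_reacquire"
    if blurred_pixel_count > count_threshold:
        # Too many locally blurred pixels to simply drop them.
        return "local_blur_reacquire"
    # Local blur only: discard the flagged pixels and keep the frame.
    return "local_blur_keep"
```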
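At frame level, claims 12 and 13 combine the blurred-pixel count threshold with depth calculation over the pixels that were not discarded. A vectorized sketch under the same four-phase assumption is given below; masking discarded pixels with NaN and returning None to request re-acquisition are likewise assumed conventions, not part of the claims.

```python
import numpy as np


def frame_depth_or_reacquire(a0, a90, a180, a270, f_mod,
                             diff_threshold, count_threshold):
    """Mask blurred pixels, compute depth for the rest, and request a
    re-acquisition when too many pixels were discarded.

    a0, a90, a180 and a270 are H x W phase images.
    """
    difference = np.abs((a0 + a180) - (a90 + a270))
    blurred = difference > diff_threshold              # per-pixel discard mask

    phase = np.mod(np.arctan2(a90 - a270, a0 - a180), 2.0 * np.pi)
    depth = 299_792_458.0 * phase / (4.0 * np.pi * f_mod)
    depth = np.where(blurred, np.nan, depth)           # no depth for discarded pixels

    if int(blurred.sum()) > count_threshold:
        return None                                    # re-acquire all phases at new time points
    return depth
```

Keeping a NaN mask rather than removing entries outright lets later stages interpolate around isolated blurred pixels while leaving the frame geometry intact.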
TW108121698A 2019-02-19 2019-06-21 Computation apparatus, sensing apparatus and processing method based on time of flight TWI696841B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201962807246P 2019-02-19 2019-02-19
US62/807,246 2019-02-19

Publications (2)

Publication Number Publication Date
TWI696841B true TWI696841B (en) 2020-06-21
TW202032155A TW202032155A (en) 2020-09-01

Family

ID=72110768

Family Applications (2)

Application Number Title Priority Date Filing Date
TW108115961A TWI741291B (en) 2019-02-19 2019-05-09 Verification method of time-of-flight camera module and verification system thereof
TW108121698A TWI696841B (en) 2019-02-19 2019-06-21 Computation apparatus, sensing apparatus and processing method based on time of flight

Family Applications Before (1)

Application Number Title Priority Date Filing Date
TW108115961A TWI741291B (en) 2019-02-19 2019-05-09 Verification method of time-of-flight camera module and verification system thereof

Country Status (2)

Country Link
CN (5) CN111580117A (en)
TW (2) TWI741291B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102020123671B4 (en) * 2020-09-10 2022-12-22 Ifm Electronic Gmbh Method and device for dynamic expansion of a time-of-flight camera system
CN112954230B (en) * 2021-02-08 2022-09-09 深圳市汇顶科技股份有限公司 Depth measurement method, chip and electronic device
CN113298778B (en) * 2021-05-21 2023-04-07 奥比中光科技集团股份有限公司 Depth calculation method and system based on flight time and storage medium
CN113219476B (en) * 2021-07-08 2021-09-28 武汉市聚芯微电子有限责任公司 Ranging method, terminal and storage medium
TWI762387B (en) * 2021-07-16 2022-04-21 台達電子工業股份有限公司 Time of flight devide and inspecting method for the same

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105894492A (en) * 2015-01-06 2016-08-24 三星电子株式会社 T-O-F depth imaging device rendering depth image of object and method thereof
TW201719193A (en) * 2015-09-10 2017-06-01 義明科技股份有限公司 Non-contact optical sensing device and method for sensing depth and position of an object in three-dimensional space
TW201841000A (en) * 2017-02-17 2018-11-16 日商北陽電機股份有限公司 Object capturing device

Family Cites Families (40)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002139818A (en) * 2000-11-01 2002-05-17 Fuji Photo Film Co Ltd Lens-fitted photographic film unit
CN101252802B (en) * 2007-02-25 2013-08-21 电灯专利信托有限公司 Charge pump electric ballast for low input voltage
JP2008209298A (en) * 2007-02-27 2008-09-11 Fujifilm Corp Ranging device and ranging method
EP2402783B1 (en) * 2009-02-27 2013-10-02 Panasonic Corporation Distance measuring apparatus
CN102735910B (en) * 2011-04-08 2014-10-29 中山大学 Maximum peak voltage detection circuit
WO2013009099A2 (en) * 2011-07-12 2013-01-17 삼성전자 주식회사 Device and method for blur processing
CN103181156B (en) * 2011-07-12 2017-09-01 三星电子株式会社 Fuzzy Processing device and method
EP2728374B1 (en) * 2012-10-30 2016-12-28 Technische Universität Darmstadt Invention relating to the hand-eye calibration of cameras, in particular depth image cameras
AT513589B1 (en) * 2012-11-08 2015-11-15 Bluetechnix Gmbh Recording method for at least two ToF cameras
US9019480B2 (en) * 2013-02-26 2015-04-28 Jds Uniphase Corporation Time-of-flight (TOF) system, sensor pixel, and method
US9681123B2 (en) * 2014-04-04 2017-06-13 Microsoft Technology Licensing, Llc Time-of-flight phase-offset calibration
US9641830B2 (en) * 2014-04-08 2017-05-02 Lucasfilm Entertainment Company Ltd. Automated camera calibration methods and systems
JP6424338B2 (en) * 2014-06-09 2018-11-21 パナソニックIpマネジメント株式会社 Ranging device
TWI545951B (en) * 2014-07-01 2016-08-11 晶相光電股份有限公司 Sensors and sensing methods
EP2978216B1 (en) * 2014-07-24 2017-08-16 Espros Photonics AG Method for the detection of motion blur
JP6280002B2 (en) * 2014-08-22 2018-02-14 浜松ホトニクス株式会社 Ranging method and ranging device
CN104677277B (en) * 2015-02-16 2017-06-06 武汉天远视科技有限责任公司 A kind of method and system for measuring object geometric attribute or distance
CN106152947B (en) * 2015-03-31 2019-11-29 北京京东尚科信息技术有限公司 Measure equipment, the method and apparatus of dimension of object
US9945936B2 (en) * 2015-05-27 2018-04-17 Microsoft Technology Licensing, Llc Reduction in camera to camera interference in depth measurements using spread spectrum
CN107850664B (en) * 2015-07-22 2021-11-05 新唐科技日本株式会社 ranging device
US9716850B2 (en) * 2015-09-08 2017-07-25 Pixart Imaging (Penang) Sdn. Bhd. BJT pixel circuit capable of cancelling ambient light influence, image system including the same and operating method thereof
TWI557393B (en) * 2015-10-08 2016-11-11 微星科技股份有限公司 Calibration method of laser ranging and device utilizing the method
US10057526B2 (en) * 2015-11-13 2018-08-21 Pixart Imaging Inc. Pixel circuit with low power consumption, image system including the same and operating method thereof
US9762824B2 (en) * 2015-12-30 2017-09-12 Raytheon Company Gain adaptable unit cell
US10516875B2 (en) * 2016-01-22 2019-12-24 Samsung Electronics Co., Ltd. Method and apparatus for obtaining depth image by using time-of-flight sensor
CN106997582A (en) * 2016-01-22 2017-08-01 北京三星通信技术研究有限公司 The motion blur removing method and equipment of flight time three-dimension sensor
CN107040732B (en) * 2016-02-03 2019-11-05 原相科技股份有限公司 Image sensing circuit and method
CN107229056A (en) * 2016-03-23 2017-10-03 松下知识产权经营株式会社 Image processing apparatus, image processing method and recording medium
KR102752035B1 (en) * 2016-08-22 2025-01-09 삼성전자주식회사 Method and device for acquiring distance information
US10762651B2 (en) * 2016-09-30 2020-09-01 Magic Leap, Inc. Real time calibration for time-of-flight depth measurement
JP6862751B2 (en) * 2016-10-14 2021-04-21 富士通株式会社 Distance measuring device, distance measuring method and program
CN108616726A (en) * 2016-12-21 2018-10-02 光宝电子(广州)有限公司 Exposal control method based on structure light and exposure-control device
US20180189977A1 (en) * 2016-12-30 2018-07-05 Analog Devices Global Light detector calibrating a time-of-flight optical system
US10557921B2 (en) * 2017-01-23 2020-02-11 Microsoft Technology Licensing, Llc Active brightness-based strategy for invalidating pixels in time-of-flight depth-sensing
CN108700664A (en) * 2017-02-06 2018-10-23 松下知识产权经营株式会社 Three-dimensional motion acquisition device and three-dimensional motion adquisitiones
WO2018235163A1 (en) * 2017-06-20 2018-12-27 株式会社ソニー・インタラクティブエンタテインメント Calibration device, calibration chart, chart pattern generation device, and calibration method
EP3783304B1 (en) * 2017-06-22 2024-07-03 Hexagon Technology Center GmbH Calibration of a triangulation sensor
TWI622960B (en) * 2017-11-10 2018-05-01 財團法人工業技術研究院 Correction method of depth image capturing device
CN108401098A (en) * 2018-05-15 2018-08-14 绍兴知威光电科技有限公司 A kind of TOF depth camera systems and its method for reducing external error
CN112363150B (en) * 2018-08-22 2024-05-28 Oppo广东移动通信有限公司 Calibration method, calibration controller, electronic device and calibration system

Also Published As

Publication number Publication date
TW202032154A (en) 2020-09-01
CN111586307A (en) 2020-08-25
CN111580117A (en) 2020-08-25
CN111586306B (en) 2022-02-01
CN111580067A (en) 2020-08-25
CN111580067B (en) 2022-12-02
CN111586307B (en) 2021-11-02
TW202032155A (en) 2020-09-01
CN111624612B (en) 2023-04-07
CN111624612A (en) 2020-09-04
TWI741291B (en) 2021-10-01
CN111586306A (en) 2020-08-25

Similar Documents

Publication Publication Date Title
TWI696841B (en) Computation apparatus, sensing apparatus and processing method based on time of flight
US20250138193A1 (en) Processing system for lidar measurements
JP7308856B2 (en) Active signal detection using adaptive discrimination of noise floor
US20190113606A1 (en) Time-of-flight depth image processing systems and methods
JP7321178B2 (en) Choosing a LIDAR Pulse Detector According to Pulse Type
KR102869766B1 (en) METHOD FOR TIME-OF-FLIGHT DEPTH MEASUREMENT AND ToF CAMERA FOR PERFORMING THE SAME
JP7325433B2 (en) Detection of laser pulse edges for real-time detection
US11272157B2 (en) Depth non-linearity compensation in time-of-flight imaging
US11423572B2 (en) Built-in calibration of time-of-flight depth imaging systems
US10473461B2 (en) Motion-sensor device having multiple light sources
JP6193227B2 (en) Blur processing apparatus and method
CN109903324B (en) Depth image acquisition method and device
CN112368597A (en) Optical distance measuring device
KR20140057625A (en) Improvements in or relating to the processing of time-of-flight signals
US11965962B2 (en) Resolving multi-path corruption of time-of-flight depth images
CN113439195A (en) Three-dimensional imaging and sensing using dynamic vision sensors and pattern projection
KR20130008469A (en) Method and apparatus for processing blur
CN113497892B (en) Imaging device, distance measuring method, storage medium, and computer device
CN117280177A (en) Systems and methods for structured light depth calculation using single photon avalanche diodes
US11467258B2 (en) Computation device, sensing device and processing method based on time of flight
TWI707152B (en) Computation apparatus, sensing apparatus, and processing method based on time of flight
CN112415487B (en) Computing device, sensing device and processing method based on time-of-flight ranging
US11961257B2 (en) Built-in calibration of time-of-flight depth imaging systems