TWI696841B - Computation apparatus, sensing apparatus and processing method based on time of flight - Google Patents
Computation apparatus, sensing apparatus and processing method based on time of flight Download PDFInfo
- Publication number
- TWI696841B (application TW108121698A)
- Authority
- TW
- Taiwan
- Prior art keywords
- phases
- pixel
- intensity information
- difference
- time
- Prior art date
Links
- 238000003672 processing method Methods 0.000 title claims abstract description 36
- 238000000034 method Methods 0.000 claims abstract description 9
- 230000004044 response Effects 0.000 claims description 18
- 238000012545 processing Methods 0.000 claims description 9
- 230000003111 delayed effect Effects 0.000 claims 1
- 238000001514 detection method Methods 0.000 claims 1
- 238000010422 painting Methods 0.000 claims 1
- 230000001934 delay Effects 0.000 abstract description 2
- 239000008186 active pharmaceutical agent Substances 0.000 description 12
- 238000010586 diagram Methods 0.000 description 10
- 239000003990 capacitor Substances 0.000 description 9
- 238000005259 measurement Methods 0.000 description 9
- 238000005516 engineering process Methods 0.000 description 8
- 238000005070 sampling Methods 0.000 description 4
- 230000001133 acceleration Effects 0.000 description 3
- 238000004364 calculation method Methods 0.000 description 3
- 238000002474 experimental method Methods 0.000 description 3
- 230000008569 process Effects 0.000 description 3
- 238000009825 accumulation Methods 0.000 description 2
- 238000004891 communication Methods 0.000 description 2
- 230000005484 gravity Effects 0.000 description 2
- 230000007246 mechanism Effects 0.000 description 2
- 230000003287 optical effect Effects 0.000 description 2
- 206010047571 Visual impairment Diseases 0.000 description 1
- 238000013459 approach Methods 0.000 description 1
- 238000012937 correction Methods 0.000 description 1
- 238000011161 development Methods 0.000 description 1
- 230000000694 effects Effects 0.000 description 1
- 230000006870 function Effects 0.000 description 1
- 238000003384 imaging method Methods 0.000 description 1
- 230000009191 jumping Effects 0.000 description 1
- 238000004519 manufacturing process Methods 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 230000000717 retained effect Effects 0.000 description 1
- 230000000630 rising effect Effects 0.000 description 1
- 238000004904 shortening Methods 0.000 description 1
- 230000003068 static effect Effects 0.000 description 1
- 230000001360 synchronised effect Effects 0.000 description 1
- 238000012360 testing method Methods 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/02—Systems using the reflection of electromagnetic waves other than radio waves
- G01S17/06—Systems determining position data of a target
- G01S17/08—Systems determining position data of a target for measuring distance only
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S7/00—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
- G01S7/48—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
- G01S7/497—Means for monitoring or calibrating
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Computer Networks & Wireless Communication (AREA)
- General Physics & Mathematics (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Electromagnetism (AREA)
- Optical Radar Systems And Details Thereof (AREA)
- Length Measuring Devices By Optical Means (AREA)
- Transforming Light Signals Into Electric Signals (AREA)
- Solid State Image Pick-Up Elements (AREA)
- Measurement Of Optical Distance (AREA)
Abstract
Description
本發明是有關於一種光學量測技術,且特別是有關於一種基於飛行時間(Time of Flight,ToF)測距的運算裝置、感測裝置及處理方法。 The present invention relates to an optical measurement technology, and particularly relates to a computing device, a sensing device, and a processing method based on Time of Flight (ToF) distance measurement.
隨著科技的發展,光學三維量測技術已逐漸成熟,其中飛行時間測距是目前一種常見的主動式深度感測技術。ToF測距技術的基本原理是,調變光(例如,紅外光、或雷射光等)經發射後遇到物體將被反射,而藉由被反射的調變光的反射時間差或相位差來換算被拍攝物體的距離,即可產生相對於物體的深度資訊。 With the development of technology, optical three-dimensional measurement has gradually matured, and time-of-flight (ToF) ranging is currently a common active depth sensing technology. The basic principle of ToF ranging is that modulated light (for example, infrared or laser light) is reflected when it hits an object after being emitted; by converting the round-trip time difference or phase difference of the reflected modulated light into the distance of the photographed object, depth information relative to the object can be produced.
值得注意的是,請參照圖1A的時序圖,ToF測距技術在感測調變光的時間被稱為曝光時間,其類似於相機快門時間。例如,邏輯1代表進行曝光/感測;邏輯0代表停止曝光。其中,當曝光時間增長時,接收到調變光的資料量亦會隨著增加。然而,較長的曝光時間恐會容易造成動態模糊(motion blur)現象。例如,圖1B所示是待測物體運動所造成的殘影,且圖1C所示是車燈移動所造成的光軌。然而,當利用ToF測距技術計算深度資訊時,若遭遇到動態模糊情況,將導致深度距離不準確、或畫面模糊的情況。因此,如何提供一種簡便且能有效降低動態模糊影響的方法成為目前相關領域待努力的目標之一。 It is worth noting that, referring to the timing diagram of FIG. 1A, the period during which ToF ranging senses the modulated light is called the exposure time, which is similar to a camera shutter time. For example, logic 1 represents exposing/sensing, and logic 0 represents stopping the exposure. As the exposure time grows, the amount of modulated-light data received also increases. However, a long exposure time tends to cause motion blur. For example, FIG. 1B shows the afterimage caused by motion of the object under test, and FIG. 1C shows the light trails caused by moving vehicle lights. When ToF ranging computes depth information under motion blur, the depth distance becomes inaccurate or the image becomes blurred. Therefore, providing a simple method that effectively reduces the influence of motion blur has become one of the goals in this field.
有鑑於此,本發明實施例提供一種基於飛行時間測距的運算裝置、感測裝置及處理方法,其能有效避免動態模糊所造成無效的深度計算。 In view of this, embodiments of the present invention provide a computing device, a sensing device, and a processing method based on time-of-flight ranging, which can effectively avoid invalid depth calculation caused by dynamic blur.
本發明實施例基於飛行時間測距的運算裝置,其包括記憶體及處理器。記憶體記錄至少一畫素所對應至少二個相位的強度資訊以及用於運算裝置的處理方法所對應的程式碼,而此強度資訊係利用那些相位在時間上延遲感測調變光所得。處理器耦接記憶體,並經配置用以執行程式碼,且此處理方法包括下列步驟:取得至少二個相位的強度資訊。依據那些相位的強度資訊之間的差異,決定是否放棄此畫素所對應至少二個相位的強度資訊。 A computation apparatus based on time-of-flight ranging according to an embodiment of the present invention includes a memory and a processor. The memory records intensity information of at least two phases corresponding to at least one pixel, together with the program code of a processing method for the computation apparatus; the intensity information is obtained by sensing the modulated light with the phases delayed in time. The processor is coupled to the memory and configured to execute the program code, and the processing method includes the following steps: obtain the intensity information of the at least two phases; and, according to the difference between the intensity information of those phases, decide whether to discard the intensity information of the at least two phases corresponding to the pixel.
本發明實施例基於飛行時間測距的感測裝置,其包括調變光發射電路、調變光接收電路、記憶體以及處理器。調變光發射電路發射調變光。調變光接收電路利用至少二個相位在時間延遲上以接收此調變光。記憶體記錄至少一個畫素所對應至少二相位的強度資訊以及用於感測裝置的處理方法所對應的程式碼。處理器耦接調變光接收電路及記憶體,並經配置用以執行程式碼,且此處理方法包括下列步驟。取得至少二個相位的強度資訊,而此強度資訊係利用那些相位在時間上延遲感測調變光所得。依據那些相位的強度資訊之間的差異,決定是否放棄此畫素所對應的那些相位的強度資訊。 A sensing apparatus based on time-of-flight ranging according to an embodiment of the present invention includes a modulated light emitting circuit, a modulated light receiving circuit, a memory, and a processor. The modulated light emitting circuit emits modulated light. The modulated light receiving circuit receives the modulated light with at least two phases delayed in time. The memory records intensity information of at least two phases corresponding to at least one pixel, together with the program code of a processing method for the sensing apparatus. The processor is coupled to the modulated light receiving circuit and the memory and is configured to execute the program code, and the processing method includes the following steps: obtain the intensity information of the at least two phases, the intensity information being obtained by sensing the modulated light with the phases delayed in time; and, according to the difference between the intensity information of those phases, decide whether to discard the intensity information of those phases corresponding to the pixel.
另一方面,本發明實施例基於飛行時間測距的處理方法,其包括下列步驟:取得至少一畫素所對應至少二個相位的強度資訊,而此強度資訊係利用那些相位在時間上延遲感測調變光所得。依據那些相位的強度資訊之間的差異,決定是否放棄此畫素所對應的那些相位的強度資訊。 In another aspect, a processing method based on time-of-flight ranging according to an embodiment of the present invention includes the following steps: obtain intensity information of at least two phases corresponding to at least one pixel, the intensity information being obtained by sensing the modulated light with the phases delayed in time; and, according to the difference between the intensity information of those phases, decide whether to discard the intensity information of those phases corresponding to the pixel.
基於上述,本發明實施例基於飛行時間測距的運算裝置、感測裝置及處理方法,依據兩相位的強度資訊之間的差異來評估是否發生動態模糊現象,並據以將有動態模糊現象的畫素放棄,且重新拍攝或僅採用有效的畫素。藉此,可有效降低動態模糊對於深度資訊估測的影響。 Based on the above, the computation apparatus, sensing apparatus, and processing method of the embodiments evaluate whether motion blur has occurred according to the difference between the intensity information of two phases, discard the pixels affected by motion blur, and then either re-capture or use only the valid pixels. In this way, the influence of motion blur on depth information estimation can be effectively reduced.
為讓本發明的上述特徵和優點能更明顯易懂,下文特舉實施例,並配合所附圖式作詳細說明如下。 In order to make the above-mentioned features and advantages of the present invention more obvious and understandable, the embodiments are specifically described below and described in detail in conjunction with the accompanying drawings.
10:測距系統 10: Ranging system
100:感測裝置 100: sensing device
110:調變光發射電路 110: Modulated light emitting circuit
120:調變光接收電路 120: Modulated light receiving circuit
122:光電感應器 122: Photoelectric sensor
130:處理器 130: processor
140:訊號處理電路 140: signal processing circuit
150:記憶體 150: memory
160:運算裝置 160: computing device
170:姿態感測器 170: attitude sensor
CA、CB:電容 CA, CB: capacitance
QA、QB:改變的電荷量 QA, QB: the amount of charge changed
CS:控制訊號 CS: control signal
CSB:反相控制訊號 CSB: Inverted control signal
DS:感測訊號 DS: Sensing signal
EM:調變光 EM: dimming
MS:調變訊號 MS: Modulation signal
NA、NB:節點 NA, NB: Node
REM:被反射的調變光 REM: Modulated light reflected
SW1、SW2:開關 SW1, SW2: switch
VA、VB:電壓訊號 VA, VB: voltage signals
TA:目標物體 TA: target object
S410~S430、S710~S790、S810~S890、S910~S950:步驟 S410~S430, S710~S790, S810~S890, S910~S950: Steps
圖1A是一範例說明曝光時間與調變光訊號的時序圖。 FIG. 1A is an example illustrating a timing diagram of exposure time and modulated light signal.
圖1B及1C是二範例說明動態模糊現象。 FIGS. 1B and 1C are two examples illustrating the motion blur phenomenon.
圖2是依據本發明一實施例的測距系統的示意圖。 FIG. 2 is a schematic diagram of a ranging system according to an embodiment of the invention.
圖3A是依據本發明一實施例的調變光接收電路的電路示意圖。 FIG. 3A is a circuit schematic diagram of a modulated light receiving circuit according to an embodiment of the invention.
圖3B是依據圖3A的實施例的訊號波形示意圖。 FIG. 3B is a signal waveform diagram according to the embodiment of FIG. 3A.
圖4是依據本發明一實施例基於飛行時間測距的處理方法的流程圖。 FIG. 4 is a flowchart of a processing method based on time-of-flight ranging according to an embodiment of the invention.
圖5A~5D是一範例說明本域(local)的動態模糊現象。 FIGS. 5A-5D are an example illustrating the local motion blur phenomenon.
圖6A~6D是一範例說明全域(global)的動態模糊現象。 FIGS. 6A-6D are an example illustrating the global motion blur phenomenon.
圖7是依據本發明第一實施例基於飛行時間測距的處理方法的流程圖。 FIG. 7 is a flowchart of a processing method based on time-of-flight ranging according to the first embodiment of the invention.
圖8是依據本發明第二實施例基於飛行時間測距的處理方法的流程圖。 FIG. 8 is a flowchart of a processing method based on time-of-flight ranging according to the second embodiment of the invention.
圖9是依據本發明第三實施例基於飛行時間測距的處理方法的流程圖。 FIG. 9 is a flowchart of a processing method based on time-of-flight ranging according to the third embodiment of the invention.
圖10A及10B是一範例說明放棄無效資料的感測示意圖。 FIGS. 10A and 10B are schematic diagrams illustrating an example of discarding invalid data.
圖2是依據本發明一實施例的測距系統10的示意圖。請參照圖2,測距系統10包括基於ToF的感測裝置100及目標物體TA。
FIG. 2 is a schematic diagram of a ranging system 10 according to an embodiment of the invention. Referring to FIG. 2, the ranging system 10 includes a ToF-based sensing device 100 and a target object TA.
感測裝置100包括但不僅限於調變光發射電路110、調變光接收電路120、處理器130、訊號處理電路140、記憶體150與姿態感測器170。感測裝置100可應用於諸如三維模型建模、物體辨識、車用輔助系統、定位、產線測試或誤差校正等領域。感測裝置100可能是獨立裝置,或經模組化而裝載於其他裝置,非用以限制本發明的範疇。 The sensing device 100 includes, but is not limited to, a modulated light emitting circuit 110, a modulated light receiving circuit 120, a processor 130, a signal processing circuit 140, a memory 150, and an attitude sensor 170. The sensing device 100 can be applied to fields such as three-dimensional model building, object recognition, vehicle assistance systems, positioning, production-line testing, or error correction. The sensing device 100 may be a stand-alone device, or it may be modularized and mounted in another device; this is not intended to limit the scope of the invention.
調變光發射電路110例如是雷射二極體或準直光產生裝置,且調變光接收電路120例如是攝像裝置或光源感應裝置(至少包括光感測器、讀取電路等)。訊號處理電路140耦接調變光發射電路110與調變光接收電路120。訊號處理電路140用以提供調變訊號MS給調變光發射電路110且提供控制訊號CS至調變光接收電路120。調變光發射電路110用以依據調變訊號MS發出調變光EM,而此調變光EM例如紅外光、雷射光或其他波段的準直光。例如,調變訊號MS為脈衝訊號,且調變訊號MS上升的邊緣對應調變光EM的觸發時間。調變光EM遇到目標物體TA後將會被反射,而調變光接收電路120可接收被反射的調變光REM。調變光接收電路120依據控制訊號CS對被反射的調變光REM解調變,以產生感測訊號DS。
The modulated light emitting circuit 110 is, for example, a laser diode or a collimated light generating device, and the modulated light receiving circuit 120 is, for example, a camera device or a light sensing device (including at least a photo sensor, a readout circuit, and so on). The signal processing circuit 140 is coupled to the modulated light emitting circuit 110 and the modulated light receiving circuit 120, and provides a modulation signal MS to the modulated light emitting circuit 110 and a control signal CS to the modulated light receiving circuit 120. The modulated light emitting circuit 110 emits modulated light EM according to the modulation signal MS; the modulated light EM is, for example, infrared light, laser light, or collimated light in another band. For example, the modulation signal MS is a pulse signal, and the rising edge of the modulation signal MS corresponds to the trigger time of the modulated light EM. The modulated light EM is reflected after hitting the target object TA, and the modulated light receiving circuit 120 receives the reflected modulated light REM and demodulates it according to the control signal CS to generate a sensing signal DS.
更具體而言,圖3A是依據本發明一實施例的調變光接收電路120的電路示意圖。請參照圖3A,為了方便說明,本圖式以單位/單一畫素的電路為例。調變光接收電路120中對應於單位/單一畫素的電路包括光電感應元件122、電容CA、電容CB、開關SW1與開關SW2。光電感應器122例如是光電二極體(photodiode)或具有類似用以感測被反射調變光REM之功能的其他光感測元件。光電感應器122一端接收共同參考電壓(例如,接地GND),且其另一端耦接開關SW1與開關SW2的其中一端。開關SW1的另一端通過節點NA耦接電容CA且受控於控制訊號CS的反相訊號CSB。開關SW2的另一端通過節點NB耦接電容CB且受控於控制訊號CS。調變光接收電路120輸出節點NA上的電壓(或電流)訊號VA與節點NB上的電壓(或電流)訊號VB作為感測訊號DS。在另一實施例中,調變光接收電路120也可以選擇輸出電壓訊號VA與電壓訊號VB的差值作為感測訊號DS(可作為強度(intensity)資訊)。 More specifically, FIG. 3A is a circuit schematic diagram of the modulated light receiving circuit 120 according to an embodiment of the invention. Referring to FIG. 3A, for convenience of description, the drawing takes the circuit of a unit/single pixel as an example. The circuit corresponding to a unit/single pixel in the modulated light receiving circuit 120 includes a photoelectric sensing element 122, a capacitor CA, a capacitor CB, a switch SW1, and a switch SW2. The photo sensor 122 is, for example, a photodiode or another light sensing element with a similar function for sensing the reflected modulated light REM. One end of the photo sensor 122 receives a common reference voltage (for example, ground GND), and its other end is coupled to one end of the switch SW1 and one end of the switch SW2. The other end of the switch SW1 is coupled to the capacitor CA through the node NA and is controlled by the inverted signal CSB of the control signal CS. The other end of the switch SW2 is coupled to the capacitor CB through the node NB and is controlled by the control signal CS. The modulated light receiving circuit 120 outputs the voltage (or current) signal VA at the node NA and the voltage (or current) signal VB at the node NB as the sensing signal DS. In another embodiment, the modulated light receiving circuit 120 may instead output the difference between the voltage signal VA and the voltage signal VB as the sensing signal DS (which can serve as intensity information).
圖3A的實施例僅作為舉例說明,調變光接收電路120的電路架構並不限於此。調變光接收電路120可以具有多個光電感應器122,或是更多電容或開關。本領域具有通常知識者可依據通常知識與實際需求而做適當調整。
The embodiment of FIG. 3A is for illustration only, and the circuit architecture of the modulated
圖3B是依據圖3A的實施例的訊號波形示意圖。請接著參照圖3A與圖3B,當反相控制訊號CSB為低準位(例如,邏輯0)時,開關SW1導通,此時控制訊號CS會處於高準位(例如,邏輯1),開關SW2不導通。反之,當控制訊號CS為低準位(例如,邏輯0)時,開關SW2導通,此時反相控制訊號CSB處於高準位(例如,邏輯1),開關SW1不導通。此外,光電感應器122的導通即可使光電感應器122接收到被反射的調變光REM。當光電感應器122與開關SW1都導通時,電容CA進行放電(或充電),圖3B中
的QA表示電容CA所改變的電荷量,節點NA上的電壓訊號VA會相應地改變。當光電感應器122與開關SW2都導通時,電容CB進行放電(或充電),圖3B中的QB表示電容CB所改變的電荷量,節點NB上的電壓訊號VB會相應地改變。
FIG. 3B is a signal waveform diagram according to the embodiment of FIG. 3A. Referring to FIGS. 3A and 3B, when the inverted control signal CSB is at a low level (for example, logic 0), the switch SW1 is turned on; at this time the control signal CS is at a high level (for example, logic 1) and the switch SW2 is not conducting. Conversely, when the control signal CS is at a low level (for example, logic 0), the switch SW2 is turned on; at this time the inverted control signal CSB is at a high level (for example, logic 1) and the switch SW1 is not conducting. In addition, when the photo sensor 122 conducts, it receives the reflected modulated light REM. When both the photo sensor 122 and the switch SW1 conduct, the capacitor CA discharges (or charges); QA in FIG. 3B represents the amount of charge changed on the capacitor CA, and the voltage signal VA at the node NA changes accordingly. When both the photo sensor 122 and the switch SW2 conduct, the capacitor CB discharges (or charges); QB in FIG. 3B represents the amount of charge changed on the capacitor CB, and the voltage signal VB at the node NB changes accordingly.
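The charge split between CA and CB described above can be sketched numerically. The following is a minimal illustrative Python model, not from the patent: it integrates an idealized delayed square-wave return over one modulation period, accumulating into CA while CS is high (SW1 on) and into CB while CS is low (SW2 on), showing how the QA/QB split encodes the round-trip delay. All names and values are assumptions for illustration.

```python
# Hypothetical two-tap demodulation model (idealized square waves).
def two_tap_charges(delay_frac, samples=1000):
    """Integrate a square-wave return over one modulation period.
    delay_frac: round-trip delay as a fraction of the period (0..0.5).
    Returns (qa, qb): normalized charge collected while CS is high / low."""
    qa = qb = 0
    for i in range(samples):
        t = i / samples                                    # time within one period
        light = 1 if (t - delay_frac) % 1.0 < 0.5 else 0   # delayed square-wave return
        if t < 0.5:                                        # CS high -> SW1 on, CA integrates
            qa += light
        else:                                              # CS low -> SW2 on, CB integrates
            qb += light
    return qa / samples, qb / samples

qa0, qb0 = two_tap_charges(0.0)    # no delay: all light lands in CA
qa1, qb1 = two_tap_charges(0.25)   # quarter-period delay: light splits evenly
```

With zero delay all charge goes to CA; as the delay grows, charge shifts toward CB, which is what makes the VA/VB difference usable as phase information.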
處理器130耦接調變光接收電路120。處理器130可以是中央處理單元(Central Processing Unit,CPU),或是其他可程式化之一般用途或特殊用途的微處理器(Microprocessor)、數位訊號處理器(Digital Signal Processor,DSP)、可程式化控制器、特殊應用積體電路(Application-Specific Integrated Circuit,ASIC)或其他類似元件或上述元件的組合。於本發明實施例中,處理器130可依據感測訊號DS計算控制訊號CS與被反射的調變光REM之間的相位差,且依據此相位差來進行距離量測。例如,請參照圖3B,依據電壓訊號VA與電壓訊號VB之間的差異,處理器130可以計算出控制訊號CS與被反射的調變光REM之間的相位差。需說明的是,在一些實施例中,處理器130可能內建或電性連接有類比至數位轉換器(Analogy-to-Digital,ADC),且透過類比至數位轉換器將感測訊號DS轉換成數位形式的訊號。
The
記憶體150耦接處理器130,記憶體150可以是任何型態的固定或可移動隨機存取記憶體(Random Access Memory,RAM)、快閃記憶體(Flash Memory)、傳統硬碟(Hard Disk Drive,HDD)、固態硬碟(Solid-State Disk,SSD)、非揮發性(non-volatile)記憶體或類似元件或上述元件之組合的儲存器。於本實施例中,記憶體150
用於儲存緩衝的或永久的資料(例如,感測訊號DS對應的強度資訊、門檻值等)、程式碼、軟體模組、作業系統、應用程式、驅動程式等資料或檔案,且其詳細內容待後續實施例詳述。值得注意的是,記憶體150所記錄的程式碼是用於感測裝置100的處理方法,且後續實施例將詳加說明此處理方法。
The
姿態偵測器170耦接處理器130,姿態偵測器170可以是重力感測器(G-sensor)/加速度計(Accelerometer)、慣性(Inertial)感測器、陀螺儀(Gyroscope)、磁力感測器(Magnetometer)或其組合,並用以偵測諸如加速度、角速度、方位等運動或姿態,並據以產生姿態資訊(例如,記錄有三軸的重力加速度、角速度或磁力等資料)。
The
需說明的是,在一些實施例中,處理器130及記憶體150可能被獨立出來而成為運算裝置160。此運算裝置160可以是桌上型電腦、筆記型電腦、伺服器、智慧型手機、平板電腦的裝置。運算裝置160與感測裝置100更具有可相互通訊的通訊收發器(例如,支援Wi-Fi、藍芽、乙太網路(Ethernet)等通訊技術的收發器),使運算裝置160可取得來自感測裝置100的感測訊號DS或對應的強度資訊(可記錄在記憶體140中以供處理器130存取)。
It should be noted that, in some embodiments, the
為了方便理解本發明實施例的操作流程,以下將舉諸多實施例詳細說明本發明實施例中感測裝置100及/或運算裝置160的運作流程。下文中,將搭配感測裝置100及運算裝置160中的各項元件及模組說明本發明實施例所述之方法。本方法的各個流程可依照實施情形而隨之調整,且並不僅限於此。
In order to facilitate understanding of the operation process of the embodiment of the present invention, a number of embodiments will be described in detail below to describe the operation process of the
圖4是依據本發明一實施例基於飛行時間測距的處理方法的流程圖。請參照圖4,處理器130取得至少一個畫素所對應至少二個相位的強度資訊(步驟S410)。具體而言,在圖3B的實施例中,調變訊號MS與控制訊號CS同步,但訊號處理電路140還可以讓調變訊號MS與控制訊號CS之間不同步。也就是說,控制訊號CS與調變訊號MS之間可具有參考相位。而訊號處理電路140會依據不同的參考相位將調變訊號MS或控制訊號CS的相位延遲或提前,使得調變訊號MS與控制訊號CS具有相位差/相位延遲。
FIG. 4 is a flowchart of a processing method based on time-of-flight ranging according to an embodiment of the invention. Referring to FIG. 4, the processor 130 obtains intensity information of at least two phases corresponding to at least one pixel (step S410). Specifically, in the embodiment of FIG. 3B, the modulation signal MS is synchronized with the control signal CS, but the signal processing circuit 140 can also make the modulation signal MS and the control signal CS asynchronous. That is, there can be a reference phase between the control signal CS and the modulation signal MS. The signal processing circuit 140 delays or advances the phase of the modulation signal MS or the control signal CS according to different reference phases, so that the modulation signal MS and the control signal CS have a phase difference/phase delay.
在連續波(Continuous Wave,CW)量測機制中,相位差例如是0度、90度、180度及270度,即四相位方法。不同相位即會對應到不同起始及結束時間點的電荷累計時間區間。換句而言,調變光接收電路120利用四個相位在時間延遲上以接收被反射的調變光REM。而利用那些相位在時間上延遲感測被反射的調變光REM即可得到對應於不同相位的感測訊號DS,且感測訊號DS將可進一步作為強度資訊。此強度資訊可能記錄單一畫素(一個畫素對應到圖3A的電路)所累計的電荷量或進一步轉換成強度值。即,各畫素的強度資訊係利用那些相位在時間上延遲感測被反射的調變光REM所得。
In the continuous wave (Continuous Wave, CW) measurement mechanism, the phase difference is, for example, 0 degrees, 90 degrees, 180 degrees, and 270 degrees, which is a four-phase method. Different phases correspond to different charge accumulation time intervals at different start and end time points. In other words, the modulated
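The four-phase idea above can be illustrated with a simple sinusoidal correlation model (an assumption for illustration; real pixel responses are not ideal sinusoids, and the amplitude `a` and ambient offset `b` below are made-up values). The pairwise differences I0−I180 and I90−I270 cancel the ambient offset and recover the phase:

```python
import math

# Illustrative four-phase sampling of an assumed sinusoidal correlation waveform.
def sample_four_phases(phi, a=1.0, b=2.0):
    # Intensity measured with the sensing window delayed by 0/90/180/270 degrees.
    return [b + a * math.cos(phi - math.radians(p)) for p in (0, 90, 180, 270)]

i0, i90, i180, i270 = sample_four_phases(math.radians(30))
re, im = i0 - i180, i90 - i270     # offset b cancels in both differences
phi_est = math.atan2(im, re)       # recovers the original 30-degree phase
```

The recovered `phi_est` equals the phase fed in, which is the basis for the depth computation described later in the text.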
接著,處理器130依據至少二個相位的強度資訊之間的差異,決定是否放棄此畫素所對應至少二個相位的強度資訊(步驟S430)。具體而言,經實驗可知,動態模糊現象將造成不同相位之間的強度資訊產生差異。舉例而言,圖5A~5D是一範例說明本域
(local)的動態模糊現象。請參照圖5A,圖中所示是目標物體TA(以椅子為例)與感測裝置100皆為靜止狀態(例如,無晃動、無跳動等運動)下依據感測訊號DS所生成的影像(以解析度為240×180為例)。請參照圖5B,圖中所示是相同時間點所對應任兩個相位對應強度值相減(即,強度資訊的差異)所產生的影像。請參照圖5C是圖5B經尺度調整後的影像,且可觀察到所有畫素的強度差異大致相同並等於或趨近於零。接著,假設目標物體TA移動。請參照圖5D,圖中所示是相同時間點所對應任兩個相位對應強度值相減(即,強度資訊的差異)所產生的影像,與圖5C相比可觀察到部分畫素的強度差異較大。
Then, the
圖6A~6D是一範例說明全域(global)的動態模糊現象。請參照圖6A,圖中所示是目標物體TA(以椅子為例)與感測裝置100皆為靜止狀態下依據感測訊號DS所生成的影像(以解析度為240×180為例)。請參照圖6B,圖中所示是相同時間點所對應任兩個相位對應強度值相減(即,強度資訊的差異)所產生的影像。請參照圖6C是圖6B經尺度調整後的影像,且可觀察到所有畫素的強度差異大致相同並等於或趨近於零。接著,假設感測裝置100移動。請參照圖6D,圖中所示是相同時間點所對應任兩個相位對應強度值相減(即,強度資訊的差異)所產生的影像,與圖6C相比可觀察到部分畫素的強度差異較大。
6A~6D are an example to illustrate the global dynamic blur phenomenon. Please refer to FIG. 6A. The figure shows an image generated by the sensing signal DS when the target object TA (taking a chair as an example) and the
由此可知,無論是產生本域或全域的動態模糊現象,兩個相位的強度資訊之間的差異將增加。反之,若無動態模糊現象,則兩個相位的強度資訊的差異將等於或趨近於零。因此,兩個相位的強度資訊之間的差異將可用於評估是否發生動態模糊現象。 It can be seen that, whether the motion blur is local or global, the difference between the intensity information of two phases increases. Conversely, if there is no motion blur, the difference between the intensity information of the two phases will be equal to or close to zero. Therefore, the difference between the intensity information of two phases can be used to evaluate whether motion blur has occurred.
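The text only states that the phase-intensity difference stays near zero for a static scene. One concrete way to see this — purely an illustrative assumption, not necessarily the patent's exact subtraction — is the pair-sum consistency check common in four-phase ToF: under a sinusoidal model, I0+I180 and I90+I270 are equal unless the scene changes between the sequential phase captures.

```python
import math

# Assumed consistency check: for a static scene the pair sums I0+I180 and
# I90+I270 both equal twice the ambient offset, so their difference is ~0;
# motion between the phase captures breaks the equality.
def phase_samples(phi, amps, b=2.0):
    # amps: amplitude seen at each of the four sequential phase captures
    return [b + a * math.cos(phi - math.radians(p))
            for a, p in zip(amps, (0, 90, 180, 270))]

static = phase_samples(0.6, [1.0, 1.0, 1.0, 1.0])
moving = phase_samples(0.6, [1.0, 1.0, 0.4, 0.4])   # object moved mid-sequence
diff_static = (static[0] + static[2]) - (static[1] + static[3])
diff_moving = (moving[0] + moving[2]) - (moving[1] + moving[3])
```

The static scene yields a difference of exactly zero, while the mid-sequence amplitude change produces a clearly non-zero difference — matching the behavior shown in FIGS. 5C/5D and 6C/6D.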
在一實施例中,針對各畫素,處理器130可判斷至少兩個相位的強度資訊之間的差異是否大於差異門檻值,且若此差異大於差異門檻值,則處理器130放棄此畫素所對應的那些相位的強度資訊。具體而言,無可避免地,兩相位的強度資訊的差異可能不會剛好等於零。因此,本發明實施例提升了寬容度,使處理器130可預設或經使用者設定有差異門檻值(例如,10、20、或40等)。若差異小於此差異門檻值,則處理器130可視為未發生動態模糊現象。反之,若差異大於此差異門檻值,則處理器130可直接視為發生動態模糊現象、或需進一步透過其他資訊來評估。值得注意的是,那些強度值差異大於差異門檻值的畫素的強度資訊可能會影響後續深度資訊估測的結果。因此,本發明實施例會依據一些條件來放棄任一畫素具有低於差異門檻值的四個相位的強度資訊。若處理器130放棄那些相位的強度資訊,則將判斷採用此畫素於不同時間點所對應的那些相位的強度資訊或是採用未放棄的其他畫素所對應不同相位的強度資訊。若任一畫素於當前時間點所對應至少二個相位的強度資訊被放棄,則需要再次透過調變光接收電路120感測以重新取得此畫素於不同時間點(後續的時間點)所對應的那些相位的強度資訊,並再評估是否採用這些強度資訊。或者,若僅部分畫素於當前時間點所對應的那些相位的強度資訊被放棄,則未放棄的那些畫素(或是受保留的畫素)所對應不同相位的
強度資訊將被採用。此外,處理器130可依據最後採用的強度資訊來計算深度資訊
In one embodiment, for each pixel, the
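The per-pixel keep/discard rule just described can be sketched as follows. The threshold value 40 echoes the example given later in the text (FIGS. 10A/10B); the rest is an illustrative sketch rather than the patented implementation.

```python
# Minimal per-pixel discard rule: if the magnitude of the difference between
# two phase intensities exceeds the difference threshold, drop that pixel's
# phase samples. Threshold 40 follows the example in the text.
DIFF_THRESHOLD = 40

def keep_pixel(diff):
    """True when the pixel's phase-intensity difference is small enough to
    be considered free of motion blur."""
    return abs(diff) <= DIFF_THRESHOLD

diffs = [3, -12, 55, 41, 39]          # illustrative per-pixel differences
mask = [keep_pixel(d) for d in diffs]  # pixels 3 and 4 (0-indexed 2, 3) are discarded
```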
需說明的是,針對任一個畫素,處理器130可將任兩個相位(例如,0度與180度、180度與270度等)的差異來與差異門檻值(其數值需對應調整)比較。在其他實施例中,處理器130亦可挑選差異最大的兩個相位的數值來與差異門檻值(其數值需對應調整)比較。或者,處理器130亦可能是隨機挑選更多個相位的強度資訊來比較。而若取得超過兩個差異,則可進一步平均或以特定線性組合來與差異門檻值比較。
It should be noted that for any pixel, the
以下將接著詳細說明那些放棄條件及對應的處理方式:圖7是依據本發明第一實施例基於飛行時間測距的處理方法的流程圖。請參照圖7,針對各畫素,處理器130取得至少兩個相位的強度資訊(步驟S710),並判斷強度資訊之間的差異是否大於差異門檻值(步驟S730),而其詳細說明可分別參酌步驟S410、S430的說明,故於此不再贅述。接著,若此差異未大於此差異門檻值,則處理器130可基於此畫素於當前時間點所對應的所有相位的強度資訊來計算深度資訊(步驟S735)。例如,0度與180度的差異作為實數部,90度與270度的差異作為虛數部;而實數部與虛數部所形成的角度即可作為相位差φ,且距離(即作為深度資訊)為1/2*c*φ/2π*f,其中c為光速常數,f為取樣頻率。
In the following, those abandon conditions and corresponding processing methods will be described in detail below: FIG. 7 is a flowchart of a processing method based on time-of-flight ranging according to the first embodiment of the present invention. Referring to FIG. 7, for each pixel, the
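The depth formula quoted above — φ from the 90/270 difference (imaginary part) over the 0/180 difference (real part), then distance = (1/2)·c·φ/(2πf) — can be checked with a small round-trip test. The 20 MHz modulation frequency below is an assumed value; the text does not specify one.

```python
import math

# Depth from four phase samples, following the formula in the text:
# phase = atan2(I90 - I270, I0 - I180), distance = (c/2) * phase / (2*pi*f).
C = 299_792_458.0        # speed of light, m/s
F_MOD = 20e6             # assumed modulation frequency (not given in the text)

def depth_from_samples(i0, i90, i180, i270, f=F_MOD):
    phase = math.atan2(i90 - i270, i0 - i180) % (2 * math.pi)
    return 0.5 * C * phase / (2 * math.pi * f)

# A target at 1.5 m gives phase = 4*pi*f*d/c; feed synthetic samples back in.
d_true = 1.5
phi = 4 * math.pi * F_MOD * d_true / C
samples = [math.cos(phi - math.radians(p)) for p in (0, 90, 180, 270)]
d_est = depth_from_samples(*samples)   # recovers the 1.5 m target distance
```

Note the unambiguous range is c/(2f) (about 7.5 m at the assumed 20 MHz); beyond that, the phase wraps.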
另一方面,若此差異大於差異門檻值,則處理器130可放棄/取消/不使用此畫素於當前時間點的四個相位的強度資訊(步
驟S780),即不使用此畫素當次累計電荷時間區間中的強度資訊。處理器130可在下次透過調變光接收電路120感測時適性調整其偵測那些相位的調變光的曝光時間。由於縮短曝光時間可改善動態模糊現象,處理器130可進一步通知調變光接收電路120將曝光時間降低以重新拍攝/感測/接收被反射的調變光REM(步驟S790),從而取得此畫素於不同時間點反應於調變光REM所得到至少二個相位的強度資訊。
On the other hand, if the difference is greater than the difference threshold, the
圖8是依據本發明第二實施例基於飛行時間測距的處理方法的流程圖。請參照圖8,步驟S810、S830、S835、S880及S890的詳細內容可參酌步驟S710、S730、S735、S780及S790的說明,於此不再贅述。與第一實施例不同之處在於,若兩相位的強度資訊之間的差異大於差異門檻值,則處理器130會依據姿態感測器170所取得的姿態資訊來判斷造成此差異的動態模糊為全域或本域(步驟S850)。以三軸加速度感測值Xout、Yout及Zout為例,若為1g,則表示感測裝置100為靜止狀態且此差異是本域的動態模糊所造成(例如,目標物體TA移動);若其值不為1g,則表示感測裝置100不為靜止狀態且此差異是全域的動態模糊所造成。需說明的是,依據姿態感測器170的類型,其判斷靜止狀態的條件可能不同,應用本發明實施例者可自行調整對應參數,非用以限制本發明的範疇。
FIG. 8 is a flowchart of a processing method based on time-of-flight ranging according to the second embodiment of the invention. Referring to FIG. 8, the details of steps S810, S830, S835, S880, and S890 can be found in the description of steps S710, S730, S735, S780, and S790 and are not repeated here. The difference from the first embodiment is that, if the difference between the intensity information of two phases is greater than the difference threshold, the processor 130 judges, according to the attitude information obtained by the attitude sensor 170, whether the motion blur causing this difference is global or local (step S850). Taking the three-axis acceleration values Xout, Yout, and Zout as an example, a magnitude of 1 g indicates the sensing device 100 is stationary and the difference is caused by local motion blur (for example, the target object TA moved); a magnitude other than 1 g indicates the sensing device 100 is not stationary and the difference is caused by global motion blur. It should be noted that, depending on the type of the attitude sensor 170, the condition for judging the stationary state may differ; those applying the embodiments can adjust the corresponding parameters, and this is not intended to limit the scope of the invention.
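The 1 g stillness test described above can be sketched as below. The 5% tolerance is an assumption of this sketch, since the text notes that the stationary-state condition depends on the sensor type.

```python
# Global-vs-local decision from accelerometer magnitude: near 1 g means the
# sensing device itself is still, so blur must be local (object motion).
# The tolerance value is an illustrative assumption, not from the text.
def blur_scope(ax, ay, az, tol=0.05):
    g = (ax * ax + ay * ay + az * az) ** 0.5   # magnitude in units of g
    return "local" if abs(g - 1.0) <= tol else "global"

scope_still = blur_scope(0.0, 0.0, 1.0)    # device at rest: only gravity
scope_moving = blur_scope(0.3, 0.1, 1.2)   # device accelerating
```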
若經判斷為全域的動態模糊,則處理器130會直接放棄
此畫素於當前時間點所對應的那些相位的強度資訊,並透過調變光接收電路120拍攝以重新取得此畫素下一次電荷累計時間區間內至少二個相位的強度資訊(步驟S855)。另一方面,若經判斷為本域的動態模糊,則處理器130可進一步依據模糊畫素數量來決定是否透過調變光接收電路120重新取得此畫素於不同時間點所對應至少二個相位的強度資訊。此模糊畫素數量是反應於一畫素經判斷有動態模糊而據此累計的數量。換句而言,若對應於某一畫素的強度資訊之間的差異大於差異門檻值,則累計模糊畫素數量,且經評估所有畫素後即可得到最終的模糊畫素數量。
If it is judged as global motion blur, the
值得注意的是,經實驗可知,不同差異門檻值將對應不同模糊畫素數量。換句而言,不同差異門檻值,一張感測影像中經判斷有動態模糊的畫素占所有畫素的比例可能不同。以240×180解析度為例,表(1)是不同差異門檻值對應的模糊畫素數量及其所占比例:
表(1)中實驗所得的模糊畫素數量例如可用來作為比對的數量門檻值,但此數量門檻值可能依據不同差異門檻值、不同解析度或其他條件而被調整,本發明實施例不加以限制。處理器130可
判斷當前時間點(或取樣區間內)得到的模糊畫素數量是否大於設定的數量門檻值(步驟S870)。而若此模糊畫素數量大於設定的數量門檻值,則處理器130可放棄此畫素於當前時間點所對應的那些相位的強度資訊(步驟S880),並透過調變光接收電路120拍攝以重新取得此畫素不同時間點(例如,下一個取樣時間點或後續取樣區間)所對應至少二個相位的強度資訊(步驟S890)。反之,若當前時間區間內得到的模糊畫素數量未大於設定的數量門檻值,則處理器130直接依據此畫素當前時間點所取得的複數個相位的強度資訊來計算深度資訊(步驟S835),即保留此像素對應的強度資訊。
The number of blurred pixels obtained experimentally in Table (1) can be used, for example, as the quantity threshold for comparison, but this quantity threshold may be adjusted according to different difference thresholds, different resolutions, or other conditions; the embodiments of the invention are not limited in this respect. The processor 130 can determine whether the number of blurred pixels obtained at the current time point (or within the sampling interval) is greater than the set quantity threshold (step S870). If the number of blurred pixels is greater than the set quantity threshold, the processor 130 can discard the intensity information of those phases corresponding to the pixel at the current time point (step S880), and capture again through the modulated light receiving circuit 120 to re-obtain intensity information of at least two phases corresponding to the pixel at a different time point (for example, the next sampling time point or a subsequent sampling interval) (step S890). Conversely, if the number of blurred pixels obtained within the current time interval is not greater than the set quantity threshold, the processor 130 directly calculates the depth information from the intensity information of the plurality of phases obtained for the pixel at the current time point (step S835); that is, the intensity information corresponding to the pixel is retained.
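The frame-level decision of the second embodiment — count the blurred pixels, compare against a quantity threshold, and re-capture if exceeded — can be sketched as below. Both thresholds are illustrative placeholders, since the Table (1) values are not reproduced here.

```python
# Frame-level decision sketch: count pixels whose phase-intensity difference
# exceeds the difference threshold; if the count exceeds the quantity
# threshold, discard the frame and re-capture (with shorter exposure).
def frame_decision(diffs, diff_th=40, count_th=3):
    blurred = sum(1 for d in diffs if abs(d) > diff_th)
    return "recapture" if blurred > count_th else "keep"

frame_a = [2, 5, 41, 3, 50, 7, 60, 44, 1]    # 4 pixels over threshold
frame_b = [2, 5, 41, 3, 8, 7, 9, 12, 1]      # only 1 pixel over threshold
decision_a = frame_decision(frame_a)
decision_b = frame_decision(frame_b)
```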
在一實施例中,步驟S890中調整曝光時間的長度可與模糊畫素數量及數量門檻值之間的差異相關。例如,調整的曝光時間可依據公式(1)、(2)得出:
需說明的是,在其他實施例中,調變光接收電路120亦可直接減少特定長度或亂數長度的曝光時間。
It should be noted that, in other embodiments, the modulated
圖9是依據本發明第三實施例基於飛行時間測距的處理方法的流程圖。請參照圖9,步驟S910、S930、S935及S880的詳細內容可參酌步驟S710、S730、S735及S780的說明,於此不再
贅述。與第一實施例不同之處在於,若任一畫素的至少兩個相位的強度資訊之間的差異大於差異門檻值,則處理器130可僅放棄/取消/不使用此畫素於當前時間點所對應的那些相位的強度資訊(步驟950),而受放棄的畫素是經判斷有動態模糊(例如,此畫素在兩相位之間的強度差異大於差異門檻值)。處理器130可記錄這些受放棄的畫素在感測影像中的位置、索引或代碼。而在步驟S935中,處理器130將依據未放棄的畫素的強度資訊來計算深度資訊。需說明的是,所有畫素中排除受放棄的畫素後所剩餘的畫素即為未放棄的畫素。而排除掉受放棄的畫素的強度資訊將可降低動態模糊的影像,且經判斷未受動態模糊影響的畫素的強度資訊仍可繼續供後續深度資訊計算使用。藉此,可避免多次重新拍攝,並進而提升效率。
FIG. 9 is a flowchart of a processing method based on time-of-flight ranging according to the third embodiment of the invention. Referring to FIG. 9, the details of steps S910, S930, S935, and S880 can be found in the description of steps S710, S730, S735, and S780 and are not repeated here. The difference from the first embodiment is that, if the difference between the intensity information of at least two phases of any pixel is greater than the difference threshold, the processor 130 may merely discard/cancel/not use the intensity information of those phases corresponding to that pixel at the current time point (step S950); a discarded pixel is one judged to have motion blur (for example, its intensity difference between two phases exceeds the difference threshold). The processor 130 can record the positions, indices, or codes of these discarded pixels in the sensed image. In step S935, the processor 130 calculates the depth information from the intensity information of the non-discarded pixels. It should be noted that the pixels remaining after excluding the discarded ones are the non-discarded pixels. Excluding the intensity information of the discarded pixels reduces the influence of motion blur, while the intensity information of pixels judged unaffected by motion blur can still be used for subsequent depth calculation. In this way, repeated re-capturing can be avoided, which improves efficiency.
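The third embodiment's mask-and-keep strategy might look like the following sketch: discarded pixels are marked NaN (or set to zero, as FIG. 10B suggests) and only surviving pixels contribute a phase for depth calculation. The blur test used here is an assumed pair-sum check, not necessarily the patent's exact per-pixel difference.

```python
import math

# Sketch of the third embodiment: keep valid pixels, mark discarded ones,
# and compute phase (for depth) only where data survived.
def mask_and_phase(phases_per_pixel, diff_th=40):
    out = []
    for i0, i90, i180, i270 in phases_per_pixel:
        if abs((i0 + i180) - (i90 + i270)) > diff_th:       # assumed blur test
            out.append(float("nan"))                         # discarded pixel
        else:
            out.append(math.atan2(i90 - i270, i0 - i180))    # phase kept for depth
    return out

pixels = [(100, 90, 60, 70),    # consistent -> kept
          (200, 90, 60, 70)]    # inconsistent -> discarded
res = mask_and_phase(pixels)
```

Downstream depth code can then simply skip NaN entries instead of waiting for a full re-capture, which is the efficiency gain the embodiment claims.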
圖10A及10B是一範例說明放棄無效資料的感測示意圖。請先參照圖10A,圖中所示為所有畫素皆保留的影像。請接著參照圖10B,假設差異門檻值為40,則差異超過40的畫素將被放棄,且處理器130可將這些受放棄的畫素的強度設為零或忽略不計。
10A and 10B are schematic diagrams illustrating an example of abandoning invalid data. Please refer to FIG. 10A first. The picture shows an image in which all pixels are retained. 10B, assuming that the difference threshold is 40, pixels with a difference exceeding 40 will be discarded, and the
需說明的是,前述三個實施例中各步驟可依據實際需求而互換、增加、或改變。例如,在第一實施例的步驟730中更進一步增加步驟870的判斷模糊畫素數量的機制。 It should be noted that the steps in the foregoing three embodiments can be interchanged, added, or changed according to actual needs. For example, in step 730 of the first embodiment, the mechanism for determining the number of blurred pixels in step 870 is further increased.
綜上所述,本發明實施例基於飛行時間測距的運算裝置、感測裝置及處理方法,可基於任兩相位的強度資訊之間的差異、模糊畫素數量、姿態資訊或其組合來判斷是否發生動態模糊現象。若經評估有動態模糊現象,可重新拍攝、或放棄部分有動態模糊的畫素的強度資訊。藉此,可以簡便的方式來降低動態模糊現象對後續深度資訊估測的影響。 In summary, the computation apparatus, sensing apparatus, and processing method based on time-of-flight ranging of the embodiments can judge whether motion blur has occurred based on the difference between the intensity information of any two phases, the number of blurred pixels, the attitude information, or a combination thereof. If motion blur is detected, the scene can be re-captured, or the intensity information of the motion-blurred pixels can be discarded. In this way, the influence of motion blur on subsequent depth information estimation can be reduced in a simple manner.
雖然本發明已以實施例揭露如上,然其並非用以限定本發明,任何所屬技術領域中具有通常知識者,在不脫離本發明的精神和範圍內,當可作些許的更動與潤飾,故本發明的保護範圍當視後附的申請專利範圍所界定者為準。 Although the present invention has been disclosed as above by the embodiments, it is not intended to limit the present invention. Any person with ordinary knowledge in the technical field can make some changes and modifications without departing from the spirit and scope of the present invention. The scope of protection of the present invention shall be subject to the scope defined in the appended patent application.
S410~S430:步驟 S410~S430: Steps
Claims (13)
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US201962807246P | 2019-02-19 | 2019-02-19 | |
| US62/807,246 | 2019-02-19 |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| TWI696841B true TWI696841B (en) | 2020-06-21 |
| TW202032155A TW202032155A (en) | 2020-09-01 |
Family
ID=72110768
Family Applications (2)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| TW108115961A TWI741291B (en) | 2019-02-19 | 2019-05-09 | Verification method of time-of-flight camera module and verification system thereof |
| TW108121698A TWI696841B (en) | 2019-02-19 | 2019-06-21 | Computation apparatus, sensing apparatus and processing method based on time of flight |
Family Applications Before (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| TW108115961A TWI741291B (en) | 2019-02-19 | 2019-05-09 | Verification method of time-of-flight camera module and verification system thereof |
Country Status (2)
| Country | Link |
|---|---|
| CN (5) | CN111580117A (en) |
| TW (2) | TWI741291B (en) |
Families Citing this family (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| DE102020123671B4 (en) * | 2020-09-10 | 2022-12-22 | Ifm Electronic Gmbh | Method and device for dynamic expansion of a time-of-flight camera system |
| CN112954230B (en) * | 2021-02-08 | 2022-09-09 | 深圳市汇顶科技股份有限公司 | Depth measurement method, chip and electronic device |
| CN113298778B (en) * | 2021-05-21 | 2023-04-07 | 奥比中光科技集团股份有限公司 | Depth calculation method and system based on flight time and storage medium |
| CN113219476B (en) * | 2021-07-08 | 2021-09-28 | 武汉市聚芯微电子有限责任公司 | Ranging method, terminal and storage medium |
| TWI762387B (en) * | 2021-07-16 | 2022-04-21 | 台達電子工業股份有限公司 | Time of flight devide and inspecting method for the same |
Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN105894492A (en) * | 2015-01-06 | 2016-08-24 | 三星电子株式会社 | T-O-F depth imaging device rendering depth image of object and method thereof |
| TW201719193A (en) * | 2015-09-10 | 2017-06-01 | 義明科技股份有限公司 | Non-contact optical sensing device and method for sensing depth and position of an object in three-dimensional space |
| TW201841000A (en) * | 2017-02-17 | 2018-11-16 | 日商北陽電機股份有限公司 | Object capturing device |
Family Cites Families (40)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2002139818A (en) * | 2000-11-01 | 2002-05-17 | Fuji Photo Film Co Ltd | Lens-fitted photographic film unit |
| CN101252802B (en) * | 2007-02-25 | 2013-08-21 | 电灯专利信托有限公司 | Charge pump electric ballast for low input voltage |
| JP2008209298A (en) * | 2007-02-27 | 2008-09-11 | Fujifilm Corp | Ranging device and ranging method |
| EP2402783B1 (en) * | 2009-02-27 | 2013-10-02 | Panasonic Corporation | Distance measuring apparatus |
| CN102735910B (en) * | 2011-04-08 | 2014-10-29 | 中山大学 | Maximum peak voltage detection circuit |
| WO2013009099A2 (en) * | 2011-07-12 | 2013-01-17 | 삼성전자 주식회사 | Device and method for blur processing |
| CN103181156B (en) * | 2011-07-12 | 2017-09-01 | 三星电子株式会社 | Fuzzy Processing device and method |
| EP2728374B1 (en) * | 2012-10-30 | 2016-12-28 | Technische Universität Darmstadt | Invention relating to the hand-eye calibration of cameras, in particular depth image cameras |
| AT513589B1 (en) * | 2012-11-08 | 2015-11-15 | Bluetechnix Gmbh | Recording method for at least two ToF cameras |
| US9019480B2 (en) * | 2013-02-26 | 2015-04-28 | Jds Uniphase Corporation | Time-of-flight (TOF) system, sensor pixel, and method |
| US9681123B2 (en) * | 2014-04-04 | 2017-06-13 | Microsoft Technology Licensing, Llc | Time-of-flight phase-offset calibration |
| US9641830B2 (en) * | 2014-04-08 | 2017-05-02 | Lucasfilm Entertainment Company Ltd. | Automated camera calibration methods and systems |
| JP6424338B2 (en) * | 2014-06-09 | 2018-11-21 | パナソニックIpマネジメント株式会社 | Ranging device |
| TWI545951B (en) * | 2014-07-01 | 2016-08-11 | 晶相光電股份有限公司 | Sensors and sensing methods |
| EP2978216B1 (en) * | 2014-07-24 | 2017-08-16 | Espros Photonics AG | Method for the detection of motion blur |
| JP6280002B2 (en) * | 2014-08-22 | 2018-02-14 | 浜松ホトニクス株式会社 | Ranging method and ranging device |
| CN104677277B (en) * | 2015-02-16 | 2017-06-06 | 武汉天远视科技有限责任公司 | A kind of method and system for measuring object geometric attribute or distance |
| CN106152947B (en) * | 2015-03-31 | 2019-11-29 | 北京京东尚科信息技术有限公司 | Measure equipment, the method and apparatus of dimension of object |
| US9945936B2 (en) * | 2015-05-27 | 2018-04-17 | Microsoft Technology Licensing, Llc | Reduction in camera to camera interference in depth measurements using spread spectrum |
| CN107850664B (en) * | 2015-07-22 | 2021-11-05 | 新唐科技日本株式会社 | ranging device |
| US9716850B2 (en) * | 2015-09-08 | 2017-07-25 | Pixart Imaging (Penang) Sdn. Bhd. | BJT pixel circuit capable of cancelling ambient light influence, image system including the same and operating method thereof |
| TWI557393B (en) * | 2015-10-08 | 2016-11-11 | 微星科技股份有限公司 | Calibration method of laser ranging and device utilizing the method |
| US10057526B2 (en) * | 2015-11-13 | 2018-08-21 | Pixart Imaging Inc. | Pixel circuit with low power consumption, image system including the same and operating method thereof |
| US9762824B2 (en) * | 2015-12-30 | 2017-09-12 | Raytheon Company | Gain adaptable unit cell |
| US10516875B2 (en) * | 2016-01-22 | 2019-12-24 | Samsung Electronics Co., Ltd. | Method and apparatus for obtaining depth image by using time-of-flight sensor |
| CN106997582A (en) * | 2016-01-22 | 2017-08-01 | Beijing Samsung Telecommunications Technology Research Co., Ltd. | Motion blur removal method and device for a time-of-flight three-dimensional sensor |
| CN107040732B (en) * | 2016-02-03 | 2019-11-05 | PixArt Imaging Inc. | Image sensing circuit and method |
| CN107229056A (en) * | 2016-03-23 | 2017-10-03 | Panasonic Intellectual Property Management Co., Ltd. | Image processing apparatus, image processing method and recording medium |
| KR102752035B1 (en) * | 2016-08-22 | 2025-01-09 | Samsung Electronics Co., Ltd. | Method and device for acquiring distance information |
| US10762651B2 (en) * | 2016-09-30 | 2020-09-01 | Magic Leap, Inc. | Real time calibration for time-of-flight depth measurement |
| JP6862751B2 (en) * | 2016-10-14 | 2021-04-21 | Fujitsu Limited | Distance measuring device, distance measuring method and program |
| CN108616726A (en) * | 2016-12-21 | 2018-10-02 | Lite-On Electronics (Guangzhou) Co., Ltd. | Exposure control method and exposure control device based on structured light |
| US20180189977A1 (en) * | 2016-12-30 | 2018-07-05 | Analog Devices Global | Light detector calibrating a time-of-flight optical system |
| US10557921B2 (en) * | 2017-01-23 | 2020-02-11 | Microsoft Technology Licensing, Llc | Active brightness-based strategy for invalidating pixels in time-of-flight depth-sensing |
| CN108700664A (en) * | 2017-02-06 | 2018-10-23 | Panasonic Intellectual Property Management Co., Ltd. | Three-dimensional motion acquisition device and three-dimensional motion acquisition method |
| WO2018235163A1 (en) * | 2017-06-20 | 2018-12-27 | Sony Interactive Entertainment Inc. | Calibration device, calibration chart, chart pattern generation device, and calibration method |
| EP3783304B1 (en) * | 2017-06-22 | 2024-07-03 | Hexagon Technology Center GmbH | Calibration of a triangulation sensor |
| TWI622960B (en) * | 2017-11-10 | 2018-05-01 | 財團法人工業技術研究院 | Correction method of depth image capturing device |
| CN108401098A (en) * | 2018-05-15 | 2018-08-14 | 绍兴知威光电科技有限公司 | TOF depth camera system and method for reducing external error |
| CN112363150B (en) * | 2018-08-22 | 2024-05-28 | Guangdong OPPO Mobile Telecommunications Corp., Ltd. | Calibration method, calibration controller, electronic device and calibration system |
2019
- 2019-04-02 CN CN201910263049.9A patent/CN111580117A/en active Pending
- 2019-04-25 CN CN201910341808.9A patent/CN111586306B/en active Active
- 2019-05-09 TW TW108115961A patent/TWI741291B/en active
- 2019-05-23 CN CN201910435784.3A patent/CN111586307B/en active Active
- 2019-06-21 TW TW108121698A patent/TWI696841B/en active
- 2019-06-21 CN CN201910541119.2A patent/CN111580067B/en active Active
- 2019-10-14 CN CN201910971700.8A patent/CN111624612B/en active Active
Patent Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN105894492A (en) * | 2015-01-06 | 2016-08-24 | Samsung Electronics Co., Ltd. | TOF depth imaging device rendering depth image of object and method thereof |
| TW201719193A (en) * | 2015-09-10 | 2017-06-01 | Eminent Electronic Technology Corp., Ltd. | Non-contact optical sensing device and method for sensing depth and position of an object in three-dimensional space |
| TW201841000A (en) * | 2017-02-17 | 2018-11-16 | Hokuyo Automatic Co., Ltd. | Object capturing device |
Also Published As
| Publication number | Publication date |
|---|---|
| TW202032154A (en) | 2020-09-01 |
| CN111586307A (en) | 2020-08-25 |
| CN111580117A (en) | 2020-08-25 |
| CN111586306B (en) | 2022-02-01 |
| CN111580067A (en) | 2020-08-25 |
| CN111580067B (en) | 2022-12-02 |
| CN111586307B (en) | 2021-11-02 |
| TW202032155A (en) | 2020-09-01 |
| CN111624612B (en) | 2023-04-07 |
| CN111624612A (en) | 2020-09-04 |
| TWI741291B (en) | 2021-10-01 |
| CN111586306A (en) | 2020-08-25 |
Similar Documents
| Publication | Title |
|---|---|
| TWI696841B (en) | Computation apparatus, sensing apparatus and processing method based on time of flight | |
| US20250138193A1 (en) | Processing system for lidar measurements | |
| JP7308856B2 (en) | Active signal detection using adaptive discrimination of noise floor | |
| US20190113606A1 (en) | Time-of-flight depth image processing systems and methods | |
| JP7321178B2 (en) | Choosing a LIDAR Pulse Detector According to Pulse Type | |
| KR102869766B1 (en) | METHOD FOR TIME-OF-FLIGHT DEPTH MEASUREMENT AND ToF CAMERA FOR PERFORMING THE SAME | |
| JP7325433B2 (en) | Detection of laser pulse edges for real-time detection | |
| US11272157B2 (en) | Depth non-linearity compensation in time-of-flight imaging | |
| US11423572B2 (en) | Built-in calibration of time-of-flight depth imaging systems | |
| US10473461B2 (en) | Motion-sensor device having multiple light sources | |
| JP6193227B2 (en) | Blur processing apparatus and method | |
| CN109903324B (en) | Depth image acquisition method and device | |
| CN112368597A (en) | Optical distance measuring device | |
| KR20140057625A (en) | Improvements in or relating to the processing of time-of-flight signals | |
| US11965962B2 (en) | Resolving multi-path corruption of time-of-flight depth images | |
| CN113439195A (en) | Three-dimensional imaging and sensing using dynamic vision sensors and pattern projection | |
| KR20130008469A (en) | Method and apparatus for processing blur | |
| CN113497892B (en) | Imaging device, distance measuring method, storage medium, and computer device | |
| CN117280177A (en) | Systems and methods for structured light depth calculation using single photon avalanche diodes | |
| US11467258B2 (en) | Computation device, sensing device and processing method based on time of flight | |
| TWI707152B (en) | Computation apparatus, sensing apparatus, and processing method based on time of flight | |
| CN112415487B (en) | Computing device, sensing device and processing method based on time-of-flight ranging | |
| US11961257B2 (en) | Built-in calibration of time-of-flight depth imaging systems |