US20230328401A1 - Timing of multiplexed sensor phase measurements in a depth camera assembly for depth determination using fringe interferometry - Google Patents
- Publication number
- US20230328401A1 (application US18/208,143)
- Authority
- US
- United States
- Prior art keywords
- pixel
- photodiode
- illumination source
- transfer gate
- time interval
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/30—Image reproducers
- H04N13/332—Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
- H04N13/344—Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S7/00—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
- G01S7/48—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
- G01S7/481—Constructional features, e.g. arrangements of optical elements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/50—Control of the SSIS exposure
- H04N25/53—Control of the integration time
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/02—Systems using the reflection of electromagnetic waves other than radio waves
- G01S17/06—Systems determining position data of a target
- G01S17/08—Systems determining position data of a target for measuring distance only
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/02—Systems using the reflection of electromagnetic waves other than radio waves
- G01S17/06—Systems determining position data of a target
- G01S17/46—Indirect determination of position data
- G01S17/48—Active triangulation systems, i.e. using the transmission and reflection of electromagnetic waves other than radio waves
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/88—Lidar systems specially adapted for specific applications
- G01S17/89—Lidar systems specially adapted for specific applications for mapping or imaging
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S7/00—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
- G01S7/48—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
- G01S7/483—Details of pulse systems
- G01S7/486—Receivers
- G01S7/4861—Circuits for detection, sampling, integration or read-out
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S7/00—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
- G01S7/48—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
- G01S7/497—Means for monitoring or calibrating
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/017—Head mounted
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/017—Head mounted
- G02B27/0172—Head mounted characterised by optical features
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/204—Image signal generators using stereoscopic image cameras
- H04N13/254—Image signal generators using stereoscopic image cameras in combination with electromagnetic radiation sources for illuminating objects
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/271—Image signal generators wherein the generated image signals comprise depth maps or disparity maps
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/70—SSIS architectures; Circuits associated therewith
- H04N25/76—Addressed sensors, e.g. MOS or CMOS sensors
- H04N25/77—Pixel circuitry, e.g. memories, A/D converters, pixel amplifiers, shared circuits or shared components
- H04N25/771—Pixel circuitry, e.g. memories, A/D converters, pixel amplifiers, shared circuits or shared components comprising storage means other than floating diffusion
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/0101—Head-up displays characterised by optical features
- G02B2027/0138—Head-up displays characterised by optical features comprising image capture systems, e.g. camera
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/204—Image signal generators using stereoscopic image cameras
- H04N13/207—Image signal generators using stereoscopic image cameras using a single 2D image sensor
Definitions
- the present disclosure generally relates to virtual or augmented reality systems and more specifically relates to headsets for virtual reality systems that obtain depth information of a local area.
- Providing virtual reality (VR) or augmented reality (AR) content to users through a head mounted display (HMD) often relies on localizing a user's position in an arbitrary environment and determining a three dimensional mapping of the surroundings within the arbitrary environment.
- the user's surroundings within the arbitrary environment may then be represented in a virtual environment or the user's surroundings may be overlaid with additional content.
- Conventional HMDs include one or more quantitative depth cameras to determine surroundings of a user within the user's environment.
- conventional depth cameras use structured light or time of flight to determine the HMD's location within an environment.
- Structured light depth cameras use an active illumination source to project known patterns into the environment surrounding the HMD.
- structured light commonly requires the projected pattern to be configured so different portions of the pattern include different characteristics that are later identified. Having different characteristics in different portions of the pattern causes significant portions of a resulting image of the projected pattern to not be illuminated. This inefficiently uses the sensor capturing the resulting image; for example, projection of the pattern by a structured light depth camera results in less than 10% of sensor pixels collecting light from the projected pattern, while requiring multiple sensor pixels to be illuminated to perform a single depth measurement.
- Time of flight depth cameras measure a round trip travel time of light projected into the environment surrounding a depth camera and returning to pixels on a sensor array. While time of flight depth cameras are capable of measuring depths of different objects in the environment independently via each sensor pixel, light incident on a sensor pixel may be a combination of light received from multiple optical paths in the environment surrounding the depth camera. Existing techniques to resolve the optical paths of light incident on a sensor pixel are computationally complex and do not fully disambiguate between optical paths in the environment.
- a headset in a virtual reality (VR) or augmented reality (AR) system environment includes a depth camera assembly (DCA) configured to determine distances between a head mounted display (HMD) and one or more objects in an area surrounding the HMD and within a field of view of an imaging device included in the headset (i.e., a “local area”).
- the DCA includes the imaging device, such as a camera, and an illumination source that is displaced by a specific distance relative to the imaging device.
- the illumination source is configured to emit a series of periodic illumination patterns (e.g., a sinusoid) into the local area. Each periodic illumination pattern of the series is phase shifted by a different amount.
- the periodicity of the illumination pattern is a spatial periodicity observed on an object illuminated by the illumination pattern, and the phase shifts are lateral spatial phase shifts along the direction of periodicity.
- the periodicity of the illumination pattern is in a direction that is parallel to a displacement between the illumination source and a center of the imaging device of the DCA.
- the imaging device captures frames including the periodic illumination patterns via a sensor including multiple pixels and coupled to a processor.
- For each pixel of the sensor, the processor relates intensities captured by the pixel in multiple images to a phase shift of a periodic illumination pattern captured by the multiple images. From the phase shift of the periodic illumination pattern captured by the pixel, the processor determines a depth from the HMD to the location within the local area from which the pixel captured the intensities of the periodic illumination pattern.
- Each pixel of the sensor may independently determine a depth based on captured intensities of the periodic illumination pattern, optimally using the pixels of the sensor of the DCA.
- each pixel of the sensor comprises a photodiode coupled to multiple charge storage bins by transfer gates.
- a pixel of the sensor includes a photodiode coupled to three charge storage bins, with a different transfer gate coupling the photodiode to different charge storage bins.
- the pixel receives a control signal opening a specific transfer gate, while other transfer gates remain closed. Charge accumulated by the photodiode of the pixel is accumulated in the charge storage bin via the opened specific transfer gate. Subsequently, the specific transfer gate is closed and charge is accumulated by the photodiode.
- a subsequent control signal received by the pixel opens another transfer gate at a different time, so charge accumulated by the photodiode is accumulated in another charge storage bin through the other transfer gate.
- different transfer gates are opened at different times when the illumination source emits the periodic illumination pattern. For example, a first transfer gate is opened, while other transfer gates remain closed, during a time interval when the illumination source emits the periodic illumination pattern. The first transfer gate is closed when the illumination source stops emitting the periodic illumination pattern. Subsequently, a different transfer gate is opened when the illumination source emits the periodic illumination pattern during another time interval, while the first transfer gate and other transfer gates are closed.
- different charge storage bins store charge accumulated by the sensor at different times. Charge accumulated in different charge storage bins is retrieved and used to determine depth of a location in the local area from which the pixel captured intensity of light.
- a method is described. It is determined that an illumination source is emitting a first periodic illumination pattern during a first time interval. During the first time interval, a first control signal is communicated to a sensor, the first control signal opening a first transfer gate coupling a photodiode of a pixel to a first charge storage bin and other control signals closing other transfer gates coupling the photodiode of the pixel to other charge storage bins apart from the first charge storage bin. It is determined that the illumination source is emitting a second periodic illumination pattern having a different spatial phase shift during a second time interval.
- a second control signal is communicated to the sensor, the second control signal opening up a second transfer gate coupling the photodiode of the pixel to a second charge storage bin and other control signals closing other transfer gates coupling the photodiode of the pixel to other charge storage bins apart from the second charge storage bin.
- FIG. 1 is a block diagram of a system environment for providing virtual reality or augmented reality content, in accordance with an embodiment.
- FIG. 2 is a diagram of a head mounted display (HMD), in accordance with an embodiment.
- FIG. 3 is a cross section of a front rigid body of a head mounted display (HMD), in accordance with an embodiment.
- FIG. 4 is an example of light emitted into a local area and captured by a depth camera assembly, in accordance with an embodiment.
- FIG. 5 is an example of using multiple frequencies of a continuous intensity pattern of light emitted by a DCA to identify a phase shift for a pixel of the sensor, in accordance with an embodiment.
- FIG. 6 A is an example pixel of a sensor included in an imaging device of a depth camera assembly, in accordance with an embodiment.
- FIG. 6 B is an example of control signals operating the example pixel shown in FIG. 6 A , in accordance with an embodiment.
- FIG. 7 is another example of control signals operating the example pixel shown in FIG. 6 A , in accordance with an embodiment.
- FIG. 1 is a block diagram of one embodiment of a system environment 100 in which a console 110 operates.
- the system environment 100 shown in FIG. 1 may provide augmented reality (AR) or virtual reality (VR) content to users in various embodiments. Additionally or alternatively, the system environment 100 generates one or more virtual environments and presents a virtual environment with which a user may interact to the user.
- the system environment 100 shown by FIG. 1 comprises a head mounted display (HMD) 105 and an input/output (I/O) interface 115 that is coupled to a console 110 . While FIG. 1 shows an example system environment 100 including one HMD 105 and one I/O interface 115 , in other embodiments any number of these components may be included in the system environment 100 .
- there may be multiple HMDs 105, each having an associated I/O interface 115, with each HMD 105 and I/O interface 115 communicating with the console 110.
- different and/or additional components may be included in the system environment 100 .
- functionality described in conjunction with one or more of the components shown in FIG. 1 may be distributed among the components in a different manner than described in conjunction with FIG. 1 in some embodiments.
- some or all of the functionality of the console 110 is provided by the HMD 105 .
- the head mounted display (HMD) 105 presents content to a user comprising augmented views of a physical, real-world environment with computer-generated elements (e.g., two dimensional (2D) or three dimensional (3D) images, 2D or 3D video, sound, etc.) or presents content comprising a virtual environment.
- the presented content includes audio that is presented via an external device (e.g., speakers and/or headphones) that receives audio information from the HMD 105 , the console 110 , or both, and presents audio data based on the audio information.
- An embodiment of the HMD 105 is further described below in conjunction with FIGS. 2 and 3 .
- the HMD 105 may comprise one or more rigid bodies, which may be rigidly or non-rigidly coupled to each other.
- a rigid coupling between rigid bodies causes the coupled rigid bodies to act as a single rigid entity.
- a non-rigid coupling between rigid bodies allows the rigid bodies to move relative to each other.
- the HMD 105 includes a depth camera assembly (DCA) 120 , an electronic display 125 , an optics block 130 , one or more position sensors 135 , and an inertial measurement unit (IMU) 140 .
- Some embodiments of the HMD 105 have different components than those described in conjunction with FIG. 1 . Additionally, the functionality provided by various components described in conjunction with FIG. 1 may be differently distributed among the components of the HMD 105 in other embodiments.
- the DCA 120 captures data describing depth information of an area surrounding the HMD 105 .
- Some embodiments of the DCA 120 include one or more imaging devices (e.g., a camera, a video camera) and an illumination source configured to emit a series of periodic illumination patterns, with each periodic illumination pattern phase shifted by a different amount.
- the illumination source emits a series of sinusoids that each have a specific spatial phase shift.
- the periodicity of the illumination pattern is a spatial periodicity observed on an object illuminated by the illumination pattern, and the phase shifts are lateral spatial phase shifts along the direction of periodicity.
- the periodicity of the illumination pattern is in a direction that is parallel to a displacement between the illumination source and a center of the imaging device of the DCA 120
- the illumination source emits a series of sinusoids that each have a different spatial phase shift into an environment surrounding the HMD 105 .
- the illumination source emits a sinusoidal pattern multiplied by a low frequency envelope, such as a Gaussian, which changes relative signal intensity over the field of view of the imaging device. This change in relative signal intensity over the imaging device's field of view changes temporal noise characteristics without affecting the depth determination (further described below in conjunction with FIGS. 4 and 5 ), provided the higher frequency signal is a sinusoid.
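- The following minimal sketch (illustrative only; the function name and parameter values are assumptions, not the patent's implementation) shows a spatially periodic sinusoidal pattern multiplied by a low-frequency Gaussian envelope, producing the relative-intensity variation described above while leaving the underlying periodicity intact.

```python
import numpy as np

def illumination_pattern(x, period, phase_shift, envelope_sigma):
    """Sinusoidal fringe pattern under an assumed Gaussian intensity envelope.

    x              : lateral coordinate across the field of view
    period         : spatial period T of the sinusoid (same units as x)
    phase_shift    : lateral spatial phase shift applied to this frame (radians)
    envelope_sigma : width of the low-frequency Gaussian envelope
    """
    carrier = 0.5 * (1.0 + np.cos(2.0 * np.pi * x / period + phase_shift))  # non-negative sinusoid
    envelope = np.exp(-0.5 * (x / envelope_sigma) ** 2)                     # low-frequency envelope
    return carrier * envelope

# Example: three frames phase shifted by 0, 120, and 240 degrees.
x = np.linspace(-1.0, 1.0, 640)
frames = [illumination_pattern(x, period=0.2, phase_shift=p, envelope_sigma=0.6)
          for p in np.deg2rad([0.0, 120.0, 240.0])]
```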
- the imaging device of the DCA 120 includes a sensor comprising multiple pixels that determine a phase shift of a periodic illumination pattern included in multiple images captured by the imaging device based on relative intensities included in the multiple captured images.
- the DCA 120 determines, from the determined phase shift, a depth of a location within the local area from which the imaging device captured the periodic illumination pattern, as further described below in conjunction with FIGS. 4 and 5 .
- each pixel of the sensor of the imaging device determines a depth of a location within the local area from which a pixel captured intensities of the periodic illumination pattern based on a phase shift determined for the periodic illumination pattern captured by the pixel.
- the imaging device captures and records particular ranges of wavelengths of light (i.e., “bands” of light).
- Example bands of light captured by an imaging device include: a visible band (~380 nm to 750 nm), an infrared (IR) band (~750 nm to 2,200 nm), an ultraviolet band (100 nm to 380 nm), another portion of the electromagnetic spectrum, or some combination thereof.
- an imaging device captures images including light in the visible band and in the infrared band.
- the electronic display 125 displays 2D or 3D images to the user in accordance with data received from the console 110 .
- the electronic display 125 comprises a single electronic display or multiple electronic displays (e.g., a display for each eye of a user).
- Examples of the electronic display 125 include: a liquid crystal display (LCD), an organic light emitting diode (OLED) display, an active-matrix organic light-emitting diode display (AMOLED), some other display, or some combination thereof.
- the optics block 130 magnifies image light received from the electronic display 125 , corrects optical errors associated with the image light, and presents the corrected image light to a user of the HMD 105 .
- the optics block 130 includes one or more optical elements.
- Example optical elements included in the optics block 130 include: an aperture, a Fresnel lens, a convex lens, a concave lens, a filter, a reflecting surface, or any other suitable optical element that affects image light.
- the optics block 130 may include combinations of different optical elements.
- one or more of the optical elements in the optics block 130 may have one or more coatings, such as anti-reflective coatings.
- magnification and focusing of the image light by the optics block 130 allows the electronic display 125 to be physically smaller, weigh less and consume less power than larger displays. Additionally, magnification may increase the field of view of the content presented by the electronic display 125 . For example, the field of view of the displayed content is such that the displayed content is presented using almost all (e.g., approximately 110 degrees diagonal), and in some cases all, of the user's field of view. Additionally, in some embodiments, the amount of magnification may be adjusted by adding or removing optical elements.
- the optics block 130 may be designed to correct one or more types of optical error.
- optical error include barrel distortions, pincushion distortions, longitudinal chromatic aberrations, or transverse chromatic aberrations.
- Other types of optical errors may further include spherical aberrations, comatic aberrations or errors due to the lens field curvature, astigmatisms, or any other type of optical error.
- content provided to the electronic display 125 for display is pre-distorted, and the optics block 130 corrects the distortion when it receives image light from the electronic display 125 generated based on the content.
- the IMU 140 is an electronic device that generates data indicating a position of the HMD 105 based on measurement signals received from one or more of the position sensors 135 and from depth information received from the DCA 120 .
- a position sensor 135 generates one or more measurement signals in response to motion of the HMD 105 .
- Examples of position sensors 135 include: one or more accelerometers, one or more gyroscopes, one or more magnetometers, another suitable type of sensor that detects motion, a type of sensor used for error correction of the IMU 140 , or some combination thereof.
- the position sensors 135 may be located external to the IMU 140 , internal to the IMU 140 , or some combination thereof.
- based on the one or more measurement signals from one or more position sensors 135 , the IMU 140 generates data indicating an estimated current position of the HMD 105 relative to an initial position of the HMD 105 .
- the position sensors 135 include multiple accelerometers to measure translational motion (forward/back, up/down, left/right) and multiple gyroscopes to measure rotational motion (e.g., pitch, yaw, roll).
- the IMU 140 rapidly samples the measurement signals and calculates the estimated current position of the HMD 105 from the sampled data.
- the IMU 140 integrates the measurement signals received from the accelerometers over time to estimate a velocity vector and integrates the velocity vector over time to determine an estimated current position of a reference point on the HMD 105 .
- the IMU 140 provides the sampled measurement signals to the console 110 , which interprets the data to reduce error.
- the reference point is a point that may be used to describe the position of the HMD 105 .
- the reference point may generally be defined as a point in space or a position related to the HMD's 105 orientation and position.
- the IMU 140 receives one or more parameters from the console 110 . As further discussed below, the one or more parameters are used to maintain tracking of the HMD 105 . Based on a received parameter, the IMU 140 may adjust one or more IMU parameters (e.g., sample rate). In some embodiments, certain parameters cause the IMU 140 to update an initial position of the reference point so it corresponds to a next position of the reference point. Updating the initial position of the reference point as the next calibrated position of the reference point helps reduce accumulated error associated with the current position estimated by the IMU 140 . The accumulated error, also referred to as drift error, causes the estimated position of the reference point to "drift" away from the actual position of the reference point over time. In some embodiments of the HMD 105 , the IMU 140 may be a dedicated hardware component. In other embodiments, the IMU 140 may be a software component implemented in one or more processors.
- the I/O interface 115 is a device that allows a user to send action requests and receive responses from the console 110 .
- An action request is a request to perform a particular action.
- an action request may be an instruction to start or end capture of image or video data or an instruction to perform a particular action within an application.
- the I/O interface 115 may include one or more input devices.
- Example input devices include: a keyboard, a mouse, a game controller, or any other suitable device for receiving action requests and communicating the action requests to the console 110 .
- An action request received by the I/O interface 115 is communicated to the console 110 , which performs an action corresponding to the action request.
- the I/O interface 115 includes an IMU 140 , as further described above, that captures calibration data indicating an estimated position of the I/O interface 115 relative to an initial position of the I/O interface 115 .
- the I/O interface 115 may provide haptic feedback to the user in accordance with instructions received from the console 110 . For example, haptic feedback is provided when an action request is received, or the console 110 communicates instructions to the I/O interface 115 causing the I/O interface 115 to generate haptic feedback when the console 110 performs an action.
- the console 110 provides content to the HMD 105 for processing in accordance with information received from one or more of: the DCA 120 , the HMD 105 , and the I/O interface 115 .
- the console 110 includes an application store 150 , a tracking module 155 and a content engine 145 .
- Some embodiments of the console 110 have different modules or components than those described in conjunction with FIG. 1 .
- the functions further described below may be distributed among components of the console 110 in a different manner than described in conjunction with FIG. 1 .
- the application store 150 stores one or more applications for execution by the console 110 .
- An application is a group of instructions that, when executed by a processor, generates content for presentation to the user. Content generated by an application may be in response to inputs received from the user via movement of the HMD 105 or the I/O interface 115 . Examples of applications include: gaming applications, conferencing applications, video playback applications, or other suitable applications.
- the tracking module 155 calibrates the system environment 100 using one or more calibration parameters and may adjust one or more calibration parameters to reduce error in determination of the position of the HMD 105 or of the I/O interface 115 .
- the tracking module 155 communicates a calibration parameter to the DCA 120 to adjust the focus of the DCA 120 to more accurately determine depths of locations within the local area surrounding the HMD 105 from captured intensities. Calibration performed by the tracking module 155 also accounts for information received from the IMU 140 in the HMD 105 and/or an IMU 140 included in the I/O interface 115 . Additionally, if tracking of the HMD 105 is lost (e.g., the DCA 120 loses line of sight of at least a threshold number of SL elements), the tracking module 155 may re-calibrate some or all of the system environment 100 .
- the tracking module 155 tracks movements of the HMD 105 or of the I/O interface 115 using information from the DCA 120 , the one or more position sensors 135 , the IMU 140 or some combination thereof. For example, the tracking module 155 determines a position of a reference point of the HMD 105 in a mapping of a local area based on information from the HMD 105 . The tracking module 155 may also determine positions of the reference point of the HMD 105 or a reference point of the I/O interface 115 using data indicating a position of the HMD 105 from the IMU 140 or using data indicating a position of the I/O interface 115 from an IMU 140 included in the I/O interface 115 , respectively.
- the tracking module 155 may use portions of data indicating a position of the HMD 105 from the IMU 140 as well as representations of the local area from the DCA 120 to predict a future location of the HMD 105 .
- the tracking module 155 provides the estimated or predicted future position of the HMD 105 or the I/O interface 115 to the content engine 145 .
- the content engine 145 generates a 3D mapping of the area surrounding the HMD 105 (i.e., the “local area”) based on information received from the DCA 120 included in the HMD 105 .
- the content engine 145 determines depth information for the 3D mapping of the local area based on depths determined by each pixel of the sensor in the imaging device from a phase shift determined from relative intensities captured by a pixel of the sensor in multiple images.
- the content engine 145 uses different types of information determined by the DCA 120 or a combination of types of information determined by the DCA 120 to generate the 3D mapping of the local area.
- the content engine 145 also executes applications within the system environment 100 and receives position information, acceleration information, velocity information, predicted future positions, or some combination thereof, of the HMD 105 from the tracking module 155 . Based on the received information, the content engine 145 determines content to provide to the HMD 105 for presentation to the user. For example, if the received information indicates that the user has looked to the left, the content engine 145 generates content for the HMD 105 that mirrors the user's movement in a virtual environment or in an environment augmenting the local area with additional content. Additionally, the content engine 145 performs an action within an application executing on the console 110 in response to an action request received from the I/O interface 115 and provides feedback to the user that the action was performed. The provided feedback may be visual or audible feedback via the HMD 105 or haptic feedback via the I/O interface 115 .
- FIG. 2 is a wire diagram of one embodiment of a head mounted display (HMD) 200 .
- the HMD 200 is an embodiment of the HMD 105 , and includes a front rigid body 205 , a band 210 , a reference point 215 , a left side 220 A, a top side 220 B, a right side 220 C, a bottom side 220 D, and a front side 220 E.
- the HMD 200 shown in FIG. 2 also includes an embodiment of a depth camera assembly (DCA) 120 including an imaging device 225 and an illumination source 230 , which are further described below in conjunction with FIGS. 3 and 4 .
- the front rigid body 205 includes one or more electronic display elements of the electronic display 125 (not shown), the IMU 140 , the one or more position sensors 135 , and the reference point 215 .
- the HMD 200 includes a DCA 120 comprising an imaging device 225 , such as a camera, and an illumination source 230 configured to emit a series of periodic illumination patterns, with each periodic illumination pattern phase shifted by a different amount into a local area surrounding the HMD 200 .
- the illumination source 230 emits a sinusoidal pattern, a near sinusoidal pattern, or any other periodic pattern (e.g., a square wave).
- the illumination source 230 emits a series of sinusoids that each have a different phase shift into an environment surrounding the HMD 200 .
- the illumination source 230 includes an acousto-optic modulator configured to generate two Gaussian beams of light that interfere with each other in the local area so a sinusoidal interference pattern is generated.
- the illumination source 230 includes one or more of an acousto-optic device, an electro-optic device, physical optics, optical interference, a diffractive optical device, or any other suitable components configured to generate the periodic illumination pattern.
- the illumination source 230 includes additional optical elements that modify the generated sinusoidal interference pattern to be within an intensity envelope (e.g., within a Gaussian intensity pattern); alternatively, the HMD 200 includes the additional optical elements and the Gaussian beams of light generated by the illumination source 230 are directed through the additional optical elements before being emitted into the environment surrounding the HMD 200 .
- the imaging device 225 captures images of the local area, which are used to calculate depths relative to the HMD 200 of various locations within the local area, as further described below in conjunction with FIGS. 3 - 5 .
- FIG. 3 is a cross section of the front rigid body 205 of the HMD 200 depicted in FIG. 2 .
- the front rigid body 205 includes an imaging device 225 and an illumination source 230 .
- the front rigid body 205 also has an optical axis corresponding to a path along which light propagates through the front rigid body 205 .
- the imaging device 225 is positioned along the optical axis and captures images of a local area 305 , which is a portion of an environment surrounding the front rigid body 205 within a field of view of the imaging device 225 .
- the front rigid body 205 includes the electronic display 125 and the optics block 130 , which are further described above in conjunction with FIG. 1 .
- the front rigid body 205 also includes an exit pupil 335 where the user's eye 340 is located.
- FIG. 3 shows a cross section of the front rigid body 205 in accordance with a single eye 340 .
- the local area 305 reflects incident ambient light as well as light projected by the illumination source 230 , which is subsequently captured by the imaging device 225 .
- the electronic display 125 emits light forming an image toward the optics block 130 , which alters the light received from the electronic display 125 .
- the optics block 130 directs the altered image light to the exit pupil 335 , which is a location of the front rigid body 205 where a user's eye 340 is positioned.
- FIG. 3 shows a cross section of the front rigid body 205 for a single eye 340 of the user, with another electronic display 125 and optics block 130 , separate from those shown in FIG. 3 , included in the front rigid body 205 to present content, such as an augmented representation of the local area 305 or virtual content, to another eye of the user.
- the illumination source 230 of the depth camera assembly emits a series of periodic illumination patterns, with each periodic illumination pattern phase shifted by a different amount into the local area 305 , and the imaging device 225 captures images of the periodic illumination patterns projected onto the local area 305 using a sensor comprising multiple pixels.
- Each pixel captures intensity of light emitted by the illumination source 230 from the local area 305 in various images and communicates the captured intensity to a controller or to the console 110 , which determines a phase shift for each image, as further described below in conjunction with FIGS. 4 - 6 B , and determines a depth of the location within the local area from which the light emitted by the illumination source 230 and captured by the imaging device 225 was reflected, also further described below in conjunction with FIGS. 4 - 6 B .
- FIG. 4 is an example of light emitted into a local area and captured by a depth camera assembly included in a head mounted display (HMD) 105 .
- FIG. 4 shows an imaging device 225 and an illumination source 230 of a depth camera assembly (DCA) 120 included in the HMD.
- the imaging device 225 and the illumination source 230 are separated by a specific distance D (also referred to as a "baseline"), which is specified when the DCA 120 is assembled.
- the distance D between the imaging device 225 and the illumination source 230 is stored in a storage device coupled to the imaging device 225 , coupled to a controller included in the DCA 120 , or coupled to the console 110 in various embodiments.
- the illumination source 230 emits a smooth continuous intensity pattern of light 405 onto a flat target 410 within a local area surrounding the HMD 105 and within a field of view of the imaging device 225 .
- the continuous intensity pattern of light 405 has a period T known to the DCA 120 .
- the illumination source 230 emits any suitable intensity pattern having a period T known to the DCA 120 .
- FIG. 4 identifies an angle θi that is one half of the period T of the continuous intensity pattern of light 405 .
- θi defines a depth independent periodicity of the illumination.
- θc specifies an angle between the line perpendicular to the plane including the imaging device 225 and the location on the target 410 from which the specific pixel captures intensities of the continuous intensity pattern of light 405 emitted by the illumination source 230 .
- Each pixel of the sensor of the imaging device 225 provides an intensity of light from the continuous intensity pattern of light 405 captured in multiple images to a controller or to the console 110 , which determines a phase shift, φ, of the continuous intensity pattern of light 405 captured by each pixel of the sensor.
- Each image captured by the imaging device 225 is a digital sampling of the continuous intensity pattern of light 405 , so the set of images captured by the sensor represent a Fourier transform of the continuous intensity pattern of light 405 , and the Fourier components, a 1 and b 1 , of the fundamental harmonic of the continuous intensity pattern 405 are directly related to the phase shift for a pixel of the sensor.
- the Fourier components a 1 and b 1 are determined using the following equations:
- S n denotes an intensity of the pixel of the sensor in a particular image, n, captured by the sensor
- the set of θn represents the phase shifts introduced into the continuous intensity pattern of light 405 .
- the set of θn includes 0 degrees, 120 degrees, and 240 degrees.
- the set of θn includes 0 degrees, 90 degrees, 180 degrees, and 270 degrees.
- the set of θn is determined so that phases between 0 degrees and 360 degrees are uniformly sampled by the captured images, but the set of θn may include any values in different implementations.
- the controller or the console determines the phase shift φ of the continuous intensity pattern of light 405 captured by a pixel of the sensor as follows:
- φ = tan⁻¹(a1/b1) − φ1    (3)
- R = √(a1² + b1²)    (4)
- φ is the phase shift of the first harmonic of the continuous intensity pattern of light 405
- R is the magnitude of the first harmonic of the continuous intensity pattern of light 405
- φ1 is a calibration offset.
- the DCA 120 determines phase shifts using the intensity of the pixel of the sensor in at least three images.
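- Because equations (1) and (2) are not reproduced above, the sketch below uses the standard N-step phase-shifting sums for the Fourier components a1 and b1 as an assumption; equations (3) and (4) are then applied directly. All names are illustrative.

```python
import numpy as np

def pixel_phase(intensities, applied_shifts_deg, phi_1=0.0):
    """Recover the fringe phase shift for one pixel from N >= 3 captured intensities.

    intensities        : S_n, the pixel's intensity in each captured image
    applied_shifts_deg : theta_n, the spatial phase shifts applied to the pattern (e.g. [0, 120, 240])
    phi_1              : calibration offset used in equation (3)

    The sums for a_1 and b_1 follow the usual N-step phase-shifting form (an
    assumption, since equations (1) and (2) are not reproduced); any common
    normalization constant cancels in the arctangent ratio.
    """
    S = np.asarray(intensities, dtype=float)
    theta = np.deg2rad(np.asarray(applied_shifts_deg, dtype=float))
    a1 = np.sum(S * np.sin(theta))      # first-harmonic sine component (assumed form)
    b1 = np.sum(S * np.cos(theta))      # first-harmonic cosine component (assumed form)
    phi = np.arctan2(a1, b1) - phi_1    # equation (3): phase shift of the first harmonic
    R = np.hypot(a1, b1)                # equation (4): magnitude of the first harmonic
    return phi, R

# Example with three images and applied shifts of 0, 120, and 240 degrees.
phi, R = pixel_phase([0.82, 0.31, 0.47], [0, 120, 240])
```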
- the phase shift of the first harmonic of the continuous intensity pattern 405 determined through equation (3) above is used by a controller 430 coupled to the imaging device 225 and to the illumination source 230 .
- the controller 430 is a processor that may be included in the imaging device 225 , in the illumination source 230 , or in the console 110 to determine the depth of the location of the target 410 from which the pixel of the sensor captures intensities of the continuous intensity pattern of light 405 as follows:
- z is the depth of the location of the target 410 from which the pixel of the sensor captures intensities of the continuous intensity pattern of light 405 ;
- D is the distance between the illumination source 230 and the imaging device 225 ;
- θi is one half of the period T of the continuous intensity pattern of light 405 ;
- θc is an angle between a line perpendicular to a plane including the imaging device 225 and the location on the target 410 from which a particular pixel located at row i and column j of the sensor included in the imaging device 225 captured intensities of the continuous intensity pattern of light 405 .
- φij is the phase shift determined for the pixel at row i and column j of the sensor, determined as further described above.
- φij,cal is a calibration offset for the pixel of the sensor at row i and column j of the sensor, which is determined as further described below.
- the DCA 120 determines phase shifts for each of at least a set of pixels of the sensor of the imaging device 225 , as described above. For each of at least the set of pixels, the DCA 120 determines a depth from the DCA 120 to a location within the local area surrounding the DCA 120 from which a pixel of the set captured intensities of the continuous intensity pattern of light 405 emitted into the local area. This allows different pixels of the sensor of the imaging device 225 to determine depths of locations within the local area from which different pixels captured intensities of the continuous intensity pattern of light 405 .
- each pixel of the sensor of the imaging device 225 determines a depth from the DCA 120 to a location within the local area surrounding the DCA 120 from which a pixel captured intensities of the continuous intensity pattern of light 405 in various images.
- the DCA 120 may generate a depth map identifying depths from the DCA 120 to different locations within the local area from which different pixels captured intensities of the continuous intensity pattern of light 405 .
- the generated depth map identifies depths from the DCA 120 to different locations within the local area based on intensities captured by each pixel of the sensor, with a depth corresponding to a pixel of the sensor that captured intensities used to determine the depth.
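- Equation (5) is not reproduced above, so the per-pixel depth map sketch below substitutes a common fringe-projection triangulation relation as a stand-in: the recovered phase is mapped to an illumination angle using the depth-independent half-period θi, then intersected with the pixel's viewing ray across the baseline D. Function and variable names are assumptions.

```python
import numpy as np

def depth_map(phase, phase_cal, view_angles, baseline_D, theta_i):
    """Per-pixel depth from recovered fringe phase (a sketch; the exact form of
    equation (5) is not reproduced, so this uses an assumed triangulation).

    phase       : phi_ij, unwrapped phase recovered for each pixel (radians)
    phase_cal   : phi_ij_cal, per-pixel calibration offset (radians)
    view_angles : theta_c, angle from the camera normal to each pixel's ray (radians)
    baseline_D  : distance D between the illumination source and imaging device
    theta_i     : half of the depth-independent angular period of the pattern
    """
    # 2*pi of phase spans one angular period (2 * theta_i) of the projected pattern.
    theta_p = (phase - phase_cal) * theta_i / np.pi
    # Rays from the source (angle theta_p) and the camera (angle theta_c),
    # separated laterally by the baseline D, intersect at depth z.
    return baseline_D / (np.tan(view_angles) + np.tan(theta_p))

# Example for a single pixel (toy numbers).
z = depth_map(phase=0.4, phase_cal=0.1, view_angles=0.05, baseline_D=0.08, theta_i=0.02)
```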
- the continuous intensity pattern of light 405 includes two or more spatial frequencies in sequence. Using two or more spatial frequencies increases a range of phases within which phase shifts may be unambiguously identified.
- the range of phases is extended for a subset of pixels within the sensor of the imaging device 225 based on a maximum parallax expected during operation of the imaging device 225 , which may be determined based on a difference between a maximum range and a minimum range of the imaging device 225 .
- the range of phases is extended for the subset of pixels of the sensor most likely to capture light from the continuous intensity pattern of light 405 .
- FIG. 5 shows an example of using two frequencies of a continuous intensity pattern of light emitted by a DCA 120 to identify a phase shift for a pixel of the sensor.
- phase shifts identified from frequency 505 repeat through the interval of 0 to 2π radians three times in a time interval
- phase shifts identified from frequency 510 repeat through the interval of 0 to 2π radians twice in the time interval, as shown in plot 520 .
- emitting light patterns having frequency 505 and frequency 510 allows the DCA 120 to identify a phase shift in the time interval over a larger interval than between 0 and 2π (i.e., "unwraps" the phase shifts that may be unambiguously identified).
- FIG. 5 shows another example where phase shifts identified from frequency 505 repeat through the interval of 0 to 2π radians five times in a time interval, while phase shifts identified from frequency 515 repeat through the interval of 0 to 2π radians twice in the time interval, as shown in plot 530 .
- This similarly allows the DCA 120 to identify a phase shift in the time interval over a larger interval than between 0 and 2π (i.e., "unwraps" the phase shifts that may be unambiguously identified).
- FIG. 5 also shows an analogous three dimensional plot 540 of frequency 505 , frequency 510 , and frequency 515 , which may further extend the range of phases over which phase shifts may be unambiguously identified.
- any number of frequencies of the continuous intensity pattern of light may be used to identify the phase shift for the pixel of the sensor using the process further described above.
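- As a concrete illustration of combining two spatial frequencies, the sketch below applies a standard two-frequency (heterodyne-style) unwrapping step; the specific algorithm is an assumption, since the description above states the principle rather than a procedure, and it assumes the two patterns differ by exactly one fringe period across the range, as in the 3-period versus 2-period example of plot 520.

```python
import numpy as np

def unwrap_two_frequency(phi_high, phi_low, n_high, n_low):
    """Two-frequency phase unwrapping (illustrative, assumed algorithm).

    phi_high, phi_low : wrapped phases in [0, 2*pi) from the two patterns
    n_high, n_low     : fringe periods of each pattern across the measurement range
    """
    assert n_high - n_low == 1, "sketch assumes the beat spans the range exactly once"
    # The beat phase is unambiguous over the whole range.
    phi_beat = np.mod(phi_high - phi_low, 2.0 * np.pi)
    # Choose the integer fringe order of the high-frequency measurement from the beat phase.
    order = np.round((n_high * phi_beat - phi_high) / (2.0 * np.pi))
    # Unwrapped high-frequency phase, now spanning [0, 2*pi*n_high).
    return phi_high + 2.0 * np.pi * order

# Example: a point 40% of the way across the range, measured with 3- and 2-period patterns.
t = 0.4
phi_h = np.mod(2 * np.pi * 3 * t, 2 * np.pi)
phi_l = np.mod(2 * np.pi * 2 * t, 2 * np.pi)
unwrapped = unwrap_two_frequency(phi_h, phi_l, 3, 2)   # approximately 2*pi*3*0.4
```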
- a pixel of the sensor of the imaging device 225 captures intensity of the continuous intensity pattern of light 405 at a position of D+x 0 relative to the illumination source 230 , where x 0 is a distance from a principal point of the imaging device 225 (e.g., an optical center of a detector) along an axis separating the illumination source 230 and the sensor (e.g., along a horizontal axis along which the illumination source 230 and the sensor are positioned).
- the position of the pixel along the axis separating the illumination source 230 and the sensor is related to the phase shift, φij, determined for the pixel.
- θi defines the spatial periodicity of the continuous intensity pattern of light 405 in the local area and corresponds to half of the period T of the continuous intensity pattern of light. As the continuous intensity pattern of light 405 expands angularly as depth z from the DCA 120 increases, the period T of the continuous intensity pattern of light 405 corresponds to a specific depth z from the DCA 120 , while the periodicity defined by θi is independent of depth z from the DCA 120 .
- This relationship between the depth-dependent period T, the distance from a principal point of the imaging device 225 , and the phase shift, φij, determined for the pixel equates estimates of lateral extent at the camera plane and at the plane including the object onto which the continuous intensity pattern of light 405 was emitted, both of which measure a distance from a center of the continuous intensity pattern of light 405 to a central ray of the pixel.
- a calibration offset, φij,cal, is determined for each pixel via a calibration process where the sensor of the imaging device 225 captures intensities from the continuous illumination pattern of light 405 emitted onto a target at an accurately predetermined depth, zcal.
- the target is a Lambertian surface or other surface that reflects at least a threshold amount of light incident on the target. Accounting for the calibration offset modifies equation (7) above into equation (5).
- the calibration offset for each pixel is determined as:
- the calibration offset is determined for each pixel of the sensor and for each frequency of the continuous intensity pattern of light 405 based on the predetermined depth z cal and is stored in the DCA 120 for use during operation.
- a calibration offset for each pixel of the sensor is determined for each period of continuous intensity pattern of light 405 emitted by the illumination source 230 and stored during the calibration process.
- the DCA 120 stores a calibration offset for a pixel of the sensor in association with a location (e.g., a row and a column) of the pixel within the sensor and in association with a frequency of the continuous intensity pattern of light 405 .
- the DCA 120 stores a parameterized function for determining the calibration offset of different pixels of the sensor based on location within the sensor and frequency of the continuous intensity pattern of light 405 instead of storing calibration offsets determined for individual pixels of the sensor.
- the DCA 120 stores a parameterized function corresponding to each period T of continuous intensity patterns of light 405 emitted by the illumination source 230 in various embodiments.
- the parameterized function determining the calibration offset of different pixels is a linear function.
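- The sketch below illustrates the two storage strategies mentioned above for the calibration offsets φij,cal: a per-pixel, per-frequency lookup table and a parameterized (here, assumed linear) function of pixel location. Equation (8) for computing the offsets is not reproduced, so the class only stores and evaluates offsets supplied to it; all names are illustrative.

```python
import numpy as np

class CalibrationOffsets:
    """Per-pixel, per-frequency calibration offsets phi_ij_cal (illustrative sketch)."""

    def __init__(self):
        self.table = {}           # (row, col, frequency) -> offset in radians
        self.linear_params = {}   # frequency -> (slope per column, intercept)

    def store(self, row, col, frequency, offset):
        """Store an offset measured during calibration against the flat target at z_cal."""
        self.table[(row, col, frequency)] = offset

    def lookup(self, row, col, frequency):
        return self.table[(row, col, frequency)]

    def fit_linear(self, frequency, cols, offsets):
        """Fit an assumed linear model offset(col) = m * col + b for one frequency."""
        m, b = np.polyfit(np.asarray(cols, dtype=float), np.asarray(offsets, dtype=float), deg=1)
        self.linear_params[frequency] = (m, b)

    def evaluate_linear(self, frequency, col):
        m, b = self.linear_params[frequency]
        return m * col + b
```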
- the period T of the continuous intensity pattern of light 405 is determined as:
- T = 2z / √((2a/λ)² − 1)    (9)
- λ is a wavelength of the illumination source 230 and a is the separation of the Gaussian beams generated by the acousto-optic modulator to generate the continuous intensity pattern of light 405 emitted into the local area surrounding the DCA 120 .
- the determined period T may then be used to determine the calibration offset for various pixels of the detector, as further described above.
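- Using the reconstructed form of equation (9), a short numeric check of the depth-dependent period T (the wavelength and beam-separation values are illustrative assumptions):

```python
import numpy as np

def fringe_period(z, wavelength, beam_separation):
    """Depth-dependent fringe period T per equation (9) as reconstructed above:
    T = 2z / sqrt((2a / lambda)**2 - 1). For a >> lambda this approaches the
    familiar two-beam interference spacing T ~ lambda * z / a."""
    return 2.0 * z / np.sqrt((2.0 * beam_separation / wavelength) ** 2 - 1.0)

# Example: 850 nm source, 1 mm beam separation, target 2 m away -> roughly 1.7 mm period.
T = fringe_period(z=2.0, wavelength=850e-9, beam_separation=1e-3)
```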
- FIG. 6 A shows an example pixel 600 of a sensor included in an imaging device 225 of a depth camera assembly (DCA) 120 .
- the pixel 600 includes a photodiode 605 coupled to multiple charge storage bins 615 , 625 , 635 . While FIG. 6 A shows three charge storage bins 615 , 625 , 635 coupled to the photodiode 605 , in other embodiments, the photodiode 605 is coupled to more than three charge storage bins.
- the photodiode 605 is coupled to charge storage bin 615 via transfer gate 610 , coupled to charge storage bin 625 via transfer gate 620 , and coupled to charge storage bin 635 via transfer gate 630 .
- a controller is coupled to the illumination source 230 of the DCA 120 , which is further described above in conjunction with FIG. 4 , and also to the sensor of the imaging device 225 .
- the controller provides control signals to transfer gate 610 , transfer gate 620 , and transfer gate 630 based on times when the illumination source 230 emits a periodic illumination pattern.
- the illumination source 230 is activated to emit a periodic illumination pattern, is deactivated, and is activated again to emit another periodic illumination pattern.
- the controller communicates control signals to transfer gate 610 , transfer gate 620 , and transfer gate 630 .
- the control signals cause a single transfer gate 610 , 620 , 630 to open, while the other transfer gates 610 , 620 , 630 remain closed, so charge accumulated by the photodiode 605 while the illumination source 230 was activated is transferred to the charge storage bin 615 , 625 , 635 coupled to the photodiode via the open single transfer gate 610 , 620 , 630 .
- the open transfer gate 610 , 620 , 630 is closed when the illumination source 230 is again activated, and control signals from the controller open another transfer gate 610 , 620 , 630 , so charge accumulated by the photodiode 605 while the illumination source 230 was active is transferred to a charge storage bin 615 , 625 , 635 via the open transfer gate 610 , 620 , 630 .
- FIG. 6 B is one example of control signals regulating operation of the pixel 600 shown in FIG. 6 A .
- FIG. 6 B identifies a signal 650 indicating times when the illumination source 230 emits a periodic illumination pattern.
- FIG. 6 B also shows control signals provided to transfer gate 610 , transfer gate 620 , and transfer gate 630 .
- when the illumination source 230 is initially activated and emits a periodic illumination pattern, transfer gate 610 , transfer gate 620 , and transfer gate 630 are closed.
- when the illumination source 230 is deactivated, transfer gate 610 receives a control signal that opens transfer gate 610 , while transfer gate 620 and transfer gate 630 remain closed.
- Charge accumulated by the photodiode 605 while the illumination source 230 was activated is transferred into charge storage bin 615 via transfer gate 610 .
- the control signal closes transfer gate 610 before the illumination source 230 is activated again, and the photodiode 605 accumulates charge from light captured while the illumination source 230 is activated.
- when the illumination source 230 is again deactivated, transfer gate 620 receives a control signal that opens transfer gate 620 , while transfer gate 610 and transfer gate 630 remain closed; hence, charge accumulated by the photodiode 605 is transferred to charge storage bin 625 .
- the control signal closes transfer gate 620 before the illumination source 230 is activated, and the photodiode 605 accumulates charge from light captured while the illumination source 230 is activated.
- when the illumination source 230 is again deactivated, transfer gate 630 receives a control signal that opens transfer gate 630 , while transfer gate 610 and transfer gate 620 remain closed. Accordingly, charge accumulated by the photodiode 605 is transferred to charge storage bin 635 .
- Transfer gate 630 closes before the illumination source 230 is again activated, and the control signals received by transfer gate 610 , transfer gate 620 , and transfer gate 630 open and close the transfer gates 610 , 620 , 630 as described above while the illumination source 230 is activated and deactivated.
- the controller determines a signal to noise ratio from the charge accumulated in charge storage bins 615 , 625 , 635 and compares the determined signal to noise ratio to a threshold. If the determined signal to noise ratio is less than the threshold, the controller provides control signals to open and close transfer gates 610 , 620 , 630 , as further described above, until the signal to noise ratio determined from the charge accumulated in charge storage bins 615 , 625 , 635 equals or exceeds the threshold.
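- The gate sequencing and signal-to-noise loop described above can be pictured with a short sketch. Everything in it is an assumption made for illustration (the helper names, the charge model, the noise floor, and the threshold); it only mirrors the ordering of events: accumulate with all transfer gates closed while the pattern is emitted, open exactly one gate after the source is deactivated, and repeat the cycle until the estimated signal-to-noise ratio reaches the threshold.

```python
import random

GATES = ("gate_610", "gate_620", "gate_630")

def accumulate_while_illuminated():
    """Stand-in for one emission of the periodic pattern: the photodiode integrates
    while all transfer gates are closed. Returns charge in arbitrary units."""
    return 400.0 + random.gauss(0.0, 40.0)

def one_cycle(bins):
    """One FIG. 6B style cycle: for each gate in turn, emit the pattern with every
    gate closed, then open only that gate and dump the charge into its bin."""
    for gate in GATES:
        charge = accumulate_while_illuminated()  # illumination source active
        bins[gate] += charge                     # source deactivated, single gate open
    return bins

def snr(bins, noise_floor=120.0):
    """Crude SNR proxy for the threshold test; a real controller would use a
    detector-specific noise model."""
    return sum(bins.values()) / noise_floor

def expose(threshold, max_cycles=1000):
    bins = {gate: 0.0 for gate in GATES}
    cycles = 0
    while snr(bins) < threshold and cycles < max_cycles:
        one_cycle(bins)
        cycles += 1
    return bins, cycles

print(expose(threshold=30.0))
```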
- the controller combines the charge stored in each of charge storage bins 615 , 625 , 635 to determine an intensity of light from the illumination source 230 captured by the pixel 600 and determines a depth of a location within the local area surrounding the DCA 120 from which the pixel 600 captured light from the illumination source 230 , as further described above in conjunction with FIGS. 4 and 5 .
- Accumulating charge in a single charge storage bin 615 , 625 , 635 coupled to the photodiode 605 by an open transfer gate 610 , 620 , 630 limits accumulation of background noise caused by the photodiode 605 capturing light from sources other than the illumination source 230 , allowing the pixel 600 to have a higher signal to noise ratio than other techniques.
- accumulating charge from the photodiode 605 in different charge storage bins 615 , 625 , 635 allows the pixel to multiplex phase shift determinations for the periodic illumination pattern captured by the pixel 600 , which reduces the number of images for the image capture device 225 to capture to determine the phase shift of the periodic illumination pattern captured by the pixel 600 and reduces an amount of time the pixel 600 captures light emitted from the illumination source.
- the pixel 600 also includes a drain 645 coupled to the photodiode 605 via a shutter 640 .
- the controller provides a control signal to the shutter 640 that causes the shutter 640 to open and couple the photodiode 605 to the drain 645 .
- Coupling the photodiode 605 to the drain 645 while the illumination source is deactivated discharges charge produced by the photodiode 605 from ambient light in the local area surrounding the DCA 120 .
- the controller provides an alternative control signal to the shutter 640 that closes the shutter 640 to decouple the photodiode 605 from the drain 645 .
- the shutter 640 may be configured to open if a charge accumulated by the photodiode 605 from captured light equals or exceeds a threshold value, allowing charge accumulated by the photodiode 605 to be removed via the drain 645 , preventing the photodiode 605 from saturating and preventing charge accumulated by the photodiode 605 from being transferred into adjacent pixels 600 .
- the shutter 640 is configured to couple the photodiode 605 to the drain 645 until the charge accumulated by the photodiode 605 is less than the threshold or is at least a threshold amount below the threshold in some embodiments.
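- One way to realize the threshold behavior described for the shutter 640 and drain 645 is a simple hysteresis rule, sketched below. The threshold, margin, and charge trace are assumed values; the disclosure does not specify this particular control law.

```python
def update_shutter(charge, shutter_open, saturation_threshold, reset_margin):
    """Open the shutter (coupling the photodiode to the drain) once the accumulated
    charge reaches the saturation threshold; keep it open until the charge has
    fallen a margin below that threshold."""
    if not shutter_open and charge >= saturation_threshold:
        return True
    if shutter_open and charge < saturation_threshold - reset_margin:
        return False
    return shutter_open

state = False
for q in [100, 800, 1600, 2100, 1800, 1200, 700, 300]:  # example charge trace
    state = update_shutter(q, state, saturation_threshold=2000, reset_margin=1200)
    print(q, "shutter open" if state else "shutter closed")
```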
- different control signals regulate operation of the pixel 600 shown in FIG. 6 A .
- the illumination source 230 is activated and remains activated, while different transfer gates 610 , 620 , 630 are activated at different times, so charge is accumulated in different bins 615 , 625 , 635 at different times.
- the illumination source 230 remains activated and emitting the periodic illumination pattern, and control signals alternately activate transfer gates 610 , 620 , 630 during different time periods, so charge accumulated by the photodiode 605 is alternately transferred into bins 615 , 625 , 635 , respectively, during the time interval.
- transfer gate 610 is activated and transfer gates 620 , 630 are closed, so charge accumulated by the photodiode 605 is transferred into bin 615 .
- transfer gate 620 is activated and transfer gates 610 , 630 are closed, so charge accumulated by the photodiode 605 is transferred into bin 625 .
- transfer gate 630 is activated and transfer gates 610 , 620 are closed, so charge accumulated by the photodiode 605 is transferred into bin 635 .
- the different transfer gates 610 , 620 , 630 may alternately be activated as described above during a time interval while the illumination source 230 is emitting the periodic illumination pattern.
- relative timing between control signals activating the illumination source 230 and control signals activating transfer gates 610 , 620 , 630 may differ.
- a control signal activates a transfer gate 610 , 620 , 630 so the transfer gate 610 , 620 , 630 is active for at least a portion of a time while the illumination source 230 is active and emitting the periodic illumination pattern.
- control signals activating a transfer gate 610 , 620 , 630 are received by a transfer gate 610 , 620 , 630 after the illumination source 230 has been deactivated for a specific amount of time, adding a delay between deactivation of the illumination source 230 and activation of a transfer gate 610 , 620 , 630 .
- the preceding are merely examples, and the pixel 600 may be operated in any suitable manner in different embodiments.
- FIG. 7 is another example of control signals regulating operation of the pixel 600 shown in FIG. 6 A .
- each transfer gate 610 , 620 , 630 is activated in a sequence separated by a fixed drain time.
- FIG. 7 indicates times when the illumination source 230 emits light.
- an illumination source 230 further described above in conjunction with FIGS. 3 and 4 , emits pulses of light during a time interval when one of transfer gate 610 , transfer gate 620 , or transfer gate 630 is open.
- the illumination source 230 emits pulses of light synchronized with opening of each transfer gate 610 , 620 , 630 .
- FIG. 7 shows a time interval when one of transfer gate 610 , transfer gate 620 , or transfer gate 630 is open.
- transfer gate 610 is open and transfer gate 620 and transfer gate 630 are closed; hence, charge accumulated when pulse of light 710 is emitted is transferred into charge storage bin 615 via transfer gate 610 .
- transfer gate 620 is open, while transfer gate 610 and transfer gate 630 are closed; therefore, charge accumulated during emission of pulse of light 720 is transferred into charge storage bin 625 via transfer gate 620 .
- when the illumination source 230 emits pulse of light 730 , transfer gate 630 is open while transfer gate 610 and transfer gate 620 are closed; thus, charge accumulated when pulse of light 730 is emitted is transferred into charge storage bin 635 via transfer gate 630 .
- opening of transfer gate 610 , transfer gate 620 , and transfer gate 630 is synchronized with emission of pulse of light 710 , pulse of light 720 , and pulse of light 730 , respectively.
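- The pulse, transfer-gate, and shutter synchronization of FIG. 7 can be pictured as a fixed timeline that the controller replays. The sketch below builds such a timeline; the durations, the guard interval before the shutter opens, and the helper name are assumptions chosen only to show the ordering of events.

```python
def build_schedule(pulse_width, drain_time, guard, cycles=1):
    """Return (start, end, label) events, in assumed microseconds: each light pulse
    is paired with one open transfer gate, and the shutter couples the photodiode
    to the drain between pulses, offset from the pulse edges by a guard interval."""
    events, t = [], 0.0
    for _ in range(cycles):
        for gate in ("transfer gate 610", "transfer gate 620", "transfer gate 630"):
            events.append((t, t + pulse_width, "pulse emitted, " + gate + " open"))
            t += pulse_width
            events.append((t + guard, t + drain_time - guard, "shutter open (drain)"))
            t += drain_time
    return events

for start, end, label in build_schedule(pulse_width=50.0, drain_time=20.0, guard=2.0):
    print(f"{start:7.1f} - {end:7.1f} us  {label}")
```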
- the shutter 640 is opened during intervals when the illumination source 230 is not emitting pulse of light 710 , pulse of light 720 , or pulse of light 730 . Opening the shutter 640 during intervals between opening of different transfer gates 610 , 620 , 630 removes ambient background light captured by the photodiode 605 during times when the illumination source 230 is not emitting light by transferring the captured ambient background light to the drain 645 . In various embodiments, the shutter 640 is opened within a threshold time interval from a time when the illumination source 230 stops emitting pulse of light 710 , pulse of light 720 , or pulse of light 730 .
- the shutter 640 is open from within a threshold time interval from a time when the illumination source 230 stops emitting pulse of light 710 until a time when the illumination source 230 starts emitting pulse of light 720 (or until a time within the threshold time interval when the illumination source 230 starts emitting pulse of light 720 ).
- similarly, the shutter 640 may be open from a time within the threshold time interval from a time when the illumination source 230 stops emitting pulse of light 720 until a time when the illumination source 230 starts emitting pulse of light 730 (or until a time within the threshold time interval when the illumination source 230 starts emitting pulse of light 730 ); the shutter 640 may further be open from a time that is within the threshold time interval from a time when the illumination source 230 stops emitting pulse of light 730 until a time when the illumination source 230 starts emitting another pulse of light (or within a threshold time interval from the time when the illumination source 230 starts emitting another pulse of light).
- the pixel 600 does not include the shutter 640 and the drain 645 .
- while FIG. 7 shows example timing of emission of pulses of light and opening of the shutter 640 , the shutter 640 may be opened for a longer or a shorter time interval than shown in FIG. 7 ; in an embodiment, the shutter 640 is opened for a fraction of an amount of time between emission of consecutive pulses of light by the illumination source 230 (e.g., for 10% of a time between emission of consecutive pulses of light by the illumination source 230 , for 50% of a time between emission of consecutive pulses of light by the illumination source 230 ).
- the shutter is opened for a specific amount of time between emission of consecutive pulses of light by the illumination source 230 .
- the shutter 640 may not be opened.
- transfer gate 610 , transfer gate 620 , or transfer gate 630 may be opened for a longer or a shorter length of time than those shown in FIG. 7 .
- the illumination source 230 emits a different pattern of light when different transfer gates 610 , 620 , 630 are open.
- the illumination source 230 when transfer gate 610 is open, the illumination source 230 emits a pulse of light (e.g., pulse of light 710 in FIG. 7 ) having a first illumination pattern, emits a pulse of light (e.g., pulse of light 720 in FIG. 7 ) having a second illumination pattern when transfer gate 620 is open, and emits a pulse of light (e.g., pulse of light 730 in FIG. 7 ) having a third illumination pattern when transfer gate 630 is open.
- the first illumination pattern, the second illumination pattern, and the third illumination pattern are different from each other. Accordingly, the illumination source 230 emits pulses of light having different illumination patterns during time intervals when different transfer gates 610 , 620 , 630 are open.
- the illumination source 230 emits a variable number of pulses of light that are synchronized with opening of one of transfer gate 610 , transfer gate 620 , and transfer gate 630 ; the number of emitted pulses of light may be fixed or may dynamically vary (e.g., based on an auto-exposure mechanism). A different number of pulses of light may be synchronized with opening of different transfer gates 610 , 620 , 630 in some embodiments.
- the illumination source 230 emits pulses of light synchronized with 1000 openings of transfer gate 610 , emits pulses of light synchronized with 2000 openings of transfer gate 620 , and emits pulses of light synchronized with 3000 openings of transfer gate 630 ; however, the illumination source 230 may emit any arbitrary number of pulses of light differing for opening of different transfer gates 610 , 620 , 630 and synchronized with opening of different transfer gates 610 , 620 , 630 .
- the illumination source 230 continuously emits a pattern of light instead of discrete pulses of light.
- the continuous pattern of light emitted by the illumination source slowly changes over time in various embodiments (e.g. as a fringe pattern that is moving continuously in time, as further described above in conjunction with FIGS. 3 and 4 ).
- the transfer gates 610 , 620 , 630 and shutter 640 are opened as described in conjunction with FIG. 7 , so the continuous pattern of light is integrated over a fixed discrete time.
- the emitted illumination pattern is configured to return to a previously emitted pattern over a specified time interval; hence, the illumination pattern changes over time, but repeats with a specific frequency or period.
- Opening of each transfer gate 610 , 620 , 630 is also synchronized to repeat using the specified time interval, causing different transfer gates 610 , 620 , 630 to be opened when the same portion of the illumination pattern is emitted during different periods.
- transfer gate 610 is synchronized to be opened when a specific portion of the continuous illumination pattern is emitted during each period of the continuous illumination pattern, so transfer gate 610 integrates the specific portion of the continuous illumination pattern during each period of the continuous illumination pattern.
- transfer gate 620 is synchronized with the illumination source 230 so transfer gate 620 is opened when a different specific portion of the continuous illumination pattern is emitted during each period of the continuous illumination pattern; hence, transfer gate 620 integrates the different specific portion of the continuous illumination pattern during each period of the continuous illumination pattern.
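- The synchronization described above, where each transfer gate repeatedly integrates the same portion of a continuously varying but periodic pattern, can be sketched as a mapping from time to gate index. The equal three-way split of each period and the example numbers are assumptions; any fixed partition of the period would illustrate the same idea.

```python
def gate_for_time(t, pattern_period, n_gates=3):
    """Return which transfer gate (0 -> 610, 1 -> 620, 2 -> 630) should be open at
    time t, assuming each gate is assigned a fixed, equal-length portion of every
    period of the repeating illumination pattern."""
    phase = (t % pattern_period) / pattern_period  # position within the period, 0..1
    return int(phase * n_gates)

pattern_period = 3.0  # arbitrary time units
for t in [0.1, 0.9, 1.4, 2.2, 3.1, 4.6, 5.9]:
    print(f"t={t:.1f} -> gate index {gate_for_time(t, pattern_period)}")
```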
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Networks & Wireless Communication (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Electromagnetism (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Optics & Photonics (AREA)
- Length Measuring Devices By Optical Means (AREA)
Abstract
Description
- This application is a continuation of U.S. patent application Ser. No. 16/298,278, which claims the benefit of U.S. Provisional Application No. 62/642,199, filed Mar. 13, 2018, each of which is incorporated by reference in its entirety.
- The present disclosure generally relates to virtual or augmented reality systems and more specifically relates to headsets for virtual reality systems that obtain depth information of a local area.
- Providing virtual reality (VR) or augmented reality (AR) content to users through a head mounted display (HMD) often relies on localizing a user's position in an arbitrary environment and determining a three dimensional mapping of the surroundings within the arbitrary environment. The user's surroundings within the arbitrary environment may then be represented in a virtual environment or the user's surroundings may be overlaid with additional content.
- Conventional HMDs include one or more quantitative depth cameras to determine surroundings of a user within the user's environment. Typically, conventional depth cameras use structured light or time of flight to determine the HMD's location within an environment. Structured light depth cameras use an active illumination source to project known patterns into the environment surrounding the HMD. However, structured light commonly requires the projected pattern to be configured so different portions of the pattern include different characteristics that are later identified. Having different characteristics of different portions of the pattern causes significant portions of a resulting image of the projected pattern to not be illuminated. This inefficiently uses a sensor capturing the resulting image; for example, projection of the pattern by a structured light depth camera results in less than 10% of sensor pixels collecting light from the projected pattern, while requiring multiple sensor pixels to be illuminated to perform a single depth measurement.
- Time of flight depth cameras measure a round trip travel time of light projected into the environment surrounding a depth camera and returning to pixels on a sensor array. While time of flight depth cameras are capable of measuring depths of different objects in the environment independently via each sensor pixel, light incident on a sensor pixel may be a combination of light received from multiple optical paths in the environment surrounding the depth camera. Existing techniques to resolve the optical paths of light incident on a sensor pixel are computationally complex and do not fully disambiguate between optical paths in the environment.
- A headset in a virtual reality (VR) or augmented reality (AR) system environment includes a depth camera assembly (DCA) configured to determine distances between a head mounted display (HMD) and one or more objects in an area surrounding the HMD and within a field of view of an imaging device included in the headset (i.e., a “local area”). The DCA includes the imaging device, such as a camera, and an illumination source that is displaced by a specific distance relative to the imaging device. The illumination source is configured to emit a series of periodic illumination patterns (e.g., a sinusoid) into the local area. Each periodic illumination pattern of the series is phase shifted by a different amount. The periodicity of the illumination pattern is a spatial periodicity observed on an object illuminated by the illumination pattern, and the phase shifts are lateral spatial phase shifts along the direction of periodicity. In various embodiments, the periodicity of the illumination pattern is in a direction that is parallel to a displacement between the illumination source and a center of the imaging device of the DCA.
- The imaging device captures frames including the periodic illumination patterns via a sensor including multiple pixels and coupled to a processor. For each pixel of the sensor, the processor relates intensities captured by a pixel in multiple images to a phase shift of a periodic illumination pattern captured by the multiple images. From the phase shift of the periodic illumination pattern captured by the pixel, the processor determines a depth of a location within the local area from which the pixel captured the intensities of the periodic illumination pattern from the HMD. Each pixel of the sensor may independently determine a depth based on captured intensities of the periodic illumination pattern, optimally using the pixels of the sensor of the DCA.
- In various embodiments, each pixel of the sensor comprises a photodiode coupled to multiple charge storage bins by transfer gates. For example, a pixel of the sensor includes a photodiode coupled to three charge storage bins, with a different transfer gate coupling the photodiode to different charge storage bins. At different times, the pixel receives a control signal opening a specific transfer gate, while other transfer gates remain closed. Charge accumulated by the photodiode of the pixel is accumulated in the charge storage bin via the opened specific transfer gate. Subsequently, the specific transfer gate is closed and charge is accumulated by the photodiode. A subsequent control signal received by the pixel opens another transfer gate at a different time, so charge accumulated by the photodiode is accumulated in another charge storage bin through the other transfer gate. In various embodiments, different transfer gates are opened at different times when the illumination source emits the periodic illumination pattern. For example, a first transfer gate is opened, while other transfer gates remain closed, during a time interval when the illumination source emits the periodic illumination pattern. The first transfer gate is closed when the illumination source stops emitting the periodic illumination pattern. Subsequently, a different transfer gate is opened when the illumination source emits the periodic illumination pattern during another time interval, while the first transfer gate and other transfer gates are closed. Hence, different charge storage bins store charge accumulated by the sensor at different times. Charge accumulated in different charge storage bins is retrieved and used to determine depth of a location in the local area from which the pixel captured intensity of light.
- In some embodiments a method is described. It is determined that an illumination source is emitting a first periodic illumination pattern during a first time interval. During the first time interval, a first control signal is communicated to a sensor, the first control signal opening a first transfer gate coupling a photodiode of a pixel to a first charge storage bin and other control signals closing other transfer gates coupling the photodiode of the pixel to other charge storage bins apart from the first charge storage bin. It is determined that the illumination source is emitting a second periodic illumination pattern having a different spatial phase shift during a second time interval. During the second time interval, a second control signal is communicated to the sensor, the second control signal opening a second transfer gate coupling the photodiode of the pixel to a second charge storage bin and other control signals closing other transfer gates coupling the photodiode of the pixel to other charge storage bins apart from the second charge storage bin.
-
FIG. 1 is a block diagram of a system environment for providing virtual reality or augmented reality content, in accordance with an embodiment. -
FIG. 2 is a diagram of a head mounted display (HMD), in accordance with an embodiment. -
FIG. 3 is a cross section of a front rigid body of a head mounted display (HMD), in accordance with an embodiment. -
FIG. 4 is an example of light emitted into a local area and captured by a depth camera assembly, in accordance with an embodiment. -
FIG. 5 is an example of using multiple frequencies of a continuous intensity pattern of light emitted by a DCA to identify a phase shift for a pixel of the sensor, in accordance with an embodiment. -
FIG. 6A is an example pixel of a sensor included in an imaging device of a depth camera assembly, in accordance with an embodiment. -
FIG. 6B is an example of control signals operating the example pixel shown inFIG. 6A , in accordance with an embodiment. -
FIG. 7 is another example of control signals operating the example pixel shown in FIG. 6A , in accordance with an embodiment.
- The figures depict embodiments of the present disclosure for purposes of illustration only. One skilled in the art will readily recognize from the following description that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles, or benefits touted, of the disclosure described herein.
-
FIG. 1 is a block diagram of one embodiment of asystem environment 100 in which aconsole 110 operates. Thesystem environment 100 shown inFIG. 1 may provide augmented reality (AR) or virtual reality (VR) content to users in various embodiments. Additionally or alternatively, thesystem environment 100 generates one or more virtual environments and presents a virtual environment with which a user may interact to the user. Thesystem environment 100 shown byFIG. 1 comprises a head mounted display (HMD) 105 and an input/output (I/O)interface 115 that is coupled to aconsole 110. WhileFIG. 1 shows anexample system environment 100 including one HMD 105 and one I/O interface 115, in other embodiments any number of these components may be included in thesystem environment 100. For example, there may bemultiple HMDs 105 each having an associated I/O interface 115, with eachHMD 105 and I/O interface 115 communicating with theconsole 110. In alternative configurations, different and/or additional components may be included in thesystem environment 100. Additionally, functionality described in conjunction with one or more of the components shown inFIG. 1 may be distributed among the components in a different manner than described in conjunction withFIG. 1 in some embodiments. For example, some or all of the functionality of theconsole 110 is provided by theHMD 105. - The head mounted display (HMD) 105 presents content to a user comprising augmented views of a physical, real-world environment with computer-generated elements (e.g., two dimensional (2D) or three dimensional (3D) images, 2D or 3D video, sound, etc.) or presents content comprising a virtual environment. In some embodiments, the presented content includes audio that is presented via an external device (e.g., speakers and/or headphones) that receives audio information from the
HMD 105, theconsole 110, or both, and presents audio data based on the audio information. An embodiment of theHMD 105 is further described below in conjunction withFIGS. 2 and 3 . TheHMD 105 may comprise one or more rigid bodies, which may be rigidly or non-rigidly coupled to each other together. A rigid coupling between rigid bodies causes the coupled rigid bodies to act as a single rigid entity. In contrast, a non-rigid coupling between rigid bodies allows the rigid bodies to move relative to each other. - The
HMD 105 includes a depth camera assembly (DCA) 120, anelectronic display 125, anoptics block 130, one ormore position sensors 135, and an inertial measurement unit (IMU) 140. Some embodiments of TheHMD 105 have different components than those described in conjunction withFIG. 1 . Additionally, the functionality provided by various components described in conjunction withFIG. 1 may be differently distributed among the components of theHMD 105 in other embodiments. - The
DCA 120 captures data describing depth information of an area surrounding theHMD 105. Some embodiments of theDCA 120 include one or more imaging devices (e.g., a camera, a video camera) and an illumination source configured to emit a series of periodic illumination patterns, with each periodic illumination pattern phase shifted by a different amount. As another example, the illumination source emits a series of sinusoids that each have a specific spatial phase shift. The periodicity of the illumination pattern is a spatial periodicity observed on an object illuminated by the illumination pattern, and the phase shifts are lateral spatial phase shifts along the direction of periodicity. In various embodiments, the periodicity of the illumination pattern is in a direction that is parallel to a displacement between the illumination source and a center of the imaging device of theDCA 120 - For example, the illumination source emits a series of sinusoids that each have a different spatial phase shift into an environment surrounding the
HMD 105. In other examples, the illumination source emits a sinusoidal pattern multiplied by a low frequency envelope, such as a Gaussian, which changes relative signal intensity over the field of view of the imaging device. This change in relative signal intensity over the imaging device's field of view changes temporal noise characteristics without affecting the depth determination, which is further described below in conjunction withFIGS. 4 and 5 provided the higher frequency signal is a sinusoid. The imaging device of theDCA 120 includes a sensor comprising multiple pixels that determine a phase shift of a periodic illumination pattern included in multiple images captured by the imaging device based on relative intensities included in the multiple captured images. As the phase shift is a function of depth, theDCA 120 determines a depth of a location within the local area from which images of the periodic illumination from the determined phase shift, as further described below in conjunction withFIGS. 4 and 5 . In various embodiments, each pixel of the sensor of the imaging device determines a depth of a location within the local area from which a pixel captured intensities of the periodic illumination pattern based on a phase shift determined for the periodic illumination pattern captured by the pixel. - The imaging device captures and records particular ranges of wavelengths of light (i.e., “bands” of light). Example bands of light captured by an imaging device include: a visible band (˜380 nm to 750 nm), an infrared (IR) band (˜750 nm to 2,200 nm), an ultraviolet band (100 nm to 380 nm), another portion of the electromagnetic spectrum, or some combination thereof. In some embodiments, an imaging device captures images including light in the visible band and in the infrared band.
- The
electronic display 125 displays 2D or 3D images to the user in accordance with data received from theconsole 110. In various embodiments, theelectronic display 125 comprises a single electronic display or multiple electronic displays (e.g., a display for each eye of a user). Examples of theelectronic display 125 include: a liquid crystal display (LCD), an organic light emitting diode (OLED) display, an active-matrix organic light-emitting diode display (AMOLED), some other display, or some combination thereof. - The optics block 130 magnifies image light received from the
electronic display 125, corrects optical errors associated with the image light, and presents the corrected image light to a user of theHMD 105. In various embodiments, the optics block 130 includes one or more optical elements. Example optical elements included in the optics block 130 include: an aperture, a Fresnel lens, a convex lens, a concave lens, a filter, a reflecting surface, or any other suitable optical element that affects image light. Moreover, the optics block 130 may include combinations of different optical elements. In some embodiments, one or more of the optical elements in the optics block 130 may have one or more coatings, such as anti-reflective coatings. - Magnification and focusing of the image light by the optics block 130 allows the
electronic display 125 to be physically smaller, weigh less and consume less power than larger displays. Additionally, magnification may increase the field of view of the content presented by theelectronic display 125. For example, the field of view of the displayed content is such that the displayed content is presented using almost all (e.g., approximately 110 degrees diagonal), and in some cases all, of the user's field of view. Additionally, in some embodiments, the amount of magnification may be adjusted by adding or removing optical elements. - In some embodiments, the optics block 130 may be designed to correct one or more types of optical error. Examples of optical error include barrel distortions, pincushion distortions, longitudinal chromatic aberrations, or transverse chromatic aberrations. Other types of optical errors may further include spherical aberrations, comatic aberrations or errors due to the lens field curvature, astigmatisms, or any other type of optical error. In some embodiments, content provided to the
electronic display 125 for display is pre-distorted, and the optics block 130 corrects the distortion when it receives image light from theelectronic display 125 generated based on the content. - The
IMU 140 is an electronic device that generates data indicating a position of theHMD 105 based on measurement signals received from one or more of theposition sensors 135 and from depth information received from theDCA 120. Aposition sensor 135 generates one or more measurement signals in response to motion of theHMD 105. Examples ofposition sensors 135 include: one or more accelerometers, one or more gyroscopes, one or more magnetometers, another suitable type of sensor that detects motion, a type of sensor used for error correction of theIMU 140, or some combination thereof. Theposition sensors 135 may be located external to theIMU 140, internal to theIMU 140, or some combination thereof. - Based on the one or more measurement signals from one or
more position sensors 135, theIMU 140 generates data indicating an estimated current position of theHMD 105 relative to an initial position of theHMD 105. For example, theposition sensors 135 include multiple accelerometers to measure translational motion (forward/back, up/down, left/right) and multiple gyroscopes to measure rotational motion (e.g., pitch, yaw, roll). In some embodiments, theIMU 140 rapidly samples the measurement signals and calculates the estimated current position of theHMD 105 from the sampled data. For example, theIMU 140 integrates the measurement signals received from the accelerometers over time to estimate a velocity vector and integrates the velocity vector over time to determine an estimated current position of a reference point on theHMD 105. Alternatively, theIMU 140 provides the sampled measurement signals to theconsole 110, which interprets the data to reduce error. The reference point is a point that may be used to describe the position of theHMD 105. The reference point may generally be defined as a point in space or a position related to the HMD's 105 orientation and position. - The
IMU 140 receives one or more parameters from theconsole 110. As further discussed below, the one or more parameters are used to maintain tracking of theHMD 105. Based on a received parameter, theIMU 140 may adjust one or more IMU parameters (e.g., sample rate). In some embodiments, certain parameters cause theIMU 140 to update an initial position of the reference point so it corresponds to a next position of the reference point. Updating the initial position of the reference point as the next calibrated position of the reference point helps reduce accumulated error associated with the current position estimated theIMU 140. The accumulated error, also referred to as drift error, causes the estimated position of the reference point to “drift” away from the actual position of the reference point over time. In some embodiments of theHMD 105, theIMU 140 may be a dedicated hardware component. In other embodiments, theIMU 140 may be a software component implemented in one or more processors. - The I/
O interface 115 is a device that allows a user to send action requests and receive responses from theconsole 110. An action request is a request to perform a particular action. For example, an action request may be an instruction to start or end capture of image or video data or an instruction to perform a particular action within an application. The I/O interface 115 may include one or more input devices. Example input devices include: a keyboard, a mouse, a game controller, or any other suitable device for receiving action requests and communicating the action requests to theconsole 110. An action request received by the I/O interface 115 is communicated to theconsole 110, which performs an action corresponding to the action request. In some embodiments, the I/O interface 115 includes anIMU 140, as further described above, that captures calibration data indicating an estimated position of the I/O interface 115 relative to an initial position of the I/O interface 115. In some embodiments, the I/O interface 115 may provide haptic feedback to the user in accordance with instructions received from theconsole 110. For example, haptic feedback is provided when an action request is received, or theconsole 110 communicates instructions to the I/O interface 115 causing the I/O interface 115 to generate haptic feedback when theconsole 110 performs an action. - The
console 110 provides content to theHMD 105 for processing in accordance with information received from one or more of: theDCA 120, theHMD 105, and the VR I/O interface 115. In the example shown inFIG. 1 , theconsole 110 includes anapplication store 150, atracking module 155 and acontent engine 145. Some embodiments of theconsole 110 have different modules or components than those described in conjunction withFIG. 1 . Similarly, the functions further described below may be distributed among components of theconsole 110 in a different manner than described in conjunction withFIG. 1 . - The
application store 150 stores one or more applications for execution by theconsole 110. An application is a group of instructions, that when executed by a processor, generates content for presentation to the user. Content generated by an application may be in response to inputs received from the user via movement of theHMD 105 or the I/O interface 115. Examples of applications include: gaming applications, conferencing applications, video playback applications, or other suitable applications. - The
tracking module 155 calibrates thesystem environment 100 using one or more calibration parameters and may adjust one or more calibration parameters to reduce error in determination of the position of theHMD 105 or of the I/O interface 115. For example, thetracking module 155 communicates a calibration parameter to theDCA 120 to adjust the focus of theDCA 120 to more accurately determine depths of locations within the local area surrounding theHMD 105 from captured intensities. Calibration performed by thetracking module 155 also accounts for information received from theIMU 140 in theHMD 105 and/or anIMU 140 included in the I/O interface 115. Additionally, if tracking of theHMD 105 is lost (e.g., theDCA 120 loses line of sight of at least a threshold number of SL elements), thetracking module 140 may re-calibrate some or all of thesystem environment 100. - The
tracking module 155 tracks movements of theHMD 105 or of the I/O interface 115 using information from theDCA 120, the one ormore position sensors 135, theIMU 140 or some combination thereof. For example, thetracking module 155 determines a position of a reference point of theHMD 105 in a mapping of a local area based on information from theHMD 105. Thetracking module 155 may also determine positions of the reference point of theHMD 105 or a reference point of the I/O interface 115 using data indicating a position of theHMD 105 from theIMU 140 or using data indicating a position of the I/O interface 115 from anIMU 140 included in the I/O interface 115, respectively. Additionally, in some embodiments, thetracking module 155 may use portions of data indicating a position of theHMD 105 from theIMU 140 as well as representations of the local area from theDCA 120 to predict a future location of theHMD 105. Thetracking module 155 provides the estimated or predicted future position of theHMD 105 or the I/O interface 115 to thecontent engine 145. - The
content engine 145 generates a 3D mapping of the area surrounding the HMD 105 (i.e., the “local area”) based on information received from theDCA 120 included in theHMD 105. In some embodiments, thecontent engine 145 determines depth information for the 3D mapping of the local area based on depths determined by each pixel of the sensor in the imaging device from a phase shift determined from relative intensities captured by a pixel of the sensor in multiple images. In various embodiments, thecontent engine 145 uses different types of information determined by theDCA 120 or a combination of types of information determined by theDCA 120 to generate the 3D mapping of the local area. - The
content engine 145 also executes applications within thesystem environment 100 and receives position information, acceleration information, velocity information, predicted future positions, or some combination thereof, of theHMD 105 from thetracking module 155. Based on the received information, thecontent engine 145 determines content to provide to theHMD 105 for presentation to the user. For example, if the received information indicates that the user has looked to the left, thecontent engine 145 generates content for theHMD 105 that mirrors the user's movement in a virtual environment or in an environment augmenting the local area with additional content. Additionally, thecontent engine 145 performs an action within an application executing on theconsole 110 in response to an action request received from the I/O interface 115 and provides feedback to the user that the action was performed. The provided feedback may be visual or audible feedback via theHMD 105 or haptic feedback via the I/O interface 115. -
FIG. 2 is a wire diagram of one embodiment of a head mounted display (HMD) 200. TheHMD 200 is an embodiment of theHMD 105, and includes a frontrigid body 205, aband 210, areference point 215, aleft side 220A, atop side 220B, aright side 220C, abottom side 220D, and afront side 220E. TheHMD 200 shown inFIG. 2 also includes an embodiment of a depth camera assembly (DCA) 120 including animaging device 225 and anillumination source 230, which are further described below in conjunction withFIGS. 3 and 4 . The frontrigid body 205 includes one or more electronic display elements of the electronic display 125 (not shown), theIMU 130, the one ormore position sensors 135, and thereference point 215. - In the embodiment shown by
FIG. 2 , theHMD 200 includes aDCA 120 comprising anillumination source 225, such as a camera, and anillumination source 230 configured to emit a series of periodic illumination patterns, with each periodic illumination pattern phase shifted by a different amount into a local area surrounding theHMD 200. In various embodiments, theillumination source 230 emits a sinusoidal pattern, a near sinusoidal pattern, or any other periodic pattern (e.g., a square wave). For example, theillumination source 230 emits a series of sinusoids that each have a different phase shift into an environment surrounding theHMD 200. In various embodiments, theillumination source 230 includes an acousto-optic modulator configured to generate two Gaussian beams of light that interfere with each other in the local area so a sinusoidal interference pattern is generated. However, in other embodiments theillumination source 230 includes one or more of an acousto-optic device, an electro-optic device, physical optics, optical interference, a diffractive optical device, or any other suitable components configured to generate the periodic illumination pattern. In some embodiments, theillumination source 230 includes additional optical elements that modify the generated sinusoidal interference pattern to be within an intensity envelope (e.g., within a Gaussian intensity pattern); alternatively, theHMD 200 includes the additional optical elements and the Gaussian beams of light generated by theillumination source 230 are directed through the additional optical elements before being emitted into the environment surrounding theHMD 200. Theimaging device 225 captures images of the local area, which are used to calculate depths relative to theHMD 200 of various locations within the local area, as further described below in conjunction withFIGS. 3-5 . -
FIG. 3 is a cross section of the frontrigid body 205 of theHMD 200 depicted inFIG. 2 . As shown inFIG. 3 , the frontrigid body 205 includes animaging device 225 and anillumination source 230. The frontrigid body 205 also has an optical axis corresponding to a path along which light propagates through the frontrigid body 205. In some embodiments, theimaging device 225 is positioned along the optical axis and captures images of alocal area 305, which is a portion of an environment surrounding the frontrigid body 205 within a field of view of theimaging device 225. Additionally, the frontrigid body 205 includes theelectronic display 125 and the optics block 130, which are further described above in conjunction withFIG. 1 . The frontrigid body 205 also includes anexit pupil 335 where the user's eye 340 is located. For purposes of illustration,FIG. 3 shows a cross section of the frontrigid body 205 in accordance with a single eye 340. Thelocal area 305 reflects incident ambient light as well as light projected by theillumination source 230, which is subsequently captured by theimaging device 225. - As described above in conjunction with
FIG. 1 , theelectronic display 125 emits light forming an image toward the optics block 130, which alters the light received from theelectronic display 125. The optics block 130 directs the altered image light to theexit pupil 335, which is a location of the frontrigid body 205 where a user's eye 340 is positioned.FIG. 3 shows a cross section of the frontrigid body 205 for a single eye 340 of the user, with anotherelectronic display 125 and optics block 130, separate from those shown inFIG. 3 , included in the frontrigid body 205 to present content, such as an augmented representation of thelocal area 305 or virtual content, to another eye of the user. - As further described above in conjunction with
FIG. 2 , theillumination source 230 of the depth camera assembly (DCA) emits a series of periodic illumination patterns, with each periodic illumination pattern phase shifted by a different amount into thelocal area 305, and theimaging device 225 captures images of the periodic illumination patterns projected onto thelocal area 305 using a sensor comprising multiple pixels. Each pixel captures intensity of light emitted by theillumination source 230 from thelocal area 305 in various images and communicates the captured intensity to a controller or to theconsole 110, which determines a phase shift for each image, as further described below in conjunction withFIGS. 4-6B , and determines a depth of a location within the local area onto which the light emitted by theillumination source 230 captured by theimaging device 225 was captured, also further described below in conjunction withFIGS. 4-6B . -
FIG. 4 is an example of light emitted into a local area and captured by a depth camera assembly included in a head mounted display (HMD) 105 . FIG. 4 shows an imaging device 225 and an illumination source 230 of a depth camera assembly (DCA) 120 included in the HMD. As shown in FIG. 4 , the imaging device 225 and the illumination source 230 are separated by a specific distance D (also referred to as a “baseline”), which is specified when the DCA 120 is assembled. The distance D between the imaging device 225 and the illumination source 230 is stored in a storage device coupled to the imaging device 225 , coupled to a controller included in the DCA 120 , or coupled to the console 110 in various embodiments.
FIG. 4 , theillumination source 230 emits a smooth continuous intensity pattern of light 405 onto aflat target 410 within a local area surrounding theHMD 105 and within a field of view of theimaging device 225. The continuous intensity pattern oflight 405 has a period T known to theDCA 120. However, in other embodiments, theillumination source 230 emits any suitable intensity pattern having a period T known to theDCA 120. Additionally,FIG. 4 identifies an angle θi that is one half of the period T of the continuous intensity pattern oflight 405. As the continuous intensity pattern of light 405 scales laterally with the depth from theDCA 120, θi defines a depth independent periodicity of the illumination. Similarly,FIG. 4 depicts an angle θc and a line perpendicular to a plane including theimaging device 225 and a location on thetarget 410 from which a particular pixel of a sensor included in theimaging device 225 captures intensities of the continuous intensity pattern of light 405 in different images; hence, θc specifies an angle between the line perpendicular to the plane including theimaging device 225 and the location on thetarget 410 from which the specific pixel captures intensities of the continuous intensity pattern of light 405 emitted by theillumination source 230. - Each pixel of the sensor of the
imaging device 225 provides an intensity of light from the continuous intensity pattern of light 405 captured in multiple images to a controller or to theconsole 110, which determines a phase shift, ϕ, of the continuous intensity pattern of light 405 captured by each pixel of the sensor. Each image captured by theimaging device 225 is a digital sampling of the continuous intensity pattern oflight 405, so the set of images captured by the sensor represent a Fourier transform of the continuous intensity pattern oflight 405, and the Fourier components, a1 and b1, of the fundamental harmonic of thecontinuous intensity pattern 405 are directly related to the phase shift for a pixel of the sensor. For images captured by a pixel of the sensor, the Fourier components a1 and b1 are determined using the following equations: -
- In the preceding, Sn denotes an intensity of the pixel of the sensor in a particular image, n, captured by the sensor, and the set θn of represents the phase shifts introduced into the continuous intensity pattern of
light 405. For example, if three phase shifts are used, the set of θn includes 0 degrees, 120 degrees, and 240 degrees. As another example, if four phase shifts are used the set of θn includes 0 degrees, 90 degrees, 180 degrees, and 270 degrees. In some embodiments, the set of θn is determines so 0 degrees and 360 degrees are uniformly sampled by the captured images, but the set of θn may include any values in different implementations. - From the Fourier components a1 and b1 determined as described above, the controller or the console determines the phase shift ϕ of the continuous intensity pattern of light 405 captured by a pixel of the sensor as follows:
-
- In the preceding, ϕ is the phase shift of the first harmonic of the continuous intensity pattern of
light 405, R is the magnitude of the first harmonic of the continuous intensity pattern oflight 405, and θ1 is a calibration offset. For each spatial frequency of the continuous intensity pattern oflight 405, theDCA 120 determines phase shifts using the intensity of the pixel of the sensor in at least three images. - The phase shift of the first harmonic of the
continuous intensity pattern 405 determined through equation (3) above is used by acontroller 430 coupled to theimaging device 225 and to theillumination source 230. In various embodiments thecontroller 430 is a processor that may be included in in theimaging device 225, in theillumination source 230, or in theconsole 110 to determine the depth of the location of thetarget 410 from which the pixel of the sensor captures intensities of the continuous intensity pattern of light 405 as follows: -
- $z = \dfrac{D}{\dfrac{\tan(\theta_i)\,(\phi_{ij} - \phi_{ij,cal})}{\pi} - \tan(\theta_c)}$ (5)
target 410 from which the pixel of the sensor captures intensities of the continuous intensity pattern oflight 405; D is the distance between theillumination source 230 and theimaging device 225; θi is one half of the period T of the continuous intensity pattern oflight 405; and θc is an angle between and a line perpendicular to a plane including theimaging device 225 and a the location on thetarget 410 from which a particular pixel located at row i and column j of the sensor included in theimaging device 225 captured intensities of the continuous intensity pattern oflight 405. Additionally, ϕij is the phase shift determined for the pixel at row i and column j of the sensor, determined as further described above. Further, ϕij,cal is a calibration offset for the pixel of the sensor at row i and column j of the sensor, which is determined as further described below. - The
DCA 120 determines phase shifts for each of at least a set of pixels of the sensor of theimaging device 225, as described above. For each of at least the set of pixels, theDCA 120 determines a depth from theDCA 120 to a location within the local area surrounding theDCA 120 from which a pixel of the set captured intensities of the continuous intensity pattern of light 405 emitted into the local area. This allows different pixels of the sensor of theimaging device 225 to determine depths of locations within the local area from which different pixels captured intensities of the continuous intensity pattern oflight 405. In some embodiments, each pixel of the sensor of theimaging device 225 determines a depth from theDCA 120 to a location within the local area surrounding theDCA 120 from which a pixel captured intensities of the continuous intensity pattern of light 405 in various images. TheDCA 120 may generate a depth map identifying depths from theDCA 120 to different locations within the local area from which different pixels captured intensities of the continuous intensity pattern oflight 405. For example, the generated depth map identifies depths from theDCA 120 to different locations within the local area based on intensities captured by each pixel of the sensor, with a depth corresponding to a pixel of the sensor that captured intensities used to determine the depth. - However, because the phase shift is within a range of 0 and 2π radians, there may be ambiguities in resolving phase shifts that are integer multiples of 2π when determining the phase shift as described above. To avoid these potential ambiguities, in some embodiments, the continuous intensity pattern of light 405 emitted by the
illumination source 230 as a single, relatively lower, spatial frequency; however, use of a relatively lower spatial frequency may decrease precision of the depth determination by theDCA 120. Alternatively, the continuous intensity pattern oflight 405 includes two or more spatial frequencies in sequence. Using two or more spatial frequencies increases a range of phases within which phase shifts may be unambiguously identified. The range of phases is extended for a subset of pixels within the sensor of theimaging device 225 based on a maximum parallax expected during operation of theimaging device 225, which may be determined based on a difference between a maximum range and a minimum range of theimaging device 225. Hence, the range of phases is extended for the subset of pixels of the sensor most likely to capture light from the continuous intensity pattern oflight 405. -
FIG. 5 shows an example of using two frequencies of a continuous intensity pattern of light emitted by aDCA 120 to identify a phase shift for a pixel of the sensor. In the example ofFIG. 5 , phase shifts identified fromfrequency 505 repeat through the interval of 0 and 2π radians three times in a time interval, while phase shifts identified fromfrequency 510 repeat through the interval of 0 and 2π radians twice in the time interval, as shown inplot 520. Hence, emitting lightpatterns having frequency 505 andfrequency 510 allows theDCA 120 to identify a phase shift in the time interval over a larger interval than between 0 and 2π (i.e., “unwraps” the phase shifts that may be unambiguously identified).FIG. 5 shows another example where, phase shifts identified fromfrequency 505 repeat through the interval of 0 and 2π radians five times in a time interval, while phase shifts identified fromfrequency 515 repeat through the interval of 0 and 2π radians twice in the time interval, as shown inplot 530. This similarly allows theDCA 120 to identify a phase shift in the time interval over a larger interval than between 0 and 2π (i.e., “unwraps” the phase shifts that may be unambiguously identified). Additionally,FIG. 5 also shows an analogous threedimensional plot 540 offrequency 505,frequency 510, andfrequency 515, which may further extend the range of phases over which phase shifts may be unambiguously identified. In other embodiments, any number of frequencies of the continuous intensity pattern of light may be used to identify the phase shift for the pixel of the sensor using the process further described above. - Referring again to
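- The two-frequency unwrapping idea of FIG. 5 can be illustrated with a brute-force consistency search over fringe orders, sketched below. This is a generic illustration assuming idealized wrapped phases at two spatial frequencies; the disclosure does not specify this particular algorithm, and the frequencies and position are made-up numbers.

```python
import math

def unwrap_two_frequencies(phi_a, phi_b, freq_a, freq_b, max_orders=4):
    """Search for the integer fringe orders that make the positions implied by the
    two wrapped phases agree best; the result is unique only within the combined
    (beat) period of the two frequencies."""
    best = None
    for m in range(max_orders):
        pos_a = (phi_a + 2.0 * math.pi * m) / freq_a
        n = round((pos_a * freq_b - phi_b) / (2.0 * math.pi))
        pos_b = (phi_b + 2.0 * math.pi * n) / freq_b
        err = abs(pos_a - pos_b)
        if best is None or err < best[0]:
            best = (err, 0.5 * (pos_a + pos_b))
    return best[1]

true_pos, fa, fb = 5.6, 3.0, 2.0                 # position in arbitrary units
phi_a = math.fmod(true_pos * fa, 2.0 * math.pi)  # wrapped phase at frequency fa
phi_b = math.fmod(true_pos * fb, 2.0 * math.pi)  # wrapped phase at frequency fb
print(unwrap_two_frequencies(phi_a, phi_b, fa, fb))  # recovers ~5.6
```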
FIG. 4 , a pixel of the sensor of theimaging device 225 captures intensity of the continuous intensity pattern of light 405 at a position of D+x0 relative to theillumination source 230, where x0 is a distance from a principal point of the imaging device 225 (e.g., an optical center of a detector) along an axis separating theillumination source 230 and the sensor (e.g., along a horizontal axis along which theillumination source 230 and the sensor are positioned). As further described above in conjunction withFIG. 4 , the position of the pixel along the axis separating theillumination source 230 and the sensor is related to the phase shift, ϕij, determined for the pixel. Additionally, as further described above θi defines the spatial periodicity of the continuous intensity pattern of light 405 in the local area and corresponds to half of the period T of the continuous intensity pattern of light. As the continuous intensity pattern oflight 405 expands angularly as depth z from theDCA 120 increases, the period T of the continuous intensity pattern oflight 405 corresponds to a specific depth z from theDCA 120, while the periodicity defined by θi is independent of depth z from theDCA 120. The dependence of the period T of the continuous intensity pattern oflight 405, in combination with the distance D between theimaging device 225 and theillumination source 230 allows theDCA 120 to determine the depth z of an object onto which the continuous intensity pattern oflight 405 is emitted, as the lateral distance at which the pixel captures a phase, D+x0, is equal to a product of the period T of the continuous intensity pattern of light 405 captured by theimaging device 225 and a ratio of the phase shift, ϕij, determined for the pixel to 2π (i.e., D+x0=T(ϕij/2π)). This relationship between the depth-dependent period T, the distance from a principal point of theimaging device 225, and phase shift, ϕij, determined for the pixel equates a an estimate of lateral extent at the camera plane and the plane including the object onto which the continuous intensity pattern oflight 405 was emitted, which both measure a distance from a center of the continuous intensity pattern of light 405 to a central ray of the pixel. - The continuous intensity pattern of
light 405 may be calibrated or determined using any suitable method, and scales with depth from the DCA 120. Accordingly, the period T of the continuous intensity pattern of light 405 at the depth z from the DCA 120 is equal to double a product of the depth z from the DCA 120 and a tangent of the angle, θi, which defines half of the period T of the continuous intensity pattern of light (i.e., T=(2)(z)(tan(θi))). Similarly, the location of the pixel relative to the illumination source 230 along an axis separating the illumination source 230 and the sensor, x0, is a product of the depth from the DCA 120, z, and a tangent of the angle, θc, between the line perpendicular to the plane including the imaging device 225 and the location on the target 410 from which the specific pixel captures intensities of the continuous intensity pattern of light 405 emitted by the illumination source 230 (i.e., x0=z(tan(θc))). Accordingly,
- D+z(tan(θc))=T(ϕij/2π)=(2)(z)(tan(θi))(ϕij/2π)   (equation 6)
- Solving equation 6 above for depth, z:
- z=D/((tan(θi)(ϕij/π))−tan(θc))   (equation 7)
- However, equation 7 above assumes the phase shift, ϕij, is zero when the location, x0, of the pixel relative to the illumination source 230 along the axis equals the inverse of the specific distance D separating the imaging device 225 and the illumination source 230 (i.e., ϕij(x0=D)=0). To satisfy this condition, a calibration offset, ϕij,cal, is determined for each pixel via a calibration process in which the sensor of the imaging device 225 captures intensities from the continuous illumination pattern of light 405 emitted onto a target at an accurately predetermined depth, zcal. In various embodiments, the target is a Lambertian surface or other surface that reflects at least a threshold amount of light incident on the target. Accounting for the calibration offset modifies equation (7) above into equation (5),
- z=D/((tan(θi)((ϕij−ϕij,cal)/π))−tan(θc))   (equation 5)
- which was previously described above in conjunction with FIG. 4. With the predetermined depth, zcal, the calibration offset for each pixel is determined as:
- ϕij,cal=ϕij(zcal)−(π/tan(θi))((D/zcal)+tan(θc))   (equation 8), where ϕij(zcal) is the phase shift determined for the pixel from intensities captured with the target at the predetermined depth zcal.
- The calibration offset is determined for each pixel of the sensor and for each frequency of the continuous intensity pattern of light 405 based on the predetermined depth zcal and is stored in the
DCA 120 for use during operation. A calibration offset for each pixel of the sensor is determined for each period of the continuous intensity pattern of light 405 emitted by the illumination source 230 and stored during the calibration process. For example, the DCA 120 stores a calibration offset for a pixel of the sensor in association with a location (e.g., a row and a column) of the pixel within the sensor and in association with a frequency of the continuous intensity pattern of light 405. In various embodiments, the DCA 120 stores a parameterized function for determining the calibration offset of different pixels of the sensor based on location within the sensor and frequency of the continuous intensity pattern of light 405 instead of storing calibration offsets determined for individual pixels of the sensor. The DCA 120 stores a parameterized function corresponding to each period T of continuous intensity patterns of light 405 emitted by the illumination source 230 in various embodiments. In some embodiments, the parameterized function determining the calibration offset of different pixels is a linear function. - In embodiments where the
illumination source 230 includes an acousto-optic modulator configured to generate two Gaussian beams of light that interfere with each other in the local area so that a sinusoidal interference pattern is generated as the continuous intensity pattern of light 405 emitted into the local area, the period T of the continuous intensity pattern of light 405 is determined as:
-
- In equation 9, λ is a wavelength of the illumination source 230 and a is the separation of the Gaussian beams generated by the acousto-optic modulator to generate the continuous intensity pattern of light 405 emitted into the local area surrounding the DCA 120. The determined period T may then be used to determine the calibration offset for various pixels of the detector, as further described above.
-
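For purposes of illustration only, the per-pixel depth computation described above in conjunction with equations (5) through (8) may be sketched in a few lines of Python. The sketch is not an implementation prescribed by this disclosure; the function names (e.g., depth_from_phase, calibration_offset) and the numeric values in the usage example are illustrative assumptions rather than values drawn from the specification.

```python
import numpy as np

def depth_from_phase(phi, phi_cal, D, theta_i, theta_c):
    """Estimate depth z for one pixel from its measured phase shift.

    Implements equation (5): z = D / (tan(theta_i) * (phi - phi_cal) / pi - tan(theta_c)),
    where D is the baseline between the imaging device and the illumination
    source, theta_i defines half of the fringe period, and theta_c is the
    pixel's viewing angle from the camera's optical axis.
    """
    denom = np.tan(theta_i) * (phi - phi_cal) / np.pi - np.tan(theta_c)
    return D / denom

def calibration_offset(phi_measured, z_cal, D, theta_i, theta_c):
    """Per-pixel calibration offset from a capture of a target at known depth z_cal.

    Derived by solving equation (5) for phi_cal with z = z_cal (equation (8) above).
    """
    return phi_measured - (np.pi / np.tan(theta_i)) * (D / z_cal + np.tan(theta_c))

# Usage with purely illustrative values (not taken from the specification):
D = 0.05                    # baseline between camera and illumination source, in meters
theta_i = np.deg2rad(2.0)   # angle defining half of the fringe period
theta_c = np.deg2rad(1.0)   # viewing angle of this particular pixel
phi_cal = calibration_offset(phi_measured=1.2, z_cal=1.0,
                             D=D, theta_i=theta_i, theta_c=theta_c)
z = depth_from_phase(phi=1.5, phi_cal=phi_cal,
                     D=D, theta_i=theta_i, theta_c=theta_c)
```

In operation, such a computation would be applied for each pixel and for each frequency of the continuous intensity pattern of light 405, using the stored per-pixel calibration offsets or the parameterized function described above.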
FIG. 6A shows an example pixel 600 of a sensor included in an imaging device 225 of a depth camera assembly (DCA) 120. In the example of FIG. 6A, the pixel 600 includes a photodiode 605 coupled to multiple charge storage bins 615, 625, 635. While FIG. 6A shows three charge storage bins 615, 625, 635 coupled to the photodiode 605, in other embodiments, the pixel 600 is coupled to more than three charge storage bins 615, 625, 635. The photodiode 605 is coupled to charge storage bin 615 via transfer gate 610, coupled to charge storage bin 625 via transfer gate 620, and coupled to charge storage bin 635 via transfer gate 630. - A controller is coupled to the
illumination source 230 of the DCA 120, which is further described above in conjunction with FIG. 4, and also to the sensor of the imaging device 225. The controller provides control signals to transfer gate 610, transfer gate 620, and transfer gate 630 based on times when the illumination source 230 emits a periodic illumination pattern. In various embodiments, the illumination source 230 is activated to emit a periodic illumination pattern, is deactivated, and is activated again to emit another periodic illumination pattern. When the illumination source 230 is deactivated, the controller communicates control signals to transfer gate 610, transfer gate 620, and transfer gate 630. The control signals cause a single transfer gate 610, 620, 630 to open, while the other transfer gates 610, 620, 630 remain closed, so charge accumulated by the photodiode 605 while the illumination source 230 was activated is transferred to the charge storage bin 615, 625, 635 coupled to the photodiode via the open single transfer gate 610, 620, 630. The open transfer gate 610, 620, 630 is closed when the illumination source 230 is again activated, and control signals from the controller open another transfer gate 610, 620, 630, so charge accumulated by the photodiode 605 while the illumination source 230 was active is transferred to a charge storage bin 615, 625, 635 via the open transfer gate 610, 620, 630.
-
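For purposes of illustration only, the alternating emit-then-transfer sequence described above may be sketched as a simple control loop. The sketch assumes a hypothetical driver interface (set_illumination, open_gate, close_gate) and illustrative timing constants; the specification does not prescribe any particular interface or timing.

```python
from itertools import cycle
import time

# Hypothetical driver hooks; the specification does not prescribe this interface.
def set_illumination(on: bool) -> None: ...
def open_gate(gate_id: int) -> None: ...
def close_gate(gate_id: int) -> None: ...

TRANSFER_GATES = [610, 620, 630]   # one transfer gate per charge storage bin (615, 625, 635)
EMIT_TIME_S = 1e-3                 # illumination-on interval (illustrative value)
TRANSFER_TIME_S = 1e-5             # gate-open interval while the illumination source is off

def run_capture_cycles(num_cycles: int) -> None:
    """Alternate emission of the periodic pattern with single-gate charge transfers.

    While the illumination source emits, all transfer gates stay closed and the
    photodiode accumulates charge; when the source is deactivated, exactly one
    gate opens so the accumulated charge lands in that gate's storage bin.
    """
    gates = cycle(TRANSFER_GATES)
    for _ in range(num_cycles):
        gate = next(gates)
        set_illumination(True)       # photodiode integrates the emitted pattern
        time.sleep(EMIT_TIME_S)
        set_illumination(False)      # stop emission before any gate opens
        open_gate(gate)              # transfer charge into this gate's bin only
        time.sleep(TRANSFER_TIME_S)
        close_gate(gate)             # gate closed again before the next emission
```

The property the sketch illustrates is that all transfer gates remain closed while the periodic illumination pattern is emitted, and exactly one transfer gate opens between emissions, so each charge storage bin 615, 625, 635 accumulates charge from a distinct set of emission intervals.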
FIG. 6B is one example of control signals regulating operation of the pixel 600 shown in FIG. 6A. For purposes of illustration, FIG. 6B identifies a signal 650 indicating times when the illumination source 230 emits a periodic illumination pattern. When the signal 650 has a maximum value in FIG. 6B, the illumination source 230 emits a periodic illumination pattern. FIG. 6B also shows control signals provided to transfer gate 610, transfer gate 620, and transfer gate 630. In the example of FIG. 6B, when a control signal provided to a transfer gate 610, 620, 630 has a maximum value, the transfer gate 610, 620, 630 receiving the control signal is open; when the control signal provided to a transfer gate 610, 620, 630 has a minimum value, the transfer gate 610, 620, 630 is closed. - In the example of
FIG. 6B, when the illumination source 230 is initially activated and emits a periodic illumination pattern, transfer gate 610, transfer gate 620, and transfer gate 630 are closed. When the illumination source 230 is deactivated and stops emitting the periodic illumination pattern, transfer gate 610 receives a control signal that opens transfer gate 610, while transfer gate 620 and transfer gate 630 remain closed. Charge accumulated by the photodiode 605 while the illumination source 230 was activated is transferred into charge storage bin 615 via transfer gate 610. The control signal closes transfer gate 610 before the illumination source 230 is activated again, and the photodiode 605 accumulates charge from light captured while the illumination source 230 is activated. When the illumination source 230 is deactivated, transfer gate 620 receives a control signal that opens transfer gate 620, while transfer gate 610 and transfer gate 630 remain closed; hence, charge accumulated by the photodiode 605 is transferred to charge storage bin 625. The control signal closes transfer gate 620 before the illumination source 230 is activated, and the photodiode 605 accumulates charge from light captured while the illumination source 230 is activated. When the illumination source 230 is again deactivated, transfer gate 630 receives a control signal that opens transfer gate 630, while transfer gate 610 and transfer gate 620 remain closed. Accordingly, charge accumulated by the photodiode 605 is transferred to charge storage bin 635. Transfer gate 630 closes before the illumination source 230 is again activated, and the control signals received by transfer gate 610, transfer gate 620, and transfer gate 630 open and close the transfer gates 610, 620, 630 as described above while the illumination source 230 is activated and deactivated. - In some embodiments, the controller determines a signal to noise ratio from the charge accumulated in charge storage bins 615, 625, 635 and compares the determined signal to noise ratio to a threshold. If the determined signal to noise ratio is less than the threshold, the controller provides control signals to open and close transfer gates 610, 620, 630, as further described above, until the signal to noise ratio determined from the charge accumulated in charge storage bins 615, 625, 635 equals or exceeds the threshold. If the determined signal to noise ratio equals or exceeds the threshold, the controller combines the charge stored in each of charge storage bins 615, 625, 635 to determine an intensity of light from the illumination source 230 captured by the pixel 600 and determines a depth of a location within the local area surrounding the DCA 120 from which the pixel 600 captured light from the illumination source 230, as further described above in conjunction with FIGS. 4 and 5. Accumulating charge in a single charge storage bin 615, 625, 635 coupled to the photodiode 605 by an open transfer gate 610, 620, 630 limits accumulation of background noise caused by the photodiode 605 capturing light from sources other than the illumination source 230, allowing the pixel 600 to have a higher signal to noise ratio than other techniques. Additionally, accumulating charge from the photodiode 605 in different charge storage bins 615, 625, 635 allows the pixel to multiplex phase shift determinations for the periodic illumination pattern captured by the pixel 600, which reduces the number of images for the image capture device 225 to capture to determine the phase shift of the periodic illumination pattern captured by the pixel 600 and reduces an amount of time the pixel 600 captures light emitted from the illumination source. - Referring back to
FIG. 6A, the pixel 600 also includes a drain 645 coupled to the photodiode 605 via a shutter 640. When the illumination source 230 is deactivated, the controller provides a control signal to the shutter 640 that causes the shutter 640 to open and couple the photodiode 605 to the drain 645. Coupling the photodiode 605 to the drain 645 while the illumination source is deactivated discharges charge produced by the photodiode 605 from ambient light in the local area surrounding the DCA 120. When the illumination source 230 is activated, the controller provides an alternative control signal to the shutter 640 that closes the shutter 640 to decouple the photodiode 605 from the drain 645. Additionally, the shutter 640 may be configured to open if a charge accumulated by the photodiode 605 from captured light equals or exceeds a threshold value, allowing charge accumulated by the photodiode 605 to be removed via the drain 645, preventing the photodiode 605 from saturating and preventing charge accumulated by the photodiode 605 from being transferred into adjacent pixels 600. The shutter 640 is configured to couple the photodiode 605 to the drain 645 until the charge accumulated by the photodiode 605 is less than the threshold or is at least a threshold amount below the threshold in some embodiments. - However, in other embodiments, different control signals regulate operation of the
pixel 600 shown in FIG. 6A. For example, the illumination source 230 is activated and remains activated, while different transfer gates 610, 620, 630 are activated at different times, so charge is accumulated in different bins 615, 625, 635 at different times. For example, during a time interval, the illumination source 230 remains activated and emitting the periodic illumination pattern, and control signals alternately activate transfer gates 610, 620, 630 during different time periods, so charge accumulated by the photodiode 605 is alternately transferred into bins 615, 625, 635, respectively, during the time interval. As an example, during a first time period while the illumination source 230 is activated, transfer gate 610 is activated and transfer gates 620, 630 are closed, so charge accumulated by the photodiode 605 is transferred into bin 615. During a second time period while the illumination source 230 remains activated, transfer gate 620 is activated and transfer gates 610, 630 are closed, so charge accumulated by the photodiode 605 is transferred into bin 625. Similarly, during a third time period while the illumination source 230 remains activated, transfer gate 630 is activated and transfer gates 610, 620 are closed, so charge accumulated by the photodiode 605 is transferred into bin 635. The different transfer gates 610, 620, 630 may alternately be activated as described above during a time interval while the illumination source 230 is emitting the periodic illumination pattern. - In other embodiments, relative timing between control signals activating the
illumination source 230 and control signals activating transfer gates 610, 620, 630 may differ. For example, a control signal activates a transfer gate 610, 620, 630 so the transfer gate 610, 620, 630 is active for at least a portion of a time while the illumination source 230 is active and emitting the periodic illumination pattern. As another example, control signals activating a transfer gate 610, 620, 630 are received by a transfer gate 610, 620, 630 after the illumination source 230 has been deactivated for a specific amount of time, adding a delay between deactivation of the illumination source 230 and activation of a transfer gate 610, 620, 630. However, the preceding are merely examples, and the pixel 600 may be operated in any suitable manner in different embodiments.
-
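For purposes of illustration only, the signal-to-noise-driven accumulation described above in conjunction with FIG. 6A may be sketched as follows. The helper functions run_capture_cycles and read_bins are hypothetical hooks (the former corresponding to the control loop sketched earlier), and the shot-noise-plus-read-noise model is an illustrative assumption rather than a noise model prescribed by this disclosure.

```python
import math
from typing import List

# Hypothetical hooks: run_capture_cycles is sketched above; read_bins stands in
# for reading the accumulated charge (in electrons) out of bins 615, 625, 635.
def run_capture_cycles(num_cycles: int) -> None: ...
def read_bins() -> List[float]:
    return [0.0, 0.0, 0.0]   # placeholder readout

def estimate_snr(bin_charges: List[float], read_noise_e: float = 5.0) -> float:
    """Rough shot-noise-limited SNR estimate from the accumulated bin charges."""
    signal = sum(bin_charges)
    noise = math.sqrt(signal + len(bin_charges) * read_noise_e ** 2)
    return signal / noise if noise > 0 else 0.0

def accumulate_until_snr(threshold: float, cycles_per_burst: int = 100,
                         max_bursts: int = 50) -> List[float]:
    """Keep cycling the illumination source and transfer gates until the SNR threshold is met."""
    charges: List[float] = [0.0, 0.0, 0.0]
    for _ in range(max_bursts):
        run_capture_cycles(cycles_per_burst)   # open and close gates as described above
        charges = read_bins()                  # accumulated charge in bins 615, 625, 635
        if estimate_snr(charges) >= threshold:
            break
    return charges
```

Additional capture cycles continue to add charge to charge storage bins 615, 625, 635 until the threshold is met, after which the bin contents are combined to determine intensity and depth as described above.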
FIG. 7 is another example of control signals regulating operation of the pixel 600 shown in FIG. 6A. In the example of FIG. 7, each transfer gate 610, 620, 630 is activated in a sequence separated by a fixed drain time. For purposes of illustration, FIG. 7 indicates times when the illumination source 230 emits light. As illustrated in FIG. 7, an illumination source 230, further described above in conjunction with FIGS. 3 and 4, emits pulses of light during a time interval when one of transfer gate 610, transfer gate 620, or transfer gate 630 is open. In the example shown by FIG. 7, the illumination source 230 emits pulses of light synchronized with opening of each transfer gate 610, 620, 630. As shown in FIG. 7, during a time interval when the illumination source 230 emits pulse of light 710, transfer gate 610 is open and transfer gate 620 and transfer gate 630 are closed; hence, charge accumulated when pulse of light 710 is emitted is transferred into charge storage bin 615 via transfer gate 610. Similarly, during an additional time interval when the illumination source 230 emits pulse of light 720, transfer gate 620 is open, while transfer gate 610 and transfer gate 630 are closed; therefore, charge accumulated during emission of pulse of light 720 is transferred into charge storage bin 625 via transfer gate 620. During a further time interval, illumination source 230 emits pulse of light 730 and transfer gate 630 is open, while transfer gate 610 and transfer gate 620 are closed; thus, charge accumulated when pulse of light 730 is emitted is transferred into charge storage bin 635 via transfer gate 630. In various embodiments, opening of transfer gate 610, transfer gate 620, and transfer gate 630 is synchronized with emission of pulse of light 710, pulse of light 720, and pulse of light 730, respectively. - In the example of
FIG. 7, the shutter 640 is opened during intervals when the illumination source 230 is not emitting pulse of light 710, pulse of light 720, or pulse of light 730. Opening the shutter 640 during intervals between opening of different transfer gates 610, 620, 630 removes ambient background light captured by the photodiode 605 during times when the illumination source 230 is not emitting light by transferring the captured ambient background light to the drain 645. In various embodiments, the shutter 640 is opened within a threshold time interval from a time when the illumination source 230 stops emitting pulse of light 710, pulse of light 720, or pulse of light 730. For example, the shutter 640 is open from within a threshold time interval from a time when the illumination source 230 stops emitting pulse of light 710 until a time when the illumination source 230 starts emitting pulse of light 720 (or until a time within the threshold time interval before the illumination source 230 starts emitting pulse of light 720). Similarly, the shutter 640 may be open from a time within the threshold time interval from a time when the illumination source 230 stops emitting pulse of light 720 until a time when the illumination source 230 starts emitting pulse of light 730 (or until a time within the threshold time interval before the illumination source 230 starts emitting pulse of light 730); the shutter 640 may further be open from a time that is within the threshold time interval from a time when the illumination source 230 stops emitting pulse of light 730 until a time when the illumination source 230 starts emitting another pulse of light (or within a threshold time interval from the time when the illumination source 230 starts emitting another pulse of light). However, in other embodiments, the pixel 600 does not include the shutter 640 and the drain 645. - While
FIG. 7 shows example timing of illumination of pulses of light, opening of transfer gate 610, transfer gate 620, transfer gate 630, and the shutter 640, different implementations may have different timings. For example, the shutter 640 may be opened for a longer or a shorter time interval than shown in FIG. 7; in an embodiment, the shutter 640 is opened for a fraction of an amount of time between emission of consecutive pulses of light by the illumination source 230 (e.g., for 10% of a time between emission of consecutive pulses of light by the illumination source 230, or for 50% of a time between emission of consecutive pulses of light by the illumination source 230). Alternatively, the shutter is opened for a specific amount of time between emission of consecutive pulses of light by the illumination source 230. In some embodiments, the shutter 640 may not be opened. Similarly, transfer gate 610, transfer gate 620, or transfer gate 630 may be opened for a longer or a shorter length of time than those shown in FIG. 7. - In some embodiments, the
illumination source 230 emits a different pattern of light when different transfer gates 610, 620, 630 are open. For example, when transfer gate 610 is open, the illumination source 230 emits a pulse of light (e.g., pulse of light 710 in FIG. 7) having a first illumination pattern, emits a pulse of light (e.g., pulse of light 720 in FIG. 7) having a second illumination pattern when transfer gate 620 is open, and emits a pulse of light (e.g., pulse of light 730 in FIG. 7) having a third illumination pattern when transfer gate 630 is open. In various embodiments, the first illumination pattern, the second illumination pattern, and the third illumination pattern are different from each other. Accordingly, the illumination source 230 emits pulses of light having different illumination patterns during time intervals when different transfer gates 610, 620, 630 are open. - Additionally, in some embodiments, the
illumination source 230 emits a variable number of pulses of light that are synchronized with opening of one of transfer gate 610, transfer gate 620, and transfer gate 630; the number of emitted pulses of light may be fixed or may dynamically vary (e.g., based on an auto-exposure mechanism). A different number of pulses of light may be synchronized with opening of different transfer gates 610, 620, 630 in some embodiments. For example, the illumination source 230 emits pulses of light synchronized with 1000 openings of transfer gate 610, emits pulses of light synchronized with 2000 openings of transfer gate 620, and emits pulses of light synchronized with 3000 openings of transfer gate 630; however, the illumination source 230 may emit any arbitrary number of pulses of light differing for opening of different transfer gates 610, 620, 630 and synchronized with opening of different transfer gates 610, 620, 630. - Alternatively, the
illumination source 230 continuously emits a pattern of light instead of discrete pulses of light. The continuous pattern of light emitted by the illumination source slowly changes over time in various embodiments (e.g., as a fringe pattern that is moving continuously in time, as further described above in conjunction with FIGS. 3 and 4). When the illumination source 230 emits the continuous pattern of light, the transfer gates 610, 620, 630 and shutter 640 are opened as described in conjunction with FIG. 7, so the continuous pattern of light is integrated over a fixed discrete time. In such a scenario, the emitted illumination pattern is configured to return to a previously emitted pattern over a specified time interval; hence, the illumination pattern changes over time, but repeats with a specific frequency or period. Opening of each transfer gate 610, 620, 630 is also synchronized to repeat using the specified time interval, causing different transfer gates 610, 620, 630 to be opened when the same portion of the illumination pattern is emitted during different periods. For example, transfer gate 610 is synchronized to be opened when a specific portion of the continuous illumination pattern is emitted during each period of the continuous illumination pattern, so transfer gate 610 integrates the specific portion of the continuous illumination pattern during each period of the continuous illumination pattern. Similarly, transfer gate 620 is synchronized with the illumination source 230 so that transfer gate 620 opens when a different specific portion of the continuous illumination pattern is emitted during each period of the continuous illumination pattern, so transfer gate 620 integrates the different specific portion of the continuous illumination pattern during each period of the continuous illumination pattern. - The language used in the specification has been principally selected for readability and instructional purposes, and it may not have been selected to delineate or circumscribe the inventive subject matter. It is therefore intended that the scope of the patent rights be limited not by this detailed description, but rather by any claims that issue on an application based hereon. Accordingly, the disclosure of the embodiments is intended to be illustrative, but not limiting, of the scope of the patent rights.
Claims (20)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US18/208,143 US20230328401A1 (en) | 2018-03-13 | 2023-06-09 | Timing of multiplexed sensor phase measurements in a depth camera assembly for depth determination using fringe interferometry |
Applications Claiming Priority (3)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US201862642199P | 2018-03-13 | 2018-03-13 | |
| US16/298,278 US11716548B2 (en) | 2018-03-13 | 2019-03-11 | Timing of multiplexed sensor phase measurements in a depth camera assembly for depth determination using fringe interferometry |
| US18/208,143 US20230328401A1 (en) | 2018-03-13 | 2023-06-09 | Timing of multiplexed sensor phase measurements in a depth camera assembly for depth determination using fringe interferometry |
Related Parent Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US16/298,278 Continuation US11716548B2 (en) | 2018-03-13 | 2019-03-11 | Timing of multiplexed sensor phase measurements in a depth camera assembly for depth determination using fringe interferometry |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| US20230328401A1 true US20230328401A1 (en) | 2023-10-12 |
Family
ID=67905446
Family Applications (2)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US16/298,278 Active 2042-04-29 US11716548B2 (en) | 2018-03-13 | 2019-03-11 | Timing of multiplexed sensor phase measurements in a depth camera assembly for depth determination using fringe interferometry |
| US18/208,143 Abandoned US20230328401A1 (en) | 2018-03-13 | 2023-06-09 | Timing of multiplexed sensor phase measurements in a depth camera assembly for depth determination using fringe interferometry |
Family Applications Before (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US16/298,278 Active 2042-04-29 US11716548B2 (en) | 2018-03-13 | 2019-03-11 | Timing of multiplexed sensor phase measurements in a depth camera assembly for depth determination using fringe interferometry |
Country Status (2)
| Country | Link |
|---|---|
| US (2) | US11716548B2 (en) |
| CN (1) | CN110275174A (en) |
Families Citing this family (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US11521328B2 (en) | 2019-10-16 | 2022-12-06 | Banner Engineering Corp | Image-based jam detection |
| WO2021248427A1 (en) * | 2020-06-12 | 2021-12-16 | 深圳市汇顶科技股份有限公司 | Depth sensing device and related electronic device, and method for operating depth sensing device |
| EP4305448A1 (en) * | 2021-03-09 | 2024-01-17 | Banner Engineering Corporation | Pixel domain field calibration of triangulation sensors |
| US12111397B2 (en) | 2021-03-09 | 2024-10-08 | Banner Engineering Corp. | Pixel domain field calibration of triangulation sensors |
| CN118882656B (en) * | 2024-09-27 | 2025-01-21 | 山东浪潮科学研究院有限公司 | A ship positioning method and storage medium based on Fresnel zone |
Citations (11)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US5081530A (en) * | 1987-06-26 | 1992-01-14 | Antonio Medina | Three dimensional camera and range finder |
| US20080180650A1 (en) * | 2005-03-17 | 2008-07-31 | Iee International Electronics & Engineering S.A. | 3-D Imaging System |
| US20110157353A1 (en) * | 2009-12-28 | 2011-06-30 | Canon Kabushiki Kaisha | Measurement system, image correction method, and computer program |
| US20120050750A1 (en) * | 2009-04-21 | 2012-03-01 | Michigan Aerospace Corporation | Atmospheric measurement system |
| US20120169053A1 (en) * | 2009-07-29 | 2012-07-05 | Michigan Aerospace Corporation | Atmospheric Measurement System |
| US8224064B1 (en) * | 2003-05-21 | 2012-07-17 | University Of Kentucky Research Foundation, Inc. | System and method for 3D imaging using structured light illumination |
| US20120274937A1 (en) * | 2009-04-21 | 2012-11-01 | Michigan Aerospace Corporation | Light processing system and method |
| US20170146657A1 (en) * | 2015-11-24 | 2017-05-25 | Microsoft Technology Licensing, Llc | Imaging sensor with shared pixel readout circuitry |
| US20170180703A1 (en) * | 2015-12-13 | 2017-06-22 | Photoneo S.R.O. | Methods And Apparatus For Superpixel Modulation With Ambient Light Suppression |
| US20170195589A1 (en) * | 2014-03-03 | 2017-07-06 | Photoneo S.R.O. | Methods and Apparatus for Superpixel Modulation |
| US10491877B1 (en) * | 2017-12-21 | 2019-11-26 | Facebook Technologies, Llc | Depth camera assembly using multiplexed sensor phase measurements to determine depth using fringe interferometry |
Family Cites Families (28)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP5243100B2 (en) | 2008-05-12 | 2013-07-24 | ブレインビジョン株式会社 | Pixel structure of solid-state image sensor |
| KR101819073B1 (en) | 2010-03-15 | 2018-01-16 | 시리얼 테크놀로지즈 에스.에이. | Backplane device for a spatial light modulator and method for operating a backplane device |
| US9213405B2 (en) * | 2010-12-16 | 2015-12-15 | Microsoft Technology Licensing, Llc | Comprehension and intent-based content for augmented reality displays |
| TWI505453B (en) | 2011-07-12 | 2015-10-21 | Sony Corp | Solid-state imaging device, method for driving the same, method for manufacturing the same, and electronic device |
| US8773562B1 (en) | 2013-01-31 | 2014-07-08 | Apple Inc. | Vertically stacked image sensor |
| US9276031B2 (en) | 2013-03-04 | 2016-03-01 | Apple Inc. | Photodiode with different electric potential regions for image sensors |
| US9531976B2 (en) | 2014-05-29 | 2016-12-27 | Semiconductor Components Industries, Llc | Systems and methods for operating image sensor pixels having different sensitivities and shared charge storage regions |
| WO2015192117A1 (en) * | 2014-06-14 | 2015-12-17 | Magic Leap, Inc. | Methods and systems for creating virtual and augmented reality |
| US9832407B2 (en) * | 2014-11-26 | 2017-11-28 | Semiconductor Components Industries, Llc | Global shutter image sensor pixels having improved shutter efficiency |
| CN111799285B (en) | 2014-12-18 | 2024-05-14 | 索尼公司 | Imaging device |
| US9425233B2 (en) | 2014-12-22 | 2016-08-23 | Google Inc. | RGBZ pixel cell unit for an RGBZ image sensor |
| US9871065B2 (en) | 2014-12-22 | 2018-01-16 | Google Inc. | RGBZ pixel unit cell with first and second Z transfer gates |
| US10440355B2 (en) | 2015-11-06 | 2019-10-08 | Facebook Technologies, Llc | Depth mapping with a head mounted display using stereo cameras and structured light |
| US10708577B2 (en) | 2015-12-16 | 2020-07-07 | Facebook Technologies, Llc | Range-gated depth camera assembly |
| EP3397921A1 (en) | 2015-12-30 | 2018-11-07 | Faro Technologies, Inc. | Registration of three-dimensional coordinates measured on interior and exterior portions of an object |
| US10152121B2 (en) * | 2016-01-06 | 2018-12-11 | Facebook Technologies, Llc | Eye tracking through illumination by head-mounted displays |
| US9858672B2 (en) | 2016-01-15 | 2018-01-02 | Oculus Vr, Llc | Depth mapping using structured light and time of flight |
| US10003726B2 (en) * | 2016-03-25 | 2018-06-19 | Microsoft Technology Licensing, Llc | Illumination module for near eye-to-eye display system |
| FR3060250B1 (en) | 2016-12-12 | 2019-08-23 | Commissariat A L'energie Atomique Et Aux Energies Alternatives | IMAGE SENSOR FOR CAPTURING A 2D IMAGE AND DEPTH |
| US10616519B2 (en) | 2016-12-20 | 2020-04-07 | Microsoft Technology Licensing, Llc | Global shutter pixel structures with shared transfer gates |
| US10025384B1 (en) | 2017-01-06 | 2018-07-17 | Oculus Vr, Llc | Eye tracking architecture for common structured light and time-of-flight framework |
| JP6737192B2 (en) | 2017-01-25 | 2020-08-05 | セイコーエプソン株式会社 | Solid-state imaging device and electronic device |
| US10469775B2 (en) | 2017-03-31 | 2019-11-05 | Semiconductor Components Industries, Llc | High dynamic range storage gate pixel circuitry |
| US20180294304A1 (en) | 2017-04-05 | 2018-10-11 | Semiconductor Components Industries, Llc | Image sensors with vertically stacked photodiodes and vertical transfer gates |
| US10163963B2 (en) | 2017-04-05 | 2018-12-25 | Semiconductor Components Industries, Llc | Image sensors with vertically stacked photodiodes and vertical transfer gates |
| US20180295306A1 (en) | 2017-04-06 | 2018-10-11 | Semiconductor Components Industries, Llc | Image sensors with diagonal readout |
| FR3065836B1 (en) | 2017-04-28 | 2020-02-07 | Commissariat A L'energie Atomique Et Aux Energies Alternatives | STORAGE AREA FOR A PIXEL OF AN IMAGE MATRIX |
| US10419701B2 (en) | 2017-06-26 | 2019-09-17 | Facebook Technologies, Llc | Digital pixel image sensor |
-
2019
- 2019-03-11 US US16/298,278 patent/US11716548B2/en active Active
- 2019-03-13 CN CN201910190419.0A patent/CN110275174A/en active Pending
-
2023
- 2023-06-09 US US18/208,143 patent/US20230328401A1/en not_active Abandoned
Patent Citations (12)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US5081530A (en) * | 1987-06-26 | 1992-01-14 | Antonio Medina | Three dimensional camera and range finder |
| US8224064B1 (en) * | 2003-05-21 | 2012-07-17 | University Of Kentucky Research Foundation, Inc. | System and method for 3D imaging using structured light illumination |
| US20080180650A1 (en) * | 2005-03-17 | 2008-07-31 | Iee International Electronics & Engineering S.A. | 3-D Imaging System |
| US20120050750A1 (en) * | 2009-04-21 | 2012-03-01 | Michigan Aerospace Corporation | Atmospheric measurement system |
| US20120274937A1 (en) * | 2009-04-21 | 2012-11-01 | Michigan Aerospace Corporation | Light processing system and method |
| US20120169053A1 (en) * | 2009-07-29 | 2012-07-05 | Michigan Aerospace Corporation | Atmospheric Measurement System |
| US20110157353A1 (en) * | 2009-12-28 | 2011-06-30 | Canon Kabushiki Kaisha | Measurement system, image correction method, and computer program |
| US20170195589A1 (en) * | 2014-03-03 | 2017-07-06 | Photoneo S.R.O. | Methods and Apparatus for Superpixel Modulation |
| US20170146657A1 (en) * | 2015-11-24 | 2017-05-25 | Microsoft Technology Licensing, Llc | Imaging sensor with shared pixel readout circuitry |
| US10151838B2 (en) * | 2015-11-24 | 2018-12-11 | Microsoft Technology Licensing, Llc | Imaging sensor with shared pixel readout circuitry |
| US20170180703A1 (en) * | 2015-12-13 | 2017-06-22 | Photoneo S.R.O. | Methods And Apparatus For Superpixel Modulation With Ambient Light Suppression |
| US10491877B1 (en) * | 2017-12-21 | 2019-11-26 | Facebook Technologies, Llc | Depth camera assembly using multiplexed sensor phase measurements to determine depth using fringe interferometry |
Also Published As
| Publication number | Publication date |
|---|---|
| US11716548B2 (en) | 2023-08-01 |
| US20190285751A1 (en) | 2019-09-19 |
| CN110275174A (en) | 2019-09-24 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US10410373B1 (en) | Calibration of a phase interferometry depth camera assembly | |
| US10491877B1 (en) | Depth camera assembly using multiplexed sensor phase measurements to determine depth using fringe interferometry | |
| US20230328401A1 (en) | Timing of multiplexed sensor phase measurements in a depth camera assembly for depth determination using fringe interferometry | |
| US10228240B2 (en) | Depth mapping using structured light and time of flight | |
| US11625845B2 (en) | Depth measurement assembly with a structured light source and a time of flight camera | |
| US10469722B2 (en) | Spatially tiled structured light projector | |
| US10791283B2 (en) | Imaging device based on lens assembly with embedded filter | |
| KR20190028356A (en) | Range - Gate Depth Camera Assembly | |
| US11399139B2 (en) | High dynamic range camera assembly with augmented pixels | |
| US11509803B1 (en) | Depth determination using time-of-flight and camera assembly with augmented pixels | |
| US10791286B2 (en) | Differentiated imaging using camera assembly with augmented pixels | |
| US10855973B1 (en) | Depth mapping using fringe interferometry | |
| US10852434B1 (en) | Depth camera assembly using fringe interferometery via multiple wavelengths | |
| US10859702B1 (en) | Positional tracking using retroreflectors | |
| US11567318B1 (en) | Determining features of a user's eye from depth mapping of the user's eye via indirect time of flight |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
| AS | Assignment |
Owner name: FACEBOOK TECHNOLOGIES, LLC, CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:HALL, MICHAEL;CHAO, QING;REEL/FRAME:067238/0243 Effective date: 20190311 |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
|
| AS | Assignment |
Owner name: META PLATFORMS TECHNOLOGIES, LLC, CALIFORNIA Free format text: CHANGE OF NAME;ASSIGNOR:FACEBOOK TECHNOLOGIES, LLC;REEL/FRAME:067397/0648 Effective date: 20220318 |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: FINAL REJECTION MAILED |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE AFTER FINAL ACTION FORWARDED TO EXAMINER |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
|
| STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO PAY ISSUE FEE |