
US20180082482A1 - Display system having world and user sensors - Google Patents

Display system having world and user sensors

Info

Publication number
US20180082482A1
Authority
US
United States
Prior art keywords
user
sensors
information
environment
controller
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US15/711,992
Inventor
Ricardo J. Motta
Brett D. Miller
Tobias RICK
Manohar B. Srikanth
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Apple Inc
Original Assignee
Apple Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Apple Inc filed Critical Apple Inc
Priority to US15/711,992 priority Critical patent/US20180082482A1/en
Priority to KR1020197005049A priority patent/KR102230561B1/en
Priority to CN201780053117.8A priority patent/CN109643145B/en
Priority to EP17780948.0A priority patent/EP3488315B1/en
Priority to PCT/US2017/053100 priority patent/WO2018057991A1/en
Assigned to APPLE INC. reassignment APPLE INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: MILLER, BRETT D., MOTTA, RICARDO J., RICK, Tobias, SRIKANTH, MANOHAR B.
Publication of US20180082482A1 publication Critical patent/US20180082482A1/en
Priority to US16/361,110 priority patent/US11217021B2/en

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/16Constructional details or arrangements
    • G06F1/1613Constructional details or arrangements for portable computers
    • G06F1/163Wearable computers, e.g. on a belt
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/0093Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00 with means for monitoring data relating to the user, e.g. head-tracking, eye-tracking
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/017Head mounted
    • G02B27/0172Head mounted characterised by optical features
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/012Head tracking input arrangements
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/015Input arrangements based on nervous system activity detection, e.g. brain waves [EEG] detection, electromyograms [EMG] detection, electrodermal response detection
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • H04N13/0484
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/332Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
    • H04N13/344Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/366Image reproducers using viewer tracking
    • H04N13/383Image reproducers using viewer tracking for tracking with gaze detection, i.e. detecting the lines of sight of the viewer's eyes
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • G02B2027/0132Head-up displays characterised by optical features comprising binocular systems
    • G02B2027/0136Head-up displays characterised by optical features comprising binocular systems with a single image source for both eyes
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • G02B2027/0138Head-up displays characterised by optical features comprising image capture systems, e.g. camera
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • G02B2027/014Head-up displays characterised by optical features comprising information/image processing systems
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0179Display position adjusting means not related to the information to be displayed
    • G02B2027/0187Display position adjusting means not related to the information to be displayed slaved to motion of at least a part of the body of the user, e.g. head, eye

Definitions

  • Virtual reality allows users to experience and/or interact with an immersive artificial environment, such that the user feels as if they were physically in that environment.
  • virtual reality systems may display stereoscopic scenes to users in order to create an illusion of depth, and a computer may adjust the scene content in real-time to provide the illusion of the user moving within the scene.
  • Mixed reality (MR) combines computer generated information (referred to as virtual content) with real world images or a real world view to augment, or add content to, a user's view of the world.
  • the simulated environments of virtual reality and/or the mixed environments of augmented reality may thus be utilized to provide an interactive user experience for multiple applications, such as applications that add virtual content to a real-time view of the viewer's environment, interacting with virtual training environments, gaming, remotely controlling drones or other mechanical systems, viewing digital media content, interacting with the Internet, or the like.
  • Embodiments of a mixed reality system may include a mixed reality device such as a headset, helmet, goggles, or glasses (referred to herein as a head-mounted display (HMD)) that includes a projector mechanism for projecting or displaying frames including left and right images to a user's eyes to thus provide 3D virtual views to the user.
  • the 3D virtual views may include views of the user's environment augmented with virtual content (e.g., virtual objects, virtual tags, etc.).
  • the mixed reality system may include world-facing sensors that collect information about the user's environment (e.g., video, depth information, lighting information, etc.), and user-facing sensors that collect information about the user (e.g., the user's expressions, eye movement, hand gestures, etc.).
  • the sensors provide the information as inputs to a controller of the mixed reality system.
  • the controller may render frames including virtual content based at least in part on the inputs from the world and user sensors.
  • the controller may be integrated in the HMD, or alternatively may be implemented at least in part by a device external to the HMD.
  • the HMD may display the frames generated by the controller to provide a 3D virtual view including the virtual content and a view of the user's environment for viewing by the user.
  • the sensors may include one or more cameras that capture high-quality views of the user's environment that may be used to provide the user with a virtual view of their real environment.
  • the sensors may include one or more sensors that capture depth or range information for the user's environment.
  • the sensors may include one or more sensors that may capture information about the user's position, orientation, and motion in the environment.
  • the sensors may include one or more cameras that capture lighting information (e.g., direction, color, intensity) in the user's environment that may, for example, be used in rendering (e.g., coloring and/or lighting) content in the virtual view.
  • the sensors may include one or more sensors that track position and movement of the user's eyes.
  • the sensors may include one or more sensors that track position, movement, and gestures of the user's hands, fingers, and/or arms. In some embodiments, the sensors may include one or more sensors that track expressions of the user's eyebrows/forehead. In some embodiments, the sensors may include one or more sensors that track expressions of the user's mouth/jaw.
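
As a rough illustration of how these two streams of sensor data might be organized for the controller, the sketch below bundles per-frame world-sensor and user-sensor readings into simple containers. All type and field names are assumptions made for illustration; the patent does not specify any data structures.

```python
from dataclasses import dataclass
from typing import Optional
import numpy as np

@dataclass
class WorldSensorFrame:
    """One frame of world-facing sensor input (all field names are illustrative)."""
    rgb_left: np.ndarray                        # see-through camera image for the left eye
    rgb_right: np.ndarray                       # see-through camera image for the right eye
    depth_map: Optional[np.ndarray] = None      # range data from a world mapping sensor
    head_pose: Optional[np.ndarray] = None      # 4x4 pose matrix from head pose sensors / IMU
    light_estimate: Optional[dict] = None       # e.g., {"direction": ..., "color": ..., "intensity": ...}

@dataclass
class UserSensorFrame:
    """One frame of user-facing sensor input (all field names are illustrative)."""
    gaze_left: Optional[np.ndarray] = None      # unit gaze vector for the left eye
    gaze_right: Optional[np.ndarray] = None     # unit gaze vector for the right eye
    pupil_dilation: Optional[float] = None      # normalized pupil size
    hand_joints: Optional[np.ndarray] = None    # Nx3 tracked hand/finger positions
    brow_features: Optional[np.ndarray] = None  # eyebrow/forehead expression measurements
    jaw_features: Optional[np.ndarray] = None   # mouth/jaw expression measurements

@dataclass
class SensorInputs:
    """Everything the controller consumes when rendering one frame of the 3D virtual view."""
    world: WorldSensorFrame
    user: UserSensorFrame
```
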
  • FIG. 1 illustrates a mixed reality system, according to at least some embodiments.
  • FIGS. 2A through 2C illustrate world-facing and user-facing sensors of a head-mounted display (HMD), according to at least some embodiments.
  • FIG. 3 is a flowchart of a method of operation for a mixed reality system as illustrated in FIGS. 1 through 2C , according to at least some embodiments.
  • FIG. 4 illustrates components of a mixed reality system, according to at least some embodiments.
  • “Configured To.” Various units, circuits, or other components may be described or claimed as “configured to” perform a task or tasks.
  • In such contexts, “configured to” is used to connote structure by indicating that the units/circuits/components include structure (e.g., circuitry) that performs the task or tasks during operation. As such, the unit/circuit/component can be said to be configured to perform the task even when the specified unit/circuit/component is not currently operational (e.g., is not on).
  • the units/circuits/components used with the “configured to” language include hardware—for example, circuits, memory storing program instructions executable to implement the operation, etc.
  • Reciting that a unit/circuit/component is “configured to” perform one or more tasks is expressly intended not to invoke 35 U.S.C. § 112, paragraph (f), for that unit/circuit/component.
  • Additionally, “configured to” can include generic structure (e.g., generic circuitry) that is manipulated by software or firmware (e.g., an FPGA or a general-purpose processor executing software) to operate in a manner that is capable of performing the task(s) at issue.
  • “Configured to” may also include adapting a manufacturing process (e.g., a semiconductor fabrication facility) to fabricate devices (e.g., integrated circuits) that are adapted to implement or perform one or more tasks.
  • “First,” “Second,” etc. As used herein, these terms are used as labels for nouns that they precede, and do not imply any type of ordering (e.g., spatial, temporal, logical, etc.).
  • a buffer circuit may be described herein as performing write operations for “first” and “second” values.
  • the terms “first” and “second” do not necessarily imply that the first value must be written before the second value.
  • Embodiments of a mixed reality system may include a mixed reality device such as a headset, helmet, goggles, or glasses (referred to herein as a head-mounted display (HMD)) that includes a projector mechanism for projecting or displaying frames including left and right images to a user's eyes to thus provide 3D virtual views to the user.
  • the 3D virtual views may include views of the user's environment augmented with virtual content (e.g., virtual objects, virtual tags, etc.).
  • the mixed reality system may also include world-facing sensors that collect information about the user's environment (e.g., video, depth information, lighting information, etc.), and user-facing sensors that collect information about the user (e.g., the user's expressions, eye movement, hand gestures, etc.).
  • the sensors may provide the collected information to a controller of the mixed reality system.
  • the controller may render frames for display by the projector that include virtual content based at least in part on the various information obtained from the sensors.
  • the mixed reality system may include world-facing sensors (also referred to as world sensors), for example located on external surfaces of a mixed reality HMD, that collect various information about the user's environment.
  • the world sensors may include one or more “video see through” cameras (e.g., RGB (visible light) cameras) that capture high-quality views of the user's environment that may be used to provide the user with a virtual view of their real environment.
  • the world sensors may include one or more world mapping sensors (e.g., infrared (IR) cameras with an IR illumination source, or Light Detection and Ranging (LIDAR) emitters and receivers/detectors) that, for example, capture depth or range information for the user's environment.
  • the world sensors may include one or more “head pose” sensors (e.g., IR or RGB cameras) that may capture information about the user's position, orientation, and motion in the environment; this information may, for example, be used to augment information collected by an inertial-measurement unit (IMU) of the HMD.
  • the world sensors may include one or more light sensors (e.g., RGB cameras) that capture lighting information (e.g., color, intensity, and direction) in the user's environment that may, for example, be used in rendering lighting effects for virtual content in the virtual view.
  • the mixed reality system may include user-facing sensors (also referred to as user sensors), for example located on external and internal surfaces of a mixed reality HMD, that collect information about the user (e.g., the user's expressions, eye movement, etc.).
  • the user sensors may include one or more eye tracking sensors (e.g., IR cameras with IR illumination, or visible light cameras) that track position and movement of the user's eyes.
  • the eye tracking sensors may also be used for other purposes, for example iris identification.
  • the user sensors may include one or more hand sensors (e.g., IR cameras with IR illumination) that track position, movement, and gestures of the user's hands, fingers, and/or arms.
  • the user sensors may include one or more eyebrow sensors (e.g., IR cameras with IR illumination) that track expressions of the user's eyebrows/forehead.
  • the user sensors may include one or more lower jaw tracking sensors (e.g., IR cameras with IR illumination) that track expressions of the user's mouth/jaw.
  • FIG. 1 illustrates a mixed reality system 10 , according to at least some embodiments.
  • a mixed reality system 10 may include a HMD 100 such as a headset, helmet, goggles, or glasses that may be worn by a user 190 .
  • virtual content 110 may be displayed to the user 190 in a 3D virtual view 102 via the HMD 100 ; different virtual objects may be displayed at different depths in the virtual space 102 .
  • the virtual content 110 may be overlaid on or composited in a view of the user 190 's environment with respect to the user's current line of sight that is provided by the HMD 100 .
  • HMD 100 may implement any of various types of virtual reality projection technologies.
  • HMD 100 may be a near-eye VR system that projects left and right images on screens in front of the user 190 's eyes that are viewed by a subject, such as DLP (digital light processing), LCD (liquid crystal display) and LCoS (liquid crystal on silicon) technology VR systems.
  • HMD 100 may be a direct retinal projector system that scans left and right images, pixel by pixel, to the subject's eyes. To scan the images, left and right projectors generate beams that are directed to left and right reflective components (e.g., ellipsoid mirrors) located in front of the user 190 's eyes; the reflective components reflect the beams to the user's eyes.
  • To create a three-dimensional (3D) effect, virtual content 110 at different depths or distances in the 3D virtual view 102 is shifted left or right in the two images as a function of the triangulation of distance, with nearer objects shifted more than more distant objects.
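
The left/right shift described above is ordinary stereo triangulation: for a fixed baseline (roughly the interocular distance) and focal length, disparity is inversely proportional to depth, so nearer virtual objects are shifted more. A minimal sketch, with assumed baseline and focal-length values:

```python
def stereo_disparity_px(depth_m: float,
                        baseline_m: float = 0.063,   # assumed interocular distance (meters)
                        focal_px: float = 1400.0) -> float:
    """Horizontal shift in pixels between the left and right images for a point
    at the given depth; nearer points produce larger shifts."""
    return focal_px * baseline_m / depth_m

def left_right_x(x_center_px: float, depth_m: float) -> tuple:
    """Place one virtual point in the left and right eye images around a common
    screen position, shifting each image by half the total disparity."""
    d = stereo_disparity_px(depth_m)
    return x_center_px + d / 2.0, x_center_px - d / 2.0

# A near object (0.5 m) is shifted roughly ten times more than a distant one (5 m).
print(round(stereo_disparity_px(0.5), 1))   # 176.4
print(round(stereo_disparity_px(5.0), 1))   # 17.6
```
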
  • the HMD 100 may include world sensors 140 that collect information about the user 190 's environment (video, depth information, lighting information, etc.), and user sensors 150 that collect information about the user 190 (e.g., the user's expressions, eye movement, hand gestures, etc.).
  • the sensors 140 and 150 may provide the collected information to a controller of the mixed reality system 10 .
  • the controller may render frames for display by a projector component of the HMD 100 that include virtual content based at least in part on the various information obtained from the sensors 140 and 150 .
  • Example sensors 140 and 150 are shown in FIGS. 2A through 2C .
  • the mixed reality system 10 may include one or more other components.
  • the system may include a cursor control device (e.g., mouse) for moving a virtual cursor in the 3D virtual view 102 to interact with virtual content 110 .
  • the system 10 may include a computing device coupled to the HMD 100 via a wired or wireless (e.g., Bluetooth) connection that implements at least some of the functionality of the HMD 100 , for example rendering images and image content to be displayed in the 3D virtual view 102 by the HMD 100 .
  • FIGS. 2A through 2C illustrate world-facing and user-facing sensors of an example HMD 200 , according to at least some embodiments.
  • FIG. 2A shows a side view of an example HMD 200 with world and user sensors 210 - 217 , according to some embodiments.
  • FIG. 2B shows a front (world-facing) view of an example HMD 200 with world and user sensors 210 - 217 , according to some embodiments.
  • FIG. 2C shows a rear (user-facing) view of an example HMD 200 with world and user sensors 210 - 217 , according to some embodiments.
  • HMD 200 as illustrated in FIGS. 2A through 2C is given by way of example, and is not intended to be limiting.
  • the shape, size, and other features of a HMD may differ, and the locations, numbers, types, and other features of the world and user sensors may vary.
  • HMD 200 may be worn on a user 290 's head so that the projection system displays 202 (e.g. screens and optics of a near-eye VR system, or reflective components (e.g., ellipsoid mirrors) of a direct retinal projector system) are disposed in front of the user 290 's eyes 292 .
  • a HMD 200 may include world sensors 210 - 213 that collect information about the user 290 's environment (video, depth information, lighting information, etc.), and user sensors 214 - 217 that collect information about the user 290 (e.g., the user's expressions, eye movement, hand gestures, etc.).
  • the sensors 210 - 217 may provide the collected information to a controller (not shown) of the mixed reality system.
  • the controller may be implemented in the HMD 200 , or alternatively may be implemented at least in part by an external device (e.g., a computing system) that is communicatively coupled to HMD 200 via a wired or wireless interface.
  • the controller may include one or more of various types of processors, image signal processors (ISPs), graphics processing units (GPUs), coder/decoders (codecs), and/or other components for processing and rendering video and/or images.
  • the controller may render frames (each frame including a left and right image) that include virtual content based at least in part on the various inputs obtained from the sensors 210 - 217 , and may provide the frames to the projection system of the HMD 200 for display to the left and right displays 202 .
  • FIG. 4 further illustrates components of a HMD and mixed reality system, according to some embodiments.
  • World sensors 210 - 213 may, for example, be located on external surfaces of a HMD 200 , and may collect various information about the user's environment.
  • the information collected by the world sensors may be used to provide the user with a virtual view of their real environment.
  • the world sensors may be used to provide depth information for objects in the real environment.
  • the world sensors may be used to provide orientation and motion information for the user in the real environment.
  • the world sensors may be used to collect color and lighting information in the real environment.
  • the world sensors may include one or more “video see through” cameras 210 (e.g., RGB (visible light) video cameras) that capture high-quality video of the user's environment that may be used to provide the user with a virtual view of their real environment.
  • video streams captured by cameras 210 A and 210 B may be processed by the controller of the HMD 200 to render frames including virtual content, and the rendered frames may be provided to the projection system of the device for display on respective displays 202 A and 202 B.
  • At least some video frames captured by cameras 210 A and 210 B may go directly to the projection system of the device for display on respective displays 202 A and 202 B; the controller may also receive and process the video frames to composite virtual content into the frames that are then provided to the projection system for display.
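
A simple way to picture the compositing path described above is per-pixel alpha blending of rendered virtual content over the see-through camera frame. The sketch below assumes the controller has already rendered the virtual content and its coverage mask; it is only an illustration, not the patent's actual pipeline:

```python
import numpy as np

def composite_virtual_content(camera_frame: np.ndarray,
                              virtual_rgb: np.ndarray,
                              virtual_alpha: np.ndarray) -> np.ndarray:
    """Blend rendered virtual content over a see-through camera frame.
    camera_frame, virtual_rgb: HxWx3 uint8 images; virtual_alpha: HxW floats in
    [0, 1], where 0 shows only the camera image and 1 shows only virtual content."""
    a = virtual_alpha[..., None].astype(np.float32)
    blended = virtual_rgb.astype(np.float32) * a + camera_frame.astype(np.float32) * (1.0 - a)
    return blended.astype(np.uint8)

# Example with tiny synthetic frames.
cam = np.zeros((2, 2, 3), dtype=np.uint8)           # black camera frame
virt = np.full((2, 2, 3), 255, dtype=np.uint8)      # white virtual content
alpha = np.array([[0.0, 1.0], [0.5, 0.25]])
print(composite_virtual_content(cam, virt, alpha)[..., 0])   # [[  0 255] [127  63]]
```
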
  • In the example HMD 200 of FIGS. 2A through 2C, there may be two video see through cameras 210A and 210B located on a front surface of the HMD 200 at positions that are substantially in front of each of the user 290's eyes 292A and 292B.
  • more or fewer cameras 210 may be used in a HMD 200 to capture video of the user 290 's environment, and cameras 210 may be positioned at other locations.
  • video see through cameras 210 may include high quality, high resolution RGB video cameras, for example 10 megapixel (e.g., 3072×3072 pixel count) cameras with a frame rate of 60 frames per second (FPS) or greater, horizontal field of view (HFOV) of greater than 90 degrees, and with a working distance of 0.1 meters (m) to infinity.
  • the world sensors may include one or more world mapping sensors 211 (e.g., infrared (IR) cameras with an IR illumination source, or Light Detection and Ranging (LIDAR) emitters and receivers/detectors) that, for example, capture depth or range information for objects and surfaces in the user's environment.
  • the range information may, for example, be used in positioning virtual content composited into images of the real environment at correct depths.
  • the range information may be used in adjusting the depth of real objects in the environment when displayed; for example, nearby objects may be re-rendered to be smaller in the display to help the user in avoiding the objects when moving about in the environment.
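
One concrete use of the captured range data is occlusion: comparing the depth of virtual content against the measured depth of the real scene tells the renderer which virtual pixels should be hidden behind nearer real objects. A hedged sketch, assuming the two depth maps are aligned to the same view and expressed in meters:

```python
import numpy as np

def occlusion_mask(real_depth_m: np.ndarray, virtual_depth_m: np.ndarray) -> np.ndarray:
    """Per-pixel mask that is True where the virtual content lies in front of the
    measured real surface (and so should be drawn) and False where a nearer real
    object should hide it.  Both inputs are HxW depth maps in meters."""
    return virtual_depth_m < real_depth_m
```

The same per-pixel comparison could feed the alpha mask used in the compositing sketch shown earlier.
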
  • a world mapping sensor 211 may include an IR light source and IR camera, for example a 1 megapixel (e.g., 1000×1000 pixel count) camera with a frame rate of 60 frames per second (FPS) or greater, HFOV of 90 degrees or greater, and with a working distance of 0.1 m to 1.5 m.
  • the world sensors may include one or more “head pose” sensors 212 (e.g., IR or RGB cameras) that may capture information about the position, orientation, and/or motion of the user and/or the user's head in the environment.
  • the information collected by sensors 212 may, for example, be used to augment information collected by an inertial-measurement unit (IMU) of the HMD 200 .
  • the augmented position, orientation, and/or motion information may be used in determining how to render and display virtual views of the user's environment and virtual content within the views. For example, different views of the environment may be rendered based at least in part on the position or orientation of the user's head, whether the user is currently walking through the environment, and so on.
  • the augmented position, orientation, and/or motion information may be used to composite virtual content into the scene in a fixed position relative to the background view of the user's environment.
  • head pose sensors 212 may include RGB or IR cameras, for example 400×400 pixel count cameras, with a frame rate of 120 frames per second (FPS) or greater, wide field of view (FOV), and with a working distance of 1 m to infinity.
  • the sensors 212 may include wide FOV lenses, and the two sensors 212 A and 212 B may look in different directions.
  • the sensors 212 may provide low latency monochrome imaging for tracking head position, and may be integrated with an IMU of the HMD 200 to augment position and movement information captured by the IMU.
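
Fusing fast but drift-prone IMU data with the slower, drift-free camera-based head pose is commonly done with a complementary filter. The sketch below shows one filter step on Euler angles purely for illustration; a real implementation would typically use quaternions, and the patent does not specify a fusion method:

```python
def fuse_head_pose(imu_angles, gyro_rates_rad_s, camera_angles, dt_s, alpha=0.98):
    """One complementary-filter step: dead-reckon from the fast gyro rates, then
    pull the result toward the slower, drift-free camera-based head pose.
    Angles are (yaw, pitch, roll) in radians; angle wrap-around and quaternion
    math are ignored for brevity."""
    fused = []
    for angle, rate, cam in zip(imu_angles, gyro_rates_rad_s, camera_angles):
        predicted = angle + rate * dt_s                        # integrate the IMU
        fused.append(alpha * predicted + (1.0 - alpha) * cam)  # correct drift toward camera pose
    return tuple(fused)

# Example: the gyro says the head keeps turning; the camera pose gently corrects drift.
print(fuse_head_pose((0.10, 0.0, 0.0), (0.5, 0.0, 0.0), (0.09, 0.0, 0.0), dt_s=0.01))
# ~ (0.1047, 0.0, 0.0)
```
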
  • the world sensors may include one or more light sensors 213 (e.g., RGB cameras) that capture lighting information (e.g., direction, color, and intensity) in the user's environment that may, for example, be used in rendering virtual content in the virtual view of the user's environment, for example in determining coloring, lighting, shadow effects, etc. for virtual objects in the virtual view. For example, if a red light source is detected, virtual content rendered into the scene may be illuminated with red light, and more generally virtual objects may be rendered with light of a correct color and intensity from a correct direction and angle.
  • In some embodiments, there may be one light sensor 213 located on a front or top surface of the HMD 200.
  • more than one light sensor 213 may be used, and light sensor 213 may be positioned at other locations.
  • light sensor 213 may include an RGB high dynamic range (HDR) video camera, for example a 500×500 pixel count camera, with a frame rate of 30 FPS, HFOV of 180 degrees or greater, and with a working distance of 1 m to infinity.
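
To make the lighting idea concrete, the sketch below applies an estimated light direction, color, and intensity to a virtual surface with simple Lambertian shading, so a red real-world light tints virtual content red. This is only an illustrative shading model, not the rendering method described in the patent:

```python
import numpy as np

def shade_virtual_surface(albedo_rgb: np.ndarray,
                          surface_normal: np.ndarray,
                          light_direction: np.ndarray,
                          light_color_rgb: np.ndarray,
                          light_intensity: float,
                          ambient: float = 0.1) -> np.ndarray:
    """Lambertian shading of a virtual surface using an estimated real-world light.
    light_direction points from the light toward the scene; albedo and light color
    are RGB triples in [0, 1]."""
    n = surface_normal / np.linalg.norm(surface_normal)
    l = light_direction / np.linalg.norm(light_direction)
    diffuse = max(float(np.dot(n, -l)), 0.0) * light_intensity
    shaded = albedo_rgb * (ambient + diffuse * light_color_rgb)
    return np.clip(shaded, 0.0, 1.0)

# A white virtual surface lit by a red light picks up a red tint: [0.9 0.26 0.26]
print(shade_virtual_surface(np.array([1.0, 1.0, 1.0]),
                            surface_normal=np.array([0.0, 1.0, 0.0]),
                            light_direction=np.array([0.0, -1.0, 0.0]),
                            light_color_rgb=np.array([1.0, 0.2, 0.2]),
                            light_intensity=0.8))
```
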
  • User sensors 214 - 217 may, for example, be located on external and internal surfaces of HMD 200 , and may collect information about the user 290 (e.g., the user's expressions, eye movement, etc.). In some embodiments, the information collected by the user sensors may be used to adjust the collection of, and/or processing of information collected by, the world sensors 210 - 213 of the HMD 200 . In some embodiments, the information collected by the user sensors may be used to adjust the rendering of images to be projected, and/or to adjust the projection of the images by the projection system of the HMD 200 . In some embodiments, the information collected by the user sensors may be used in generating an avatar of the user 290 in the 3D virtual view projected to the user by the HMD 200 . In some embodiments, the information collected by the user sensors may be used in interacting with or manipulating virtual content in the 3D virtual view projected by the HMD 200 .
  • the user sensors may include one or more eye tracking sensors 214 (e.g., IR cameras with an IR illumination source) that may be used to track position and movement of the user's eyes.
  • eye tracking sensors 214 may also be used to track dilation of the user's pupils.
  • the information collected by the eye tracking sensors 214 may be used to adjust the rendering of images to be projected, and/or to adjust the projection of the images by the projection system of the HMD 200 , based on the direction and angle at which the user's eyes are looking.
  • content of the images in a region around the location at which the user's eyes are currently looking may be rendered with more detail and at a higher resolution than content in regions at which the user is not looking, which allows available processing time for image data to be spent on content viewed by the foveal regions of the eyes rather than on content viewed by the peripheral regions of the eyes.
  • content of images in regions at which the user is not looking may be compressed more than content of the region around the point at which the user is currently looking.
  • the information collected by the eye tracking sensors 214 may be used to match direction of the eyes of an avatar of the user 290 to the direction of the user's eyes.
  • brightness of the projected images may be modulated based on the user's pupil dilation as determined by the eye tracking sensors 214 .
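
Foveated rendering of the kind described above can be driven by a per-pixel weight that is high near the tracked gaze point and falls off toward the periphery; the weight can then control render resolution or compression quality. A minimal sketch with assumed falloff parameters:

```python
import numpy as np

def foveation_weight(px_x: np.ndarray, px_y: np.ndarray,
                     gaze_x: float, gaze_y: float,
                     fovea_radius_px: float = 200.0,
                     min_weight: float = 0.25) -> np.ndarray:
    """Per-pixel detail weight: 1.0 near the tracked gaze point, falling off toward
    the periphery but never below min_weight.  The weight can drive render detail
    or compression quality so processing is spent where the fovea is looking."""
    dist = np.hypot(px_x - gaze_x, px_y - gaze_y)
    falloff = np.clip(1.0 - (dist - fovea_radius_px) / (4.0 * fovea_radius_px), 0.0, 1.0)
    return np.maximum(falloff, min_weight)

# Full detail at the gaze point, reduced detail far into the periphery: [1.   0.25]
print(foveation_weight(np.array([960.0, 100.0]), np.array([540.0, 100.0]),
                       gaze_x=960.0, gaze_y=540.0))
```
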
  • In the example HMD 200 of FIGS. 2A through 2C, there may be two eye tracking sensors 214A and 214B located on an inner surface of the HMD 200 at positions such that the sensors 214A and 214B have views of respective ones of the user 290's eyes 292A and 292B.
  • more or fewer eye tracking sensors 214 may be used in a HMD 200 , and sensors 214 may be positioned at other locations.
  • each eye tracking sensor 214 may include an IR light source and IR camera, for example a 400×400 pixel count camera with a frame rate of 120 FPS or greater, HFOV of 70 degrees, and with a working distance of 10 millimeters (mm) to 80 mm.
  • the user sensors may include one or more eyebrow sensors 215 (e.g., IR cameras with IR illumination) that track expressions of the user's eyebrows/forehead.
  • the user sensors may include one or more lower jaw tracking sensors 216 (e.g., IR cameras with IR illumination) that track expressions of the user's mouth/jaw.
  • expressions of the brow, mouth, jaw, and eyes captured by sensors 214 , 215 , and 216 may be used to simulate expressions on an avatar of the user 290 in the virtual space, and/or to selectively render and composite virtual content for viewing by the user based at least in part on the user's reactions to the content projected in the 3D virtual view.
  • each eyebrow sensor 215 may include an IR light source and IR camera, for example a 250×250 pixel count camera with a frame rate of 60 FPS, HFOV of 60 degrees, and with a working distance of approximately 5 mm.
  • images from the two sensors 215 A and 215 B may be combined to form a stereo view of the user's forehead and eyebrows.
  • each lower jaw tracking sensor 216 may include an IR light source and IR camera, for example a 400×400 pixel count camera with a frame rate of 60 FPS, HFOV of 90 degrees, and with a working distance of approximately 30 mm.
  • images from the two sensors 216 A and 216 B may be combined to form a stereo view of the user's lower jaw and mouth.
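
One plausible way to use the eyebrow and jaw expression measurements is to map them onto avatar blendshape weights. The feature names and blendshape names below are illustrative assumptions, not taken from the patent:

```python
def _clamp01(v: float) -> float:
    return max(0.0, min(1.0, v))

def expression_to_blendshapes(brow_raise: float, jaw_open: float, smile: float) -> dict:
    """Map normalized (0..1) expression measurements from the eyebrow and lower jaw
    sensors onto avatar blendshape weights.  Feature and blendshape names are
    illustrative placeholders."""
    return {
        "browInnerUp": _clamp01(brow_raise),
        "jawOpen": _clamp01(jaw_open),
        "mouthSmileLeft": _clamp01(smile),
        "mouthSmileRight": _clamp01(smile),
    }

# Example: raised brows and a slight smile drive the corresponding avatar shapes.
print(expression_to_blendshapes(brow_raise=0.7, jaw_open=0.1, smile=0.4))
```
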
  • the user sensors may include one or more hand sensors 217 (e.g., IR cameras with IR illumination) that track position, movement, and gestures of the user's hands, fingers, and/or arms.
  • detected position, movement, and gestures of the user's hands, fingers, and/or arms may be used to simulate movement of the hands, fingers, and/or arms of an avatar of the user 290 in the virtual space.
  • the user's detected hand and finger gestures may be used to determine interactions of the user with virtual content in the virtual space, including but not limited to gestures that manipulate virtual objects, gestures that interact with virtual user interface elements displayed in the virtual space, etc.
  • In some embodiments, there may be one hand sensor 217 located on a bottom surface of the HMD 200.
  • hand sensor 217 may include an IR light source and IR camera, for example a 500×500 pixel count camera with a frame rate of 120 FPS or greater, HFOV of 90 degrees, and with a working distance of 0.1 m to 1 m.
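
As an example of turning tracked hand data into an interaction, the sketch below detects a simple pinch gesture from thumb and index fingertip positions; a controller could treat the pinch as selecting or grabbing a virtual object. The 2 cm threshold is an assumption:

```python
import numpy as np

def detect_pinch(thumb_tip_m: np.ndarray, index_tip_m: np.ndarray,
                 threshold_m: float = 0.02) -> bool:
    """Report a pinch when the tracked thumb and index fingertips (3D positions in
    meters) come within the threshold distance of each other."""
    return float(np.linalg.norm(thumb_tip_m - index_tip_m)) < threshold_m

print(detect_pinch(np.array([0.00, 0.0, 0.3]), np.array([0.01, 0.0, 0.3])))  # True
print(detect_pinch(np.array([0.00, 0.0, 0.3]), np.array([0.08, 0.0, 0.3])))  # False
```
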
  • FIG. 3 is a high-level flowchart of a method of operation for a mixed reality system as illustrated in FIGS. 1 through 2C , according to at least some embodiments.
  • the mixed reality system may include a HMD such as a headset, helmet, goggles, or glasses that includes a projector mechanism for projecting or displaying frames including left and right images to a user's eyes to thus provide 3D virtual views to the user.
  • the 3D virtual views may include views of the user's environment augmented with virtual content (e.g., virtual objects, virtual tags, etc.).
  • one or more world sensors on the HMD may capture information about the user's environment (e.g., video, depth information, lighting information, etc.), and provide the information as inputs to a controller of the mixed reality system.
  • one or more user sensors on the HMD may capture information about the user (e.g., the user's expressions, eye movement, hand gestures, etc.), and provide the information as inputs to the controller of the mixed reality system.
  • Elements 1002 and 1004 may be performed in parallel and, as indicated by the arrows returning to elements 1002 and 1004, may be performed continuously to provide input to the controller of the mixed reality system as the user uses the mixed reality system.
  • the controller of the mixed reality system may render frames including virtual content based at least in part on the inputs from the world and user sensors.
  • the controller may be integrated in the HMD, or alternatively may be implemented at least in part by a device external to the HMD.
  • the HMD may display the frames generated by the controller to provide a 3D virtual view including the virtual content and a view of the user's environment for viewing by the user.
  • the controller may continue to receive and process inputs from the sensors to render frames for display as long as the user is using the mixed reality system.
  • At least some video frames of the user's real environment that are captured by the world sensors may go directly to the projection system of the device for display to the user; the controller may also receive and process the video frames to composite virtual content into the frames that are then provided to the projection system for display.
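
Putting the FIG. 3 flow together, the skeleton below samples the world and user sensors continuously, lets the controller render frames from those inputs, and hands the frames to the projection system. Every object here is a placeholder standing in for the hardware described in the text, not an actual API:

```python
from types import SimpleNamespace

def mixed_reality_loop(world_sensors, user_sensors, controller, projector, keep_running):
    """Skeleton of the FIG. 3 flow: world and user sensors are sampled continuously
    (elements 1002/1004), the controller renders frames including virtual content
    from those inputs (1006), and the HMD displays the frames to provide the 3D
    virtual view (1008)."""
    while keep_running():
        world_data = world_sensors.capture()   # environment video, depth, lighting, pose
        user_data = user_sensors.capture()     # gaze, expressions, hand gestures
        frame = controller.render(world_data, user_data)   # left + right images
        projector.display(frame)               # presented to the user's eyes

# Tiny stand-ins that run the loop for three frames.
ticks = iter([True, True, True, False])
world = SimpleNamespace(capture=lambda: "world-data")
user = SimpleNamespace(capture=lambda: "user-data")
ctrl = SimpleNamespace(render=lambda w, u: (w, u))
proj = SimpleNamespace(display=lambda frame: print("displayed", frame))
mixed_reality_loop(world, user, ctrl, proj, lambda: next(ticks))
```
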
  • FIG. 4 is a block diagram illustrating components of an example mixed reality system, according to at least some embodiments.
  • a mixed reality system 1900 may include a HMD 2000 such as a headset, helmet, goggles, or glasses.
  • HMD 2000 may implement any of various types of virtual reality projector technologies.
  • the HMD 2000 may include a near-eye VR projector that projects frames including left and right images on screens that are viewed by a user, such as DLP (digital light processing), LCD (liquid crystal display) and LCoS (liquid crystal on silicon) technology projectors.
  • the HMD 2000 may include a direct retinal projector that scans frames including left and right images, pixel by pixel, directly to the user's eyes.
  • HMD 2000 may include a 3D projector 2020 that implements the VR projection technology that generates the 3D virtual view 2002 viewed by the user, for example near-eye VR projection technology or direct retinal projection technology.
  • HMD 2000 may also include a controller 2030 configured to implement functionality of the mixed reality system 1900 as described herein and to generate the frames (each frame including a left and right image) that are projected or scanned by the 3D projector 2020 into the 3D virtual view 2002 .
  • HMD 2000 may also include a memory 2032 configured to store software (code 2034 ) of the mixed reality system that is executable by the controller 2030 , as well as data 2038 that may be used by the mixed reality system 1900 when executing on the controller 2030 .
  • HMD 2000 may also include one or more interfaces 2040 (e.g., a Bluetooth technology interface, USB interface, etc.) configured to communicate with an external device 2100 via a wired or wireless connection.
  • external device 2100 may be or may include any type of computing system or computing device, such as a desktop computer, notebook or laptop computer, pad or tablet device, smartphone, hand-held computing device, game controller, game system, and so on.
  • controller 2030 may be a uniprocessor system including one processor, or a multiprocessor system including several processors (e.g., two, four, eight, or another suitable number). Controller 2030 may include central processing units (CPUs) configured to implement any suitable instruction set architecture, and may be configured to execute instructions defined in that instruction set architecture. For example, in various embodiments controller 2030 may include general-purpose or embedded processors implementing any of a variety of instruction set architectures (ISAs), such as the x86, PowerPC, SPARC, RISC, or MIPS ISAs, or any other suitable ISA. In multiprocessor systems, each of the processors may commonly, but not necessarily, implement the same ISA.
  • Controller 2030 may employ any microarchitecture, including scalar, superscalar, pipelined, superpipelined, out of order, in order, speculative, non-speculative, etc., or combinations thereof. Controller 2030 may include circuitry to implement microcoding techniques. Controller 2030 may include one or more processing cores each configured to execute instructions. Controller 2030 may include one or more levels of caches, which may employ any size and any configuration (set associative, direct mapped, etc.). In some embodiments, controller 2030 may include at least one graphics processing unit (GPU), which may include any suitable graphics processing circuitry. Generally, a GPU may be configured to render objects to be displayed into a frame buffer (e.g., one that includes pixel data for an entire frame).
  • a GPU may include one or more graphics processors that may execute graphics software to perform a part or all of the graphics operation, or hardware acceleration of certain graphics operations.
  • controller 2030 may include one or more other components for processing and rendering video and/or images, for example image signal processors (ISPs), coder/decoders (codecs), etc.
  • Memory 2032 may include any type of memory, such as dynamic random access memory (DRAM), synchronous DRAM (SDRAM), double data rate (DDR, DDR2, DDR3, etc.) SDRAM (including mobile versions of the SDRAMs such as mDDR3, etc., or low power versions of the SDRAMs such as LPDDR2, etc.), RAMBUS DRAM (RDRAM), static RAM (SRAM), etc.
  • one or more memory devices may be coupled onto a circuit board to form memory modules such as single inline memory modules (SIMMs), dual inline memory modules (DIMMs), etc.
  • the devices may be mounted with an integrated circuit implementing system in a chip-on-chip configuration, a package-on-package configuration, or a multi-chip module configuration.
  • the HMD 2000 may include at least one inertial-measurement unit (IMU) 2070 configured to detect position, orientation, and/or motion of the HMD 2000 , and to provide the detected position, orientation, and/or motion data to the controller 2030 of the mixed reality system 1900 .
  • the HMD 2000 may include world sensors 2050 that collect information about the user's environment (video, depth information, lighting information, etc.), and user sensors 2060 that collect information about the user (e.g., the user's expressions, eye movement, hand gestures, etc.).
  • the sensors 2050 and 2060 may provide the collected information to the controller 2030 of the mixed reality system 1900 .
  • Sensors 2050 and 2060 may include, but are not limited to, visible light cameras (e.g., video cameras), infrared (IR) cameras, IR cameras with an IR illumination source, Light Detection and Ranging (LIDAR) emitters and receivers/detectors, and laser-based sensors with laser emitters and receivers/detectors.
  • World and user sensors of an example HMD are shown in FIGS. 2A through 2C .
  • the HMD 2000 may be configured to render and display frames to provide a 3D virtual view 2002 for the user at least in part according to world sensor 2050 and user sensor 2060 inputs.
  • the virtual space 2002 may include renderings of the user's environment, including renderings of real objects 2012 in the user's environment, based on video captured by one or more “video see through” cameras (e.g., RGB (visible light) video cameras) that capture high-quality, high-resolution video of the user's environment in real time for display.
  • the virtual space 2002 may also include virtual content (e.g., virtual objects 2014, virtual tags 2015 for real objects 2012, avatars of the user, etc.) generated by the mixed reality system 1900 and composited with the projected 3D view of the user's real environment.
  • FIG. 3 describes an example method for collecting and processing sensor inputs to generate content in a 3D virtual view 2002 that may be used in a mixed reality system 1900 as illustrated in FIG. 4 , according to some embodiments.
  • the methods described herein may be implemented in software, hardware, or a combination thereof, in different embodiments.
  • the order of the blocks of the methods may be changed, and various elements may be added, reordered, combined, omitted, modified, etc.
  • Various modifications and changes may be made as would be obvious to a person skilled in the art having the benefit of this disclosure.
  • the various embodiments described herein are meant to be illustrative and not limiting. Many variations, modifications, additions, and improvements are possible. Accordingly, plural instances may be provided for components described herein as a single instance. Boundaries between various components, operations and data stores are somewhat arbitrary, and particular operations are illustrated in the context of specific illustrative configurations.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Hardware Design (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Optics & Photonics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Dermatology (AREA)
  • Neurology (AREA)
  • Neurosurgery (AREA)
  • Software Systems (AREA)
  • Computer Graphics (AREA)
  • User Interface Of Digital Computer (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

A mixed reality system that includes a head-mounted display (HMD) that provides 3D virtual views of a user's environment augmented with virtual content. The HMD may include sensors that collect information about the user's environment (e.g., video, depth information, lighting information, etc.), and sensors that collect information about the user (e.g., the user's expressions, eye movement, hand gestures, etc.). The sensors provide the information as inputs to a controller that renders frames including virtual content based at least in part on the inputs from the sensors. The HMD may display the frames generated by the controller to provide a 3D virtual view including the virtual content and a view of the user's environment for viewing by the user.

Description

    BACKGROUND
  • This application claims benefit of priority to U.S. Provisional Application No. 62/398,437, filed Sep. 22, 2016, titled “World And User Sensors For Mixed Reality Systems,” which is hereby incorporated by reference in its entirety.
  • Virtual reality (VR) allows users to experience and/or interact with an immersive artificial environment, such that the user feels as if they were physically in that environment. For example, virtual reality systems may display stereoscopic scenes to users in order to create an illusion of depth, and a computer may adjust the scene content in real-time to provide the illusion of the user moving within the scene. When the user views images through a virtual reality system, the user may thus feel as if they are moving within the scenes from a first-person point of view. Similarly, mixed reality (MR) combines computer generated information (referred to as virtual content) with real world images or a real world view to augment, or add content to, a user's view of the world. The simulated environments of virtual reality and/or the mixed environments of augmented reality may thus be utilized to provide an interactive user experience for multiple applications, such as applications that add virtual content to a real-time view of the viewer's environment, interacting with virtual training environments, gaming, remotely controlling drones or other mechanical systems, viewing digital media content, interacting with the Internet, or the like.
  • SUMMARY
  • Embodiments of a mixed reality system are described that may include a mixed reality device such as a headset, helmet, goggles, or glasses (referred to herein as a head-mounted display (HMD)) that includes a projector mechanism for projecting or displaying frames including left and right images to a user's eyes to thus provide 3D virtual views to the user. The 3D virtual views may include views of the user's environment augmented with virtual content (e.g., virtual objects, virtual tags, etc.). The mixed reality system may include world-facing sensors that collect information about the user's environment (e.g., video, depth information, lighting information, etc.), and user-facing sensors that collect information about the user (e.g., the user's expressions, eye movement, hand gestures, etc.). The sensors provide the information as inputs to a controller of the mixed reality system. The controller may render frames including virtual content based at least in part on the inputs from the world and user sensors. The controller may be integrated in the HMD, or alternatively may be implemented at least in part by a device external to the HMD. The HMD may display the frames generated by the controller to provide a 3D virtual view including the virtual content and a view of the user's environment for viewing by the user.
  • In some embodiments, the sensors may include one or more cameras that capture high-quality views of the user's environment that may be used to provide the user with a virtual view of their real environment. In some embodiments, the sensors may include one or more sensors that capture depth or range information for the user's environment. In some embodiments, the sensors may include one or more sensors that may capture information about the user's position, orientation, and motion in the environment. In some embodiments, the sensors may include one or more cameras that capture lighting information (e.g., direction, color, intensity) in the user's environment that may, for example, be used in rendering (e.g., coloring and/or lighting) content in the virtual view. In some embodiments, the sensors may include one or more sensors that track position and movement of the user's eyes. In some embodiments, the sensors may include one or more sensors that track position, movement, and gestures of the user's hands, fingers, and/or arms. In some embodiments, the sensors may include one or more sensors that track expressions of the user's eyebrows/forehead. In some embodiments, the sensors may include one or more sensors that track expressions of the user's mouth/jaw.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 illustrates a mixed reality system, according to at least some embodiments.
  • FIGS. 2A through 2C illustrate world-facing and user-facing sensors of a head-mounted display (HMD), according to at least some embodiments.
  • FIG. 3 is a flowchart of a method of operation for a mixed reality system as illustrated in FIGS. 1 through 2C, according to at least some embodiments.
  • FIG. 4 illustrates components of a mixed reality system, according to at least some embodiments.
  • This specification includes references to “one embodiment” or “an embodiment.” The appearances of the phrases “in one embodiment” or “in an embodiment” do not necessarily refer to the same embodiment. Particular features, structures, or characteristics may be combined in any suitable manner consistent with this disclosure.
  • “Comprising.” This term is open-ended. As used in the claims, this term does not foreclose additional structure or steps. Consider a claim that recites: “An apparatus comprising one or more processor units . . . ” Such a claim does not foreclose the apparatus from including additional components (e.g., a network interface unit, graphics circuitry, etc.).
  • “Configured To.” Various units, circuits, or other components may be described or claimed as “configured to” perform a task or tasks. In such contexts, “configured to” is used to connote structure by indicating that the units/circuits/components include structure (e.g., circuitry) that performs the task or tasks during operation. As such, the unit/circuit/component can be said to be configured to perform the task even when the specified unit/circuit/component is not currently operational (e.g., is not on). The units/circuits/components used with the “configured to” language include hardware—for example, circuits, memory storing program instructions executable to implement the operation, etc. Reciting that a unit/circuit/component is “configured to” perform one or more tasks is expressly intended not to invoke 35 U.S.C. § 112, paragraph (f), for that unit/circuit/component. Additionally, “configured to” can include generic structure (e.g., generic circuitry) that is manipulated by software or firmware (e.g., an FPGA or a general-purpose processor executing software) to operate in a manner that is capable of performing the task(s) at issue. “Configured to” may also include adapting a manufacturing process (e.g., a semiconductor fabrication facility) to fabricate devices (e.g., integrated circuits) that are adapted to implement or perform one or more tasks.
  • “First,” “Second,” etc. As used herein, these terms are used as labels for nouns that they precede, and do not imply any type of ordering (e.g., spatial, temporal, logical, etc.). For example, a buffer circuit may be described herein as performing write operations for “first” and “second” values. The terms “first” and “second” do not necessarily imply that the first value must be written before the second value.
  • “Based On” or “Dependent On.” As used herein, these terms are used to describe one or more factors that affect a determination. These terms do not foreclose additional factors that may affect a determination. That is, a determination may be solely based on those factors or based, at least in part, on those factors. Consider the phrase “determine A based on B.” While in this case, B is a factor that affects the determination of A, such a phrase does not foreclose the determination of A from also being based on C. In other instances, A may be determined based solely on B.
  • “Or.” When used in the claims, the term “or” is used as an inclusive or and not as an exclusive or. For example, the phrase “at least one of x, y, or z” means any one of x, y, and z, as well as any combination thereof.
  • DETAILED DESCRIPTION
  • Various embodiments of methods and apparatus for generating mixed reality views for users are described. Embodiments of a mixed reality system are described that may include a mixed reality device such as a headset, helmet, goggles, or glasses (referred to herein as a head-mounted display (HMD)) that includes a projector mechanism for projecting or displaying frames including left and right images to a user's eyes to thus provide 3D virtual views to the user. The 3D virtual views may include views of the user's environment augmented with virtual content (e.g., virtual objects, virtual tags, etc.). The mixed reality system may also include world-facing sensors that collect information about the user's environment (e.g., video, depth information, lighting information, etc.), and user-facing sensors that collect information about the user (e.g., the user's expressions, eye movement, hand gestures, etc.). The sensors may provide the collected information to a controller of the mixed reality system. The controller may render frames for display by the projector that include virtual content based at least in part on the various information obtained from the sensors.
  • As noted above, the mixed reality system may include world-facing sensors (also referred to as world sensors), for example located on external surfaces of a mixed reality HMD, that collect various information about the user's environment. In some embodiments, the world sensors may include one or more “video see through” cameras (e.g., RGB (visible light) cameras) that capture high-quality views of the user's environment that may be used to provide the user with a virtual view of their real environment. In some embodiments, the world sensors may include one or more world mapping sensors (e.g., infrared (IR) cameras with an IR illumination source, or Light Detection and Ranging (LIDAR) emitters and receivers/detectors) that, for example, capture depth or range information for the user's environment. In some embodiments, the world sensors may include one or more “head pose” sensors (e.g., IR or RGB cameras) that may capture information about the user's position, orientation, and motion in the environment; this information may, for example, be used to augment information collected by an inertial-measurement unit (IMU) of the HMD. In some embodiments, the world sensors may include one or more light sensors (e.g., RGB cameras) that capture lighting information (e.g., color, intensity, and direction) in the user's environment that may, for example, be used in rendering lighting effects for virtual content in the virtual view.
  • As noted above, the mixed reality system may include user-facing sensors (also referred to as user sensors), for example located on external and internal surfaces of a mixed reality HMD, that collect information about the user (e.g., the user's expressions, eye movement, etc.). In some embodiments, the user sensors may include one or more eye tracking sensors (e.g., IR cameras with IR illumination, or visible light cameras) that track position and movement of the user's eyes. In the case of visible light (RGB) cameras, the eye tracking sensors may also be used for other purposes, for example iris identification. In some embodiments, the user sensors may include one or more hand sensors (e.g., IR cameras with IR illumination) that track position, movement, and gestures of the user's hands, fingers, and/or arms. In some embodiments, the user sensors may include one or more eyebrow sensors (e.g., IR cameras with IR illumination) that track expressions of the user's eyebrows/forehead. In some embodiments, the user sensors may include one or more lower jaw tracking sensors (e.g., IR cameras with IR illumination) that track expressions of the user's mouth/jaw.
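• The following is an illustrative sketch, not part of the disclosed embodiments, of how the per-frame world-facing and user-facing sensor readings described above might be bundled and handed to a controller; the class names, fields, and the choice of Python are assumptions made only for clarity.

    # Hypothetical per-frame sensor bundles for a mixed reality controller.
    from dataclasses import dataclass, field
    from typing import Optional
    import numpy as np

    @dataclass
    class WorldSensorFrame:
        rgb_left: np.ndarray                 # "video see through" camera, left eye
        rgb_right: np.ndarray                # "video see through" camera, right eye
        depth_map: Optional[np.ndarray]      # world mapping sensor (IR/LIDAR) range data
        head_pose_images: list = field(default_factory=list)   # head pose camera frames
        ambient_light: Optional[np.ndarray] = None              # light sensor estimate

    @dataclass
    class UserSensorFrame:
        gaze_left: np.ndarray                # eye tracking: normalized gaze direction
        gaze_right: np.ndarray
        hand_frames: list = field(default_factory=list)         # hand sensor images
        eyebrow_frame: Optional[np.ndarray] = None               # eyebrow/forehead camera
        jaw_frame: Optional[np.ndarray] = None                   # lower jaw/mouth camera

    def controller_step(world: WorldSensorFrame, user: UserSensorFrame) -> dict:
        """Stand-in for the controller: a real system would render left/right
        frames with virtual content composited from these inputs."""
        mean_gaze = (user.gaze_left + user.gaze_right) / 2.0
        return {"has_depth": world.depth_map is not None, "mean_gaze": mean_gaze}

    world = WorldSensorFrame(rgb_left=np.zeros((2, 2, 3)), rgb_right=np.zeros((2, 2, 3)),
                             depth_map=np.ones((2, 2)))
    user = UserSensorFrame(gaze_left=np.array([0.0, 0.0, 1.0]),
                           gaze_right=np.array([0.0, 0.0, 1.0]))
    print(controller_step(world, user))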
  • FIG. 1 illustrates a mixed reality system 10, according to at least some embodiments. In some embodiments, a mixed reality system 10 may include a HMD 100 such as a headset, helmet, goggles, or glasses that may be worn by a user 190. In some embodiments, virtual content 110 may be displayed to the user 190 in a 3D virtual view 102 via the HMD 100; different virtual objects may be displayed at different depths in the virtual space 102. In some embodiments, the virtual content 110 may be overlaid on or composited in a view of the user 190's environment with respect to the user's current line of sight that is provided by the HMD 100.
• HMD 100 may implement any of various types of virtual reality projection technologies. For example, HMD 100 may be a near-eye VR system, such as a DLP (digital light processing), LCD (liquid crystal display), or LCoS (liquid crystal on silicon) technology VR system, that projects left and right images on screens placed in front of the user 190's eyes and viewed by the user. As another example, HMD 100 may be a direct retinal projector system that scans left and right images, pixel by pixel, to the user's eyes. To scan the images, left and right projectors generate beams that are directed to left and right reflective components (e.g., ellipsoid mirrors) located in front of the user 190's eyes; the reflective components reflect the beams to the user's eyes. To create a three-dimensional (3D) effect, virtual content 110 at different depths or distances in the 3D virtual view 102 is shifted left or right in the two images as a function of the triangulation of distance, with nearer objects shifted more than more distant objects.
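• As a hedged illustration of the left/right image shift just described (not language from the disclosure), the sketch below computes horizontal disparity for a simple pinhole stereo model; the baseline and focal-length values are assumptions chosen only to show that nearer content shifts more between the two images than distant content.

    # Disparity (pixels) = focal length (pixels) * interocular baseline (m) / depth (m).
    def disparity_pixels(depth_m: float, baseline_m: float = 0.063,
                         focal_px: float = 1200.0) -> float:
        return focal_px * baseline_m / depth_m

    for d in (0.5, 1.0, 2.0, 10.0):
        print(f"depth {d:>5.1f} m -> left/right shift {disparity_pixels(d):6.1f} px")
    # Nearer objects (0.5 m) shift far more between the two images than distant ones.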
  • The HMD 100 may include world sensors 140 that collect information about the user 190's environment (video, depth information, lighting information, etc.), and user sensors 150 that collect information about the user 190 (e.g., the user's expressions, eye movement, hand gestures, etc.). The sensors 140 and 150 may provide the collected information to a controller of the mixed reality system 10. The controller may render frames for display by a projector component of the HMD 100 that include virtual content based at least in part on the various information obtained from the sensors 140 and 150. Example sensors 140 and 150 are shown in FIGS. 2A through 2C.
  • While not shown in FIG. 1, in some embodiments the mixed reality system 10 may include one or more other components. For example, the system may include a cursor control device (e.g., mouse) for moving a virtual cursor in the 3D virtual view 102 to interact with virtual content 110. As another example, in some embodiments, the system 10 may include a computing device coupled to the HMD 100 via a wired or wireless (e.g., Bluetooth) connection that implements at least some of the functionality of the HMD 100, for example rendering images and image content to be displayed in the 3D virtual view 102 by the HMD 100.
  • FIGS. 2A through 2C illustrate world-facing and user-facing sensors of an example HMD 200, according to at least some embodiments. FIG. 2A shows a side view of an example HMD 200 with world and user sensors 210-217, according to some embodiments. FIG. 2B shows a front (world-facing) view of an example HMD 200 with world and user sensors 210-217, according to some embodiments. FIG. 2C shows a rear (user-facing) view of an example HMD 200 with world and user sensors 210-217, according to some embodiments. Note that HMD 200 as illustrated in FIGS. 2A through 2C is given by way of example, and is not intended to be limiting. In various embodiments, the shape, size, and other features of a HMD may differ, and the locations, numbers, types, and other features of the world and user sensors may vary.
• As shown in FIGS. 2A through 2C, HMD 200 may be worn on a user 290's head so that the projection system displays 202 (e.g., screens and optics of a near-eye VR system, or reflective components (e.g., ellipsoid mirrors) of a direct retinal projector system) are disposed in front of the user 290's eyes 292. In some embodiments, a HMD 200 may include world sensors 210-213 that collect information about the user 290's environment (video, depth information, lighting information, etc.), and user sensors 214-217 that collect information about the user 290 (e.g., the user's expressions, eye movement, hand gestures, etc.). The sensors 210-217 may provide the collected information to a controller (not shown) of the mixed reality system. The controller may be implemented in the HMD 200, or alternatively may be implemented at least in part by an external device (e.g., a computing system) that is communicatively coupled to HMD 200 via a wired or wireless interface. The controller may include one or more of various types of processors, image signal processors (ISPs), graphics processing units (GPUs), coder/decoders (codecs), and/or other components for processing and rendering video and/or images. The controller may render frames (each frame including a left and right image) that include virtual content based at least in part on the various inputs obtained from the sensors 210-217, and may provide the frames to the projection system of the HMD 200 for display on the left and right displays 202. FIG. 4 further illustrates components of a HMD and mixed reality system, according to some embodiments.
  • World sensors 210-213 may, for example, be located on external surfaces of a HMD 200, and may collect various information about the user's environment. In some embodiments, the information collected by the world sensors may be used to provide the user with a virtual view of their real environment. In some embodiments, the world sensors may be used to provide depth information for objects in the real environment. In some embodiments, the world sensors may be used to provide orientation and motion information for the user in the real environment. In some embodiments, the world sensors may be used to collect color and lighting information in the real environment.
  • In some embodiments, the world sensors may include one or more “video see through” cameras 210 (e.g., RGB (visible light) video cameras) that capture high-quality video of the user's environment that may be used to provide the user with a virtual view of their real environment. In some embodiments, video streams captured by cameras 210A and 210B may be processed by the controller of the HMD 200 to render frames including virtual content, and the rendered frames may be provided to the projection system of the device for display on respective displays 202A and 202B. However, note that in some embodiments, to reduce latency for the virtual view of the world that is displayed to the user 290, at least some video frames captured by cameras 210A and 210B may go directly to the projection system of the device for display on respective displays 202A and 202B; the controller may also receive and process the video frames to composite virtual content into the frames that are then provided to the projection system for display.
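• A minimal sketch of the two display paths just described, under the assumption of a simple frame-id freshness test that is not specified in the disclosure: the display uses the controller's composited frame when a recent one exists, and otherwise falls back to the raw see-through camera frame to keep latency low.

    from collections import deque

    class PassthroughPipeline:
        def __init__(self):
            self.composited = deque(maxlen=1)   # most recent frame with virtual content

        def controller_done(self, frame_id: int, frame: str) -> None:
            """Called asynchronously when the controller finishes compositing a frame."""
            self.composited.append((frame_id, frame))

        def frame_for_display(self, frame_id: int, camera_frame: str) -> str:
            """Prefer a fresh composited frame; otherwise pass the camera frame through."""
            if self.composited and frame_id - self.composited[-1][0] <= 1:
                return self.composited[-1][1]
            return camera_frame                  # direct camera-to-display path

    pipe = PassthroughPipeline()
    print(pipe.frame_for_display(0, "camera#0"))   # no composite yet -> raw passthrough
    pipe.controller_done(0, "camera#0+virtual")
    print(pipe.frame_for_display(1, "camera#1"))   # recent composite -> composited frame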
  • As shown in the non-limiting example HMD 200 of FIGS. 2A through 2C, in some embodiments there may be two video see through cameras 210A and 210B located on a front surface of the HMD 200 at positions that are substantially in front of each of the user 290's eyes 292A and 292B. However, in various embodiments, more or fewer cameras 210 may be used in a HMD 200 to capture video of the user 290's environment, and cameras 210 may be positioned at other locations. In an example non-limiting embodiment, video see through cameras 210 may include high quality, high resolution RGB video cameras, for example 10 megapixel (e.g., 3072×3072 pixel count) cameras with a frame rate of 60 frames per second (FPS) or greater, horizontal field of view (HFOV) of greater than 90 degrees, and with a working distance of 0.1 meters (m) to infinity.
  • In some embodiments, the world sensors may include one or more world mapping sensors 211 (e.g., infrared (IR) cameras with an IR illumination source, or Light Detection and Ranging (LIDAR) emitters and receivers/detectors) that, for example, capture depth or range information for objects and surfaces in the user's environment. The range information may, for example, be used in positioning virtual content composited into images of the real environment at correct depths. In some embodiments, the range information may be used in adjusting the depth of real objects in the environment when displayed; for example, nearby objects may be re-rendered to be smaller in the display to help the user in avoiding the objects when moving about in the environment.
  • As shown in the non-limiting example HMD 200 of FIGS. 2A through 2C, in some embodiments there may be one world mapping sensor 211 located on a front surface of the HMD 200. However, in various embodiments, more than one world mapping sensor 211 may be used, and world mapping sensor 211 may be positioned at other locations. In an example non-limiting embodiment, a world mapping sensor 211 may include an IR light source and IR camera, for example a 1 megapixel (e.g., 1000×1000 pixel count) camera with a frame rate of 60 frames per second (FPS) or greater, HFOV of 90 degrees or greater, and with a working distance of 0.1 m to 1.5 m.
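• One assumed use of the range data described above (an illustration, not a method recited in the disclosure) is depth-based occlusion during compositing: a virtual pixel is shown only where it is nearer to the viewer than the real surface at that pixel.

    import numpy as np

    def composite_with_depth(camera_rgb: np.ndarray, real_depth: np.ndarray,
                             virtual_rgb: np.ndarray, virtual_depth: np.ndarray,
                             virtual_mask: np.ndarray) -> np.ndarray:
        """Overlay virtual pixels only where they are nearer than the real scene."""
        out = camera_rgb.copy()
        visible = virtual_mask & (virtual_depth < real_depth)
        out[visible] = virtual_rgb[visible]
        return out

    h, w = 4, 4
    camera = np.zeros((h, w, 3), dtype=np.uint8)            # passthrough image
    real_d = np.full((h, w), 2.0)                            # real surface 2 m away
    virt = np.full((h, w, 3), 255, dtype=np.uint8)           # white virtual object
    virt_d = np.full((h, w), 1.0)                            # virtual object at 1 m
    mask = np.zeros((h, w), dtype=bool)
    mask[1:3, 1:3] = True                                    # object covers a 2x2 patch
    print(composite_with_depth(camera, real_d, virt, virt_d, mask)[..., 0])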
  • In some embodiments, the world sensors may include one or more “head pose” sensors 212 (e.g., IR or RGB cameras) that may capture information about the position, orientation, and/or motion of the user and/or the user's head in the environment. The information collected by sensors 212 may, for example, be used to augment information collected by an inertial-measurement unit (IMU) of the HMD 200. The augmented position, orientation, and/or motion information may be used in determining how to render and display virtual views of the user's environment and virtual content within the views. For example, different views of the environment may be rendered based at least in part on the position or orientation of the user's head, whether the user is currently walking through the environment, and so on. As another example, the augmented position, orientation, and/or motion information may be used to composite virtual content into the scene in a fixed position relative to the background view of the user's environment.
  • As shown in the non-limiting example HMD 200 of FIGS. 2A through 2C, in some embodiments there may be two head pose sensors 212A and 212B located on a front or top surface of the HMD 200. However, in various embodiments, more or fewer sensors 212 may be used, and sensors 212 may be positioned at other locations. In an example non-limiting embodiment, head pose sensors 212 may include RGB or IR cameras, for example 400×400 pixel count cameras, with a frame rate of 120 frames per second (FPS) or greater, wide field of view (FOV), and with a working distance of 1 m to infinity. The sensors 212 may include wide FOV lenses, and the two sensors 212A and 212B may look in different directions. The sensors 212 may provide low latency monochrome imaging for tracking head position, and may be integrated with an IMU of the HMD 200 to augment position and movement information captured by the IMU.
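• The disclosure does not specify a fusion algorithm; as a hedged example of augmenting IMU data with head pose sensor data, the sketch below blends high-rate gyro integration with occasional camera-derived yaw corrections using a complementary filter (yaw only, for brevity).

    from typing import Optional

    def fuse_yaw(prev_yaw: float, gyro_rate: float, dt: float,
                 camera_yaw: Optional[float], alpha: float = 0.98) -> float:
        """Integrate the gyro; when a camera pose estimate arrives, blend toward it."""
        yaw = prev_yaw + gyro_rate * dt          # dead reckoning from the IMU
        if camera_yaw is not None:               # occasional drift correction
            yaw = alpha * yaw + (1.0 - alpha) * camera_yaw
        return yaw

    yaw = 0.0
    for step in range(5):
        cam = 0.005 if step == 4 else None       # camera update only on the last step
        yaw = fuse_yaw(yaw, gyro_rate=0.2, dt=0.01, camera_yaw=cam)
    print(round(yaw, 5))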
  • In some embodiments, the world sensors may include one or more light sensors 213 (e.g., RGB cameras) that capture lighting information (e.g., direction, color, and intensity) in the user's environment that may, for example, be used in rendering virtual content in the virtual view of the user's environment, for example in determining coloring, lighting, shadow effects, etc. for virtual objects in the virtual view. For example, if a red light source is detected, virtual content rendered into the scene may be illuminated with red light, and more generally virtual objects may be rendered with light of a correct color and intensity from a correct direction and angle.
  • As shown in the non-limiting example HMD 200 of FIGS. 2A through 2C, in some embodiments there may be one light sensor 213 located on a front or top surface of the HMD 200. However, in various embodiments, more than one light sensor 213 may be used, and light sensor 213 may be positioned at other locations. In an example non-limiting embodiment, light sensor 213 may include an RGB high dynamic range (HDR) video camera, for example a 500×500 pixel count camera, with a frame rate of 30 FPS, HFOV of 180 degrees or greater, and with a working distance of 1 m to infinity.
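• As a hedged illustration of how the captured lighting information might be applied (the disclosure does not prescribe a shading model), the sketch below lights a virtual surface with simple Lambertian shading using an estimated light direction, color, and intensity.

    import numpy as np

    def shade(albedo: np.ndarray, normal: np.ndarray, light_dir: np.ndarray,
              light_color: np.ndarray, intensity: float) -> np.ndarray:
        """Return an RGB color for a virtual surface lit by the estimated environment light."""
        n = normal / np.linalg.norm(normal)
        l = light_dir / np.linalg.norm(light_dir)
        lambert = max(float(np.dot(n, l)), 0.0)
        return np.clip(albedo * light_color * intensity * lambert, 0.0, 1.0)

    # A red-tinted light from above and to the left, as in the red-light example above.
    print(shade(albedo=np.array([0.8, 0.8, 0.8]),
                normal=np.array([0.0, 1.0, 0.0]),
                light_dir=np.array([-0.5, 1.0, 0.0]),
                light_color=np.array([1.0, 0.3, 0.3]),
                intensity=1.2))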
  • User sensors 214-217 may, for example, be located on external and internal surfaces of HMD 200, and may collect information about the user 290 (e.g., the user's expressions, eye movement, etc.). In some embodiments, the information collected by the user sensors may be used to adjust the collection of, and/or processing of information collected by, the world sensors 210-213 of the HMD 200. In some embodiments, the information collected by the user sensors may be used to adjust the rendering of images to be projected, and/or to adjust the projection of the images by the projection system of the HMD 200. In some embodiments, the information collected by the user sensors may be used in generating an avatar of the user 290 in the 3D virtual view projected to the user by the HMD 200. In some embodiments, the information collected by the user sensors may be used in interacting with or manipulating virtual content in the 3D virtual view projected by the HMD 200.
  • In some embodiments, the user sensors may include one or more eye tracking sensors 214 (e.g., IR cameras with an IR illumination source) that may be used to track position and movement of the user's eyes. In some embodiments, eye tracking sensors 214 may also be used to track dilation of the user's pupils. As shown in FIGS. 2A and 2B, in some embodiments, there may be two eye tracking sensors 214A and 214B, with each eye tracking sensor tracking a respective eye 292A or 292B. In some embodiments, the information collected by the eye tracking sensors 214 may be used to adjust the rendering of images to be projected, and/or to adjust the projection of the images by the projection system of the HMD 200, based on the direction and angle at which the user's eyes are looking. For example, in some embodiments, content of the images in a region around the location at which the user's eyes are currently looking may be rendered with more detail and at a higher resolution than content in regions at which the user is not looking, which allows available processing time for image data to be spent on content viewed by the foveal regions of the eyes rather than on content viewed by the peripheral regions of the eyes. Similarly, content of images in regions at which the user is not looking may be compressed more than content of the region around the point at which the user is currently looking. In some embodiments, the information collected by the eye tracking sensors 214 may be used to match direction of the eyes of an avatar of the user 290 to the direction of the user's eyes. In some embodiments, brightness of the projected images may be modulated based on the user's pupil dilation as determined by the eye tracking sensors 214.
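• The following sketch illustrates the gaze-dependent detail allocation just described with a per-tile resolution-scale map: full detail near the tracked gaze point, reduced detail in the periphery. The tile grid, radius, and falloff are assumptions, not parameters from the disclosure.

    import numpy as np

    def foveation_map(width_tiles: int, height_tiles: int, gaze_tile: tuple,
                      full_res_radius: float = 2.0) -> np.ndarray:
        """1.0 = render tile at full resolution; smaller values = coarser rendering."""
        ys, xs = np.mgrid[0:height_tiles, 0:width_tiles]
        dist = np.hypot(xs - gaze_tile[0], ys - gaze_tile[1])
        return np.clip(full_res_radius / np.maximum(dist, full_res_radius), 0.25, 1.0)

    # Gaze near the right side of an 8x5 tile grid: detail falls off with distance.
    print(np.round(foveation_map(8, 5, gaze_tile=(6, 2)), 2))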
  • As shown in the non-limiting example HMD 200 of FIGS. 2A through 2C, in some embodiments there may be two eye tracking sensors 214A and 214B located on an inner surface of the HMD 200 at positions such that the sensors 214A and 214B have views of respective ones of the user 290's eyes 292A and 292B. However, in various embodiments, more or fewer eye tracking sensors 214 may be used in a HMD 200, and sensors 214 may be positioned at other locations. In an example non-limiting embodiment, each eye tracking sensor 214 may include an IR light source and IR camera, for example a 400×400 pixel count camera with a frame rate of 120 FPS or greater, HFOV of 70 degrees, and with a working distance of 10 millimeters (mm) to 80 mm.
  • In some embodiments, the user sensors may include one or more eyebrow sensors 215 (e.g., IR cameras with IR illumination) that track expressions of the user's eyebrows/forehead. In some embodiments, the user sensors may include one or more lower jaw tracking sensors 216 (e.g., IR cameras with IR illumination) that track expressions of the user's mouth/jaw. For example, in some embodiments, expressions of the brow, mouth, jaw, and eyes captured by sensors 214, 215, and 216 may be used to simulate expressions on an avatar of the user 290 in the virtual space, and/or to selectively render and composite virtual content for viewing by the user based at least in part on the user's reactions to the content projected in the 3D virtual view.
  • As shown in the non-limiting example HMD 200 of FIGS. 2A through 2C, in some embodiments there may be two eyebrow sensors 215A and 215B located on an inner surface of the HMD 200 at positions such that the sensors 215A and 215B have views of the user 290's eyebrows and forehead. However, in various embodiments, more or fewer eyebrow sensors 215 may be used in a HMD 200, and sensors 215 may be positioned at other locations than those shown. In an example non-limiting embodiment, each eyebrow sensor 215 may include an IR light source and IR camera, for example a 250×250 pixel count camera with a frame rate of 60 FPS, HFOV of 60 degrees, and with a working distance of approximately 5 mm. In some embodiments, images from the two sensors 215A and 215B may be combined to form a stereo view of the user's forehead and eyebrows.
  • As shown in the non-limiting example HMD 200 of FIGS. 2A through 2C, in some embodiments there may be two lower jaw tracking sensors 216A and 216B located on an inner surface of the HMD 200 at positions such that the sensors 216A and 216B have views of the user 290's lower jaw and mouth. However, in various embodiments, more or fewer lower jaw tracking sensors 216 may be used in a HMD 200, and sensors 216 may be positioned at other locations than those shown. In an example non-limiting embodiment, each lower jaw tracking sensor 216 may include an IR light source and IR camera, for example a 400×400 pixel count camera with a frame rate of 60 FPS, HFOV of 90 degrees, and with a working distance of approximately 30 mm. In some embodiments, images from the two sensors 216A and 216B may be combined to form a stereo view of the user's lower jaw and mouth.
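• As an assumed and deliberately simplified illustration of driving an avatar from the eyebrow and lower jaw sensors, the sketch below maps a few tracked facial measurements to normalized blendshape weights; the blendshape names, measurement units, and ranges are hypothetical.

    def blend_weights(brow_raise_mm: float, jaw_open_mm: float,
                      brow_range_mm: float = 8.0, jaw_range_mm: float = 25.0) -> dict:
        """Normalize raw facial measurements into 0..1 avatar blendshape weights."""
        clamp = lambda v: max(0.0, min(1.0, v))
        return {
            "browUp": clamp(brow_raise_mm / brow_range_mm),
            "jawOpen": clamp(jaw_open_mm / jaw_range_mm),
        }

    print(blend_weights(brow_raise_mm=4.0, jaw_open_mm=10.0))
    # -> {'browUp': 0.5, 'jawOpen': 0.4}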
  • In some embodiments, the user sensors may include one or more hand sensors 217 (e.g., IR cameras with IR illumination) that track position, movement, and gestures of the user's hands, fingers, and/or arms. For example, in some embodiments, detected position, movement, and gestures of the user's hands, fingers, and/or arms may be used to simulate movement of the hands, fingers, and/or arms of an avatar of the user 290 in the virtual space. As another example, the user's detected hand and finger gestures may be used to determine interactions of the user with virtual content in the virtual space, including but not limited to gestures that manipulate virtual objects, gestures that interact with virtual user interface elements displayed in the virtual space, etc.
  • As shown in the non-limiting example HMD 200 of FIGS. 2A through 2C, in some embodiments there may be one hand sensor 217 located on a bottom surface of the HMD 200. However, in various embodiments, more than one hand sensor 217 may be used, and hand sensor 217 may be positioned at other locations. In an example non-limiting embodiment, hand sensor 217 may include an IR light source and IR camera, for example a 500×500 pixel count camera with a frame rate of 120 FPS or greater, HFOV of 90 degrees, and with a working distance of 0.1 m to 1 m.
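• The sketch below is an assumed example of one hand-sensor-driven interaction mentioned above: a pinch test on tracked fingertip positions that selects a nearby virtual object. The thresholds, coordinate convention, and data structures are illustrative only.

    import math

    def is_pinching(thumb_tip, index_tip, threshold_m: float = 0.02) -> bool:
        """A pinch is assumed when thumb and index fingertips are nearly touching."""
        return math.dist(thumb_tip, index_tip) < threshold_m

    def grabbed_object(pinch_point, virtual_objects, reach_m: float = 0.05):
        """Return the nearest virtual object within reach of the pinch, if any."""
        candidates = [(math.dist(pinch_point, pos), name)
                      for name, pos in virtual_objects.items()
                      if math.dist(pinch_point, pos) < reach_m]
        return min(candidates)[1] if candidates else None

    thumb, index = (0.10, 0.00, 0.30), (0.11, 0.00, 0.30)
    objects = {"virtual_button": (0.12, 0.01, 0.31), "virtual_tag": (0.50, 0.20, 0.80)}
    if is_pinching(thumb, index):
        print(grabbed_object(thumb, objects))    # -> virtual_button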
  • FIG. 3 is a high-level flowchart of a method of operation for a mixed reality system as illustrated in FIGS. 1 through 2C, according to at least some embodiments. The mixed reality system may include a HMD such as a headset, helmet, goggles, or glasses that includes a projector mechanism for projecting or displaying frames including left and right images to a user's eyes to thus provide 3D virtual views to the user. The 3D virtual views may include views of the user's environment augmented with virtual content (e.g., virtual objects, virtual tags, etc.).
  • As indicated at 1002, one or more world sensors on the HMD may capture information about the user's environment (e.g., video, depth information, lighting information, etc.), and provide the information as inputs to a controller of the mixed reality system. As indicated at 1004, one or more user sensors on the HMD may capture information about the user (e.g., the user's expressions, eye movement, hand gestures, etc.), and provide the information as inputs to the controller of the mixed reality system. Elements 1002 and 1004 may be performed in parallel, and as indicated by the arrows returning to elements 1002 and 1004 may be performed continuously to provide input to the controller of the mixed reality system as the user uses the mixed reality system. As indicated at 1010, the controller of the mixed reality system may render frames including virtual content based at least in part on the inputs from the world and user sensors. The controller may be integrated in the HMD, or alternatively may be implemented at least in part by a device external to the HMD. As indicated at 1020, the HMD may display the frames generated by the controller to provide a 3D virtual view including the virtual content and a view of the user's environment for viewing by the user. As indicated by the arrow returning to element 1020, the controller may continue to receive and process inputs from the sensors to render frames for display as long as the user is using the mixed reality system.
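• A minimal sketch of the control flow of FIG. 3 (elements 1002, 1004, 1010, and 1020), with stand-in function bodies; it is not the disclosed implementation, only an outline of the loop.

    def capture_world():                          # element 1002: world sensor capture
        return {"video": "frame", "depth": "map", "light": "estimate"}

    def capture_user():                           # element 1004: user sensor capture
        return {"gaze": (0.0, 0.0), "hands": [], "expression": {}}

    def render(world, user):                      # element 1010: controller rendering
        return f"frame(virtual content | {world['video']})"

    def display(frame):                           # element 1020: HMD display
        print("displaying", frame)

    def run(num_frames: int = 3) -> None:
        for _ in range(num_frames):               # repeats while the system is in use
            world = capture_world()
            user = capture_user()
            display(render(world, user))

    run()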
  • Note that in some embodiments, to reduce latency for the virtual view of the world that is displayed to the user, at least some video frames of the user's real environment that are captured by the world sensors (video see through cameras) may go directly to the projection system of the device for display to the user; the controller may also receive and process the video frames to composite virtual content into the frames that are then provided to the projection system for display.
  • FIG. 4 is a block diagram illustrating components of an example mixed reality system, according to at least some embodiments. In some embodiments, a mixed reality system 1900 may include a HMD 2000 such as a headset, helmet, goggles, or glasses. HMD 2000 may implement any of various types of virtual reality projector technologies. For example, the HMD 2000 may include a near-eye VR projector that projects frames including left and right images on screens that are viewed by a user, such as DLP (digital light processing), LCD (liquid crystal display) and LCoS (liquid crystal on silicon) technology projectors. As another example, the HMD 2000 may include a direct retinal projector that scans frames including left and right images, pixel by pixel, directly to the user's eyes. To create a three-dimensional (3D) effect in 3D virtual view 2002, objects at different depths or distances in the two images are shifted left or right as a function of the triangulation of distance, with nearer objects shifted more than more distant objects.
  • HMD 2000 may include a 3D projector 2020 that implements the VR projection technology that generates the 3D virtual view 2002 viewed by the user, for example near-eye VR projection technology or direct retinal projection technology. In some embodiments, HMD 2000 may also include a controller 2030 configured to implement functionality of the mixed reality system 1900 as described herein and to generate the frames (each frame including a left and right image) that are projected or scanned by the 3D projector 2020 into the 3D virtual view 2002. In some embodiments, HMD 2000 may also include a memory 2032 configured to store software (code 2034) of the mixed reality system that is executable by the controller 2030, as well as data 2038 that may be used by the mixed reality system 1900 when executing on the controller 2030. In some embodiments, HMD 2000 may also include one or more interfaces 2040 (e.g., a Bluetooth technology interface, USB interface, etc.) configured to communicate with an external device 2100 via a wired or wireless connection. In some embodiments, at least a part of the functionality described for the controller 2030 may be implemented by the external device 2100. External device 2100 may be or may include any type of computing system or computing device, such as a desktop computer, notebook or laptop computer, pad or tablet device, smartphone, hand-held computing device, game controller, game system, and so on.
• In various embodiments, controller 2030 may be a uniprocessor system including one processor, or a multiprocessor system including several processors (e.g., two, four, eight, or another suitable number). Controller 2030 may include central processing units (CPUs) configured to implement any suitable instruction set architecture, and may be configured to execute instructions defined in that instruction set architecture. For example, in various embodiments controller 2030 may include general-purpose or embedded processors implementing any of a variety of instruction set architectures (ISAs), such as the x86, PowerPC, SPARC, RISC, or MIPS ISAs, or any other suitable ISA. In multiprocessor systems, each of the processors may commonly, but not necessarily, implement the same ISA. Controller 2030 may employ any microarchitecture, including scalar, superscalar, pipelined, superpipelined, out of order, in order, speculative, non-speculative, etc., or combinations thereof. Controller 2030 may include circuitry to implement microcoding techniques. Controller 2030 may include one or more processing cores each configured to execute instructions. Controller 2030 may include one or more levels of caches, which may employ any size and any configuration (set associative, direct mapped, etc.). In some embodiments, controller 2030 may include at least one graphics processing unit (GPU), which may include any suitable graphics processing circuitry. Generally, a GPU may be configured to render objects to be displayed into a frame buffer (e.g., one that includes pixel data for an entire frame). A GPU may include one or more graphics processors that execute graphics software to perform a part or all of a graphics operation, or hardware that accelerates certain graphics operations. In some embodiments, controller 2030 may include one or more other components for processing and rendering video and/or images, for example image signal processors (ISPs), coder/decoders (codecs), etc.
• Memory 2032 may include any type of memory, such as dynamic random access memory (DRAM), synchronous DRAM (SDRAM), double data rate (DDR, DDR2, DDR3, etc.) SDRAM (including mobile versions of the SDRAMs such as mDDR3, etc., or low power versions of the SDRAMs such as LPDDR2, etc.), RAMBUS DRAM (RDRAM), static RAM (SRAM), etc. In some embodiments, one or more memory devices may be coupled onto a circuit board to form memory modules such as single inline memory modules (SIMMs), dual inline memory modules (DIMMs), etc. Alternatively, the devices may be mounted with an integrated circuit implementing the system in a chip-on-chip configuration, a package-on-package configuration, or a multi-chip module configuration.
  • In some embodiments, the HMD 2000 may include at least one inertial-measurement unit (IMU) 2070 configured to detect position, orientation, and/or motion of the HMD 2000, and to provide the detected position, orientation, and/or motion data to the controller 2030 of the mixed reality system 1900.
  • In some embodiments, the HMD 2000 may include world sensors 2050 that collect information about the user's environment (video, depth information, lighting information, etc.), and user sensors 2060 that collect information about the user (e.g., the user's expressions, eye movement, hand gestures, etc.). The sensors 2050 and 2060 may provide the collected information to the controller 2030 of the mixed reality system 1900. Sensors 2050 and 2060 may include, but are not limited to, visible light cameras (e.g., video cameras), infrared (IR) cameras, IR cameras with an IR illumination source, Light Detection and Ranging (LIDAR) emitters and receivers/detectors, and laser-based sensors with laser emitters and receivers/detectors. World and user sensors of an example HMD are shown in FIGS. 2A through 2C.
• The HMD 2000 may be configured to render and display frames to provide a 3D virtual view 2002 for the user at least in part according to world sensor 2050 and user sensor 2060 inputs. The virtual space 2002 may include renderings of the user's environment, including renderings of real objects 2012 in the user's environment, based on video captured by one or more “video see through” cameras (e.g., RGB (visible light) video cameras) that capture high-quality, high-resolution video of the user's environment in real time for display. The virtual space 2002 may also include virtual content (e.g., virtual objects 2014, virtual tags 2015 for real objects 2012, avatars of the user, etc.) generated by the mixed reality system 1900 and composited with the projected 3D view of the user's real environment. FIG. 3 describes an example method for collecting and processing sensor inputs to generate content in a 3D virtual view 2002 that may be used in a mixed reality system 1900 as illustrated in FIG. 4, according to some embodiments.
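• The sketch below is an assumed, minimal description of what the 3D virtual view 2002 can contain, mixing renderings of real objects 2012 with virtual objects 2014 and virtual tags 2015 anchored to real objects; the class, field names, and painter's-order loop are illustrative only.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class SceneItem:
        kind: str                     # "real", "virtual_object", or "virtual_tag"
        label: str
        depth_m: float                # depth at which the item appears in the view
        anchor: Optional[str] = None  # for tags: the real object they annotate

    scene = [
        SceneItem("real", "table", 1.5),
        SceneItem("virtual_object", "chess_board", 1.5),
        SceneItem("virtual_tag", "table: walnut, 1.2 m wide", 1.4, anchor="table"),
    ]
    for item in sorted(scene, key=lambda s: s.depth_m, reverse=True):   # far to near
        print(f"render {item.kind:>14} '{item.label}' at {item.depth_m} m")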
  • The methods described herein may be implemented in software, hardware, or a combination thereof, in different embodiments. In addition, the order of the blocks of the methods may be changed, and various elements may be added, reordered, combined, omitted, modified, etc. Various modifications and changes may be made as would be obvious to a person skilled in the art having the benefit of this disclosure. The various embodiments described herein are meant to be illustrative and not limiting. Many variations, modifications, additions, and improvements are possible. Accordingly, plural instances may be provided for components described herein as a single instance. Boundaries between various components, operations and data stores are somewhat arbitrary, and particular operations are illustrated in the context of specific illustrative configurations. Other allocations of functionality are envisioned and may fall within the scope of claims that follow. Finally, structures and functionality presented as discrete components in the example configurations may be implemented as a combined structure or component. These and other variations, modifications, additions, and improvements may fall within the scope of embodiments as defined in the claims that follow.

Claims (20)

What is claimed is:
1. A system, comprising:
a controller comprising one or more processors; and
a head-mounted display (HMD) configured to display a 3D virtual view to a user, wherein the HMD comprises:
left and right displays for displaying frames including left and right images to the user's eyes to provide the 3D virtual view to the user;
a plurality of sensors configured to collect information about the user and the user's environment and provide the information to the controller, wherein the plurality of sensors includes:
one or more cameras configured to capture views of the user's environment;
one or more world mapping sensors configured to determine range information for objects in the environment; and
one or more eye tracking sensors configured to track position and movement of the user's eyes;
wherein the controller is configured to render frames for display by the HMD that include virtual content composited into the captured views of the user's environment based at least in part on the range information from the one or more world mapping sensors and the position and movement of the user's eyes as tracked by the one or more eye tracking sensors.
2. The system as recited in claim 1, wherein the controller is configured to determine depths at which to render the virtual content in the 3D virtual view based at least in part on the range information from the one or more world mapping sensors.
3. The system as recited in claim 1, wherein the controller is configured to:
determine a region within the 3D virtual view at which the user is looking based on the position of the user's eyes as determined by the one or more eye tracking sensors; and
render content in the determined region at a higher resolution than in other regions of the 3D virtual view.
4. The system as recited in claim 1, wherein the plurality of sensors further includes:
one or more head pose sensors configured to capture information about the user's position, orientation, and motion in the environment;
one or more light sensors configured to capture lighting information including color, intensity, and direction in the user's environment;
one or more hand sensors configured to track position, movement, and gestures of the user's hands;
one or more eyebrow sensors configured to track expressions of the user's eyebrows; and
one or more lower jaw sensors configured to track expressions of the user's mouth and jaw.
5. The system as recited in claim 4, wherein the controller is configured to render lighting effects for the virtual content based at least in part on the lighting information captured by the one or more light sensors.
6. The system as recited in claim 4, wherein the HMD further comprises an inertial-measurement unit (IMU), wherein the controller is configured to:
augment information received from the IMU with the information captured by the one or more head pose sensors to determine current position, orientation, and motion of the user in the environment; and
render the frames for display by the HMD based at least in part on the determined current position, orientation, and motion of the user.
7. The system as recited in claim 4, wherein the controller is configured to render an avatar of the user's face for display in the 3D virtual view based at least in part on information collected by the one or more eye tracking sensors, the one or more eyebrow sensors, and the one or more lower jaw sensors.
8. The system as recited in claim 4, wherein the controller is configured to render representations of the user's hands for display in the 3D virtual view based at least in part on information collected by the one or more hand sensors.
9. The system as recited in claim 4, wherein the controller is configured to detect interactions of the user with virtual content in the 3D virtual view based at least in part on information collected by the one or more hand sensors.
10. The system as recited in claim 1, wherein the one or more cameras configured to capture views of the user's environment include a left video camera corresponding to the user's left eye and a right video camera corresponding to the user's right eye.
11. A device, comprising:
a controller comprising one or more processors;
left and right displays for displaying frames including left and right images to a user's eyes to provide a 3D virtual view to the user;
a plurality of world-facing sensors configured to collect information about the user's environment and provide the information to the controller, wherein the plurality of world-facing sensors includes:
one or more cameras configured to capture views of the user's environment; and
one or more world mapping sensors configured to capture depth information in the user's environment;
a plurality of user-facing sensors configured to collect information about the user and provide the information to the controller, wherein the plurality of user-facing sensors includes one or more eye tracking sensors configured to track position and movement of the user's eyes;
wherein the controller is configured to render frames for display that include virtual content composited into the captured views of the user's environment based at least in part on the depth information captured by the one or more world mapping sensors and the position and movement of the user's eyes as tracked by the one or more eye tracking sensors.
12. The device as recited in claim 11, wherein the plurality of world-facing sensors further includes:
one or more head pose sensors configured to capture information about the user's position, orientation, and motion in the environment; and
one or more light sensors configured to capture lighting information including color, intensity, and direction in the user's environment.
13. The device as recited in claim 12, wherein the controller is configured to render lighting effects for the virtual content based at least in part on the lighting information captured by the one or more light sensors.
14. The device as recited in claim 12, wherein the device further comprises an inertial-measurement unit (IMU), wherein the controller is configured to augment information received from the IMU with the information captured by the one or more head pose sensors to determine current position, orientation, and motion of the user in the environment.
15. The device as recited in claim 11, wherein the plurality of user-facing sensors further includes:
one or more hand sensors configured to track position, movement, and gestures of the user's hands;
one or more eyebrow sensors configured to track expressions of the user's eyebrows; and
one or more lower jaw sensors configured to track expressions of the user's mouth and jaw.
16. The device as recited in claim 15, wherein the controller is configured to render an avatar of the user for display in the 3D virtual view based at least in part on information collected by the one or more eye tracking sensors, the one or more eyebrow sensors, the one or more lower jaw sensors, and the one or more hand sensors.
17. The device as recited in claim 15, wherein the controller is configured to detect interactions of the user with virtual content in the 3D virtual view based at least in part on information collected by the one or more hand sensors.
18. A method, comprising:
capturing, by a plurality of world-facing sensors of a head-mounted display (HMD) worn by a user, information about the user's environment, wherein the information about the user's environment includes views of the user's environment and depth information in the user's environment;
capturing, by a plurality of user-facing sensors of the HMD, information about the user, wherein the information about the user includes position and movement of the user's eyes;
rendering, by a controller of the HMD, frames for display that include virtual content composited into the captured views of the user's environment based at least in part on the depth information captured by the world-facing sensors and the position and movement of the user's eyes captured by the user-facing sensors; and
displaying, by the HMD, the rendered frames to the user to provide a 3D virtual view of the user's environment that includes the virtual content.
19. The method as recited in claim 18, further comprising:
capturing, by the world-facing sensors, information about the user's position, orientation, and motion in the environment and lighting information including color, intensity, and direction in the user's environment;
determining, by the controller, current position, orientation, and motion of the user in the environment based at least in part on the information about the user's position, orientation, and motion in the environment captured by the world-facing sensors; and
rendering, by the controller, lighting effects for the virtual content based at least in part on the lighting information captured by the world-facing sensors.
20. The method as recited in claim 18, further comprising:
tracking, by the user-facing sensors, position, movement, and gestures of the user's hands, expressions of the user's eyebrows, and expressions of the user's mouth and jaw; and
rendering, by the controller, an avatar of the user for display in the 3D virtual view based at least in part on information collected by the plurality of user-facing sensors.
US15/711,992 2016-09-22 2017-09-21 Display system having world and user sensors Abandoned US20180082482A1 (en)

Priority Applications (6)

Application Number Priority Date Filing Date Title
US15/711,992 US20180082482A1 (en) 2016-09-22 2017-09-21 Display system having world and user sensors
KR1020197005049A KR102230561B1 (en) 2016-09-22 2017-09-22 Display system with world and user sensors
CN201780053117.8A CN109643145B (en) 2016-09-22 2017-09-22 Display system with world sensor and user sensor
EP17780948.0A EP3488315B1 (en) 2016-09-22 2017-09-22 Virtual reality display system having world and user sensors
PCT/US2017/053100 WO2018057991A1 (en) 2016-09-22 2017-09-22 Display system having world and user sensors
US16/361,110 US11217021B2 (en) 2016-09-22 2019-03-21 Display system having sensors

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201662398437P 2016-09-22 2016-09-22
US15/711,992 US20180082482A1 (en) 2016-09-22 2017-09-21 Display system having world and user sensors

Related Child Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2017/053100 Continuation WO2018057991A1 (en) 2016-09-22 2017-09-22 Display system having world and user sensors

Publications (1)

Publication Number Publication Date
US20180082482A1 true US20180082482A1 (en) 2018-03-22

Family

ID=61620491

Family Applications (2)

Application Number Title Priority Date Filing Date
US15/711,992 Abandoned US20180082482A1 (en) 2016-09-22 2017-09-21 Display system having world and user sensors
US16/361,110 Active US11217021B2 (en) 2016-09-22 2019-03-21 Display system having sensors

Family Applications After (1)

Application Number Title Priority Date Filing Date
US16/361,110 Active US11217021B2 (en) 2016-09-22 2019-03-21 Display system having sensors

Country Status (5)

Country Link
US (2) US20180082482A1 (en)
EP (1) EP3488315B1 (en)
KR (1) KR102230561B1 (en)
CN (1) CN109643145B (en)
WO (1) WO2018057991A1 (en)

Cited By (40)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180268552A1 (en) * 2017-03-03 2018-09-20 National Institutes Of Health Eye Tracking Applications in Computer Aided Diagnosis and Image Processing in Radiology
US20180308454A1 (en) * 2017-04-21 2018-10-25 Ford Global Technologies, Llc In-vehicle projected reality motion correction
WO2019182822A1 (en) * 2018-03-23 2019-09-26 Microsoft Technology Licensing, Llc Asynchronous camera frame allocation
US20190362516A1 (en) * 2018-05-23 2019-11-28 Samsung Electronics Co., Ltd. Marker-based augmented reality system and method
EP3572916A3 (en) * 2018-05-22 2019-12-11 Facebook, Inc. Apparatus, system, and method for accelerating positional tracking of head-mounted displays
EP3605198A1 (en) * 2018-07-30 2020-02-05 HTC Corporation Head mounted display and using method thereof
US10559276B2 (en) 2018-02-03 2020-02-11 Facebook Technologies, Llc Apparatus, system, and method for mitigating motion-to-photon latency in head-mounted displays
WO2020109429A1 (en) * 2018-11-30 2020-06-04 Hins A head mounted device for virtual or augmented reality combining reliable gesture recognition with motion tracking algorithm
US10706813B1 (en) 2018-02-03 2020-07-07 Facebook Technologies, Llc Apparatus, system, and method for mitigating motion-to-photon latency in head-mounted displays
US10714055B1 (en) * 2018-06-14 2020-07-14 Facebook Technologies, Llc Systems and methods for display synchronization in head-mounted display devices
WO2020180859A1 (en) * 2019-03-05 2020-09-10 Facebook Technologies, Llc Apparatus, systems, and methods for wearable head-mounted displays
CN111868824A (en) * 2018-04-05 2020-10-30 辛纳普蒂克斯公司 Context-aware control of smart devices
US10861242B2 (en) * 2018-05-22 2020-12-08 Magic Leap, Inc. Transmodal input fusion for a wearable system
US20210295025A1 (en) * 2017-05-01 2021-09-23 Google Llc Classifying facial expressions using eye-tracking cameras
US20210352257A1 (en) * 2019-05-02 2021-11-11 Disney Enterprises, Inc. Illumination-based system for distributing immersive experience content in a multi-user environment
WO2021236100A1 (en) * 2020-05-22 2021-11-25 Hewlett-Packard Development Company, L.P. Gesture areas
US11217021B2 (en) * 2016-09-22 2022-01-04 Apple Inc. Display system having sensors
CN114859561A (en) * 2022-07-11 2022-08-05 泽景(西安)汽车电子有限责任公司 Wearable display device, control method thereof and storage medium
US11422380B2 (en) * 2020-09-30 2022-08-23 Snap Inc. Eyewear including virtual scene with 3D frames
US11508127B2 (en) * 2018-11-13 2022-11-22 Disney Enterprises, Inc. Capturing augmented reality on a head mounted display
US20220374072A1 (en) * 2020-11-16 2022-11-24 Qingdao Pico Technology Co., Ltd. Head-mounted display system and 6-degree-of-freedom tracking method and apparatus thereof
US20230024396A1 (en) * 2019-09-20 2023-01-26 Eyeware Tech Sa A method for capturing and displaying a video stream
US20230040573A1 (en) * 2021-03-24 2023-02-09 AbdurRahman Bin Shahzad Bhatti Data systems for wearable augmented reality apparatus
US11612307B2 (en) * 2016-11-24 2023-03-28 University Of Washington Light field capture and rendering for head-mounted displays
WO2023048924A1 (en) * 2021-09-21 2023-03-30 Chinook Labs Llc Waveguide-based eye illumination
US20230104738A1 (en) * 2018-05-25 2023-04-06 Tiff's Treats Holdings Inc. Apparatus, method, and system for presentation of multimedia content including augmented reality content
US20230122185A1 (en) * 2021-10-18 2023-04-20 Microsoft Technology Licensing, Llc Determining relative position and orientation of cameras using hardware
US11733824B2 (en) * 2018-06-22 2023-08-22 Apple Inc. User interaction interpreter
US20230324690A1 (en) * 2020-09-21 2023-10-12 Apple Inc. Systems With Supplemental Illumination
US11861255B1 (en) 2017-06-16 2024-01-02 Apple Inc. Wearable device for facilitating enhanced interaction
US20240062346A1 (en) * 2019-08-26 2024-02-22 Apple Inc. Image-based detection of surfaces that provide specular reflections and reflection modification
US11954268B2 (en) * 2020-06-30 2024-04-09 Snap Inc. Augmented reality eyewear 3D painting
US12051214B2 (en) 2020-05-12 2024-07-30 Proprio, Inc. Methods and systems for imaging a scene, such as a medical scene, and tracking objects within the scene
US12126916B2 (en) 2018-09-27 2024-10-22 Proprio, Inc. Camera array for a mediated-reality system
WO2024229568A1 (en) * 2023-05-10 2024-11-14 Catalyst Entertainment Technology System and method for synchronizing real world and virtual world environments
US12261988B2 (en) 2021-11-08 2025-03-25 Proprio, Inc. Methods for generating stereoscopic views in multicamera systems, and associated devices and systems
US12444149B2 (en) 2022-06-30 2025-10-14 Apple Inc. Content transformations based on reflective object recognition
US20250336168A1 (en) * 2024-04-26 2025-10-30 Optoma Corporation Multimedia system and image display method
GB2641146A (en) * 2023-05-15 2025-11-19 Apple Inc Head mountable display
US12498895B2 (en) 2017-06-16 2025-12-16 Apple Inc. Head-mounted device with publicly viewable display

Families Citing this family (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102755167B1 (en) 2017-08-25 2025-01-14 아그네틱스, 인크. Fluid-cooled LED-based lighting method and device for controlled environment agriculture
US11013078B2 (en) 2017-09-19 2021-05-18 Agnetix, Inc. Integrated sensor assembly for LED-based controlled environment agriculture (CEA) lighting, and methods and apparatus employing same
US10999976B2 (en) 2017-09-19 2021-05-11 Agnetix, Inc. Fluid-cooled lighting systems and kits for controlled agricultural environments, and methods for installing same
AU2019262676B2 (en) 2018-05-04 2025-03-13 Agnetix, Inc. Methods, apparatus, and systems for lighting and distributed sensing in controlled agricultural environments
CN113163720A (en) 2018-11-13 2021-07-23 阿格尼泰克斯股份有限公司 Fluid cooled LED-based lighting method and apparatus for controlled environment agriculture with integrated camera and/or sensor and wireless communication
CN115755397A (en) * 2019-04-26 2023-03-07 苹果公司 Head mounted display with low light operation
CN110244837A (en) * 2019-04-26 2019-09-17 北京圣威特科技有限公司 Augmented reality and the experience glasses and its imaging method being superimposed with virtual image
CN110417640A (en) * 2019-07-22 2019-11-05 北京达佳互联信息技术有限公司 Message method of sending and receiving, device, electronic equipment and storage medium
KR102218843B1 (en) * 2019-11-19 2021-02-22 광운대학교 산학협력단 Multi-camera augmented reality broadcasting system based on overlapping layer using stereo camera and providing method thereof
JP2023505677A (en) 2019-12-10 2023-02-10 アグネティックス,インコーポレイテッド Multi-perceptual imaging method and apparatus for controlled environmental horticulture using illuminators and cameras and/or sensors
AU2020401384A1 (en) 2019-12-12 2022-07-28 Agnetix, Inc. Fluid-cooled LED-based lighting fixture in close proximity grow systems for controlled environment horticulture
KR102732415B1 (en) * 2020-02-07 2024-11-22 삼성전자주식회사 Electronic apparatus and operaintg method thereof
KR102780186B1 (en) * 2020-03-27 2025-03-12 주식회사 케이티 Server, method and computer program for providing video call service
TWI745924B (en) 2020-04-10 2021-11-11 宏碁股份有限公司 Virtual reality positioning device, virtual reality positioning system and manufacturing method of virtual reality positioning device
CN113703160A (en) * 2020-05-08 2021-11-26 宏碁股份有限公司 Virtual reality positioning device, manufacturing method thereof and virtual reality positioning system
WO2021230568A1 (en) 2020-05-13 2021-11-18 삼성전자 주식회사 Electronic device for providing augmented reality service and operating method thereof
CN111897422B (en) * 2020-07-14 2022-02-15 山东大学 A real-time interaction method and system for real-time fusion of virtual and real objects
CN112115823A (en) * 2020-09-07 2020-12-22 江苏瑞科科技有限公司 Mixed reality cooperative system based on emotion avatar
CN112416125A (en) 2020-11-17 2021-02-26 青岛小鸟看看科技有限公司 VR headset
KR102441454B1 (en) * 2020-12-18 2022-09-07 한국과학기술원 3d digital twin visualization system interlocked with real and virtual iot devices and the method thereof
US12003697B2 (en) 2021-05-06 2024-06-04 Samsung Electronics Co., Ltd. Wearable electronic device and method of outputting three-dimensional image
KR20220151420A (en) * 2021-05-06 2022-11-15 삼성전자주식회사 Wearable electronic device and method for outputting 3d image
CN113535064B (en) * 2021-09-16 2022-02-01 北京亮亮视野科技有限公司 Virtual label marking method and device, electronic equipment and storage medium
US11993394B2 (en) 2021-11-10 2024-05-28 Rockwell Collins, Inc. Flight safety demonstration and infotainment through mixed reality
WO2024014592A1 (en) * 2022-07-15 2024-01-18 엘지전자 주식회사 Xr device, controller apparatus for xr device, and operating method of xr device using same
KR20240029944A (en) * 2022-08-29 2024-03-07 삼성전자주식회사 An electronic device for calibrating a virtual object using depth information on a real object, and a method for controlling the same
WO2024181695A1 (en) * 2023-03-02 2024-09-06 삼성전자주식회사 Electronic device and method for providing function related to extended reality
WO2025195511A1 (en) * 2024-03-22 2025-09-25 深圳市仙瞬科技有限公司 Near-to-eye display apparatus, wearable device, and control method for near-to-eye display apparatus
WO2025216484A1 (en) * 2024-04-12 2025-10-16 삼성전자주식회사 Wearable device, method, and non-transitory computer-readable storage medium for changing scheme for displaying avatar

Family Cites Families (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9013264B2 (en) * 2011-03-12 2015-04-21 Perceptive Devices, Llc Multipurpose controller for electronic devices, facial expressions management and drowsiness detection
US8752963B2 (en) * 2011-11-04 2014-06-17 Microsoft Corporation See-through display brightness control
JP5936155B2 (en) * 2012-07-27 2016-06-15 Necソリューションイノベータ株式会社 3D user interface device and 3D operation method
US9791921B2 (en) * 2013-02-19 2017-10-17 Microsoft Technology Licensing, Llc Context-aware augmented reality object commands
KR102560629B1 (en) * 2013-03-15 2023-07-26 매직 립, 인코포레이티드 Display system and method
US9908048B2 (en) * 2013-06-08 2018-03-06 Sony Interactive Entertainment Inc. Systems and methods for transitioning between transparent mode and non-transparent mode in a head mounted display
US9256987B2 (en) * 2013-06-24 2016-02-09 Microsoft Technology Licensing, Llc Tracking head movement when wearing mobile device
US9952042B2 (en) * 2013-07-12 2018-04-24 Magic Leap, Inc. Method and system for identifying a user location
CN103760973B (en) * 2013-12-18 2017-01-11 微软技术许可有限责任公司 Reality-enhancing information detail
CN103823553B (en) * 2013-12-18 2017-08-25 微软技术许可有限责任公司 The augmented reality of the scene of surface behind is shown
US9672416B2 (en) * 2014-04-29 2017-06-06 Microsoft Technology Licensing, Llc Facial expression tracking
US10852838B2 (en) * 2014-06-14 2020-12-01 Magic Leap, Inc. Methods and systems for creating virtual and augmented reality
US10416760B2 (en) * 2014-07-25 2019-09-17 Microsoft Technology Licensing, Llc Gaze-based object placement within a virtual reality environment
JP6492451B2 (en) * 2014-08-12 2019-04-03 セイコーエプソン株式会社 Head-mounted display device, control method therefor, and computer program
KR101614315B1 (en) * 2014-08-19 2016-04-21 한국과학기술연구원 Wearable device and method for controlling the same
US9791919B2 (en) * 2014-10-19 2017-10-17 Philip Lyren Electronic device displays an image of an obstructed target
US9489044B2 (en) * 2014-11-07 2016-11-08 Eye Labs, LLC Visual stabilization system for head-mounted displays
US9304003B1 (en) * 2015-03-18 2016-04-05 Microsoft Technology Licensing, Llc Augmented reality navigation
US9898865B2 (en) * 2015-06-22 2018-02-20 Microsoft Technology Licensing, Llc System and method for spawning drawing surfaces
WO2017192467A1 (en) * 2016-05-02 2017-11-09 Warner Bros. Entertainment Inc. Geometry matching in virtual reality and augmented reality
WO2017201191A1 (en) * 2016-05-17 2017-11-23 Google Llc Methods and apparatus to project contact with real objects in virtual reality environments
US10366536B2 (en) * 2016-06-28 2019-07-30 Microsoft Technology Licensing, Llc Infinite far-field depth perception for near-field objects in virtual environments
US20180082482A1 (en) * 2016-09-22 2018-03-22 Apple Inc. Display system having world and user sensors
US10914957B1 (en) * 2017-05-30 2021-02-09 Apple Inc. Video compression methods and apparatus
US10521947B2 (en) * 2017-09-29 2019-12-31 Sony Interactive Entertainment Inc. Rendering of virtual hand pose based on detected hand input
US11036284B2 (en) * 2018-09-14 2021-06-15 Apple Inc. Tracking and drift correction
US10778953B2 (en) * 2018-12-10 2020-09-15 Universal City Studios Llc Dynamic convergence adjustment in augmented reality headsets

Cited By (68)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11217021B2 (en) * 2016-09-22 2022-01-04 Apple Inc. Display system having sensors
US11612307B2 (en) * 2016-11-24 2023-03-28 University Of Washington Light field capture and rendering for head-mounted displays
US20240000295A1 (en) * 2016-11-24 2024-01-04 University Of Washington Light field capture and rendering for head-mounted displays
US12178403B2 (en) * 2016-11-24 2024-12-31 University Of Washington Light field capture and rendering for head-mounted displays
US10839520B2 (en) * 2017-03-03 2020-11-17 The United States Of America, As Represented By The Secretary, Department Of Health & Human Services Eye tracking applications in computer aided diagnosis and image processing in radiology
US20180268552A1 (en) * 2017-03-03 2018-09-20 National Institutes Of Health Eye Tracking Applications in Computer Aided Diagnosis and Image Processing in Radiology
US10580386B2 (en) * 2017-04-21 2020-03-03 Ford Global Technologies, Llc In-vehicle projected reality motion correction
US20180308454A1 (en) * 2017-04-21 2018-10-25 Ford Global Technologies, Llc In-vehicle projected reality motion correction
US12053301B2 (en) * 2017-05-01 2024-08-06 Google Llc Classifying facial expressions using eye-tracking cameras
US20210295025A1 (en) * 2017-05-01 2021-09-23 Google Llc Classifying facial expressions using eye-tracking cameras
US11861255B1 (en) 2017-06-16 2024-01-02 Apple Inc. Wearable device for facilitating enhanced interaction
US12498895B2 (en) 2017-06-16 2025-12-16 Apple Inc. Head-mounted device with publicly viewable display
US12197805B2 (en) 2017-06-16 2025-01-14 Apple Inc. Wearable device for facilitating enhanced interaction
US12277361B2 (en) 2017-06-16 2025-04-15 Apple Inc. Wearable device for facilitating enhanced interaction
US12282698B2 (en) 2017-06-16 2025-04-22 Apple Inc. Wearable device for facilitating enhanced interaction
US10706813B1 (en) 2018-02-03 2020-07-07 Facebook Technologies, Llc Apparatus, system, and method for mitigating motion-to-photon latency in head-mounted displays
US10803826B2 (en) 2018-02-03 2020-10-13 Facebook Technologies, Llc Apparatus, system, and method for mitigating motion-to-photon latency in headmounted displays
US10559276B2 (en) 2018-02-03 2020-02-11 Facebook Technologies, Llc Apparatus, system, and method for mitigating motion-to-photon latency in head-mounted displays
US10565678B2 (en) 2018-03-23 2020-02-18 Microsoft Technology Licensing, Llc Asynchronous camera frame allocation
WO2019182822A1 (en) * 2018-03-23 2019-09-26 Microsoft Technology Licensing, Llc Asynchronous camera frame allocation
CN111868824A (en) * 2018-04-05 2020-10-30 Synaptics Incorporated Context-aware control of smart devices
US12444146B2 (en) 2018-05-22 2025-10-14 Magic Leap, Inc. Identifying convergence of sensor data from first and second sensors within an augmented reality wearable device
US10678325B2 (en) 2018-05-22 2020-06-09 Facebook Technologies, Llc Apparatus, system, and method for accelerating positional tracking of head-mounted displays
EP3572916A3 (en) * 2018-05-22 2019-12-11 Facebook, Inc. Apparatus, system, and method for accelerating positional tracking of head-mounted displays
CN112166397A (en) * 2018-05-22 2021-01-01 Facebook Technologies, Llc Apparatus, system, and method for accelerating position tracking of head mounted display
US10861242B2 (en) * 2018-05-22 2020-12-08 Magic Leap, Inc. Transmodal input fusion for a wearable system
US11983823B2 (en) 2018-05-22 2024-05-14 Magic Leap, Inc. Transmodal input fusion for a wearable system
US20190362516A1 (en) * 2018-05-23 2019-11-28 Samsung Electronics Co., Ltd. Marker-based augmented reality system and method
US11354815B2 (en) * 2018-05-23 2022-06-07 Samsung Electronics Co., Ltd. Marker-based augmented reality system and method
US20230104738A1 (en) * 2018-05-25 2023-04-06 Tiff's Treats Holdings Inc. Apparatus, method, and system for presentation of multimedia content including augmented reality content
US10714055B1 (en) * 2018-06-14 2020-07-14 Facebook Technologies, Llc Systems and methods for display synchronization in head-mounted display devices
US11733824B2 (en) * 2018-06-22 2023-08-22 Apple Inc. User interaction interpreter
US10935793B2 (en) 2018-07-30 2021-03-02 Htc Corporation Head mounted display and using method thereof
EP3605198A1 (en) * 2018-07-30 2020-02-05 HTC Corporation Head mounted display and using method thereof
US12126916B2 (en) 2018-09-27 2024-10-22 Proprio, Inc. Camera array for a mediated-reality system
US11508127B2 (en) * 2018-11-13 2022-11-22 Disney Enterprises, Inc. Capturing augmented reality on a head mounted display
WO2020109429A1 (en) * 2018-11-30 2020-06-04 Hins A head mounted device for virtual or augmented reality combining reliable gesture recognition with motion tracking algorithm
WO2020180859A1 (en) * 2019-03-05 2020-09-10 Facebook Technologies, Llc Apparatus, systems, and methods for wearable head-mounted displays
JP2022522579A (en) * 2019-03-05 2022-04-20 Facebook Technologies, Llc Apparatus, systems, and methods for wearable head-mounted displays
CN113557465A (en) * 2019-03-05 2021-10-26 Facebook Technologies, Llc Apparatus, system, and method for wearable head-mounted display
US20210352257A1 (en) * 2019-05-02 2021-11-11 Disney Enterprises, Inc. Illumination-based system for distributing immersive experience content in a multi-user environment
US11936842B2 (en) * 2019-05-02 2024-03-19 Disney Enterprises, Inc. Illumination-based system for distributing immersive experience content in a multi-user environment
US20240062346A1 (en) * 2019-08-26 2024-02-22 Apple Inc. Image-based detection of surfaces that provide specular reflections and reflection modification
US12211139B2 (en) * 2019-09-20 2025-01-28 Eyeware Tech Sa Method for capturing and displaying a video stream
US20230024396A1 (en) * 2019-09-20 2023-01-26 Eyeware Tech Sa A method for capturing and displaying a video stream
US12051214B2 (en) 2020-05-12 2024-07-30 Proprio, Inc. Methods and systems for imaging a scene, such as a medical scene, and tracking objects within the scene
US12299907B2 (en) 2020-05-12 2025-05-13 Proprio, Inc. Methods and systems for imaging a scene, such as a medical scene, and tracking objects within the scene
US20230214025A1 (en) * 2020-05-22 2023-07-06 Hewlett-Packard Development Company, L.P. Gesture areas
WO2021236100A1 (en) * 2020-05-22 2021-11-25 Hewlett-Packard Development Company, L.P. Gesture areas
US11954268B2 (en) * 2020-06-30 2024-04-09 Snap Inc. Augmented reality eyewear 3D painting
US20240211057A1 (en) * 2020-06-30 2024-06-27 Ilteris Canberk Augmented reality eyewear 3d painting
US12353646B2 (en) * 2020-06-30 2025-07-08 Snap Inc. Augmented reality eyewear 3D painting
US20230324690A1 (en) * 2020-09-21 2023-10-12 Apple Inc. Systems With Supplemental Illumination
US11422380B2 (en) * 2020-09-30 2022-08-23 Snap Inc. Eyewear including virtual scene with 3D frames
US11675198B2 (en) * 2020-09-30 2023-06-13 Snap Inc. Eyewear including virtual scene with 3D frames
US20220326530A1 (en) * 2020-09-30 2022-10-13 Kyle Goodrich Eyewear including virtual scene with 3d frames
US20220374072A1 (en) * 2020-11-16 2022-11-24 Qingdao Pico Technology Co., Ltd. Head-mounted display system and 6-degree-of-freedom tracking method and apparatus thereof
US11797083B2 (en) * 2020-11-16 2023-10-24 Qingdao Pico Technology Co., Ltd. Head-mounted display system and 6-degree-of-freedom tracking method and apparatus thereof
US12130959B2 (en) * 2021-03-24 2024-10-29 Peloton Interactive, Inc. Data systems for wearable augmented reality apparatus
US20230040573A1 (en) * 2021-03-24 2023-02-09 AbdurRahman Bin Shahzad Bhatti Data systems for wearable augmented reality apparatus
WO2023048924A1 (en) * 2021-09-21 2023-03-30 Chinook Labs Llc Waveguide-based eye illumination
US20230122185A1 (en) * 2021-10-18 2023-04-20 Microsoft Technology Licensing, Llc Determining relative position and orientation of cameras using hardware
US12261988B2 (en) 2021-11-08 2025-03-25 Proprio, Inc. Methods for generating stereoscopic views in multicamera systems, and associated devices and systems
US12444149B2 (en) 2022-06-30 2025-10-14 Apple Inc. Content transformations based on reflective object recognition
CN114859561A (en) * 2022-07-11 2022-08-05 Zejing (Xi'an) Automotive Electronics Co., Ltd. Wearable display device, control method thereof and storage medium
WO2024229568A1 (en) * 2023-05-10 2024-11-14 Catalyst Entertainment Technology System and method for synchronizing real world and virtual world environments
GB2641146A (en) * 2023-05-15 2025-11-19 Apple Inc Head mountable display
US20250336168A1 (en) * 2024-04-26 2025-10-30 Optoma Corporation Multimedia system and image display method

Also Published As

Publication number Publication date
US11217021B2 (en) 2022-01-04
KR20190032473A (en) 2019-03-27
EP3488315A1 (en) 2019-05-29
EP3488315B1 (en) 2021-02-24
WO2018057991A1 (en) 2018-03-29
KR102230561B1 (en) 2021-03-22
CN109643145B (en) 2022-07-15
CN109643145A (en) 2019-04-16
US20190221044A1 (en) 2019-07-18

Similar Documents

Publication Publication Date Title
US11217021B2 (en) Display system having sensors
US11330241B2 (en) Focusing for virtual and augmented reality systems
US12429695B2 (en) Video compression methods and apparatus
US10877556B2 (en) Eye tracking system
US12481356B2 (en) Eye tracking using eye odometers
US10698481B1 (en) Glint-assisted gaze tracker
KR102281026B1 (en) Hologram anchoring and dynamic positioning
US20130326364A1 (en) Position relative hologram interactions
US12461594B2 (en) Visual axis enrollment
US12487667B2 (en) Corrected gaze direction and origin
US11327561B1 (en) Display system
US12429697B1 (en) Contact lens shift detection for head-mounted display devices
US20240105046A1 (en) Lens Distance Test for Head-Mounted Display Devices
US20260038149A1 (en) Split-Cadence Eye Tracking
US12444237B2 (en) Synthetic gaze enrollment
WO2026030005A1 (en) Split-cadence eye tracking

Legal Events

Date Code Title Description
AS Assignment

Owner name: APPLE INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:MOTTA, RICARDO J.;MILLER, BRETT D.;SRIKANTH, MANOHAR B.;AND OTHERS;REEL/FRAME:043704/0783

Effective date: 20170912

STCB Information on status: application discontinuation

Free format text: EXPRESSLY ABANDONED -- DURING EXAMINATION