US20190377538A1 - Information Presentation Through Ambient Sounds - Google Patents
- Publication number
- US20190377538A1 (application US 16/007,335)
- Authority
- US
- United States
- Prior art keywords
- sound
- user
- information
- eye
- digital representation
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/60—Editing figures and text; Combining figures or text
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/017—Head mounted
- G02B27/0172—Head mounted characterised by optical features
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/013—Eye tracking input arrangements
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/40—Filling a planar surface by adding surface attributes, e.g. colour or texture
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/0101—Head-up displays characterised by optical features
- G02B2027/0138—Head-up displays characterised by optical features comprising image capture systems, e.g. camera
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/0101—Head-up displays characterised by optical features
- G02B2027/014—Head-up displays characterised by optical features comprising information/image processing systems
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/0101—Head-up displays characterised by optical features
- G02B2027/0141—Head-up displays characterised by optical features characterised by the informative content of the display
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
- G06F3/167—Audio in a user interface, e.g. using voice commands for navigating, audio feedback
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2219/00—Indexing scheme for manipulating 3D models or images for computer graphics
- G06T2219/20—Indexing scheme for editing of 3D models
- G06T2219/2012—Colour editing, changing, or manipulating; Use of colour codes
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2219/00—Indexing scheme for manipulating 3D models or images for computer graphics
- G06T2219/20—Indexing scheme for editing of 3D models
- G06T2219/2021—Shape modification
Definitions
- the present subject matter relates to displaying information, and more specifically, to presenting information as an ambient sound associated with an object.
- Textual information can be presented to the emergency responder through the display and the information can be updated in real-time through the digital wireless interface from a command center or other information sources.
- FIG. 1A shows a scene with a user wearing an embodiment of a head-mounted display looking away from an object
- FIG. 1B shows a scene with a user wearing an embodiment of a head-mounted display looking at an object
- FIG. 2 is a block diagram of an embodiment of a hybrid reality system.
- FIG. 3 is a flowchart of an embodiment of a method for presenting an ambient sound associated with an object.
- FIG. 4 is a flowchart of an embodiment of a method for presenting sound to a user.
- Hybrid reality (HR) refers to an image that merges real-world imagery with imagery created in a computer, which is sometimes called virtual imagery. While an HR image can be a still image, it can also be a moving image, such as imagery created using a video stream. HR imagery can be displayed by a traditional two-dimensional display device, such as a computer monitor, one or more projectors, or a smartphone screen. HR imagery can also be displayed by a head-mounted display (HMD). Many different technologies can be used in an HMD to display HR imagery.
- a virtual reality (VR) HMD system may receive images of a real-world object, objects, or scene, and composite those images with a virtual object, objects, or scene to create an HR image.
- An augmented reality (AR) HMD system may present a virtual object, objects, or scene on a transparent screen which then naturally mixes the virtual imagery with a view of a scene in the real-world.
- a display which mixes live video with virtual objects is sometimes denoted AR, but for the purposes of this disclosure, an AR HMD includes at least a portion of the display area that is transparent to allow at least some of the user's view of the real-world to be directly viewed through the transparent portion of the AR HMD.
- the display used by an HR system represents a scene which is a visible portion of the whole environment.
- the term “scene” and “field of view” (FOV) are used to indicate what is visible to a user.
- the word “occlude” is used herein to mean that a pixel of a virtual element is mixed with an image of another object to change the way the object is perceived by a viewer.
- this can be done through use of a compositing process to mix the two images, a Z-buffer technique to remove elements of the image that are hidden from view, a painter's algorithm to render closer objects later in the rendering process, or any other technique that can replace a pixel of the image of the real-world object with a different pixel value generated from any blend of real-world object pixel value and an HR system determined pixel value.
- the virtual object occludes the real-world object if the virtual object is rendered, transparently or opaquely, in the line of sight of the user as they view the real-world object.
- the terms “occlude”, “transparency”, “rendering” and “overlay” are used to denote the mixing or blending of new pixel values with existing object pixel values in an HR display.
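The pixel mixing described above can be sketched as a simple alpha composite. This is an illustrative example only, not a method specified by the patent; the linear blend function and 0-255 channel convention are assumptions:

```python
def composite_pixel(virtual, real, alpha):
    """Blend a virtual pixel over a real-world pixel.

    alpha = 1.0 renders the virtual element opaquely (full occlusion);
    alpha = 0.0 leaves the real-world pixel untouched.
    Each pixel is an (r, g, b) tuple with 0-255 channel values.
    """
    return tuple(round(alpha * v + (1.0 - alpha) * r)
                 for v, r in zip(virtual, real))

# A half-transparent red overlay element over a grey real-world pixel:
print(composite_pixel((255, 0, 0), (128, 128, 128), 0.5))  # (192, 64, 64)
```

Any of the techniques listed above (compositing, Z-buffering, painter's algorithm) ultimately resolves to producing a blended pixel value of this general form.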
- a sensor may be mounted on or near the display, on the viewer's body, or be remote from the user.
- Remote sensors may include, but are not limited to, fixed sensors attached in an environment, sensors attached to robotic extensions, sensors attached to autonomous or semi-autonomous drones, or sensors attached to other persons.
- Data from the sensors may be raw or filtered.
- Data from the sensors may be transmitted wirelessly or using a wired connection.
- Sensors used by some embodiments of HR systems include, but are not limited to, a camera that captures images in the visible spectrum, an infrared depth camera, a microphone, a sound locator, a Hall effect sensor, an air-flow meter, a fuel level sensor, an oxygen sensor, an electronic nose, a gas detector, an anemometer, a mass flow sensor, a Geiger counter, a gyroscope, an infrared temperature sensor, a flame detector, a barometer, a pressure sensor, a pyrometer, a time-of-flight camera, radar, or lidar.
- Sensors in some HR system embodiments that may be attached to the user include, but are not limited to, a biosensor, a biochip, a heartbeat sensor, a pedometer, a skin resistance detector, or skin temperature detector.
- the display technology used by an HR system embodiment may include any method of projecting an image to an eye.
- Conventional technologies include, but are not limited to, cathode ray tube (CRT), liquid crystal display (LCD), light emitting diode (LED), plasma or organic LED (OLED) screens, or projectors based on those technologies or digital micromirror devices (DMD).
- virtual retina displays such as direct drawing on the eye's retina using a holographic grating, may be used.
- direct machine to brain interfaces may be used in the future.
- the display of an HR system may also be an HMD or a separate device, such as, but not limited to, a hand-held mobile phone, a tablet, a fixed monitor or a TV screen.
- connection technology used by an HR system may include any physical link and associated protocols, such as, but not limited to, wires, transmission lines, solder bumps, near-field connections, infra-red connections, or radio frequency (RF) connections such as cellular, satellite or Wi-Fi® (a registered trademark of the Wi-Fi Alliance).
- Virtual connections, such as software links, may also be used to connect to external networks and/or external computing resources.
- aural stimuli and information may be provided by a sound system.
- the sound technology may include monaural, binaural, or multi-channel systems.
- a binaural system may include a headset or another two-speaker system but may also include systems with more than two speakers directed to the ears.
- the sounds may be presented as 3D audio, where each sound has a perceived position in space, achieved by using reverberation and head-related transfer functions to mimic how sounds change as they move in a particular space.
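A full 3D audio pipeline uses reverberation and head-related transfer functions, as noted above; a much cruder stand-in that still gives each sound a perceived left-right position is constant-power stereo panning. The following sketch is an illustration under that simplification, not the patent's method:

```python
import math

def pan_gains(azimuth_deg):
    """Constant-power stereo panning: a crude stand-in for a full
    head-related transfer function.

    azimuth_deg is the sound's horizontal angle relative to the
    listener, from -90 (full left) to +90 (full right).
    Returns (left_gain, right_gain); left^2 + right^2 == 1, so the
    perceived loudness stays constant as the sound moves.
    """
    theta = (azimuth_deg + 90.0) / 180.0 * (math.pi / 2.0)
    return math.cos(theta), math.sin(theta)

# A sound directly ahead is split equally between both ears:
left, right = pan_gains(0)  # both approximately 0.707
```

Each ambient sound sample would be multiplied by these per-ear gains before mixing into the binaural output.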
- objects in the display may move.
- the movement may be due to the user moving within the environment, for example walking, crouching, turning, or tilting the head.
- the movement may be due to an object moving, for example a dog running away, a car coming towards the user, or a person entering the FOV.
- the movement may also be due to an artificial movement, for example the user moving an object on a display or changing the size of the FOV.
- the motion may be due to the user deliberately distorting all or part of the FOV, for example adding a virtual fish-eye lens.
- all motion is considered relative; any motion may be resolved to a motion from a single frame of reference, for example the user's viewpoint.
- the perspective of any generated object overlay may be corrected so that it changes with the shape and position of the associated real-world object. This may be done with any conventional point-of-view transformation based on the angle of the object from the viewer; note that the transformation is not limited to simple linear or rotational functions, with some embodiments using non-Abelian transformations. It is contemplated that motion effects, for example blur or deliberate edge distortion, may also be added to a generated object overlay.
- images from cameras may be processed before algorithms are executed.
- Algorithms used after image processing for embodiments disclosed herein may include, but are not limited to, object recognition, motion detection, camera motion and zoom detection, light detection, facial recognition, text recognition, or mapping an unknown environment.
- the image processing may also use conventional filtering techniques, such as, but not limited to, static, adaptive, linear, non-linear, and Kalman filters. Deep-learning neural networks may be trained in some embodiments to mimic functions which are hard to create algorithmically. Image processing may also be used to prepare the image, for example by reducing noise, restoring the image, edge enhancement, or smoothing.
- objects may be detected in the FOV of one or more cameras.
- Objects may be detected by using conventional algorithms, such as, but not limited to, edge detection, feature detection (for example surface patches, corners and edges), greyscale matching, gradient matching, pose consistency, or database look-up using geometric hashing.
- Genetic algorithms and trained neural networks using unsupervised learning techniques may also be used in embodiments to detect types of objects, for example people, dogs, or trees.
- detection of an object may be performed on a single frame of a video stream, although techniques using multiple frames are also envisioned.
- Advanced techniques such as, but not limited to, Optical Flow, camera motion, and object motion detection may be used between frames to enhance object recognition in each frame.
- rendering the object may be done by the HR system embodiment using databases of similar objects, the geometry of the detected object, or how the object is lit, for example specular reflections or bumps.
- the locations of objects may be generated from maps and object recognition from sensor data.
- Mapping data may be generated on the fly using conventional techniques, for example the Simultaneous Location and Mapping (SLAM) algorithm used to estimate locations using Bayesian methods, or extended Kalman filtering which linearizes a non-linear Kalman filter to optimally estimate the mean or covariance of a state (map), or particle filters which use Monte Carlo methods to estimate hidden states (map).
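The Kalman filtering mentioned above can be illustrated with a single scalar correction step, as might be used to fuse a predicted user location with a noisy position measurement. This is a minimal sketch of the standard update equations, not the extended Kalman filter or particle filter a full SLAM system would use:

```python
def kalman_update(est, var, measurement, meas_var):
    """One scalar Kalman-filter correction step.

    est, var: prior estimate of a position coordinate and its variance.
    measurement, meas_var: a noisy observation and its variance.
    Returns the posterior (estimate, variance); the posterior variance
    is always smaller than the prior, reflecting the information gained.
    """
    gain = var / (var + meas_var)          # Kalman gain in [0, 1]
    new_est = est + gain * (measurement - est)
    new_var = (1.0 - gain) * var
    return new_est, new_var

# Prior at 0.0 with variance 1.0, measurement 2.0 with equal variance:
# the posterior splits the difference at 1.0 with variance 0.5.
print(kalman_update(0.0, 1.0, 2.0, 1.0))
```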
- the locations of objects may also be determined a priori, using techniques such as, but not limited to, reading blueprints, reading maps, receiving GPS locations, receiving relative positions to a known point (such as a cell tower, access point, or other person) determined using depth sensors, WiFi time-of-flight, or triangulation to at least three other points.
- Gyroscope sensors on or near the HMD may be used in some embodiments to determine head position and to generate relative motion vectors which can be used to estimate location.
- sound data from one or more microphones may be processed to detect specific sounds. Sounds that might be identified include, but are not limited to, human voices, glass breaking, human screams, gunshots, explosions, door slams, or a sound pattern a particular machine makes when defective.
- Gaussian Mixture Models and Hidden Markov Models may be used to generate statistical classifiers that are combined and looked up in a database of sound models.
- One advantage of using statistical classifiers is that sounds can be detected more consistently in noisy environments.
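The statistical classification described above can be sketched by scoring a feature vector against a database of per-sound Gaussian models and picking the best match. This simplifies the Gaussian Mixture Models and Hidden Markov Models to a single diagonal Gaussian per sound; the feature names and model values below are hypothetical:

```python
import math

def gaussian_log_likelihood(features, means, variances):
    """Log-likelihood of a feature vector under a diagonal Gaussian."""
    ll = 0.0
    for x, m, v in zip(features, means, variances):
        ll += -0.5 * (math.log(2 * math.pi * v) + (x - m) ** 2 / v)
    return ll

def classify_sound(features, model_db):
    """Return the label of the best-matching sound model.

    model_db maps a sound label to (means, variances) for its
    single-Gaussian model -- a simplification of the GMM/HMM
    classifiers described in the text.
    """
    return max(model_db,
               key=lambda lbl: gaussian_log_likelihood(features, *model_db[lbl]))

models = {  # hypothetical two-feature models (e.g. energy, pitch)
    "glass breaking": ([0.9, 0.8], [0.05, 0.05]),
    "door slam":      ([0.7, 0.2], [0.05, 0.05]),
}
print(classify_sound([0.85, 0.75], models))  # glass breaking
```

Because the score is a likelihood rather than an exact match, a noisy observation still selects the nearest model, which is the robustness advantage noted above.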
- Eye tracking of one or both of the viewer's eyes may be performed. Eye tracking may be used to measure the point of the viewer's gaze.
- the position of each eye is known, providing a reference frame for determining head-to-eye angles, so the position and rotation of each eye can be used to estimate the gaze point.
- Eye position determination may be done using any suitable technique and/or device, including, but not limited to, devices attached to an eye, tracking the eye position using infra-red reflections, for example Purkinje images, or using the electric potential of the eye detected by electrodes placed near the eye which uses the electrical field generated by an eye independently of whether the eye is closed or not.
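Once eye position and rotation are known, the gaze point can be estimated by projecting a gaze ray into the scene. The sketch below intersects the ray with a vertical plane at a known depth; this is a hedged simplification (real gaze estimation combines both eyes and head pose, and objects are rarely planar):

```python
import math

def gaze_point(eye_pos, yaw_deg, pitch_deg, plane_depth):
    """Project a gaze ray onto a plane at z = plane_depth ahead of the user.

    eye_pos: (x, y, z) position of the eye.
    yaw_deg: left/right rotation of the eye; pitch_deg: up/down rotation.
    Returns the (x, y) point where the gaze ray crosses the plane.
    """
    yaw = math.radians(yaw_deg)
    pitch = math.radians(pitch_deg)
    ex, ey, ez = eye_pos
    t = plane_depth - ez  # distance to the plane along the z axis
    return (ex + t * math.tan(yaw), ey + t * math.tan(pitch))

# Looking straight ahead hits the plane directly in front of the eye;
# a 45-degree yaw at 1 m depth lands 1 m to the side.
print(gaze_point((0.0, 0.0, 0.0), 45.0, 0.0, 1.0))
```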
- HR imagery is becoming increasingly common and is making its way from entertainment and gaming into industrial and commercial applications.
- Examples of systems that may find HR imagery useful include aiding a person doing a task, for example repairing machinery, testing a system, or responding to an emergency.
- HR imagery might also be used to provide information to a user.
- This information may be associated with real objects in the environment or may be related to the environment as a whole, for example an ambient or average value.
- the information to be provided to the user is unrelated to the real environment they are working in. Providing these various types of information in a way that the user can readily understand, without confusing or distracting the user or obscuring details that the user needs, can be a challenge.
- using HR technology, information can be presented to a user through sounds in a non-intrusive and natural way.
- sound information can be mixed with the current sound presentation as ambient or background noise, which adds information without being distracting.
- the HR system has capabilities that can determine which object is being viewed, and in many situations can determine characteristics of the object currently being viewed. By combining these features, simple information can be presented to the user without interfering with the operation of the HR system and without removing or obscuring any part of the information currently being delivered.
- gaze detection hardware may determine the gaze direction of a user.
- object recognition techniques from a video feed
- one or more objects in the current field-of-view can be detected, including the boundary edges or convex hull of an object or group of objects.
- the HR system can determine which object is currently being viewed. Note that selecting a current object over periods of time, for example using different frames of video, can also offer other pertinent information, such as: a time that a user starts looking at an object; a time that a user stops looking at an object; and how long the user looks at an object.
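The per-frame gaze timing described above might be collected as follows. This is an illustrative sketch, not the patent's implementation; the `(timestamp, object)` sample format is an assumption:

```python
def track_gaze_events(frames):
    """Turn per-frame gaze samples into gaze intervals.

    frames: (timestamp, object_or_None) pairs, one per video frame,
    where the second element names the object the gaze intersects.
    Returns (object, start_time, stop_time) tuples, capturing when the
    user starts and stops looking at each object; dwell time is
    stop_time - start_time.
    """
    events, current, started, last_t = [], None, None, None
    for t, obj in frames:
        if obj != current:
            if current is not None:
                events.append((current, started, t))
            current, started = obj, t
        last_t = t
    if current is not None:  # close out a gaze still in progress
        events.append((current, started, last_t))
    return events

samples = [(0.0, None), (0.5, "hydrant"), (1.0, "hydrant"),
           (1.5, None), (2.0, None)]
print(track_gaze_events(samples))  # [('hydrant', 0.5, 1.5)]
```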
- detecting an object does not have to be performed using a current video feed of the field-of-view.
- An object may be detected using other methods, such as, but not limited to, blueprints, maps, or object positions determined by other actors for example other personnel, drones or fixed sensors in the environment.
- the selected object may not be visible to the user, perhaps being obscured by a physical barrier, smoke, or lack of illumination.
- the information may be any pertinent information, such as, but not limited to, a type or class (e.g. vehicle, building, wall, hydrant, or person), a physical property which may have been received from an additional sensor coupled to the HR system (e.g. temperature, pressure, mass, or velocity), a proximal hazard, an identity of something near, behind or inside the object (e.g. water pressure, amount of fuel, or number of passengers), or a safe path to the object.
- the information is often not directly visible to the user.
- the characteristic information can be used by the HR system to choose an associated sound.
- the HR system may start the sound playing when the user starts to look at the object.
- the sound may be an excerpt which plays for a predetermined time which may vary according to the characteristic.
- the sound continues to play until the user stops looking at the object or makes a specific gesture, such as, but not limited to, blinking, operating a menu, or quickly glancing away from and then back to the object.
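The start/stop behavior just described can be sketched as a small controller. This is a hypothetical illustration; the class name and the representation of "playing a sound" as an attribute are assumptions, with actual audio output standing in for the `playing` field:

```python
class AmbientSoundController:
    """Start an object's ambient sound when the user looks at it and
    stop it when the user looks away or makes a dismissal gesture."""

    def __init__(self, sound_map):
        self.sound_map = sound_map  # object type -> associated sound
        self.playing = None         # sound currently being mixed in

    def update(self, gazed_object, dismissed=False):
        """Call once per frame with the currently gazed-at object
        (or None) and whether a dismissal gesture was detected."""
        if dismissed or gazed_object not in self.sound_map:
            self.playing = None
        else:
            self.playing = self.sound_map[gazed_object]
        return self.playing

ctrl = AmbientSoundController({"hydrant": "running water"})
print(ctrl.update("hydrant"))                   # running water
print(ctrl.update("hydrant", dismissed=True))   # None
```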
- the HR system may make the sound non-intrusive by mixing it into the background without degrading the current audio feed to the user.
- the sound is presented in a binaural system as having no apparent origin, thus emphasizing the background nature.
- the sound is presented in a binaural system as having a position in the 3D audio landscape, for example coming from the object, thus reducing any potential for distraction.
- the sound may be any sound or music. Since the reaction to certain sounds and music is largely a personal experience, it is contemplated that the sound and information association may be customized by the user. This may be done using any conventional technique such as, but not limited to, a menu, a wizard, or importing a configuration. This allows the user to select a sound or music that has an immediate, significant, and natural meaning to them.
- FIG. 1A shows a user 100 wearing an embodiment of a head-mounted system 130 .
- a head-mounted system 130 may include a visible camera 132 , infra-red depth camera 134 and an internal infra-red eye gaze tracking system 136 .
- the eye gaze tracking system 136 may be located inside of the head-mounted system 130 and may not be visible to an external observer.
- Other embodiments may have different sensors, such as any sensor or combination of sensors described above.
- the head-mounted system 130 receives data from sensors 132 , 134 and determines that fire hydrant 120 is an object in the FOV.
- Head-mounted system 130 receives data from eye gaze tracking system 136 and determines that the gaze position 111 does not intersect the fire hydrant 120 and so no ambient sound is added to sound subsystem 138 .
- FIG. 1B shows the user 100 wearing the embodiment of the head-mounted system 130 .
- user 100 is looking at fire hydrant 120 at the time shown in FIG. 1B .
- the head-mounted system 130 receives data from sensors 132 , 134 and determines that fire hydrant 120 is an object in the FOV.
- Head-mounted system 130 receives data from the eye gaze tracking system 136 and determines that the gaze position 113 intersects the fire hydrant 120 and so an associated ambient sound is played for the user 100 by sound subsystem 138 , such as, but not limited to, a music excerpt selected by the user, the sound of a river, the sound of a faucet running, or the sound of the sea.
- FIG. 2 is a block diagram of an embodiment of an HR system 200 which may have some components implemented as part of a head-mounted assembly.
- the HR system 200 may be considered a computer system that can be adapted to be worn on the head, carried by hand, or otherwise attached to a user.
- a structure 205 is included which is adapted to be worn on the head of a user.
- the structure 205 may include straps, a helmet, a hat, or any other type of mechanism to hold the HR system on the head of the user as an HMD.
- the HR system 200 also includes a display 250 coupled to position the display 250 in a field-of-view (FOV) of the user.
- the structure 205 may position the display 250 in a field of view of the user.
- the display 250 may be a stereoscopic display with two separate views of the FOV, such as view 252 for the user's left eye, and view 254 for the user's right eye.
- the two views 252 , 254 may be shown as two images on a single display device or may be shown using separate display devices that are included in the display 250 .
- the display 250 may be transparent, such as in an augmented reality (AR) HMD.
- the view of the FOV of the real-world as seen through the display 250 by the user is composited with virtual objects that are shown on the display 250 .
- the virtual objects may occlude real objects in the FOV as overlay elements and may themselves be transparent or opaque, depending on the technology used for the display 250 and the rendering of the virtual object.
- a virtual object, such as an overlay element may be positioned in a virtual space that could be two-dimensional or three-dimensional, depending on the embodiment, to be in the same position as an associated real object in real space.
- two different views of the overlay element may be rendered and shown in two different relative positions on the two views 252 , 254 , depending on the disparity as defined by the inter-ocular distance of a viewer.
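The disparity between the two rendered views can be illustrated with a simple pinhole-camera model. The projection model and the example numbers below are illustrative assumptions, not values from the patent:

```python
def pixel_disparity(inter_ocular_m, depth_m, focal_px):
    """Horizontal offset, in pixels, between the left-eye and
    right-eye renderings of an overlay element at a given depth,
    using a pinhole-camera projection.

    inter_ocular_m: distance between the viewer's eyes, in meters.
    depth_m: distance from the viewer to the overlay element.
    focal_px: focal length of the virtual camera, in pixels.
    """
    return inter_ocular_m * focal_px / depth_m

# A typical 0.063 m inter-ocular distance and an 800 px focal length
# give a disparity of about 25.2 px for an object 2 m away; farther
# objects produce smaller disparity, which is what makes them appear
# deeper in the stereoscopic display.
```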
- the HR system 200 includes one or more sensors in a sensing block 240 to sense at least a portion of the FOV of the user by gathering the appropriate information for that sensor, for example visible light from a visible light camera, from the FOV of the user. Any number of any type of sensor, including sensors described previously herein, may be included in the sensor block 240 , depending on the embodiment.
- the sensor block 240 includes an eye gaze detection subsystem 242 .
- the HR system 200 may also include an I/O block 220 to allow communication with external devices.
- the I/O block 220 may include one or both of a wireless network adapter 222 coupled to an antenna 224 and a network adapter 226 coupled to a wired connection 228 .
- the wired connection 228 may be plugged into a portable device, for example a mobile phone, or may be a component of an umbilical system such as used in extreme environments.
- the HR system 200 includes a sound processor 260 which takes input from one or more microphones 262 .
- the microphones 262 may be attached to the user.
- External microphones, for example those attached to an autonomous drone, may send sound data samples through wireless or wired connections to I/O block 220 instead of, or in addition to, the sound data received from the microphones 262 .
- the sound processor 260 may generate sound data which is transferred to one or more speakers 264 , which are a type of sound reproduction device.
- the generated sound data may be analog samples or digital values. If more than one speaker 264 is used, the sound processor may generate or simulate 2D sound placement.
- a first speaker may be positioned to provide sound to the left ear of the user and a second speaker may be positioned to provide sound to the right ear of the user. Together, the first speaker and the second speaker may provide binaural sound to the user.
- the HR system 200 includes a stimulus block 270 .
- the stimulus block 270 is used to provide other stimuli to expand the HR system user experience.
- Embodiments may include numerous haptic pads attached to the user that provide a touch stimulus.
- Embodiments may also include other stimuli, such as, but not limited to, changing the temperature of a glove, changing the moisture level or breathability of a suit, or adding smells to a breathing system.
- the HR system 200 may include a processor 210 and one or more memory devices 230 , which may also be referred to as a tangible medium or a computer readable medium.
- the processor 210 is coupled to the display 250 , the sensing block 240 , the memory 230 , I/O block 220 , sound block 260 , and stimulus block 270 , and is configured to execute the instructions 232 encoded on (i.e. stored in) the memory 230 .
- the HR system 200 may include an article of manufacture comprising a tangible medium 230 , that is not a transitory propagating signal, encoding computer-readable instructions 232 that, when applied to a computer system 200 , instruct the computer system 200 to perform one or more methods described herein, thereby configuring the processor 210 .
- while the processor 210 included in the HR system 200 may be able to perform methods described herein autonomously, in some embodiments processing facilities outside of those provided by the processor 210 included inside of the HR system 200 may be used to perform one or more elements of methods described herein.
- the processor 210 may receive information from one or more of the sensors 240 and send that information through the wireless network adapter 222 to an external processor, such as a cloud processing system or an external server. The external processor may then process the sensor information to identify an object in the FOV and send information about the object, such as its shape and location in the FOV, to the processor 210 through the wireless network adapter 222 .
- the instructions 232 may instruct the HR system 200 to detect an object in a field-of-view (FOV) using at least one sensor 240 coupled to the computer system 200 and establish a first boundary of the object.
- the instructions 232 may further instruct the HR system 200 to determine the boundaries of a second object or of other objects in the field of view.
- the instructions 232 may further instruct the HR system 200 to determine an eye gaze direction using at least one sensor 240 , such as the eye gaze detection subsystem 242 , coupled to the computer system 200 .
- the instructions 232 may further instruct the HR system 200 to determine whether the eye gaze direction intersects, or is within, the first object boundary and, if within the boundary, determine whether there is an associated ambient sound with that object.
- the instructions 232 instruct the HR system 200 to determine the object type within the boundary and use the object type as an index to lookup in a table of associated ambient sounds. If an association is present, the ambient sound may be mixed with other sound output by sound processor 260 and sent to speakers 264 to play the ambient sound to the user.
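The gaze-intersection test and table lookup just described can be sketched end to end. Representing each object boundary as an axis-aligned bounding box is a simplifying assumption (the text also mentions boundary edges and convex hulls), and the object and sound names are hypothetical:

```python
def ambient_sound_for_gaze(gaze_xy, objects, sound_table):
    """Return the ambient sound to play for the current gaze point.

    gaze_xy: (x, y) gaze position in the field-of-view.
    objects: (object_type, (x0, y0, x1, y1)) entries, one bounding
             box per detected object.
    sound_table: object type -> associated ambient sound; the object
             type acts as the lookup index described in the text.
    Returns the sound name, or None if the gaze intersects no object
    or the object type has no associated sound.
    """
    gx, gy = gaze_xy
    for obj_type, (x0, y0, x1, y1) in objects:
        if x0 <= gx <= x1 and y0 <= gy <= y1:
            return sound_table.get(obj_type)
    return None

objects = [("hydrant", (100, 200, 150, 300))]
sound_table = {"hydrant": "sound of a river"}
print(ambient_sound_for_gaze((120, 250), objects, sound_table))  # sound of a river
print(ambient_sound_for_gaze((10, 10), objects, sound_table))    # None
```

A returned sound would then be mixed into the output of sound processor 260 at background level rather than replacing the current audio feed.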
- the processor 210 may be configured to detect a gaze direction of an eye of the wearer of the HR system 200 using the eye gaze detection subsystem 242 and to select an object based on the gaze direction. The processor then obtains information related to the object and chooses a sound based on the information. A digital representation of the sound is then rendered to the user through at least one sound reproduction device, such as the speaker 264.
- the HR system 200 includes a head-mounted display (HMD) with a transparent portion so that the user can see a real-world object through the transparent portion of the display 250 .
- the processor 210 may be further configured to receive sensor data related to the object from the sensor 240 and determine positions of one or more items based on the sensor data, the one or more items including the object.
- the processor may also be configured to select the object from the one or more items based on the determined positions of the one or more items and the gaze direction.
- These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
- the computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
- each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s).
- the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
- FIG. 3 is a flowchart 300 of an embodiment of a method for presenting an ambient sound associated with an object. The method begins at start box 301.
- the user's gaze direction is determined at input box 302.
- at decision box 304, if the gaze is on an object that has an associated ambient sound, the flow moves to decision box 306; if the gaze is not on an object with an associated ambient sound, the flow returns to input box 302.
- at decision box 306, if the associated ambient sound is an excerpt, the flow moves to process box 310, which plays the excerpt for a pre-determined period; if the associated ambient sound is not an excerpt, the flow moves to process box 316, which starts playing an ambient sound.
- after the excerpt plays, the user's gaze direction is determined at input box 311.
- at decision box 312, if the user's gaze is still on the object, the gaze direction is repeatedly determined at input box 311; once the gaze is no longer on the object, the flow returns to the start box 301.
- after the ambient sound starts playing, the user's gaze direction is determined at input box 317.
- the gaze direction is repeatedly determined at input box 317 until the gaze is no longer on the object, when the flow returns to the start box 301.
- the excerpt or ambient sound being played may change while playing for any reason, for example in response to a changing status of the object currently located in the eye gaze direction.
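The polling loop of this flowchart can be simulated with a short sketch. The semantics below are assumptions for the example (one boolean gaze sample per polling interval, and durations measured in polling intervals), not the actual implementation:

```python
def sound_playing_intervals(gaze_samples, is_excerpt, excerpt_len=3):
    """Simulate how long the sound is audible, in polling intervals.
    gaze_samples: iterable of booleans, True while the gaze is on
    the object.  An excerpt stops after excerpt_len intervals even
    if the gaze stays on the object; a non-excerpt ambient sound
    plays until the gaze leaves the object."""
    audible = 0
    for i, on_object in enumerate(gaze_samples):
        if not on_object:
            break  # gaze left the object: flow returns to start
        if not is_excerpt or i < excerpt_len:
            audible += 1
    return audible
```

For example, with a gaze that stays on the object for five intervals, a continuous ambient sound plays for all five, while a three-interval excerpt stops early even though the gaze continues to be polled.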
- FIG. 4 is a flowchart 400 of an embodiment of a method for presenting sound to a user.
- the method starts 401 and a gaze direction of the user's eye is detected 402 using, in some embodiments, gaze detection hardware integrated into a head-mounted display (HMD).
- the gaze detection hardware may determine the gaze direction using one eye or both eyes of the user, depending on the embodiment.
- the flowchart 400 continues by selecting 403 an object based on the gaze direction.
- a camera or other sensor may capture an image of a field-of-view (FOV) of the user and detect one or more objects in the image.
- a boundary for an object or group of objects may be defined and the gaze direction used to project a vector from the eye of the user to determine if the gaze direction intersects with a boundary of an object. If an intersection occurs, the object may be selected.
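One conventional way to test whether the projected gaze vector intersects an object boundary is the standard slab test against an axis-aligned bounding box. A sketch, assuming 3-tuple vectors and an axis-aligned boundary (the disclosure does not mandate this particular representation):

```python
def gaze_hits_box(eye, direction, box_min, box_max):
    """Ray/axis-aligned-box intersection (slab test).  Returns True
    if a ray from `eye` along `direction` intersects the box defined
    by corner points box_min and box_max.  All arguments are
    3-tuples of floats."""
    t_near, t_far = 0.0, float("inf")
    for e, d, lo, hi in zip(eye, direction, box_min, box_max):
        if abs(d) < 1e-12:
            # Ray parallel to this slab: must already be inside it.
            if e < lo or e > hi:
                return False
            continue
        t1, t2 = (lo - e) / d, (hi - e) / d
        if t1 > t2:
            t1, t2 = t2, t1
        t_near, t_far = max(t_near, t1), min(t_far, t2)
        if t_near > t_far:
            return False
    return True
```

If the test succeeds for an object's box, that object is a candidate for selection; when several boxes are hit, the one with the smallest entry distance would normally be chosen.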
- while the object may be visible to the user from their current position, in some embodiments the object may be hidden from the view of the user.
- the location of the object may be determined using sensors that are not located on or near the user or the position of the object may be known from maps, blueprints, a radio-frequency beacon transmitted by the object, or any other suitable technique.
- the method may show the object to the user on a display and use a position of the display relative to the user's eye to detect the gaze position of the user's eye.
- the object may be a real-world object or a virtual object, depending on the embodiment. If the object is a real-world object, it may be shown to the user on the display by transmitting light reflected by, or generated by, the real object through a transparent portion of the display to the user's eye.
- the flowchart 400 continues with obtaining 404 information related to the object.
- the information related to the object may include a type or class of the object.
- the information may be based on a physical property of the object, such as a temperature, a pressure, a state of matter, a mass, or a velocity.
- the physical property of the object may be sensed 442 using a hardware sensor, the sensor data based on that sensing received 443, and the information related to the object determined 444 based on the sensor data.
- the information may include a hazard related to the object, an identity of something near the object, an identity of something behind the object, an identity of something inside of the object, or an indication of a safe path to the object.
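The sensor-to-information step (442-444) amounts to mapping a raw reading to a piece of information about the object. A sketch with illustrative, assumed thresholds and labels (the disclosure does not specify any particular values):

```python
def information_from_sensor(reading, kind):
    """Map a raw sensor reading to information about the object.
    The sensor kinds, thresholds, and labels are hypothetical
    examples chosen for illustration."""
    if kind == "temperature_c":
        if reading >= 60.0:
            return "hazard:hot"       # e.g. triggers a sizzling sound
        if reading <= -10.0:
            return "hazard:cold"
        return "temperature:normal"
    if kind == "velocity_mps":
        return "hazard:moving" if reading > 0.5 else "stationary"
    return "unknown"
```

The returned label would then drive the sound choice in the next step of the method.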
- a sound is then chosen 405 based on the information and the sound may be selected to convey the information to the user.
- the sound can include music or any other type of sound.
- an association between the information and the sound is based on a setting provided by the user.
- the information being conveyed by the sound is not visible to the user's eye by viewing the object.
- a digital representation of the sound is retrieved 406 .
- the digital representation may be in any form, compressed (lossy or lossless), or uncompressed, and encoded in any format, including, but not limited to, a MP3 file, pulse-code modulated data, or advanced audio codec (AAC) data.
- the digital representation may be retrieved from any available location, including, but not limited to, local memory, an optical disc, or a remote server.
- the flowchart 400 continues with rendering 407 the digital representation of the sound to the user. If the digital representation is retrieved over a network connection, the digital representation may be downloaded and stored locally before it is rendered, or it may be streamed from the remote source in real-time as it is rendered.
- the digital representation of the sound continues to be rendered, or played, for a predetermined period of time. In other embodiments, the rendering continues for a period of time determined based on the information.
- the start and/or stop of the rendering may be controlled by the user.
- a predetermined eye gesture performed by the user may be detected 472 and, depending on the context and the type of gesture, the rendering may be started or stopped 474.
- Eye gestures that may be used include, but are not limited to, a change in the gaze direction of the user's eye from a first position pointing away from the object to a second position pointing at the object, an eye blink by the user while the gaze direction of the user's eye is pointing at the object, or a change in the gaze direction of the user's eye from the second position pointing at the object to a third position pointing away from the object.
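The gesture-driven start/stop logic above can be sketched as a small state update. The specific rules (gaze-on starts, blink-while-on stops, gaze-off stops) are assumptions drawn from the listed example gestures, not a definitive policy:

```python
def update_playback(playing, prev_gaze_on_object, gaze_on_object, blinked):
    """One step of an assumed eye-gesture policy.  Returns the new
    playing state given the previous and current gaze samples and
    whether a blink was detected this step."""
    if not prev_gaze_on_object and gaze_on_object:
        return True   # gaze moved onto the object: start rendering
    if gaze_on_object and blinked:
        return False  # blink while looking at the object: stop
    if prev_gaze_on_object and not gaze_on_object:
        return False  # gaze moved off the object: stop
    return playing    # no gesture: keep current state
```

An embodiment could of course invert any of these rules, for example using a blink to start rather than stop the sound.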
- the rendering of the digital representation of the sound may be performed to make the sound non-intrusive to the user. This may be accomplished in many different ways, such as mixing the sound to be a background sound with other sounds or presenting the sound to the user as a non-directional sound. In some embodiments the sound may be presented to the user as a directional sound originating at the object.
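Mixing the chosen sound as a quiet, non-directional background under the primary audio is one way to keep it non-intrusive. A minimal sketch, assuming floating-point samples in [-1.0, 1.0] and a fixed ambient gain (both assumptions for the example):

```python
def mix_background(primary, ambient, ambient_gain=0.25):
    """Mix an ambient sound under the primary audio at a reduced
    level, clipping the result to the legal sample range.  Inputs
    are equal-length lists of samples in [-1.0, 1.0]."""
    return [max(-1.0, min(1.0, p + ambient_gain * a))
            for p, a in zip(primary, ambient)]
```

A directional presentation would instead feed the ambient signal through a panner before mixing, so the sound appears to originate at the object.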
- Embodiments may be useful in a variety of applications and in a variety of environments. Non-limiting examples of environments where embodiments may be used are described below.
- One example application of an embodiment is a virtual guide dog for a visually impaired, but not completely blind, individual.
- An HR HMD may be used as a virtual guide dog and as the user looks towards an object, the sound played can provide additional information about the object that the visually impaired user may not be able to discern using only their eyes. For example, if there is an object which is very hot, a sizzling sound may be played in response to the visually impaired user looking toward the object, making it clear that there is potential danger even though the object cannot be recognized.
- An environment where embodiments may be useful is immersive entertainment venues where there are multiple performances occurring simultaneously throughout the venue. Several different performances may be visible to a user from a single vantage point, making it difficult to discern what sounds are coming from which performance.
- An embodiment may be used to detect which performance the user is gazing at and then emphasize the sound from that performance.
- embodiments may be used at an orchestra concert to allow the sound from a particular performer or instrument to be emphasized for the user of the embodiment. In some cases, this emphasized sound may not be enabled until the user performs an eye gesture to start it, such as staring at the performer for longer than a predetermined period. Note that emphasizing the sound may be done by diminishing sounds based on a direction but may also be done by selecting audio tracks that are delivered to the HR system from, for example, a centralized console or mixing desk.
- a police officer may utilize an embodiment during their patrol. As the police officer moves through their environment, different automobiles may be automatically identified from their license plate and information about the car, such as whether or not it is stolen, illegally parked, or its tags are expired, may be provided by different ambient sounds played for the police officer. For example, a siren sound may be played if the car is stolen, an alarm clock sound played if the tags are expired, and an excerpt of music in a minor key played if the car is illegally parked.
- aspects of the various embodiments may be embodied as a system, device, method, or computer program product apparatus. Accordingly, elements of the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, or the like) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “server,” “circuit,” “module,” “client,” “computer,” “logic,” or “system,” or other terms. Furthermore, aspects of the various embodiments may take the form of a computer program product embodied in one or more computer-readable medium(s) having computer program code stored thereon.
- a computer-readable storage medium may be embodied as, for example, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or other like storage devices known to those of ordinary skill in the art, or any suitable combination of computer-readable storage mediums described herein.
- a computer-readable storage medium may be any tangible medium that can contain, or store a program and/or data for use by or in connection with an instruction execution system, apparatus, or device.
- a computer data transmission medium such as a transmission line, a coaxial cable, a radio-frequency carrier, and the like, may also be able to store data, although any data storage in a data transmission medium can be said to be transitory storage. Nonetheless, a computer-readable storage medium, as the term is used herein, does not include a computer data transmission medium.
- Computer program code for carrying out operations for aspects of various embodiments may be written in any combination of one or more programming languages, including object oriented programming languages such as Java, Python, C++, or the like, conventional procedural programming languages, such as the “C” programming language or similar programming languages, or low-level computer languages, such as assembly language or microcode.
- the computer program code, if loaded onto a computer or other programmable apparatus, produces a computer implemented method.
- the instructions which execute on the computer or other programmable apparatus may provide the mechanism for implementing some or all of the functions/acts specified in the flowchart and/or block diagram block or blocks.
- the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server, such as a cloud-based server.
- the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
- the computer program code stored in/on (i.e. embodied therewith) the non-transitory computer-readable medium produces an article of manufacture.
- the computer program code if executed by a processor causes physical changes in the electronic devices of the processor which change the physical flow of electrons through the devices. This alters the connections between devices which changes the functionality of the circuit. For example, if two transistors in a processor are wired to perform a multiplexing operation under control of the computer program code, if a first computer instruction is executed, electrons from a first source flow through the first transistor to a destination, but if a different computer instruction is executed, electrons from the first source are blocked from reaching the destination, but electrons from a second source are allowed to flow through the second transistor to the destination. So a processor programmed to perform a task is transformed from what the processor was before being programmed to perform that task, much like a physical plumbing system with different valves can be controlled to change the physical flow of a fluid.
- the singular forms “a”, “an”, and “the” include plural referents unless the content clearly dictates otherwise.
- the term “or” is generally employed in its sense including “and/or” unless the content clearly dictates otherwise.
- the term “coupled” includes direct and indirect connections. Moreover, where first and second devices are coupled, intervening devices including active devices may be located there between.
Description
- This application is related to U.S. Provisional Application 62/682,424, entitled Display Metaphors, filed Jun. 8, 2018, and to U.S. patent application Ser. No. 16/007,204, entitled Information Display by Overlay on an Object, filed Jun. 13, 2018, both of which are hereby incorporated by reference in their entirety herein for any and all purposes.
- The present subject matter relates to displaying information, and more specifically, to presenting information as an ambient sound associated with an object.
- Many situations require the presentation of information to a user in a way that allows the user to receive the information when it is needed without being distracting or confusing and without obscuring potentially more relevant information. Emergency response is one of many professions where this is important, as the ability to receive the right information at the right time can be a matter of life or death. Traditionally, emergency responders have relied on audio transmissions over a radio for a majority of their information, but that is changing with the advent of widespread wireless digital communication.
- Another new technology that is making its way into the world of emergency responders is digital displays. These displays may be on a handheld device, such as a mobile phone, or on a head-mounted display (HMD), such as a virtual reality (VR) display or an augmented reality (AR) display, which may be integrated into their emergency equipment, such as their helmet. Textual information can be presented to the emergency responder through the display and the information can be updated in real-time through the digital wireless interface from a command center or other information sources.
- The accompanying drawings, which are incorporated in and constitute part of the specification, illustrate various embodiments. Together with the general description, the drawings serve to explain various principles. In the drawings:
- FIG. 1A shows a scene with a user wearing an embodiment of a head-mounted display looking away from an object;
- FIG. 1B shows a scene with a user wearing an embodiment of a head-mounted display looking at an object;
- FIG. 2 is a block diagram of an embodiment of a hybrid reality system;
- FIG. 3 is a flowchart of an embodiment of a method for presenting an ambient sound associated with an object; and
- FIG. 4 is a flowchart of an embodiment of a method for presenting sound to a user.
- In the following detailed description, numerous specific details are set forth by way of examples in order to provide a thorough understanding of the relevant teachings. However, it should be apparent to those skilled in the art that the present teachings may be practiced without such details. In other instances, well-known methods, procedures, and components have been described at a relatively high level, without detail, in order to avoid unnecessarily obscuring aspects of the present concepts. A number of descriptive terms and phrases are used in describing the various embodiments of this disclosure. These descriptive terms and phrases are used to convey a generally agreed-upon meaning to those skilled in the art unless a different definition is given in this specification. Some descriptive terms and phrases are presented in the following paragraphs for clarity.
- Hybrid Reality (HR), as the phrase is used herein, refers to an image that merges real-world imagery with imagery created in a computer, which is sometimes called virtual imagery. While an HR image can be a still image, it can also be a moving image, such as imagery created using a video stream. HR can be displayed by a traditional two-dimensional display device, such as a computer monitor, one or more projectors, or a smartphone screen. HR imagery can also be displayed by a head-mounted display (HMD). Many different technologies can be used in an HMD to display HR imagery. A virtual reality (VR) HMD system may receive images of a real-world object, objects, or scene, and composite those images with a virtual object, objects, or scene to create an HR image. An augmented reality (AR) HMD system may present a virtual object, objects, or scene on a transparent screen which then naturally mixes the virtual imagery with a view of a scene in the real-world. A display which mixes live video with virtual objects is sometimes denoted AR, but for the purposes of this disclosure, an AR HMD includes at least a portion of the display area that is transparent to allow at least some of the user's view of the real-world to be directly viewed through the transparent portion of the AR HMD. The display used by an HR system represents a scene which is a visible portion of the whole environment. As used herein, the terms "scene" and "field of view" (FOV) are used to indicate what is visible to a user.
- The word “occlude” is used herein to mean that a pixel of a virtual element is mixed with an image of another object to change the way the object is perceived by a viewer. In a VR HMD, this can be done through use of a compositing process to mix the two images, a Z-buffer technique to remove elements of the image that are hidden from view, a painter's algorithm to render closer objects later in the rendering process, or any other technique that can replace a pixel of the image of the real-world object with a different pixel value generated from any blend of real-world object pixel value and an HR system determined pixel value. In an AR HMD, the virtual object occludes the real-world object if the virtual object is rendered, transparently or opaquely, in the line of sight of the user as they view the real-world object. In the following description, the terms “occlude”, “transparency”, “rendering” and “overlay” are used to denote the mixing or blending of new pixel values with existing object pixel values in an HR display.
- In some embodiments of HR systems, there are sensors which provide the information used to render the HR imagery. A sensor may be mounted on or near the display, on the viewer's body, or be remote from the user. Remote sensors may include, but are not limited to, fixed sensors attached in an environment, sensors attached to robotic extensions, sensors attached to autonomous or semi-autonomous drones, or sensors attached to other persons. Data from the sensors may be raw or filtered. Data from the sensors may be transmitted wirelessly or using a wired connection.
- Sensors used by some embodiments of HR systems include, but are not limited to, a camera that captures images in the visible spectrum, an infrared depth camera, a microphone, a sound locator, a Hall effect sensor, an air-flow meter, a fuel level sensor, an oxygen sensor, an electronic nose, a gas detector, an anemometer, a mass flow sensor, a Geiger counter, a gyroscope, an infrared temperature sensor, a flame detector, a barometer, a pressure sensor, a pyrometer, a time-of-flight camera, radar, or lidar. Sensors in some HR system embodiments that may be attached to the user include, but are not limited to, a biosensor, a biochip, a heartbeat sensor, a pedometer, a skin resistance detector, or skin temperature detector.
- The display technology used by an HR system embodiment may include any method of projecting an image to an eye. Conventional technologies include, but are not limited to, cathode ray tube (CRT), liquid crystal display (LCD), light emitting diode (LED), plasma or organic LED (OLED) screens, or projectors based on those technologies or digital micromirror devices (DMD). It is also contemplated that virtual retina displays, such as direct drawing on the eye's retina using a holographic grating, may be used. It is also contemplated that direct machine to brain interfaces may be used in the future.
- The display of an HR system may also be an HMD or a separate device, such as, but not limited to, a hand-held mobile phone, a tablet, a fixed monitor or a TV screen.
- The connection technology used by an HR system may include any physical link and associated protocols, such as, but not limited to, wires, transmission lines, solder bumps, near-field connections, infra-red connections, or radio frequency (RF) connections such as cellular, satellite or Wi-Fi® (a registered trademark of the Wi-Fi Alliance). Virtual connections, such as software links, may also be used to connect to external networks and/or external compute resources.
- In many HR embodiments, aural stimuli and information may be provided by a sound system. The sound technology may include monaural, binaural, or multi-channel systems. A binaural system may include a headset or another two-speaker system but may also include systems with more than two speakers directed to the ears. The sounds may be presented as 3D audio, where each sound has a perceived position in space, achieved by using reverberation and head-related transfer functions to mimic how sounds change as they move in a particular space.
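Full 3D audio uses reverberation and head-related transfer functions, but the basic idea of giving a sound a perceived position can be illustrated with constant-power stereo panning. A crude stand-in for HRTF rendering, with the azimuth convention (-90° hard left, +90° hard right) assumed for the example:

```python
import math

def pan_stereo(sample, azimuth_deg):
    """Constant-power pan of a mono sample into (left, right).
    azimuth_deg: -90.0 is hard left, 0.0 is center, +90.0 is hard
    right.  The left/right gains trace a quarter circle so the total
    power stays constant as the sound moves."""
    theta = (azimuth_deg + 90.0) / 180.0 * (math.pi / 2.0)
    return sample * math.cos(theta), sample * math.sin(theta)
```

A binaural HR system would replace this with per-ear filtering (delays and spectral shaping) so the position is perceived in elevation and depth as well as azimuth.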
- In many HR system embodiments, objects in the display may move. The movement may be due to the user moving within the environment, for example walking, crouching, turning, or tilting the head. The movement may be due to an object moving, for example a dog running away, a car coming towards the user, or a person entering the FOV. The movement may also be due to an artificial movement, for example the user moving an object on a display or changing the size of the FOV. In one embodiment, the motion may be due to the user deliberately distorting all or part of the FOV, for example adding a virtual fish-eye lens. In the following description, all motion is considered relative; any motion may be resolved to a motion from a single frame of reference, for example the user's viewpoint.
- When there is motion in an HR system, the perspective of any generated object overlay may be corrected so that it changes with the shape and position of the associated real-world object. This may be done with any conventional point-of-view transformation based on the angle of the object from the viewer; note that the transformation is not limited to simple linear or rotational functions, with some embodiments using non-Abelian transformations. It is contemplated that motion effects, for example blur or deliberate edge distortion, may also be added to a generated object overlay.
- In some HR embodiments, images from cameras, whether sensitive to one or more of visible, infra-red, or microwave spectra, may be processed before algorithms are executed. Algorithms used after image processing for embodiments disclosed herein may include, but are not limited to, object recognition, motion detection, camera motion and zoom detection, light detection, facial recognition, text recognition, or mapping an unknown environment. The image processing may also use conventional filtering techniques, such as, but not limited to, static, adaptive, linear, non-linear, and Kalman filters. Deep-learning neural networks may be trained in some embodiments to mimic functions which are hard to create algorithmically. Image processing may also be used to prepare the image, for example by reducing noise, restoring the image, edge enhancement, or smoothing.
- In some HR embodiments, objects may be detected in the FOV of one or more cameras. Objects may be detected by using conventional algorithms, such as, but not limited to, edge detection, feature detection (for example surface patches, corners and edges), greyscale matching, gradient matching, pose consistency, or database look-up using geometric hashing. Genetic algorithms and trained neural networks using unsupervised learning techniques may also be used in embodiments to detect types of objects, for example people, dogs, or trees.
- In embodiments of an HR system, object recognition may be performed on a single frame of a video stream, although techniques using multiple frames are also envisioned. Advanced techniques, such as, but not limited to, Optical Flow, camera motion, and object motion detection may be used between frames to enhance object recognition in each frame.
- After object recognition, rendering the object may be done by the HR system embodiment using databases of similar objects, the geometry of the detected object, or how the object is lit, for example specular reflections or bumps.
- In some embodiments of an HR system, the locations of objects may be generated from maps and object recognition from sensor data. Mapping data may be generated on the fly using conventional techniques, for example the Simultaneous Location and Mapping (SLAM) algorithm used to estimate locations using Bayesian methods, or extended Kalman filtering which linearizes a non-linear Kalman filter to optimally estimate the mean or covariance of a state (map), or particle filters which use Monte Carlo methods to estimate hidden states (map). The locations of objects may also be determined a priori, using techniques such as, but not limited to, reading blueprints, reading maps, receiving GPS locations, receiving relative positions to a known point (such as a cell tower, access point, or other person) determined using depth sensors, WiFi time-of-flight, or triangulation to at least three other points.
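The "triangulation to at least three other points" mentioned above can be sketched in 2D by linearizing the circle equations (trilateration). This is a minimal, noise-free sketch; a real system would use more points and a least-squares or filtering approach:

```python
def trilaterate(p1, d1, p2, d2, p3, d3):
    """2D trilateration: recover (x, y) from distances d1..d3 to
    known points p1..p3.  Subtracting pairs of circle equations
    (x-xi)^2 + (y-yi)^2 = di^2 yields two linear equations in x, y,
    solved here by Cramer's rule."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x2), 2 * (y3 - y2)
    c2 = d2**2 - d3**2 + x3**2 - x2**2 + y3**2 - y2**2
    det = a1 * b2 - a2 * b1  # zero if the three points are collinear
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)
```

The same idea extends to 3D with a fourth reference point, which is how GPS-style and WiFi time-of-flight positioning recover a location.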
- Gyroscope sensors on or near the HMD may be used in some embodiments to determine head position and to generate relative motion vectors which can be used to estimate location.
- In embodiments of an HR system, sound data from one or more microphones may be processed to detect specific sounds. Sounds that might be identified include, but are not limited to, human voices, glass breaking, human screams, gunshots, explosions, door slams, or a sound pattern a particular machine makes when defective. Gaussian Mixture Models and Hidden Markov Models may be used to generate statistical classifiers that are combined and looked up in a database of sound models. One advantage of using statistical classifiers is that sounds can be detected more consistently in noisy environments.
- In some embodiments of an HR system, eye tracking of one or both viewer's eyes may be performed. Eye tracking may be used to measure the point of the viewer's gaze. In an HMD, the position of each eye is known, and so there is a reference frame for determining head-to-eye angles, and so the position and rotation of each eye can be used to estimate the gaze point. Eye position determination may be done using any suitable technique and/or device, including, but not limited to, devices attached to an eye, tracking the eye position using infra-red reflections, for example Purkinje images, or using the electric potential of the eye detected by electrodes placed near the eye which uses the electrical field generated by an eye independently of whether the eye is closed or not.
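Because the HMD provides a reference frame, the world-space gaze direction is the composition of head orientation and eye-in-head rotation. A sketch reduced to a single yaw angle for brevity (a full implementation would compose 3D rotations for yaw and pitch of both head and eye):

```python
import math

def gaze_direction(head_yaw_deg, eye_yaw_deg):
    """Compose head orientation and eye-in-head rotation (yaw only)
    into a world-space 2D gaze direction unit vector (x, y), where
    (0, 1) means looking straight ahead along +y."""
    total = math.radians(head_yaw_deg + eye_yaw_deg)
    return (math.sin(total), math.cos(total))
```

Intersecting this direction with the scene geometry then yields the gaze point used for object selection.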
- Turning now to the current disclosure, systems that display HR imagery are becoming increasingly common and are making their way from entertainment and gaming into industrial and commercial applications. Examples of systems that may find HR imagery useful include aiding a person doing a task, for example repairing machinery, testing a system, or responding to an emergency.
- Many of the same environments where HR imagery might be used also provide information to a user. This information may be associated with real objects in the environment or may be related to the environment as a whole, for example an ambient or average value. In other cases, the information to be provided to the user is unrelated to the real environment they are working in. Providing the various types of information to the user in a way that can be readily understood by the user and is not confusing, distracting or obscuring details that the user needs can be a challenge.
- In an HR system which aids a person doing a task, for example repairing machinery, testing a system, or responding to an emergency, it is often critical to present information to the user. Traditionally, speech and/or textual information have been the primary ways to provide information to a user. While those modes of information delivery have advantages in the amount of detail and the range of information they can convey, understanding detailed speech or textual information diverts attention and takes concentration away from the task at hand, which can be dangerous. Even the presentation of simple, basic information can quickly become overwhelming when there are many instances of such information to be provided by the HR system.
- Using HR technology, information can be presented to a user through sounds delivered in a non-intrusive and natural way. In particular, sound information can be mixed into the current sound presentation as ambient or background noise, which adds information without being distracting. Further, the HR system can determine which object is being viewed and, in many situations, characteristics of the object currently being viewed. By combining these features, simple information can be presented to the user without removing or obscuring any part of the information currently being delivered and thus without interfering with the operation of the HR system.
- In an example embodiment of an HR system, gaze detection hardware may determine the gaze direction of a user. By using object recognition techniques from a video feed, one or more objects in the current field-of-view can be detected, including the boundary edges or convex hull of an object or group of objects. By considering the internal intersection of the gaze direction and object boundaries, the HR system can determine which object is currently being viewed. Note that selecting a current object over periods of time, for example using different frames of video, can also offer other pertinent information, such as: a time that a user starts looking at an object; a time that a user stops looking at an object; and how long the user looks at an object.
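A minimal sketch of the gaze-to-object selection over time described above, under the assumption that objects are reduced to 2D bounding boxes in the field-of-view and that one gaze sample arrives per video frame (labels, boxes, and coordinates are illustrative):

```python
def object_at(gaze, boxes):
    """Return the label of the first box containing the gaze point, or None."""
    gx, gy = gaze
    for label, (x0, y0, x1, y1) in boxes.items():
        if x0 <= gx <= x1 and y0 <= gy <= y1:
            return label
    return None

def dwell_times(gaze_samples, boxes, frame_dt):
    """Accumulate how long the gaze rested on each object, in seconds."""
    dwell = {}
    for gaze in gaze_samples:
        label = object_at(gaze, boxes)
        if label is not None:
            dwell[label] = dwell.get(label, 0.0) + frame_dt
    return dwell

boxes = {"hydrant": (10, 10, 20, 30), "door": (40, 0, 60, 50)}
samples = [(15, 20), (15, 21), (50, 25), (5, 5)]   # per-frame gaze points
dwell = dwell_times(samples, boxes, frame_dt=1 / 30)
```

Tracking the first and last frames on which `object_at` returns a given label would likewise yield the look-start and look-stop times mentioned above.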
- Note that detecting an object does not have to be performed using a current video feed of the field-of-view. An object may be detected using other methods, such as, but not limited to, blueprints, maps, or object positions determined by other actors for example other personnel, drones or fixed sensors in the environment. In some cases, the selected object may not be visible to the user, perhaps being obscured by a physical barrier, smoke, or lack of illumination.
- Once an object has been selected by the HR system as being currently being viewed, information related to the object is determined. The information may be any pertinent information, such as, but not limited to, a type or class (e.g. vehicle, building, wall, hydrant, or person), a physical property which may have been received from an additional sensor coupled to the HR system (e.g. temperature, pressure, mass, or velocity), a proximal hazard, an identity of something near, behind or inside the object (e.g. water pressure, amount of fuel, or number of passengers), or a safe path to the object. Note that the information is often not directly visible to the user.
- The characteristic information can be used by the HR system to choose an associated sound. The HR system may start playing the sound when the user starts to look at the object. In one example, the sound may be an excerpt which plays for a predetermined time, which may vary according to the characteristic. In another example, the sound continues to play until the user stops looking at the object or offers a specific gesture, such as, but not limited to, blinking, operating a menu, or quickly glancing away and then back at the object.
- Some example ambient sounds and the associated information being contemplated are listed in Table 1. These examples are non-limiting and any particular sound could be associated with any information depending on the embodiment.
-
TABLE 1

| Sound | Information |
|---|---|
| Sea | Liquid |
| River | Water |
| Running faucet | Liquid that can extinguish a fire |
| Fire crackling | Fire |
| Bacon sizzling | Hot |
| Storm | Danger |
| Lightning crack | Live electricity |
| Blizzard | Cold |
| Shivering person | Potential for ice |
| Growling wolf | Danger |
| Engine sound | Approaching vehicle |
| Heart-beat | Hidden person near/behind object |
| “Hello” | Hidden person near/behind object |

- The HR system may make the sound non-intrusive when played by mixing the sound into the background without degrading the current audio feed to the user. In one example, the sound is presented in a binaural system as having no apparent origin, thus emphasizing its background nature. In another example, the sound is presented in a binaural system as having a position in the 3D audio landscape, for example coming from the object, thus reducing any potential for distraction.
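Table 1 can be represented as a simple lookup from a piece of information to an ambient sound. The file names below are invented placeholders, and, per the surrounding discussion, the association could instead come from a user-supplied configuration:

```python
# Illustrative subset of Table 1 as an information-to-sound lookup.
AMBIENT_SOUNDS = {
    "liquid": "sea.mp3",
    "water": "river.mp3",
    "fire": "fire_crackling.mp3",
    "hot": "bacon_sizzling.mp3",
    "danger": "storm.mp3",
    "live electricity": "lightning_crack.mp3",
    "cold": "blizzard.mp3",
    "approaching vehicle": "engine.mp3",
    "hidden person": "heartbeat.mp3",
}

def sound_for(information):
    """Choose the ambient sound associated with a piece of information."""
    return AMBIENT_SOUNDS.get(information.lower())

print(sound_for("Danger"))  # -> storm.mp3
```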
- In some embodiments, the sound may be music, for example an excerpt selected by the user. Since the reaction to certain sounds and music is largely a personal experience, it is contemplated that the association between sound and information may be customized by the user; this may be done using any conventional technique such as, but not limited to, a menu, a wizard, or importing a configuration. This allows the user to select a sound or music that has an immediate, significant and natural meaning to them.
- Reference now is made in detail to the examples illustrated in the accompanying drawings and discussed below.
-
FIG. 1A shows a user 100 wearing an embodiment of a head-mounted system 130. As indicated by the schematic eye 110A, user 100 is looking away from fire hydrant 120 at the time shown in FIG. 1A. Embodiments of a head-mounted system 130 may include a visible camera 132, an infra-red depth camera 134 and an internal infra-red eye gaze tracking system 136. In embodiments the eye gaze tracking system 136 may be located inside of the head-mounted system 130 and may not be visible to an external observer. Other embodiments may have different sensors, such as any sensor or combination of sensors described above. The head-mounted system 130 receives data from sensors 132, 134 and determines that fire hydrant 120 is an object in the FOV. Head-mounted system 130 receives data from eye gaze tracking system 136 and determines that the gaze position 111 does not intersect the fire hydrant 120 and so no ambient sound is added to sound subsystem 138.
-
FIG. 1B shows the user 100 wearing the embodiment of the head-mounted system 130. As indicated by the schematic eye 112B, user 100 is looking at fire hydrant 120 at the time shown in FIG. 1B. Similarly to the vignette shown in FIG. 1A, the head-mounted system 130 receives data from sensors 132, 134 and determines that fire hydrant 120 is an object in the FOV. Head-mounted system 130 receives data from the eye gaze tracking system 136 and determines that the gaze position 113 intersects the fire hydrant 120 and so an associated ambient sound is played for the user 100 by sound subsystem 138, such as, but not limited to, a music excerpt selected by the user, the sound of a river, the sound of a faucet running, or the sound of the sea.
-
FIG. 2 is a block diagram of an embodiment of an HR system 200 which may have some components implemented as part of a head-mounted assembly. The HR system 200 may be considered a computer system that can be adapted to be worn on the head, carried by hand, or otherwise attached to a user. In the embodiment of the HR system 200 shown, a structure 205 is included which is adapted to be worn on the head of a user. The structure 205 may include straps, a helmet, a hat, or any other type of mechanism to hold the HR system on the head of the user as an HMD.
- The
HR system 200 also includes a display 250 coupled to the structure 205. The structure 205 may position the display 250 in a field-of-view (FOV) of the user. In some embodiments, the display 250 may be a stereoscopic display with two separate views of the FOV, such as view 252 for the user's left eye, and view 254 for the user's right eye. The two views 252, 254 may be shown as two images on a single display device or may be shown using separate display devices that are included in the display 250. In some embodiments, the display 250 may be transparent, such as in an augmented reality (AR) HMD. In systems where the display 250 is transparent, the view of the FOV of the real-world as seen through the display 250 by the user is composited with virtual objects that are shown on the display 250. The virtual objects may occlude real objects in the FOV as overlay elements and may themselves be transparent or opaque, depending on the technology used for the display 250 and the rendering of the virtual object. A virtual object, such as an overlay element, may be positioned in a virtual space that could be two-dimensional or three-dimensional, depending on the embodiment, to be in the same position as an associated real object in real space. Note that if the display 250 is a stereoscopic display, two different views of the overlay element may be rendered and shown in two different relative positions on the two views 252, 254, depending on the disparity as defined by the inter-ocular distance of a viewer.
- In some embodiments, the
HR system 200 includes one or more sensors in a sensing block 240 to sense at least a portion of the FOV of the user by gathering the appropriate information for that sensor, for example visible light from a visible light camera, from the FOV of the user. Any number of any type of sensor, including sensors described previously herein, may be included in the sensor block 240, depending on the embodiment. In the embodiment shown, the sensor block 240 includes an eye gaze detection subsystem 242.
- The
HR system 200 may also include an I/O block 220 to allow communication with external devices. The I/O block 220 may include one or both of a wireless network adapter 222 coupled to an antenna 224 and a network adapter 226 coupled to a wired connection 228. The wired connection 228 may be plugged into a portable device, for example a mobile phone, or may be a component of an umbilical system such as used in extreme environments.
- In some embodiments, the
HR system 200 includes a sound processor 260 which takes input from one or more microphones 262. In some HR systems 200, the microphones 262 may be attached to the user. External microphones, for example attached to an autonomous drone, may send sound data samples through wireless or wired connections to I/O block 220 instead of, or in addition to, the sound data received from the microphones 262. The sound processor 260 may generate sound data which is transferred to one or more speakers 264, which are a type of sound reproduction device. The generated sound data may be analog samples or digital values. If more than one speaker 264 is used, the sound processor may generate or simulate 2D sound placement. In some HR systems 200, a first speaker may be positioned to provide sound to the left ear of the user and a second speaker may be positioned to provide sound to the right ear of the user. Together, the first speaker and the second speaker may provide binaural sound to the user.
- In some embodiments, the
HR system 200 includes a stimulus block 270. The stimulus block 270 is used to provide other stimuli to expand the HR system user experience. Embodiments may include numerous haptic pads attached to the user that provide a touch stimulus. Embodiments may also include other stimuli, such as, but not limited to, changing the temperature of a glove, changing the moisture level or breathability of a suit, or adding smells to a breathing system.
- The
HR system 200 may include a processor 210 and one or more memory devices 230, which may also be referred to as a tangible medium or a computer readable medium. The processor 210 is coupled to the display 250, the sensing block 240, the memory 230, I/O block 220, sound block 260, and stimulus block 270, and is configured to execute the instructions 232 encoded on (i.e. stored in) the memory 230. Thus, the HR system 200 may include an article of manufacture comprising a tangible medium 230, that is not a transitory propagating signal, encoding computer-readable instructions 232 that, when applied to a computer system 200, instruct the computer system 200 to perform one or more methods described herein, thereby configuring the processor 210.
- While the
processor 210 included in the HR system 200 may be able to perform methods described herein autonomously, in some embodiments, processing facilities outside of those provided by the processor 210 included inside of the HR system 200 may be used to perform one or more elements of methods described herein. In one non-limiting example, the processor 210 may receive information from one or more of the sensors 240 and send that information through the wireless network adapter 222 to an external processor, such as a cloud processing system or an external server. The external processor may then process the sensor information to identify an object in the FOV and send information about the object, such as its shape and location in the FOV, to the processor 210 through the wireless network adapter 222.
- In some embodiments, the instructions 232 may instruct the
HR system 200 to detect an object in a field-of-view (FOV) using at least one sensor 240 coupled to the computer system 200 and establish a first boundary of the object. The instructions 232 may further instruct the HR system 200 to determine the boundary of a second or other objects in the field of view.
- The instructions 232 may further instruct the
HR system 200 to determine an eye gaze direction using at least one sensor 240, such as the eye gaze detection subsystem 242, coupled to the computer system 200.
- The instructions 232 may further instruct the
HR system 200 to determine whether the eye gaze direction intersects, or is within, the first object boundary and, if within the boundary, determine whether there is an ambient sound associated with that object. In one non-limiting example, the instructions 232 instruct the HR system 200 to determine the object type within the boundary and use the object type as an index to look up in a table of associated ambient sounds. If an association is present, the ambient sound may be mixed with other sound output by sound processor 260 and sent to speakers 264 to play the ambient sound to the user.
- In at least one embodiment, the
processor 210 may be configured to detect a gaze direction of an eye of the wearer of the HR system 200 using the eye gaze detection subsystem 242 and to select an object based on the gaze direction. The processor then obtains information related to the object and chooses a sound based on the information. A digital representation of the sound is then rendered to the user through at least one sound reproduction device, such as the speaker 264. In at least one embodiment, the HR system 200 includes a head-mounted display (HMD) with a transparent portion so that the user can see a real-world object through the transparent portion of the display 250. In such embodiments, the processor 210 may be further configured to receive sensor data related to the object from the sensor 240 and determine positions of one or more items based on the sensor data, the one or more items including the object. The processor may also be configured to select the object from the one or more items based on the determined positions of the one or more items and the gaze direction.
- Aspects of various embodiments are described with reference to flowchart illustrations and/or block diagrams of methods, apparatus, systems, and computer program products according to various embodiments disclosed herein. It will be understood that various blocks of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
- These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks. The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
- The flowchart and/or block diagrams in the figures help to illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products of various embodiments. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
-
FIG. 2 is a flowchart 200 of an embodiment of a method for presenting an ambient sound associated with an object. After the flowchart 200 starts at box 201, the user's gaze direction is determined at input box 202. In decision box 204, if the gaze is on an object that has an associated ambient sound, the flow moves to decision box 206; if the gaze is not on an object with an associated ambient sound, the flow returns to input box 202. In decision box 206, if the associated ambient sound is an excerpt, the flow moves to process box 210, which plays the excerpt for a pre-determined period; if the associated ambient sound is not an excerpt, the flow moves to process box 216, which starts playing an ambient sound.
- After the excerpt finishes in
process box 210, the user's gaze direction is determined at input box 211. At decision box 212, if the user's gaze is still on the object, the gaze direction is repeatedly determined at input box 211 until the gaze is no longer on the object, when the flow returns to the start box 201.
- It is contemplated that the excerpt or ambient sound played may change while playing for any reason, for example a changing status of the object currently located in the eye gaze direction.
-
FIG. 4 is a flowchart 400 of an embodiment of a method for presenting sound to a user. The method starts 401 and a gaze direction of the user's eye is detected 402 using gaze detection hardware, which in some embodiments is integrated into a head-mounted display (HMD). The gaze detection hardware may determine the gaze direction using one eye or both eyes of the user, depending on the embodiment. The flowchart 400 continues by selecting 403 an object based on the gaze direction. In some embodiments, a camera or other sensor may capture an image of a field-of-view (FOV) of the user and detect one or more objects in the image. A boundary for an object or group of objects may be defined and the gaze direction used to project a vector from the eye of the user to determine whether the gaze direction intersects with a boundary of an object. If an intersection occurs, the object may be selected.
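The vector projection just described can be sketched as a ray-versus-bounding-box test (the slab method). Treating object boundaries as axis-aligned boxes, and the specific coordinates, are assumptions for illustration:

```python
def ray_hits_box(origin, direction, box_min, box_max):
    """Return True if the ray origin + t*direction (t >= 0) enters the box."""
    t_near, t_far = 0.0, float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if abs(d) < 1e-12:
            if not (lo <= o <= hi):   # ray parallel to this slab and outside it
                return False
            continue
        t0, t1 = (lo - o) / d, (hi - o) / d
        if t0 > t1:
            t0, t1 = t1, t0
        t_near, t_far = max(t_near, t0), min(t_far, t1)
        if t_near > t_far:
            return False
    return True

# Eye at the origin gazing along +z toward a box centered 5 m ahead:
print(ray_hits_box((0, 0, 0), (0, 0, 1), (-1, -1, 4), (1, 1, 6)))  # -> True
```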
- In some embodiments, the method may show the object to the user on a display and use a position of the display relative to the user's eye to detect the gaze position of the user's eye. The object may be a real-world object or a virtual object, depending on the embodiment. If the object is a real-world object, it may be shown to the user on the display using transmitting light reflected by, or generated by, a real object through a transparent portion of the display to the user's eye.
- The
flowchart 400 continues with obtaining 404 information related to the object. In some embodiments, the information related to the object may include a type or class of the object. In some embodiments, the information may be based on a physical property of the object, such as a temperature, a pressure, a state of matter, a mass, or a velocity. - In some embodiments, the physical property of the object may be sensed 442 using a hardware sensor and the sensor data based on that sensing received 443 and used to determine 444 the information related to the object based on the sensor data. In some embodiments, the information may include a hazard related to the object, an identity of something near the object, an identity of something behind the object, an identity of something inside of the object, or an indication of a safe path to the object.
- A sound is then chosen 405 based on the information and the sound may be selected to convey the information to the user. The sound can include music or any other type of sound. In some embodiments, an association between the information and the sound is based on a setting provided by the user. In some embodiments, the information being conveyed by the sound is not visible to the user's eye by viewing the object.
- A digital representation of the sound is retrieved 406. The digital representation may be in any form, compressed (lossy or lossless), or uncompressed, and encoded in any format, including, but not limited to, a MP3 file, pulse-code modulated data, or advanced audio codec (AAC) data. The digital representation may be retrieved from any available location, including, but not limited to, local memory, an optical disc, or a remote server.
- The
flowchart 400 continues with rendering 407 the digital representation of the sound to the user. If the digital representation is retrieved over a network connection, the digital representation may be downloaded and stored locally before it is rendered, or it may be streamed from the remote source in real-time as it is rendered.
- In some embodiments, the digital representation of the sound continues to be rendered, or played, for a predetermined period of time. In other embodiments, the rendering continues for a period of time determined based on the information.
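As a concrete illustration of a retrievable digital representation, the following sketch writes and reads back a tiny pulse-code modulated WAV entirely in memory using Python's standard wave module; the in-memory buffer stands in for local memory, an optical disc, or a remote server:

```python
import io
import wave

buf = io.BytesIO()
with wave.open(buf, "wb") as w:
    w.setnchannels(1)                   # mono
    w.setsampwidth(2)                   # 16-bit PCM samples
    w.setframerate(8000)                # 8 kHz sample rate
    w.writeframes(b"\x00\x00\xff\x7f")  # two samples: 0 and 32767

buf.seek(0)                             # "retrieve" the stored representation
with wave.open(buf, "rb") as r:
    frames, rate = r.getnframes(), r.getframerate()
print(frames, rate)  # -> 2 8000
```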
- In some embodiments, the start and/or stop of the rendering may be controlled by the user. A predetermined eye gesture may be detected 472 and, depending on the context and the type of gesture, the rendering may be started or stopped 474 based on the predetermined eye gesture performed by the user. Eye gestures that may be used include, but are not limited to, a change in the gaze direction of the user's eye from a first position pointing away from the object to a second position pointing at the object, an eye blink by the user while the gaze direction of the user's eye is pointing at the object, or a change in the gaze direction of the user's eye from the second position pointing at the object to a third position pointing away from the object.
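The "glance away and quickly back" gesture mentioned earlier might be detected from per-frame on-object samples as sketched below; the boolean input format and the frame threshold are invented for illustration:

```python
def detect_glance_back(on_object_frames, max_away=3):
    """True if the gaze left the object and returned within max_away frames."""
    away = 0
    seen_on = False
    for on in on_object_frames:
        if on:
            if seen_on and 0 < away <= max_away:
                return True          # brief excursion followed by a return
            seen_on, away = True, 0
        elif seen_on:
            away += 1                # count frames spent off the object
    return False

print(detect_glance_back([True, True, False, False, True]))  # -> True
```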
- The rendering of the digital representation of the sound may be performed to make the sound non-intrusive to the user. This may be accomplished in many different ways, such as mixing the sound to be a background sound with other sounds or presenting the sound to the user as a non-directional sound. In some embodiments the sound may be presented to the user as a directional sound originating at the object.
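The two presentation strategies above, attenuated background mixing and left/right placement, can be sketched as follows. Samples are floats in [-1, 1]; the background gain and the constant-power pan law are illustrative simplifications of real binaural rendering:

```python
import math

def mix_background(primary, ambient, gain=0.2):
    """Sum an attenuated ambient cue into the primary feed, clipping to [-1, 1].

    Assumes equal-length sample buffers; gain=0.2 is an invented choice.
    """
    return [max(-1.0, min(1.0, p + gain * a))
            for p, a in zip(primary, ambient)]

def pan_stereo(samples, pan):
    """Constant-power pan: -1.0 fully left, +1.0 fully right, 0.0 no apparent origin."""
    angle = (pan + 1.0) * math.pi / 4          # map [-1, 1] to [0, pi/2]
    lg, rg = math.cos(angle), math.sin(angle)
    return [s * lg for s in samples], [s * rg for s in samples]

mixed = mix_background([0.5, -0.5, 0.95], [1.0, 1.0, 1.0])
left, right = pan_stereo([1.0], pan=0.0)       # centered: equal energy per ear
```

Setting `pan` from the object's bearing would give the directional presentation; keeping it at 0.0 gives the non-directional, background presentation.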
- Embodiments may be useful in a variety of applications and in a variety of environments. Non-limiting examples of environments where embodiments may be used are described below.
- One example application of an embodiment is a virtual guide dog for a visually impaired, but not completely blind, individual. An HR HMD may be used as a virtual guide dog and as the user looks towards an object, the sound played can provide additional information about the object that the visually impaired user may not be able to discern using only their eyes. For example, if there is an object which is very hot, a sizzling sound may be played in response to the visually impaired user looking toward the object, making it clear that there is potential danger even though the object cannot be recognized.
- Another environment where embodiments may be useful is an immersive entertainment venue where multiple performances occur simultaneously throughout the venue. Several different performances may be visible to a user from a single vantage point, making it difficult to discern which sounds are coming from which performance. An embodiment may be used to detect which performance the user is gazing at and then emphasize the sound from that performance. In another example, embodiments may be used at an orchestra concert to allow the sound from a particular performer or instrument to be emphasized for the user of the embodiment. In some cases, this emphasized sound may not be enabled until the user performs an eye gesture to start it, such as staring at the performer for longer than a predetermined period. Note that emphasizing the sound may be done by diminishing sounds based on a direction but may also be done by selecting audio tracks that are delivered to the HR system from, for example, a centralized console or mixing desk.
- In another example of use, a police officer may utilize an embodiment during their patrol. As the police officer moves through their environment, different automobiles may be automatically identified from their license plate and information about the car, such as whether or not it is stolen, illegally parked, or its tags are expired, may be provided by different ambient sounds played for the police officer. For example, a siren sound may be played if the car is stolen, an alarm clock sound played if the tags are expired, and an excerpt of music in a minor key played if the car is illegally parked.
- As will be appreciated by those of ordinary skill in the art, aspects of the various embodiments may be embodied as a system, device, method, or computer program product apparatus. Accordingly, elements of the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, or the like) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “server,” “circuit,” “module,” “client,” “computer,” “logic,” or “system,” or other terms. Furthermore, aspects of the various embodiments may take the form of a computer program product embodied in one or more computer-readable medium(s) having computer program code stored thereon.
- Any combination of one or more computer-readable storage medium(s) may be utilized. A computer-readable storage medium may be embodied as, for example, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or other like storage devices known to those of ordinary skill in the art, or any suitable combination of computer-readable storage mediums described herein. In the context of this document, a computer-readable storage medium may be any tangible medium that can contain, or store a program and/or data for use by or in connection with an instruction execution system, apparatus, or device. Even if the data in the computer-readable storage medium requires action to maintain the storage of data, such as in a traditional semiconductor-based dynamic random access memory, the data storage in a computer-readable storage medium can be considered to be non-transitory. A computer data transmission medium, such as a transmission line, a coaxial cable, a radio-frequency carrier, and the like, may also be able to store data, although any data storage in a data transmission medium can be said to be transitory storage. Nonetheless, a computer-readable storage medium, as the term is used herein, does not include a computer data transmission medium.
- Computer program code for carrying out operations for aspects of various embodiments may be written in any combination of one or more programming languages, including object oriented programming languages such as Java, Python, C++, or the like, conventional procedural programming languages, such as the “C” programming language or similar programming languages, or low-level computer languages, such as assembly language or microcode. The computer program code if loaded onto a computer, or other programmable apparatus, produces a computer implemented method. The instructions which execute on the computer or other programmable apparatus may provide the mechanism for implementing some or all of the functions/acts specified in the flowchart and/or block diagram block or blocks. In accordance with various implementations, the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server, such as a cloud-based server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). The computer program code stored in/on (i.e. embodied therewith) the non-transitory computer-readable medium produces an article of manufacture.
- The computer program code, if executed by a processor causes physical changes in the electronic devices of the processor which change the physical flow of electrons through the devices. This alters the connections between devices which changes the functionality of the circuit. For example, if two transistors in a processor are wired to perform a multiplexing operation under control of the computer program code, if a first computer instruction is executed, electrons from a first source flow through the first transistor to a destination, but if a different computer instruction is executed, electrons from the first source are blocked from reaching the destination, but electrons from a second source are allowed to flow through the second transistor to the destination. So a processor programmed to perform a task is transformed from what the processor was before being programmed to perform that task, much like a physical plumbing system with different valves can be controlled to change the physical flow of a fluid.
- Unless otherwise indicated, all numbers expressing quantities, properties, measurements, and so forth, used in the specification and claims are to be understood as being modified in all instances by the term “about.” The recitation of numerical ranges by endpoints includes all numbers subsumed within that range, including the endpoints (e.g. 1 to 5 includes 1, 2.78, π, 3.33, 4, and 5).
- As used in this specification and the appended claims, the singular forms “a”, “an”, and “the” include plural referents unless the content clearly dictates otherwise. Furthermore, as used in this specification and the appended claims, the term “or” is generally employed in its sense including “and/or” unless the content clearly dictates otherwise. As used herein, the term “coupled” includes direct and indirect connections. Moreover, where first and second devices are coupled, intervening devices, including active devices, may be located therebetween.
- The description of the various embodiments provided above is illustrative in nature and is not intended to limit this disclosure, its application, or uses. Thus, different variations beyond those described herein are intended to be within the scope of embodiments. Such variations are not to be regarded as a departure from the intended scope of this disclosure. As such, the breadth and scope of the present disclosure should not be limited by the above-described exemplary embodiments, but should be defined only in accordance with the following claims and equivalents thereof.
Claims (20)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US16/007,335 US20190377538A1 (en) | 2018-06-08 | 2018-06-13 | Information Presentation Through Ambient Sounds |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US201862682424P | 2018-06-08 | 2018-06-08 | |
| US16/007,335 US20190377538A1 (en) | 2018-06-08 | 2018-06-13 | Information Presentation Through Ambient Sounds |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| US20190377538A1 true US20190377538A1 (en) | 2019-12-12 |
Family
ID=68695731
Family Applications (4)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US16/007,335 Abandoned US20190377538A1 (en) | 2018-06-08 | 2018-06-13 | Information Presentation Through Ambient Sounds |
| US16/007,204 Expired - Fee Related US10497161B1 (en) | 2018-06-08 | 2018-06-13 | Information display by overlay on an object |
| US16/579,158 Active 2038-08-23 US11282248B2 (en) | 2018-06-08 | 2019-09-23 | Information display by overlay on an object |
| US17/692,666 Abandoned US20220198730A1 (en) | 2018-06-08 | 2022-03-11 | Information display by overlay on an object |
Family Applications After (3)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US16/007,204 Expired - Fee Related US10497161B1 (en) | 2018-06-08 | 2018-06-13 | Information display by overlay on an object |
| US16/579,158 Active 2038-08-23 US11282248B2 (en) | 2018-06-08 | 2019-09-23 | Information display by overlay on an object |
| US17/692,666 Abandoned US20220198730A1 (en) | 2018-06-08 | 2022-03-11 | Information display by overlay on an object |
Country Status (1)
| Country | Link |
|---|---|
| US (4) | US20190377538A1 (en) |
Families Citing this family (10)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CA3070300A1 (en) * | 2017-07-28 | 2019-01-31 | Nuro, Inc. | Food and beverage delivery system on autonomous and semi-autonomous vehicle |
| US11380065B2 (en) * | 2019-08-20 | 2022-07-05 | Red Pacs, Llc | Advanced head display unit for fire fighters |
| US11961290B1 (en) * | 2019-09-25 | 2024-04-16 | Apple Inc. | Method and device for health monitoring |
| WO2021194747A1 (en) | 2020-03-23 | 2021-09-30 | Nuro, Inc. | Methods and apparatus for automated deliveries |
| CN112308769B (en) * | 2020-10-30 | 2022-06-10 | 北京字跳网络技术有限公司 | Image synthesis method, apparatus and storage medium |
| EP4260124A1 (en) * | 2020-12-09 | 2023-10-18 | Innotonix GmbH | Increased optical performance of head-mounted displays inside laser safety eyewear |
| EP4387517A4 (en) * | 2021-08-18 | 2025-06-18 | Advanced Neuromodulation Systems, Inc. | DIGITAL HEALTH SERVICE DELIVERY SYSTEMS AND METHODS |
| EP4336429A1 (en) * | 2022-09-08 | 2024-03-13 | Volvo Truck Corporation | Displaying concealed high-risk level components inside a vehicle |
| CN118678990A (en) * | 2023-01-17 | 2024-09-20 | 谷歌有限责任公司 | Ultra wideband radar apparatus for cloud-based game control |
| CN119668399A (en) * | 2023-09-21 | 2025-03-21 | 珠海莫界科技有限公司 | Information display method, device, head mounted display device and storage medium |
Family Cites Families (132)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US3861350A (en) | 1971-07-23 | 1975-01-21 | Albert B Selleck | Warning system and device, and malodorous warning composition of matter and process for its preparation |
| US5309169A (en) | 1993-02-01 | 1994-05-03 | Honeywell Inc. | Visor display with fiber optic faceplate correction |
| US5815411A (en) | 1993-09-10 | 1998-09-29 | Criticom Corporation | Electro-optic vision system which exploits position and attitude |
| US7406214B2 (en) | 1999-05-19 | 2008-07-29 | Digimarc Corporation | Methods and devices employing optical sensors and/or steganography |
| US20020176259A1 (en) | 1999-11-18 | 2002-11-28 | Ducharme Alfred D. | Systems and methods for converting illumination |
| US20020196202A1 (en) | 2000-08-09 | 2002-12-26 | Bastian Mark Stanley | Method for displaying emergency first responder command, control, and safety information using augmented reality |
| US6903752B2 (en) | 2001-07-16 | 2005-06-07 | Information Decision Technologies, Llc | Method to view unseen atmospheric phenomenon using augmented reality |
| US20030227542A1 (en) * | 2001-12-20 | 2003-12-11 | Xiang Zhang | Single-computer real-time stereo augmented reality system |
| WO2003060830A1 (en) | 2002-01-15 | 2003-07-24 | Information Decision Technologies, Llc | Method and system to display both visible and invisible hazards and hazard information |
| US20030210812A1 (en) | 2002-02-26 | 2003-11-13 | Ali Khamene | Apparatus and method for surgical navigation |
| KR101118459B1 (en) | 2005-07-01 | 2012-04-12 | 도쿠리츠교세이호징 붓시쯔 자이료 겐큐키코 | Fluorophor and method for production thereof and illuminator |
| US20070045641A1 (en) | 2005-08-23 | 2007-03-01 | Yin Chua Janet B | Light source with UV LED and UV reflector |
| US7853061B2 (en) | 2007-04-26 | 2010-12-14 | General Electric Company | System and method to improve visibility of an object in an imaged subject |
| US9015029B2 (en) * | 2007-06-04 | 2015-04-21 | Sony Corporation | Camera dictionary based on object recognition |
| US20090065715A1 (en) | 2007-08-24 | 2009-03-12 | Lee Wainright | Universal ultraviolet/ IR/ visible light emitting module |
| US7961202B2 (en) | 2007-10-26 | 2011-06-14 | Mitel Networks Corporation | Method and apparatus for maintaining a visual appearance of at least one window when a resolution of the screen changes |
| US8428873B2 (en) | 2008-03-24 | 2013-04-23 | Google Inc. | Panoramic images within driving directions |
| US9398266B2 (en) | 2008-04-02 | 2016-07-19 | Hernan Carzalo | Object content navigation |
| US20100117828A1 (en) | 2008-11-07 | 2010-05-13 | Stuart Owen Goldman | Alarm scheme with olfactory alerting component |
| US8009022B2 (en) | 2009-05-29 | 2011-08-30 | Microsoft Corporation | Systems and methods for immersive interaction with virtual objects |
| US20110270135A1 (en) | 2009-11-30 | 2011-11-03 | Christopher John Dooley | Augmented reality for testing and training of human performance |
| US8614539B2 (en) | 2010-10-05 | 2013-12-24 | Intematix Corporation | Wavelength conversion component with scattering particles |
| US9122053B2 (en) | 2010-10-15 | 2015-09-01 | Microsoft Technology Licensing, Llc | Realistic occlusion for a head mounted augmented reality display |
| US9348141B2 (en) | 2010-10-27 | 2016-05-24 | Microsoft Technology Licensing, Llc | Low-latency fusing of virtual and real content |
| US9129438B2 (en) | 2011-01-18 | 2015-09-08 | NedSense Loft B.V. | 3D modeling and rendering from 2D images |
| WO2012135546A1 (en) | 2011-03-29 | 2012-10-04 | Qualcomm Incorporated | Anchoring virtual images to real world surfaces in augmented reality systems |
| US8863039B2 (en) | 2011-04-18 | 2014-10-14 | Microsoft Corporation | Multi-dimensional boundary effects |
| US20120289290A1 (en) | 2011-05-12 | 2012-11-15 | KT Corporation, KT TECH INC. | Transferring objects between application windows displayed on mobile terminal |
| US20130249947A1 (en) | 2011-08-26 | 2013-09-26 | Reincloud Corporation | Communication using augmented reality |
| US20130222371A1 (en) | 2011-08-26 | 2013-08-29 | Reincloud Corporation | Enhancing a sensory perception in a field of view of a real-time source within a display screen through augmented reality |
| US20130249948A1 (en) | 2011-08-26 | 2013-09-26 | Reincloud Corporation | Providing interactive travel content at a display device |
| KR101407670B1 (en) | 2011-09-15 | 2014-06-16 | 주식회사 팬택 | Mobile terminal, server and method for forming communication channel using augmented reality |
| US9229231B2 (en) | 2011-12-07 | 2016-01-05 | Microsoft Technology Licensing, Llc | Updating printed content with personalized virtual data |
| EP2805287A4 (en) | 2012-01-20 | 2016-05-25 | Medivators Inc | USE OF RECOGNITION OF HUMAN ENTRY TO PREVENT CONTAMINATION |
| US9165381B2 (en) | 2012-05-31 | 2015-10-20 | Microsoft Technology Licensing, Llc | Augmented books in a mixed reality environment |
| JP5991039B2 (en) | 2012-06-18 | 2016-09-14 | 株式会社リコー | Information processing apparatus and conference system |
| US9645394B2 (en) | 2012-06-25 | 2017-05-09 | Microsoft Technology Licensing, Llc | Configured virtual environments |
| US8953841B1 (en) * | 2012-09-07 | 2015-02-10 | Amazon Technologies, Inc. | User transportable device with hazard monitoring |
| WO2014078811A1 (en) | 2012-11-16 | 2014-05-22 | Flir Systems, Inc. | Synchronized infrared beacon / infrared detection system |
| KR20150103723A (en) | 2013-01-03 | 2015-09-11 | 메타 컴퍼니 | Extramissive spatial imaging digital eye glass for virtual or augmediated vision |
| US9961472B2 (en) | 2013-03-14 | 2018-05-01 | Apple Inc. | Acoustic beacon for broadcasting the orientation of a device |
| JP5915813B2 (en) | 2013-03-19 | 2016-05-11 | 株式会社村田製作所 | Multilayer ceramic electronic components |
| US9367136B2 (en) | 2013-04-12 | 2016-06-14 | Microsoft Technology Licensing, Llc | Holographic object feedback |
| US9286725B2 (en) | 2013-11-14 | 2016-03-15 | Nintendo Co., Ltd. | Visually convincing depiction of object interactions in augmented reality images |
| WO2015077767A1 (en) | 2013-11-25 | 2015-05-28 | Daniel Ryan | System and method for communication with a mobile device via a positioning system including rf communication devices and modulated beacon light sources |
| CN103697900A (en) | 2013-12-10 | 2014-04-02 | 郭海锋 | Method for early warning on danger through augmented reality by vehicle-mounted emotional robot |
| US10586395B2 (en) * | 2013-12-30 | 2020-03-10 | Daqri, Llc | Remote object detection and local tracking using visual odometry |
| KR20150101612A (en) | 2014-02-27 | 2015-09-04 | 엘지전자 주식회사 | Head Mounted Display with closed-view and Method for controlling the same |
| US10203762B2 (en) | 2014-03-11 | 2019-02-12 | Magic Leap, Inc. | Methods and systems for creating virtual and augmented reality |
| US10430985B2 (en) | 2014-03-14 | 2019-10-01 | Magic Leap, Inc. | Augmented reality systems and methods utilizing reflections |
| US20150278604A1 (en) | 2014-03-30 | 2015-10-01 | Gary Stephen Shuster | Systems, Devices And Methods For Person And Object Tracking And Data Exchange |
| KR20150118813A (en) | 2014-04-15 | 2015-10-23 | 삼성전자주식회사 | Providing Method for Haptic Information and Electronic Device supporting the same |
| US20150325047A1 (en) | 2014-05-06 | 2015-11-12 | Honeywell International Inc. | Apparatus and method for providing augmented reality for maintenance applications |
| US10564714B2 (en) | 2014-05-09 | 2020-02-18 | Google Llc | Systems and methods for biomechanically-based eye signals for interacting with real and virtual objects |
| IL232853A (en) | 2014-05-28 | 2015-11-30 | Elbit Systems Land & C4I Ltd | Method and system for image georegistration |
| US9588586B2 (en) | 2014-06-09 | 2017-03-07 | Immersion Corporation | Programmable haptic devices and methods for modifying haptic strength based on perspective and/or proximity |
| KR20160013748A (en) | 2014-07-28 | 2016-02-05 | 엘지전자 주식회사 | Protable electronic device and control method thereof |
| US20160147408A1 (en) | 2014-11-25 | 2016-05-26 | Johnathan Bevis | Virtual measurement tool for a wearable visualization device |
| CN107004363B (en) * | 2014-12-10 | 2020-02-18 | 三菱电机株式会社 | Image processing device, vehicle-mounted display system, display device, and image processing method |
| US10065074B1 (en) | 2014-12-12 | 2018-09-04 | Enflux, Inc. | Training systems with wearable sensors for providing users with feedback |
| US9746921B2 (en) | 2014-12-31 | 2017-08-29 | Sony Interactive Entertainment Inc. | Signal generation and detector systems and methods for determining positions of fingers of a user |
| US9953216B2 (en) | 2015-01-13 | 2018-04-24 | Google Llc | Systems and methods for performing actions in response to user gestures in captured images |
| KR102317803B1 (en) | 2015-01-23 | 2021-10-27 | 삼성전자주식회사 | Electronic device and method for controlling a plurality of displays |
| US10212355B2 (en) | 2015-03-13 | 2019-02-19 | Thales Defense & Security, Inc. | Dual-mode illuminator for imaging under different lighting conditions |
| KR102630754B1 (en) | 2015-03-16 | 2024-01-26 | 매직 립, 인코포레이티드 | Augmented Reality Pulse Oximetry |
| US10551913B2 (en) | 2015-03-21 | 2020-02-04 | Mine One Gmbh | Virtual 3D methods, systems and software |
| WO2016181398A1 (en) | 2015-05-11 | 2016-11-17 | Vayyar Imaging Ltd | System, device and methods for imaging of objects using electromagnetic array |
| US11019872B2 (en) | 2015-06-19 | 2021-06-01 | Oakley, Inc. | Sports helmet having modular components |
| KR102447438B1 (en) | 2015-07-01 | 2022-09-27 | 삼성전자주식회사 | Notification devices and how notification devices tell you where things are |
| US9911290B1 (en) | 2015-07-25 | 2018-03-06 | Gary M. Zalewski | Wireless coded communication (WCC) devices for tracking retail interactions with goods and association to user accounts |
| US20170103440A1 (en) | 2015-08-01 | 2017-04-13 | Zhou Tian Xing | Wearable augmented reality eyeglass communication device including mobile phone and mobile computing via virtual touch screen gesture control and neuron command |
| US9852599B1 (en) | 2015-08-17 | 2017-12-26 | Alarm.Com Incorporated | Safety monitoring platform |
| US20170061696A1 (en) | 2015-08-31 | 2017-03-02 | Samsung Electronics Co., Ltd. | Virtual reality display apparatus and display method thereof |
| US10297129B2 (en) | 2015-09-24 | 2019-05-21 | Tyco Fire & Security Gmbh | Fire/security service system with augmented reality |
| FR3042925B1 (en) | 2015-10-26 | 2017-12-22 | St Microelectronics Crolles 2 Sas | SYSTEM FOR CONVERTING THERMAL ENERGY INTO ELECTRICAL ENERGY. |
| US20170169170A1 (en) | 2015-12-11 | 2017-06-15 | Yosko, Inc. | Methods and systems for location-based access to clinical information |
| WO2017113194A1 (en) * | 2015-12-30 | 2017-07-06 | 深圳市柔宇科技有限公司 | Head-mounted display apparatus, head-mounted display system, and input method |
| WO2017117562A1 (en) | 2015-12-31 | 2017-07-06 | Daqri, Llc | Augmented reality based path visualization for motion planning |
| US20170192091A1 (en) * | 2016-01-06 | 2017-07-06 | Ford Global Technologies, Llc | System and method for augmented reality reduced visibility navigation |
| US9978180B2 (en) | 2016-01-25 | 2018-05-22 | Microsoft Technology Licensing, Llc | Frame projection for augmented reality environments |
| US20180190011A1 (en) | 2017-01-04 | 2018-07-05 | Osterhout Group, Inc. | Content rendering systems for head-worn computers |
| US20190206134A1 (en) | 2016-03-01 | 2019-07-04 | ARIS MD, Inc. | Systems and methods for rendering immersive environments |
| CN105781618A (en) | 2016-03-15 | 2016-07-20 | 华洋通信科技股份有限公司 | Coal mine safety integrated monitoring system based on Internet of Things |
| WO2017161192A1 (en) | 2016-03-16 | 2017-09-21 | Nils Forsblom | Immersive virtual experience using a mobile communication device |
| US10551826B2 (en) | 2016-03-24 | 2020-02-04 | Andrei Popa-Simil | Method and system to increase operator awareness |
| US20180188537A1 (en) | 2017-01-04 | 2018-07-05 | Osterhout Group, Inc. | Audio systems for head-worn computers |
| US9928662B2 (en) | 2016-05-09 | 2018-03-27 | Unity IPR ApS | System and method for temporal manipulation in virtual environments |
| US9922464B2 (en) | 2016-05-10 | 2018-03-20 | Disney Enterprises, Inc. | Occluded virtual image display |
| US9925920B2 (en) * | 2016-05-24 | 2018-03-27 | Ford Global Technologies, Llc | Extended lane blind spot detection |
| US10046236B2 (en) | 2016-06-13 | 2018-08-14 | Sony Interactive Entertainment America, LLC | Browser-based cloud gaming |
| US10102732B2 (en) | 2016-06-28 | 2018-10-16 | Infinite Designs, LLC | Danger monitoring system |
| US9906885B2 (en) | 2016-07-15 | 2018-02-27 | Qualcomm Incorporated | Methods and systems for inserting virtual sounds into an environment |
| CN119644594A (en) | 2016-07-25 | 2025-03-18 | 奇跃公司 | Image modification, display and visualization using augmented and virtual reality glasses |
| CN107657662A (en) | 2016-07-26 | 2018-02-02 | 金德奎 | Augmented reality equipment and its system and method that can be directly interactive between a kind of user |
| US10486742B2 (en) * | 2016-08-01 | 2019-11-26 | Magna Electronics Inc. | Parking assist system using light projections |
| EP3285213A1 (en) | 2016-08-16 | 2018-02-21 | Hexagon Technology Center GmbH | Lod work package |
| US10155159B2 (en) | 2016-08-18 | 2018-12-18 | Activision Publishing, Inc. | Tactile feedback systems and methods for augmented reality and virtual reality systems |
| US10656731B2 (en) | 2016-09-15 | 2020-05-19 | Daqri, Llc | Peripheral device for head-mounted display |
| US10134192B2 (en) | 2016-10-17 | 2018-11-20 | Microsoft Technology Licensing, Llc | Generating and displaying a computer generated image on a future pose of a real world object |
| US10281982B2 (en) | 2016-10-17 | 2019-05-07 | Facebook Technologies, Llc | Inflatable actuators in virtual reality |
| US10088902B2 (en) | 2016-11-01 | 2018-10-02 | Oculus Vr, Llc | Fiducial rings in virtual reality |
| DE102016121663A1 (en) | 2016-11-11 | 2018-05-17 | Osram Gmbh | Activating a transmitting device of a lighting device |
| US9911020B1 (en) | 2016-12-08 | 2018-03-06 | At&T Intellectual Property I, L.P. | Method and apparatus for tracking via a radio frequency identification device |
| WO2018129051A1 (en) | 2017-01-04 | 2018-07-12 | Advanced Functional Fabrics Of America | Uniquely identifiable articles of fabric and social networks employing them |
| US11347054B2 (en) | 2017-02-16 | 2022-05-31 | Magic Leap, Inc. | Systems and methods for augmented reality |
| US10250328B2 (en) | 2017-03-09 | 2019-04-02 | General Electric Company | Positioning system based on visible light communications |
| US10408624B2 (en) | 2017-04-18 | 2019-09-10 | Microsoft Technology Licensing, Llc | Providing familiarizing directional information |
| US10460585B2 (en) | 2017-06-05 | 2019-10-29 | Symbol Technologies, Llc | RFID directed video snapshots capturing targets of interest |
| GB201709199D0 (en) | 2017-06-09 | 2017-07-26 | Delamont Dean Lindsay | IR mixed reality and augmented reality gaming system |
| US10528228B2 (en) | 2017-06-21 | 2020-01-07 | Microsoft Technology Licensing, Llc | Interaction with notifications across devices with a digital assistant |
| EP3422146A1 (en) | 2017-06-28 | 2019-01-02 | Nokia Technologies Oy | An apparatus and associated methods for presenting sensory scenes |
| US20190007548A1 (en) | 2017-06-28 | 2019-01-03 | The Travelers Indemnity Company | Systems and methods for discrete location-based functionality |
| EP3422149B1 (en) | 2017-06-30 | 2023-03-01 | Nokia Technologies Oy | Methods, apparatus, systems, computer programs for enabling consumption of virtual content for mediated reality |
| US10867205B2 (en) | 2017-07-18 | 2020-12-15 | Lenovo (Singapore) Pte. Ltd. | Indication of characteristic based on condition |
| IL272244B2 (en) | 2017-07-24 | 2024-02-01 | Cyalume Tech Inc | Light weight appliance to be used with smart devices to produce shortwave infrared emission |
| US10725537B2 (en) | 2017-10-02 | 2020-07-28 | Facebook Technologies, Llc | Eye tracking system using dense structured light patterns |
| KR102633727B1 (en) | 2017-10-17 | 2024-02-05 | 매직 립, 인코포레이티드 | Mixed Reality Spatial Audio |
| US10748426B2 (en) | 2017-10-18 | 2020-08-18 | Toyota Research Institute, Inc. | Systems and methods for detection and presentation of occluded objects |
| US20190132815A1 (en) | 2017-10-27 | 2019-05-02 | Sentry Centers Holdings LLC | Systems and methods for beacon integrated with displays |
| US11113883B2 (en) | 2017-12-22 | 2021-09-07 | Houzz, Inc. | Techniques for recommending and presenting products in an augmented reality scene |
| WO2019122912A1 (en) | 2017-12-22 | 2019-06-27 | Ultrahaptics Limited | Tracking in haptic systems |
| KR102050999B1 (en) | 2017-12-27 | 2019-12-05 | 성균관대학교산학협력단 | Method and apparatus for transmitting of energy and method and node for receiving of energy |
| US10773169B2 (en) | 2018-01-22 | 2020-09-15 | Google Llc | Providing multiplayer augmented reality experiences |
| US20190377538A1 (en) | 2018-06-08 | 2019-12-12 | Curious Company, LLC | Information Presentation Through Ambient Sounds |
| US10706629B2 (en) | 2018-06-15 | 2020-07-07 | Dell Products, L.P. | Coordinate override in virtual, augmented, and mixed reality (xR) applications |
| US10650600B2 (en) | 2018-07-10 | 2020-05-12 | Curious Company, LLC | Virtual path display |
| PL3821152T3 (en) | 2018-07-10 | 2024-04-02 | Saint-Gobain Performance Plastics Rencol Limited | Torque assembly and method of making and using the same |
| US10818088B2 (en) | 2018-07-10 | 2020-10-27 | Curious Company, LLC | Virtual barrier objects |
| US10902678B2 (en) | 2018-09-06 | 2021-01-26 | Curious Company, LLC | Display of hidden information |
| US11055913B2 (en) | 2018-12-04 | 2021-07-06 | Curious Company, LLC | Directional instructions in an hybrid reality system |
| US10970935B2 (en) | 2018-12-21 | 2021-04-06 | Curious Company, LLC | Body pose message system |
| US10872584B2 (en) | 2019-03-14 | 2020-12-22 | Curious Company, LLC | Providing positional information using beacon devices |
- 2018
  - 2018-06-13 US US16/007,335 patent/US20190377538A1/en not_active Abandoned
  - 2018-06-13 US US16/007,204 patent/US10497161B1/en not_active Expired - Fee Related
- 2019
  - 2019-09-23 US US16/579,158 patent/US11282248B2/en active Active
- 2022
  - 2022-03-11 US US17/692,666 patent/US20220198730A1/en not_active Abandoned
Patent Citations (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20020191004A1 (en) * | 2000-08-09 | 2002-12-19 | Ebersole John Franklin | Method for visualization of hazards utilizing computer-generated three-dimensional representations |
| US20170330042A1 (en) * | 2010-06-04 | 2017-11-16 | Masoud Vaziri | Method and apparatus for an eye tracking wearable computer |
| US20140002444A1 (en) * | 2012-06-29 | 2014-01-02 | Darren Bennett | Configuring an interaction zone within an augmented reality environment |
| US20160342388A1 (en) * | 2015-05-22 | 2016-11-24 | Fujitsu Limited | Display control method, data process apparatus, and computer-readable recording medium |
| US20170277257A1 (en) * | 2016-03-23 | 2017-09-28 | Jeffrey Ota | Gaze-based sound selection |
| US20170374486A1 (en) * | 2016-06-23 | 2017-12-28 | Lightbox Video Inc. | Positional audio assignment system |
| US20180246698A1 (en) * | 2017-02-28 | 2018-08-30 | Magic Leap, Inc. | Virtual and real object recording in mixed reality device |
| US20190373395A1 (en) * | 2018-05-30 | 2019-12-05 | Qualcomm Incorporated | Adjusting audio characteristics for augmented reality |
Cited By (23)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US11282248B2 (en) | 2018-06-08 | 2022-03-22 | Curious Company, LLC | Information display by overlay on an object |
| US10818088B2 (en) | 2018-07-10 | 2020-10-27 | Curious Company, LLC | Virtual barrier objects |
| US10650600B2 (en) | 2018-07-10 | 2020-05-12 | Curious Company, LLC | Virtual path display |
| US10861239B2 (en) | 2018-09-06 | 2020-12-08 | Curious Company, LLC | Presentation of information associated with hidden objects |
| US10803668B2 (en) | 2018-09-06 | 2020-10-13 | Curious Company, LLC | Controlling presentation of hidden information |
| US10636216B2 (en) | 2018-09-06 | 2020-04-28 | Curious Company, LLC | Virtual manipulation of hidden objects |
| US10902678B2 (en) | 2018-09-06 | 2021-01-26 | Curious Company, LLC | Display of hidden information |
| US10636197B2 (en) | 2018-09-06 | 2020-04-28 | Curious Company, LLC | Dynamic display of hidden information |
| US11238666B2 (en) | 2018-09-06 | 2022-02-01 | Curious Company, LLC | Display of an occluded object in a hybrid-reality system |
| US11730226B2 (en) * | 2018-10-29 | 2023-08-22 | Robotarmy Corp. | Augmented reality assisted communication |
| US10991162B2 (en) | 2018-12-04 | 2021-04-27 | Curious Company, LLC | Integrating a user of a head-mounted display into a process |
| US11055913B2 (en) | 2018-12-04 | 2021-07-06 | Curious Company, LLC | Directional instructions in an hybrid reality system |
| US11995772B2 (en) | 2018-12-04 | 2024-05-28 | Curious Company Llc | Directional instructions in an hybrid-reality system |
| US10970935B2 (en) | 2018-12-21 | 2021-04-06 | Curious Company, LLC | Body pose message system |
| US10955674B2 (en) | 2019-03-14 | 2021-03-23 | Curious Company, LLC | Energy-harvesting beacon device |
| US10901218B2 (en) | 2019-03-14 | 2021-01-26 | Curious Company, LLC | Hybrid reality system including beacons |
| US10872584B2 (en) | 2019-03-14 | 2020-12-22 | Curious Company, LLC | Providing positional information using beacon devices |
| US10924875B2 (en) * | 2019-05-24 | 2021-02-16 | Zack Settel | Augmented reality platform for navigable, immersive audio experience |
| US11830119B1 (en) * | 2020-05-29 | 2023-11-28 | Apple Inc. | Modifying an environment based on sound |
| US20240371355A1 (en) * | 2021-08-11 | 2024-11-07 | Jtekt Corporation | Information providing system, method, and program |
| US20240338070A1 (en) * | 2021-08-17 | 2024-10-10 | Meta Platforms Technologies, Llc | Platformization Of Mixed Reality Objects In Virtual Reality Environments |
| US12548245B2 (en) | 2023-03-31 | 2026-02-10 | Meta Platforms Technologies, Llc | Rendering an artificial reality environment based on a defined hierarchy of multiple states including multiple artificial reality experiences with augments |
| US12498787B2 (en) * | 2023-08-21 | 2025-12-16 | Samsung Electronics Co., Ltd. | Wearable device, method and computer readable storage medium for identifying gaze of user |
Also Published As
| Publication number | Publication date |
|---|---|
| US20200020145A1 (en) | 2020-01-16 |
| US11282248B2 (en) | 2022-03-22 |
| US10497161B1 (en) | 2019-12-03 |
| US20220198730A1 (en) | 2022-06-23 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US20190377538A1 (en) | Information Presentation Through Ambient Sounds | |
| US11238666B2 (en) | Display of an occluded object in a hybrid-reality system | |
| US11995772B2 (en) | Directional instructions in an hybrid-reality system | |
| US10650600B2 (en) | Virtual path display | |
| US10818088B2 (en) | Virtual barrier objects | |
| US11002965B2 (en) | System and method for user alerts during an immersive computer-generated reality experience | |
| US10970935B2 (en) | Body pose message system | |
| US12039659B2 (en) | Method and device for tailoring a synthesized reality experience to a physical setting | |
| US11699412B2 (en) | Application programming interface for setting the prominence of user interface elements | |
| KR20250002103A (en) | Realistic content providing device and realistic content providing method |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | AS | Assignment | Owner name: CURIOUS COMPANY, LLC, OREGON. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:YOUNG, BRUCE A.;JONES, ANTHONY MARK;SIGNING DATES FROM 20180528 TO 20180529;REEL/FRAME:046079/0755 |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: FINAL REJECTION MAILED |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: FINAL REJECTION MAILED |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE AFTER FINAL ACTION FORWARDED TO EXAMINER |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: ADVISORY ACTION MAILED |
| | STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |