HK1183120A - Personal audio/visual system with holographic objects
- Publication number: HK1183120A
- Application number: HK13110196.6A
- Authority: HK (Hong Kong)
- Prior art keywords: state, virtual object, virtual, objects, real
Description
Technical Field
The present invention relates to personal audio/video systems and, in particular, to personal audio/video systems with holographic objects.
Background
Augmented Reality (AR) relates to providing an augmented real-world environment in which the perception of the real-world environment (or data representing the real-world environment) is augmented or modified with computer-generated virtual data. For example, data representing a real-world environment may be captured in real-time using a sensory input device such as a camera or microphone and augmented with computer-generated virtual data including virtual images and virtual sounds. The virtual data may also include information related to the real-world environment, such as textual descriptions associated with real-world objects in the real-world environment. The AR environment may be used to enhance a variety of applications including video games, drawing, navigation, and mobile device applications.
Some AR environments enable the perception of real-time interaction between real objects (i.e., objects that are present in a particular real-world environment) and virtual objects (i.e., objects that are not present in a particular real-world environment). In order to realistically integrate virtual objects into an AR environment, AR systems typically perform several steps including mapping and localization. Mapping involves the process of generating a map of the real-world environment. Localization involves the process of locating a particular perspective or pose relative to the map. A fundamental requirement of many AR systems is the ability to localize the pose of a mobile device moving within a real-world environment in order to determine the particular views associated with the mobile device that need to be augmented over time.
Disclosure of Invention
Techniques for generating an augmented reality environment using state-based virtual objects are described. A state-based virtual object may be associated with a plurality of different states. Each state of the plurality of different states may correspond to a unique set of trigger events different from those associated with any other state. The set of trigger events associated with a particular state may be used to determine when a state change from the particular state is required. In some cases, each of the plurality of different states may be associated with a different 3-D model or shape. The plurality of different states may be defined using a predetermined and standardized file format that supports state-based virtual objects. In some embodiments, one or more potential state changes from a particular state may be predicted based on one or more trigger probabilities associated with the set of trigger events.
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
Drawings
FIG. 1 is a block diagram of one embodiment of a networked computing environment in which the disclosed technology may be implemented.
FIG. 2A depicts one embodiment of a mobile device in communication with a second mobile device.
Fig. 2B depicts one embodiment of a portion of an HMD.
Fig. 2C depicts one embodiment of a portion of an HMD in which a gaze vector extending to a point of gaze is used to align the far interpupillary distance (IPD).
Fig. 2D depicts one embodiment of a portion of an HMD in which a gaze vector extending to a point of gaze is used to align the near-interpupillary distance (IPD).
Fig. 2E depicts one embodiment of a portion of an HMD with a movable display optical system including a gaze-detecting element.
Fig. 2F depicts an alternative embodiment of a portion of an HMD with a movable display optical system including a gaze-detecting element.
Fig. 2G depicts one embodiment of a side view of a portion of an HMD.
FIG. 2H depicts one embodiment of a side view of a portion of an HMD that provides support for three-dimensional adjustment of a microdisplay assembly.
Fig. 3A depicts one embodiment of an augmented reality environment as seen by an end user wearing an HMD.
Fig. 3B depicts one embodiment of an augmented reality environment as seen by an end user wearing an HMD.
FIG. 3C depicts one embodiment of an augmented reality environment.
FIGS. 3D and 3E depict one embodiment of an augmented reality environment including state-based virtual objects.
FIG. 4 illustrates one embodiment of a computing system including a capture device and a computing environment.
FIG. 5A depicts one embodiment of an AR system for providing virtual object information associated with a particular location or place of interest.
FIG. 5B illustrates one example of a system architecture for executing one or more processes and/or software on a supplemental information provider.
FIGS. 6A and 6B are flow charts describing a set of processes for providing a personalized shopping experience using a personal A/V device.
FIG. 7A depicts one embodiment of a virtual object file that includes virtual object information associated with one or more virtual objects.
FIG. 7B is a flow chart describing one embodiment of a process for generating an augmented reality environment.
FIG. 7C is a flow chart describing one embodiment of a process for predicting a future virtual object state.
FIG. 7D is a flow chart describing one embodiment of a process for negotiating information transfer with a supplemental information provider.
FIG. 7E is a flow chart describing one embodiment of a process for obtaining one or more virtual objects from a supplemental information provider.
FIG. 7F is a flow chart describing one embodiment of a process for obtaining one or more virtual objects.
FIG. 7G is a flow chart describing one embodiment of a process for displaying one or more virtual objects.
FIG. 8 is a block diagram of an embodiment of a gaming and media system.
FIG. 9 is a block diagram of one embodiment of a mobile device.
FIG. 10 is a block diagram of an embodiment of a computing system environment.
Detailed Description
Techniques for generating a personalized augmented reality environment using a mobile device are described. The mobile device may display one or more images associated with a state-based virtual object such that the virtual object is perceived to exist within the real-world environment. A state-based virtual object may be associated with a plurality of different states. Each state of the plurality of different states may correspond to a unique set of trigger events different from those associated with any other state. The set of trigger events associated with a particular state may be used to determine when a state change from the particular state is required. In some cases, each of the plurality of different states may be associated with a different 3-D model or shape. In other cases, each of the plurality of different states may be associated with a different virtual object attribute (e.g., a virtual mass or a virtual degree of reflectivity). The plurality of different states may be defined using a predetermined and standardized file format that supports state-based virtual objects. In some embodiments, one or more potential state changes from a particular state may be predicted based on one or more trigger probabilities associated with the set of trigger events.
With the advent and growth of mobile computing devices, such as head mounted display devices (HMDs), that are continuously enabled and connected to networks, the amount of information available to end users of these computing devices is enormous at any given moment. In some cases, the augmented reality environment may be perceived by an end user of the mobile computing device. In one example, the augmented reality environment may include a personalized augmented reality environment in which one or more virtual objects are generated and displayed based on an identification of the end user, user preferences associated with the end user, a physical location of the end user, or environmental characteristics associated with the physical location of the end user. In one embodiment, the one or more virtual objects may be acquired by the mobile computing device via a supplemental information provider. To allow efficient storage and exchange of virtual objects, one or more virtual objects may be implemented in a predetermined and standardized file format. Each of the one or more virtual objects may be associated with a plurality of different states. The current state of the virtual object may be determined via a state diagram encoded within a predetermined and standardized file format.
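By way of illustration only, the following sketch shows one hypothetical way a standardized file might encode such a state diagram and how a client could track the current state and predict likely state changes. The JSON layout, field names, model names, and trigger names are assumptions made for this example; they are not the file format defined by this disclosure.

```python
import json

# Hypothetical standardized file for a state-based virtual object: each state names its
# own 3-D model and the trigger events (with trigger probabilities) that cause a state
# change out of that state. All field names here are illustrative assumptions.
VIRTUAL_BOX_FILE = json.dumps({
    "object_id": "virtual_box_39",
    "initial_state": "closed",
    "states": {
        "closed": {
            "model": "box_closed.obj",
            "triggers": [
                {"event": "gaze_dwell", "next_state": "open", "probability": 0.7},
                {"event": "open_hand_gesture", "next_state": "open", "probability": 0.3},
            ],
        },
        "open": {
            "model": "box_open.obj",
            "triggers": [
                {"event": "close_voice_command", "next_state": "closed", "probability": 1.0},
            ],
        },
    },
})


class StateBasedVirtualObject:
    """Tracks the current state of a virtual object using its encoded state diagram."""

    def __init__(self, file_contents: str):
        spec = json.loads(file_contents)
        self.states = spec["states"]
        self.current_state = spec["initial_state"]

    def handle_event(self, event: str) -> str:
        """Apply a trigger event; return the 3-D model associated with the resulting state."""
        for trigger in self.states[self.current_state]["triggers"]:
            if trigger["event"] == event:
                self.current_state = trigger["next_state"]
                break
        return self.states[self.current_state]["model"]

    def predicted_next_states(self):
        """Rank potential state changes by the trigger probabilities of the current state."""
        triggers = self.states[self.current_state]["triggers"]
        return sorted(((t["probability"], t["next_state"]) for t in triggers), reverse=True)


box = StateBasedVirtualObject(VIRTUAL_BOX_FILE)
print(box.predicted_next_states())     # -> [(0.7, 'open'), (0.3, 'open')]
print(box.handle_event("gaze_dwell"))  # -> box_open.obj
```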
FIG. 1 is a block diagram of one embodiment of a networked computing environment 100 in which the disclosed technology may be implemented. The networked computing environment 100 includes a plurality of computing devices interconnected by one or more networks 180. The one or more networks 180 allow a particular computing device to connect to and communicate with another computing device. The depicted computing devices include mobile device 11, mobile device 12, mobile device 19, and server 15. In some embodiments, the plurality of computing devices may include other computing devices not shown. In some embodiments, the plurality of computing devices may include more or fewer computing devices than the number of computing devices shown in FIG. 1. The one or more networks 180 may include a secure network such as an enterprise private network, an unsecure network such as a wireless open network, a local area network (LAN), a wide area network (WAN), and the Internet. Each of the one or more networks 180 may include hubs, bridges, routers, switches, and wired transmission media, such as a wired network or direct-wired connection.
The server 15, which may comprise a supplemental information server or an application server, may allow clients to download information (e.g., text, audio, image, and video files) from the server or perform search queries related to particular information stored on the server. In general, a "server" may include a hardware device that acts as a host in a client-server relationship, or a software process that shares resources with or performs work for one or more clients. Communication between computing devices in a client-server relationship may be initiated by a client sending a request to a server to access a particular resource or perform a particular job. The server may then perform the requested action and send a response back to the client.
One embodiment of server 15 includes a network interface 155, a processor 156, a memory 157, and a translator 158, all in communication with each other. Network interface 155 allows server 15 to connect to one or more networks 180. Network interface 155 may include a wireless network interface, a modem, and/or a wired network interface. Processor 156 allows server 15 to execute computer readable instructions stored in memory 157 to perform processes discussed herein. The translator 158 may include mapping logic for translating a first file in a first file format to a corresponding second file in a second file format (i.e., the second file is a translated version of the first file). The translator 158 may be configured with file mapping instructions that provide instructions for mapping files in a first file format (or portions thereof) to corresponding files in a second file format.
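As a rough illustration of the translator's role, file mapping instructions can be modeled as a key-to-key table applied to the fields of a file in the first format. The mapping keys and file contents below are hypothetical and are not part of this disclosure.

```python
# Minimal sketch of a translator applying file mapping instructions: each field of the
# first file format is copied to the correspondingly named field of the second format.
def translate(first_file: dict, file_mapping_instructions: dict) -> dict:
    """Return a second file that is a translated version of the first file."""
    return {
        second_key: first_file[first_key]
        for first_key, second_key in file_mapping_instructions.items()
        if first_key in first_file
    }


first_file = {"id": "obj-1", "geometry": "monster.obj", "mass_kg": 2.0}
mapping = {"id": "object_id", "geometry": "model", "mass_kg": "virtual_mass"}
print(translate(first_file, mapping))
# -> {'object_id': 'obj-1', 'model': 'monster.obj', 'virtual_mass': 2.0}
```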
One embodiment of mobile device 19 includes network interface 145, processor 146, memory 147, camera 148, sensor 149, and display 150, all in communication with each other. Network interface 145 allows mobile device 19 to connect to one or more networks 180. Network interface 145 may include a wireless network interface, a modem, and/or a wired network interface. Processor 146 allows mobile device 19 to execute computer readable instructions stored in memory 147 in order to perform processes discussed herein. The camera 148 may capture color images and/or depth images. Sensor 149 may generate motion and/or orientation information associated with mobile device 19. The sensors 149 may include an Inertial Measurement Unit (IMU). The display 150 may display digital images and/or video. Display 150 may comprise a see-through display.
The networked computing environment 100 may provide a cloud computing environment for one or more computing devices. Cloud computing refers to Internet-based computing in which shared resources, software, and/or information are provided on demand to one or more computing devices over the Internet (or another global network). The term "cloud" is used as a metaphor for the Internet, based on the cloud drawing used in computer network diagrams to depict the Internet as an abstraction of the underlying infrastructure it represents.
In one example, mobile device 19 includes a head mounted display device (HMD) that provides an augmented reality environment or a mixed reality environment for an end user of the HMD. The HMD may include a video see-through and/or optical see-through system. An optical see-through HMD worn by an end user may allow actual direct viewing of the real-world environment (e.g., via transparent lenses) and may simultaneously project images of virtual objects into the field of view of the end user, thereby augmenting the real-world environment perceived by the end user with the virtual objects.
An end user wearing an HMD may move around in a real-world environment (e.g., a living room) with the HMD and perceive views of the real world overlaid with images of virtual objects. The virtual object may appear to maintain a coherent spatial relationship with the real-world environment (i.e., as the end user turns their head or moves through the real-world environment, the image displayed to the end user will change such that the virtual object appears to exist within the real-world environment as perceived by the end user). The virtual objects may also appear fixed relative to the end user's perspective (e.g., a virtual menu that always appears in the upper right corner of the end user's perspective regardless of how the end user turns their head or moves in the real-world environment). In one embodiment, the environment mapping of the real-world environment is performed by the server 15 (i.e., on the server side), while the camera localization is performed on the mobile device 19 (i.e., on the client side). The virtual objects may include a textual description associated with a real-world object. The virtual objects may also include virtual obstacles (e.g., a virtual wall that cannot be moved) and virtual targets (e.g., a virtual monster).
In some embodiments, a mobile device (such as mobile device 19) may communicate with a server in the cloud (such as server 15) and may provide the server with location information (e.g., the location of the mobile device via GPS coordinates) and/or image information (e.g., information regarding objects detected within a field of view of the mobile device) associated with the mobile device. In response, the server may transmit one or more virtual objects to the mobile device based on the location information and/or image information provided to the server. In one embodiment, the mobile device 19 may specify a particular file format for receiving the one or more virtual objects, and the server 15 may transmit to the mobile device 19 the one or more virtual objects contained within a file of the particular file format.
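A minimal server-side sketch of this exchange follows. The coordinates, object store, search radius, and format tag are invented for illustration and are not specified by this disclosure.

```python
import math

# Hypothetical store of virtual objects registered against real-world GPS locations.
VIRTUAL_OBJECT_STORE = [
    {"id": "virtual_monster_17a", "lat": 47.6405, "lon": -122.1297, "format": "vobj-1.0"},
    {"id": "virtual_wall_21", "lat": 47.7000, "lon": -122.2000, "format": "vobj-1.0"},
]


def nearby_virtual_objects(lat, lon, radius_m=100.0, file_format="vobj-1.0"):
    """Return virtual objects near the reported GPS location, restricted to the
    particular file format that the mobile device asked for."""

    def distance_m(a_lat, a_lon, b_lat, b_lon):
        # Flat-earth approximation; adequate at the scale of a ~100 m radius.
        dx = (b_lon - a_lon) * 111_320.0 * math.cos(math.radians(a_lat))
        dy = (b_lat - a_lat) * 110_540.0
        return math.hypot(dx, dy)

    return [
        obj for obj in VIRTUAL_OBJECT_STORE
        if obj["format"] == file_format
        and distance_m(lat, lon, obj["lat"], obj["lon"]) <= radius_m
    ]


print(nearby_virtual_objects(47.6404, -122.1296))  # -> only the nearby virtual_monster_17a
```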
Fig. 2A depicts one embodiment of a mobile device 19 in communication with a second mobile device 5. Mobile device 19 may comprise a see-through HMD. As depicted, mobile device 19 communicates with mobile device 5 via wired connection 6. However, mobile device 19 may also communicate with mobile device 5 via a wireless connection. Mobile device 5 may be used by mobile device 19 to offload computationally intensive processing tasks (e.g., rendering virtual objects) and to store virtual object information and other data necessary to provide an augmented reality environment on mobile device 19.
Fig. 2B depicts one embodiment of a portion of an HMD (such as mobile device 19 of fig. 1). Only the right side of head mounted display device (HMD) 200 is depicted. The HMD 200 includes a right temple 202, a nose bridge 204, a lens 216, and a lens frame 214. Right temple 202 includes a capture device 213 (e.g., a forward facing camera and/or microphone) in communication with processing unit 236. The capture device 213 may include one or more cameras for recording digital images and/or videos, and may transmit visual recordings to the processing unit 236. One or more cameras may capture color information, IR information, and/or depth information. The capture device 213 may also include one or more microphones for recording sound, and may transmit the audio recording to the processing unit 236.
The right temple 202 also includes an earpiece 230, a motion and orientation sensor 238, a GPS receiver 232, a power source 239, and a wireless interface 237, all in communication with the processing unit 236. The motion and orientation sensor 238 may include a three-axis magnetometer, a three-axis gyroscope, and/or a three-axis accelerometer. In one embodiment, the motion and orientation sensor 238 may comprise an Inertial Measurement Unit (IMU). The GPS receiver 232 may determine a GPS location associated with the HMD 200. Processing unit 236 may include one or more processors and a memory for storing computer readable instructions to be executed on the one or more processors. The memory may also store other types of data to be processed by the one or more processors.
In one embodiment, the lens 216 may include a see-through display whereby images generated by the processing unit 236 may be projected and/or displayed on the see-through display. The capture device 213 may be calibrated such that the field of view captured by the capture device 213 corresponds to the field of view seen by the end user of the HMD 200. The earpiece 230 may be used to output sounds associated with the projected images of virtual objects. In some embodiments, HMD 200 may include two or more forward-facing cameras (e.g., one camera on each temple) in order to obtain depth from stereo information associated with the field of view captured by the forward-facing cameras. The two or more forward-facing cameras may also include 3-D, IR, and/or RGB cameras. Depth information may also be acquired from a single camera utilizing depth-from-motion techniques. For example, two images may be acquired from a single camera at two different points in time, with the camera located at two different points in space at those times. Given positional information relating to the two different points in space, a disparity calculation may then be performed.
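The disclosure does not spell out the disparity calculation itself; a minimal sketch using the standard pinhole-camera relation Z = f·B/d (focal length in pixels, baseline between the two camera positions, disparity in pixels) could look like this:

```python
def depth_from_disparity(focal_length_px: float, baseline_m: float, disparity_px: float) -> float:
    """Standard stereo relation Z = f * B / d. The baseline is the distance between the
    two camera positions, whether from two forward-facing cameras or from one camera
    located at two different points in space at different points in time."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_length_px * baseline_m / disparity_px


# Example: 1000 px focal length, 6 cm of camera separation, 12 px disparity -> 5 m depth.
print(depth_from_disparity(1000.0, 0.06, 12.0))
```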
In some embodiments, HMD 200 may perform gaze detection for each of the end user's eyes using a gaze detection element and a three-dimensional coordinate system related to one or more human eye elements, such as a corneal center, a center of eyeball rotation, or a pupil center. Examples of gaze detection elements may include an illuminator to generate glints and a sensor to capture data representative of the generated glints. In some cases, the corneal center may be determined using planar geometry based on two glints. The corneal center links the pupil center and the center of rotation of the eyeball, which can be considered as a fixed position for determining the optical axis of the end user's eye at a particular gaze or viewing angle.
Fig. 2C depicts one embodiment of a portion of an HMD in which a gaze vector extending to a point of gaze is used to align the far interpupillary distance (IPD). HMD2 is one example of a mobile device, such as mobile device 19 in fig. 1. As depicted, gaze vectors 180l and 180r intersect at a point of gaze far away from the end user (i.e., when the end user is looking at a distant object, gaze vectors 180l and 180r are nearly parallel). An eyeball model of eyeballs 160l, 160r for each eye is shown based on the Gullstrand schematic eye model. Each eyeball is modeled as a sphere having a center of rotation 166, and includes a cornea 168 modeled as a sphere having a center 164. The cornea 168 rotates with the eyeball, and the center of rotation 166 of the eyeball may be treated as a fixed point. The cornea 168 overlies the iris 170, with the pupil 162 centered on the iris 170. On the surface 172 of each cornea are glints 174 and 176.
As depicted in fig. 2C, the sensor detection area 139 (i.e., 139l and 139r, respectively) is aligned with the optical axis of each display optical system 14 within the spectacle frame 115. In one example, the sensors associated with the detection areas may include one or more cameras capable of capturing image data representing glints 174l and 176l generated by illuminators 153a and 153b, respectively, on the left side of the frame 115, and data representing glints 174r and 176r generated by illuminators 153c and 153d, respectively, on the right side of the frame 115. Through the display optical systems 14l and 14r in the spectacle frame 115, the end user's field of view includes real objects 190, 192, and 194 and virtual objects 182 and 184.
An axis 178 formed from the center of rotation 166 through the corneal center 164 to the pupil 162 includes the optical axis of the eye. Gaze vector 180 is also referred to as the line of sight or visual axis extending from the fovea through pupil center 162. In some embodiments, the optical axis is determined and a small correction is determined by user calibration to obtain the visual axis selected as the gaze vector. For each end user, the virtual object may be displayed by the display device at each of a plurality of predetermined locations at different horizontal and vertical positions. During the display of the object at each location, the optical axis of each eye can be calculated and the ray modeled as extending from that location into the user's eye. The gaze offset angle, with horizontal and vertical components, may be determined based on how the optical axis must be moved to align with the modeled ray. From different positions, the average gaze offset angle with horizontal or vertical components may be selected as a small correction to be applied to each calculated optical axis. In some embodiments, only the horizontal component is used for gaze offset angle correction.
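A simplified sketch of this calibration step follows. The vector representation and angle decomposition are assumptions for illustration; the disclosure states only that average horizontal and vertical offsets are derived from the calibration locations.

```python
import math


def gaze_offset_correction(samples):
    """For each calibration location, compare the calculated optical axis with the ray
    modeled from that location into the user's eye, then average the horizontal and
    vertical offset components. `samples` is a list of (optical_axis, modeled_ray)
    pairs, each a 3-D direction vector (x right, y up, z forward)."""

    def yaw_pitch(v):
        x, y, z = v
        return math.atan2(x, z), math.atan2(y, math.hypot(x, z))

    h_offsets, v_offsets = [], []
    for optical_axis, modeled_ray in samples:
        axis_h, axis_v = yaw_pitch(optical_axis)
        ray_h, ray_v = yaw_pitch(modeled_ray)
        h_offsets.append(ray_h - axis_h)
        v_offsets.append(ray_v - axis_v)
    # The averaged offsets form the small correction applied to each calculated optical
    # axis to obtain the visual axis selected as the gaze vector.
    return sum(h_offsets) / len(h_offsets), sum(v_offsets) / len(v_offsets)


samples = [
    ((0.02, 0.00, 1.0), (0.05, 0.01, 1.0)),
    ((0.00, 0.03, 1.0), (0.03, 0.04, 1.0)),
]
print(gaze_offset_correction(samples))  # -> small horizontal and vertical corrections (radians)
```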
As depicted in fig. 2C, gaze vectors 180l and 180r are not perfectly parallel as the vectors become closer together as they extend from the eyeball into the field of view at the gaze point. At each display optical system 14, the gaze vector 180 appears to intersect the optical axis, with the sensor detection region 139 centered at this intersection. In this configuration, the optical axis is aligned with the interpupillary distance (IPD). When the end user looks straight ahead, the measured IPD is also called far IPD.
Fig. 2D depicts one embodiment of a portion of HMD2, in which a gaze vector extending to a point of gaze is used to align the near-interpupillary distance (IPD). HMD2 is one example of a mobile device, such as mobile device 19 in fig. 1. As depicted, the cornea 168l of the left eye is rotated to the right or toward the end user's nose, and the cornea 168r of the right eye is rotated to the left or toward the end user's nose. Both pupils are looking at real objects 194 within a certain distance of the end user. The gaze vectors 180l and 180r from each eye enter the Panum's fusional area 195 where the real object 194 is located. The Panum's fusion area is an area of single vision in a binocular viewing system like human vision. The intersection of gaze vectors 180l and 180r indicates that the end user is looking at real-world object 194. At such distances, as the eyeballs rotate inward, the distance between their pupils decreases to a near IPD. The near IPD is typically about 4mm smaller than the far IPD. A near IPD distance criterion (e.g., a point of regard less than four feet from the end user) may be used to switch or adjust the IPD alignment of the display optical system 14 to that of the near IPD. For near IPDs, each display optical system 14 may be moved toward the nose of the end user such that the optical axis and detection region 139 are moved toward the nose by several millimeters, as represented by detection regions 139ln and 139 rn.
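A trivial sketch of this switching decision is shown below; the four-foot threshold comes from the example distance criterion in the preceding paragraph, while the function and return labels are assumed for illustration.

```python
NEAR_IPD_THRESHOLD_M = 1.22  # roughly four feet, per the example distance criterion above


def select_ipd_alignment(point_of_gaze_distance_m: float) -> str:
    """Choose whether the display optical systems should be aligned to the far IPD or
    shifted toward the nose for the near IPD, based on the point of gaze distance."""
    return "near_ipd" if point_of_gaze_distance_m < NEAR_IPD_THRESHOLD_M else "far_ipd"


print(select_ipd_alignment(0.8))   # -> near_ipd (eyes converge; optics move toward the nose)
print(select_ipd_alignment(10.0))  # -> far_ipd
```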
More information on determining the IPD for an end user of the HMD and adjusting the display optical systems accordingly may be found in U.S. patent application No. 13/250,878, entitled "Personal Audio/Visual System," filed September 30, 2011, which is hereby incorporated by reference in its entirety.
Fig. 2E depicts one embodiment of a portion of an HMD2 with movable display optical systems that include gaze detection elements. What appears as a lens for each eye represents a display optical system 14 for each eye, i.e., 14r and 14l. Each display optical system includes a see-through lens and optical elements (e.g., mirrors, filters) for seamlessly fusing virtual content with the actual direct real-world view seen through the lenses of the HMD. A display optical system 14 has an optical axis that is generally centered in the see-through lens, in which light is generally collimated to provide a distortion-free view. For example, when an eye care professional fits an ordinary pair of eyeglasses to an end user's face, the eyeglasses are usually fit such that they sit on the end user's nose at a position where each pupil is aligned with the center or optical axis of the respective lens, so that collimated light reaches the end user's eye for a clear or undistorted view.
As depicted in fig. 2E, the detection regions 139r, 139l of at least one sensor are aligned with the optical axis of its respective display optical system 14r, 14l such that the centers of the detection regions 139r, 139l capture light along the optical axis. If display optical system 14 is aligned with the pupil of the end user, each detection region 139 of the respective sensor 134 is aligned with the pupil of the end user. The reflected light of the detection region 139 is transmitted to the actual image sensor 134 of the camera via one or more optical elements, the sensor 134 being shown by a dashed line inside the frame 115 in this embodiment.
In one embodiment, the at least one sensor 134 may be a visible light camera (e.g., an RGB camera). In one example, the optical element or light directing element comprises a visible light mirror that is partially transmissive and partially reflective. The visible camera provides image data of the pupil of the end user's eye, while the IR photodetector 152 captures the glint as a reflection in the IR portion of the spectrum. If a visible light camera is used, reflections of the virtual image may appear in the eye data captured by the camera. Image filtering techniques can be used to remove virtual image reflections if desired. The IR camera is insensitive to virtual image reflections on the eye.
In another embodiment, at least one sensor 134 (i.e., 134l and 134r) is an IR camera or Position Sensitive Detector (PSD) to which IR radiation can be directed. The IR radiation reflected from the eye may be from incident radiation of the illuminator 153, other IR illuminators (not shown), or from ambient IR radiation reflected from the eye. In some cases, the sensor 134 may be a combination of RGB and IR cameras, and the light directing elements may include visible light reflecting or turning elements and IR radiation reflecting or turning elements. In some cases, the camera 134 may be embedded in a lens of the system 14. Additionally, image filtering techniques may be applied to blend the cameras into the user's field of view to mitigate any interference to the user.
As depicted in fig. 2E, there are four sets of illuminators 153, each illuminator 153 paired with a photodetector 152 and separated by a barrier 154 to avoid interference between the incident light generated by the illuminator 153 and the reflected light received at the photodetector 152. To avoid unnecessary clutter in the drawings, reference numerals are shown for a representative pair. Each illuminator may be an Infrared (IR) illuminator that generates a narrow beam of light at approximately a predetermined wavelength. Each of the photodetectors may be selected to capture light at approximately the predetermined wavelength. Infrared may also include near-infrared. Because an illuminator or photodetector may exhibit slight wavelength drift, or because a small range about the wavelength may be acceptable, the illuminator and photodetector may have a tolerance range about the wavelength used for generation or detection. In some embodiments where the sensor is an IR camera or an IR Position Sensitive Detector (PSD), the photodetectors may include additional data capture devices and may also be used to monitor the operation of the illuminators, such as wavelength drift, beam width changes, and the like. The photodetectors may also provide glint data when a visible light camera is used as the sensor 134.
As depicted in fig. 2E, each display optical system 14 and its arrangement of gaze detection elements facing each eye (e.g., camera 134 and its detection region 139, illuminator 153, and photodetector 152) are located on a movable inner frame portion 171l, 171r. In this example, the display adjustment mechanism includes one or more motors 203 having a rotational axis 205, which is attached to the inner frame portion 117 that slides from left to right, or vice versa, under the guidance and force of a drive shaft 205 driven by the motors 203. In some embodiments, one motor 203 can drive both inner frames.
Fig. 2F depicts an alternative embodiment of a portion of an HMD2 with movable display optical systems including gaze detection elements. As depicted, each display optical system 14 is enclosed in a separate frame portion 115l, 115r. Each of the frame portions can be moved separately by a motor 203. More information about HMDs with movable display optical systems can be found in U.S. patent application No. 13/250,878, entitled "Personal Audio/Visual System," filed September 30, 2011, which is incorporated herein by reference in its entirety.
Fig. 2G depicts one embodiment of a side view of a portion of the HMD2 including the temple 102 of the frame 115. At the front of the frame 115 is a forward-facing video camera 113 that can capture video and still images. In some embodiments, the forward-facing camera 113 may include a depth camera and a visible light or RGB camera. In one example, the depth camera may include an IR illuminator emitter and a heat-reflective surface, such as a hot mirror in front of a visible image sensor, that transmits visible light and directs reflected IR radiation within a wavelength range emitted by the illuminator, or around a predetermined wavelength, to a CCD or other type of depth sensor. Other types of visible light cameras (e.g., RGB cameras or image sensors) and depth cameras may be used. More information about depth cameras can be found in U.S. patent application No. 12/813,675, filed June 11, 2010, which is incorporated herein by reference in its entirety. Data from the cameras may be sent to the control circuitry 136 for processing in order to identify objects through image segmentation and/or edge detection techniques.
The earpiece 130, inertial sensor 132, GPS transceiver 144, and temperature sensor 138 are internal to the temple 102 or mounted on the temple 102. In one embodiment, inertial sensors 132 include a three axis magnetometer, a three axis gyroscope, and a three axis accelerometer. Inertial sensors are used to sense the position, orientation, and sudden acceleration of the HMD 2. From these movements, the head position can also be determined.
In some cases, HMD2 may include an image generation unit that may create one or more images including one or more virtual objects. In some embodiments, a microdisplay may be used as the image generation unit. As depicted, the microdisplay assembly 173 includes light processing elements and a variable focus adjuster 135. An example of a light processing element is the microdisplay unit 120. Other examples include one or more optical elements, such as one or more lenses of lens system 122, and one or more reflecting elements, such as reflecting surface 124. Lens system 122 may include a single lens or multiple lenses.
A microdisplay unit 120 is mounted on or inside the temple 102; it includes an image source and generates an image of a virtual object. The microdisplay unit 120 is optically aligned with the lens system 122 and the reflecting surface 124. The optical alignment may be along an optical axis 133, or along an optical path 133 that includes one or more optical axes. The microdisplay unit 120 projects the image of the virtual object through the lens system 122, which can direct the image light to the reflecting element 124. The variable focus adjuster 135 changes the displacement between one or more light processing elements in the optical path of the microdisplay assembly, or the optical power of an element in the microdisplay assembly. The optical power of a lens is defined as the inverse of its focal length (i.e., 1/focal length), so that a change in one affects the other. A change in focal length results in a change in the region of the field of view that is in focus for the image generated by the microdisplay assembly 173.
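For reference, the displacement/power trade-off described above is commonly expressed with the thin-lens equation, which is standard optics and is not stated explicitly in this paragraph. With S1 the object distance from the microdisplay to the lens system along the optical path and S2 the corresponding image distance,

```latex
\frac{1}{S_1} + \frac{1}{S_2} = \frac{1}{f} = P
```

so changing the displacement S1 along optical path 133 while holding the optical power P fixed changes S2 (and therefore which region of the field of view is in focus), and changing P with the displacements fixed has the analogous effect.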
In one example of a displacement change made by microdisplay assembly 173, the displacement change is guided within an armature 137, which armature 137 supports at least one light processing element such as lens system 122 and microdisplay 120. The armature 137 helps stabilize the alignment along the optical path 133 during physical movement of the components to achieve a selected displacement or optical power. In some examples, the adjuster 135 may move one or more optical elements, such as a lens in the lens system 122 within the armature 137. In other examples, the armature may have a slot or space in the region around the light processing element so that it slides over the element (e.g., microdisplay 120) without moving the light processing element. Another element in the armature, such as lens system 122, is attached so that system 122 or the lens therein slides or moves with moving armature 137. The displacement range is typically on the order of a few millimeters (mm). In one example, this range is 1-2 mm. In other examples, the armature 137 may provide support for a focus adjustment technique involving adjustment of other physical parameters besides displacement to the lens system 122. An example of such a parameter is polarization.
More information on adjusting the focal length of a microdisplay assembly can be found in U.S. patent application No. 12/941,825, entitled "Automatic Variable Virtual Focus for Augmented Reality Displays," filed November 8, 2010, which is incorporated herein by reference in its entirety.
In one embodiment, the adjuster 135 may be an actuator such as a piezoelectric motor. Other techniques for actuators may also be used, and some examples of such techniques are voice coils formed from coils and permanent magnets, magnetostrictive elements, and electrostrictive elements.
Several different image generation technologies may be used to implement microdisplay 120. In one example, microdisplay 120 can be implemented using a transmissive projection technology in which the light source is modulated by an optically active material and backlit with white light. These technologies are typically implemented using LCD-type displays with powerful backlights and high optical power densities. Microdisplay 120 can also be implemented using a reflective technology in which external light is reflected and modulated by an optically active material. Depending on the technology, the illumination may be forward lit by a white light source or an RGB source. Digital light processing (DLP), liquid crystal on silicon (LCOS), and the display technology from Qualcomm, Inc. are all examples of efficient reflective technologies, as most of the energy is reflected from the modulated structure, and they may be used in the systems described herein. Additionally, microdisplay 120 can be implemented using an emissive technology in which light is generated by the display. For example, the PicoP™ engine from Microvision, Inc. uses a micro mirror to steer a laser signal either onto a tiny screen that acts as a transmissive element or directly to the eye (e.g., as a beam of laser light).
FIG. 2H depicts one embodiment of a side view of a portion of HMD2 that provides support for three-dimensional adjustment of a microdisplay assembly. Some of the reference numerals shown above in fig. 2G have been removed to avoid clutter in the drawing. In some embodiments where display optical system 14 is moved in three dimensions, the optical elements represented by reflective surface 124 and other elements of microdisplay assembly 173 may also be moved to maintain an optical path 133 of light for a virtual image to the display optical system. In this example, an XYZ transport mechanism, consisting of one or more motors under the control of the control circuit 136, represented by motor frame 203 and drive shaft 205, controls the movement of the elements of the microdisplay assembly 173. An example of a motor that may be used is a piezoelectric motor. In the example shown, one motor is attached to armature 137 and also moves variable focus adjuster 135, and another representative motor 203 controls the movement of reflective element 124.
Fig. 3A-3E provide examples of various augmented reality environments in which one or more virtual objects are generated or adapted based on environmental features identified within various real-world environments. In some embodiments, the one or more virtual objects may include state-based virtual objects.
Fig. 3A depicts one embodiment of an augmented reality environment 310 as seen by an end user wearing an HMD (such as mobile device 19 in fig. 1). The end user can see both real and virtual objects. The real object may comprise a chair 16. The virtual objects may include virtual monsters 17a-b. Because the virtual monsters 17a-b perceived through the see-through lenses of the HMD are displayed or overlaid in the real-world environment, the end user of the HMD may perceive that the virtual monsters 17a-b are present within the real-world environment.
Fig. 3B depicts one embodiment of an augmented reality environment 315 as seen by an end user wearing an HMD (such as mobile device 19 in fig. 1). The end user can see both real and virtual objects. The real objects may include the chair 16 and the computing system 10. The virtual object may include a virtual monster 17a. The computing system 10 may include a computing environment 12, a capture device 20, and a display 14, which are in communication with each other. The computing environment 12 may include one or more processors. The capture device 20 may include one or more color or depth sensing cameras that may be used to visually monitor one or more targets including people and one or more other real objects within a particular real-world environment. The capture device 20 may also include a microphone. In one example, the capture device 20 may include a depth sensing camera and microphone, and the computing environment 12 may include a gaming console. Computing system 10 may support multiple mobile devices or clients by providing virtual objects and/or mapping information related to a real-world environment to the multiple mobile devices or clients.
In some embodiments, computing system 10 may track and analyze virtual objects within augmented reality environment 315. Computing system 10 may also track and analyze real objects within a real-world environment corresponding to augmented reality environment 315. The rendering of images associated with virtual objects (such as virtual monster 17 a) may be performed by computing system 10 or by the HMD. The computing system 10 may also provide the HMD with a 3-D mapping associated with the augmented reality environment 315.
In one embodiment, computing system 10 may map the real-world environment associated with augmented reality environment 315 (e.g., by generating a 3-D mapping of the real-world environment) and track both real and virtual objects within augmented reality environment 315 in real-time. In one example, computing system 10 provides virtual object information for a particular store (e.g., a clothing store or auto dealer). Before the end user of the HMD enters the particular store, the computing system 10 may have generated a 3-D map that includes static real-world objects inside the particular store. When the end user enters the particular store, computing system 10 may begin tracking dynamic real-world objects and virtual objects within augmented reality environment 315. Real-world objects (including end users) moving within the real-world environment may be detected and classified using edge detection and pattern recognition techniques. As the end user walks around the particular store, the computing system may determine interactions between the real-world objects and the virtual objects and provide images of the virtual objects to the HMD for viewing by the end user. In some embodiments, a 3-D mapping of the real-world environment including static real-world objects inside the particular store may be transmitted to the HMD along with one or more virtual objects for use inside the particular store. The HMD may then determine the interactions of real-world objects with the one or more virtual objects within the particular store and generate the augmented reality environment 315 locally on the HMD.
Fig. 3C depicts one embodiment of an augmented reality environment 320. The end user can see both real and virtual objects. The real object may comprise a chair 16. The virtual objects may include virtual monsters 17a-d. As virtual monsters 17a-d perceived through the see-through lenses of the HMD are displayed or overlaid in the real-world environment, the end user of the HMD may perceive that virtual monsters 17a-d are present within the real-world environment.
As depicted, the real-world environment associated with augmented reality environment 320 includes more open space than the real-world environment associated with augmented reality environment 310 in fig. 3A. In some cases, to achieve a particular difficulty associated with a gaming application, a larger amount of open space may require a greater number of virtual monsters to appear within augmented reality environment 320 (e.g., avoiding four virtual monsters moving within a large real-world region may be considered as difficult as avoiding two virtual monsters within a smaller real-world region). However, in other gaming applications, a greater amount of open space may correspond to a more difficult gaming environment. More information about augmented reality environments with adaptive game rules can be found in U.S. patent application No. 13/288,350, entitled "Augmented Reality Playspaces With Adaptive Game Rules," filed November 3, 2011, which is incorporated herein by reference in its entirety.
FIGS. 3D and 3E depict one embodiment of an augmented reality environment 330 that includes state-based virtual objects. As depicted, the end user 29 of the HMD 19 may view both real objects and virtual objects. The real object may comprise a chair 16. The virtual objects may include virtual monsters 17a-c and a state-based virtual object comprising virtual box 39. Because the virtual objects perceived through the see-through lenses of HMD 19 are displayed or overlaid in the real-world environment, the end user of HMD 19 may perceive that the virtual objects exist within the real-world environment.
In one embodiment, end user 29 may view a state-based virtual object that includes virtual box 39. In the first state depicted in fig. 3D, the virtual box appears closed. By staring at virtual box 39 for a particular period of time and/or performing a particular physical gesture (e.g., a particular hand gesture), virtual box 39 may transition from the first state depicted in fig. 3D to the second state depicted in fig. 3E. Once the virtual box 39 is set to the second state, the shape and/or other properties of the object may be altered. As depicted, virtual box 39 appears to be open and a new virtual object (i.e., virtual monster 17d) is generated and displayed as existing within augmented reality environment 330. In one example, to close virtual box 39, end user 29 may have to perform a physical gesture different from the particular physical gesture used to open the virtual box, and/or issue a particular voice command. In some embodiments, the second state may correspond to a 3-D model of the virtual object, the 3-D model being different from the 3-D model associated with the first state (e.g., the second state may be associated with a deformed version of the virtual object in the first state).
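For illustration only, one way the gaze-dwell trigger for virtual box 39 could be evaluated frame by frame is sketched below. The two-second threshold, the frame rate, and the object identifiers are assumed values, not requirements of this disclosure.

```python
class GazeDwellTrigger:
    """Fires a trigger event after the end user has gazed at a target virtual object
    for at least `dwell_threshold_s` seconds of continuous gaze."""

    def __init__(self, target_id: str, dwell_threshold_s: float = 2.0):
        self.target_id = target_id
        self.dwell_threshold_s = dwell_threshold_s
        self.accumulated_s = 0.0

    def update(self, gazed_object_id: str, frame_dt_s: float) -> bool:
        """Accumulate dwell time while the gaze stays on the target; reset otherwise."""
        if gazed_object_id == self.target_id:
            self.accumulated_s += frame_dt_s
        else:
            self.accumulated_s = 0.0
        return self.accumulated_s >= self.dwell_threshold_s


trigger = GazeDwellTrigger("virtual_box_39")
fired = False
for _ in range(70):  # ~70 frames at 30 fps is about 2.3 s of steady gaze
    fired = trigger.update("virtual_box_39", 1 / 30)
print(fired)  # -> True: virtual box 39 may now transition from the closed state to the open state
```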
FIG. 4 illustrates one embodiment of a computing system 10 including a capture device 20 and a computing environment 12. In some embodiments, the capture device 20 and the computing environment 12 may be integrated in a single computing device. The single computing device may comprise a mobile device, such as mobile device 19 in FIG. 1. In some cases, the capture device 20 and the computing environment 12 may be integrated in an HMD.
In one embodiment, the capture device 20 may include one or more image sensors for capturing images and video. The image sensor may include a CCD image sensor or a CMOS image sensor. In some embodiments, the capture device 20 may include an IR CMOS image sensor. The capture device 20 may also include a depth camera (or depth sensing camera) configured to capture video with depth information including a depth image, which may include depth values, via any suitable technique including, for example, time-of-flight, structured light, stereo image, or the like.
The capture device 20 may include an image camera component 32. In one embodiment, the image camera component 32 may include a depth camera that may capture a depth image of a scene. The depth image may include a two-dimensional (2-D) pixel area of the captured scene where each pixel in the 2-D pixel area may represent a depth value, such as a distance, e.g., in centimeters, millimeters, or the like, of an object in the captured scene from the image camera component 32.
The image camera component 32 may include an IR light component 34, a three-dimensional (3-D) camera 36, and an RGB camera 38 that may be used to capture depth images of a capture area. For example, in time-of-flight analysis, the IR light component 34 of the capture device 20 may emit an infrared light onto the capture area and may then use sensors to detect the backscattered light from the surface of one or more objects in the capture area with, for example, the 3-D camera 36 and/or the RGB camera 38. In some embodiments, pulsed infrared light may be used so that the time difference between an outgoing light pulse and a corresponding incoming light pulse may be measured and used to determine a physical distance from the capture device 20 to a particular location on one or more objects in the capture area. Furthermore, the phase of the outgoing light wave may be compared to the phase of the incoming light wave to determine the phase shift. The phase shift may then be used to determine a physical distance from the capture device to a particular location associated with one or more objects.
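The two measurements mentioned above reduce to standard time-of-flight relations, which are not given explicitly in this disclosure; a minimal sketch:

```python
import math

SPEED_OF_LIGHT_M_S = 299_792_458.0


def distance_from_pulse(round_trip_time_s: float) -> float:
    """Pulsed time-of-flight: light travels out and back, so distance = c * t / 2."""
    return SPEED_OF_LIGHT_M_S * round_trip_time_s / 2.0


def distance_from_phase_shift(phase_shift_rad: float, modulation_freq_hz: float) -> float:
    """Continuous-wave time-of-flight: a phase shift of delta_phi at modulation frequency
    f corresponds to distance = c * delta_phi / (4 * pi * f), within one ambiguity interval."""
    return SPEED_OF_LIGHT_M_S * phase_shift_rad / (4.0 * math.pi * modulation_freq_hz)


print(distance_from_pulse(20e-9))                    # ~3.0 m for a 20 ns round trip
print(distance_from_phase_shift(math.pi / 2, 30e6))  # ~1.25 m for a 90-degree shift at 30 MHz
```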
In another example, the capture device 20 may use a structured light to capture depth information. In this analysis, patterned light (i.e., light displayed as a known pattern such as a grid pattern or a stripe pattern) may be projected onto the capture area via, for example, the IR light component 34. Upon striking the surface of one or more objects (or targets) in the capture area, the pattern may deform in response. Such a deformation of the pattern may be captured by, for example, the 3-D camera 36 and/or the RGB camera 38 and analyzed to determine a physical distance from the capture device to a particular location on the one or more objects. The capture device 20 may include optics for producing collimated light. In some embodiments, a laser projector may be used to create the structured light pattern. The laser projector may include a laser, a laser diode, and/or an LED.
In some embodiments, two or more cameras may be integrated into one integrated capture device. For example, a depth camera and a video camera (e.g., an RGB video camera) may be incorporated into a common capture device. In some embodiments, two or more separate capture devices of the same or different types may be used in tandem. For example, a depth camera and a separate video camera may be used, two video cameras may be used, two depth cameras may be used, two RGB cameras may be used, or any combination and number of cameras may be used. In one embodiment, the capture device 20 may include two or more physically separated cameras that may view the capture area from different angles to obtain visual stereo data that may be resolved to generate depth information. Depth may also be determined by capturing images using multiple detectors (which may be monochromatic, infrared, RGB) or any other type of detector, and performing parallax calculations. Other types of depth image sensors may also be used to create the depth image.
As depicted in FIG. 4, the capture device 20 may include one or more microphones 40. Each of the one or more microphones 40 may include a transducer or sensor that may receive sound and convert it into an electrical signal. The one or more microphones may include a microphone array, wherein the one or more microphones may be arranged in a predetermined layout.
The capture device 20 may include a processor 42 that may be in operable communication with the image camera component 32. The processor 42 may include a standard processor, a special purpose processor, a microprocessor, or the like. Processor 42 may execute instructions that may include instructions for storing filters or profiles, receiving and analyzing images, determining whether a particular condition has occurred, or any other suitable instructions. It should be understood that at least some of the image analysis and/or target analysis and tracking operations may be performed by processors contained within one or more capture devices, such as capture device 20.
The capture device 20 may include a memory 44 that may store instructions executable by the processor 42, images or image frames captured by a 3-D camera or RGB camera, filters or profiles, or any other suitable information, images, or the like. In one example, memory 44 may include Random Access Memory (RAM), Read Only Memory (ROM), cache, flash memory, a hard disk, or any other suitable storage component. As shown, the memory 44 may be a separate component in communication with the image capture component 32 and the processor 42. In another embodiment, the memory 44 may be integrated into the processor 42 and/or the image capture component 32. In other embodiments, some or all of the components 32, 34, 36, 38, 40, 42, and 44 of the capture device 20 may be housed in a single housing.
The capture device 20 may communicate with the computing environment 12 via a communication link 46. The communication link 46 may be a wired connection including, for example, a USB connection, a firewire connection, an ethernet cable connection, and/or a wireless connection such as a wireless 802.11b, 802.11g, 802.11a, or 802.11n connection. The computing environment 12 may provide a clock to the capture device 20 that may be used to determine when to capture, for example, a scene via the communication link 46. In one embodiment, the capture device 20 may provide images captured by, for example, the 3-D camera 36 and/or the RGB camera 38 to the computing environment 12 via the communication link 46.
As depicted in fig. 4, the computing environment 12 includes an image and audio processing engine 194 in communication with an application 196. The applications 196 may include an operating system application or other computing applications such as a gaming application. The image and audio processing engine 194 includes a virtual data engine 197, an object and gesture recognition engine 190, structural data 198, a processing unit 191, and a memory unit 192, all in communication with one another. Image and audio processing engine 194 processes video, image and audio data received from capture device 20. To assist in the detection and/or tracking of objects, the image and audio processing engine 194 may utilize the structure data 198 and the object and gesture recognition engine 190. The virtual data engine 197 processes the virtual objects and records the position and orientation of the virtual objects in relation to various mappings of the real world environment stored in the memory unit 192.
The processing unit 191 may include one or more processors for executing object, face, and speech recognition algorithms. In one embodiment, image and audio processing engine 194 may apply object recognition and facial recognition techniques to the image or video data. For example, object recognition may be used to detect a particular object (e.g., football, car, person, or landmark), and facial recognition may be used to detect the face of a particular person. Image and audio processing engine 194 may apply audio and speech recognition techniques to the audio data. For example, audio recognition may be used to detect a particular sound. The particular faces, voices, sounds, and objects to be detected may be stored in one or more memories contained in memory unit 192. Processing unit 191 may execute computer readable instructions stored in memory unit 192 to perform the processes discussed herein.
The image and audio processing engine 194 may utilize the structure data 198 in performing object recognition. The structure data 198 may include structural information about the targets and/or objects to be tracked. For example, a skeletal model of a human may be stored to help identify body parts. In another example, structure data 198 may include structural information about one or more inanimate objects to help identify the one or more inanimate objects.
The image and audio processing engine 194 may also utilize the object and gesture recognition engine 190 in performing gesture recognition. In one example, the object and gesture recognition engine 190 may include a set of gesture filters, each comprising information about a gesture that may be performed by the skeletal model. The object and gesture recognition engine 190 may compare data captured by the capture device 20 (in the form of a skeletal model and movements associated therewith) to gesture filters in a gesture library to identify when a user (represented by the skeletal model) has performed one or more gestures. In one example, the image and audio processing engine 194 may use the object and gesture recognition engine 190 to help interpret movements of the skeletal model and detect performance of a particular gesture.
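As a toy illustration of how skeletal-model data might be compared against a gesture filter, consider the sketch below. The joint names, threshold, and filter structure are invented for the example and do not describe the actual interface of the object and gesture recognition engine 190.

```python
def matches_raise_hand_filter(frames, min_rise_m=0.25):
    """A toy gesture filter: the right hand must rise by at least `min_rise_m` over the
    captured sequence of skeletal frames and finish above the head."""
    if len(frames) < 2:
        return False
    start, end = frames[0], frames[-1]
    hand_rise = end["right_hand_y"] - start["right_hand_y"]
    return hand_rise >= min_rise_m and end["right_hand_y"] > end["head_y"]


frames = [
    {"right_hand_y": 1.1, "head_y": 1.6},
    {"right_hand_y": 1.4, "head_y": 1.6},
    {"right_hand_y": 1.7, "head_y": 1.6},
]
print(matches_raise_hand_filter(frames))  # -> True: report the detected gesture to application 196
```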
In some embodiments, the tracked one or more objects may be augmented with one or more markers, such as IR retro-reflective markers, to improve object detection and/or tracking. Planar reference images, encoded AR markers, QR codes, and/or bar codes may also be used to improve object detection and/or tracking. Upon detecting one or more objects and/or gestures, image and audio processing engine 194 may report the identity of each object or gesture detected, and the corresponding position and/or orientation (if applicable), to application 196.
More information on motion detection and tracking of objects can be found in U.S. patent application No. 12/641,788, "Motion Detection Using Depth Images," filed December 18, 2009, and U.S. patent application No. 12/475,308, "Device for Identifying and Tracking Multiple Humans Over Time," both of which are incorporated herein by reference in their entirety. For more information on the object and gesture recognition engine 190, see U.S. patent application No. 12/422,661, "Gesture Recognition System Architecture," filed April 13, 2009, which is incorporated herein by reference in its entirety. More information about recognized gestures can be found in U.S. patent application No. 12/391,150, "Standard Gestures," filed February 23, 2009, and U.S. patent application No. 12/474,655, "Gesture Tool," filed May 29, 2009, both of which are incorporated herein by reference in their entirety.
FIG. 5A depicts one embodiment of an AR system 2307 for providing virtual object information associated with a particular location or place of interest. The particular place of interest may include a department store, a furniture store, an automobile dealer, an amusement park, a museum, a zoo, or an individual's work or residence. The virtual object information may include a 3-D mapping of the environment, and/or one or more virtual objects associated with the environment. To allow efficient storage and exchange of virtual objects, one or more virtual objects may be transmitted using a predetermined and standardized file format.
The AR system 2307 includes a personal A/V apparatus 2302 (e.g., an HMD such as the mobile device 19 in FIG. 1) that communicates with one of the supplemental information providers 2304a-e. The supplemental information providers 2304a-e are in communication with a central control and information server 2306, which central control and information server 2306 may include one or more computing devices. Each supplemental information provider 2304 may be co-located with and in communication with one of the one or more sensors 2310a-e. The sensors may include video sensors, depth image sensors, thermal sensors, IR sensors, weight sensors, and motion sensors. In some embodiments, the supplemental information provider may not be paired with any sensor.
Each of the supplemental information providers is placed at a respective location within a particular place of interest. A supplemental information provider may provide virtual object information or a 3-D mapping associated with a particular area within the particular place of interest. The sensors 2310 may acquire information related to different sub-portions of the particular place of interest. For example, in the case of an amusement park, a supplemental information provider 2304 and an accompanying set of one or more sensors 2310 may be placed at each ride or attraction in the amusement park. In the case of a museum, a supplemental information provider 2304 may be located in each section or room of the museum, or at each primary exhibit. The sensors 2310 may be used to determine the number of people waiting in line for a ride (or exhibit) or the degree of congestion of the ride (or exhibit).
In one embodiment, the AR system 2307 may provide guidance to the end user of the personal A/V device 2302 on how to navigate through the place of interest. Additionally, the central control and information server 2306 may indicate which areas of the place of interest are less congested based on information from the sensors 2310. In the case of an amusement park, the system may tell the end user of the personal A/V device 2302 which ride has the shortest line. In the case of a ski mountain, the AR system 2307 may provide an indication to the end user of the personal A/V device 2302 of which lift line is shortest or which slope is less congested. The personal A/V device 2302 may move with the end user around the place of interest and may establish a connection with the nearest supplemental information provider 2304 at any given time.
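The sketch below illustrates one way, under assumed data shapes, that sensor reports could be ranked so the least congested ride can be suggested to the end user; the field names and values are not part of the original disclosure.

```python
# Illustrative sketch only: ranking rides by congestion as reported by the
# per-attraction sensors, so the shortest line can be suggested to the user.
def least_congested(ride_reports):
    """Return the ride with the fewest people currently waiting."""
    return min(ride_reports, key=lambda r: r["people_waiting"])

reports = [
    {"ride": "roller_coaster", "people_waiting": 42},
    {"ride": "ferris_wheel", "people_waiting": 7},
    {"ride": "log_flume", "people_waiting": 19},
]
print(least_congested(reports)["ride"])  # ferris_wheel
```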
Fig. 5B illustrates one example of a system architecture for executing one or more processes and/or software on a supplemental information provider 2304 (such as supplemental information provider 2304a in fig. 5A). The supplemental information provider 2304 may create and provide supplemental event or location data or may provide a service that transmits event or location data from the third party event data provider 918 to the end user's personal a/V device 2302. A plurality of supplemental information providers and third party event data providers may be used with the present techniques.
The supplemental information provider 2304 may include supplemental data for one or more events or locations for which the service is utilized. The event and/or location data may include supplemental event and location data 910 regarding one or more events known to occur within a particular time period, and/or regarding one or more locations that provide a customized experience. The user location and tracking module 912 tracks the individual users that are utilizing the system. A user may be identified by a unique user identifier, location, and/or other identifying element. The information display application 914 allows for customization of both the type of display information to be provided to the end user and the manner in which it is displayed. The information display application 914 may be used in conjunction with an information display application on the personal a/V device 2302. In one embodiment, the display processing occurs at the supplemental information provider 2304. In an alternative embodiment, information is provided to the personal A/V device 2302, such that the personal A/V device 2302 determines which information should be displayed and where within the display the information should be displayed. The authorization application 916 may authenticate a particular personal a/V device before transmitting the supplemental information to the particular personal a/V device.
The supplemental information provider 2304 also includes mapping data 915 and virtual object data 913. The mapping data 915 may include 3-D mappings associated with one or more real-world environments. Virtual object data 913 may include one or more virtual objects associated with one or more real world environments for which mapping data is available. In some embodiments, one or more virtual objects may be defined using a predetermined and standardized file format that supports state-based virtual objects.
Various types of information display applications may be utilized in accordance with the present techniques. Different applications may be provided for different events and locations. Different providers may provide different applications for the same live event. Applications may be differentiated based on the amount of information provided, the amount of interaction allowed, or other characteristics. Applications may provide different types of experiences within an event or location, and different applications may compete for the ability to provide information to users during the same event or at the same location. The application processing may be split between the supplemental information provider 2304 and the personal A/V device 902.
FIGS. 6A and 6B are flow charts describing a set of processes for providing a personalized shopping experience using a personal A/V device, such as personal A/V device 2302 in FIG. 5A. The process of FIG. 6A is used to set up the system such that a personalized shopping experience can be provided when a user enters a particular commercial or sales location. In step 1602 of FIG. 6A, the user is scanned. Examples of scanning the user may include taking still photographs, video images, and/or depth images of the user. The system may also access a profile for the user containing previous scans and details about the user. The images may be used to create information about the physical appearance of the user. In other embodiments, the user may manually enter various measurements. The user's information is stored as one or more objects in the user's profile. In step 1604, the user's home is scanned using still images, video images, and/or depth images. Information about the user's home is stored as one or more objects in the user's profile. In step 1606, the user's belongings are scanned using still images, video images, and/or depth images. The scanned information is stored as one or more objects in the user's profile. In step 1608, any purchases made by the user result in information regarding the purchased items being stored as one or more objects in the user's profile. In one embodiment, additional purchases do not have to be scanned, as the information about the purchased items will already be located in the manufacturer's or retailer's database and can be loaded from that database directly into the user's profile. In one embodiment, the user profile is stored by a server, such as the central control and information server 2306 in FIG. 5A.
FIG. 6B depicts one embodiment of a process for providing a personalized shopping experience. In step 1630, the user with the personal A/V device enters a point-of-sale location. In step 1632, the personal A/V device connects to a local supplemental information provider. In step 1634, the user selects an item while browsing the point-of-sale location through the personal A/V device. In one embodiment, the user may select the item by speaking the name of the item, pointing to the item, touching the item, or using a particular gesture. Other means for selecting items may also be used; for example, one or more microphones, video cameras, and/or depth cameras on board the personal A/V device may be used to sense what the user is selecting.
In step 1636, the personal A/V device forwards the selection to the local supplemental information provider at the sales location. The supplemental information provider will look up the selected item in the database to determine the type of virtual object associated with the item. In one embodiment, the database is local to the supplemental information provider. In another embodiment, the supplemental information provider would access the database over the Internet or other network. In one example, each sales location (e.g., one store in a mall) may have its own server, or a mall may have a global server shared across all stores in the mall.
In step 1638, the supplemental information provider will access the user profile. In one embodiment, the user profile is stored on a server, such as the central control and information server 2306 in FIG. 5A. In step 1640, the supplemental information provider or the central control and information server will identify those objects in the user profile that are related to the item based on the information obtained in step 1636. In step 1642, the objects in the user profile related to the selected item are downloaded.
In step 1644, the personal A/V device determines its orientation using on-board sensors. The personal A/V device also determines the user's gaze. In step 1646, the personal A/V device or the supplemental information provider constructs a graphic that combines an image of the selected item with the identified objects from the user profile. In one embodiment, only one item is selected. In other embodiments, multiple items can be selected, and the graphic can include the multiple items and multiple identified objects. In step 1648, based on the determined orientation and gaze, the graphic combining the image of the selected item with the identified objects is presented in the personal A/V device, as appropriate. In some embodiments, the user may look through the personal A/V device to view the selected item, and the objects will be automatically added to the user's field of view.
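The following rough sketch, under assumed data shapes, shows how steps 1640 and 1646 could be combined: the profile objects relevant to the selected item are picked out and merged with the item into a scene description for the display step. The category names and dictionary layout are illustrative only.

```python
# Illustrative sketch of steps 1640/1646: pick the profile objects related to
# the selected item and build a combined scene description for rendering.
def related_objects(profile_objects, wanted_category):
    """Select profile objects whose category is relevant to the selected item."""
    return [o for o in profile_objects if o["category"] == wanted_category]

def build_scene(selected_item, context_objects):
    """Combine the selected item with the identified profile objects."""
    return {"items": [selected_item], "context_objects": context_objects}

profile = [
    {"name": "living_room", "category": "room"},
    {"name": "old_couch", "category": "furniture"},
]
couch = {"name": "showroom_couch", "category": "furniture"}
scene = build_scene(couch, related_objects(profile, "room"))
print(scene)  # the showroom couch paired with the user's living room
```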
One example implementation of the process of FIG. 6B includes a user viewing a house for sale. The selected item may be one of the rooms in the house or may be the house itself. The object from the user's profile will be the user's furniture. As the user walks around the house (which may be empty), the user's furniture (i.e., the objects of the user that are marked or otherwise identified in the user profile as user furniture) will be projected into the personal a/V device so that the user will see the user's furniture in the house.
Another example implementation of FIG. 6B includes a user visiting a furniture store. The selected item of merchandise may be one or more pieces of furniture in a furniture store. The objects obtained from the user's profile will be rooms in the user's house and furniture in the user's house. For example, if the user is purchasing a couch, the selected merchandise may be one or more couches. The personal a/V device will depict an image of the living room with the selected couch projected into the user's living room so that the user can see how the couch looks in their living room. In some cases, virtual object information associated with one or more pieces of furniture selected by an end user in a furniture store may be stored for future reference. At home, a user may load and view one or more virtual objects associated with one or more pieces of furniture for sale in a furniture store while viewing their living room.
In one embodiment, the system may be used to enhance the purchase of clothing. When the user sees a piece of clothing that he or she is interested in, the personal A/V system may project an image of the user wearing that piece of clothing. Alternatively, the user may look into a mirror to see himself or herself wearing the garment of interest; in this case, the personal A/V system will project an image of the user wearing the piece of clothing into the reflection of the mirror. These examples illustrate how a user may look through a see-through personal A/V apparatus (e.g., mobile device 19 in FIG. 1) while images are projected into the user's field of view, such that the projected images combined with the real world viewed through the personal A/V apparatus create a personalized experience for the user.
In another embodiment, the system is used to customize in-store displays based on what the user is interested in. For example, the window mannequins may all switch to wearing apparel that the user is interested in. Consider the example of a user who wants to purchase black clothing: each store she walks past virtually displays all of its black clothing in its front display or on a storefront mannequin, with the display dedicated to the head mounted display presentation.
In some embodiments, the supplemental information provider may transmit information associated with a particular location to the HMD, the particular location including real and virtual objects appearing at the particular location. The transmitted information may be used to generate an augmented reality environment on the HMD. To allow efficient storage and exchange of virtual objects, the virtual objects may be contained within a predetermined and standardized file format. In one example, a standardized file format may allow portability of virtual object data across different computing platforms or devices. In some cases, the standardized file format may support state-based virtual objects by providing state information (e.g., in the form of a state table) associated with different states of the virtual object. The state associated with the virtual object may be implemented using various data structures including a directed graph and/or a hash table.
The standardized file format may include a holographic file format. One embodiment includes a method for presenting a customized experience to a user of a personal A/V device, comprising: scanning a plurality of items to create a plurality of objects in a holographic file format, one object being created for each item, the holographic file format having a predetermined structure; storing the objects in the holographic file format in association with an identity; connecting the personal A/V device to a local server using a wireless connection; providing the identity from the personal A/V device to the local server; accessing and downloading at least a subset of the objects to the local server using the identity; accessing data in an object based on the predetermined structure of the holographic file format; and adding a virtual graphic to the see-through display of the personal A/V device using the data.
Referring to FIGS. 6A and 6B, one example implementation using a holographic file format is described. In the method of FIG. 6A, the user, the user's house, and the user's belongings may be scanned, and information from the scans may be stored as one or more objects in the user's profile. In one implementation, the information is stored in the profile as one or more objects in a holographic file format. In this way, when the user enters a sales location and an associated supplemental information provider local to the sales location accesses objects in the database, those objects are in the holographic file format. Because the supplemental information provider knows the file format of the objects in advance, the objects can be used efficiently. Using such a holographic file format may allow developers to more easily create systems and platforms that can utilize this data, so that more experiences can be customized using personal A/V devices.
FIG. 7A depicts one embodiment of a virtual object file 702, the virtual object file 702 including virtual object information associated with one or more virtual objects. As depicted, the virtual object file 702 includes virtual object information 701 for generating a virtual object with a virtual object identifier (or ID) "H1278". The virtual object information 701 includes an HMD version field (e.g., HMD system version 1.3.8) for specifying HMD system compatibility, an identification of whether the virtual object is associated with a real object, an owner of the real object associated with the virtual object (e.g., Sally), and a location of the real object (e.g., Sally's kitchen). Other indicia or fields (not shown) may include when and where the virtual object information was obtained, as well as object descriptions such as "house furniture" or "kitchen appliances". The virtual object information 701 may also include an identification of the initial state of the virtual object (e.g., State 0).
The virtual object information 701 includes information for different states, including "State 0" and "State 1". In one example, "State 0" may be associated with the virtual object in a closed state (e.g., a virtual box is closed), and "State 1" may be associated with the virtual object in an open state (e.g., the virtual box is open). In "State 0," the virtual object is associated with a 3-D model (i.e., model_A) and object properties (e.g., mass). The mass property may be used in momentum and velocity calculations when the virtual object interacts with a real object or another virtual object. Other object properties (e.g., object reflectivity and/or transparency) may also be used. In "State 1," the virtual object is associated with a 3-D model (i.e., model_B) that is different from the 3-D model associated with "State 0". In one example, model_B may correspond to a warped version of the virtual object (e.g., the virtual object is bent or warped).
As depicted, "State 0" corresponds to a unique set of trigger events that are different from the trigger events of "State 1". Trigger events associated with a particular state may be used to determine when a state change from the particular state is required. When in "State 0," the virtual object may transition to a different virtual object State (i.e., "State 1") if two requirements are met (i.e., if Trigger1 (Trigger 1) and Trigger2 (Trigger 2) are detected). In one example, Trigger1 may correspond to the detection of a particular gesture, while Trigger2 may correspond to the detection of a particular voice command. In another example, the triggering event may correspond to detection of a particular gesture that occurs concurrently with an eye gaze toward the virtual object. Upon detection of a triggering event, the virtual object will transition to "State 1". It should be noted that the detection of Trigger3 does not cause the virtual object to transition to a different state, but rather only plays the sound associated with the virtual object (e.g., based on sound _ file _ a). In some cases, the trigger event may be detected using eye tracking techniques (such as those utilized with HMD2 with reference to fig. 2C-2D) or gesture recognition and/or audio recognition techniques (such as those utilized with reference to computing system 10 in fig. 4).
While in "State 1," a virtual object may transition back to "State 0" if a unique Trigger event occurs (i.e., if Trigger4 is detected). In one example, Trigger4 may correspond to a particular interaction that is detected to occur with a virtual object (e.g., a virtual object is hit by another virtual object). In this case, the virtual object will transition back to "State 0" upon detection of a triggering event. Likewise, upon detecting a triggering event, a new virtual object (e.g., X1) may be generated or generated. For example, when the virtual box is opened, a new virtual object, such as virtual monster 17d in FIG. 3E, may be created.
In some embodiments, the virtual object information associated with a particular virtual object may include information about the real physical size of the object (i.e., the actual real world size of the real object on which the particular virtual object is based). The virtual object information may also specify physical characteristics of a particular virtual object, such as whether the particular virtual object is deformable or squeezable. The physical characteristics may also include a weight or mass associated with the particular virtual object. The virtual object information may also specify lighting attributes associated with a particular virtual object, such as the color of any light emitted (or reflected) from the particular virtual object, as well as the translucency and reflectivity of the particular virtual object. The virtual object information may also specify a sound associated with a particular virtual object when interacting with the particular virtual object. In some embodiments, the virtual object information regarding the lighting attributes, the interactive sound attributes, and the physical characteristics may depend on the particular state of the virtual object.
FIG. 7B is a flow chart describing one embodiment of a process for generating an augmented reality environment. The augmented reality environment may utilize one or more state-based virtual objects. In one embodiment, the process of FIG. 7B is performed by a mobile device, such as mobile device 19 in FIG. 1.
In step 710, a supplemental information provider associated with the real-world environment is identified. The supplemental information provider may be detected and identified once it is within a certain distance of the HMD, or it may be identified via a pointer or network address associated with the supplemental information provider. In step 712, information transfer with the supplemental information provider is negotiated. The information transfer may be performed using a particular protocol and may include the transfer of particular types of files (e.g., virtual object files using a holographic file format). The HMD and the supplemental information provider may also negotiate the manner in which the information transfer will occur and the type of information that will be transferred. In one example, the HMD may provide location information associated with the HMD to the supplemental information provider, and the supplemental information provider may transmit one or more files to the HMD that provide virtual object information associated with the location information.
In step 714, a 3-D map associated with the real-world environment is obtained from the supplemental information provider. In step 716, one or more virtual objects are obtained. The one or more virtual objects may be acquired via virtual object information supplied by the supplemental information provider. In some cases, one or more virtual objects may be pre-stored on the HMD and may be pointed to by virtual object information obtained from the supplemental information provider. The one or more virtual objects may include a first virtual object associated with a plurality of different states. Each state of the plurality of different states may correspond to a unique set of trigger events that are different from those of any other state. The set of trigger events associated with a particular state may be used to determine when a state change from the particular state is required.
In step 718, the first virtual object is set to a first state of a plurality of different states. In step 720, one or more other states associated with the first virtual object among the plurality of different states may be predicted. In one example, a trigger probability may be determined for each of one or more other states relative to the first state. The trigger probability provides a probability or likelihood of reaching another state from the current state of the virtual object. For example, if the trigger probability associated with the second state is above a certain threshold, the second state of the plurality of different states may be predicted. If one state is predicted, virtual object information associated with the predicted state may be pre-fetched and stored on the HMD for future use.
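A minimal sketch of the prediction and prefetch idea in step 720 follows, assuming simple data shapes: any state reachable from the current state whose trigger probability exceeds a threshold has its virtual object data fetched ahead of time. The 0.9 threshold and the fetch/cache names are assumptions for the example.

```python
# Sketch of step 720: predict likely next states and prefetch their assets.
PREFETCH_THRESHOLD = 0.9   # illustrative value; the text only says "a certain threshold"

def predict_states(transitions, threshold=PREFETCH_THRESHOLD):
    """transitions: list of (next_state, trigger_probability) from the current state."""
    return [state for state, p in transitions if p >= threshold]

def prefetch(states, fetch_fn, cache):
    """Fetch and cache virtual object data for each predicted state."""
    for state in states:
        if state not in cache:
            cache[state] = fetch_fn(state)
    return cache

cache = {}
predicted = predict_states([("State1", 0.95), ("State2", 0.40)])
prefetch(predicted, lambda s: f"assets_for_{s}", cache)
print(predicted, cache)   # ['State1'] {'State1': 'assets_for_State1'}
```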
In step 722, it is determined whether a first triggering event associated with a second state of the plurality of different states has been detected. In one embodiment, the first trigger event is associated with detecting a particular gesture that occurs concurrently with an eye gaze toward the first virtual object as perceived using the HMD. In some cases, a first trigger event may be detected if an interaction from another virtual object or a real object is above a particular virtual force threshold. The triggering event (or state change requirement) may also be based on physiological characteristics of the end user wearing the HMD. For example, heart rate information and eye movement and/or pupil dilation associated with an end user may be used to infer that the end user is afraid enough to warrant a triggering event.
In step 724, the first virtual object is set to the second state. In step 726, one or more new trigger events are obtained. The one or more new trigger events may be obtained from the supplemental information provider, or may have been pre-stored on the HMD prior to setting the first virtual object to the second state. The one or more new trigger events may be loaded onto the HMD such that the HMD looks for and detects interactions associated with the one or more new trigger events, rather than the one or more trigger events associated with the first state. In step 728, one or more virtual objects are displayed such that the one or more virtual objects are perceived to exist within the real-world environment. In one example, the one or more virtual objects are displayed using an HMD.
FIG. 7C is a flow chart describing one embodiment of a process for predicting a future virtual object state. The process described in FIG. 7C is one example of a process for implementing step 720 in FIG. 7B. In one embodiment, the process of FIG. 7C is performed by a mobile device, such as mobile device 19 in FIG. 1.
In step 730, one or more trigger events associated with a first state of a virtual object are identified. In one embodiment, the HMD generates a state machine in which the current state of the first virtual object may be transitioned into a different state based on one or more trigger events associated with the current state. In step 731, one or more trigger probabilities associated with the one or more trigger events are determined. The one or more trigger probabilities may be determined based on a history of the end user using the HMD, a generic probability associated with trigger events that are commonly detected (i.e., not end user-specific), and/or a detection rate associated with a particular gesture during runtime of an augmented reality application running on the HMD. In some cases, virtual object state prediction may be performed by a server (such as a supplemental information provider within a particular distance of the HMD).
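One plausible way, offered here only as an assumption, to combine the three signals named in step 731 into a single trigger probability is a weighted blend of the end user's own history, a generic (non-user-specific) rate, and the gesture's runtime detection rate; the weights and rates below are illustrative.

```python
# Sketch of step 731 under assumed inputs: blend user history, a generic
# population rate, and the runtime detection rate into one trigger probability.
def trigger_probability(user_rate, generic_rate, runtime_rate,
                        weights=(0.5, 0.3, 0.2)):
    """Weighted blend of the three rates; the weights are illustrative."""
    w_user, w_generic, w_runtime = weights
    return w_user * user_rate + w_generic * generic_rate + w_runtime * runtime_rate

# e.g., this user opens the virtual box 80% of the time, the general population
# 60% of the time, and the opening gesture is currently detected reliably.
p = trigger_probability(user_rate=0.8, generic_rate=0.6, runtime_rate=0.95)
print(round(p, 2))  # 0.77
```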
In step 732, a second state of the virtual object is predicted based on the one or more trigger probabilities determined in step 731. In one embodiment, the second state is predicted if the trigger probability associated with the second state is above a certain threshold (e.g., a 90% chance that a trigger event associated with the second state will be triggered). In step 733, one or more second virtual objects associated with the second state are obtained. In step 734, the one or more second virtual objects are stored. The one or more second virtual objects may be stored or cached on the HMD and retrieved if the virtual object transitions to the second state. In step 735, the one or more second virtual objects are output. In one embodiment, the one or more second virtual objects may be transmitted from the supplemental information provider to the HMD. In step 736, an identification of the second state is output. In one embodiment, the identification of the second state may be transmitted from the supplemental information provider to the HMD.
FIG. 7D is a flow chart describing one embodiment of a process for negotiating information transfer with a supplemental information provider. The process described in FIG. 7D is one example of a process for implementing step 712 in FIG. 7B. In one embodiment, the process of FIG. 7D is performed by a mobile device, such as mobile device 19 in FIG. 1.
In step 740, an identification of a particular holographic file format is transmitted to the supplemental information provider. The particular holographic file format may include a standardized file format that includes virtual object information associated with one or more virtual objects. In step 741, a data compression standard is transmitted to the supplemental information provider. The data compression standard may be used to reduce the size of files transmitted from the supplemental information provider to the HMD. In step 742, a response is received from the supplemental information provider as to whether the particular holographic file format and the data compression standard are supported. In one embodiment, the HMD may receive the response and determine whether to establish an information transfer with the supplemental information provider. In step 743, an information transfer with the supplemental information provider is established based on the response.
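The following is a minimal sketch of the negotiation in FIG. 7D. The message contents, format identifier, and the provider's capability list are assumptions; only the sequence (send format and compression standard, receive a support response, establish the transfer) follows the text above.

```python
# Sketch of steps 740-743: negotiate format and compression with the provider.
def negotiate(provider_capabilities, holographic_format, compression):
    # Steps 740-741: transmit the requested file format and compression standard.
    request = {"format": holographic_format, "compression": compression}
    # Step 742: the provider responds with whether both are supported.
    supported = (request["format"] in provider_capabilities["formats"]
                 and request["compression"] in provider_capabilities["compression"])
    # Step 743: establish the information transfer only if supported.
    return {"established": supported, "request": request}

provider = {"formats": ["holo/1.0"], "compression": ["gzip", "lz4"]}
print(negotiate(provider, "holo/1.0", "gzip"))  # {'established': True, ...}
```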
FIG. 7E is a flow chart describing one embodiment of a process for obtaining one or more virtual objects from a supplemental information provider. The process described in FIG. 7E is one example of a process for implementing step 716 in FIG. 7B. In one embodiment, the process of FIG. 7E is performed by a mobile device, such as mobile device 19 in FIG. 1.
In step 750, one or more environmental features within the real-world environment are identified. The one or more environmental characteristics may include a location associated with the real-world environment (e.g., a particular amusement park or museum), a terrain type associated with the real-world environment (e.g., outdoors or crowded space), and/or a weather classification associated with the real-world environment (e.g., cold weather or rain). In step 751, a user profile including a user history is obtained. The user profile may describe particular characteristics of an end user of the HMD, such as an age of the end user. The user profile may specify user preferences associated with the augmented reality environment, such as a limit on the number of virtual objects displayed at a particular time, or a preference for the type of virtual objects displayed on the HMD. The user profile may also specify permissions associated with what types of virtual objects may be displayed. For example, a user profile may be associated with a child and may prevent the display of virtual objects associated with a particular type of advertisement.
In step 752, the one or more environmental characteristics and the user profile are transmitted to a supplemental information provider. The supplemental information provider may be detected within a particular distance of the HMD. The supplemental information provider may provide virtual objects associated with the real-world environment. For example, the real world environment may include a ride of an amusement park or an exhibition of a museum. At step 753, one or more virtual objects are obtained from a supplemental information provider based on the one or more environmental characteristics and the user profile.
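An illustrative sketch of steps 752-753 follows, with assumed field names: the provider filters its candidate virtual objects by the environmental features and by the preferences and permissions in the user profile (for example, a child's profile blocking advertisement objects and capping how many objects are shown) before returning them.

```python
# Sketch of steps 752-753 under assumed data shapes: filter candidate virtual
# objects by environment and by user-profile preferences/permissions.
def select_virtual_objects(candidates, environment, profile):
    selected = []
    for obj in candidates:
        if obj["location"] != environment["location"]:
            continue
        if obj.get("is_advertisement") and not profile.get("allow_ads", True):
            continue  # e.g., a child's profile can block advertisement objects
        selected.append(obj)
    # Respect any cap on how many virtual objects are shown at one time.
    return selected[: profile.get("max_objects", len(selected))]

candidates = [
    {"name": "dino_exhibit_guide", "location": "museum", "is_advertisement": False},
    {"name": "gift_shop_promo", "location": "museum", "is_advertisement": True},
]
env = {"location": "museum", "terrain": "indoors", "weather": "n/a"}
child_profile = {"allow_ads": False, "max_objects": 5}
print(select_virtual_objects(candidates, env, child_profile))
```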
FIG. 7F is a flow chart describing one embodiment of a process for obtaining one or more virtual objects. The process described in FIG. 7F is one example of a process for implementing step 716 in FIG. 7B. In one embodiment, the process of FIG. 7F is performed by a mobile device, such as mobile device 19 in FIG. 1.
In step 760, real world objects are identified within a particular environment. The HMD may use object or pattern recognition techniques to identify real-world objects. In step 761, a virtual object based on the identification of the real-world object is acquired. In one embodiment, the virtual object is obtained from the supplemental information provider by supplying an identification of the real world object to the supplemental information provider. In some cases, if there is not an exact match with the identification, more than one virtual object associated with the identification may be provided to the HMD.
In step 762, a 3-D model of the real-world object is generated based on the scan of the real-world object. The scanning of the real-world object may be performed by the HMD. In step 763, a closed surface associated with the 3-D model of the real world object is detected. In step 764, the 3-D model created in step 762 is used to validate the virtual object obtained in step 761. The virtual object may be verified to check for a one-to-one correspondence between the shape of the virtual object and the shape of the 3-D model.
In step 765, the virtual object is automatically tagged based on the particular environment by attaching metadata to the virtual object. The metadata may be included within the virtual object information associated with the virtual object. In one embodiment, the virtual object may be tagged as being owned by the end user of the HMD. The virtual object may also be tagged as being located within the end user's house (or a portion thereof). The virtual object may be automatically tagged based on information stored in an end user profile, which is stored on the HMD. The end user profile may provide identification information associated with the end user, including the end user's name, the end user's work location, and the end user's house location. In step 766, the virtual object is stored. The virtual object may be stored in non-volatile memory on the HMD. In step 767, the virtual object is output. The virtual object information may be retrieved from non-volatile memory on the HMD and used to generate one or more images of the virtual object.
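A sketch of the automatic tagging in step 765 is shown below; the metadata keys and the profile fields are assumptions. The virtual object is annotated with ownership and location information drawn from the end user profile stored on the HMD.

```python
# Sketch of step 765 under assumed field names: attach ownership/location
# metadata from the end user profile to the virtual object.
def auto_tag(virtual_object, end_user_profile, environment_name):
    virtual_object.setdefault("metadata", {})
    virtual_object["metadata"].update({
        "owner": end_user_profile["name"],
        "tagged_location": environment_name,
        # Mark the object as belonging to the user's house if that is where
        # it was scanned.
        "at_home": environment_name == end_user_profile.get("home_location"),
    })
    return virtual_object

profile = {"name": "Sally", "home_location": "Sally's house"}
lamp = {"id": "H2001", "model": "lamp_model"}
print(auto_tag(lamp, profile, "Sally's house")["metadata"])
```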
FIG. 7G is a flow chart describing one embodiment of a process for displaying one or more virtual objects. The process described in FIG. 7G is one example of a process for implementing step 728 in FIG. 7B. In one embodiment, the process of FIG. 7G is performed by a mobile device, such as mobile device 19 in FIG. 1.
In step 780, a 3-D mapping of the environment is obtained. The 3-D mapping may include one or more image descriptors. In step 781, one or more viewpoint images of the environment are acquired. The one or more viewpoint images may be associated with a particular pose of a mobile device, such as an HMD. In step 782, one or more locations associated with the one or more virtual objects are determined based on the 3-D mapping obtained in step 780. In one embodiment, the one or more virtual objects are registered with respect to the 3-D mapping. In step 783, at least a subset of the one or more image descriptors is detected within the one or more viewpoint images. The one or more image descriptors may be detected by applying various image processing methods, such as object recognition, feature detection, corner detection, blob detection, and edge detection methods, to the one or more viewpoint images. The one or more image descriptors may be used as landmarks in determining a particular pose, position, and/or orientation with respect to the 3-D mapping. An image descriptor may include color and/or depth information associated with a particular object (e.g., a red apple) or a portion of a particular object located within a particular environment (e.g., the top of a red apple).
In step 784, a six degree of freedom (6DOF) pose including information associated with a position and orientation of the mobile device within the environment may be determined. In step 785, one or more images associated with the one or more virtual objects are rendered based on the 6DOF pose determined in step 784. In step 786, the one or more images are displayed such that the one or more virtual objects are perceived to exist within the environment. More information on registering virtual objects and rendering corresponding images in an augmented reality environment can be found in U.S. patent application 13/152,220, "Distributed Asynchronous Localization and Mapping for Augmented Reality," which is incorporated herein by reference in its entirety.
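The following simplified sketch, under assumed conventions, illustrates steps 782-786: each virtual object has a position registered in the 3-D map, and the 6DOF pose (here a rotation matrix R and translation t of the device in the map frame) is used to express that position in the device's frame before rendering. The pose representation and numbers are illustrative only.

```python
# Sketch of using a 6DOF pose to bring a map-registered virtual object
# position into the device frame for rendering (steps 782-786).
import numpy as np

def world_to_device(point_world, pose):
    """Transform a map-registered point into the device frame."""
    R, t = pose["R"], pose["t"]          # device-to-world rotation and translation
    return R.T @ (point_world - t)       # apply the inverse pose

# Device at (1, 0, 0) in the map, aligned with the map axes (identity rotation).
pose_6dof = {"R": np.eye(3), "t": np.array([1.0, 0.0, 0.0])}
virtual_object_position = np.array([3.0, 0.0, 2.0])   # from the 3-D mapping
print(world_to_device(virtual_object_position, pose_6dof))  # [2. 0. 2.]
```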
One embodiment of the disclosed technology includes obtaining one or more virtual objects including a first virtual object. The first virtual object is associated with a first state and a second state different from the first state. The first state is associated with one or more triggering events. A first trigger event of the one or more trigger events is associated with the second state. The method also includes setting the first virtual object to a first state, detecting a first trigger event, setting the first virtual object to a second state in response to detecting the first trigger event, and displaying one or more images associated with the first virtual object in the second state on the mobile device. One or more images are displayed such that the first virtual object in the second state is perceived to exist within the real-world environment.
One embodiment of the disclosed technology includes obtaining one or more virtual objects from a supplemental information provider. The one or more virtual objects include a first virtual object. The first virtual object is associated with a first state and a second state different from the first state. The first state is associated with a first 3-D model and the second state is associated with a second 3-D model different from the first 3-D model. The method also includes setting the first virtual object to a first state, predicting a second state, retrieving one or more second virtual objects in response to predicting the second state, detecting a first trigger event of the one or more trigger events associated with the second state, setting the first virtual object to the second state in response to detecting the first trigger event, and displaying one or more images associated with the first virtual object in the second state on the mobile device. One or more images are displayed such that the first virtual object in the second state is perceived to exist within the real-world environment.
The disclosed technology may be used with a variety of computing systems. FIGS. 8-10 provide examples of various computing systems that can be used to implement embodiments of the disclosed technology.
FIG. 8 is a block diagram of an embodiment of a gaming and media system 7201 (one example of computing environment 12 in FIG. 3B). The console 7203 has a Central Processing Unit (CPU) 7200 and a memory controller 7202 that facilitates processor access to various memories, including a flash Read Only Memory (ROM) 7204, a Random Access Memory (RAM) 7206, a hard disk drive 7208, and a portable media drive 7107. In one implementation, CPU 7200 includes a level 1 cache 7210 and a level 2 cache 7212, which are used to temporarily store data and thus reduce the number of memory access cycles made to hard disk drive 7208, thereby improving processing speed and throughput.
CPU 7200, memory controller 7202, and various memory devices are interconnected via one or more buses (not shown). The one or more buses may include one or more of the following: serial and parallel buses, a memory bus, a peripheral bus, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures can include an Industry Standard Architecture (ISA) bus, a Micro Channel Architecture (MCA) bus, an Enhanced ISA (EISA) bus, a Video Electronics Standards Association (VESA) local bus, and a Peripheral Component Interconnect (PCI) bus.
In one embodiment, CPU 7200, memory controller 7202, ROM 7204, and RAM 7206 are integrated onto a common module 7214. In this embodiment, ROM 7204 is configured as a flash ROM that is connected to memory controller 7202 via a PCI bus and a ROM bus (neither of which are shown). RAM 7206 is configured as multiple Double Data Rate Synchronous Dynamic RAM (DDR SDRAM) modules that are independently controlled by memory controller 7202 via separate buses (not shown). The hard disk drive 7208 and portable media drive 7107 are shown connected to the memory controller 7202 via the PCI bus and an AT Attachment (ATA) bus 7216. However, in other embodiments, different types of dedicated data bus structures may be applied in the alternative.
The three-dimensional graphics processing unit 7220 and the video encoder 7222 form a video processing pipeline for high speed and high resolution (e.g., high definition) graphics processing. Data is transferred from the graphics processing unit 7220 to the video encoder 7222 via a digital video bus (not shown). An audio processing unit 7224 and an audio codec (coder/decoder) 7226 form a corresponding audio processing pipeline for multi-channel audio processing of various digital audio formats. Audio data is transmitted between the audio processing unit 7224 and the audio codec 7226 via a communication link (not shown). The video and audio processing pipelines output data to an A/V (audio/video) port 7228 for transmission to a television or other display. In the illustrated implementation, video and audio processing components 7220 and 7228 are mounted on module 7214.
FIG. 8 shows a module 7214 that includes a USB host controller 7230 and a network interface 7232. USB host controller 7230 communicates with CPU 7200 and memory controller 7202 via a bus (not shown), and serves as a host for peripheral controllers 7205(1)-7205(4). The network interface 7232 provides access to a network (e.g., the Internet, a home network, etc.) and may be any of a wide variety of wired or wireless interface components, including an Ethernet card, a modem, a wireless access card, a Bluetooth module, a cable modem, and the like.
In the implementation depicted in FIG. 8, console 7203 includes controller support subassembly 7240 for supporting four controllers 7205(1)-7205(4). The controller support subassembly 7240 includes any hardware and software components necessary to support wired and wireless operation with external control devices such as, for example, media and game controllers. The front panel I/O subassembly 7242 supports multiple functions of the power button 7213, the eject button 7215, as well as any LEDs (light emitting diodes) or other indicators exposed on the outer surface of the console 7203. The subassemblies 7240 and 7242 communicate with the module 7214 through one or more cable assemblies 7244. In other implementations, the console 7203 can include additional controller subassemblies. The illustrated embodiment also shows an optical I/O interface 7235 configured to send and receive signals (e.g., from a remote control 7290) that can be communicated to module 7214.
MUs 7241(1) and 7241(2) are shown as being connectable to MU ports "A" 7231(1) and "B" 7231(2), respectively. Additional MUs (e.g., MUs 7241(3)-7241(6)) are shown connectable to controllers 7205(1) and 7205(3), i.e., two MUs per controller. Controllers 7205(2) and 7205(4) may also be configured to receive MUs (not shown). Each MU 7241 provides additional storage on which games, game parameters, and other data may be stored. An additional memory device, such as a portable USB device, may be used in place of the MU. In some implementations, the other data can include any of a digital game component, an executable gaming application, an instruction set for expanding a gaming application, and a media file. When inserted into console 7203 or a controller, MU 7241 can be accessed by memory controller 7202. The system power supply module 7250 provides power to the components of the gaming system 7201. A fan 7252 cools the circuitry within console 7203.
An application 7260 including machine instructions is stored on hard disk drive 7208. When console 7203 is powered on, various portions of application 7260 are loaded into RAM 7206 and/or caches 7210 and 7212 for execution on CPU 7200. Other applications may also be stored on hard disk drive 7208 for execution on CPU 7200.
Gaming and media system 7201 may be used as a standalone system by simply connecting the system to a monitor, television, video projector, or other display device. In this standalone mode, gaming and media system 7201 allows one or more players to play games or enjoy digital media (e.g., watching movies or listening to music). However, with the integration of broadband connectivity made possible through network interface 7232, gaming and media system 7201 may also be operated as a participant in a larger network gaming community.
FIG. 9 is a block diagram of one embodiment of a mobile device 8300, such as mobile device 19 in FIG. 1. Mobile devices may include laptop computers, pocket computers, mobile phones, personal digital assistants, and handheld media devices that have integrated wireless receiver/transmitter technology.
The mobile device 8300 includes one or more processors 8312 and memory 8310. Memory 8310 includes applications 8330 and non-volatile storage 8340. The memory 8310 may be any variety of memory storage media types including non-volatile and volatile memory. The mobile device operating system handles the different operations of the mobile device 8300 and may contain user interfaces for operations, such as making and receiving phone calls, text messaging, checking voicemail, and the like. The applications 8330 can be any variety of programs, such as a camera application for photos and/or videos, an address book, a calendar application, a media player, an internet browser, games, an alarm application, and other applications. The non-volatile storage component 8340 in the memory 8310 may contain data such as music, photos, contact data, scheduling data, and other files.
The one or more processors 8312 are also in communication with: an RF transmitter/receiver 8306, which in turn is coupled to an antenna 8302; an infrared transmitter/receiver 8308; a Global Positioning Service (GPS) receiver 8365; and a movement/orientation sensor 8314, which may include an accelerometer and/or magnetometer. The RF transmitter/receiver 8306 may enable wireless communication via various wireless technology standards, such as Bluetooth or the IEEE 802.11 standards. Accelerometers have been incorporated into mobile devices to implement applications such as intelligent user interface applications that let a user input commands through gestures, and orientation applications that can automatically change the display from portrait to landscape when the mobile device is rotated. An accelerometer may be provided, for example, by a micro-electromechanical system (MEMS), which is a tiny mechanical device (micron-scale) built on a semiconductor chip. Acceleration direction, as well as orientation, vibration, and shock, can be sensed. The one or more processors 8312 are also in communication with a ringer/vibrator 8316, a user interface keypad/screen 8318, a speaker 8320, a microphone 8322, a camera 8324, a light sensor 8326, and a temperature sensor 8328. The user interface keypad/screen may include a touch sensitive screen display.
The one or more processors 8312 control the transmission and reception of wireless signals. During a transmit mode, the one or more processors 8312 provide voice signals or other data signals from microphone 8322 to RF transmitter/receiver 8306. The transmitter/receiver 8306 transmits signals through the antenna 8302. The ringer/vibrator 8316 is used to signal an incoming call, text message, calendar reminder, alarm clock reminder, or other notification to the user. During a receiving mode, the RF transmitter/receiver 8306 receives a voice signal or a data signal from a remote station through the antenna 8302. The received voice signals are provided to the speaker 8320 while other received data signals are processed appropriately.
In addition, a physical connector 8388 may be used to connect the mobile device 8300 to an external power source, such as an AC adapter or powered docking cradle, to recharge the battery 8304. The physical connector 8388 may also be used as a data connection to an external computing device. The data connection allows operations such as synchronizing mobile data with computing data on another device.
FIG. 10 is a block diagram of an embodiment of a computing system environment 2200, such as computing system 10 in FIG. 3B. Computing system environment 2200 includes a general-purpose computing device in the form of a computer 2210. Components of computer 2210 may include, but are not limited to, a processing unit 2220, a system memory 2230, and a system bus 2221 that couples various system components including the system memory 2230 to the processing unit 2220. The system bus 2221 may be any of several types of bus structures including a memory bus, a peripheral bus, and a local bus using any of a variety of bus architectures. By way of example, and not limitation, such architectures include Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus.
Computer 2210 typically includes a variety of computer readable media. Computer readable media can be any available media that can be accessed by computer 2210 and includes both volatile and nonvolatile media, removable and non-removable media. By way of example, and not limitation, computer readable media may comprise computer storage media. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, Digital Versatile Disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by computer 2210. Combinations of any of the above should also be included within the scope of computer readable media.
The system memory 2230 includes computer storage media in the form of volatile and/or nonvolatile memory such as Read Only Memory (ROM) 2231 and Random Access Memory (RAM) 2232. A basic input/output system 2233 (BIOS), containing the basic routines that help to transfer information between elements within computer 2210, such as during start-up, is typically stored in ROM 2231. RAM 2232 typically contains data and/or program modules that are immediately accessible to and/or presently being operated on by processing unit 2220. By way of example, and not limitation, fig. 10 illustrates operating system 2234, application programs 2235, other program modules 2236, and program data 2237.
Computer 2210 may also include other removable/non-removable, volatile/nonvolatile computer storage media. By way of example only, FIG. 10 illustrates a hard disk drive 2241 that reads from or writes to non-removable, nonvolatile magnetic media, a magnetic disk drive 2251 that reads from or writes to a removable, nonvolatile magnetic disk 2252, and an optical disk drive 2255 that reads from or writes to a removable, nonvolatile optical disk 2256 such as a CD ROM or other optical media. Other removable/non-removable, volatile/nonvolatile computer storage media that can be used in the exemplary operating environment include, but are not limited to, magnetic tape cassettes, flash memory cards, digital versatile disks, digital video tape, solid state RAM, solid state ROM, and the like. The hard disk drive 2241 is typically connected to the system bus 2221 through a non-removable memory interface such as interface 2240, and magnetic disk drive 2251 and optical disk drive 2255 are typically connected to the system bus 2221 by a removable memory interface, such as interface 2250.
The drives and their associated computer storage media discussed above and illustrated in fig. 10, provide storage of computer readable instructions, data structures, program modules and other data for the computer 2210. In fig. 10, for example, hard disk drive 2241 is illustrated as storing operating system 2244, application programs 2245, other program modules 2246, and program data 2247. Note that these components can either be the same as or different from operating system 2234, application programs 2235, other program modules 2236, and program data 2237. Operating system 2244, application programs 2245, other program modules 2246, and program data 2247 are given different numbers here to illustrate that, at a minimum, they are different copies. A user may enter commands and information into computer 2210 through input devices such as a keyboard 2262 and pointing device 2261, commonly referred to as a mouse, trackball or touch pad. Other input devices (not shown) may include a microphone, joystick, game pad, satellite dish, scanner, or the like. These and other input devices are often connected to the processing unit 2220 through a user input interface 2260 that is coupled to the system bus, but may be connected by other interface and bus structures, such as a parallel port, game port or a Universal Serial Bus (USB). A monitor 2291 or other type of display device is also connected to the system bus 2221 via an interface, such as a video interface 2290. In addition to the monitor, computers may also include other peripheral output devices such as speakers 2297 and printer 2296, which may be connected through an output peripheral interface 2295.
Computer 2210 can operate in a networked environment using logical connections to one or more remote computers, such as remote computer(s) 2280. The remote computer 2280 may be a personal computer, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to the computer 2210, although only a memory storage device 2281 has been illustrated in fig. 10. The logical connections depicted in FIG. 10 include a Local Area Network (LAN) 2271 and a Wide Area Network (WAN) 2273, but may also include other networks. Such networking environments are commonplace in offices, enterprise-wide computer networks, intranets and the Internet.
When used in a LAN networking environment, the computer 2210 is connected to the LAN 2271 through a network interface or adapter 2270. When used in a WAN networking environment, the computer 2210 typically includes a modem 2272 or other means for establishing communications over the WAN 2273, such as the Internet. The modem 2272, which may be internal or external, may be connected to the system bus 2221 via the user input interface 2260, or other appropriate mechanism. In a networked environment, program modules depicted relative to the computer 2210, or portions thereof, may be stored in the remote memory storage device. By way of example, and not limitation, FIG. 10 illustrates remote application programs 2285 as residing on memory device 2281. It will be appreciated that the network connections shown are exemplary and other means of establishing a communications link between the computers may be used.
The disclosed technology is operational with numerous other general purpose or special purpose computing system environments or configurations. Examples of well known computing systems, environments, and/or configurations that may be suitable for use with the technology include, but are not limited to, personal computers, server computers, hand-held or laptop devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
The disclosed technology may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, software and program modules as described herein include routines, programs, objects, components, data structures, and other types of structures that perform particular tasks or implement particular abstract data types. Hardware or a combination of hardware and software may be substituted for the software modules described herein.
The disclosed technology may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
For purposes of this document, each process associated with the disclosed technology may be performed continuously and by one or more computing devices. Each step in the process may be performed by the same or different computing devices as those used in the other steps, and each step need not be performed by a single computing device.
For purposes of this document, references in the specification to "an embodiment," "one embodiment," "some embodiments," or "another embodiment" are used to describe different embodiments and do not necessarily refer to the same embodiment.
For purposes of this document, a connection may be a direct connection or an indirect connection (e.g., via another party).
For purposes of this document, the term "set" of objects refers to a "set" of one or more of the objects.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.
Claims (10)
1. A method of generating an augmented reality environment using a mobile device, comprising:
obtaining a particular file (716) in a predetermined file format, the particular file including information associated with one or more virtual objects, the particular file including state information for each of the one or more virtual objects, the one or more virtual objects including a first virtual object, the first virtual object being associated with a first state and a second state different from the first state, the first state being associated with one or more trigger events, a first trigger event of the one or more trigger events being associated with the second state;
setting the first virtual object to the first state (718);
detecting the first trigger event (722);
setting (724) the first virtual object to the second state in response to detecting the first trigger event, the setting the first virtual object to the second state comprising acquiring one or more new trigger events that are different from the one or more trigger events; and
generating and displaying (728) one or more images associated with the first virtual object in the second state on the mobile device, the one or more images being displayed such that the first virtual object in the second state is perceived as existing within a real-world environment.
2. The method of claim 1, wherein:
the first state is associated with a first 3-D model of the first virtual object; and
the second state is associated with a second 3-D model of the first virtual object that is different from the first 3-D model, the one or more images including a rendered version of the second 3-D model.
3. The method of any one of claims 1-2, further comprising:
displaying, on the mobile device, one or more other images associated with the first virtual object in the first state, the one or more other images including a rendered version of the first 3-D model, the one or more other images being displayed such that the first virtual object in the first state is perceived as being present within the real-world environment, the displaying, on the mobile device, the one or more other images associated with the first virtual object in the first state being performed prior to detecting the first trigger event.
4. The method of any one of claims 1-3, wherein:
the first trigger event comprises performance of a particular gesture concurrent with an eye gaze toward the first virtual object; and
the mobile device includes a see-through HMD.
5. The method of any one of claims 1-4, wherein:
the second state is associated with the one or more new trigger events that are different from the one or more trigger events.
6. The method of claim 1, further comprising:
predicting the second state prior to setting the first virtual object to the second state; and
in response to said predicting said second state prior to setting said first virtual object to said second state, retrieving one or more second virtual objects.
7. The method of claim 6, wherein:
the predicting the second state includes determining one or more trigger probabilities associated with each of the one or more trigger events.
8. An electronic device for generating an augmented reality environment, comprising:
one or more processors (146) that establish a connection with a supplemental information provider, the one or more processors transmitting a particular identity associated with one or more virtual objects to the supplemental information provider, the one or more processors receiving virtual object information associated with the one or more virtual objects based on the particular identity, the virtual object information contained within a particular file of a particular holographic file format, the particular holographic file format comprising a predetermined structure, the one or more virtual objects comprising a first virtual object, the one or more processors determining a pose associated with the electronic device, the one or more processors generating one or more images associated with the first virtual object based on the pose; and
a see-through display (150) that displays the one or more images associated with the first virtual object, the one or more images being displayed such that the first virtual object is perceived as existing within a real-world environment in which the electronic device is present.
9. The electronic device of claim 8, wherein:
the first virtual object is associated with a first state and a second state different from the first state, the first state is associated with one or more trigger events, a first trigger event of the one or more trigger events is associated with the second state, the one or more processors set the first virtual object to the first state, the one or more processors detect the first trigger event, the one or more processors set the first virtual object to the second state in response to detection of the first trigger event, the one or more processors acquire one or more new trigger events from the supplemental information provider that are different from the one or more trigger events in response to detection of the first trigger event, the one or more images are associated with the first virtual object in the second state, the one or more images are displayed such that the first virtual object in the second state is perceived as being present within the real-world environment.
10. The electronic device of any of claims 8-9, wherein:
the first state is associated with a first 3-D model of the first virtual object; and
the second state is associated with a second 3-D model of the first virtual object that is different from the first 3-D model.
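For illustration only, the following minimal Python sketch shows one way the state-based virtual object recited in claim 1 and the trigger-probability prediction of claims 6-7 could be modeled. The class and field names (StateBasedVirtualObject, TriggerEvent, ObjectState, model_id) and the example asset names are assumptions made for this sketch; they are not part of the claimed holographic file format or of any disclosed implementation.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional


@dataclass
class TriggerEvent:
    """An event (e.g., a gesture concurrent with eye gaze) that causes a state change."""
    name: str
    target_state: str          # state entered when this trigger fires
    probability: float = 0.0   # estimated likelihood of this trigger firing next


@dataclass
class ObjectState:
    """One state of a virtual object: the 3-D model to render plus the triggers that leave it."""
    name: str
    model_id: str
    triggers: List[TriggerEvent] = field(default_factory=list)


class StateBasedVirtualObject:
    """A virtual object whose appearance is driven by discrete states and trigger events."""

    def __init__(self, states: Dict[str, ObjectState], initial_state: str):
        self.states = states
        # "setting the first virtual object to the first state"
        self.current = states[initial_state]

    def on_trigger(self, event_name: str) -> Optional[str]:
        """If the detected event matches a trigger of the current state, transition
        to the associated state and return the 3-D model to render; otherwise None."""
        for trigger in self.current.triggers:
            if trigger.name == event_name:
                # Entering the new state also makes its own (new) trigger events active.
                self.current = self.states[trigger.target_state]
                return self.current.model_id
        return None

    def predict_next_states(self, threshold: float = 0.5) -> List[str]:
        """Predict likely next states from trigger probabilities so their assets
        (e.g., 3-D models) can be retrieved before the transition actually occurs."""
        return [t.target_state for t in self.current.triggers if t.probability >= threshold]


# Usage: a hypothetical "monster" object that changes shape when the user gestures at it.
monster = StateBasedVirtualObject(
    states={
        "idle": ObjectState("idle", "monster_idle.mesh",
                            [TriggerEvent("gesture_with_gaze", "awake", probability=0.8)]),
        "awake": ObjectState("awake", "monster_awake.mesh",
                             [TriggerEvent("user_walks_away", "idle", probability=0.3)]),
    },
    initial_state="idle",
)
print(monster.predict_next_states())            # ['awake'] -> prefetch monster_awake.mesh
print(monster.on_trigger("gesture_with_gaze"))  # 'monster_awake.mesh'
```

In this sketch, predict_next_states would let a mobile device prefetch the 3-D model associated with a likely next state before the corresponding trigger event is detected, mirroring the prefetching rationale of claims 6-7; the choice of probability threshold is an assumption of the sketch.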
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US13/250,878 | 2011-09-30 | | |
| US13/430,972 | 2012-03-27 | | |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| HK1183120A (en) | 2013-12-13 |
Similar Documents
| Publication | Title |
|---|---|
| US20130083018A1 (en) | Personal audio/visual system with holographic objects |
| US10643389B2 (en) | Mechanism to give holographic objects saliency in multiple spaces |
| US9105210B2 (en) | Multi-node poster location |
| US10496910B2 (en) | Inconspicuous tag for generating augmented reality experiences |
| US9035970B2 (en) | Constraint based information inference |
| KR102408318B1 (en) | Virtual representations of real-world objects |
| US9384737B2 (en) | Method and device for adjusting sound levels of sources based on sound source priority |
| US9645394B2 (en) | Configured virtual environments |
| US9524081B2 (en) | Synchronizing virtual actor's performances to a speaker's voice |
| US20130083062A1 (en) | Personal a/v system with context relevant information |
| TWI597623B (en) | Wearable behavior-based vision system |
| CN106415444B (en) | Gaze swipe selection |
| US20130083007A1 (en) | Changing experience using personal a/v system |
| US20130083008A1 (en) | Enriched experience using personal a/v system |
| US20140152558A1 (en) | Direct hologram manipulation using imu |
| US20130342571A1 (en) | Mixed reality system learned input and functions |
| JP2016506565A (en) | Human-triggered holographic reminder |
| WO2013173526A1 (en) | Holographic story telling |
| WO2013029020A1 (en) | Portals: registered objects as virtualized, personalized displays |
| HK1183120A (en) | Personal audio/visual system with holographic objects |
| WO2025038197A1 (en) | Application programming interface for discovering proximate spatial entities in an artificial reality environment |