HK1197944A - Augmented reality display of scene behind surface
- Publication number
- HK1197944A (application HK14111489.9A)
- Authority
- HK
- Hong Kong
- Prior art keywords
- scene
- representation
- image data
- display
- identifying
Description
Technical Field
The present invention relates to enhancing the appearance of a surface via a see-through display device, and more particularly to augmented reality display of a scene behind the surface.
Background
Surfaces, such as walls and doors, may obstruct the view of a scene located behind them. To view such a scene, a person may need to open or otherwise manipulate the surface while in close physical proximity to it. However, such manipulation may not be possible or desirable in some situations, for example when physical access to the surface at that moment is inconvenient or unavailable.
Disclosure of Invention
Embodiments are disclosed that relate to enhancing the appearance of a surface via a see-through display device. For example, one disclosed embodiment provides, on a computing device that includes a see-through display device, a method of enhancing the appearance of a surface. The method comprises acquiring, via an outwardly facing image sensor, image data of a first scene viewable through the display; identifying, based on the image data, a surface viewable through the display; and, in response to identifying the surface, obtaining a representation of a second scene, the second scene including one or more of a scene physically located behind the surface viewable through the display and a scene located behind a surface contextually related to the surface viewable through the display. The method further includes displaying the representation via the see-through display.
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter. Furthermore, the claimed subject matter is not limited to implementations that solve any or all disadvantages noted in any part of this disclosure.
Drawings
FIG. 1 illustrates an example use environment for an embodiment of a see-through display device, and also illustrates an embodiment of an enhancement to scene viewing by the see-through display device.
FIGS. 2 and 3 illustrate other embodiments of enhancement of scene viewing by the see-through display device of FIG. 1.
FIG. 4 schematically shows a block diagram illustrating an embodiment of a use environment for a see-through display device.
FIG. 5 shows a process flow depicting an embodiment of a method of enhancing a view of a scene.
FIG. 6 schematically shows an example embodiment of a computing system.
Detailed Description
As described above, various surfaces may obstruct a person's view of a scene located behind the surface. In some instances, it is advantageous for people to have the ability to see what is behind the surface without having to obtain a true, physical view behind the surface. For example, in the case of a user-operable surface, such as a refrigerator door, operating the surface to obtain a view behind the surface may allow cold air to escape. Similarly, this ability is also desirable for ease of viewing behind the surface when not physically proximate to the surface, such as when a person is sitting on a sofa across a room from the surface, or in a different location from the surface.
Accordingly, embodiments are disclosed that relate to providing a visual representation of an occluded scene, e.g., via displaying the representation of the occluded scene in a spatial registration of an occlusion surface or a context dependent surface. In this way, a user is able to visually interpret an occluded scene even if the user has not previously viewed the occluded scene and/or is not in spatial proximity to the occluded scene.
Before discussing these embodiments in detail, a non-limiting use scenario is described with reference to FIG. 1, which FIG. 1 illustrates an example environment 100 in the form of a kitchen. The kitchen includes a scene 102 viewable through a see-through display device 104 worn by a user 106. It should be appreciated that in some embodiments, the scene 102 viewable through the see-through display may be substantially coextensive with the user's field of view, while in other embodiments, the scene viewable through the see-through display may occupy a portion of the user's field of view.
As will be described in greater detail subsequently, the see-through display device 104 may include one or more outwardly facing image sensors (such as a two-dimensional camera and/or a depth camera) configured to acquire image data (such as color/grayscale images, depth images/point cloud data, etc.) representative of the environment 100 as a user navigates the environment. The image data may be used to obtain information about the layout of the environment (such as a three-dimensional surface map, etc.) and the layout of the objects and surfaces contained therein.
Image data acquired via the outward facing image sensor may be used to identify the user's location and orientation within the room. One or more feature points in the room may be identified, such as by comparison with one or more previously acquired images, to determine the orientation and/or position of the see-through display device 104 within the room.
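By way of a non-limiting illustration, the following Python sketch shows one way such feature-point comparison could be implemented with the OpenCV library. The use of ORB features, the RANSAC threshold, and the function name are assumptions made for illustration only and are not prescribed by this disclosure.

```python
# Illustrative sketch (not the disclosed algorithm): estimate how the current
# camera frame relates to a previously acquired reference image by matching
# feature points, from which device orientation/position may be derived.
import cv2
import numpy as np

def match_to_reference(reference_img, current_img, min_matches=10):
    """Return a 3x3 homography mapping reference coordinates into the
    current frame, or None if too few feature matches are found."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp_ref, des_ref = orb.detectAndCompute(reference_img, None)
    kp_cur, des_cur = orb.detectAndCompute(current_img, None)
    if des_ref is None or des_cur is None:
        return None

    # Hamming distance suits ORB's binary descriptors; cross-checking
    # keeps only mutually best matches.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_ref, des_cur), key=lambda m: m.distance)
    if len(matches) < min_matches:
        return None

    src = np.float32([kp_ref[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_cur[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # RANSAC rejects outlier matches; the homography H encodes the view
    # change relative to the stored reference image.
    H, _mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H
```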
The image data can further be used to identify a surface that occludes another scene, such as surface 108 (e.g., a refrigerator door). Identification of the surface may include, for example, detecting opening and/or closing of the surface via the image data, detecting the shape of a door or similar feature in the image data, and so forth. As another example, the see-through display device 104 may determine that image data exists for a scene located behind a detected surface, and may thus identify the surface even while the scene behind it is occluded, without directly detecting the opening/closing of a door, classifying the appearance of the object that includes the surface, and so forth. Further, in some embodiments, the see-through display device 104 may be configured to determine a context of the scene 102 (e.g., refrigerator, living room, office, washroom, etc.) and/or of a surface viewable through the display device (e.g., refrigerator door, cabinet door, wall, etc.). Such context is useful, for example, for programmatically determining whether to display image data of a scene behind the surface (such as based on one or more user preferences), as sketched below. By way of non-limiting example, a user may wish to view image data of scenes occluded by doors, of scenes located in the user's home, of scenes inside a refrigerator, and/or of scenes having any other suitable context. Thus, upon identifying one or more scenes having such context, a representation of the scene may be displayed programmatically. Such context is further useful, for example, for determining, based on privacy preferences, whether to display image data of a scene behind the surface and, if such display is permissible, which data to display (e.g., to what surface "depth," where one identified surface lies behind another identified surface; whether to display a recent image or an earlier image of the scene; etc.). Thus, such context may allow for scene-based and/or surface-based granularity with respect to the sharing, selection, and display of various scenes.
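As a minimal, hypothetical sketch of the preference-driven decision described above, the following Python fragment gates display of an occluded scene on user preferences; the Surface fields and preference keys are illustrative assumptions rather than structures defined herein.

```python
# Hypothetical sketch: decide programmatically whether to display the scene
# behind an identified surface, based on user preferences. All field and
# preference names below are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class Surface:
    kind: str             # e.g. "refrigerator door", "cabinet door", "wall"
    location: str         # e.g. "home", "grocery store"
    depth: int            # 1 = directly occluding; 2 = behind another surface
    has_scene_data: bool  # image data exists for the scene behind it

def should_display(surface: Surface, prefs: dict) -> bool:
    if not surface.has_scene_data:
        return False
    # Privacy preference: never reveal scenes in locations the user excluded.
    if surface.location in prefs.get("private_locations", ()):
        return False
    # Depth preference: only reveal scenes up to the configured surface depth.
    if surface.depth > prefs.get("max_depth", 1):
        return False
    # Context preference: only surface kinds the user has opted into.
    return surface.kind in prefs.get("enhanced_kinds", ())

prefs = {"enhanced_kinds": {"refrigerator door", "cabinet door"}, "max_depth": 1}
print(should_display(Surface("refrigerator door", "home", 1, True), prefs))  # True
print(should_display(Surface("wall", "home", 2, True), prefs))               # False
```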
The see-through display device 104 is further configured to enhance the appearance of the surface 108 by displaying a representation 110 (e.g., image data) of a scene 112 (e.g., the refrigerator interior) that is physically located behind the surface 108 as an "overlay" on top of the surface 108 (i.e., the refrigerator door). As will be described in more detail later, such enhancement may be triggered via any suitable mechanism, including but not limited to user commands and/or recognition of the surface by the display device. As another example, in some embodiments, the see-through display device 104 may be configured to determine a direction of the gaze of the user 106 (e.g., via one or more image sensors that image the locations of one or both of the user's eyes), and may trigger display of the representation 110 based on the user's gaze resting on the surface 108.
The representation 110 of the scene 112 may include previously collected image data. For example, the representation may include image data previously collected by the see-through display device 104 during a previous interaction of the user 106 with the object that includes the surface 108. As another example, the displayed representation may include image data previously collected by a different device (e.g., another user's see-through display device, a smartphone, an IP camera, etc.). Thus, in some embodiments, the see-through display device 104 may be configured to share data with, and retrieve data from, multiple devices in order to provide recently acquired images. Further, in yet other embodiments, the user may choose to view a representation earlier than the most recently acquired image, as will be explained in more detail later.
It will be appreciated that the displayed representation of the occluded scene may include information generated from the image data rather than or in addition to the image data itself. For example, in some embodiments, the representation may include a generated model (e.g., generated from point cloud data acquired via a depth camera) and/or a generated textual description of the scene 112. In some embodiments, the viewing angle/direction of such generated models may be changed by the user.
Although the representation 110 of the scene 112 is shown as being spatially registered with, and coextensive with, the portion of the surface 108 viewable through the see-through display, it should be appreciated that the representation 110 may be displayed in any other suitable manner, and via devices other than a see-through display device. For example, in some embodiments, the enhancement of the scene 112 may be provided via a mobile computing device that does not include a see-through display. In such embodiments, the scene may be imaged via an image sensor of a mobile phone, tablet computer, or other mobile device, and a representation of the scene 102 (e.g., a "live feed" from the image sensor) may be displayed with the representation 110 displayed as an overlay over the surface 108.
As yet another example, FIG. 2 illustrates an example embodiment of a scene 200 in an environment 202 as viewed through a see-through display device (e.g., see-through display device 104 of FIG. 1). As shown, the environment 202 takes the form of a grocery store and includes a surface 204 (such as a see-through door) of an object 206 in the form of a refrigerated display case.
The see-through display device may be configured to identify that object 206 is a refrigerated display case, and further to determine that object 206 is contextually related to another object, such as the refrigerator that includes surface 108 of FIG. 1. Such a determination may be made based on an analysis of the shape and/or appearance of the object against identified shapes (e.g., via a classification function), based on the shape and/or appearance of the object's contents (e.g., milk cartons), or in any other suitable manner. Further, additional contextual information may be considered when identifying the object. For example, location information (e.g., that the user is at a grocery store) may be used to help identify the object 206.
In response to identifying the object 206, the see-through display device may display an image that enhances the appearance of the surface 204, where the image includes a representation 208 of a contextually related scene 210 (in this example, the refrigerator interior scene 112 of FIG. 1). In this way, the contextual cue of the refrigerated display case in the grocery store, and/or of its contents (such as milk cartons), can trigger display of the most recently viewed contents of the user's home refrigerator. This allows the user to view the recent contents of the home refrigerator and determine whether any products need to be purchased at the store.
The contextually related scene may be displayed in any suitable manner. For example, although shown as substantially opaque, it should be appreciated that the representation 208 of the scene 210 may be displayed with reduced opacity such that the contents of the refrigerated display case remain viewable through the representation.
It should be appreciated that, for a detected surface, there may be any number and combination of representations of scenes physically located behind the surface and/or scenes contextually related to the surface. Thus, various mechanisms may be utilized to determine which scene, and which particular representation of it, to display to the user. For example, where multiple images of a scene physically located behind the surface (or behind a surface contextually related to the surface) are stored, the most recent representation may be displayed as a default, and in some embodiments the user may request another representation (e.g., an earlier representation), as sketched below. In other embodiments, any other representation may serve as the default.
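A minimal sketch of such default selection follows, assuming each stored representation carries a timestamp; the function and parameter names are illustrative assumptions.

```python
# Illustrative sketch: default to the most recently acquired representation
# of a scene, while allowing an earlier representation to be requested.
def select_representation(representations, requested=None):
    """representations: list of (timestamp, image) pairs for one scene.
    Returns the image at `requested` if present, else the newest image."""
    by_time = sorted(representations, key=lambda r: r[0])
    if requested is not None:
        for ts, image in by_time:
            if ts == requested:
                return image
    return by_time[-1][1]  # default: most recent representation
```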
Where it is desired to display different scenes, where one or more surfaces are not identified (e.g., due to lack of network connectivity, a dimly lit scene, etc.), and/or according to any other suitable mechanism, a list of scenes for which information is available may be displayed. The list may be configured to be browsed manually by the user, or may be presented as a slideshow or in another automatically advancing manner. Further, such a list may be presented via text, via images (e.g., thumbnails), and/or via any other suitable mechanism or combination of mechanisms. It will be appreciated that in some embodiments, two or more representations of one or more scenes may be selected for simultaneous or sequential viewing (e.g., to compare views of a scene captured at two different times).
Further, in some embodiments, the see-through display device may be configured to allow a user to view behind multiple surfaces. For example, FIG. 3 shows a number of scenes representing various "depths" in an environment. More specifically, FIG. 3 shows a scene 300 viewable through a see-through display device (e.g., see-through display device 104 of FIG. 1) in an environment 302, where scene 300 includes a surface 304 (e.g., a door) of an object 306 (e.g., a cabinet) obstructing a scene 308 (e.g., the interior of the cabinet). Further, a surface 310 (e.g., a wall) and a surface 312 (e.g., a door) are shown at least partially occluding a scene 314 (e.g., another room).
Representations of scene 308 and/or scene 314 may be displayed to a user according to any suitable mechanism or combination of mechanisms. For example, a see-through display device may include one or more user-adjustable preferences such that the device may be configured to display a scene occluded by a door (e.g., scene 308) but not a scene occluded by a wall (e.g., scene 314). The see-through display device may also include one or more preferences regarding the "depth level" to be displayed, as sketched below. For example, at depth level "1," scene 308 (located behind one surface) may be displayed while scene 314 (located behind two surfaces) is not displayed. As another example, at depth level "2," both scene 308 and scene 314 may be displayed. Thus, where the see-through display device allows a user to view scenes at different depths, the scenes may be displayed separately or together.
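The following sketch illustrates one possible form of such depth-level filtering; the pairing of a scene name with a count of occluding surfaces is an assumption made for illustration.

```python
# Illustrative sketch: show only scenes occluded by no more surfaces than the
# user's configured depth level permits.
def scenes_to_display(scenes, depth_level):
    """scenes: iterable of (name, occluding_surface_count) pairs."""
    return [name for name, depth in scenes if depth <= depth_level]

scenes = [("cabinet interior", 1), ("adjacent room", 2)]
print(scenes_to_display(scenes, 1))  # ['cabinet interior']
print(scenes_to_display(scenes, 2))  # ['cabinet interior', 'adjacent room']
```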
For example, a wall (e.g., surface 310) between scenes in the use environment may be identified by its thickness (e.g., via image data of the wall edges acquired with one or more depth cameras), by determining that information is available for scenes on both sides of the wall (e.g., scene 300 and scene 314), and/or in any other suitable manner. Similarly, a door (e.g., surface 312) may be identified by motion (e.g., by being present only at certain times, in temporally separated instances of the image data), by appearance and/or contextual information such as shape (e.g., rectangular and extending upward from the floor), features (e.g., a doorknob), and location (e.g., set in a larger, flat expanse), and/or in any other suitable manner.
As described above, the representation of an occluded scene (e.g., scene 314) displayed to the user may include previously collected image data. Such previously collected image data may include data collected by the user and/or by another user. Further, the previously collected image data may be presented as the most recent image stored for the occluded scene, or as one or more earlier instances of image data. Additionally, in some embodiments, the image data may include real-time image data currently being acquired by a different computing device. As a more specific example, the representation of the scene 314 may include image data from another user (not shown) who is currently viewing the scene 314. In this manner, the user is able to view a representation of the scene 314 that is updated in real time based on image data from the other user.
Such a configuration may provide the potential benefit of allowing a user to find another user by viewing a representation of the other user's scene. For example, finding a route through a mall or office building based on GPS coordinates may be confusing, as the coordinates are not meaningful in themselves and the user may not have ready access to a map. Further, walls or other obstructions may prevent a direct path from the user's location to the destination. Thus, the user can view the current scene at the destination (e.g., via a friend's see-through display device) and navigate to the friend by identifying landmarks near the destination (e.g., directly, or via computer vision techniques).
In embodiments where image data is shared among users, it will be appreciated that any suitable privacy and/or permission mechanism, or combination thereof, may be used to control cross-user access to such image data. For example, in some embodiments, a user may maintain a list of trusted other users that defines access to the user's image data. In other embodiments, access may also be limited based on the location of the surface. For example, a user may wish to restrict access to image data acquired in a private space (e.g., home or work) but may wish to share image data acquired in a public space (e.g., a shopping mall). In yet other embodiments, additional granularity is provided by defining different trust levels for different users. For example, family members may be granted access to image data acquired at the user's home, while non-family-member users may be restricted from accessing such image data. It will be appreciated that these privacy/permission schemes are presented for purposes of example and are not intended to be limiting in any way.
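The sketch below shows one possible form of such a permission check, combining a trusted-user list, location-based restrictions, and per-user trust levels; the trust tiers, policy keys, and location classes are illustrative assumptions only.

```python
# Hypothetical sketch of a cross-user access check for shared image data.
TRUST_LEVELS = {"stranger": 0, "acquaintance": 1, "family": 2}

def may_access(requester, owner_policy, surface_location):
    """Return True if `requester` may view the owner's image data for a
    surface located in a 'public' or 'private' space."""
    tier = owner_policy["trust"].get(requester, "stranger")
    level = TRUST_LEVELS.get(tier, 0)
    if surface_location == "public":
        return level >= owner_policy.get("min_public_trust", 0)
    # Private spaces (e.g., the home) require a higher trust level.
    return level >= owner_policy.get("min_private_trust", 2)

policy = {"trust": {"alice": "family", "bob": "acquaintance"},
          "min_public_trust": 1, "min_private_trust": 2}
print(may_access("alice", policy, "private"))  # True  (family member)
print(may_access("bob", policy, "private"))    # False (restricted)
```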
FIG. 4 schematically shows a block diagram illustrating an embodiment of a use environment 400 for a see-through display device configured to enhance the view of a surface with a view of a scene occluded by the surface. The use environment 400 includes a plurality of see-through display devices, shown as see-through display device 1 (402) and see-through display device N. Each see-through display device includes a see-through display subsystem 404 configured to display images on one or more see-through display screens. The see-through display devices may take any suitable form, including but not limited to a head-mounted near-eye display in the form of glasses, goggles, or the like.
Each see-through display device 402 may further include a sensor subsystem 406, the sensor subsystem 406 including any suitable sensors. For example, the sensor subsystem 406 may include one or more image sensors 408, such as one or more color or grayscale two-dimensional cameras 410 and/or one or more depth cameras 412. Depth camera 412 may be configured to measure depth using any suitable technique, including but not limited to time-of-flight, structured light, and/or stereo imaging. Image sensor 408 may include one or more outward-facing cameras configured to acquire image data of a background scene (e.g., scene 102 of FIG. 1) viewable through the see-through display device. Further, in some embodiments, the user device may include one or more illumination devices (e.g., IR LEDs, flash lamps, structured light emitters, etc.) to aid in image acquisition. Such illumination devices may be activated in response to one or more environment-related inputs (e.g., detection of dim light) and/or one or more user inputs (e.g., a voice command). In some embodiments, the image sensors may further include one or more inward-facing image sensors configured to detect eye position and movement to enable gaze tracking (e.g., to allow visual operation of a menu system, to identify eye focus on a surface, etc.).
Image data received from image sensor 408 may be stored in an image data store 414 (e.g., flash memory, EEPROM, etc.) and may be used by the see-through display device 402 to identify one or more surfaces present in a given environment. Further, each see-through display device 402 may be configured to interact with a remote service 416 and/or one or more other see-through display devices via a network 418 (such as a computer network and/or a wireless telephone network). Still further, in some embodiments, interaction between see-through display devices may occur via a direct link 420 (e.g., near-field communication) instead of, or in addition to, the network 418.
Remote service 416 may be configured to communicate with multiple see-through display devices, receiving data from and transmitting data to those devices. Further, in some embodiments, at least some of the functionality described above may be provided by the remote service 416. As a non-limiting example, the see-through display device 402 may be configured to acquire image data and display the enhanced image, while the remaining functions (e.g., surface identification, related scene acquisition, image enhancement, etc.) are performed by the remote service.
Remote service 416 may be communicatively coupled to a data store 422, data store 422 being shown to store information for a plurality of users represented by user 1 (424) and user N (426). It should be appreciated that any suitable data may be stored, including, but not limited to, image data 428 (e.g., image data received from image sensor 408 and/or information computed therefrom) and contextual information 430. Contextual information 430 may include, but is not limited to, the environment of one or more surfaces and/or one or more scenes represented by image data 428. Such information may be used, for example, by the see-through display device 402 to identify and retrieve a representation of a scene that is contextually related to a surface viewable through the see-through display device (e.g., scene 112 related to surface 108 of FIG. 1).
Although the information in data store 422 is shown as being organized on a user-by-user basis, it will be appreciated that the information may be organized and stored in any suitable manner. For example, the image data and/or surface information may be arranged by location (e.g., via GPS coordinates, via an identified location classification such as "home" or "work"), by category (e.g., "food"), and/or the like.
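As one hypothetical illustration of such organization, the sketch below indexes scene entries by both location and category in addition to time; the schema is an assumption for illustration, not a storage layout defined by this disclosure.

```python
# Illustrative sketch: a scene store queryable by location and/or category,
# returning entries ordered by acquisition time.
from collections import defaultdict

class SceneStore:
    def __init__(self):
        self._by_location = defaultdict(list)  # e.g. "home" -> entries
        self._by_category = defaultdict(list)  # e.g. "food" -> entries

    def add(self, entry):
        # entry: dict with "image", "location", "category", "timestamp" keys
        self._by_location[entry["location"]].append(entry)
        self._by_category[entry["category"]].append(entry)

    def query(self, location=None, category=None):
        candidates = (self._by_location[location] if location
                      else self._by_category[category])
        if location and category:
            candidates = [e for e in candidates if e["category"] == category]
        return sorted(candidates, key=lambda e: e["timestamp"])
```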
The contextual information 430 may be determined and assigned to the image data, and/or to objects in the image data, in any suitable manner. In some embodiments, the contextual information 430 may be defined at least in part by the user. In one particular example, referring to FIG. 1, see-through display device 104 may detect the user's gaze toward surface 108, and user 106 may provide a voice command (e.g., "mark surface 'fridge door'") to enter contextual information 430 for the surface 108. Similarly, the see-through display device 104 may detect a location in the environment 100, and the user 106 may provide a voice command (e.g., "mark scene 'kitchen'") to enter contextual information 430 for the environment 100.
Also, in some embodiments, the contextual information 430 may be determined automatically via the see-through display device 402, via the remote service 416, or via other devices or services. For example, one or more classification functions may be used to classify objects imaged by an outward-facing image sensor, and tags may be applied based on the results of the classification process, as well as the location of the object (home, office, etc.), and/or any other suitable contextual information. It will be understood that these scenarios are presented for purposes of example, and are not intended to be limiting in any way.
The data store 422 may further include other data 432, including but not limited to information about trusted other users with whom the image data 428 and/or the contextual information 430 may be shared. As described above, access to image data 428 and/or contextual information 430 may be controlled according to any suitable granularity. For example, access may be denied to all other users based on the location of the surface (e.g., home relative to public space), denied to certain users based on one or more users' relationships (e.g., image data in the home is restricted to family members), and/or otherwise controlled according to one or more static and/or user-adjustable preferences.
In this manner, a user of device 402 can access data previously collected by one or more different devices, such as a family member's see-through display device or another image sensing device. In this manner, image data, and/or information computed from the image data and related to various use environments, may be shared and updated between user devices. Thus, depending on privacy preferences, a user may have access to information relating to a given environment even if the user has not previously navigated that environment. Further, even if the user has previously navigated the environment, more recently updated information may be available.
The see-through display device 402 may further include one or more audio sensors 434, such as one or more microphones that may be used as input mechanisms. The see-through display device 402 may further include one or more location sensors 436 (e.g., GPS, RFID, proximity, etc.). In some embodiments, the location sensor may be configured to provide data for determining the location of the user device. Further, in some embodiments, information from one or more wireless communication devices may be used to determine location, e.g., via detection of proximity to a known wireless network.
Turning now to FIG. 5, a process flow is shown depicting an embodiment of a method 500 for enhancing a view of a scene. At 502, method 500 includes acquiring, via an outward-facing image sensor, image data of a first scene viewable through a display. The image data may be acquired from, for example, one or more two-dimensional cameras 504 and/or one or more depth cameras 506.
At 508, method 500 further includes identifying a surface (e.g., surface 108) viewable through the display based on the image data. In some embodiments, identifying the surface may include identifying 510 a location of the computing device based on one or more of location data from a location sensor (e.g., location sensor 436) and image data from an outward-facing image sensor, and identifying the surface based on such information.
Identifying the surface may further include identifying 512 whether the surface is a movable surface or a non-movable surface. For example, a door (e.g., surface 108) may be identified as a door by detecting motion of the surface via the image data. As another example, a surface may be identified as movable based on a comparison between two or more instances of image data (e.g., one instance with the door open and another with the door closed), based on the known presence of one or more scenes occluded by the surface (from previously collected image data and/or location data), and/or in any other suitable manner, as sketched below.
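The comparison between instances of image data might be implemented as a simple frame difference over the candidate surface region, as in the following OpenCV-based sketch; the change threshold and the assumption that the two frames are registered are illustrative.

```python
# Illustrative sketch (an assumption, not the disclosed method): flag a
# surface region as movable when two temporally separated, registered frames
# of the region differ substantially (e.g., door open vs. door closed).
import cv2
import numpy as np

def region_is_movable(frame_a, frame_b, region, threshold=0.25):
    """region: (x, y, w, h) bounding box of the candidate surface in both
    frames. Returns True if enough pixels changed between the instances."""
    x, y, w, h = region
    patch_a = cv2.cvtColor(frame_a[y:y+h, x:x+w], cv2.COLOR_BGR2GRAY)
    patch_b = cv2.cvtColor(frame_b[y:y+h, x:x+w], cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(patch_a, patch_b)
    # Fraction of pixels whose intensity changed noticeably between instances.
    changed = np.count_nonzero(diff > 30) / diff.size
    return changed > threshold
```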
Identifying the surface may further include determining 514 a context of the surface viewable through the display (e.g., surface 204), for example, by identifying one or more of an object containing the surface viewable through the display (e.g., a refrigerator display in a grocery store) and an object physically located behind the surface viewable through the display (e.g., a milk carton). As described above, it should be appreciated that the context of the surface may be determined in any suitable manner.
At 516, method 500 further includes, in response to identifying the surface, obtaining a representation of a second scene, the second scene including one or more of a scene physically located behind the surface viewable through the display and a scene located behind the surface contextually related to the surface viewable through the display. In some embodiments, the representation may be retrieved from a local store (e.g., image data store 414). In other embodiments, obtaining the representation of the second scenario may include retrieving the representation from a remote device on a computer network (e.g., remote service 416) and/or via a direct link (e.g., direct link 420). Regardless of the storage location, acquiring the representation may include acquiring 520 real-time image data collected by a device other than the computing device. In other embodiments, acquiring the representation may include acquiring 522 image data previously collected by a device other than the computing device.
It should be appreciated that there may be any number and/or configuration of representations of the second scene. For example, referring to the example use environment 202 of FIG. 2, there may be scenes other than scene 210 (e.g., the refrigerator at the user's home) that are contextually related to the object 206 (e.g., the refrigerated display case of a supermarket), such as a friend's refrigerator, a refrigerated display case of another store, a food storage room, and so forth. Accordingly, obtaining a representation of the second scene may include selecting 524 the representation from a plurality of representations of scenes located behind surfaces contextually related to the surface viewable through the display. Such a selection may be performed manually by the user (e.g., by browsing a list) and/or may be determined programmatically.
It should further be appreciated that, for any given scene, there may be multiple versions of image data corresponding to that scene (e.g., yesterday's image data, image data from a month ago, image data from a year ago, etc.). Accordingly, acquiring the representation of the second scene may further include determining 526 a most recent representation of the second scene and acquiring that most recent representation as a default. In other cases, it may be desirable to view a previous version of the image data. For example, it may be desirable to view one or more previous versions of the image data to identify one or more objects that were previously present in the scene. As a more specific example, the user may refer to a previous version of the image data of the user's refrigerator to recall the type of beverage the user likes and wishes to purchase again. It should be appreciated that the above-described scenarios are presented for purposes of example and are not intended to be limiting in any way.
At 528, the method 500 includes detecting a trigger to display the representation. Any suitable trigger may be utilized. Examples include, but are not limited to, one or more of direct voice commands, contextual triggers, programmatically generated triggers, and gestures (via eyes, arms, head, and/or otherwise). As described above, contextual triggers may include a visually determined context of the scene, an audio-based context of a conversation (e.g., a determination that a conversation involves food), and the like.
For example, programmatically generated triggers may be implemented based on time, date, and/or a previous state of the computing device. For example, in some embodiments, a user may enable the above-described enhancement mechanism, and enhancement may be performed until the mechanism is disabled. In other words, each surface viewable through the see-through display device may be identified and then enhanced until a trigger is received requesting that the mechanism be disabled. As another example, a user may specify one or more particular surfaces (e.g., the home refrigerator), one or more contexts (e.g., food-related surfaces), and/or any other granularity of operation, for which enhancement is provided until a trigger is received requesting otherwise.
In some embodiments, the trigger may be received from a remote computing device (e.g., a see-through display device of another user) and/or based at least in part on information received from the remote computing device. In such embodiments, as with the "local" triggers discussed above, the triggers may be generated according to any suitable mechanism or combination of mechanisms. For example, as mentioned above, scene augmentation may allow a user to find another user by viewing representations of the other user's scenes. Thus, in such a scenario, a trigger may be received from the computing device of the other user to provide such functionality. It should be appreciated that other triggers are possible without departing from the scope of the present disclosure.
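The following sketch illustrates one possible dispatch across the trigger types described above; the command string, dwell threshold, and context labels are illustrative assumptions and not triggers defined by this disclosure.

```python
# Hypothetical sketch: return the first satisfied trigger, or None.
def detect_trigger(voice_command=None, gaze_dwell_s=0.0, scene_context=None,
                   enhancement_enabled=False, remote_request=False):
    if voice_command == "show me what's behind this":
        return "voice"          # direct voice command
    if gaze_dwell_s >= 1.5:
        return "gaze"           # sustained gaze on the surface
    if scene_context in {"food", "refrigerator"}:
        return "context"        # contextual trigger (visual or conversational)
    if enhancement_enabled:
        return "programmatic"   # mechanism previously enabled, still active
    if remote_request:
        return "remote"         # e.g., another user's computing device
    return None
```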
At 530, the method 500 further includes displaying the representation via the see-through display. For example, in some embodiments, displaying the representation includes displaying 532 an image to enhance the appearance of the surface, the image including a representation of the second scene in spatial registration with the surface. In other embodiments (e.g., representation 208 of FIG. 2), the representation may be displayed in any other suitable manner. It should be appreciated that the representation may have any suitable appearance, and may include information (e.g., three-dimensional models, text-based information, etc.) instead of, or in addition to, image data received from one or more image sensors.
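One way to render a representation in spatial registration with a surface is to warp the stored scene image onto the quadrilateral where the surface appears in the current view, as in the hypothetical OpenCV-based sketch below; detection of the surface corners is assumed to have occurred elsewhere.

```python
# Illustrative sketch: overlay a stored scene image onto the region of the
# current view occupied by the occluding surface.
import cv2
import numpy as np

def overlay_scene(view, scene_img, surface_corners):
    """surface_corners: 4x2 array of the surface's corners in `view`,
    ordered top-left, top-right, bottom-right, bottom-left."""
    h, w = scene_img.shape[:2]
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    H = cv2.getPerspectiveTransform(src, np.float32(surface_corners))
    warped = cv2.warpPerspective(scene_img, H, (view.shape[1], view.shape[0]))

    # Paste the warped representation over the surface region only.
    mask = np.zeros(view.shape[:2], dtype=np.uint8)
    cv2.fillConvexPoly(mask, np.int32(surface_corners), 255)
    out = view.copy()
    out[mask == 255] = warped[mask == 255]
    return out
```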
As described above, it may be desirable to provide different "depths" of surface enhancement to a user of a display device. Accordingly, at 534, method 500 may further include receiving input to obtain a representation of a third scene (e.g., scene 314) that is physically located behind a surface in the second scene (e.g., scene 308). At 536, the method 500 may include, in response to the input, obtaining a representation of the third scene. The method 500 may further include displaying the representation of the third scene via the see-through display. As with the representation of the second scene, it should be appreciated that the representation of the third scene may have any suitable configuration. For example, in some embodiments, the representation of the third scene may be displayed in spatial registration with the surface in the second scene, while in other embodiments the representation may be displayed at other locations via the see-through display.
In some embodiments, the above-described methods and processes may be bound to a computing system comprising one or more computers. In particular, the methods and processes described herein may be implemented as a computer application, computer service, computer API, computer library, and/or other computer program product.
FIG. 6 schematically illustrates a non-limiting computing system 600 that can perform one or more of the above-described methods and processes. See-through display device 104, see-through display device 402, and the computing device executing remote service 416 are non-limiting examples of computing system 600. Computing system 600 is shown in simplified form. It will be appreciated that virtually any computer architecture can be used without departing from the scope of the disclosure. In different embodiments, the computing system 600 may take the form of a mainframe computer, server computer, desktop computer, laptop computer, tablet computer, home entertainment computer, network computing device, mobile communication device, wearable computer, gaming device, and the like.
The computing system 600 includes a logic subsystem 602 and a data-holding subsystem 604. Computing system 600 may optionally include a display subsystem 606, a communication subsystem 608, and/or other components not shown in FIG. 6. Computing system 600 may also optionally include user input devices such as a keyboard, mouse, game controller, camera, microphone, and/or touch screen, among others.
Logic subsystem 602 may include one or more physical devices configured to execute one or more instructions. For example, the logic subsystem may be configured to execute one or more instructions that are part of one or more applications, services, programs, routines, libraries, objects, components, data structures, or other logical constructs. Such instructions may be implemented to perform a task, implement a data type, transform the state of one or more devices, or otherwise arrive at a desired result.
The logic subsystem may include one or more processors configured to execute software instructions. Additionally or alternatively, the logic subsystem may include one or more hardware or firmware logic machines configured to execute hardware or firmware instructions. The processors of the logic subsystem may be single core or multicore, and the programs executed thereon may be configured for parallel or distributed processing. The logic subsystem may optionally include individual components that are distributed throughout two or more devices, which may be remotely located and/or configured for coordinated processing. One or more aspects of the logic subsystem may be virtualized and executed by remotely accessible networked computing devices configured in a cloud computing configuration.
Data-holding subsystem 604 may include one or more physical, non-transitory devices configured to hold data and/or instructions executable by the logic subsystem to perform the methods and processes described herein. In implementing such methods and processes, the state of data-holding subsystem 604 may be transformed (e.g., to hold different data).
Data-holding subsystem 604 may include removable media and/or built-in devices. Data-holding subsystem 604 may include optical memory devices (e.g., CD, DVD, HD-DVD, Blu-ray disc, etc.), semiconductor memory devices (e.g., RAM, EPROM, EEPROM, etc.) and/or magnetic memory devices (e.g., hard disk drive, floppy disk drive, tape drive, MRAM, etc.), among others. Data-holding subsystem 604 may include devices with one or more of the following characteristics: volatile, nonvolatile, dynamic, static, read/write, read-only, random access, sequential access, location addressable, file addressable, and content addressable. In some embodiments, logic subsystem 602 and data-holding subsystem 604 may be integrated into one or more common devices, such as an application specific integrated circuit or a system on a chip.
FIG. 6 also illustrates an aspect of the data-holding subsystem in the form of removable computer-readable storage media 610, which may be used to store and/or transfer data and/or instructions executable to implement the herein-described methods and processes. Removable computer-readable storage media 610 may take the form of CDs, DVDs, HD-DVDs, Blu-ray discs, EEPROMs, and/or floppy disks, among others.
It should be appreciated that data-holding subsystem 604 includes one or more physical, non-transitory devices. Conversely, in some embodiments, aspects of the instructions described herein may propagate in a transient manner through a pure signal (e.g., an electromagnetic signal, an optical signal, etc.) that is not held by the physical device for at least a finite duration. Furthermore, data and/or other forms of information pertaining to the present disclosure may be propagated by a pure signal.
It should be appreciated that a "service" as used herein may be an application executable across multiple user sessions and available to one or more system components, programs, and/or other services. In some implementations, a service may run on a server in response to a request from a client.
When included, display subsystem 606 may be used to present a visual representation of data held by data-holding subsystem 604. As the herein described methods and processes change the data held by the data-holding subsystem, and thus transform the state of the data-holding subsystem, the state of display subsystem 606 may likewise be transformed to visually represent changes in the underlying data. Display subsystem 606 may include one or more display devices utilizing virtually any type of technology. Such display devices may be combined with logic subsystem 602 and/or data-holding subsystem 604 within a shared enclosure, or such display devices may be peripheral display devices.
When included, communication subsystem 608 may be configured to communicatively couple computing system 600 with one or more other computing devices. The communication subsystem 608 may include wired and/or wireless communication devices compatible with one or more different communication protocols. By way of non-limiting example, the communication subsystem may be configured to communicate via a wireless telephone network, a wireless local area network, a wired local area network, a wireless wide area network, a wired wide area network, or the like. In some embodiments, the communication subsystem may allow computing system 600 to send and/or receive messages to and/or from other devices via a network such as the internet.
It will be appreciated that the configurations and/or approaches described herein are exemplary in nature, and that these specific embodiments or examples are not to be considered in a limiting sense, because numerous variations are possible. The specific routines or methods described herein may represent one or more of any number of processing strategies. As such, various acts illustrated may be performed in the sequence illustrated, in other sequences, in parallel, or in some cases omitted. Likewise, the order of the above-described processes may be changed.
The subject matter of the present disclosure includes all novel and nonobvious combinations and subcombinations of the various processes, systems and configurations, and other features, functions, acts, and/or properties disclosed herein, as well as any and all equivalents thereof.
Claims (10)
1. On a computing device (402) including an outward-facing image sensor (408), a method comprising:
acquiring (502) image data of a first scene via the outward-facing image sensor;
identifying (508) a surface based on the image data;
in response to identifying the surface, obtaining (516) a representation of a second scene, the second scene comprising one or more of a scene physically located behind the surface and a scene located behind a surface contextually related to the surface; and
displaying (530) the representation via a display device.
2. The method of claim 1, wherein identifying the surface comprises identifying a location of the computing device based on one or more of location data from a location sensor and image data from the outward-facing image sensor, and identifying the surface based on the location of the computing device.
3. The method of claim 1, wherein identifying the surface comprises identifying whether the surface is a movable surface or a non-movable surface, and displaying the representation only when the surface is a movable surface.
4. The method of claim 1, wherein the second scene is located behind a surface that is contextually related to the surface, and wherein identifying the surface comprises determining the context of the surface by identifying one or more of an object that contains the surface and an object that is physically located behind the surface.
5. The method of claim 4, wherein obtaining the representation of the second scene comprises selecting the representation from a plurality of representations of scenes that include surfaces contextually related to the surface.
6. The method of claim 1, wherein the second scene is physically behind the surface, and wherein the method further comprises:
receiving an input to obtain a representation of a third scene, the third scene being physically behind a surface in the second scene;
obtaining a representation of the third scene in response to the input; and
displaying, via the display device, a representation of the third scene.
7. The method of claim 1, wherein the display device is a see-through display device, and wherein displaying the representation comprises displaying an image to enhance an appearance of the surface, the image comprising a representation of the second scene in spatial registration of the surface.
8. The method of claim 1, further comprising detecting a trigger to display the representation, the trigger comprising one or more of a direct voice command, a contextual trigger, a programmatically generated trigger, and a gesture.
9. A computing device (402, 600) comprising:
a see-through display device (404, 606);
an outward-facing image sensor (408) configured to acquire image data of a scene viewable through the see-through display device, the image sensor comprising one or more two-dimensional cameras (410) and/or one or more depth cameras (412);
A logic subsystem (602) configured to execute instructions; and
a data-holding subsystem (604) including instructions stored thereon that are executable by the logic subsystem to:
identifying (508), based on the image data, a surface viewable through the display;
responsive to identifying the surface, obtaining (516) a representation of one or more of a scene physically located behind the surface viewable through the display and a scene located behind the surface contextually related to the surface viewable through the display; and
displaying (530) the representation via the see-through display.
10. The computing device of claim 9, wherein the instructions are executable to retrieve the representation from a remote device over a computer network, wherein instructions executable to obtain the representation comprise one or more of instructions executable to obtain image data previously collected by a device other than the computing device and instructions executable to obtain real-time image data collected by a device other than the computing device.
Publications (2)
| Publication Number | Publication Date |
|---|---|
| HK1197944A | 2015-02-27 |
| HK1197944B | 2018-05-04 |