WO2018148076A1 - System and method for automated positioning of augmented reality content - Google Patents
- Publication number
- WO2018148076A1 (PCT/US2018/016197)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- content
- display
- render
- display device
- hmd
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/20—Scenes; Scene-specific elements in augmented reality scenes
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/431—Generation of visual interfaces for content selection or interaction; Content or additional data rendering
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/81—Monomedia components thereof
- H04N21/816—Monomedia components thereof involving special video data, e.g. 3D video
Abstract
In one embodiment, systems and methods disclosed herein generate and display augmented reality (AR) content for a real-world environment in which a separate display is detected by a head-mounted display (HMD). AR content may be selected based on media content identified on the separate display. AR content may be displayed at locations in proximity to the separate display device that are selected based on visual characteristics of those locations, and with render parameters that increase the visibility of the AR content. One embodiment tracks the position and orientation of the HMD and selects a location to display the AR content based on that position and orientation. One embodiment displays virtual connectors between AR content and objects identified in the media content of the separate display; AR content may be displayed at locations that minimize intersections of the virtual connectors.
Description
SYSTEM AND METHOD FOR AUTOMATED POSITIONING OF AUGMENTED REALITY CONTENT
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] The present application is a non-provisional filing of, and claims benefit under 35 U.S.C. §119(e) from, U.S. Provisional Patent Application Serial No. 62/457,442, entitled "System and Method for Automated Positioning of Augmented Reality Content", filed February 10, 2017, the entirety of which is incorporated herein by reference.
BACKGROUND
[0002] In today's internet age, there is a trend toward consuming richer and more immersive digital content, and how we access this content is changing at a rapid pace. Streaming has become the standard means by which users consume digital content. Digital media with greater levels of realism are encoded in high-resolution formats that demand large file sizes, and transporting this information requires a proportionally large allocation of communication resources. Visually rich virtual reality (VR) content and augmented reality (AR) content both consume large amounts of data, which is problematic for AR and VR content delivery: the limited bandwidth of the data connections between services delivering the content and clients consuming it is a major bottleneck. This is one reason why the availability of AR content has been limited. Next-generation immersive content formats, which not only provide a full 360-degree stereoscopic view from a single viewpoint but also allow a user to move within a limited area inside the content, will consume far more communication resources.
[0003] AR content may be obtained by an AR device, such as an AR head-mounted display or an AR headset, in a variety of ways. In some instances, AR content resides in local storage on the device and is selected manually by a user of the AR device. In other systems, a GPS signal triggers a notification that AR content (often associated with that location) is available to consume. Modern AR devices are contextually aware of their surroundings; this awareness is a prerequisite for accurately and precisely aligning AR content with the real-world environment. The advanced sensing abilities of AR devices enable enhanced processes for curating various forms of AR content and facilitate novel and exciting media consumption experiences.
SUMMARY
[0004] Exemplary systems and methods disclosed herein provide for automated selection of a position for display of augmented reality content. In some embodiments, a user's head-mounted display (HMD) operates to detect the position of a separate external display, such as a monitor or TV screen. The HMD further detects visual characteristics of regions adjacent to the external display. For example, regions immediately above, to the left of, and to the right of the external display may be evaluated, among other possibilities. The detected visual characteristics may include characteristics that would affect the visibility and/or legibility of augmented reality content, such as the brightness, color, or visual complexity of the respective regions. Based on the detected visual characteristics, at least one of the evaluated regions is selected for display of the augmented reality content, and the content is displayed so that it appears (e.g. as an overlay) in the selected region or regions.
[0005] In some embodiments, the user's HMD includes a forward-facing camera that it operates to capture one or more images of a scene that includes an external display. The location of the external display may be determined using, for example, object detection techniques. Evaluation of visual characteristics of regions adjacent to the external display may be performed using various different techniques. In one embodiment, the HMD operates to determine the average brightness of captured pixels within each respective region, and the region with the lowest average brightness may be selected for display of the augmented reality content. Such an embodiment may prevent augmented reality content from being displayed as an overlay on a bright window or other light source, which may render the content illegible. In another embodiment, the HMD operates to determine the average hue of captured pixels within each respective region, and any region with substantially the same hue as the content to be displayed may be discarded as a potential region for display of that content. In a further embodiment, the HMD operates to determine the visual complexity of the image of each respective region; for example, the HMD may perform a two-dimensional Fourier transform on an image of the region, and regions with greater amplitude of high-spatial-frequency components may be considered to have greater visual complexity. The HMD may select one or more regions with lower visual complexity for display of augmented reality content. In some embodiments, the HMD selects a region for display of the augmented reality content based on a variety of factors (e.g., brightness, hue, and complexity) compiled into a composite visibility rating.
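By way of a non-limiting illustration only, the following Python sketch shows one way the per-region analysis described above could be combined into a composite visibility rating. It assumes each candidate region is available as an RGB image crop and that matplotlib is available for the RGB-to-HSV conversion; the weights, the high-frequency cutoff, and the function names are assumptions introduced for the example rather than elements of the disclosed method.

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv


def region_visibility(region_rgb, content_hue, w_bright=0.4, w_hue=0.3, w_cplx=0.3):
    """Composite visibility score in [0, 1] for one candidate region (higher = better).

    region_rgb  : uint8 array of shape (H, W, 3), a crop of the captured scene.
    content_hue : dominant hue of the AR content, in [0, 1).
    """
    rgb = region_rgb.astype(np.float64) / 255.0
    hsv = rgb_to_hsv(rgb)

    # 1. Brightness: prefer darker regions so overlaid AR content stays legible.
    brightness_score = 1.0 - rgb.mean()

    # 2. Hue: penalize regions whose mean hue is close to the content hue
    #    (circular distance on the hue wheel; a proper circular mean could be used).
    mean_hue = hsv[..., 0].mean()
    d = abs(mean_hue - content_hue)
    hue_score = min(d, 1.0 - d) * 2.0          # 0 = same hue, 1 = opposite hue

    # 3. Visual complexity: amplitude of high-spatial-frequency components of a
    #    2D Fourier transform of the grayscale region; flatter regions score higher.
    gray = rgb.mean(axis=-1)
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray)))
    h, w = gray.shape
    yy, xx = np.mgrid[0:h, 0:w]
    radius = np.hypot(yy - h / 2, xx - w / 2)
    high_freq = spectrum[radius > min(h, w) / 8].sum()
    complexity_score = 1.0 / (1.0 + high_freq / spectrum.sum())

    return w_bright * brightness_score + w_hue * hue_score + w_cplx * complexity_score


# The HMD could evaluate each region adjacent to the detected display and pick the best:
# best = max(candidate_regions, key=lambda r: region_visibility(r, content_hue=0.3))
```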
[0006] In some embodiments, the augmented reality content displayed in a region adjacent to the external display is content related to a video presentation on that display.
[0007] One embodiment takes the form of a process that includes detecting, using sensors of a head-mounted display (HMD), a location and extent of a separate display device in a viewing environment. This process may be executed by an augmented reality (AR) headset (such as a head-mounted display, HMD).
This process also includes detecting, using sensors of the HMD, respective visual characteristics of a plurality of regions of the viewing environment in proximity to the display device. Additionally, this process includes selecting a region from among the plurality of regions to display AR content based on the respective visual characteristics. This process also includes displaying the AR content using the HMD such that the AR content appears in the selected region.
[0008] One embodiment takes the form of a process that includes identifying a primary media depicted on a display in a real-world environment. This may be carried out using an AR headset. The process also includes accessing available AR content associated with the identified primary media. The process also includes analyzing the real-world environment to measure visual characteristics. The process also includes generating AR content render parameters based at least in part on the measured visual characteristics and the available AR content, wherein the AR content render parameters comprise (i) a selection of the available AR content and (ii) a respective render location for each AR content in the selection. The process also includes displaying the selected AR content in accordance with the generated AR content render parameters using the AR headset.
BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS
[0009] The accompanying figures, where like reference numerals refer to identical or functionally similar elements throughout the separate views, together with the detailed description below, are incorporated in and form part of the specification, and serve to further illustrate embodiments of concepts that include the claimed invention and to explain various principles and advantages of those embodiments.
[0010] FIGs. 1A and 1B illustrate two different techniques of displaying side-stream information relating to a primary video presentation.
[0011] FIG. 2 is a flow chart of a method for AR content curation in environments with traditional displays, in accordance with at least one embodiment.
[0012] FIG. 3 is a visual outline of functional elements of exemplary systems and processes, in accordance with at least one embodiment.
[0013] FIG. 4 is an illustration of components of exemplary embodiments described herein.
[0014] FIG. 5 is a sequence diagram of a method for AR content delivery in accordance with at least one embodiment.
[0015] FIG. 6 is a sequence diagram of a method for AR content delivery in accordance with at least one embodiment.
[0016] FIG. 7 is a sequence diagram of a method for AR content delivery in accordance with at least one embodiment.
[0017] FIG. 8 is a schematic perspective view of an example real-world environment comprising a display depicting a primary media content, in accordance with at least one embodiment.
[0018] FIG. 9 is a schematic perspective view of a situation in which displayed AR content is largely illegible.
[0019] FIG. 10 is a schematic perspective view of a situation in which placement of the AR content is effected in accordance with at least one embodiment.
[0020] FIG. 11 is a schematic perspective view of a situation in which placement of two elements of AR content is effected in accordance with at least one embodiment.
[0021] FIG. 12 depicts a 3D virtual content in a first render location and render style, in accordance with at least one embodiment.
[0022] FIG. 13 depicts a 3D virtual content in a second render location and render style, in accordance with at least one embodiment.
[0023] FIG. 14 depicts a 360-degree immersive content, in accordance with at least one embodiment.
[0024] FIG. 15 depicts a render arrangement for three selected elements of AR content, in accordance with at least one embodiment.
[0025] FIG. 16 depicts an alternative render arrangement for the three selected elements of AR content, in accordance with at least one embodiment.
[0026] FIGs. 17A to 17C depict various AR scene boundaries, in accordance with at least one embodiment.
[0027] FIGs. 18A to 18C depict an AR headset at various distances to a display, in accordance with at least one embodiment.
[0028] FIG. 19 is a flow chart of a method of one embodiment for generating AR content for an environment.
[0029] Skilled artisans will appreciate that elements in the figures are illustrated for simplicity and clarity and have not necessarily been drawn to scale. For example, the dimensions of some of the elements in the figures may be exaggerated relative to other elements to help to improve understanding of embodiments of the present invention.
[0030] The apparatus and method components have been represented where appropriate by conventional symbols in the drawings, showing only those specific details that are pertinent to understanding the embodiments of the present invention so as not to obscure the disclosure with details that will be readily apparent to those of ordinary skill in the art having the benefit of the description herein.
DETAILED DESCRIPTION
[0031] Although television screen dimensions have grown, displays may still provide only limited real estate for the representation of media. There exist solutions capable of rendering content to predefined regions within a presently streaming media. However, in this case, the television (or any other display device) screen-space is allocated for the ancillary content. There exist solutions to enable the use of side-stream
content via other 2nd screen output devices (e.g., via a tablet or mobile phone). However, in this case there are no visible connectors between the different regions of the content shown in the primary display device (e.g., TV) and the additional content shown in the 2nd screen devices.
[0032] Current TV episodes may be associated with large amounts of additional data which provide information about the primary video. Curated broadcasts may select a portion of this data for inclusion in a transmission. These selected portions may be displayed as part of the video itself allowing little alteration. Examples of this additional data, which are a form of content (and may potentially be rendered via an AR device) include stock tickers for finance shows, scoreboards for sports matches, and basic channel identifier logos. The amount of data presented is limited by the available display space. The broadcaster typically selects a small set of data for display as appropriate for a general audience.
[0033] In order to address at least the above issues, there should be a system that enables the use of AR headsets (e.g., AR goggles) to present additional content in the surroundings of a primary display device. The primary display device may, for example, be a television, a large display surface surrounding the user, or a screen in a movie theatre.
[0034] In some embodiments, the additional content is meta-information. For example, textual or visual content may be rendered in the AR goggles around the edges of the primary display device. In some embodiments, visual connectors are rendered that connect the rendered textual or visual content to regions or objects in the primary media stream. 3D models may be rendered to enhance the primary content. For example, the primary content that is visible on the display may be an in-car video stream from a NASCAR competition. This primary media (e.g. a first-person race stream) may be augmented with a 3D model of the cockpit. The 3D model of the car in such an embodiment appears to surround and naturally extend the TV screen when viewed using the AR goggles. In some embodiments, the additional content is a 2D video. An additional 2D video feed may be provided along with the primary video stream. Examples include views from alternate cameras. Meta-information and 2D videos are both forms of 2D planar content.
[0035] In some embodiments, the additional content is 360-degree immersive content. In such an embodiment, the 360-degree video may be rendered to the visual field so that the primary display device is still visible in the visual field. The 360-degree video may enhance the user experience. For example, the primary media may be a football game and the AR goggles may be used to display a 360-degree video from the spectator stand. For 360-degree immersive content, an exemplary method defines a transparent region of the AR content so as to not obscure the primary display device. The 360-degree content may define a target alignment for the primary display device. The target alignment defines a location where the center of the display should be with respect to the 360-degree immersive content. By using the detected edges of the display, the process adjusts the viewing angle of the 360-degree video so that the content is shown with a properly oriented perspective.
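As an illustrative, non-limiting sketch of the viewing-angle adjustment described above, the offset between the detected display center and the content's target alignment point might be computed as a yaw/pitch correction. The vector convention (x right, y up, negative z forward) and the function name are assumptions made for this example.

```python
import numpy as np


def align_360_content(display_center_dir, target_dir_in_content):
    """Yaw/pitch offset (radians) to rotate a 360-degree video so that its
    target alignment point coincides with the detected display center.

    display_center_dir    : unit vector from the viewer to the center of the
                            detected display, in the viewer's coordinate frame.
    target_dir_in_content : unit vector, in the 360-degree content's frame,
                            where the author expects the display center to be.
    """
    def yaw_pitch(v):
        x, y, z = v
        return np.arctan2(x, -z), np.arcsin(np.clip(y, -1.0, 1.0))

    yaw_d, pitch_d = yaw_pitch(display_center_dir)
    yaw_t, pitch_t = yaw_pitch(target_dir_in_content)
    return yaw_d - yaw_t, pitch_d - pitch_t
```

The returned offsets would then be applied when mapping the 360-degree frame onto the viewer's sphere, and the region subtended by the detected display edges would be left transparent so the primary display remains visible.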
[0036] In some embodiments, a system operates to determine what kind of content may be shown in the field of view of the AR goggles. The AR goggles may offer only a narrow field of view, in which case the size of the field of view is used in the selection of the ancillary content. Also, some AR goggles are not capable of hiding visual areas in the field of view; they can merely add light to the field of view. In this case, the dark regions of the viewing environment should be used for rendering AR content, whereas the use of white regions should be avoided. In a typical implementation, dark content is preferable in the light regions of the visual field and light content in the dark regions of the visual field.
[0037] Methods described herein may be used to determine which side of the display should be used for rendering AR content. Several embodiments operate to minimize intersecting connectors between the ancillary content and the primary content. Content with virtual object connectors should be presented on the same side as the associated object shown in the primary display device, to minimize virtual object connector lengths and intersections.
[0038] In some embodiments, surfaces on top of which 3D models are to be rendered are selected based on the size, color, and contrast of the surface in view of the 3D model. Render sizes of virtual 3D models may be adjusted based on the amount of free space above a render surface. Furthermore, exemplary methods may minimize the distance between 3D content and the edges of the primary display. For example, a render surface may be selected so that the distance to the primary display is small, reducing excess eye movement.
[0039] In some embodiments, at the beginning of an AR session, the AR device reconstructs a digital model of the present physical environment. The generation of the model may employ previously collected data corresponding to the same location. In other cases, the model is made by scanning the environment with sensors built into the AR device. Combinations of previously collected data and presently collected data may be utilized to improve the accuracy of the model. A copy of the model may be sent to a server, so that the server has instantaneous access to a digital version of the present 3D environment. The server copy of the model may be updated and improved at any time by incorporating more sensor data from sensing client devices. The model is used in the execution of ancillary AR content visibility analyses. Collected environment data is also used by the client device to assist in location and orientation tracking (pose tracking). The pose (point-of-view position and orientation) of the device may be used for synchronizing the translations of the augmented elements displayed by the AR device with a user's head movements. The pose information is sent to the server, or calculated by the server using the environment model, to allow the server to estimate the visibility of elements of AR content from the user's viewpoint. From a user's point of view, real-world objects such as tables, buildings, trees, doors, and walls may occlude AR content. This happens whenever the AR content is farther away from the user than the real-world object but in the same line of sight. True AR content containing high-resolution depth information allows for an enhanced sense of realism.
[0040] In one embodiment, the AR device used for viewing the AR content is embedded with sensors capable of producing depth information from the environment. A sensor, or combination of sensors, may include RGB-D cameras, stereo cameras, infrared cameras, lidar, radar, sonar, and any other sort of sensor known by those with skill in the art of image and depth sensing. Combinations of sensor types and enhanced processing methods may be employed for depth detection.
[0041] During the reconstruction process executed on the AR device, sensors collect point cloud data from the environment as the user moves the device and sensors through the environment. Sensor observations with varying points of view are combined to form a coherent 3D reconstruction of the complete environment. Once the 3D reconstruction reaches a threshold level of completeness, the AR device sends the reconstructed model to the server along with a request for AR content. The level of completeness may be measured, for example, as the percentage of surrounding-area coverage, the number of discrete observations, the duration of sensing, or any similar quality value that may be used as a threshold. 3D reconstruction of the environment may be carried out with any known reconstruction method, such as those featured in KinectFusion or the Point Cloud Library (PCL).
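The completeness threshold described above could, for instance, be approximated by the fraction of an angular grid around the device that contains observed points. The following sketch assumes the reconstruction is available as an N x 3 point cloud in the device's coordinate frame; the bin counts, the 0.6 threshold, and the send_model_and_request_content call are illustrative assumptions, not parts of the disclosure.

```python
import numpy as np


def coverage_fraction(point_cloud, origin, az_bins=36, el_bins=18):
    """Fraction of an azimuth/elevation grid around `origin` covered by points.

    Directions to all reconstructed points are binned into the grid, and
    coverage is the share of bins that contain at least one point.
    """
    d = point_cloud - origin
    norms = np.linalg.norm(d, axis=1) + 1e-9          # guard against zero-length vectors
    az = np.arctan2(d[:, 1], d[:, 0])                 # [-pi, pi]
    el = np.arcsin(np.clip(d[:, 2] / norms, -1, 1))   # [-pi/2, pi/2]
    ai = ((az + np.pi) / (2 * np.pi) * az_bins).astype(int).clip(0, az_bins - 1)
    ei = ((el + np.pi / 2) / np.pi * el_bins).astype(int).clip(0, el_bins - 1)
    hit = np.zeros((az_bins, el_bins), dtype=bool)
    hit[ai, ei] = True
    return hit.mean()


# e.g., request AR content once the reconstruction is judged complete enough:
# if coverage_fraction(points, device_position) > 0.6:
#     send_model_and_request_content(reconstructed_model)   # hypothetical call
```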
[0042] In one variation of the process, the AR device at the beginning of an AR viewing session starts to continuously stream RGB-D data to the server. In this variation, the server performs the 3D reconstruction process using the received RGB-D data stream and stores the reconstructed environment model. As the server constructs the per client environment model, it also begins to filter available AR content by removing content not preferable given that client's environment model. As the 3D reconstruction becomes more complete, the content selection processing becomes more efficient.
[0043] After the reconstruction process, the AR headset starts a pose tracking process. The objective of the pose tracking process is to estimate where the client device is and in which direction the client device is facing relative to the previously reconstructed environment. Pose tracking may be done by using any known tracking technique and may be assisted by using the reconstructed environment model and client device sensor data.
[0044] The AR headset receives AR content from the server and displays it to the user. For displaying the content, the AR client determines device pose from the client device depth sensors. Based on the orientation information, a display process aligns a received content coordinate system with that of the user. After aligning the content, in some embodiments, a sub-process compares depth values received from the client device sensors with the depth values of the selected AR content received from the server. Areas of the content that have larger depth values than the corresponding depth values in the AR headset depth sensor data may be discarded and not be rendered, as they are occluded by real physical elements in the environment. This run-time depth comparison handles occlusion caused by dynamic elements and static elements missing from the environment model sent to the content server.
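A minimal sketch of the run-time depth comparison described above, assuming the AR content depth and the headset depth map have already been registered to the same pixel grid; the array conventions are assumptions made for the example.

```python
import numpy as np


def occlusion_mask(content_depth, sensor_depth):
    """Per-pixel mask of AR content that should actually be rendered.

    content_depth : depth of the AR content at each display pixel (metres);
                    np.inf where the content is absent.
    sensor_depth  : depth measured by the headset's depth sensor, registered to
                    the same pixel grid; np.nan where no measurement exists.
    Pixels where the content lies farther away than the real surface are
    occluded by the physical environment and are discarded, as described above.
    """
    real = np.where(np.isnan(sensor_depth), np.inf, sensor_depth)
    return content_depth <= real   # True = draw, False = occluded
```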
[0045] One embodiment takes the form of a process that includes identifying a primary media depicted on a display in a real-world environment. This may be carried out using an AR headset. The process also includes accessing available AR content associated with the identified primary media. The process also includes analyzing the real-world environment to measure visual characteristics. The process also includes generating AR content render parameters based at least in part on the measured visual characteristics and the available AR content, wherein the AR content render parameters comprise (i) a selection of the available AR content and (ii) a respective render location for each AR content in the selection. The process also includes displaying the selected AR content in accordance with the generated AR content render parameters using the AR headset.
[0046] Another embodiment takes the form of a system that includes a communication interface, a processor, and data storage containing instructions executable by the processor for causing the system to carry out at least the functions described in the preceding paragraph.
[0047] In at least one embodiment, the process further comprises tracking a position and orientation of the AR headset within the real-world environment to align the selected AR content with the real-world environment during display. This position and orientation tracking may be carried out by the AR headset itself or by an external tracking solution.
[0048] In at least one embodiment, the process further comprises detecting the display using the AR headset. The display may be a TV, a projector surface, a smartphone display, a smartwatch display, an automobile display, an appliance display, a peripheral device display, or a computer monitor. The display may utilize any type of display technology that may be accurately sensed by the available sensors. In certain embodiments, different AR content is used to enhance multiple displays simultaneously.
[0049] In at least one embodiment, the primary media is one of a TV show, a movie, a media broadcast, a media stream, an application interface, CCTV footage, a security camera feed, a process monitor feed, or an airport timetable. The primary media may be any content displayed on the detected display. In some embodiments, identifying the primary media using the AR headset comprises a user manually identifying the primary media using the AR headset as an input device. Identifying the primary media using the AR headset may comprise the AR headset receiving, from an external device, data regarding the primary media. In many embodiments, identifying the primary media using the AR headset comprises employing a microphone of the AR headset and a sonic media identification service. This may be modeled after the technologies employed by Shazam and the like.
[0050] In some embodiments, identifying the primary media using the AR headset comprises employing an image sensor of the AR headset and a visual media identification service. The visual media identification service recognizes a frame of the primary media and may identify the media by comparing it against a database of known media. The visual media identification service may recognize an object depicted in the primary media and compare the object against a database of known objects in media. When watching
television, it is often the case that broadcasters include a channel identifier overlaid somewhere over the media. The visual media identification service may detect and identify those channel identifiers depicted in the primary media. The service may then reference a database of channel guide information to look up the content playing on the identified channel at the current time at the relevant location.
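Purely as an illustration of the channel-identifier lookup described above, the sketch below maps a detected channel logo to a channel identifier and queries a programme-guide table for the current time slot. The logo_classifier object, the guide structure, and the example entry are hypothetical and introduced only for this example.

```python
from datetime import datetime

# Hypothetical programme guide keyed by (channel id, time slot).
CHANNEL_GUIDE = {
    ("sports-one", "2017-02-10T20:00"): "Premier League: Live Match",
}


def identify_primary_media(frame, logo_classifier, guide=CHANNEL_GUIDE):
    """Identify the primary media from a detected channel identifier (logo)."""
    channel = logo_classifier.predict(frame)          # e.g. "sports-one" (hypothetical classifier)
    slot = datetime.now().strftime("%Y-%m-%dT%H:00")  # round to the guide's granularity
    return guide.get((channel, slot))                 # None if the programme is unknown
```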
[0051] A variety of AR content types may be made available to the AR headset. In general, the available AR content comprises 2D planar content, 3D virtual content, and 360-degree immersive content. 2D planar content may be text, images, videos, etc. that are rendered on a 2D plane. The available AR content may comprise a score and information widget of a sports broadcast when a sports game is identified as the primary media. As another example, the available AR content may comprise a virtual director's note about a movie scene.
[0052] In some embodiments, AR content comprises 3D virtual content. This includes but is not limited to virtual models of objects associated with the primary media, such as a racecar for an F1 event or a solar system for a Neil deGrasse Tyson show. User preferences may be used to look-up which racer is the user's favorite. The system may then provide the 3D model of that racer's car. If the primary media is footage of a security camera, then the AR content may be a 3D virtual model of the secure building.
[0053] In some embodiments, the 360-degree immersive content is an image or video rendered to the entire visual field. The primary display device is kept visible in the visual field, too. The 360-degree content may enhance the user experience. For example, in the case of a football game, a TV screen may be used to display the primary video stream from the football game and the AR goggles may be used to display a 360-degree immersive video from the spectator stand.
[0054] In at least one embodiment a process further comprises identifying an object depicted within the primary media. In such embodiments, the selected AR content may be further associated with the identified object. An example of this is identifying a particular soccer player during a match. This may be accomplished using facial recognition technology or by detecting a name or number on a player's uniform. In one example, the available AR content comprises a statistics card of an identified athlete. Similarly, the available AR content may comprise a 3D model of a detected TV show prop. In some of these embodiments and in others, a virtual object connector is rendered between the identified object and the associated AR content. The virtual object connector is a visual link between the AR content and the identified object, not dissimilar to a callout. The connector may be stylized in whichever way a user prefers. A few sub-processes may be carried out when rendering virtual object connectors. In at least one embodiment, generating AR content render parameters comprises searching for potential virtual object connector intersections by analyzing the available AR content at potential locations and prioritizing AR content render parameters that minimize virtual object connector intersections. In this way, a plurality of AR content with virtual object connectors will not result in a mess of crossing virtual object connectors in the user's virtual field of view. Furthermore, certain AR devices may only render at a limited range of depths. This is referred to as an accommodation range of the device.
When the user is positioned in front of the display such that the display falls within the accommodation range of the AR device, the user may more easily focus on the rendered AR content as well as the display. When the user is positioned in front of the display such that the display falls outside the accommodation range of the AR device, the user will not readily be able to simultaneously focus on the rendered AR content as well as the display. In this latter situation, the process may be configured to disable rendering any virtual object connectors.
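The following sketch illustrates, under simplifying assumptions, the two connector-related checks discussed above: counting crossings between candidate virtual object connectors (treated as 2D line segments in a common projection) and enabling connectors only when the display lies within the device's accommodation range. The function names and the segment-intersection test are illustrative.

```python
from itertools import combinations


def segments_intersect(p1, p2, q1, q2):
    """True if line segments p1-p2 and q1-q2 cross (2D screen coordinates)."""
    def ccw(a, b, c):
        return (c[1] - a[1]) * (b[0] - a[0]) > (b[1] - a[1]) * (c[0] - a[0])
    return (ccw(p1, q1, q2) != ccw(p2, q1, q2)) and (ccw(p1, p2, q1) != ccw(p1, p2, q2))


def connector_crossings(placement):
    """Number of crossing virtual object connectors for a candidate placement.

    `placement` is a list of (content_anchor, object_anchor) point pairs in a
    common 2D projection; arrangements with fewer crossings are preferred.
    """
    return sum(segments_intersect(a1, b1, a2, b2)
               for (a1, b1), (a2, b2) in combinations(placement, 2))


def connectors_enabled(display_distance_m, accommodation_range_m):
    """Disable connectors when the display lies outside the accommodation range."""
    near, far = accommodation_range_m
    return near <= display_distance_m <= far
```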
[0055] Accessing available AR content may comprise requesting and receiving AR content, or simply being pushed AR content at the discretion of an external system. In at least one embodiment, accessing available AR content associated with the primary media comprises accessing available AR content from a content provider of the primary media. In some cases, authentication of user credentials is required to access the available AR content; this is common because digital media services are often subscription-based. In such embodiments and others, accessing available AR content associated with the primary media comprises accessing available AR content from a content server.
[0056] In alternative examples, available AR content associated with the primary media is retrieved from a local content storage. This may occur if some AR content has been buffered by the AR headset or if a user has requested to locally store a frequently used virtual 3D model that would typically take a long time to download.
[0057] An exemplary process described herein comprises analyzing the real-world environment to measure visual characteristics. In some embodiments, this includes hardware components such as sensors as well as software components such as object classifiers working together. In different embodiments, the form of analysis executed and the visual characteristics measured vary. This is a direct result of various optimizations that may be leveraged, based on detectable differences in use case scenarios. The analysis of the real-world environment may be carried out by the AR headset, an external sensor, an external computing device, or a combination thereof. For example, the analysis may not search for surfaces suitable for rendering virtual 3D content if the available AR content does not include any virtual 3D content types. Likewise, if a boundary of an AR scene is located within the borders of the display and the only available content is a 360-degree immersive video, then all other analysis is irrelevant, as there exists no available region for rendering the 360-degree immersive video (e.g., no available region that would not occlude the display). In some embodiments, rendered AR content does not occlude the display; in other embodiments, rendered AR content does occlude the display.
[0058] In at least one embodiment, analyzing the real-world environment to measure visual characteristics comprises analyzing the real-world environment to measure a distance between the AR headset and the display. The distance measurement is used to render 2D planar content in the same viewing plane as the display and for other purposes. The measurement may be carried out using a depth sensor of the AR headset or a stereo imaging system. The camera sensors in the AR goggles capture data from the
environment. In an exemplary embodiment, this includes capturing RGB-D data from an RGB-D sensor. The captured data is used by a distance detector module that uses the captured data to detect the distance between the AR headset and the primary display device.
[0059] In at least one embodiment, analyzing the real-world environment to measure visual characteristics comprises analyzing the real-world environment to measure the edges of the display. Edge detectors such as Sobel filters may be used to assist in identifying the edges of the display. In some configurations, the edges of the display comprise the edges of the display device (e.g., the entire TV frame not just the LCD panel). In an exemplary embodiment, analyzing the real-world environment to measure visual characteristics includes capturing RGB-D data from an RGB-D sensor. The captured data is used by the edge detector module to detect the edges of the display in the AR scene.
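As one possible realization of the edge detection discussed above (using Canny edges rather than Sobel filters, and assuming OpenCV 4), the sketch below searches the captured image for the largest four-cornered contour and treats it as the display frame. The thresholds and the assumption that the display is the largest quadrilateral in view are simplifications made for the example.

```python
import cv2
import numpy as np


def detect_display_quad(image_bgr):
    """Return the four corner points of the largest quadrilateral contour, or None."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    best, best_area = None, 0.0
    for c in contours:
        approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
        area = cv2.contourArea(approx)
        if len(approx) == 4 and area > best_area:
            best, best_area = approx.reshape(4, 2), area
    return best
```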
[0060] In at least one embodiment, analyzing the real-world environment to measure visual characteristics comprises analyzing the real-world environment to measure a position and orientation of the display. The position and orientation information help to align the rendered AR content. In at least one embodiment, timestamps for the primary media and the rendered AR content are used to temporally synchronize the primary media and the rendered AR content.
[0061] In at least one embodiment, analyzing the real-world environment to measure visual characteristics comprises analyzing the real-world environment to measure lighting conditions around the display. In an exemplary embodiment, this includes capturing RGB-D data from an RGB-D sensor. Captured data is used by the lighting parameters detection module to detect the lighting conditions in different regions of the AR scene. Bright regions may be caused by a lighter wall color or a window. Dark regions may be the result of shadows or dark colors. In some embodiments, the analysis determines which regions are bright and dark with respect to the real-world environment. In some embodiments, the analysis determines which regions are bright and dark with respect to potential AR content. In at least one embodiment, analyzing the real-world environment to measure visual characteristics comprises analyzing the real-world environment to measure lighting conditions between the display and the AR headset.
[0062] In at least one embodiment, analyzing the real-world environment to measure visual characteristics comprises analyzing the real-world environment to measure an AR scene boundary. The boundary of the AR scene shown in the AR headset is based on the field of view of the AR headset and the edges of the primary display device in the visual field. The field of view of AR goggles, which may be relatively narrow, sets limitations for the size of the AR scene. In addition, the primary display device takes space from the AR scene, and the distance between the user and the primary display device may also affect the bounds of the AR scene.
[0063] The user may be positioned a short distance from the primary display device. In this case the primary display device may fill the whole visual field, leaving no space for AR content. The user may be positioned a medium distance from the primary display device. In this case the primary display device fills a large portion of the visual field, and the AR scene may be located on the left side, right side, top side, or bottom side of the primary display device. The user may be positioned a long distance from the primary display device. In this case the primary display device will be located inside the AR scene.
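The three distance regimes above can be distinguished, for example, by comparing the angle the display subtends with the headset's field of view. The thresholds in this sketch are illustrative assumptions, not values taken from the disclosure.

```python
import math


def display_angular_width(display_width_m, distance_m):
    """Angle (degrees) the display subtends horizontally at the viewer's position."""
    return math.degrees(2.0 * math.atan((display_width_m / 2.0) / distance_m))


def ar_scene_layout(display_width_m, distance_m, hmd_fov_deg):
    """Rough layout regime for the AR scene, following the three cases above."""
    subtended = display_angular_width(display_width_m, distance_m)
    if subtended >= hmd_fov_deg:
        return "no-ar-space"          # short distance: display fills the visual field
    if subtended >= 0.5 * hmd_fov_deg:
        return "around-edges"         # medium distance: AR scene beside the display
    return "display-inside-scene"     # long distance: display sits inside the AR scene
```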
[0064] In at least one embodiment, analyzing the real-world environment to measure visual characteristics comprises analyzing the real-world environment to measure surfaces for rendering 3D virtual content. The captured data is used by a 3D reconstruction module that builds a 3D model of the surroundings of the primary display device and then detects suitable (e.g., solid) surfaces to be used as rendering surfaces for 3D virtual content.
[0065] In at least one embodiment, analyzing the real-world environment to measure visual characteristics comprises analyzing the real-world environment to measure color of the real-world environment.
[0066] In at least one embodiment, analyzing the real-world environment to measure visual characteristics comprises analyzing the real-world environment to measure depth of the real-world environment.
[0067] In at least one embodiment, analyzing the real-world environment to measure visual characteristics comprises analyzing the real-world environment to measure contrast of the real-world environment.
[0068] In at least one embodiment, analyzing the real-world environment to measure visual characteristics comprises analyzing the real-world environment to measure texture of the real-world environment.
[0069] In at least one embodiment, analyzing the real-world environment to measure visual characteristics comprises analyzing the real-world environment to measure a visual complexity of the real-world environment.
[0070] In at least one embodiment, analyzing the real-world environment to measure visual characteristics comprises analyzing the real-world environment to generate a 3D reconstruction of the real-world environment.
[0071] In some embodiments, the selection of the available AR content includes one item of AR content. In some embodiments, the selection of the available AR content includes a plurality of items of AR content. In some exemplary embodiments, the respective render location for each item of AR content in the selection comprises a 3D position and orientation.
[0072] The render parameters may be generated in a variety of sequences. Regions and locations may be identified first, then ranked based on preference, and then available AR content may be compared to the ranked list of regions. Alternatively, preferred content may be selected first. The selection may be based on user preferences, default settings, content provider preferences, or the like. In at least one embodiment, generating AR content render parameters comprises first generating the selection of the available AR content, and second generating the respective render location for each AR content in the selection. In at least one embodiment, generating AR content render parameters comprises first generating the respective render location for each AR content in the selection, and second generating the selection of the available AR content. In at least one embodiment, generating AR content render parameters comprises simultaneously generating the respective render location for each AR content in the selection of the available AR content.
[0073] In at least one embodiment, generating AR content render parameters comprises comparing each AR content in the selection with colors around the display to avoid render locations with poor contrast. One example is not rendering a Christmas tree over a green wall. In at least one embodiment, generating AR content render parameters comprises comparing each AR content in the selection with lighting conditions around the display to avoid render locations with poor contrast. One example is not rendering black text over a dark wall. In at least one embodiment, generating AR content render parameters comprises comparing each AR content in the selection with a visual complexity around the display to avoid visually complex render locations. In at least one embodiment, generating AR content render parameters comprises comparing each AR content in the selection with textures around the display to avoid render locations with poor textures (e.g., stone or brick walls and curtains).
[0074] In at least one embodiment, generating AR content render parameters comprises recognizing objects around the display to avoid render locations with existing objects. In one example a wall clock is identified and the AR content render location is defined so as to avoid the position of the clock.
[0075] In an exemplary embodiment, generating AR content render parameters comprises prioritizing render locations for 2D planar content that are in the same viewing plane as the display. The AR content may border the edge of the display to create a seamless extension of the display. In at least one embodiment, generating AR content render parameters comprises prioritizing render locations for AR content that have shorter distances to the display. In at least one embodiment, generating AR content render parameters comprises prioritizing render locations for 2D planar content that are one of a left edge, a right edge, a top edge, and a bottom edge of the display. In at least one embodiment, generating AR content render parameters comprises prioritizing a render location for a 3D virtual content that is a surface between the display and the AR headset. In at least one embodiment, generating AR content render parameters comprises prioritizing render locations for 3D virtual content that have shorter distances to the display.
[0076] In at least one embodiment, generating AR content render parameters comprises prioritizing render locations for 3D virtual content that have a larger amount of free space above the render location. In such embodiments, the size of the 3D virtual content is selected to fill up the free space.
[0077] In at least one embodiment, generating AR content render parameters comprises generating a modified viewing angle for a 360-degree immersive content. In at least one embodiment, generating AR content render parameters comprises selecting a portion of a 360-degree immersive content to be transparent so that the display remains visible to a user of the AR headset.
[0078] In at least one embodiment, generating AR content render parameters comprises (i) identifying regions having consistent depth, brightness, color, and texture, (ii) determining a respective AR content display ability score for each identified region based on the measured visual characteristics in that region, and (iii) prioritizing regions with higher scores.
[0079] In at least one embodiment, generating AR content render parameters comprises (i) virtually testing the available AR content in a plurality of potential render locations, (ii) generating AR content - location compatibility scores, and (iii) generating the AR content render parameters based on the AR content - location compatibility scores.
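One simple, non-limiting way to realize steps (i)-(iii) above is a greedy assignment over content-location compatibility scores, as sketched below. The score_fn callback, the minimum-score cutoff, and the greedy strategy are assumptions made for this example; the disclosure does not prescribe a particular assignment algorithm.

```python
def generate_render_parameters(available_content, candidate_locations, score_fn, min_score=0.5):
    """Greedy curation of AR content by virtually testing content at locations.

    score_fn(content, location) returns a content-location compatibility score
    (e.g., derived from the visibility analysis). Each content item and each
    location is used at most once, and poorly scoring pairs are dropped.
    """
    pairs = sorted(((score_fn(c, loc), c, loc)
                    for c in available_content
                    for loc in candidate_locations),
                   key=lambda t: t[0], reverse=True)
    render_params, used_content, used_locations = [], set(), set()
    for score, content, location in pairs:
        if score < min_score:
            break
        if id(content) in used_content or id(location) in used_locations:
            continue            # id() used so arbitrary objects need not be hashable
        render_params.append({"content": content, "location": location, "score": score})
        used_content.add(id(content))
        used_locations.add(id(location))
    return render_params
```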
[0080] In some embodiments, AR content render parameters further include a respective render style for each AR content in the selection. Embodiments in which AR content render parameters further include a respective render style for each AR content in the selection, may be referred to as "render style embodiments".
[0081] A render style indicates display properties (other than location) for a given AR content. The render style includes at least one of a resolution, size, color, hue, saturation, brightness, contrast, sharpness, transparency, inversion, rotation, and virtual object connector status for each AR content in the selection. In at least one embodiment, the virtual object connector is enabled when the selected AR content is rendered at substantially the same depth as the display and the virtual object connector is disabled when the selected AR content is not rendered at substantially the same depth as the display.
[0082] The render style assists in aligning a visual quality of the AR content rendered on the AR headset with that of the primary media on the display. This process may include tone mapping to compensate for the brightness / darkness of the background and the limitations of the AR goggles (e.g., transparency and color bleeding from the background, surface reflectance, and contrast ratio). The white balance may be matched as well. In at least one embodiment, generating AR content render parameters comprises prioritizing a render style for each AR content that is visually consistent with the primary media.
[0083] In at least one render style embodiment, generating AR content render parameters comprises simultaneously generating the selection of the available AR content, generating the respective render location for each AR content in the selection, and generating the respective render style for each AR content in the selection.
[0084] In at least one render style embodiment, generating AR content render parameters comprises (i) virtually testing the available AR content in a plurality of potential render locations with a plurality of potential render styles, (ii) generating AR content - location - style compatibility scores, and (iii) generating the AR content render parameters based on the AR content - location - style compatibility scores.
[0085] In at least one render style embodiment, generating AR content render parameters is further based on AR headset device properties. One such embodiment comprises generating a render style for each selected AR content at least in part by (i) enabling a virtual object connector when the display is within a
known accommodation range of the AR headset, and (ii) disabling the virtual object connector when the display is outside of the known accommodation range of the AR headset. Another such embodiment comprises generating a render style for each selected AR content at least in part by (i) comparing the measured visual characteristics with the available AR content considering known AR headset image sensor qualities and known AR headset display qualities, and (ii) modifying each render style to match the visual characteristics of the primary media.
[0086] In at least one embodiment, generating AR content render parameters is further based on AR headset device properties. The headset device properties may include a maximum display resolution, an accommodation range, a color output bias, etc. In at least one embodiment, generating AR content render parameters is further based on user preferences. The user preferences may be associated with a user profile. Said preferences may indicate a user's preferred render location, AR content, AR content type and the like. In at least one embodiment, generating AR content render parameters is further based on content provider preferences. A content provider may prioritize its available AR content.
[0087] In at least one embodiment, generating AR content render parameters is further based on AR content target parameters (e.g., for the size of the content presentation and for the height at which the content should be presented). The AR content target parameters are provided by a content provider of the AR content and may be embedded within the available AR content and/or transmitted along with the available AR content. In various such embodiments, generating AR content render parameters comprises (i) selecting initial AR content render parameters, (ii) calculating an error between the initial AR content render parameters and the content target parameters, and (iii) adjusting the AR content render parameters and recalculating the error until the error is below an error threshold. Calculating the error between the initial AR content render parameters and the AR content target parameters comprises employing a weighted difference. Weight factors for the target parameters may be embedded within the available AR content and/or transmitted along with the available AR content. Thus, AR content render parameters may be iteratively computed by using the target parameters and their weight factors to calculate error values for the render parameters. The iterative adjustment of the render parameters may be continued until a desired error level is achieved and the render parameters are good enough to be used in the curation of AR content.
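The weighted-error iteration described above might look like the following sketch, where render parameters and target parameters are represented as dictionaries of numeric values (e.g., presentation size and height). The proportional update rule, step size, and threshold are illustrative assumptions rather than requirements of the disclosure.

```python
def weighted_error(render_params, target_params, weights):
    """Weighted difference between current render parameters and the content
    provider's target parameters (e.g., presentation size and height)."""
    return sum(weights[k] * abs(render_params[k] - target_params[k]) for k in target_params)


def fit_to_targets(initial_params, target_params, weights,
                   error_threshold=0.05, step=0.25, max_iters=100):
    """Iteratively adjust each render parameter toward its target until the
    weighted error falls below the threshold, as described above."""
    params = dict(initial_params)
    for _ in range(max_iters):
        if weighted_error(params, target_params, weights) < error_threshold:
            break
        for k in target_params:
            params[k] += step * (target_params[k] - params[k])   # simple proportional update
    return params
```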
[0088] One embodiment takes the form of a process that includes identifying a primary media depicted on a display in a real-world environment using an augmented reality (AR) headset, accessing available AR content in response to identifying the primary media, analyzing the real-world environment to measure visual characteristics, generating AR content render parameters based at least in part on the measured visual characteristics and the available AR content, wherein the AR content render parameters comprise (i) a selection of the available AR content, and (ii) a respective render location for each AR content in the selection, and displaying the selected AR content in accordance with the generated AR content render parameters using the AR headset. In such a process, the available AR content is necessarily associated with the primary
media. The focus of such an embodiment is the manner in which AR content and corresponding render locations are selected.
[0089] One embodiment takes the form of a process that includes accessing available augmented reality (AR) content in a real-world environment using an AR headset, analyzing the real-world environment to measure visual characteristics, generating AR content render parameters based at least in part on the measured visual characteristics and the available AR content, wherein the AR content render parameters comprise (i) a selection of the available AR content, and (ii) a respective render location for each AR content in the selection, and displaying the selected AR content in accordance with the generated AR content render parameters using the AR headset.
[0090] Various embodiments take the form of a system comprising a set of sensors configured to detect characteristics of an AR scene. Detecting characteristics of the AR scene comprises detecting a 3D position and orientation of a real-world display device depicting a primary media, and detecting a set of potential locations for rendering an AR content. The system includes a processor and non-transitory data storage containing instructions that are executable by the processor to generate render parameters based on the detected characteristics, wherein the render parameters indicate at least a selected location from the set of potential locations. The system further includes an augmented reality display configured to render the AR content in accordance with the render parameters at the selected location. This system, and the variations disclosed herein, may be embodied as an AR headset.
[0091] At least one embodiment is a system comprising an AR viewing apparatus and a sensor array. The system is configured to identify a primary media depicted on a display in a real-world environment using the sensor array, access available AR content associated with the identified primary media, and analyze the real-world environment to measure visual characteristics. The system is further configured to generate AR content render parameters based at least in part on the measured visual characteristics and the available AR content, wherein the AR content render parameters comprise a selection of the available AR content, and a respective render location for each AR content in the selection. The system is further configured to display the selected AR content in accordance with the generated AR content render parameters using the AR viewing apparatus.
[0092] At least one embodiment takes the form of a process comprising (i) detecting, using sensors of a head-mounted display (HMD), information regarding the location of a display device in a real-world environment, (ii) detecting, using the sensors of the HMD, information regarding visual characteristics of a plurality of portions of the real-world environment, (iii) analyzing each available AR content at each of the plurality of portions in view of the information regarding the visual characteristics of the plurality of portions to determine a visibility rating for each available AR content at each of the plurality of portions, and (iv) rendering, using the HMD, a subset of the available AR content at locations within the plurality of portions, wherein the subset and locations are selected based on the visibility ratings. In at least one such embodiment
each portion in the plurality of portions is located near (e.g., adjacent to) the display device. The information regarding the visual characteristics of the plurality of portions of the real-world environment comprises at least one of a color, complexity, texture, brightness, or distance.
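The visibility-rating step above could, purely as an illustrative sketch, be expressed as follows; the rating formula, coefficients, and greedy assignment strategy are assumptions rather than a prescribed implementation.

```python
# Hypothetical sketch: rate each AR content item at each portion of the environment,
# then greedily assign the highest-rated, non-conflicting placements.
def visibility_rating(content, portion):
    contrast = abs(content["luminance"] - portion["luminance"])
    return contrast - 0.5 * portion["complexity"] - 0.1 * portion["distance_m"]

def place_content(available_content, portions, min_rating=0.2):
    ratings = sorted(
        ((visibility_rating(c, p), c["id"], p["id"])
         for c in available_content for p in portions),
        reverse=True,
    )
    placements, used_content, used_portions = [], set(), set()
    for rating, cid, pid in ratings:
        if rating < min_rating:
            break
        if cid in used_content or pid in used_portions:
            continue  # one content item per portion, one portion per content item
        placements.append({"content": cid, "portion": pid, "rating": rating})
        used_content.add(cid)
        used_portions.add(pid)
    return placements
```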
[0093] The rendered AR content may or may not be associated with content detected on the display device; in either case, the content depicted on the display device is detected using the sensors of the HMD.
[0094] Another embodiment of the disclosed subject matter takes the form of a process. The process may be carried out by an AR headset or any other AR content viewing device. The process includes generating a digitally reconstructed 3D environment of a real-world AR viewing location using an AR client. The process also includes detecting a display in the digitally reconstructed 3D environment. The process also includes identifying a media depicted on the display. The process also includes sending information indicating the identified media to an AR content server. The process also includes receiving, at the AR client, an AR content stream from the AR content server, wherein the received AR content stream has been curated by the AR content server in response to the AR content server receiving the information indicating the identified media.
[0095] Moreover, any of the embodiments, variations, and permutations described in the ensuing paragraphs and anywhere else in this disclosure may be implemented with respect to any embodiments, including with respect to any method embodiments and with respect to any system embodiments.
[0096] An exemplary method disclosed herein enables enhancement of the primary media viewing experience via AR goggles that display side-stream content (e.g., AR content) for the user. The content may be curated by the AR goggles, the AR content server, another computing device, or a combination thereof. By using embodiments disclosed herein, more immersive and enhanced representations of the primary content/media stream may be provided for the user to enjoy. Exemplary methods automate the curation of side-stream content. This will allow for the generation of richer experiences in a vast array of environments and use cases. Potential environments may vary from extravagant digital city environments with huge numbers of embedded display devices to modest living rooms with a few display devices to environments designed solely to support AR experiences. Use cases may vary from simple entertainment to productivity support to enhanced telepresence.
[0097] Before proceeding with this detailed description, it is noted that the entities, connections, arrangements, and the like that are depicted in— and described in connection with— the various figures are presented by way of example and not by way of limitation. As such, any and all statements or other indications as to what a particular figure "depicts," what a particular element or entity in a particular figure "is" or "has," and any and all similar statements— that may in isolation and out of context be read as absolute and therefore limiting— can only properly be read as being constructively preceded by a clause such as "In at least one embodiment, ...." And it is for reasons akin to brevity and clarity of presentation that this implied leading clause is not repeated ad nauseam in this detailed description.
[0098] FIG. 1A illustrates a traditional video display 100 on which side-stream content 106, namely the score of a sporting event, is displayed as part of the primary media video stream 104, e.g., on a physical video screen 102. FIG. 1B illustrates a display 150 as seen through an AR headset (or other AR display device). In the example of FIG. 1B, the video of the primary media 154 does not include side-stream content. Instead, side-stream content (in this example, the current score) is displayed as an augmented reality content (or overlay) 156 in a region adjacent to the video display 152. In this way, the complete screen area of the video display 152 may be dedicated to display of the sporting event itself. In exemplary illustrations, the display 152 is depicted as a TV, but this is not meant to be limiting in any way. The display 152 may be any style of display for disseminating visual media, such as a monitor or a smartphone screen. Other examples of side-stream content 106 include virtual news broadcast elements, floating advertisements, the NFL 3D robot, and ubiquitous channel identification logos. In most cases, the content is associated with the primary media.
[0099] FIG. 2 is a flow chart of a method for AR content curation in environments with traditional displays, in accordance with at least one embodiment. In particular, FIG. 2 depicts a process 200 having elements 202, 204, 206, 208, and 210. At least one embodiment of the present disclosure takes the form of the process 200. The process 200 may be carried out by an AR headset or any of the other suitable systems disclosed herein. The process 200 includes identifying a primary media depicted on a display in a real-world environment at element 202. The process 200 also includes accessing available AR content associated with the identified primary media at element 204. The process 200 also includes analyzing the real-world environment to measure visual characteristics at element 206. The process 200 also includes generating AR content render parameters based at least in part on the measured visual characteristics and the available AR content, wherein the AR content render parameters comprise (i) a selection of the available AR content and (ii) a respective render location for each AR content in the selection at element 208. The process 200 also includes displaying the selected AR content in accordance with the generated AR content render parameters using the AR headset at element 210.
[0100] FIG. 3 is a visual outline of functional elements of an exemplary system and process, in accordance with at least one embodiment. For one embodiment, the method includes the elements of measuring visible characteristics of an AR scene 302, generating render parameters 304, and outputting ancillary AR content 306 (which may be via AR goggles).
[0101] Detection of the visible characteristics of an AR scene 302 may comprise detection of the distance to the primary display device, detection of the edges of the primary display device, detection of the bounds of the AR scene, detection of the lighting parameters, and detection of surfaces.
[0102] Generation of AR content render parameters based on the measured characteristics 304 may comprise selection of AR content from the available AR content, selection of positions for the AR content, selection of render style for the AR content, smart-zooming for content (so as to disable virtual object
connectors when render planes are too far apart), and adapting an accommodation distance (e.g. continually adjusting a render plane of the AR headset) to align with a render plane of the primary display.
[0103] Output of AR content per the render parameters 306 may comprise output for side-stream content, output for 2D planar content, output for 3D virtual content, and output for 360-degree immersive content.
[0104] The above listed steps are discussed in more detail throughout this disclosure.
[0105] FIG. 4 is an illustration of components of an exemplary embodiment 400. In FIG. 4, there are AR content services 402 and an AR headset 404. The AR content services 402 comprise a display device identification service 406, a content identification service 408, an object detection service 410, a content access service 412, and a rendering parameters generation service 414. The AR headset 404 may comprise various hardware elements such as sensors, displays, processors, data storage, and communication interfaces. The AR headset 404 comprises software 416 including an application for TV (or any other disclosed primary content) and head-mounted display interaction 418 as well as an output manager 420. The output manager 420 may comprise an analysis module 422 and an output module 424. In at least one embodiment, the output manager 420 performs the processes disclosed herein by interfacing with the AR content services 402. In at least one embodiment, the AR headset 404 performs the processes disclosed herein by interfacing with the AR content services 402.
[0106] The Display Device Identification Service 406 is a service that identifies a display device in a physical space. The identification may be based on the RGB-D data that is captured from an RGB-D sensor in the AR headset.
[0107] The Content Identification Service 408 is a service that identifies the content that is presented in the identified display device.
[0108] The Object Detection Service 410 is a service that detects the objects that are presented in the primary content stream and produces information about the coordinates of the detected objects in the primary content stream.
[0109] The Content Access Service 412 is a service that accesses suitable content for augmenting the primary content stream.
[0110] The Rendering Parameters Generation Service 414 is a service that produces render parameters including a selection of AR content and a selection of corresponding render locations.
[0111] The output manager 420 includes Analysis and Output modules that: a) use the information produced in the AR content services, b) perform analysis for the AR scene, and c) output AR content. FIG. 4 shows one embodiment of a structure for a system disclosed herein. The locations of the modules used in the system may vary; the modules of the AR content services may be deployed to the AR headset, and the modules of the output manager may be deployed to one or more remote servers.
[0112] FIG. 5 is a sequence diagram 500 of a method for AR content delivery in pre-captured environments, in accordance with at least one embodiment. For use as a further resource, the following paragraphs will contain an example traversal of the sequence diagram of FIG. 5.
[0113] In some embodiments, a user's AR headset operates to reconstruct 510 the present AR viewing location. The user 502 starts 512 an AR application and scans 514 the present environment with an AR headset (or AR viewing client) 504. Scanning and reconstructing the environment 514 may be used for tracking purposes as well. In FIG. 5, if the digital reconstruction is complete, the user 502 is notified 516.
[0114] A user 502 may initiate an AR content streaming (viewing) session 518. The displayed content is identified, and the content access service produces a link for AR content and provides it to the AR client. Upon reception of the link 520, the AR client 504 begins pose tracking 522, although this step may be initialized at an earlier time. The AR client 504 sends the content request 526, the digitally reconstructed 3D environment, and the current AR client pose to an AR content server 506. For one embodiment, the AR content server 506 may optimize content 524. The AR content server 506 fetches 528 and receives 530 the requested AR content from an AR content storage 508.
[0115] Content streaming and viewing 532 may commence. In one embodiment, the AR content server 506 optimizes 524 the requested AR content by removing content that is not preferable for the present viewing conditions and viewing hardware. For one embodiment, an optimized content stream 534 is sent from the AR content server 506 to the AR viewing client 504. Display content 536 may be displayed to the user 502 by the AR viewing client 504.
[0116] At least one embodiment takes the form of a process that includes digitally reconstructing an augmented reality (AR) viewing location using an AR device. The process also includes sending information describing the digitally reconstructed AR viewing location from the AR device 504 to an AR content server 506. The process also includes sending a request for AR content 526 from the AR device to the AR content server. The process also includes sending information describing a position and orientation of the AR device within the AR viewing location from the AR device to the AR content server. The process also includes determining, at the AR content server 506, a visibility of a requested AR content based on the received information describing the digitally reconstructed AR viewing location and the received information describing the position and orientation of the AR device. The process also includes modifying, at the AR content server 506, the requested AR content to mimic the visual character of the primary content on the display. The process also includes sending the modified AR content from the AR content server 506 to the AR device 504. The process also includes augmenting the AR viewing location with the modified AR content using the AR device 504.
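As one non-limiting sketch of the server-side modification step (mimicking the visual character of the primary content), selected attributes of the AR content could be blended toward attributes measured for the primary media; the attribute names and the blend factor are assumptions of the sketch.

```python
# Hypothetical sketch: pull selected visual attributes of the AR content toward the
# measured statistics of the primary content shown on the display.
def mimic_visual_character(ar_attributes, primary_stats, blend=0.7):
    adjusted = dict(ar_attributes)
    for key in ("brightness", "color_temperature_k", "saturation"):
        adjusted[key] = (1 - blend) * ar_attributes[key] + blend * primary_stats[key]
    return adjusted

side_stream = {"brightness": 0.9, "color_temperature_k": 6500, "saturation": 0.8}
measured_tv = {"brightness": 0.6, "color_temperature_k": 5200, "saturation": 0.6}
print(mimic_visual_character(side_stream, measured_tv))
```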
[0117] FIG. 6 is a sequence diagram 600 of a method for AR content delivery in accordance with at least one embodiment. To enable the exploitation of AR content in a viewing environment, an AR headset 604 may assume the role of a master device. A master device managing an AR experience may continuously
execute the process illustrated in FIG. 6. An AR headset 604 may send a Side-Stream Request 614 to a Side-stream Content Service (also referred to herein as the AR Content Service) 606, which may be on a remote server 602. Such a request 614 is the first sequence element depicted in FIG. 6. The request 614 may provide RGB-D data that the camera sensors of the AR headset 604 have captured from the environment. The modules of the Side-stream Content Service 606 use the RGB-D data and (i) identify the primary display device and the media that is shown in the primary display device, (ii) select the side-streams (associated AR content) for the identified media, and (iii) return a Side-Stream Response 616 that defines the selected side-streams and the objects and coordinates for the objects that are shown in the primary display device. For one embodiment, an output manager module 608 receives a side-stream response 616 and sends an AR scene request 618 to an AR scene analysis module 610. An AR scene analysis module 610 returns an AR scene response 620 that may contain AR scene characteristics. An output manager module 608 sends a representation parameters request 622 to a side-stream content service 606, and a representation parameters response 624 containing representation parameters may be received by the output manager module 608. The output manager module 608 may send side-stream output (which may include selected side-streams, detected objects, and representation parameters) 626 to a side-stream content output module 612.
[0118] FIG. 7 is a sequence diagram 700 of a method for AR content delivery in pre-captured environments, in accordance with at least one embodiment. The AR headset (or AR display) 708 sends AR device capability information 710 to an AR Content Manager 706. This information 710 may include a maximum brightness of the display, an accommodation position and range, and a maximum resolution. Environment Sensors 704 may transmit an environment scan (RGB-D data and audio data) 712 to the AR Content Manager 706. The AR Content Manager 706 identifies 714 the displayed media from the RGB-D data and the audio data. Responsive to this identification, the AR Content Manager 706 sends a request for AR content 716 to an AR Content Server 702. The AR Content Server 702 sends program-specific AR content 718 back to the AR Content Manager 706. Program-specific AR content 718 may be AR content that is associated with the specific TV program that has been identified by the AR Content Manager 706. The AR Content Manager 706 may select a subset of the program-specific AR content and respective render locations 720. The AR Content Manager 706 sends the selected subset of AR content 722 to the AR display 708 for rendering at the selected locations.
[0119] FIG. 8 is a depiction of an example real-world environment 800 comprising a display 802 depicting a primary media content, in accordance with at least one embodiment. The real-world environment 800 is the inside of a room. The room is a user's viewing location of choice for TV supplemented with AR content. The room includes a TV display 802 depicting a soccer match, two blocks 804, 806 on the floor, and a window 808. Behind the left side of the display is a brick wall 810 and behind the right side of the display
is a wall clock 812 mounted near the ceiling. FIG. 8 is a reference image for use with the subsequent descriptions of FIGs. 9-14.
[0120] FIG. 9 depicts a first placement 900 of an AR content, in accordance with at least one embodiment. In FIG. 9 a 2D planar content 902 consisting of a soccer match score is selected for rendering. FIG. 9 is an example of a poor location for rendering 2D planar content 902. The score tile 902 is rendered in front of a brick wall 904. This makes it difficult for the user to read the score 902.
[0121] FIG. 10 depicts a second placement 1000 of the AR content, in accordance with at least one embodiment. In FIG. 10, the score tile 1002 from FIG. 9 is rendered to the right of the display 1004. The wall to the right of the display 1004 is smooth, and therefore it is easy for the user to see the score 1002. Lighting and color conditions on the wall to the right of the display 1004 may be preferable as well. For example, the AR content 1002 may be dark in color, and the area above the display 1004 is dark while the area to the right of the display 1004 is light. In other cases, the process described herein may detect the clock 1006 on the back wall and therefore deprioritize that region. At some hours of the day, sunlight may shine through the window 1008 and brighten the currently selected render location 1002. In some cases, this may affect the measured visual characteristics enough to result in newly generated AR content render parameters (a new render location may be selected if that location becomes too bright).
[0122] FIG. 11 depicts two elements of AR content 1102, 1104, in accordance with at least one embodiment 1100. In particular, FIG. 11 includes the AR score tile 1102 of FIG. 10 and an AR time tile 1104 that is also associated with the identified soccer match. In various embodiments, multiple elements of AR content are rendered and each element of content is associated with the presently displayed media 1106. In other embodiments, multiple elements of AR content are rendered and each element of content is not necessarily associated with the presently displayed media 1106.
[0123] FIG. 12 depicts 3D virtual content in a less preferred render location and render style, in accordance with at least one embodiment 1200. FIG. 12 depicts a 3D model 1202 of the soccer ball 1204 identified in the primary media 1206. A virtual object connector 1210 visually links the 3D model 1202 with the on-display analogue 1204. In FIG. 12, the process includes determining a surface for rendering an element of 3D virtual content. The top of the left block 1208 was selected as a render surface for the 3D model 1202.
[0124] FIG. 13 depicts an element of 3D virtual content in a more preferred render location and render style, in accordance with at least one embodiment 1300. FIG. 13 also depicts the 3D model 1302 of the soccer ball 1304 identified in the primary media 1306. Again, the virtual object connector 1308 visually links the 3D model 1302 with the on-display analogue 1304. However, in FIG. 13, a surface is selected that minimizes the distance between the rendered model 1302 and the primary display 1312: the surface of the right block 1310. This decision mitigates excess eye movements. In some embodiments, a surface may be selected to provide a more desirable contrast or color combination with the 3D soccer ball. In some embodiments, the contrast, color, and orientation of a detected surface provide input to an AR content selection process.
[0125] FIG. 14 depicts a 360-degree immersive content, in accordance with at least one embodiment 1400. FIG. 14 depicts a 360-degree video of outer space. In FIG. 14, the display 1402 from FIG. 8 is still viewable while the other portions of the user's field of view comprise the 360-degree immersive content. The rendered 360-degree immersive content is aligned with the real-world coordinate system so that the display is collocated with a transparent portion of the AR content.
[0126] FIG. 15 depicts a render arrangement 1500 for three selected elements of AR content. In accordance with at least one embodiment in FIG. 15, various virtual object connectors link rendered AR content elements A (1502), B (1504), and C (1506) with respective identified objects in the primary media 1508. The virtual object connectors are rendered across the front of the display 1510. The connectors 1512, 1514, 1516 cross paths and create a cluttered, visually unpleasant experience.
[0127] FIG. 16 depicts another render arrangement 1600 for the three selected elements of AR content. In accordance with at least one embodiment in FIG. 16, rendered AR content elements A 1602, B 1604, and C 1606 are arranged so that connectors 1612, 1614, 1616 which link the rendered AR content 1602, 1604, 1606 with the identified objects in the primary media 1608 do not intersect. In FIG. 16, AR content element B 1604 is positioned left of the display 1610 because its associated object is on the left side of the display 1610. The positions of AR content elements A 1602 and C 1606 prevent their virtual object connectors 1612, 1616 from crossing. The AR content elements A-C 1602, 1604, 1606 were selected because they have favorable visual characteristics compared to the remainder of the available content. The systems and processes described herein may balance competing preferences when determining final render parameters. This may be accomplished by weighting the measured characteristics, quantifying the various conditions, and prioritizing content and locations accordingly.
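One way to quantify the crossing problem of FIGs. 15 and 16, offered only as an illustrative sketch with invented coordinates, is to count intersections among the connectors treated as 2D screen-space segments and to prefer arrangements with fewer crossings.

```python
# Hypothetical sketch: count pairwise crossings among straight virtual object
# connectors so that arrangements with fewer crossings can be prioritized.
from itertools import combinations

def _ccw(a, b, c):
    return (c[1] - a[1]) * (b[0] - a[0]) > (b[1] - a[1]) * (c[0] - a[0])

def segments_intersect(p1, p2, q1, q2):
    # Standard orientation test (ignores degenerate collinear cases).
    return _ccw(p1, q1, q2) != _ccw(p2, q1, q2) and _ccw(p1, p2, q1) != _ccw(p1, p2, q2)

def crossing_count(connectors):
    # Each connector is (anchor_point_on_display, render_location) in screen space.
    return sum(segments_intersect(a1, b1, a2, b2)
               for (a1, b1), (a2, b2) in combinations(connectors, 2))

# Invented coordinates: a FIG. 15-style arrangement with at least one crossing.
messy = [((0.4, 0.5), (1.2, 0.2)), ((0.6, 0.5), (1.2, 0.8)), ((0.5, 0.4), (1.2, 0.5))]
print(crossing_count(messy))
```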
[0128] FIGs. 17A to 17C depict various AR scene boundaries, in accordance with at least one embodiment. The bounds of the AR scene shown in the AR goggles may be based on (i) the field of view of the AR goggles and (ii) the edges of the primary display device in the visual field. The field of view of AR goggles, which is often narrow, sets limits on the size of the AR scene. In addition, the primary display device takes away usable space from the AR scene. The distance between the user and the primary display device also affects the bounds of the AR scene.
[0129] FIG. 17A depicts an embodiment 1700 with a short distance between the user (and AR headset) and the primary display device 1702. In this case, the primary display device 1702 fills the whole field of view 1704 and thus there is no space for ancillary AR content.
[0130] FIG. 17B depicts an embodiment 1720 with a typical distance between the user (and AR headset) and the primary display device 1726. In this case, the primary display device 1726 may fill a big portion of the field of view 1722 and thus the AR scene 1724 may be located left, right, above, or below the
primary display device 1726. FIG. 17B shows one embodiment with the AR scene 1724 located left of the primary display device 1726. For FIG. 17B, the field of view 1722 may occupy the combination of the AR scene 1724 and the display 1726.
[0131] FIG. 17C depicts an embodiment 1740 with a long distance between the user (and AR headset) and the primary display device 1746. In this case, the primary display device 1746 may be located inside the AR scene 1744. For FIG. 17C, the field of view 1742 may occupy the combination of the AR scene 1744 and the display 1746.
[0132] FIGs. 18A to 18C depict a user 1802, 1832, 1862 with an AR headset 1804, 1834, 1864 at three different distances 1812, 1842, 1872 from a display 1806, 1836, 1866, in accordance with at least one embodiment.
[0133] FIG. 18A depicts a configuration 1800 for the user 1802 with an AR headset 1804 at a first distance 1812 away from the display 1806. In this scenario 1800, the display plane 1806 is past the accommodation range 1810 of the AR headset 1804. For FIG. 18A, the render plane 1808 is located at a distance 1814 from the user 1802 that is approximately midway within the AR headset accommodation range 1810.
[0134] FIG. 18B depicts a configuration 1830 for the user 1832 with an AR headset 1834 at a second distance 1842 away from the display 1836. In this scenario 1830, the display plane 1836 is located within the accommodation range 1840 of the AR headset 1834. For FIG. 18B, the render plane 1838 is located just in front of the display 1836 at a distance 1844 from the user 1832.
[0135] FIG. 18C depicts a configuration 1860 for the user 1862 with an AR headset 1864 at a third distance 1872 away from the display 1866. In this scenario 1860, the display plane 1866 is located closer to the user 1862 than the accommodation range 1870 of the AR headset 1864. For FIG. 18C, the render plane 1868 is located at a distance 1874 from the user 1862 that is approximately midway within the AR headset accommodation range 1870.
[0136] In some embodiments, virtual object connectors are disabled if the display viewing plane is outside of the AR headset accommodation range. In such embodiments, the connectors may be visible in FIG. 18B but not in FIG. 18A nor FIG. 18C. In some embodiments, the accommodation range of the AR headset may be dynamic. In such embodiments, depth tracking of the display may be used to continually align a render plane of the AR content with that of the primary display.
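A compact sketch of the render-plane selection implied by FIGs. 18A-18C is given below; the accommodation range, the 0.05 m offset, and the mid-range fallback are assumptions of the sketch, not values taken from the disclosure.

```python
# Hypothetical sketch: place the render plane just in front of the display when the
# display lies within the accommodation range, otherwise fall back to mid-range.
def choose_render_plane(display_distance_m, accommodation_range_m=(0.5, 2.0)):
    near, far = accommodation_range_m
    if near <= display_distance_m <= far:
        # FIG. 18B case: continually track the display depth.
        return max(near, display_distance_m - 0.05)
    # FIG. 18A / FIG. 18C cases: the display is outside the range.
    return (near + far) / 2.0

print(choose_render_plane(1.5))  # display within range
print(choose_render_plane(4.0))  # display beyond range
```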
[0137] FIG. 19 is a flow chart of one embodiment of a method 1900 for displaying AR content in a viewing environment with a display. For one embodiment, the method 1900 may be executed by an AR headset (such as a head mounted display, HMD). The method 1900 includes detecting, using an HMD's sensors, a location and extent of a separate display device in a viewing environment 1902. The method 1900 also includes detecting, using the HMD's sensors, respective visual characteristics of a plurality of regions of the viewing environment in proximity to the display device 1904. Additionally, the method 1900 includes
selecting a region from among the plurality of regions to display AR content based on the respective visual characteristics 1906. The method 1900 also includes displaying the AR content using the HMD such that the AR content appears in the selected region 1908.
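By way of example and not limitation, the region-selection element 1906 could be sketched as a weighted scoring of candidate regions using characteristics of the kind described above (here contrast, complexity, brightness, and distance); the weights and the example regions are invented for illustration.

```python
# Hypothetical sketch: score candidate regions near the display and pick the best one.
WEIGHTS = {"contrast": 1.0, "complexity": -0.8, "brightness": -0.3, "distance": -0.2}

def select_region(regions, weights=WEIGHTS):
    return max(regions, key=lambda r: sum(weights[k] * r[k] for k in weights))

regions = [
    {"name": "left_brick_wall", "contrast": 0.2, "complexity": 0.9, "brightness": 0.4, "distance": 2.0},
    {"name": "right_smooth_wall", "contrast": 0.7, "complexity": 0.1, "brightness": 0.5, "distance": 2.0},
]
print(select_region(regions)["name"])  # expected: right_smooth_wall
```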
[0138] In the foregoing specification, specific embodiments have been described. However, one of ordinary skill in the art appreciates that various modifications and changes may be made without departing from the scope of the invention as set forth in the claims below. Accordingly, the specification and figures are to be regarded in an illustrative rather than a restrictive sense, and all such modifications are intended to be included within the scope of present teachings.
[0139] The benefits, advantages, solutions to problems, and any element(s) that may cause any benefit, advantage, or solution to occur or become more pronounced are not to be construed as critical, required, or essential features or elements of any or all the claims. The invention is defined solely by the appended claims, including any amendments made during the pendency of this application and all equivalents of those claims as issued.
[0140] Moreover, in this document, relational terms such as first and second, top and bottom, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. The terms "comprises," "comprising," "has", "having," "includes", "including," "contains", "containing" or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises, has, includes, contains a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. An element preceded by "comprises . . . a", "has . . . a", "includes . . . a", "contains . . . a" does not, without more constraints, preclude the existence of additional identical elements in the process, method, article, or apparatus that comprises, has, includes, contains the element. The terms "a" and "an" are defined as one or more unless explicitly stated otherwise herein. The terms "substantially", "essentially", "approximately", "about" or any other version thereof, are defined as being close to as understood by one of ordinary skill in the art, and in one non-limiting embodiment the term is defined to be within 10%, in another embodiment within 5%, in another embodiment within 1% and in another embodiment within 0.5%. The term "coupled" as used herein is defined as connected, although not necessarily directly and not necessarily mechanically. A device or structure that is "configured" in a certain way is configured in at least that way, but may also be configured in ways that are not listed.
[0141] It will be appreciated that some embodiments may be comprised of one or more generic or specialized processors (or "processing devices") such as microprocessors, digital signal processors, customized processors and field programmable gate arrays (FPGAs) and unique stored program instructions (including both software and firmware) that control the one or more processors to implement, in conjunction with certain non-processor circuits, some, most, or all of the functions of the method and/or apparatus
described herein. Alternatively, some or all functions may be implemented by a state machine that has no stored program instructions, or in one or more application specific integrated circuits (ASICs), in which each function or some combinations of certain of the functions are implemented as custom logic. Of course, a combination of the two approaches may be used.
[0142] Accordingly, some embodiments of the present disclosure, or portions thereof, may combine one or more processing devices with one or more software components (e.g., program code, firmware, resident software, micro-code, etc.) stored in a tangible computer-readable memory device, which in combination form a specifically configured apparatus that performs the functions as described herein. These combinations that form specially programmed devices may generally be referred to herein as "modules". The software component portions of the modules may be written in any computer language and may be a portion of a monolithic code base, or may be developed in more discrete code portions such as is typical in object-oriented computer languages. In addition, the modules may be distributed across a plurality of computer platforms, servers, terminals, and the like. A given module may even be implemented such that separate processor devices and/or computing hardware platforms perform the described functions.
[0143] Moreover, an embodiment may be implemented as a computer-readable storage medium having computer readable code stored thereon for programming a computer (e.g., comprising a processor) to perform a method as described and claimed herein. Examples of such computer-readable storage mediums include, but are not limited to, a hard disk, a CD-ROM, an optical storage device, a magnetic storage device, a ROM (Read Only Memory), a PROM (Programmable Read Only Memory), an EPROM (Erasable Programmable Read Only Memory), an EEPROM (Electrically Erasable Programmable Read Only Memory) and a Flash memory. Further, it is expected that one of ordinary skill, notwithstanding possibly significant effort and many design choices motivated by, for example, available time, current technology, and economic considerations, when guided by the concepts and principles disclosed herein will be readily capable of generating such software instructions and programs and ICs with minimal experimentation.
[0144] The Abstract of the Disclosure is provided to allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In addition, in the foregoing Detailed Description, it may be seen that various features are grouped together in various embodiments for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separately claimed subject matter.
Claims
1. A method for displaying augmented reality (AR) content using a head-mounted display (HMD), the method comprising:
detecting, using sensors of the HMD, a location and extent of a separate display device in a viewing environment;
detecting, using sensors of the HMD, respective visual characteristics of a plurality of regions of the viewing environment in proximity to the separate display device;
selecting a region from among the plurality of regions to display AR content based on the respective visual characteristics; and
displaying, using the HMD, the AR content to appear at a location in the selected region.
2. The method of claim 1, wherein the respective visual characteristic of a region comprises at least one of color, complexity, texture, brightness, or distance.
3. The method of claim 1, wherein the displayed AR content is associated with content displayed by the separate display device.
4. The method of claim 1, further comprising detecting, using sensors of the HMD, content displayed by the separate display device, wherein selecting the region is further based on the content displayed by the separate display device.
5. The method of claim 1, wherein selecting the region to display AR content is based at least in part on expected visibility of the AR content.
6. The method of claim 1, further comprising:
identifying primary media depicted on the separate display device;
selecting the AR content from available AR content associated with the identified primary media; and
generating AR content render parameters based at least in part on the measured visual characteristics and selected AR content,
wherein displaying the selected AR content is further in accordance with the generated AR content render parameters.
7. The method of claim 6, wherein the AR render parameters include a respective render location for each selected AR content.
8. The method of claim 6, further comprising tracking a position and an orientation of the HMD within the viewing environment, wherein displaying the selected AR content is further based in part on the tracked position and orientation of the HMD.
9. The method of claim 6, wherein identifying primary media comprises:
selecting at least one object depicted in a frame of the primary media; and
matching each selected object with an object in a visual media identification service database.
10. The method of claim 9, wherein generating AR content render parameters comprises:
determining virtual connector intersections between the selected AR content and at least one selected object for the plurality of regions to display AR content; and
prioritizing AR content render parameters that minimize virtual object connector intersections.
11. The method of claim 9, wherein generating AR content render parameters comprises:
enabling a virtual connector to be displayed if the separate display is within an accommodation range of the HMD; and
disabling the virtual connector to be displayed if the separate display is outside the
accommodation range of the HMD,
wherein the virtual connector is between the selected AR content and at least one selected object.
12. The method of claim 1,
wherein detecting respective visual characteristics includes identifying objects in the plurality of regions, and
wherein selecting the region includes prioritizing regions without identified objects.
13. The method of claim 6, wherein generating AR content render parameters comprises prioritizing
locations for displaying AR content based on amount of free space above each location.
14. The method of claim 6, further comprising calculating a visibility rating for each available AR content at each of the plurality of regions of the viewing environment in proximity to the separate display device, wherein selecting the AR content is based at least in part on the respective visibility rating for at least one of the plurality of regions.
15. A system comprising:
an augmented reality (AR) viewing apparatus;
a sensor array;
a processor;
a non-transitory computer-readable medium storing instructions that are operative, when executed on the processor, to perform the method comprising:
detecting, using the sensor array, a location and extent of a separate display device in a viewing environment;
detecting, using the sensor array, respective visual characteristics of a plurality of regions of the viewing environment in proximity to the separate display device;
selecting a region from among the plurality of regions to display AR content based on the respective visual characteristics; and
displaying, using the AR viewing apparatus, the AR content to appear at a location in the selected region.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US201762457442P | 2017-02-10 | 2017-02-10 | |
| US62/457,442 | 2017-02-10 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2018148076A1 true WO2018148076A1 (en) | 2018-08-16 |
Family
ID=61244696
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/US2018/016197 Ceased WO2018148076A1 (en) | 2017-02-10 | 2018-01-31 | System and method for automated positioning of augmented reality content |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2018148076A1 (en) |
Patent Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20140132484A1 (en) * | 2012-11-13 | 2014-05-15 | Qualcomm Incorporated | Modifying virtual object display properties to increase power performance of augmented reality devices |
| US20140168262A1 (en) * | 2012-12-18 | 2014-06-19 | Qualcomm Incorporated | User Interface for Augmented Reality Enabled Devices |
| US20160147492A1 (en) * | 2014-11-26 | 2016-05-26 | Sunny James Fugate | Augmented Reality Cross-Domain Solution for Physically Disconnected Security Domains |
| EP3096517A1 (en) * | 2015-05-22 | 2016-11-23 | TP Vision Holding B.V. | Wearable smart glasses |
Cited By (15)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US11995578B2 (en) | 2019-05-22 | 2024-05-28 | Interdigital Vc Holdings, Inc. | Method for rendering of augmented reality content in combination with external display |
| US11727321B2 (en) * | 2019-05-22 | 2023-08-15 | InterDigital VC Holdings Inc. | Method for rendering of augmented reality content in combination with external display |
| US20220237913A1 (en) * | 2019-05-22 | 2022-07-28 | Pcms Holdings, Inc. | Method for rendering of augmented reality content in combination with external display |
| US12179091B2 (en) | 2019-08-22 | 2024-12-31 | NantG Mobile, LLC | Virtual and real-world content creation, apparatus, systems, and methods |
| CN111986276A (en) * | 2019-08-29 | 2020-11-24 | 芋头科技(杭州)有限公司 | Content generation in a visual enhancement device |
| US11636644B2 (en) * | 2020-06-15 | 2023-04-25 | Nokia Technologies Oy | Output of virtual content |
| US20210390765A1 (en) * | 2020-06-15 | 2021-12-16 | Nokia Technologies Oy | Output of virtual content |
| US11544910B2 (en) * | 2021-01-15 | 2023-01-03 | Arm Limited | System and method for positioning image elements in augmented reality system |
| US20220230396A1 (en) * | 2021-01-15 | 2022-07-21 | Arm Limited | Augmented reality system |
| CN112734941A (en) * | 2021-01-27 | 2021-04-30 | 深圳迪乐普智能科技有限公司 | Method and device for modifying attribute of AR content, computer equipment and storage medium |
| EP4312108A1 (en) * | 2022-07-25 | 2024-01-31 | Sony Interactive Entertainment Europe Limited | Identifying device in a mixed-reality environment |
| US12530853B2 (en) | 2022-07-25 | 2026-01-20 | Sony Interactive Entertainment Europe Limited | Identifying devices in a mixed-reality environment |
| JP2025079565A (en) * | 2023-11-10 | 2025-05-22 | 円谷フィールズホールディングス株式会社 | Information processing device, head mounted display and program |
| JP7689173B2 (en) | 2023-11-10 | 2025-06-05 | 円谷フィールズホールディングス株式会社 | Information processing device, head mounted display and program |
| CN119883172A (en) * | 2025-03-27 | 2025-04-25 | 深圳市嘉润原新显科技有限公司 | Multi-screen display cooperative control method and system |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| WO2018148076A1 (en) | System and method for automated positioning of augmented reality content | |
| US20250036194A1 (en) | Virtual 3d methods, systems and software | |
| US11210838B2 (en) | Fusing, texturing, and rendering views of dynamic three-dimensional models | |
| US9460351B2 (en) | Image processing apparatus and method using smart glass | |
| US20180189550A1 (en) | Facial signature methods, systems and software | |
| US10313657B2 (en) | Depth map generation apparatus, method and non-transitory computer-readable medium therefor | |
| CN108475180B (en) | Distribute video across multiple display areas | |
| CN115244494A (en) | System and method for processing scanned objects | |
| WO2015192585A1 (en) | Method and apparatus for playing advertisement in video | |
| US20120287233A1 (en) | Personalizing 3dtv viewing experience | |
| US20230152883A1 (en) | Scene processing for holographic displays | |
| US20120068996A1 (en) | Safe mode transition in 3d content rendering | |
| US10453244B2 (en) | Multi-layer UV map based texture rendering for free-running FVV applications | |
| KR20140082610A (en) | Method and apaaratus for augmented exhibition contents in portable terminal | |
| CN107111866A (en) | Method and apparatus for generating extrapolated image based on object detection | |
| CN110730340B (en) | Virtual audience display method, system and storage medium based on lens transformation | |
| US20230122149A1 (en) | Asymmetric communication system with viewer position indications | |
| CN113795863A (en) | Processing of depth maps for images | |
| CN108076359B (en) | Business object display method and device and electronic equipment | |
| US20180095347A1 (en) | Information processing device, method of information processing, program, and image display system | |
| US20200265622A1 (en) | Forming seam to join images | |
| US12231702B2 (en) | Inserting digital contents into a multi-view video | |
| WO2020193703A1 (en) | Techniques for detection of real-time occlusion | |
| TW202239201A (en) | An image synthesis system and method therefor | |
| KR20130134638A (en) | Method and server for providing video-related information |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 18706022 Country of ref document: EP Kind code of ref document: A1 |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |
|
| 122 | Ep: pct application non-entry in european phase |
Ref document number: 18706022 Country of ref document: EP Kind code of ref document: A1 |