WO2014108799A2 - Apparatuses and methods for presenting stereoscopic three-dimensional effects in real time, more realistically, and subtracted reality with external display device(s) - Google Patents
- Publication number
- WO2014108799A2 (PCT/IB2014/000030)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- user
- screen
- display
- eye
- virtual
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/25—Output arrangements for video game devices
- A63F13/26—Output arrangements for video game devices having at least one additional display device, e.g. on the game controller or outside a game booth
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/156—Mixing image signals
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/20—Input arrangements for video game devices
- A63F13/21—Input arrangements for video game devices characterised by their sensors, purposes or types
- A63F13/213—Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
Definitions
- the present invention relates to presenting 3D visual effects with stereopsis (or "binocular vision") in real time, and more particularly to presenting 3D visual effects that are associated/interactive with (for example triggered by) the user (or the user's hand-held device) more realistically, with an accurate feeling of depth.
- This invention also relates to "subtract reality", which is the reverse of "augmented reality" (AR) in that it makes a real-world object, or a part of it, disappear in the virtual reality environment using an external (non-HMD) display.
- a new way of displaying VR objects called "subtract reality" is also discussed here and could be used together with other 3D display methods. It is basically the reverse of "augmented reality" in that it makes a real-world object, or a part of it, "disappear" in a virtual reality environment that uses an external (non-HMD) display. Previously this could only be done with Mixed Reality, or with Augmented Reality using an HMD. With the methods provided in this invention, this can be achieved using a normal external screen (and some other devices) without the need for a head mounted display, mixed-reality goggles, or AR see-through glasses.
- External Display/screen means a display that is not head mounted or body mounted. It is usually a relatively large screen within some distance range of the user, and it usually does not physically move together with the user's head movements.
- a 3D rendering engine, for example that of a game, CAD package, Virtual Reality system, etc., has the ability to calculate stereo pairs of a 3D scene from a certain view point using "virtual camera(s)", just like a camera shooting the scene from that point in real life.
- Control device/prop means a handheld or body-carried/mounted device, tool, or prop that the user uses in the VR environment. It usually resembles something that is used in the virtual environment (similar in function to the "props" used in movies/TV), for example having the shape of a weapon, such as a gun, or the handle of a light sword, consistent with the game/VR scene being simulated. (The position sensor/tracking system could track the position of such a device/prop and make the related "augmentation" by adding visual effects, and perhaps also input its position into the VR system for interaction.) Some examples: a weapon, magic wand, or gloves; it might also be non-tangible, like some kind of field, light, "force"/"fireball", etc.
- a visual effect triggered/generated by (or interactive with) the device/prop, such as a muzzle flash generated by a gun, a light blade generated by a light sword handle, stars generated by a magic wand, etc. Its spatial location can be determined once we know the position and the orientation of the control device/prop.
- Stereo Window is the plane/surface through which the stereo image is seen (like the TV screen).
- the Stereo Window is often set at the closest object to the screen. When an object appears in front of the stereo window and is cut off by the edges of the screen, this is termed a Stereo Window Violation.
- "subtract reality" is an invention discussed in the later part of this document. It functions like the reverse of "augmented reality" in that it makes a real-world object, or a part of it, appear to "disappear" in the virtual reality environment, or be hidden by a "virtual object" that appears to be in front, in a VR environment that uses an external (non-HMD) display.
- the "virtual object in front of the hand-held device" effect can be achieved using a normal external screen with a special device/prop that can emit light or provide a display on at least one of its surfaces, as if a part of the device were eclipsed/disappeared to let the light "behind" it pass through. This will be discussed in detail in the paragraphs below.
- a first embodiment of the invention is directed to a method, or corresponding apparatus, for presenting 3D visual effects with stereopsis (or "binocular vision") in real time to the user when the user is using a control device/prop (in a VR environment using external screens), to provide a more realistic/accurate and possibly more fun user experience.
- Another way to display the images for the left eye and right eye at the correct spatial location defined by the 3D rendering engine is to modify the 3D scene to add or enable one or more virtual object(s) for the visual effects, and use the 3D rendering engine's virtual cameras to generate the appropriate images of the whole scene for the left eye and for the right eye, taking into account the user's head position in relation to the device's location and the screen location; basically, the virtual cameras will "shoot" from the observer's real-time location.
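A minimal sketch of this virtual-camera approach, assuming the tracking system reports the head position and a left-to-right head axis in scene coordinates; the render calls at the end are hypothetical placeholders, not an API named in this document.

```python
import numpy as np

def stereo_camera_positions(head_pos, head_right, ipd=0.064):
    """Place the two virtual cameras at the observer's real-time eye positions.

    head_pos   -- tracked head position in scene coordinates (metres)
    head_right -- vector pointing from the user's left eye towards the right eye
    ipd        -- interpupillary distance; 64 mm is a common default
    """
    head_pos = np.asarray(head_pos, dtype=float)
    right = np.asarray(head_right, dtype=float)
    right = right / np.linalg.norm(right)
    left_eye = head_pos - right * (ipd / 2.0)
    right_eye = head_pos + right * (ipd / 2.0)
    return left_eye, right_eye

# Each frame: move the engine's two virtual cameras to the tracked eye positions
# and render the scene (with the enabled visual-effect objects) once per eye.
left_eye, right_eye = stereo_camera_positions([0.0, 1.7, 2.5], [1.0, 0.0, 0.0])
# left_image  = engine.render_view(camera_position=left_eye)    # hypothetical engine call
# right_image = engine.render_view(camera_position=right_eye)   # hypothetical engine call
```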
- the displaying (or: generating and displaying) of images of the visual effect(s) for the user's left eye and right eye onto an external display/screen near the user is done selectively, according to circumstances/conditions such as whether or not the user is looking in that direction, and/or when a visual effect is enabled/triggered (for example enabled or triggered by the control device/prop, or by the 3D rendering system such as a game, where the visual effect "belongs to"/is a part of the control device/prop, or the visual effect appears close to the control device/prop or starts moving from/to the device), such as (but not limited to) a muzzle flash, laser, light saber/sword, energy/particle beam (such as those from a blaster), fireball, projectile(s), flame, halo, magic-wand-like trails, stars, etc.
- the visual effect generated might be overlaid or "added on" to the images of the scene generated by the 3D rendering engine (such as, but not limited to, a game engine). This might also be done selectively, depending on conditions as discussed in the embodiment above. There could be situations in all the above embodiments where hiding/resizing or not displaying the visual effects might be desired, for example when the visual effect would be displayed so close to the border of the screen that it causes a "stereo window violation".
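A minimal sketch of one way to place the per-eye overlay images, assuming the external screen is the plane z = 0 of the tracking coordinate system with coordinates centred on the middle of the screen; the helper names are illustrative, not from this document. Each eye's line of sight through the effect's 3D position is extended to the screen plane, and points landing too close to the screen border can be flagged as potential stereo window violations.

```python
import numpy as np

def project_to_screen(eye_pos, effect_pos):
    """Intersect the line from one eye through the effect with the screen plane z = 0."""
    eye_pos = np.asarray(eye_pos, dtype=float)
    direction = np.asarray(effect_pos, dtype=float) - eye_pos
    if abs(direction[2]) < 1e-9:
        return None                       # line of sight is parallel to the screen plane
    t = -eye_pos[2] / direction[2]        # parameter value where z reaches 0
    return eye_pos + t * direction        # (x, y, 0): where this eye sees the effect on screen

def near_border(point, screen_w, screen_h, margin=0.05):
    """Flag on-screen points within `margin` of the screen edge (possible window violation)."""
    return (abs(point[0]) > screen_w / 2 - margin) or (abs(point[1]) > screen_h / 2 - margin)

# Example: eyes 2.5 m from a 4 m x 2.25 m screen (origin at the screen centre),
# visual effect 1 m in front of the user and slightly to the right.
left_eye, right_eye = [-0.032, 0.0, 2.5], [0.032, 0.0, 2.5]
effect = [0.2, 0.0, 1.5]
for eye in (left_eye, right_eye):
    p = project_to_screen(eye, effect)
    if p is not None and not near_border(p, 4.0, 2.25):
        pass  # draw/overlay this eye's image of the effect centred at p
```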
- the position acquiring/tracking device/method can use technologies such as, but not limited to: Kinect(TM) sensors measuring distances of different objects; a stereo camera, which can generate image data that a processor can analyze to estimate distances to objects in the image through trigonometric analysis of the stereo images; or, alternatively or in addition, distance measuring sensors (e.g., a laser or sonic range finder) that can measure distances to various surfaces within the image.
- a variety of different types of distance measuring sensors and algorithms may be used on an imaged scene for measuring distances to objects.
- more than one sensor and type of sensor may be used in combination.
- the various assemblages and types of distance and direction measuring sensors that may be included in the VR system (such as, but not limited to, on a head mounted device or on a user control device/prop) are referred to herein collectively or individually as "distance AND DIRECTION sensors".
- the head position tracking/acquiring device, and the device for acquiring the control device/prop's position and orientation, may also include sensors such as accelerometers, gyroscopes, magnetic sensors, optical sensors, mechanical or electronic level sensors, and inertial sensors, which alone or in combination can provide data to the device's processor regarding the up/down/level orientation of the device or of the user's head (e.g., by sensing the gravity force orientation), and thus the user's head position/orientation (and from that, the viewing perspective) and/or the control device/prop's position/orientation.
- the helmet/glasses worn by the user, or the control device/prop, may include rotational orientation sensors, such as an electronic compass and accelerometers, that can provide data to the device's processor regarding left/right orientation and movement.
- sensors including accelerometers, gyroscopes, magnetic sensors, optical sensors, mechanical or electronic level sensors, inertial sensors, and electronic compasses
- orientation sensors configured to provide data regarding the up/down and rotational orientation of the head mounted device (and thus the user's viewing perspective) are referred to herein as "orientation sensors".
- the position of the device can be tracked with the same system, such as when using a tracking system with sensors not placed on the user, as in the case of the Kinect, in which case the "external"/"3rd party" tracking system tracks both the user's head/glasses/eye position and the position of the device (including the orientation/which direction it is pointing).
- separate sensors/tracking systems can be used to detect the device's own position—such as using its own cameras to find out its own location—or its position relative to the user's head/glasses; for example, camera(s) could be placed on the user's helmet or glasses to capture stereo images (or IR images) of the device/weapon, or a "component light measuring device" like those used in the Kinect could be used to acquire a "depth map" of objects in the user's view, so that the location/orientation of the device relative to the user's head can be obtained/calculated.
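A minimal sketch of expressing a tracked device position in the user's head frame, assuming the tracker reports the head pose as a position plus a 3x3 rotation matrix; the names and numbers are illustrative.

```python
import numpy as np

def device_in_head_frame(device_pos, head_pos, head_rot):
    """Express a tracked device position in the user's head coordinate frame.

    device_pos -- device position in tracker/world coordinates
    head_pos   -- head position in the same coordinates
    head_rot   -- 3x3 rotation matrix giving the head's orientation in the world
    """
    offset = np.asarray(device_pos, dtype=float) - np.asarray(head_pos, dtype=float)
    return np.asarray(head_rot, dtype=float).T @ offset   # inverse of a rotation = its transpose

# Example: head at (0, 1.7, 2.5) facing the screen (identity rotation),
# hand-held prop tracked 0.4 m in front of and 0.3 m below the head.
rel = device_in_head_frame([0.0, 1.4, 2.1], [0.0, 1.7, 2.5], np.eye(3))
print(rel)   # -> [ 0.  -0.3 -0.4]
```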
- This "event driven" model only acquires the information when needed, which can be very useful: 1) it can be used by another display engine that is "independent" of the scene's 3D rendering engine, since in many situations there is no need to re-render the whole scene because of some movement of the user or device, or 2) re-rendering would be too computationally expensive, or the update of the whole scene might be hardly noticeable to the user because the objects are far away (so it might not be worthwhile), or we may just want to render a part of the screen, or simply overlay images.
- a stereo surround sound effect may be provided together with the visual effect (if the visual effect also has a corresponding sound effect). It is also desirable that the sound effect is in sync with the user's movement and appears to the user to be generated from the direction of the visual effect. This might require a sound system with multiple speakers, and additional surround sound processing capabilities provided either by individual components or by the 3D scene engine (such as some game engines), using the position information of the user's head and of the control device.
- these 3D (stereo) visual effects may optionally also have a tactile/force feedback effect (such as, but not limited to, vibration, an impact feeling, etc.) provided to the user by the control device/prop and/or the costumes/props the user is wearing.
- the retinal image size is also important. The same object should appear smaller when it is further away from the user. External screens may have different sizes and resolutions, and usually the image is displayed at a fixed resolution that might be optimized for the display, but does not necessarily appear to the user with the "correct" size the virtual object would have in the real world.
- a ball 3 feet in diameter, 10 feet away, would subtend about 17 degrees of arc for the viewer; if the ball image is displayed on a display 5 feet away, the ball image on the screen would have to be 1.5 feet in diameter to produce the same size of retinal projection as the real-world ball. While some tolerance might be allowed for objects far away from the viewer, for objects close to the user, especially those close to the real object the user is handling/controlling, it is desirable to achieve close to, or substantially close to, a 1:1 proportion between the displayed virtual object and a real-world object of the same size at the same distance/depth. This requires adjusting the image size according to the distance to the external display, the angle of the viewer's "line of sight" to the surface of the display, the resolution and size of the screen, and the size, distance and angle of the virtual object.
- there could be many other ways to calculate the image size for 1:1 perspective display.
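A minimal worked sketch of the 1:1 size calculation using the numbers from the example above; the units (feet) and the helper name are illustrative.

```python
import math

def on_screen_size(object_size, object_distance, screen_distance):
    """Size the on-screen image so it subtends the same visual angle as the real object."""
    half_angle = math.atan((object_size / 2.0) / object_distance)
    return 2.0 * screen_distance * math.tan(half_angle)

# The 3-foot ball 10 feet away, shown on a screen 5 feet from the viewer:
angle = 2 * math.degrees(math.atan(1.5 / 10.0))   # about 17 degrees of arc
size = on_screen_size(3.0, 10.0, 5.0)             # 1.5 feet of image on the screen
print(round(angle, 1), round(size, 2))            # -> 17.1 1.5
```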
- the image size being displayed on the screen is adjusted according to the configuration of the VR environment, such as the user's distance and the screen size, so that the virtual object has the same or a similar size to the real-world object from the user's perspective. This could be achieved in several ways, such as:
- the user's "actual viewing FOV" (or angle of view) of the screen can be determined from the screen size/geometry (in the case of a curved/surround screen) and the user's distance to the screen, so we are able to adjust the FOV of the virtual camera according to the configuration information above, either once or dynamically, according to specific requirements, similar to the "zoom method" mentioned above.
- the zoom factor or "actual viewing FOV" is calculated or loaded once before the VR simulation starts, or at the start of the simulation/game/other VR activity, according to the user's distance to the screen; the screen display is adjusted, and the factor is kept unchanged until another scene, which has different virtual camera settings or other parameters incompatible with the current scene, requires recalculating the factor.
- the zoom factor or "actual viewing FOV" is calculated dynamically during the VR simulation according to the user's distance to the screen. The display system uses this "zoom" factor to dynamically change the image size. This is useful when the user moves relative to the screen frequently, or when the user would notice the difference.
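A minimal sketch of deriving the "actual viewing FOV" of a flat screen from its width and the user's distance, usable either once at start-up or per frame from the tracked head position; the engine call is a hypothetical placeholder.

```python
import math

def actual_viewing_fov(screen_width, viewer_distance):
    """Horizontal angle (degrees) a flat screen subtends for a viewer centred in front of it."""
    return 2.0 * math.degrees(math.atan((screen_width / 2.0) / viewer_distance))

# One-time calibration: 4 m wide screen, user measured 2.5 m away.
fov = actual_viewing_fov(4.0, 2.5)            # about 77.3 degrees
# engine.set_camera_fov(fov)                  # hypothetical engine call

# Dynamic variant: recompute whenever the tracked head-to-screen distance changes.
# fov = actual_viewing_fov(4.0, tracked_distance_to_screen)
```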
- a control device/prop used together with a VR display system that uses one or more external display screens could itself emit light/colors, or have a display on one or more of its surfaces.
- the lights/display are arranged so that the part that can emit light, or the display area, extends all the way to at least one edge of the surface.
- if the light emitting area or display area does not extend to all edges, for example if it extends only to 2 adjacent edges of a hexagon, those edges are to be turned towards the external screen when the user is looking at the device.
- the idea is to eliminate any visible non-display area between the light emitting area/display area of the device and the external screen when the device is used in front of the screen by the user, so that they appear "connected" in most situations, such as when both display the same solid color with the same apparent illumination strength, or when they both display similar patterns, or when the external screen displays some stereo 3D virtual objects whose left- and right-eye images appear to the user to converge at or in front of the device, and the light emitting area/display area has the same/similar multiplicity/division relationship with the frequency of the active shutter glasses the user is wearing, to provide more special effects.
- the device itself can display colors or images that selectively "match" what is displayed on the screen where the user is looking at it, so it can become "transparent" to the user; or it can display colors, images, or even stereo images on one or more surfaces that match/relate to the visual effects being displayed, so that it appears to the user that the visual effect (or the whole, together with the image on the screen) is in front of the device and "hides" the device (or part of it).
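A minimal sketch of the "transparent device" idea, assuming the rectangle of the external screen that the device occludes from the user's viewpoint is already known (for example by projecting the device outline from the eye positions, as in the earlier sketch) and that the frame being sent to the screen is available; the device display call is a hypothetical placeholder.

```python
import numpy as np

def occluded_patch(frame, occluded_rect):
    """Copy the pixels of the external-screen frame that the device hides from the user.

    frame         -- HxWx3 array of the image currently being sent to the external screen
    occluded_rect -- (x0, y0, x1, y1) pixel rectangle hidden behind the device, obtained
                     by projecting the device outline from the eye positions onto the screen
    """
    x0, y0, x1, y1 = occluded_rect
    return frame[y0:y1, x0:x1].copy()

# Each frame: sample the hidden region and show it on the prop's own display, so the
# prop's surface appears to let the screen content behind it "pass through".
frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
patch = occluded_patch(frame, (900, 500, 1020, 580))
# prop_display.show(resize_to_prop_surface(patch))   # hypothetical device API
```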
- the surface could be flat or curved/spherical, etc.
- the handle of a "light sword" like those in "Star Wars" is used by the user in a VR environment with external display(s).
- the "upper surface" of the handle, which is supposed to connect to the body of the light sword, has a light emitting area/display area that extends to the edge(s).
- the whole surface, or a part of it (the shape shown in the figure, which is a "projection" of the light sword from the user's point of view; it could also be stereo images that can be picked up with the user's stereo glasses, such as stereo image pairs synchronized with the active shutter glasses the user is wearing), displays the same color and brightness as the light sword, combined with the stereo image pairs displayed on the external screen, which dynamically modify their location and size according to the user's head position (or eye positions), the control device/prop (in this case the handle of the light sword), and their positions relative to the screen, so that the corresponding image of the light sword body for each eye appears to the user to converge (or cross) at the spatial location along the direction of the handle.
- this light emitting area/display area has the same or a very similar color/brightness to the stereo image presented, and because there is no visible seam between the two images (the display area extends all the way to the edges), it appears to the user that the sword is indeed in front of the handle and hides a part of the handle.
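A minimal sketch of placing the virtual blade along the tracked handle, reusing the per-eye screen projection from the earlier sketch; the handle pose, blade length, and sampling are illustrative assumptions.

```python
import numpy as np

def blade_screen_points(eye_pos, handle_tip, handle_dir, blade_length, steps=8):
    """Project sample points along the virtual blade onto the screen plane z = 0 for one eye."""
    eye_pos = np.asarray(eye_pos, dtype=float)
    tip = np.asarray(handle_tip, dtype=float)
    axis = np.asarray(handle_dir, dtype=float)
    axis = axis / np.linalg.norm(axis)
    points = []
    for i in range(steps + 1):
        p = tip + axis * (blade_length * i / steps)   # point on the blade in 3D
        d = p - eye_pos
        if abs(d[2]) > 1e-9:
            t = -eye_pos[2] / d[2]
            points.append(eye_pos + t * d)            # where this eye sees that point on screen
    return points

# Drawing the strips for both eyes makes the blade converge along the handle direction,
# while the handle's own display shows the matching colour right up to its edge.
left = blade_screen_points([-0.032, 0.0, 2.5], [0.1, -0.2, 1.8], [0.0, 1.0, -0.2], 1.0)
right = blade_screen_points([0.032, 0.0, 2.5], [0.1, -0.2, 1.8], [0.0, 1.0, -0.2], 1.0)
```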
- Fig 2B shows an alternative design of the handle that can be applied to more devices: basically, the light emitting area/display area is on a curved surface; the figure shows a spherical surface, which might be preferable on certain occasions, such as rear projection.
- the spherical surface could be useful when the simulation environment requires the device to have a larger angle of view.
- the image shown on such a device, however, needs more accurate calculation to appear in sync with the external screen.
- Fig 2C shows another shape and extended usage of the display area.
- the figure shows a sword-like control device/prop that has a display area on at least one side, extending all the way to the edges. Since we have full control of the display area, we can make parts of it, such as the edges, show the same color/texture as the background, or dynamically display whatever content on the external screen is hidden by that part, so that it appears to the user that this part has disappeared and the sword looks "notched" or "gapped". This is useful for displaying damage done to the prop, changes in the shape of the prop, etc.
- control devices/props using the "subtract reality" method(s), or having related features such as light emitting areas/display areas that extend to at least one edge, are discussed in the above 9 paragraphs. Combinations of the above "subtract reality" embodiments with the first independent embodiment discussed in this document (and some of its related embodiments) are possible, to create multiple embodiments that take advantage of the acquired position information (for user and device) and/or the features of the 3D rendering engine and device, to provide visual effects that were previously only possible with MR or AR (with HMDs).
- a heads-up display (HUD) or see-through AR (augmented reality) glasses might also be used in the VR environment to provide further desired special effects, such as (simulated) night vision, etc.
- illumination of people and/or the control device might be desired in order to achieve an "immersive" feeling for the user; for example, if it is daytime in the simulation/game/VR scene, it is desirable to provide some illumination, which might be restricted to just the user area or device area, so that the user can see the weapon just as he would in the scene.
- the visual effects generation mechanism (whether independent or that of the 3D rendering engine) could get information about which type of controller/device the user is using, so that the visual effect will be coherent with the device/weapon currently being used.
- the display-related technologies discussed here can be used independently (for example, independently of the 3D rendering engine used for scene generation, games, etc.) as long as they can get the appropriate position, orientation, and other information from the sensory system, and use image processing technologies such as overlay/add-on/alpha channel/chroma-keying, etc., to merge with the images from the 3D rendering engine.
- additional approaches, such as using display card hardware functions or the DirectX/OpenGL layer, etc., for the image processing/merging are also possible, so that we only need to calculate the positions individually, with no need to work together with the engine (for example the game engine, so there is no need to modify its source code).
- the existing 3D rendering engine, such as a game engine, does not need to be modified.
- these technologies/embodiments may be integrated into the 3D rendering engine (such as a game engine) so that all related image processing and position calculation is handled in one place/module.
- the device or weapon might be non-tangible, or even virtual, and the user can use a hand to control it, although using a glove or costume is preferable as they are easy to add markers/beacons to.
- the data for the "interaxial separation" distance could come from a "fast on-site calibration", or from data uploaded by the user using various methods/means, such as, but not limited to, from the network/internet, from removable media, from a flash drive/USB port, from an I port, from Bluetooth, from other means from a smart phone, etc.
- the data could be stored in an exchangeable format such as a file, XML, etc., and the system could be configured using such a format; in some situations wireless communication of such a format might even be allowed.
- the Interocular/Interpupillary Distance calibration parameter (perhaps together with other preferences specific to an individual user) acquired from the above-discussed embodiments can be used to configure the 3D rendering engine, for example setting the "interaxial separation" distance of the (virtual) stereo camera, or configure other
- the 3D rendering engine could change the "interaxial separation" distance for the stereo camera (the distance between the 2 cameras) for different people (or: to match the current user's Interocular/Interpupillary distance).
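A minimal sketch of reading such a per-user calibration from an XML file and applying it as the stereo camera's interaxial separation; the file layout and the engine setter are illustrative assumptions, not an API named in this document.

```python
import xml.etree.ElementTree as ET

def load_ipd_metres(path):
    """Read a per-user interpupillary distance from a small XML calibration file.

    Assumed layout (illustrative): <calibration><ipd unit="mm">63.5</ipd></calibration>
    """
    root = ET.parse(path).getroot()
    node = root.find("ipd")
    value = float(node.text)
    return value / 1000.0 if node.get("unit", "mm") == "mm" else value

# ipd = load_ipd_metres("user_calibration.xml")
# engine.stereo_camera.set_interaxial_separation(ipd)   # hypothetical engine setter
```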
- the VR systems using the 1st embodiment discussed in this document, perhaps together with the other "1:1 proportion display" methods and the "subtract reality" technologies discussed earlier, could integrate the "Interocular/Interpupillary Distance calibration" technology discussed in the above 4 paragraphs/embodiments, making combinations that provide even more accurate 3D for each individual user to experience.
- Fig. 1 shows how to display a visual effect triggered by the user's control device/prop (such as a weapon) at different stages as the visual effect moves away from the user.
- Fig 1a depicts how to get the image location on the screen/stereo window for each of the user's eyes—the line of sight from each eye to the visual effect's spatial position, extended to the screen/stereo window.
- Fig 2a shows a control device/prop with "subtract reality” features (light emitting area/display area)
- Fig 2b shows internal structure of a "subtract reality" control device.
- Fig 2c shows another form of "subtract reality" control device.
- Fig 3 shows a VR environment in which the various 3D display and interaction technologies discussed here (such as those related to claim 1, claim 3, etc.) are used together with subtract reality to provide a realistic, interactive, immersive VR experience to the user.
- Fig 4 shows the camera having a different "interaxial separation" distance from the viewer, so the "convergence angle" captured will differ from what the viewer's eyes perceive, and the object seems to the viewer to be in a different position.
- there are also multiple ways to detect the user's head movement, for example using IR beacons or markers on the user's active glasses or helmet, with one or more (IR) cameras capturing the images and thus obtaining spatial locations, or the helmet/glasses may have up- and/or front-looking camera(s) to trace where the 2 or more IR beacons are. It is also possible that the user has a head mounted camera (maybe integrated with the glasses or helmet) that can find the position of the screen and/or the position of the control device/prop. When the user is not looking at a portion of the screen or at the device, there is basically little point in displaying the related images for that area (or the visual effect near the device); thus the head/glasses mounted camera can be used not only to determine positions, but can also serve as a "sight tracker" to determine which parts of the screen can be omitted.
- One or more screen(s) are placed in front of (and surrounding) the user. The user will need to wear stereo glasses.
- a tracking system is used to track the user's head position in real time.
- a device that can capture the user's image, IR image, and positions, such as a Kinect(TM) sensor, is placed in an upper-front direction from the user, so that the user's movement can be detected without being blocked.
- another example is using one or more camera(s) (which could be wide angle cameras) on the user's helmet/glasses to track IR beacons or image patterns in the environment (such as ones placed in the upper-front direction), and calculating the head position relative to the markers/beacons.
- the position of the device can be tracked with the same system, such as when using a tracking system with sensors not placed on the user, such as the Kinect(TM), in which case the "external"/"3rd party" tracking system tracks both the user's head/glasses/eye position and the position of the device (including the orientation/which direction it is pointing).
- separate sensors/tracking systems can be used to detect the device's own position—such as using its own cameras to find out its own location—or its position relative to the user's head/glasses; for example, camera(s) could be placed on the user's helmet or glasses to capture stereo images (or IR images) of the device/weapon, and thus its location relative to the user's head can be obtained/calculated.
- Scenario 2: as shown in Fig 3, the user could wear a helmet and active shutter glasses for 3D viewing (which might be integrated with the helmet). The user could also wear props/costumes that provide position information about the user's body and head movement, or that make the movements recognizable by the VR system's outside sensors so the positions are captured. The user is using a light saber/sword, and a spherical surround screen is used.
- the position tracking system might be Kinect-like depth mapping devices, or stereo cameras calculating depth from image pairs, or another method, placed in front of and higher than the user, so that it can "see" both the user's head and the control device.
- the real-world light sword is just a handle, with a light emitting area/display screen (flat or curved) as described under "subtract reality" above.
- the system generates the light sword blade when the user turns it on, and it follows the user's hand movement (the handle might have sensors in it).
- the user can use the sword to cut virtual objects, shield against/deflect incoming beams, etc., in the virtual world.
- the display on the handle displays its image seamlessly with the images provided by the external screen.
- the display/light surface of the device could be flat or curved, for example spherical.
- the display or light covers all of one side of the device, for example the whole "cross-section" side of the light sword, so that when the user looks at it, the light emitted from the device and the light from the external screen behind it can seamlessly "merge together", as if that part were transparent. In this way we can give the user the illusion that some virtual object "attached" to that surface is "in front".
- at least the part facing the external screen has a displayable/light emitting area extending all the way to the edge(s) of one (or more) surfaces without any visible seams (this assumes that side always faces the external display).
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Computer Graphics (AREA)
- Computer Hardware Design (AREA)
- General Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Processing Or Creating Images (AREA)
- Controls And Circuits For Display Device (AREA)
Abstract
This invention relates to a new way of presenting stereoscopic (or binocular vision) 3D visual effects in real time, with a device/prop, more realistically and with an accurate feeling of depth, for example by performing calculations based on the viewing positions and the position of the device/prop. The invention further relates to subtracted reality, which is the reverse of augmented reality (AR) in that it makes a real-world object, or a part of it, disappear in the virtual reality environment using an external display device (i.e., not a head-mounted display, HMD). The invention further relates to fast configuration/calibration methods used in the embodiments.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US201361751873P | 2013-01-13 | 2013-01-13 | |
| US61/751,873 | 2013-01-13 |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| WO2014108799A2 (fr) | 2014-07-17 |
| WO2014108799A3 WO2014108799A3 (fr) | 2014-10-30 |
Family
ID=51167469
Family Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/IB2014/000030 Ceased WO2014108799A2 (fr) | 2013-01-13 | 2014-01-13 | Apparatuses and methods for presenting stereoscopic three-dimensional effects in real time, more realistically, and subtracted reality with external display device(s) |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2014108799A2 (fr) |
Family Cites Families (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2003005303A2 (fr) * | 2001-07-02 | 2003-01-16 | Matchlight Software, Inc. | Systeme et procede permettant de confirmer des attributs et d'etablir des categories d'attributs propres a une image numerique |
| JP4500632B2 (ja) * | 2004-09-07 | 2010-07-14 | キヤノン株式会社 | 仮想現実感提示装置および情報処理方法 |
| US8094928B2 (en) * | 2005-11-14 | 2012-01-10 | Microsoft Corporation | Stereo video for gaming |
| KR101732135B1 (ko) * | 2010-11-05 | 2017-05-11 | 삼성전자주식회사 | 3차원 영상통신장치 및 3차원 영상통신장치의 영상처리방법 |
- 2014-01-13 WO PCT/IB2014/000030 patent/WO2014108799A2/fr not_active Ceased
Cited By (26)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US12182953B2 (en) | 2011-04-08 | 2024-12-31 | Nant Holdings Ip, Llc | Augmented reality object management system |
| US11869160B2 (en) | 2011-04-08 | 2024-01-09 | Nant Holdings Ip, Llc | Interference based augmented reality hosting platforms |
| US11854153B2 (en) | 2011-04-08 | 2023-12-26 | Nant Holdings Ip, Llc | Interference based augmented reality hosting platforms |
| US11967034B2 (en) | 2011-04-08 | 2024-04-23 | Nant Holdings Ip, Llc | Augmented reality object management system |
| US12118581B2 (en) | 2011-11-21 | 2024-10-15 | Nant Holdings Ip, Llc | Location-based transaction fraud mitigation methods and systems |
| US12008719B2 (en) | 2013-10-17 | 2024-06-11 | Nant Holdings Ip, Llc | Wide area augmented reality location-based services |
| US11392636B2 (en) | 2013-10-17 | 2022-07-19 | Nant Holdings Ip, Llc | Augmented reality position-based service, methods, and systems |
| US12406441B2 (en) | 2013-10-17 | 2025-09-02 | Nant Holdings Ip, Llc | Wide area augmented reality location-based services |
| US10375381B2 (en) | 2015-05-27 | 2019-08-06 | Google Llc | Omnistereo capture and render of panoramic virtual reality content |
| US9877016B2 (en) | 2015-05-27 | 2018-01-23 | Google Llc | Omnistereo capture and render of panoramic virtual reality content |
| US10244226B2 (en) | 2015-05-27 | 2019-03-26 | Google Llc | Camera rig and stereoscopic image capture |
| US10038887B2 (en) | 2015-05-27 | 2018-07-31 | Google Llc | Capture and render of panoramic virtual reality content |
| CN113190111A (zh) * | 2015-10-08 | 2021-07-30 | Pcms控股公司 | 一种方法和设备 |
| US10846932B2 (en) | 2016-04-22 | 2020-11-24 | Interdigital Ce Patent Holdings | Method and device for compositing an image |
| US11568606B2 (en) | 2016-04-22 | 2023-01-31 | Interdigital Ce Patent Holdings | Method and device for compositing an image |
| CN109416842B (zh) * | 2016-05-02 | 2023-08-29 | 华纳兄弟娱乐公司 | 在虚拟现实和增强现实中的几何匹配 |
| CN109416842A (zh) * | 2016-05-02 | 2019-03-01 | 华纳兄弟娱乐公司 | 在虚拟现实和增强现实中的几何匹配 |
| CN110799926A (zh) * | 2017-06-30 | 2020-02-14 | 托比股份公司 | 用于在虚拟世界环境中显示图像的系统和方法 |
| CN110799926B (zh) * | 2017-06-30 | 2024-05-24 | 托比股份公司 | 用于在虚拟世界环境中显示图像的系统和方法 |
| CN111103975A (zh) * | 2019-11-30 | 2020-05-05 | 华为技术有限公司 | 显示方法、电子设备及系统 |
| US12153224B2 (en) | 2019-11-30 | 2024-11-26 | Huawei Technologies Co., Ltd. | Display method, electronic device, and system |
| CN115335894A (zh) * | 2020-03-24 | 2022-11-11 | 奇跃公司 | 用于虚拟和增强现实的系统和方法 |
| CN111915714A (zh) * | 2020-07-09 | 2020-11-10 | 海南车智易通信息技术有限公司 | 用于虚拟场景的渲染方法、客户端、服务器及计算设备 |
| CN113379897A (zh) * | 2021-06-15 | 2021-09-10 | 广东未来科技有限公司 | 应用于3d游戏渲染引擎的自适应虚拟视图转立体视图的方法及装置 |
| CN115338858A (zh) * | 2022-07-14 | 2022-11-15 | 达闼机器人股份有限公司 | 智能机器人控制方法、装置、服务器、机器人和存储介质 |
| CN116935008A (zh) * | 2023-08-08 | 2023-10-24 | 北京航空航天大学 | 一种基于混合现实的展示交互方法和装置 |
Also Published As
| Publication number | Publication date |
|---|---|
| WO2014108799A3 (fr) | 2014-10-30 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| WO2014108799A2 (fr) | Apparatuses and methods for presenting stereoscopic three-dimensional effects in real time, more realistically, and subtracted reality with external display device(s) | |
| EP4058870B1 (fr) | Estimation de pose co-localisée dans un environnement de réalité artificielle partagée | |
| US10558048B2 (en) | Image display system, method for controlling image display system, image distribution system and head-mounted display | |
| Azuma | Augmented reality: Approaches and technical challenges | |
| US20160343166A1 (en) | Image-capturing system for combining subject and three-dimensional virtual space in real time | |
| US11151790B2 (en) | Method and device for adjusting virtual reality image | |
| CN104076513B (zh) | 头戴式显示装置、头戴式显示装置的控制方法、以及显示系统 | |
| US9677840B2 (en) | Augmented reality simulator | |
| US10740971B2 (en) | Augmented reality field of view object follower | |
| US10365711B2 (en) | Methods, systems, and computer readable media for unified scene acquisition and pose tracking in a wearable display | |
| US20160371884A1 (en) | Complementary augmented reality | |
| JP2022530012A (ja) | パススルー画像処理によるヘッドマウントディスプレイ | |
| US20110084983A1 (en) | Systems and Methods for Interaction With a Virtual Environment | |
| US20120188279A1 (en) | Multi-Sensor Proximity-Based Immersion System and Method | |
| US20160307374A1 (en) | Method and system for providing information associated with a view of a real environment superimposed with a virtual object | |
| US20170200313A1 (en) | Apparatus and method for providing projection mapping-based augmented reality | |
| US10884576B2 (en) | Mediated reality | |
| CN114730094A (zh) | 具有人工现实内容的变焦显示的人工现实系统 | |
| US20100315414A1 (en) | Display of 3-dimensional objects | |
| CN104243962A (zh) | 扩增实境的头戴式电子装置及产生扩增实境的方法 | |
| US20130265331A1 (en) | Virtual Reality Telescopic Observation System of Intelligent Electronic Device and Method Thereof | |
| US12327319B2 (en) | Virtual reality sharing method and system | |
| KR20160124985A (ko) | 혼합 현실 체험 공간 제공 방법 및 시스템 | |
| KR101665363B1 (ko) | 가상현실, 증강현실 및 홀로그램을 혼합한 인터랙티브 콘텐츠 시스템 | |
| KR101860680B1 (ko) | 3d 증강 프리젠테이션 구현 방법 및 장치 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 14737654; Country of ref document: EP; Kind code of ref document: A2 |