WO2015066734A1 - Stereoscopic display - Google Patents
Stereoscopic display
- Publication number
- WO2015066734A1 (PCT/US2014/072419)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- stereoscopic
- virtual
- display
- glasses
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B30/00—Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images
- G02B30/20—Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer's left and right eyes
- G02B30/22—Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer's left and right eyes of the stereoscopic type
- G02B30/23—Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer's left and right eyes of the stereoscopic type using wavelength separation, e.g. using anaglyph techniques
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B30/00—Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images
- G02B30/20—Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer's left and right eyes
- G02B30/26—Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer's left and right eyes of the autostereoscopic type
- G02B30/30—Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer's left and right eyes of the autostereoscopic type involving parallax barriers
Definitions
- the present invention relates to a stereo image display technique, by which a 3D image display may produce a stereoscopic image which takes viewpoint into account.
- a stereoscopic image may be created which appears to remain at approximately the same location in space as viewpoint changes.
- a first image for a left eye and a second image for a right eye must arrive at the respective eyes in a manner that keeps them distinguishable from each other.
- various methods for achieving this are explained as follows. These images shall be referred to as the first or left image and the second or right image.
- Prior art displays which may be viewed as 3D images generally fall into four methods for display of 3D imagery.
- the first method employs polarized light images where the planes of polarization for the left and right images are rotated by approximately 90 degrees relative to each other. These polarized left and right images pass through polarized spectacles so that the corresponding image reaches the left and right eye. A viewer who tilts their head degrades the 3D stereoscopic image.
- the second method employs liquid crystal shutter spectacles, which open and close left and right shutters so as to allow the corresponding image to reach the correct eye.
- Prior art employing liquid crystal shutters does not account for a change in viewing location from one viewer to the next. Therefore a 3D image appears to be at different locations in space when viewed from differing viewpoints. Thus if one viewer pointed at a 3D stereoscopic object, a viewer at a second viewing location would have difficulty determining what is being pointed at.
- a third method employs a lenticular screen provided between a display and both eyes.
- the propagating direction of light is refracted via lenses on the lenticular screen, whereby different images arrive at both eyes, respectively.
- a fourth method requires no spectacles and utilizes parallax barriers so that only the proper image is seen by each eye.
- This technology shall be referred to as auto stereoscopic.
- Prior art applying this method required that the viewer remain in an optimal location for 3D viewing. Other spectators may not be able to see the 3D imagery clearly. When prior art does account for tilting of the head or differing viewpoints, it is limited to only one viewer. It is not possible for a second viewer to obtain a 3D image as well unless the second viewpoint is closely aligned with the first. This limits prior art 3D auto stereoscopic displays to smaller devices of the handheld variety. This technology is also unable to provide for a second viewer to determine which 3D stereoscopic object is being pointed at by a first viewer.
- parallax barriers at different locations of the display may have different pitch angles in relation to the display surface at the same time.
- the parallax barriers shall also be referred to as electronically configurable light guiding louvers, or louvers.
- a larger display may be viewed auto stereoscopically.
- louvers on opposite sides of the display would guide the light from the display at angles of slightly differing directions. In this way light from each location of the display is guided towards the intended viewpoint.
- another embodiment of the present invention may apply these electronically configurable light guiding louvers in more than one axis concurrently. Thus when one viewer tilts his head the light passing through the louvers is guided to the intended viewing location and is blocked or shielded from other viewing locations.
- the instant invention employs a 3D stereoscopic method combined with position tracking technology to produce a 3D stereoscopic image which remains in approximately the same location in space even when viewed from various perspectives.
- This provides a 3D image which appears to be in approximately the same location despite the viewing location. The viewer may move towards or away, up or down, left or right yet the image remains in approximately the same location in space. It moves very little as the viewpoint changes. However, the 3D stereoscopic image will change to reflect how the viewer would see the 3D objects in the image from different perspectives.
- This 3D stereoscopic image that remains approximately fixed in spatial location may also be referred to as a virtually real image, virtual real image or virtual image.
- Position tracking or position sensing shall be used interchangeably and mean the same thing in this document.
- the sensors on the display in combination with a computing device may detect when an external object or pointer is in close proximity to the virtual location of a 3D stereographic object whose position is stabilized in space. Thus stabilized, it becomes possible for viewers to interact with the 3D stereoscopic image.
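The proximity detection described above can be sketched as a distance comparison between the sensed pointer position and the object's stabilized spatial coordinates. This is an illustrative sketch only; the function name and the 2 cm threshold are assumptions, not values from the patent.

```python
import math

def pointer_near_object(pointer_xyz, object_xyz, threshold_cm=2.0):
    """Return True when the sensed pointer is within threshold_cm of the
    virtual object's stabilized position in space."""
    dx, dy, dz = (p - o for p, o in zip(pointer_xyz, object_xyz))
    return math.sqrt(dx * dx + dy * dy + dz * dz) <= threshold_cm
```

Because the virtual object's coordinates are fixed in space, a single comparison like this suffices regardless of which viewer's pointer is being tracked.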
- Prior art employing gestures or voice is limited in scope and does not allow the user to manipulate and interact with a stereoscopic 3D virtual image. This shall be further elucidated in the description of the instant invention. To accomplish this goal, the perspective position of the viewpoint must be sensed and measured. From this information an image is created which is what an observer located at this position would see if the real object were present.
- This viewpoint is what one would expect to see if viewed in a monocular fashion (i.e. from one eye with the other closed). Therefore, for each viewing location (or eye location) a new image must be created. So the sensor must be able to calculate the position of each eye or lens of the viewer(s). The created images should take into account both the angular position of the viewpoint and the distance.
- the viewing angle and distance shall be referred to as viewing perspective, or viewpoint.
- the viewer is able to interact with the stabilized virtual image. This enables many applications. Input devices such as keyboards, remote controllers, musical instruments, virtual caves, virtual simulators and interactive 3D gaming systems are a few such applications, however the instant invention is not meant to be limited to these systems or devices. Some of these systems or devices will be further described in the following detailed description.
- Prior art that utilizes gestures to interact with a 2D image display is common. However, such prior art does not allow interaction with a 3D virtual image.
- the image may be completely computer generated or interpolated from photographic images taken from various perspectives.
- This same technology may be adapted to create stereographic pairs of images, which when viewed by the stereographic methods of the instant invention may produce the desired stabilized 3D image.
- current motion picture creation employs sensors which track location of body parts that are then used to create images. Such sensing technology could be used to track the eyes or lenses of glasses. Some location sensing methods employ small round objects that emit light, while others do not. These sensors may also be used to track the location of pointers, or body parts. They may also be used to track wearable devices to include, but not be limited to gloves, glasses, and hats.
- Wearable devices may or may not include objects or markers, which may emit or reflect light to the sensors. The emitted or reflected light may be coded by pulses, frequency, or another method that enables the sensors to differentiate locations. The light may be visible, infrared, or of other frequencies. Other position sensing technologies that employ magnetism, accelerometers, or gravitation sensing may be employed to improve the speed and accuracy of tracking objects.
- anaglyph glasses are employed.
- the left or first image is color coordinated to pass through the left or first lens of the anaglyph glasses.
- the right or second image is color coordinated to pass through the right or second lens of the anaglyph glasses. In this way the viewer may see a 3D stereographic image.
- passively polarized glasses are employed.
- the left or first image has polarization coordinated to pass through the left or first lens of the passively polarized glasses.
- the second or right image has polarization coordinated to pass through the right or second lens of the passively polarized glasses. In this way the viewer may see a 3D stereographic image.
- Another embodiment employs a combination of anaglyph and passively polarized glasses.
- the instant invention may also display 3D stereographic images in the manner of prior art, whereby the first and second images do not use information from the sensors to vary the image based on viewpoint location.
- This method shall be referred to as prior art 3D.
- This method may be employed for viewing medium such as movies or games which have been created for prior art 3D.
- the instant invention enables switching between 2D and 3D modes. In 2D mode multiple viewers may view multiple images. So two or more viewers may use the display to watch different things.
- the display of the instant invention may be presented in portrait or landscape mode.
- the landscape or portrait mode may be manually or automatically changed by means of an orientation sensor of various types. So a tablet, phone, or other handheld device may use the display of this invention.
- a left or first viewing perspective is sensed and location quantified in space.
- a left or first image is created corresponding to what would be seen by a viewer with said left or first perspective.
- the left or first image is displayed in conjunction with technology that limits viewing to the left or first perspective. This may be accomplished via anaglyph glasses, passively polarized glasses, or a combination of anaglyph and passively polarized glasses.
- a right or second viewing perspective is sensed and location quantified in space.
- a right or second image is created corresponding to what would be seen by a viewer with said right or second perspective.
- the right or second image is displayed in conjunction with technology that limits viewing to the right or second perspective. This may be accomplished via anaglyph glasses, passively polarized glasses, or a combination of anaglyph and passively polarized glasses.
- the process is repeated for each viewer in sequence in a continuous loop.
- the sequence may vary in order so long as the image is coordinated with the stereoscopic method so that the correct image reaches the intended eye.
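The per-viewer process above can be sketched as a sequential loop. This is an illustrative outline, not the patent's implementation; `sense_viewpoint`, `render_for`, and `present` are hypothetical stand-ins for the sensing, image-creation, and eye-filtered display stages.

```python
def display_cycle(viewers, sense_viewpoint, render_for, present):
    """One pass of the continuous loop: for each viewer, sense each eye's
    viewpoint, create a matching perspective-correct image, and present it
    so that only the intended eye sees it."""
    presented = []
    for viewer in viewers:
        for eye in ("left", "right"):
            viewpoint = sense_viewpoint(viewer, eye)   # location quantified in space
            image = render_for(viewpoint)              # image for that perspective
            present(image, viewer, eye)                # limited to that eye
            presented.append((viewer, eye))
    return presented
```

Running the loop continuously, in any order coordinated with the stereoscopic method, keeps each image reaching the intended eye as viewpoints move.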
- the display may be a liquid crystal display device, an electroluminescent display device, an organic light emitting display device, a plasma display device, or a projected display image.
- this list of display types is for illustrative purposes only and is not intended to be limiting in any way. There are many ways of accomplishing this end. There are endless variations of placement of parts, methods of generating image patterns, different ordering of parts, and/or display images which accomplish the same objective. Someone practiced in the art will be able to design and construct many variations, which include but are not limited to those above. Hence the invention is what is in the claims and includes more than the embodiments described below.
- FIGURE 1 is a schematic diagram illustrating prior art in which the 3D stereoscopic image's virtual location moves as viewpoint shifts.
- FIGURE 2 is a schematic diagram illustrating prior art in which the 3D virtual image is unable to be viewed stereoscopically when the viewer's head is angularly tilted in relation to the display.
- FIGURE 3 is a schematic diagram illustrating prior art 3D auto stereoscopic displays which limit viewing location.
- FIGURE 4 is a schematic diagram illustrating an embodiment where the 3D stereoscopic image remains fixed and viewable as the viewer's head is angularly tilted in relation to the display.
- FIGURE 5 is a schematic diagram illustrating an embodiment where the 3D stereoscopic image remains fixed in space as the viewing location is moved closer or farther from the display.
- FIGURE 6 is a schematic diagram illustrating an embodiment where the 3D virtual object is seen from different viewpoints yet remains fixed in space.
- FIGURE 7 is a schematic diagram illustrating a flow diagram of an embodiment.
- FIGURE 8 is a schematic diagram illustrating an embodiment applying viewpoint sensors.
- FIGURE 9 is a schematic diagram illustrating an embodiment where images may be displayed as time progresses.
- FIGURE 10 is a schematic diagram illustrating an embodiment applying shutter glasses and viewpoint location sensing where images may be displayed as time progresses.
- FIGURE 11 is a schematic diagram illustrating an embodiment applying shutter glasses and viewpoint position sensing where images may be displayed as time progresses.
- FIGURE 12 is a schematic diagram illustrating an embodiment applying shutter glasses and viewpoint position sensing where images may be displayed as time progresses.
- FIGURE 13 is a schematic diagram illustrating an embodiment of anaglyph glasses.
- FIGURE 14 is a schematic diagram illustrating an embodiment of passively polarized glasses.
- FIGURE 15 is a schematic diagram illustrating an embodiment of passively polarized anaglyph glasses.
- FIGURE 16 is a schematic diagram illustrating prior art directional louvers and also an embodiment applying directional louvers in both horizontal and vertical directions.
- FIGURE 17 is a schematic diagram illustrating an embodiment applying directional louvers in both horizontal and vertical directions and applying viewpoint location sensing.
- FIGURE 18 is a schematic diagram illustrating an embodiment applying louvers and position sensors.
- FIGURE 19 is a schematic diagram illustrating an embodiment applying louvers and position sensors.
- FIGURE 20 is a schematic diagram illustrating an embodiment applying louvers.
- FIGURE 21 is a schematic diagram illustrating an embodiment applying louvers and position sensors.
- FIGURE 22 is a schematic diagram illustrating an embodiment applying louvers and position sensors.
- FIGURE 23 is a schematic diagram illustrating an embodiment in portrait and landscape modes.
- FIGURE 24 is a schematic diagram illustrating an embodiment applying position sensors.
- FIGURE 25 is a schematic diagram illustrating an embodiment applying position sensors and illustrating user interaction with the virtual image.
- FIGURE 26 is a schematic diagram illustrating an embodiment applying position sensors and illustrating a virtual gaming system.
- FIGURE 27 is a schematic diagram illustrating an embodiment applying position sensors and illustrating a virtual cave.
- FIGURE 28 is a schematic diagram illustrating an embodiment applying position sensors and illustrating a virtual simulator.
- In FIG. 1 of the drawings there is shown an illustration of prior art.
- a 3D stereoscopic image is presented to viewers positioned at A and B.
- the locations of the left or first image (item 160) and the right or second image (item 170) are fixed on the image display (item 114), whether viewed from position A or B.
- the result is 3D image object locations 180 and 182, which differ in space. Each tends to be more in front of the respective viewer.
- In FIG. 2 of the drawings there is shown an illustration of prior art. It is apparent that changing the viewing angle results in a less than optimal 3D image, or possibly a failure of 3D imaging.
- In Fig. 3 of the drawings there is shown an illustration of prior art: a 3D stereoscopic device which employs fixed louvers to aim or guide the light from an image to the viewer's eyes.
- this limitation arises because the louvers are fixed and not configurable based on viewing location. Therefore the viewing location is limited.
- In Fig. 4 of the drawings there is shown an illustration of an embodiment of the instant invention.
- Sensors or markers locate the viewpoint perspectives. These sensors may be passive receivers, may be emissive and receptive of signals, or may use other methods to determine viewpoint locations. Facial or object recognition may be used in lieu of sensors or markers to determine viewpoint locations. Other position sensing technologies that employ magnetism, accelerometers, or gravitation sensing may be employed to improve the speed and accuracy of tracking objects. Based on where the viewpoint perspective is sensed, an image is created corresponding to how the intended image would be seen from that viewpoint.
- the first or left displayed image is a function of the position of the left eye of viewer A.
- the second or right displayed image is a function of the position of the right eye of viewer A.
- the viewer located at B has his head tilted in relation to the display (item 114).
- the first or left displayed image (item 162) is a function of the position of the left eye of viewer located at B.
- the second or right displayed image is a function of the position of the right eye of viewer located at B.
- the 3D stereoscopic object image (item 190) is now seen in approximately the same location in space from both viewpoints A and B.
- the viewer located at B is able to see the 3D stereoscopic image in approximately the same location in space as when the viewer is located at A, even though his head is tilted with respect to the display.
- the 3D stereographic image's location remains approximately fixed in space. This allows its fixed position coordinates to be determined. These may then be compared with the sensed location of a viewer's body part, wearable object, or pointer. In this manner it becomes possible for one or more users to interact with the 3D stereographic objects or images.
- Other position sensing or tracking technologies, such as magnetic, accelerometer, inertial, or gravitation sensing, may be employed with the intent of improving speed and accuracy.
- In FIG. 5 of the drawings there is shown an illustration of an embodiment of the present invention. This illustrates the fact that in addition to viewing angle, the viewing distance is also measured in order to create the correct display image presentations (items 260 and 270). In this manner both viewpoint 1 (item 220) and viewpoint 2 (item 222) are able to see the virtual object image (item 292) in approximately the same location in space.
- Sensors (item 116) locate the viewpoint perspectives. These sensors may be passive receivers, may be emissive and receptive of signals, or may use other methods to determine viewpoint locations. Based on where the viewpoint perspective is sensed, an image is created corresponding to how the intended image would be seen from that viewpoint.
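The geometry behind keeping the virtual object fixed in space can be sketched by projecting the object through the sensed eye position onto the display plane: where that ray crosses the screen is where the object must be drawn for that eye. The coordinate convention here (display at z = 0, viewer at positive z) is an assumption for illustration, not taken from the patent.

```python
def screen_point(eye, obj):
    """Intersect the line from eye (x, y, z) through obj (x, y, z) with the
    display plane z = 0; return the (x, y) where the object must be drawn
    for that eye. Requires the eye and object to be at different depths."""
    ex, ey, ez = eye
    ox, oy, oz = obj
    t = ez / (ez - oz)                      # ray parameter where z = 0
    return (ex + t * (ox - ex), ey + t * (oy - ey))
```

As the eye moves, the drawn point shifts across the screen, which is exactly how the virtual object itself can appear to stay in the same location in space from every viewpoint and distance.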
- In FIG. 6 of the drawings there is shown an illustration of an embodiment showing how an object might appear when viewed from different perspectives in the instant invention.
- In Fig. 7 of the drawings, a flow diagram of an embodiment of the instant invention is presented, which shows a process for creating 3D stereoscopic images which are seen in the same location in space when viewed from different perspectives.
- One means to accomplish this is for the sensors to track an object or to use facial recognition. Magnetic, acceleration, and gravitational data may also be employed to determine the first and second viewpoints.
- the viewpoints correspond to the positions of the first or left eye and the second or right eye.
- Other methods for locating these viewpoint locations include, but are not limited to, markers that may reflect or transmit light and/or sound, or create a magnetic field. These markers may be located on the face, on the body, or on a wearable object.
- the methods given to recognize and locate a pair of eyes, glasses, or facial feature viewpoints are for illustrative purposes only and are not meant to be limiting in any way.
- In FIG. 8 of the drawings there is shown an illustration of an embodiment of the present invention.
- An object image (item 128), in this case a cylinder, would be presented as different images to perspective viewing locations represented by items 108 and 120.
- In FIG. 9 of the drawings there is shown an illustration of an embodiment of the present invention.
- This illustration shows progression through time.
- Item 200 shows how, as the viewing location is changed, the 3D stereoscopic image's location remains unchanged in space.
- Item 240 shows how this is accomplished by enabling each prospective viewpoint to see an image created based on that viewpoint's perspective as the viewing location differs.
- In FIG. 10 of the drawings there is shown an illustration of an embodiment of the present invention.
- This illustration shows progression through time.
- Item 200 shows how, as the viewing location is changed, the 3D stereoscopic image's location remains unchanged in space.
- Item 240 shows how this is accomplished by enabling each prospective viewpoint to see an image created based on that viewpoint's perspective as the viewing location differs.
- item 240 shows employment of shutter glasses to accomplish this effect.
- In FIGS. 11 and 12 of the drawings there is shown an illustration of an embodiment of the present invention. Images are created based on the sensed perspective locations of the lenses (items 109, 110, ...).
- the image is presented with correct optical association so that a 3D image will be seen.
- Said 3D image is seen from various perspectives as it would be seen were the object immovable. Therefore the 3D object image appears in the same location no matter the viewing angle or distance. The viewing location is only limited by the size of the screen (item 114).
- a first lens (item 204) allows light of a different color to pass than that of a second lens (item 206).
- a first lens (item 304) allows light of an opposing polarization direction to pass than that of a second lens (item 306).
- the polarization may be linear, circular, or elliptical.
- In FIG. 15 of the drawings there is shown an illustration of passively polarized anaglyph glasses. In illustration A, the plane of polarization is the same for both lenses of a pair of glasses, while the color of the lenses is different. Between glasses 802 and 812 the polarization orientation is different. The polarization may be linear, circular, or elliptical. In illustration B, the polarization pattern is in opposition between the lenses of the same pair of glasses. The color in the first and second lens of the glasses is the same. However, the colors of one pair of glasses (item 852) differ from the colors of the second pair of glasses (item 862). These would allow two users to interact with different images.
- Examples would be a game of scrabble or poker. However these examples are not intended to limit the use of this device in any way.
- In FIG. 16 of the drawings there is shown an illustration of prior art and also an embodiment of the present invention.
- In part A, louvers are created by layers of liquid crystals which have a blocking function in the shape of a "Z". Since they are created of liquid crystals, they may be reconfigured frame by frame to allow light to pass to the left or right eye in correct optical association with a first or second image, so that a 3D stereoscopic effect is achieved without the need for glasses.
- the present invention improves upon this by enabling the louvers to vary position and rotational angle. Thereby a single viewer can see a 3D stereoscopic image in the same location in space as his viewing perspective changes and/or the head is tilted.
- In part B, the present invention improves upon the concept of louvers by using them in both vertical and horizontal planes.
- the louvers may be configured along any combination of axes in any shape or pattern.
- Several shapes or patterns of louvers will be illustrated further in the description and endless varieties are possible. The result is guiding or aiming light as if through straws.
- the cross section of the guiding straws may be one of many shapes or patterns.
- the aiming or guiding viewpoint location is the location picked up by the location sensors.
- the louvers are created to optimize viewing at the correct perspective location. In the present invention they may be angled differently at different locations of the screen to optimize this effect. This allows the viewpoint to be in any plane or angle. In this configuration it is possible for two or more viewers to observe the intended 3D stereoscopic image in the same location in space.
- In FIG. 17 of the drawings there is shown an illustration of an embodiment of the present invention. This shows how louvers may be employed to direct the correct image with optical association to the proper viewpoint as determined by sensors (item 116), so that a 3D stereoscopic image is seen. Note the 3D stereoscopic object image does not change location in space as the viewpoint is changed.
- In Fig. 18 of the drawings there is shown an illustration of an embodiment of the present invention. Louvers (item 217) with horizontal and vertical components are shown. These shall be referred to as electronically configurable light guiding louvers, or louvers. Using input from the position sensors, a computer calculates the optimum configuration of the louvers.
- the louvers may have variable pitch in more than one axis; thereby they are able to guide light from the image display through imaginary tubes (item 219) towards the intended viewpoint.
- In this case the intended viewpoint is the eye at point B.
- the eye at point A is not at an intended viewing location and therefore sees no light from the image when it is projected or guided to viewpoint B.
- a first or left image may be viewed by the left eye and a second or right image may be viewed by the right eye.
- the created images may be directed with correct optical association so that a 3D stereoscopic image is seen.
- the location from which each image is seen is limited. This permits additional viewers to also receive 3D stereoscopic images which are different from the first viewer.
- those images would be of the same 3D object image in the same location in space as viewed from each viewer's unique individual perspective.
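The louver configuration computed from the position sensors can be sketched as two pitch angles per display location, aiming each "imaginary tube" from its pixel toward the sensed viewpoint. This is an illustrative sketch under an assumed coordinate system (display plane with the viewpoint's distance along z); the function name and units are not from the patent.

```python
import math

def louver_angles(pixel_xy, viewpoint_xyz):
    """Return (horizontal, vertical) aim angles in degrees for a louver at
    pixel_xy on the display plane, pointing toward viewpoint_xyz."""
    px, py = pixel_xy
    vx, vy, vz = viewpoint_xyz              # vz: distance out from the display
    horizontal = math.degrees(math.atan2(vx - px, vz))
    vertical = math.degrees(math.atan2(vy - py, vz))
    return horizontal, vertical
```

A pixel directly in line with the viewpoint aims straight ahead, while pixels toward the display edges aim progressively inward, so louvers at different locations of the display carry different pitch angles at the same time.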
- In Fig. 19 of the drawings there is shown an illustration of an embodiment of the present invention.
- This illustrates the dual louver method in time sequence from 1 to 4.
- In sequence 1, the dual louvers direct a first image to viewpoint A.
- In sequence 2, the dual louvers direct a second image to viewpoint B.
- In sequence 3, the dual louvers direct a first image to viewpoint A.
- In sequence 4, the dual louvers direct a second image to viewpoint B.
- the viewer may be the same as in sequences 1 and 2, or may be a second viewer. In each case the image viewed has been created for the particular viewpoint. In this way multiple viewers may enjoy the 3D image regardless of their viewing orientation.
- the louver patterns in sequences 3 and 4 are slightly different from those of sequences 1 and 2. This is a technique which may be used to eliminate dark spots that would occur in the image where the same pixel would be blocked by the dual louvers. By moving the louvers from frame to frame this problem can be alleviated.
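The frame-to-frame offset just described can be sketched as shifting the blocking pattern on alternate frames, so no column stays occluded across the whole cycle. The boolean-row representation and parameter names are illustrative assumptions.

```python
def louver_pattern(width, period, frame):
    """Return one louver row as booleans (True = blocked), with the blocked
    columns offset by one position on odd-numbered frames."""
    offset = frame % 2                       # shift the pattern every other frame
    return [(i + offset) % period == 0 for i in range(width)]
```

Any column blocked on an even frame is open on the following odd frame, so the time-averaged brightness stays uniform and dark spots are avoided.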
- In Fig. 20 of the drawings there is shown an illustration of an embodiment of the present invention.
- the display (item 114) has cross sections expanded so that louvers from various locations of the display may be further illustrated.
- the viewpoint (item 530) is located directly in front of section 518 of the display (a location nearly centered in front of the display). So in order for the image to be seen from the viewpoint of item 530, the louvers of item 510 at the top left of the display must guide the light downward and towards the right.
- the louvers of item 516 at the upper right corner must guide the light from the image downwards and to the left.
- the louvers of item 512 must guide light upwards and to the right.
- the louvers of item 514 must guide the light upwards.
- Those of item 520 must guide the light upwards and to the left. Those located directly in front of the viewing location should guide the light mostly straight ahead.
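The per-region aiming rules above reduce to comparing each region's position with the viewpoint's position across the display. This small sketch captures that qualitative rule; the coordinate convention (y increasing upward, viewpoint projected onto the display plane) is an assumption for illustration.

```python
def aim_direction(region_xy, viewpoint_xy):
    """Describe qualitatively how a display region must redirect light to
    reach the viewpoint, as (vertical, horizontal) direction words."""
    rx, ry = region_xy
    vx, vy = viewpoint_xy
    vertical = "down" if ry > vy else ("up" if ry < vy else "level")
    horizontal = "right" if rx < vx else ("left" if rx > vx else "straight")
    return vertical, horizontal
```

For a viewpoint centered in front of the display, the top-left region guides light down and to the right, the top-right region down and to the left, and the region directly in front guides it straight ahead, matching the description of FIG. 20.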
- In FIG. 21 of the drawings there is shown an illustration of an embodiment of the present invention.
- the vertical component of the louvers is larger than the horizontal.
- the taller axis may be rotated to correct for a tilted head angle of the viewer.
- the taller portion is intended to coincide with the vertical axis of a viewer's face. This has the advantage of allowing more light to pass through the louvers while allowing one of a pair of viewpoints to see the image while the image is blocked from the other of the pair. By a pair of viewpoints one may consider a left and right eye.
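Rotating the taller axis to match a tilted head requires estimating the tilt from the two sensed eye positions. A minimal sketch, assuming 2D eye coordinates in the display plane (the function name and convention are illustrative, not from the patent):

```python
import math

def head_tilt_degrees(left_eye_xy, right_eye_xy):
    """Angle of the line between the two sensed eyes relative to horizontal;
    the louvers' taller axis would be rotated by the same amount to stay
    aligned with the vertical axis of the viewer's face."""
    dx = right_eye_xy[0] - left_eye_xy[0]
    dy = right_eye_xy[1] - left_eye_xy[1]
    return math.degrees(math.atan2(dy, dx))
```

A level head yields 0 degrees; eyes offset vertically yield a matching tilt angle for the louver rotation.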
- These illustrations are not meant to limit the shape or pattern of the louvers.
- In Fig. 22 of the drawings there is shown an illustration of an embodiment of the present invention. This illustrates how electronically configurable louvers may be applied so that the intended viewing location receives the correct optical image while other viewing locations do not.
- a small portion (item 602) of the display (item 114) is expanded (item 610).
- In item 610 we see configurable louvers which operate in both the vertical and horizontal directions to guide the light from the display image.
- Item 630 shows an approximate area where the light from a first image may strike the intended side of a viewer's face.
- Item 650 shows an approximate area where the light from a second image may strike the other side of a viewer's face. In this way a large area of light from the image is able to pass through the louvers to the intended viewer's eye while limiting the light from the image which would be seen at another location.
- another small portion (item 604) of the display (item 114) is expanded (item 620).
- item 620 we see configurable louvers which operate in both the vertical and horizontal directions to guide the light from the display image.
- the viewer's head is tilted at an angle relative to the display (item 114).
- the configurable louvers (item 620) now tilt to match the angle of tilt of the viewer's head.
- Item 640 shows an approximate area where the light from a first image may strike the intended side of a viewer's face.
- Item 660 shows an approximate area where the light from a second image may strike the other side of a viewer's face.
- One means to accomplish this is for the sensors to sense features which enable facial recognition and therefore the location and pairing information of the eyes.
- Another method may involve a computing device which compares locations of eyes and creates pairs via an algorithm based on distance between eyes or some other method.
- Other methods for locating paired eye positions include, but are not limited to, sensing light-reflective or light-transmitting devices located on the face or on a wearable device such as glasses, a hat, a necklace, etc.
- the means given to recognize a pair of eyes, viewpoints or facial features is for illustrative purposes only and is not meant to be limiting in any way.
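One way the distance-based pairing algorithm mentioned above might look, as a minimal sketch: the millimetre units, the assumed plausible interpupillary range, and all names are illustrative assumptions, not taken from the disclosure.

```python
from itertools import combinations

# Assumed plausible adult interpupillary distance range, in mm.
IPD_MIN, IPD_MAX = 55.0, 75.0

def pair_eyes(points):
    """Greedily pair detected eye positions (x, y, z in mm) whose
    separation falls in a plausible interpupillary range, nearest
    candidates first. Returns a list of index pairs."""
    candidates = []
    for i, j in combinations(range(len(points)), 2):
        d = sum((a - b) ** 2 for a, b in zip(points[i], points[j])) ** 0.5
        if IPD_MIN <= d <= IPD_MAX:
            candidates.append((d, i, j))
    candidates.sort()
    used, pairs = set(), []
    for _, i, j in candidates:
        if i not in used and j not in used:
            pairs.append((i, j))
            used.update((i, j))
    return pairs

# Two viewers: detections 0-1 belong together, as do 2-3.
eyes = [(0, 0, 600), (63, 0, 600), (400, 20, 650), (465, 20, 650)]
print(pair_eyes(eyes))  # [(0, 1), (2, 3)]
```

A production system would of course combine this with facial recognition or marker sensing, as the surrounding passages describe.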
- the ability to guide the light from the display to a specific area allows a privacy mode.
- This mode may use, but is not limited to, facial recognition computation, eye pattern recognition, or other means such as proximity sensing to allow viewing by one person only.
- the electronically configurable light guiding louvers of more than one axis function to channel the light from the displayed image to the eyes of a single viewer. If desired, the number of people who may view the displayed image in privacy mode may be manually increased.
- FIG. 23 of the drawings there is shown an illustration of an embodiment of the present invention.
- a handheld device is shown which may be used in both portrait and landscape modes.
- configurable louvers are used to create an auto stereoscopic 3D image.
- the method of shutter glasses may also be applied.
- a display orientation sensor is applied. This sensor may be gravity sensing, motion or inertia sensing, but is not limited to these technologies.
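A gravity-sensing orientation check of the kind described could be sketched as below; the axis conventions, sign choices, and function name are assumptions made for illustration only.

```python
def display_orientation(ax, ay):
    """Classify portrait vs. landscape from a gravity vector measured
    in the display's own frame (ax toward the display's right edge,
    ay toward its top): whichever axis carries more of gravity is
    treated as pointing down."""
    if abs(ay) >= abs(ax):
        return "portrait" if ay < 0 else "portrait-flipped"
    return "landscape" if ax < 0 else "landscape-flipped"

print(display_orientation(0.05, -0.99))  # portrait
print(display_orientation(-0.98, 0.10))  # landscape
```

On an orientation change, the display would re-map the image and reconfigure the louvers for the new axis.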
- Fig. 33 of the drawings there is shown an illustration of an embodiment of the present invention.
- a 3D stereoscopic image of a box is shown. The box is manipulated by use of a pointing tool (item 700).
- This pointing tool may have a tip (item 704) of emissive material, reflective material or other means to make its location easily read by the sensors.
- the pointer may also have one or more functional buttons (item 702). These buttons may operate in a similar fashion to buttons on a computer controller such as a mouse. By applying this pointer an object may be identified, grabbed and moved, sized, or subjected to any number of functions commonly associated with the computer mouse. The difference being that the virtual objects and the pointer may be operated in three axes or dimensions.
- a 3D stereoscopic image of a remote device is shown.
- the virtual image of the remote device in space is approximately the same for most viewing locations.
- its virtual location in space and the virtual location in space of each individual key on the remote device may be calculated by the device's computers.
- By comparing the calculated fixed virtual location with real-world objects, interaction may take place.
- a virtual keyboard, virtual touch screen, virtual pottery wheel, or virtual musical instrument may be employed.
- a pointer, body part or wearable device may be located by the sensors and their position in space may likewise be calculated or quantified.
- a wearable device such as a glove may contain position markers of reflective or emissive materials which enable sensors to accurately determine its location in space and, in the case of a glove, also that of the fingers.
- An advanced sensor may be able to detect the location of fingers without the need for gloves with position markers.
- either the method applying shutter glasses, or the method applying louvers may be used.
- keyboard entries may be made. This is similar to what occurs on a 2D screen with touch sensing. The difference being the typing takes place on a virtual image as opposed to a solid surface.
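The comparison between a tracked fingertip and the computed virtual key locations can be illustrated with a simple hit test. This is a sketch under assumed conventions (millimetre coordinates, axis-aligned key volumes); all names are hypothetical.

```python
def key_pressed(fingertip, keys, tolerance=5.0):
    """Return the label of the virtual key whose computed volume in
    space contains the tracked fingertip, or None. Each key is
    (label, (cx, cy, cz), (w, h, d)): centre and extents in mm."""
    x, y, z = fingertip
    for label, (cx, cy, cz), (w, h, d) in keys:
        if (abs(x - cx) <= w / 2 + tolerance and
                abs(y - cy) <= h / 2 + tolerance and
                abs(z - cz) <= d / 2 + tolerance):
            return label
    return None

# Two keys of a virtual keyboard floating 300 mm from the sensors.
keys = [("Q", (0, 0, 300), (15, 15, 5)),
        ("W", (20, 0, 300), (15, 15, 5))]
print(key_pressed((19, 1, 301), keys))     # W
print(key_pressed((100, 100, 100), keys))  # None
```

The same test generalises to any of the virtual interactive devices mentioned: a remote key, a game piece, or a control in a simulated cockpit.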
- either the method applying shutter glasses, or the method applying louvers may be used.
- the virtual keyboard and any other virtual object may be interacted with in a multitude of other ways. These include stretching and shrinking, twisting and turning, and any other ways a 2D touch object could be manipulated.
- the understanding is that for the 3D virtual touch object, three axes rather than two may be applied and manipulated. In this embodiment, either the method applying shutter glasses, or the method applying louvers may be used.
- the virtual keyboard or any other virtual interactive device described may be brought forth and/or removed by user gestures sensed by the systems location sensors.
- gestures sensed by the location sensors may be used for other functions, such as but not limited to turning the pages of an electronic book, changing stations on a television, or raising or lowering the volume of the display system or other components.
- FIG. 26 of the drawings there is shown an illustration of an embodiment of the present invention.
- a 3D stereoscopic image of a game (item 196) is shown.
- the 3D virtual game pieces may be created and also manipulated by any of the methods previously described. All of the properties described in illustration 25 apply.
- the display system (item 114) may be made to lie flat so as to provide a better gaming surface. In this way board games and other types of games may be played and interacted with by the user or users. Virtual worlds may be created, viewed and/or interacted with. This embodiment of the present invention makes an excellent gaming system.
- FIG. 27 of the drawings there is shown an illustration of an embodiment of the present invention.
- a 3D stereoscopic virtual cave is shown which employs the technology previously illustrated.
- the objects appear more real as they remain approximately fixed in space as the viewer and viewpoint location are changed.
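The fixed-in-space behaviour can be illustrated with a small calculation. Assuming, purely for illustration, that the display lies in the plane z = 0 with the viewer at positive z, each eye's image of a virtual point is drawn where the line from that tracked eye through the point crosses the display plane:

```python
def screen_point(eye, world_point):
    """Intersection with the display plane z = 0 of the ray from a
    tracked eye position through a fixed virtual point. Rendering each
    eye's image at its own intersection keeps the virtual point at the
    same place in space as the viewer moves."""
    ex, ey, ez = eye
    px, py, pz = world_point
    t = ez / (ez - pz)  # ray parameter where z reaches 0
    return (ex + t * (px - ex), ey + t * (py - ey))

P = (0.0, 0.0, 100.0)  # virtual point 100 mm in front of the screen
print(screen_point((0.0, 0.0, 600.0), P))    # (0.0, 0.0)
print(screen_point((200.0, 0.0, 600.0), P))  # ~(-40.0, 0.0)
```

As the eye moves right, the rendered point shifts left on the screen, which is exactly the compensation that keeps the virtual point stationary in space for every tracked viewpoint.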
- the objects in the virtual cave may be interacted with in the manner which has been described above.
- Fig. 28 of the drawings there is shown an illustration of an embodiment of the present invention.
- Varying amounts of the simulator may be simulated depending on the wants of the user. It may be that only objects outside of the control environment are simulated. However, it is possible for virtual controls, buttons, switches and other controlling devices to be simulated and interacted with, in the manner described above.
- the interior environment of the simulator may be created virtually. This enables simulators whose configuration may be controlled by applying computer software. For example, a virtual flight simulator could be used as a B-737 for one event and reconfigured as an A-320 for the next event. This would save money for the user as fewer simulators would be needed.
- the present invention may be switched to other modes of operation. These include but are not limited to prior art 3D stereoscopic imaging where the 3D stereoscopic image location varies with viewer location. This may be a useful mode for viewing prior art technology 3D imagery such as 3D movies. Also, the display may be used to view 2D images in the manner of prior art. The switching among the various 3D and 2D modes may be automatic based on the format of the viewing material. In this embodiment, either the method applying shutter glasses, or the method applying louver technologies may be used.
- the prior art in this area of technology encompasses displays of two types: one which produces a 3D stereoscopic effect when viewed through wearable shutter glasses, and a second which produces a 3D stereoscopic image through the use of light guiding louvers.
- This prior art is limited by viewing location.
- the prior art is limited to 3D stereoscopic images which may not be seen in approximately the same location as the viewpoint changes, nor when viewed by different users. This does not allow users to communicate about a 3D stereoscopic image by gestures, for example pointing.
- 3D stereoscopic images or virtual images may also be interacted with by the user(s). This is accomplished by applying location sensing technology and comparing the data with the computed 3D virtual object location.
- Prior art utilizes parallax barriers to obtain 3D stereoscopic effects.
- the prior art parallax barriers limit the eye placement of the viewer to a narrow range for large displays.
- because the louvers of the prior art function in only one axis at a time, they have difficulty sharing the 3D imagery with other viewers.
- Prior art is also limited to small devices for virtual 3D auto stereoscopic display systems.
- the instant invention improves upon the prior art by improving upon the parallax barriers.
- the electronically configurable light guiding louvers have the advantage of variable pitch and multiple axes of blocking or guiding the light from the display. This allows multiple viewers to view large screen devices and share in the 3D imagery.
- a 3D stereoscopic image may be created which remains approximately fixed in space.
- Such a virtual image may be pointed at by one or more viewers. Because the virtual image is nearly fixed in space, its virtual location may be compared with a user's finger, other body parts or a pointer. In this way a viewer may interact with a virtual 3D image by pointing or other gestures as sensed by the position sensors.
- the position sensors may be used to interpret a variety of gestures which correspond to a variety of commands. By using the position sensors, gestures may be made which cause the display device to react to the viewer. Examples include but are not limited to gestures which call for a virtual keyboard or remote to be displayed. They may also cause a station of a television to change or the volume to increase or decrease. There are many more possibilities and this list of gestures and results is not intended to be limiting in any way.
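A gesture-to-command mapping of this kind reduces to a lookup table once the sensors have classified the gesture. The gesture names and commands below are invented for illustration; a real system would emit them from its tracking pipeline.

```python
# Hypothetical gesture names mapped to display-system commands.
COMMANDS = {
    "swipe_left":  "next page",
    "swipe_right": "previous page",
    "palm_up":     "show virtual keyboard",
    "palm_down":   "hide virtual keyboard",
    "raise_hand":  "volume up",
    "lower_hand":  "volume down",
}

def dispatch(gesture):
    """Map a recognised gesture to a display-system command;
    unknown gestures are ignored."""
    return COMMANDS.get(gesture, "ignored")

print(dispatch("palm_up"))  # show virtual keyboard
print(dispatch("wave"))     # ignored
```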
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Optics & Photonics (AREA)
- Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
Abstract
The present invention relates to a stereoscopic display system and method. The system comprises: an image display panel or screen; tracking sensors; and a means for creating first and second stereoscopic images based on a viewpoint. As a result, the viewer perceives the 3D stereoscopic image as approximately fixed in space. Comparing the tracked location of external objects with the fixed virtual location of the stereoscopic image enables the objects to interact with the virtual image in real space.
Applications Claiming Priority (14)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US201361897983P | 2013-10-31 | 2013-10-31 | |
| US61/897,983 | 2013-10-31 | ||
| US201361900982P | 2013-11-06 | 2013-11-06 | |
| US61/900,982 | 2013-11-06 | ||
| US14/106,766 US10116914B2 (en) | 2013-10-31 | 2013-12-15 | Stereoscopic display |
| US14/106,766 | 2013-12-15 | ||
| US201361920755P | 2013-12-25 | 2013-12-25 | |
| US61/920,755 | 2013-12-25 | ||
| US201461934806P | 2014-02-02 | 2014-02-02 | |
| US61/934,806 | 2014-02-02 | ||
| US201462035477P | 2014-08-10 | 2014-08-10 | |
| US62/035,477 | 2014-08-10 | ||
| US14/547,555 US9883173B2 (en) | 2013-12-25 | 2014-11-19 | Stereoscopic display |
| US14/547,555 | 2014-11-19 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2015066734A1 true WO2015066734A1 (fr) | 2015-05-07 |
Family
ID=53005320
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/US2014/072419 Ceased WO2015066734A1 (fr) | 2013-10-31 | 2014-12-26 | Stereoscopic display |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2015066734A1 (fr) |
Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20100103516A1 (en) * | 2008-10-27 | 2010-04-29 | Real D | Head-tracking enhanced stereo glasses |
| US20100149182A1 (en) * | 2008-12-17 | 2010-06-17 | Microsoft Corporation | Volumetric Display System Enabling User Interaction |
| WO2012044272A1 (fr) * | 2010-09-29 | 2012-04-05 | Thomson Licensing | Automatic switching between three-dimensional and two-dimensional content for a display |
2014
- 2014-12-26 WO PCT/US2014/072419 patent/WO2015066734A1/fr not_active Ceased
Patent Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20100103516A1 (en) * | 2008-10-27 | 2010-04-29 | Real D | Head-tracking enhanced stereo glasses |
| US20100149182A1 (en) * | 2008-12-17 | 2010-06-17 | Microsoft Corporation | Volumetric Display System Enabling User Interaction |
| WO2012044272A1 (fr) * | 2010-09-29 | 2012-04-05 | Thomson Licensing | Automatic switching between three-dimensional and two-dimensional content for a display |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US10116914B2 (en) | Stereoscopic display | |
| US10469834B2 (en) | Stereoscopic display | |
| US6084594A (en) | Image presentation apparatus | |
| US10652525B2 (en) | Quad view display system | |
| EP3106963B1 (fr) | Mediated reality | |
| US10298921B1 (en) | Superstereoscopic display with enhanced off-angle separation | |
| KR20230017849A (ko) | Augmented reality guidance | |
| KR20230016209A (ko) | Interactive augmented reality experiences using position tracking | |
| US20170280134A1 (en) | Trackable glasses system that provides multiple views of a shared display | |
| EP3454174B1 (fr) | Methods, apparatuses, systems, computer programs for enabling mediated reality | |
| EP3118722A1 (fr) | Mediated reality | |
| US20160202876A1 (en) | Indirect 3d scene positioning control | |
| US20170150108A1 (en) | Autostereoscopic Virtual Reality Platform | |
| US10866820B2 (en) | Transitioning between 2D and stereoscopic 3D webpage presentation | |
| WO2008132724A1 (fr) | Method and device for three-dimensional interaction with autostereoscopic displays | |
| US20170102791A1 (en) | Virtual Plane in a Stylus Based Stereoscopic Display System | |
| WO2007100204A1 (fr) | Stereovision-based virtual reality device | |
| CN107077199B (zh) | Apparatus for presenting a virtual object on a three-dimensional display and method for controlling the apparatus | |
| US20150138184A1 (en) | Spatially interactive computing device | |
| US20180253144A1 (en) | Assisted item selection for see through glasses | |
| EP3260950A1 (fr) | Mediated reality | |
| US20180053338A1 (en) | Method for a user interface | |
| US9696842B2 (en) | Three-dimensional cube touchscreen with database | |
| JP4413203B2 (ja) | Image presentation apparatus | |
| US11443487B2 (en) | Methods, apparatus, systems, computer programs for enabling consumption of virtual content for mediated reality |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 14858877; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 14858877; Country of ref document: EP; Kind code of ref document: A1 |