HK1233091B - Stereo viewing - Google Patents
Stereo viewing
- Publication number
- HK1233091B (application HK17106490.3A)
- Authority
- HK
- Hong Kong
- Prior art keywords
- image
- source
- eye
- image source
- user
- Prior art date
Description
Digital stereo viewing of still and moving images has become commonplace, and equipment for viewing 3D (three-dimensional) movies is more widely available. Theatres offer 3D movies based on viewing the movie with special glasses that ensure that different images are seen by the left and right eye for each frame of the movie. The same approach has been brought to home use with 3D-capable players and television sets. In practice, the movie consists of two views of the same scene, one for the left eye and one for the right eye. These views are created by capturing the movie with a special stereo camera that directly produces content suitable for stereo viewing. When the views are presented to the two eyes, the human visual system creates a 3D view of the scene. This technology has the drawback that the viewing area (movie screen or television) occupies only part of the field of vision, and thus the experience of a 3D view is limited.
For a more realistic experience, devices occupying a larger area of the total field of view have been created. Special stereo viewing goggles are available that are meant to be worn on the head so that they cover the eyes and display pictures for the left and right eye with a small screen and lens arrangement. Such technology also has the advantage that it can be used in a small space, and even while on the move, compared to the fairly large TV sets commonly used for 3D viewing. For gaming purposes, there are games that are compatible with such stereo glasses and are able to create the two images required for stereo viewing of the artificial game world, thus creating a 3D view of the internal model of the game scene. The different pictures are rendered in real time from the model, and therefore this approach requires computing power, especially if the game's scene model is complex, very detailed and contains many objects.
The patent US 6,141,034 is directed to placing cameras in a multi-camera system for capturing panoramic images with an immersive experience. The views are handed off to a completely new camera pair when the user turns his/her head; that is, a completely new pair is used once the head of the user is turned.
The patent application US 2002/0110275 discloses a telepresence system in which a scene is captured by recording pixel data elements, each associated with a pixel ray vector having a direction and an intercept on a known locus in the frame of reference of the scene. The pixel data elements may be captured by operating numerous video cameras pointing in different directions on a spherical locus. A virtual viewpoint image, representing the image which would be seen from an arbitrary viewpoint looking in an arbitrary direction, can be synthesized by determining the directions of synthetic pixel ray vectors from each pixel of the virtual viewpoint image through the virtual viewpoint, and the intercepts of these vectors on the locus. Images from all of the cameras are transformed into an epipolar image comprising a series of line sets. At the playback phase, the virtual image synthesis unit transforms the epipolar images into a series of visual images for the display devices associated with each observer, based upon the viewpoint information for that display device provided by an observer viewpoint detection unit.
The international patent application WO 02/44808 discloses an imaging system for obtaining full stereoscopic spherical images of the visual environment surrounding a viewer, 360 degrees both horizontally and vertically. The system comprises an array of cameras, wherein the lenses of said cameras are situated on a curved surface, pointing out from common centers of said curved surface. The captured images are arranged and processed to create sets of stereoscopic image pairs, wherein one image of each pair is designated for the observer's right eye and the second image for his left eye, thus creating a three-dimensional perception. A stereoscopic image pair is created from the group of selected images in the following way: each of the selected images is divided into a left part and a right part according to the viewer's horizon by a line which is perpendicular to the viewer's horizon and passes through the center of the image. All the left parts which are included in the viewer's field of vision are merged into one uniform two-dimensional image that matches the viewer's field of vision. The formed image is the right image of the stereoscopic pair to be displayed to the viewer's right eye. Along the same lines, a left image is formed by merging together the right parts.
There is, therefore, a need for solutions that enable stereo viewing, that is, viewing of a 3D image.
Now there has been invented an improved method and technical equipment implementing the method, by which the above problems are alleviated. Various aspects of the invention include a method, an apparatus, and a computer program according to the independent claims. Various embodiments of the invention are disclosed in the dependent claims.
The invention relates to creating and viewing stereo images, for example stereo video images, also called 3D video. At least three camera sources with overlapping fields of view are used to capture a scene so that an area of the scene is covered by at least three cameras. At the viewer, a camera pair is chosen from the multiple cameras to create a stereo camera pair that best matches the location the eyes of the user would have if they were located at the place of the camera sources. That is, a camera pair is chosen so that the disparity created by the camera sources resembles the disparity that the user's eyes would have at that location. If the user tilts his head, or the view orientation is otherwise altered, a new pair can be formed, for example by switching one camera of the pair. The viewer device then forms the images of the video frames for the left and right eyes by picking the best source for each area of each image for realistic stereo disparity.
In the following, various embodiments of the invention will be described in more detail with reference to the appended drawings, in which
- Figs. 1a, 1b, 1c and 1d show a setup for forming a stereo image to a user;
- Fig. 2a shows a system and apparatuses for stereo viewing;
- Fig. 2b shows a stereo camera device for stereo viewing;
- Fig. 2c shows a head-mounted display for stereo viewing;
- Fig. 2d illustrates a camera device;
- Figs. 3a, 3b and 3c illustrate forming stereo images for first and second eye from image sources;
- Figs. 4a, 4b, 4c, 4d and 4e illustrate selection of image sources for creation of stereo images when head orientation is changing;
- Figs. 5a and 5b show an example of a camera device for being used as an image source;
- Fig. 5c shows an example of a microphone device for being used as an audio source;
- Figs. 6a, 6b, 6c and 6d show the use of source and destination coordinate systems for stereo viewing;
- Figs. 7a and 7b illustrate transmission of image source data for stereo viewing;
- Fig. 8 illustrates the use of synthetic image sources in a virtual reality model for creating images for stereo viewing;
- Fig. 9a shows a flow chart of a method for forming images for stereo viewing; and
- Fig. 9b shows a flow chart of a method for transmitting images for stereo viewing.
In the following, several embodiments of the invention will be described in the context of stereo viewing with 3D glasses. It is to be noted, however, that the invention is not limited to any specific display technology. In fact, the different embodiments have applications in any environment where stereo viewing is required, for example movies and television. Additionally, while the description uses a certain camera setup as an example of an image source, different camera setups and image source arrangements can be used.
In the setup of Fig. 1a , the spheres A1 and A2 are in the field of view of both eyes. The center-point O12 between the eyes and the two spheres lie on the same line. That is, from the center-point, the sphere A2 is behind the sphere A1. However, each eye sees part of sphere A2 from behind A1, because the spheres are not on the same line of view from either of the eyes.
In Fig. 1b , there is a setup shown, where the eyes have been replaced by cameras C1 and C2, positioned at the location where the eyes were in Fig. 1a . The distances and directions of the setup are otherwise the same. Naturally, the purpose of the setup of Fig. 1b is to be able to take a stereo image of the spheres A1 and A2. The two images resulting from image capture are FC1 and FC2. The "left eye" image FC1 shows the image SA2 of the sphere A2 partly visible on the left side of the image SA1 of the sphere A1. The "right eye" image FC2 shows the image SA2 of the sphere A2 partly visible on the right side of the image SA1 of the sphere A1. This difference between the right and left images is called disparity, and this disparity, being the basic mechanism with which the human visual system determines depth information and creates a 3D view of the scene, can be used to create an illusion of a 3D image.
In Fig. 1c , the creating of this 3D illusion is shown. The images FC1 and FC2 captured by the cameras C1 and C2 are displayed to the eyes E1 and E2, using displays D1 and D2, respectively. The disparity between the images is processed by the human visual system so that an understanding of depth is created. That is, when the left eye sees the image SA2 of the sphere A2 on the left side of the image SA1 of sphere A1, and respectively the right eye sees the image of A2 on the right side, the human visual system creates an understanding that there is a sphere V2 behind the sphere V1 in a three-dimensional world. Here, it needs to be understood that the images FC1 and FC2 can also be synthetic, that is, created by a computer. If they carry the disparity information, synthetic images will also be seen as three-dimensional by the human visual system. That is, a pair of computer-generated images can be formed so that they can be used as a stereo image.
The system of Fig. 2a may consist of three main parts: image sources, a server and a rendering device. A video capture device SRC1 comprises multiple (for example, 8) cameras CAM1, CAM2, ..., CAMN with overlapping fields of view so that regions of the view around the video capture device are captured by at least two cameras. The device SRC1 may comprise multiple microphones to capture the timing and phase differences of audio originating from different directions. The device may comprise a high-resolution orientation sensor so that the orientation (direction of view) of the plurality of cameras can be detected and recorded. The device SRC1 comprises or is functionally connected to a computer processor PROC1 and memory MEM1, the memory comprising computer program PROGR1 code for controlling the capture device. The image stream captured by the device may be stored on a memory device MEM2 for use in another device, e.g. a viewer, and/or transmitted to a server using a communication interface COMM1.
Alternatively or in addition to the video capture device SRC1 creating an image stream, or a plurality of such streams, one or more sources SRC2 of synthetic images may be present in the system. Such a source of synthetic images may use a computer model of a virtual world to compute the various image streams it transmits. For example, the source SRC2 may compute N video streams corresponding to N virtual cameras located at a virtual viewing position. When such a synthetic set of video streams is used for viewing, the viewer may see a three-dimensional virtual world, as explained earlier for Fig. 1d . The device SRC2 comprises or is functionally connected to a computer processor PROC2 and memory MEM2, the memory comprising computer program PROGR2 code for controlling the synthetic source device SRC2. The image stream produced by the device may be stored on a memory device MEM5 (e.g. memory card CARD1) for use in another device, e.g. a viewer, or transmitted to a server or the viewer using a communication interface COMM2.
There may be a storage, processing and data stream serving network in addition to the capture device SRC1. For example, there may be a server SERV or a plurality of servers storing the output from the capture device SRC1 or computation device SRC2. The device comprises or is functionally connected to a computer processor PROC3 and memory MEM3, the memory comprising computer program PROGR3 code for controlling the server. The server may be connected by a wired or wireless network connection, or both, to sources SRC1 and/or SRC2, as well as the viewer devices VIEWER1 and VIEWER2 over the communication interface COMM3.
For viewing the captured or created video content, there may be one or more viewer devices VIEWER1 and VIEWER2. These devices may have a rendering module and a display module, or these functionalities may be combined in a single device. The devices may comprise or be functionally connected to a computer processor PROC4 and memory MEM4, the memory comprising computer program PROGR4 code for controlling the viewing devices. The viewer (playback) devices may consist of a data stream receiver for receiving a video data stream from a server and for decoding the video data stream. The data stream may be received over a network connection through communications interface COMM4, or from a memory device MEM6 like a memory card CARD2. The viewer devices may have a graphics processing unit for processing of the data to a suitable format for viewing as described with Figs. 1c and 1d . The viewer VIEWER1 comprises a high-resolution stereo-image head-mounted display for viewing the rendered stereo video sequence. The head-mounted device may have an orientation sensor DET1 and stereo audio headphones. The viewer VIEWER2 comprises a display enabled with 3D technology (for displaying stereo video), and the rendering device may have a head-orientation detector DET2 connected to it. Any of the devices (SRC1, SRC2, SERVER, RENDERER, VIEWER1, VIEWER2) may be a computer or a portable computing device, or be connected to such. Such rendering devices may have computer program code for carrying out methods according to various examples described in this text.
The system described above may function as follows. Time-synchronized video, audio and orientation data is first recorded with the capture device. This can consist of multiple concurrent video and audio streams as described above. These are then transmitted immediately or later to the storage and processing network for processing and conversion into a format suitable for subsequent delivery to playback devices. The conversion can involve post-processing steps to the audio and video data in order to improve the quality and/or reduce the quantity of the data while preserving the quality at a desired level. Finally, each playback device receives a stream of the data from the network, and renders it into a stereo viewing reproduction of the original location which can be experienced by a user with the head mounted display and headphones.
With a novel way to create the stereo images for viewing as described below, the user may be able to turn their head in multiple directions, and the playback device is able to create a high-frequency (e.g. 60 frames per second) stereo video and audio view of the scene corresponding to that specific orientation as it would have appeared from the location of the original recording.
For using the best image sources, a model of camera and eye positions is used. The cameras have positions in the camera space, and the positions of the eyes are projected into this space so that the eyes appear among the cameras. A realistic parallax (distance between the eyes) is employed. For example, in a regular 8-camera setup, where all the cameras are located on a sphere, regularly spaced, the eyes may be projected onto the sphere as well. The solution first selects the closest camera to each eye. Head-mounted displays can have a large field of view per eye, such that there is no single image (from one camera) which covers the entire view of an eye. In this case, a view must be created from parts of multiple images, using a known technique of "stitching" together images along lines which contain almost the same content in the two images being stitched together. Fig. 3a shows the two displays for stereo viewing. The image of the left eye display is put together from image data from cameras IS2, IS3 and IS6. The image of the right eye display is put together from image data from cameras IS1, IS3 and IS8. Notice that the same image source IS3 is in this example used for both the left eye and the right eye image, but this is done so that the same region of the view is not covered by camera IS3 in both eyes. This ensures proper disparity across the whole view - that is, at each location in the view, there is a disparity between the left and right eye images.
The stitching point is changed dynamically for each head orientation to maximize the area around the central region of the view that is taken from the camera nearest to the eye position. At the same time, care is taken to ensure that different cameras are used for the same regions of the view in the two images for the different eyes. In Fig. 3b , the regions PXA1 and PXA2 that correspond to the same area in the view are taken from different cameras IS1 and IS2, respectively. The two cameras are spaced apart, so the regions PXA1 and PXA2 show the effect of disparity, thereby creating a 3D illusion in the human visual system. Seams STITCH1 and STITCH2 (which can be more visible) are also kept away from the center of the view, because the nearest camera will typically cover the area around the center. This method leads to dynamic choosing of the pair of cameras used for creating the images for a certain region of the view, depending on the head orientation. The choosing may be done for each pixel and each frame, using the detected head orientation.
The stitching is done with an algorithm ensuring that all stitched regions have proper stereo disparity. In Fig. 3c , the left and right images are stitched together so that the objects in the scene continue across the areas from different camera sources. For example, the closest cube in the scene has been taken from one camera to the left eye image, and from two different cameras to the right eye view, and stitched together. There is a different camera used for all parts of the cube for the left and the right eyes, which creates disparity (the right side of the cube is more visible in the right eye image).
The same camera image may be used partly in both the left and the right eye, but not for the same region. For example, the right side of the left eye view can be stitched from camera IS3 and the left side of the right eye view can be stitched from the same camera IS3, as long as those view areas are not overlapping and different cameras (IS1 and IS2) are used for rendering those areas in the other eye. In other words, the same camera source (in Fig. 3a , IS3) may be used in stereo viewing for both the left eye image and the right eye image. In traditional stereo viewing, on the contrary, the left camera is used for the left image and the right camera for the right image. Thus, the present method allows the source data to be utilized more fully. This can be utilized in the capture of video data, whereby the images captured by different cameras at different time instances (with a certain sampling rate, like 30 frames per second) are used to create the left and right stereo images for viewing. This may be done in such a manner that the same camera image captured at a certain time instance is used for creating part of an image for the left eye and part of an image for the right eye, the left and right eye images being used together to form one stereo frame of a stereo video stream for viewing. At different time instances, different cameras may be used for creating part of the left eye and part of the right eye frame of the video. This enables much more efficient use of the captured video data.
In other words, locations of a first and a second virtual eye corresponding to said eyes of the user are determined in a coordinate system using the head orientation, and then the image sources are selected based on the locations of the virtual eyes with respect to image source locations in the coordinate system.
An example of a rotational transformation Rx of coordinates around the x-axis by an angle γ (also known as the pitch angle) is defined by the rotation matrix

    Rx = | 1     0        0    |
         | 0   cos γ   -sin γ  |
         | 0   sin γ    cos γ  |
In a similar manner rotations Ry (for yaw) and Rz (for roll) around the different axes can be formed. As a general rotation, a matrix multiplication of the three rotations by R=Rx Ry Rz can be formed. This rotation matrix can then be used to multiply any vector in a first coordinate system according to v2 = R v1 to obtain the vector in the destination coordinate system.
An example of transforming the source and eye coordinates is given in the following. All vectors are vectors in three-dimensional space and described as (x, y, z). The origin is in (0, 0, 0). All image sources have an orientation defined by yaw, pitch and roll around the origin.
For each source, the position vector is calculated:
- Create a position vector for the source and initialize it with (0, 0, 1)
- Make an identity transformation matrix
- Multiply the matrix by another that rotates coordinates around the y-axis by the amount of yaw
- Multiply the matrix by another that rotates coordinates around the x-axis by the amount of pitch
- Multiply the matrix by another that rotates coordinates around the z-axis by the amount of roll
- Transform the position vector with matrix multiplication using the matrix, the matrix applied from the left in the multiplication.
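The per-source steps above, together with the rotation matrices described in the preceding paragraphs, can be sketched as follows. This is an illustrative Python sketch, not the patent's implementation; the axis and sign conventions of the rotation matrices are assumptions, since the text does not fix them.

```python
import math

def rot_x(a):  # rotation around the x-axis (pitch)
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(a):  # rotation around the y-axis (yaw)
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rot_z(a):  # rotation around the z-axis (roll)
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def mat_vec(m, v):  # apply the matrix from the left: m * v
    return [sum(m[i][k] * v[k] for k in range(3)) for i in range(3)]

def source_position(yaw, pitch, roll):
    """Steps from the text: start from (0, 0, 1), accumulate the yaw,
    pitch and roll rotations, then transform the vector with the
    matrix applied from the left."""
    m = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]  # identity transformation
    m = mat_mul(m, rot_y(yaw))
    m = mat_mul(m, rot_x(pitch))
    m = mat_mul(m, rot_z(roll))
    return mat_vec(m, [0, 0, 1])
```

With zero angles the source stays at the initial position (0, 0, 1); a 90-degree yaw moves it onto the x-axis, as expected for a rotation around y.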
For an eye, calculate the position vector:
- Create a position vector for the eye and initialize it with (0, 0, 1)
- Take the view matrix that is used for rendering the sources according to the viewing direction (head orientation) and invert it. (To illustrate why the view matrix is inverted: for example, when the viewing direction is rotated 10 degrees around the y-axis, the sources need to be rotated -10 degrees around the y-axis. In a similar manner, if one looks at an object and rotates one's head to the right, the object in the view moves to the left. Therefore the rotation applied to the imagined eye position may be taken as the inverse of the rotation applied to the sources/view.)
- Rotate the inverted view matrix around the y-axis (the axis that points up in the head coordinate system) according to the simulated eye disparity (as described below).
- Transform the position vector with the resulting matrix, the matrix applied from the left in the multiplication.
- Calculate the distance between the eye position and the sources and pick the shortest distance (see below).
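The eye-side steps can be sketched in the same style. The view matrix is assumed here to be a pure rotation, so its inverse is its transpose; `disparity_angle` stands for the per-eye rotation around the head's y-axis (its sign selecting the left or right eye). All names are illustrative, not taken from the patent.

```python
import math

def transpose(m):
    return [[m[j][i] for j in range(3)] for i in range(3)]

def rot_y(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def mat_vec(m, v):
    return [sum(m[i][k] * v[k] for k in range(3)) for i in range(3)]

def virtual_eye(view_matrix, disparity_angle):
    """Start from (0, 0, 1), invert the view rotation, rotate around
    the y-axis by the simulated eye disparity, and transform."""
    inv_view = transpose(view_matrix)  # inverse of a pure rotation
    m = mat_mul(inv_view, rot_y(disparity_angle))
    return mat_vec(m, [0, 0, 1])

def closest_source(eye_pos, source_positions):
    """Pick the index of the source with the shortest distance."""
    return min(range(len(source_positions)),
               key=lambda i: math.dist(eye_pos, source_positions[i]))
```

For an identity view matrix and zero disparity the virtual eye sits at (0, 0, 1), so a source at that position is picked over one on the x-axis.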
An imagined position of an eye (left or right) is positioned at the same distance from the center point as the cameras, and rotated around the center point around all of the x, y and z axes according to the relative orientation of the viewer's head-mounted device compared to the capture device's orientation. As shown in Figs. 4a and 4b , this results in the position of an imaginary middle eye MEYE in the middle of the face (corresponding to O12 of Fig. 1a ). The position of the viewer's imaginary middle eye is then rotated around the view's y-axis (aligned with the viewer's head, from the chin to the top of the head) to get the position of the virtual left eye LEYE or right eye REYE. To simulate the disparity of human eyes, this rotation is done in the corresponding direction, depending on whether the view is for the left or the right eye. The angle between the virtual left and right eye may be between 80 and 120 degrees, e.g. approximately 100 degrees. Angles larger than 90 degrees may prevent picking the same camera for the same region for both eyes, and angles smaller than 110 degrees may prevent picking cameras with too large an inter-camera distance.
The sources (e.g. cameras) are then ordered according to the distance between the source and the virtual eye, and the view is rendered so that pixels are picked from a source that, respectively: A) covers that pixel, and B) has the smallest distance to the virtual eye when compared against all the sources that fulfill condition A. In other words, an image source for a pixel of an image for a first eye of the user is determined to be a close image source that satisfies a closeness criterion (e.g. being the closest source) to the virtual eye corresponding to said first eye, where the close image source captures the scene portion corresponding to the pixel. If the close image source does not capture the scene portion corresponding to the pixel, an image source for that pixel is selected to be another source than the close image source to said virtual eye corresponding to said first eye.
- 1. List all the sources that cover the current pixel
- 2. From all the sources on the list, pick the one that matches best what a person would see with that specific eye if his head were positioned at the sources' center point and rotated according to the head-mounted display's (viewer's head) orientation
- 3. Adjust the imagined person's eye disparity to make sure that the source is not the same for the left and the right eye, and that the picked sources have a disparity as close as possible to that of the human eyes (e.g. 64 mm). The amount of this adjustment depends on the available sources and their positions. The adjustment may also be done beforehand. If the closest camera for the first eye has been found e.g. 10 degrees lower in pitch than the first eye, the second virtual eye may also be rotated 10 degrees lower in pitch. This may be done to, at least in some cases, avoid tilting (creating a roll of) the parallax line between the cameras that would result from the other eye picking a camera that is higher in pitch.
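Steps 1 and 2 above can be sketched as a per-pixel selection: restrict to the sources whose field of view covers the pixel's direction, then pick the one closest to the virtual eye. Approximating coverage by the angle between the source direction and the pixel ray is an assumption for illustration; a real implementation would use the lens projection.

```python
import math

def covers(source_dir, fov_deg, pixel_dir):
    """True if the pixel ray falls within the source's field of view."""
    dot = sum(a * b for a, b in zip(source_dir, pixel_dir))
    norm = (math.sqrt(sum(a * a for a in source_dir)) *
            math.sqrt(sum(a * a for a in pixel_dir)))
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
    return angle <= fov_deg / 2.0

def pick_source(sources, eye_pos, pixel_dir):
    """sources: list of (position, direction, fov_deg) tuples.
    Step 1: list sources covering the pixel; step 2: pick the one
    closest to the virtual eye. Returns None if no source covers it."""
    candidates = [s for s in sources if covers(s[1], s[2], pixel_dir)]
    if not candidates:
        return None
    return min(candidates, key=lambda s: math.dist(eye_pos, s[0]))
```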
The virtual positions may be pre-mapped with a lookup table to closest-camera lists, and the mapping may have a granularity of e.g. 1 mm, inside which all positions share the same list. When the pixels for the images to be displayed are being rendered, a stencil buffer may be employed so that the pixels from the closest camera are rendered first and marked in the stencil buffer as rendered. Then, a stencil test is carried out to determine the non-rendered pixels that can be rendered from the next closest camera; the pixels from the next closest camera are rendered and marked, and so on, until the whole image has been rendered. That is, regions of an image for an eye are rendered so that the regions correspond to image sources, wherein the regions are rendered in order of closeness of the image sources to the virtual eye corresponding to said eye in the image source coordinate system.
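The stencil-buffer idea can be illustrated with a CPU-side sketch: regions are rendered in order of source closeness, and already-rendered pixels are marked so that nearer sources take priority. The data layout (a dict standing in for the stencil buffer) is an assumption for illustration; on a GPU this would be done with stencil tests.

```python
def render_in_order(sorted_sources, coverage, all_pixels):
    """sorted_sources: source ids ordered from closest to furthest.
    coverage: source id -> set of pixels that source can render.
    Returns (pixel -> source mapping, set of pixels left unrendered)."""
    rendered = {}  # acts like the stencil buffer: pixel -> source
    for src in sorted_sources:
        for px in coverage[src]:
            if px not in rendered:  # stencil test: skip marked pixels
                rendered[px] = src
    missing = set(all_pixels) - set(rendered)
    return rendered, missing
```

With two sources "a" (closest) and "b", a pixel covered by both is rendered from "a", and "b" only fills in what "a" left unrendered.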
In order to create a smooth "seam" (spatial transition) from one camera area to another, the edge region of a camera may be rendered using alpha channel rendering as follows. For each pixel, the (red-green-blue) color values of the pixel are computed from the color values of source pixels, e.g. by interpolation or by using the color values of the closest source pixel. For most pixels, the alpha value (opaqueness) is one. For the pixels on the edge of the source, the alpha value may be set to less than one. This means that the color values from the next overlapping source and the earlier computed color values are mixed, creating a smoother stitch. For the edge areas, rendering may thus start from the furthest camera that covers the pixel. That is, regions of the images may be combined by blending the edge areas of the regions.
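The alpha mixing at a seam reduces to standard alpha compositing: an edge pixel with alpha below one mixes its color with what was already rendered from the farther source underneath. A minimal sketch, with colors as (r, g, b) tuples in [0, 1]:

```python
def blend(under, over, alpha):
    """Alpha-composite 'over' (the nearer source's edge pixel, with
    opaqueness 'alpha') on top of 'under' (the color already rendered
    from the farther overlapping source)."""
    return tuple(alpha * o + (1.0 - alpha) * u for o, u in zip(over, under))
```

At alpha = 1 the nearer source fully replaces the underlying color; at the edge, alpha below one produces the smoother stitch described above.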
In the above, two optional optimizations, namely the use of a stencil buffer and alpha channel smoothing, have been described. In this manner, the functionalities of a graphics processor may be utilized.
When the user turns his head (there is rotation represented by pitch, yaw and roll values), the head orientation of the user is determined again to obtain a second head orientation. This may happen e.g. so that there is a head movement detector in the head-mounted display. To form image regions corresponding to the first scene region, image sources are again chosen, as shown in Fig. 4d . Because the head has turned, the second image source (IS2) and now a third image source (IS8) are chosen based on the second head orientation, the second and third image source forming a stereo image source. This is done as explained above. Color values of a third region of pixels (PXA3) corresponding to the first region of a scene are formed using the third image source (IS8), the color values of the third region of pixels (PXA3) being formed into a third image for displaying to the left eye. Color values of a fourth region of pixels (PXA4) corresponding to the same first region of a scene are still formed using the second image source (IS2), the color values of the fourth region of pixels being formed into a fourth image for displaying to the right eye.
In this manner, the detected or determined head orientation affects the choosing of image sources that are used to form an image for an eye. The pair of image sources (cameras) used to create the stereo image of a region of a scene may change from one time instance to another if the user turns his head or the camera view is rotated. This is because the same image source may not be the closest image source to the (virtual) eye at all times.
When reproducing a stereo view for a specific view orientation based on input from multiple cameras the key is to have parallax between the cameras. It has been noticed that this parallax however may cause a jump in the image region (and the disparity) between two successive frames when the camera pair for the image region changes due to a change in the viewing angle (head orientation). This jump can disturb the viewer and reduce the fidelity of the reproduction. In Fig. 4c , the left image is rendered from cameras IS1, IS3 and IS7, and the right image from cameras IS2, IS3 and IS6. When the user tilts his head to the left, the images are made to naturally rotate counterclockwise. However, the position of the eyes with respect to the sources is also changing. In Fig. 4d , one camera (IS7) has been changed (to IS8) for the left image. The image from IS7 is slightly different from IS8, and thus, when the user tilts his head, the camera change may cause a noticeable change in the disparity in the lower part of the image.
A technique used in this solution is to cross-blend between the two camera pairs during multiple rendered frames, adjusting the timing and duration of the cross-blend according to the angular velocity of the viewing direction. The aim is to do the cross-blended jump when the viewing direction is changing rapidly, as there is then natural motion blur already and the user is not focused on any specific point. The duration of the cross-blend may also be adjusted according to the angular velocity, so that in slow motion the cross-blend is done over a longer period of time and in faster motion the cross-blend duration is shorter. This method reduces the visibility of the jump from one camera pair to another. The cross-blending can be achieved by weighted summing of the affected image region values. For example, as shown in Fig. 4e , the area to be blended may be chosen to be the combined area of IS7 and IS8. The area may also be chosen to be the area of IS8 only, or IS7 only. This method has been evaluated to reduce the noticeability of the jump from one camera pair to another, especially when viewed with a head-mounted display. In other words, to improve video image quality, a temporal transition may be created by blending from an image formed using a first image source to an image formed using another image source. The duration of the temporal transition blending may be adjusted by using information on head movement speed, e.g. angular velocity.
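The cross-blend can be sketched as a time-varying weighted sum, with the blend duration shrinking as head angular velocity grows so that the switch happens under natural motion blur. The specific constants and the inverse-velocity formula are illustrative assumptions; the patent only states that faster motion should give a shorter blend.

```python
def crossblend_duration(angular_velocity, base=0.5, min_duration=0.1):
    """Blend duration in seconds: faster head motion -> shorter blend."""
    return max(min_duration, base / (1.0 + angular_velocity))

def crossblend_weight(t, duration):
    """Weight of the new camera pair, t seconds after the switch,
    ramping linearly from 0 to 1 over the blend duration."""
    return min(1.0, max(0.0, t / duration))

def blended_value(old_value, new_value, w):
    """Weighted sum of the affected image region values."""
    return (1.0 - w) * old_value + w * new_value
```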
In the change of the source, a hysteresis of change may be applied. By hysteresis it is meant that once a change from a first source to a second source has been applied due to a determination that the second source is closer to a virtual eye than the first source, a change back to the first source is not made as easily as the first change. That is, if the head orientation returns to the orientation right before the change, a change back to the first source is not effected. Changing back to the first source needs a larger change in head orientation, so that the first source is clearly closer to the virtual eye than the second source. Such use of hysteresis may prevent flickering caused by rapid switching of cameras back and forth at orientations where the first and second sources are almost equally close to the virtual eye.
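A minimal sketch of the hysteresis rule follows; the distance margin value and function names are illustrative assumptions, not taken from this description:

```python
def select_source(current_source, dist_current, candidate_source,
                  dist_candidate, margin=0.1):
    """Switch to the candidate source only if it is closer to the virtual
    eye than the current source by more than `margin` (hysteresis).

    Distances are measured from the virtual eye to each source; the margin
    is the extra closeness the candidate must show before a switch occurs,
    which suppresses flicker near the crossover orientation.
    """
    if dist_candidate < dist_current - margin:
        return candidate_source
    return current_source
```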
It needs to be understood that cross-blending may also happen so that the image sources for the whole area are changed, which results in the whole area being cross-blended.
Covering every point around the capture device with at least two cameras would require a very large number of cameras in the capture device. A novel technique used in this solution is to make use of lenses with a field of view of 180 degrees (a hemisphere) or greater, and to arrange the cameras in a carefully selected arrangement around the capture device. Such an arrangement is shown in Fig. 5a, where the cameras have been positioned at the corners of a virtual cube, having orientations DIR_CAM1, DIR_CAM2, ..., DIR_CAMN essentially pointing away from the center point of the cube.
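The cube-corner orientations DIR_CAM1, ..., DIR_CAMN can be computed as unit vectors from the cube center toward its corners. The sketch below assumes a cube centered at the origin; it is illustrative only:

```python
import math

def cube_corner_directions():
    """Unit vectors from the center of a cube to its eight corners,
    one candidate camera orientation per corner."""
    corners = [(x, y, z) for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)]
    n = math.sqrt(3.0)  # length of the vector (±1, ±1, ±1)
    return [(x / n, y / n, z / n) for x, y, z in corners]
```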
Overlapping super-wide field of view lenses may be used so that a camera can serve both as the left eye view of one camera pair and as the right eye view of another camera pair. This halves the number of cameras needed. As a surprising advantage, reducing the number of cameras in this manner increases the stereo viewing quality, because it also allows the left eye and right eye cameras to be picked arbitrarily among all the cameras, as long as they have enough overlapping view with each other. Using this technique with different numbers of cameras and different camera arrangements, such as a sphere or Platonic solids, enables picking the closest matching camera for each eye (as explained earlier), achieving also vertical parallax between the eyes. This is beneficial especially when the content is viewed using a head mounted display. The described camera setup, together with the stitching technique described earlier, may allow stereo viewing to be created with higher fidelity and at a smaller expense for the camera device.
The wide field of view allows image data from one camera to be selected as source data for different eyes depending on the current view direction, minimizing the number of cameras needed. The cameras can be spaced in a ring of 5 or more around one axis if high image quality above and below the device is not required, and if view orientations tilted away from perpendicular to the ring axis are not needed.
In case high quality images and free view tilt in all directions are required, a Platonic solid shape must be used: either a cube (with 6 cameras), an octahedron (with 8 cameras) or a dodecahedron (with 12 cameras). Of these, the octahedron, or equivalently the corners of a cube (Fig. 5a), is a good choice since it offers a good trade-off between minimizing the number of cameras and maximizing the number of camera-pair combinations available for different view orientations. An actual camera device built with 8 cameras is shown in Fig. 5b. The camera device uses 185-degree wide angle lenses, so that the total coverage of the cameras is more than 4 full spheres. This means that all points of the scene are covered by at least 4 cameras. The cameras have orientations DIR_CAM1, DIR_CAM2, ..., DIR_CAMN pointing away from the center of the device.
Even with fewer cameras such over-coverage may be achieved; e.g. with 6 cameras and the same 185-degree lenses, 3x coverage is obtained. When a scene is being rendered and the closest cameras are being chosen for a certain pixel, this over-coverage means that there are always at least 3 cameras covering a point, and consequently at least 3 different camera pairs can be formed for that point. Thus, depending on the view orientation (head orientation), a camera pair with good parallax may be found more easily.
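The over-coverage can be checked numerically: a camera covers a viewing direction if the angle between the direction and the camera's optical axis is at most half the field of view. The sketch below, including the 6-camera axis-aligned arrangement used in the test, is an illustrative assumption:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def coverage_count(direction, camera_axes, fov_deg=185.0):
    """Number of cameras whose field of view contains the given unit
    `direction`. Both the direction and camera axes are unit vectors;
    coverage holds when the angle to the optical axis is at most
    half the field of view."""
    half = math.radians(fov_deg / 2.0)
    return sum(1 for axis in camera_axes
               if dot(direction, axis) >= math.cos(half))
```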
The camera device may comprise at least three cameras in a regular or irregular setting, located in such a manner with respect to each other that any pair of cameras of said at least three cameras has a disparity for creating a stereo image. The at least three cameras have overlapping fields of view such that an overlap region for which every part is captured by said at least three cameras is defined. Any pair of cameras of the at least three cameras may have a parallax corresponding to the parallax of human eyes for creating a stereo image. For example, the parallax (distance) between the pair of cameras may be between 5.0 cm and 12.0 cm, e.g. approximately 6.5 cm. The at least three cameras may have different directions of optical axis. The overlap region may have a simply connected topology, meaning that it forms a contiguous surface with no holes, or essentially no holes, so that the disparity can be obtained across the whole viewing surface, or at least for the majority of the overlap region. The field of view of each of said at least three cameras may approximately correspond to a half sphere. The camera device may comprise three cameras, the three cameras being arranged in a triangular setting, whereby the directions of optical axes between any pair of cameras form an angle of less than 90 degrees. The at least three cameras may comprise eight wide-field cameras positioned essentially at the corners of a virtual cube, each having a direction of optical axis essentially from the center point of the virtual cube to the corner in a regular manner, wherein the field of view of each of said wide-field cameras is at least 180 degrees, so that each part of the whole sphere view is covered by at least four cameras (see Fig. 5b).
A sound stream matching the position of the virtual ear may be created from the recordings of multiple microphones using multiple techniques. One technique is to choose the single original sound source closest to each virtual ear. However, this gives a spatial movement resolution limited to the original number of microphones. A better technique is to use well-known audio beam-forming algorithms to combine the recordings from sets of 2 or more microphones and create synthetic intermediate audio streams corresponding to multiple focused lobes of space around the capture device. During rendering, these intermediate streams are each filtered using a head-related transfer function (HRTF) corresponding to their current location relative to the virtual ear in a virtual head matching the current user head orientation, and then summed together to give a final simulated stream which matches more closely the stream that would have been heard by an ear at the same position as the virtual ear. A head-related transfer function is a transfer function that tells how a sound from a point in space is heard by an ear. Two head-related transfer functions (for the left and right ear) can be used to form a stereo sound that appears to come from a certain direction and distance. Multiple sound sources from different directions and distances can simply be summed up to obtain the combined stereo sound from these sources.
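As a much-simplified stand-in for HRTF filtering (a real HRTF is a direction- and frequency-dependent filter per ear, not a single gain), the following sketch sums sources into a stereo pair using constant-power panning gains, with each source azimuth rotated by the listener's head yaw. All names and the pan law are illustrative assumptions:

```python
import math

def pan_gains(azimuth):
    """Crude stand-in for an HRTF: constant-power panning gains
    (left, right) for a source at `azimuth` radians, where 0 is
    straight ahead and positive azimuth is to the right."""
    theta = (azimuth + math.pi / 2) / 2  # map [-pi/2, pi/2] to [0, pi/2]
    return math.cos(theta), math.sin(theta)

def render_stereo(sources, head_yaw):
    """Sum per-source mono samples into one stereo sample pair,
    compensating each source azimuth for the listener's head yaw."""
    left = right = 0.0
    for sample, azimuth in sources:
        gl, gr = pan_gains(azimuth - head_yaw)
        left += gl * sample
        right += gr * sample
    return left, right
```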
The orientation correction described below for video is also applied to audio, in order to optionally cancel out motion of the capture device when the viewer's head is not moving.
The immersive experience of 3D content viewed with a head mounted display comes from the way the user is able to look around by turning his head, with the content seen correctly according to the head orientation. If the capture device has moved while capturing (for example when mounted on the helmet of a scuba diver or on a branch of a tree), the movement will affect the viewing angle of the user independently of the viewer's head orientation. This has been noticed to break the immersion and make it hard for the user to focus on a certain point or viewing angle.
The video data for the whole scene may need to be transmitted (and/or decoded at the viewer), because during playback the viewer needs to respond immediately to the angular motion of the viewer's head and render the content from the correct angle. To be able to do this, the whole 360-degree panoramic video needs to be transferred from the server to the viewing device, as the user may turn his head at any time. This requires a large amount of data to be transferred, which consumes bandwidth and requires decoding power.
A technique used in this application is to report the current and predicted future viewing angle back to the server with view signaling, and to allow the server to adapt the encoding parameters according to the viewing angle. The server can transfer the data so that visible regions (active image sources) use more of the available bandwidth and have better quality, while a smaller portion of the bandwidth (and lower quality) is used for the regions not currently visible or expected to be visible shortly based on the head motion (passive image sources). In practice this would mean that when a user quickly turns his head significantly, the content would at first have worse quality but would then improve as soon as the server has received the new viewing angle and adapted the stream accordingly. An advantage may be that while head movement is small, the image quality is improved compared to the case of a static bandwidth allocation spread equally across the scene. This is illustrated in Fig. 7b, where active source signals V1, V2, V5 and V7 are coded with better quality than the rest of the source signals (passive image sources) V3, V4, V6 and V8.
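A simple sketch of the bandwidth split between active and passive sources follows; the 80/20 share and the function name are illustrative assumptions, not values from this description:

```python
def allocate_bitrate(sources, active_ids, total_kbps, active_share=0.8):
    """Split a total bitrate budget so that active (visible) image
    sources share `active_share` of the bandwidth and passive
    (invisible) sources share the remainder."""
    active = [s for s in sources if s in active_ids]
    passive = [s for s in sources if s not in active_ids]
    alloc = {}
    if active:
        per_active = total_kbps * active_share / len(active)
        for s in active:
            alloc[s] = per_active
    if passive:
        per_passive = total_kbps * (1.0 - active_share) / len(passive)
        for s in passive:
            alloc[s] = per_passive
    return alloc
```

With the Fig. 7b example (V1, V2, V5, V7 active out of eight sources), each active source receives four times the bitrate of each passive source under the assumed 80/20 split.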
In broadcasting cases (with multiple viewers), the server may broadcast multiple streams, each having a different area of the spherical panorama heavily compressed, instead of one stream where everything is equally compressed. The viewing device may then choose, according to the viewing angle, which stream to decode and view. This way the server does not need to know about any individual viewer's viewing angle, and the content can be broadcast to any number of receivers.
To save bandwidth, the image data may be processed so that part of the spherical view is transferred in lower quality. This may be done at the server e.g. as a preprocessing step so that the computational requirements at transmission time are smaller.
In case of a one-to-one connection between the viewer and the server (i.e. not broadcast), the part of the view that is transferred in lower quality is chosen so that it is not visible at the current viewing angle. The client may continuously report its viewing angle back to the server. At the same time, the client can also send back other hints about the quality and bandwidth of the stream it wishes to receive.
In case of broadcasting (a one-to-many connection), the server may broadcast multiple streams where different parts of the view are transferred in lower quality, and the client then selects the stream it decodes and views so that the lower quality area is outside the view at its current viewing angle.
Some ways to lower the quality of a certain area of the spherical view include for example:
- Lowering the spatial resolution and/or scaling down the image data;
- Lowering color coding resolution or bit depth;
- Lowering the frame rate;
- Increasing the compression; and/or
- Dropping the additional sources for the pixel data and keeping only one source for the pixels, effectively making that region monoscopic instead of stereoscopic.
All these can be done individually, in combinations, or even all at the same time, for example on a per-source basis by breaking the stream into two or more separate streams that are either high quality or low quality streams and contain one or more sources per stream.
These methods can also be applied even if all the sources are transferred in the same stream. For example, a stream that contains 8 sources in an octahedral arrangement can reduce the bandwidth significantly by keeping intact the 4 sources that cover the current viewing direction completely (and more), dropping 2 of the remaining 4 sources completely, and scaling down the remaining two. In addition, the server can update those two low quality sources only every other frame, so that the compression algorithm can compress the unchanged sequential frames very tightly, and possibly set the compression's region of interest to cover only the 4 intact sources. By doing this the server manages to keep all the visible sources in high quality while significantly reducing the required bandwidth by making the invisible areas monoscopic, lower resolution, lower frame rate and more compressed. This will be visible to the user if he/she rapidly changes the viewing direction, but then the client will adapt to the new viewing angle and select the stream(s) that have the new viewing angle in high quality, or in the one-to-one streaming case the server will adapt the stream to provide high quality data for the new viewing angle and lower quality for the sources that are hidden.
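The 8-source tiering described above (4 kept intact, 2 scaled down, 2 dropped) can be sketched as a simple partition of the sources ordered by closeness to the viewing direction; the function name and tier labels are illustrative assumptions:

```python
def tier_sources(sources_by_closeness):
    """Partition 8 sources, ordered from closest to the current viewing
    direction to farthest, into quality tiers: the 4 closest are kept
    intact, the next 2 are scaled down, and the last 2 are dropped."""
    if len(sources_by_closeness) != 8:
        raise ValueError("expected 8 sources in an octahedral arrangement")
    return {
        "intact": sources_by_closeness[:4],
        "downscaled": sources_by_closeness[4:6],
        "dropped": sources_by_closeness[6:],
    }
```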
Synthetic 3D content can be rendered from the internal model of the scene using a graphics processing unit for interactive playback. Such an approach is common e.g. in computer games. However, the complexity and realism of such content is always limited by the amount of local processing power available, which is much less than would be available for non-live rendering.
In contrast, pre-rendered 3D films with computer-animated 3D content are conventionally delivered with a fixed viewpoint encoded into pairs of stereo images. At best, the viewer can manually select a pair to his liking, although in a cinema environment only one pair is available. These approaches do not have the interactive potential of locally rendered content.
At the viewing device, the wide-angle synthetic source signals may be decoded, and the stereo images of the synthetic world may be created by choosing the left and right eye source signals and possibly creating the images by the stitching method described earlier, if there is need for such stitching. The result is that each viewer of this content can be inside the virtual world of the film, able to look in all directions, even while the film is paused.
The various embodiments of the invention can be implemented with the help of computer program code that resides in a memory and causes the relevant apparatuses to carry out the invention. For example, a device may comprise circuitry and electronics for handling, receiving and transmitting data, computer program code in a memory, and a processor that, when running the computer program code, causes the device to carry out the features of an embodiment. Yet further, a network device like a server may comprise circuitry and electronics for handling, receiving and transmitting data, computer program code in a memory, and a processor that, when running the computer program code, causes the network device to carry out the features of an embodiment.
It is obvious that the present invention is not limited solely to the above-presented embodiments, but it can be modified within the scope of the appended claims.
Claims (14)
- A method, comprising:- determining head orientation of a user to obtain a first head orientation,- selecting a first image source (IS7), a second image source (IS2) based on said first head orientation, said first (IS7), and second (IS2) image source forming a stereo image source,- creating a first stereo image by rendering a first target image for one eye of the user using said first image source (IS7) and a second target image for another eye of the user using said second image source (IS2),- determining head orientation of said user to obtain a second head orientation,- selecting said second image source (IS2) and a third image source (IS8) based on said second head orientation, said second (IS2) and third (IS8) image source forming a stereo image source,- creating a second stereo image by rendering a third target image for one eye of the user using said second image source (IS2) and a fourth target image for another eye of the user using said third image source (IS8),characterized by- stitching at least a part of an image from a fourth image source (IS3) with at least one of the first target image and the second target image so that a stitching point is located aside of a centerline of the first target image and the second target image, and- stitching at least a part of the image from the fourth image source (IS3) with at least one of the third target image and the fourth target image so that a stitching point is located aside of a centerline of the third target image and the fourth target image.
- An apparatus comprising means to:- determine head orientation of a user to obtain a first head orientation,- select a first image source (IS7), a second image source (IS2) based on said first head orientation, said first (IS7) and second (IS2) image source forming a stereo image source,- create a first stereo image by rendering a first target image for one eye of the user using said first image source (IS7) and a second target image for another eye of the user using said second image source (IS2),- determine head orientation of said user to obtain a second head orientation,- select said second image source (IS2) and a third image source (IS8) based on said second head orientation, said second (IS2) and third (IS8) image source forming a stereo image source,- create a second stereo image by rendering a third target image for one eye of the user using said second image source (IS2) and a fourth target image for another eye of the user using said third image source (IS8),characterized in that the apparatus is further comprising means to:- stitch at least a part of an image from a fourth image source (IS3) with at least one of the first target image and the second target image so that a stitching point is located aside of a centerline of the first target image and the second target image, and- stitch at least a part of the image from the fourth image source (IS3) with at least one of the third target image and the fourth target image so that a stitching point is located aside of a centerline of the third target image and the fourth target image.
- An apparatus according to claim 2, wherein the apparatus is further comprising means to:- form color values of a first region of pixels corresponding to a first region of a scene using said first image source, said color values of said first region of pixels being used for forming said first target image for displaying to a first eye,- form color values of a second region of pixels corresponding to said first region of a scene using said second image source, said color values of said second region of pixels being used for forming said second target image for displaying to a second eye,- form color values of a third region of pixels corresponding to said first region of a scene using said third image source, said color values of said third region of pixels being used for forming said third target image for displaying to said first eye,- form color values of a fourth region of pixels corresponding to said first region of a scene using said second image source, said color values of said fourth region of pixels being used for forming said fourth target image for displaying to said second eye.
- An apparatus according to claim 2 or 3, wherein the apparatus is further comprising means to:- determine locations of a first and a second virtual eye corresponding to said eyes of the user in a coordinate system using said head orientations,- select said image sources based on said locations of said virtual eyes with respect to image source locations in said coordinate system.
- An apparatus according to claim 4, wherein the apparatus is further comprising means to:- determine an image source for a pixel of an image for a first eye of the user to be a close image source that satisfies a closeness criterion to a virtual eye corresponding to said first eye, where said close image source captures the scene portion corresponding to the pixel, and- determine an image source for a pixel of an image for said first eye of the user to be another source than the close image source to said virtual eye corresponding to said first eye, where the close image source does not capture the scene portion corresponding to the pixel.
- An apparatus according to any of claims 2 to 5, wherein the apparatus is further comprising means to:- create regions of an image for an eye so that the regions correspond to image sources, wherein the regions are created in order of closeness of the image sources to a virtual eye corresponding to said eye in image source coordinate system.
- An apparatus according to any of the claims 2 to 7, wherein the apparatus is further comprising means to:- combine said regions of said images by blending the edge areas of the regions.
- An apparatus according to any of the claims 2 to 7, wherein the apparatus is further comprising means to:- blend a temporal transition from said image formed using said first image source to said image formed using said third image source.
- An apparatus according to claim 8, wherein the apparatus is further comprising means to:- adjust the duration of the temporal transition blending by using information on head movement speed.
- An apparatus according to any of claims 2 to 9, wherein the apparatus is further comprising means to:- determine audio information for the left ear and audio information for the right ear using said head orientation information to modify audio information from two or more audio sources using a head-related transfer function.
- An apparatus according to any of claims 2 to 10, wherein the apparatus is further comprising means to:- determine source orientation information for said image sources, and- use said source orientation information together with said head orientation information for selecting said image sources.
- An apparatus according to claim 10, wherein the apparatus is further comprising means to:- determine source orientation information for said audio sources, and- use said source orientation information together with said head orientation information for modifying audio information from said audio sources.
- An apparatus according to claim 2, wherein said stereo images are used to form a stereo video sequence.
- A computer program product embodied on a non-transitory computer readable medium, comprising computer program code configured to, when executed on at least one processor, cause an apparatus or a system to:- determine head orientation of a user to obtain a first head orientation,- select a first image source (IS7), a second image source (IS2) based on said first head orientation, said first (IS7) and second (IS2) image source forming a stereo image source,- create a first stereo image by rendering a first target image for one eye of the user using said first image source (IS7) and a second target image for another eye of the user using said second image source (IS2),- determine head orientation of said user to obtain a second head orientation,- select said second image source (IS2) and a third image source (IS8) based on said second head orientation, said second (IS2) and third (IS8) image source forming a stereo image source,- create a second stereo image by rendering a third target image for one eye of the user using said second image source (IS2) and a fourth target image for another eye of the user using said third image source (IS8),characterized in that the computer program product comprises computer program code configured to, when executed on at least one processor, cause an apparatus or a system to:- stitch at least a part of an image from a fourth image source (IS3) with at least one of the first target image and the second target image so that a stitching point is located aside of a centerline of the first target image and the second target image, and- stitch at least a part of the image from the fourth image source (IS3) with at least one of the third target image and the fourth target image so that a stitching point is located aside of a centerline of the third target image and the fourth target image.
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| GB1406201.2 | 2014-04-07 | | |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| HK1233091A1 HK1233091A1 (en) | 2018-01-19 |
| HK1233091B true HK1233091B (en) | 2021-05-07 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| EP3130143B1 (en) | Stereo viewing | |
| US20170227841A1 (en) | Camera devices with a large field of view for stereo imaging | |
| US20210185299A1 (en) | A multi-camera device and a calibration method | |
| US10631008B2 (en) | Multi-camera image coding | |
| EP3349444B1 (en) | Method for processing media content and technical equipment for the same | |
| WO2018109265A1 (en) | A method and technical equipment for encoding media content | |
| GB2568241A (en) | Content generation apparatus and method | |
| WO2018109266A1 (en) | A method and technical equipment for rendering media content | |
| HK1233091B (en) | Stereo viewing | |
| GB2548080A (en) | A method for image transformation | |
| JP7556352B2 (en) | Image characteristic pixel structure generation and processing | |
| WO2017220851A1 (en) | Image compression method and technical equipment for the same | |
| WO2019043288A1 (en) | A method, device and a system for enhanced field of view |