HK1091993A - Stereoscopic panoramic image capture device
Description
Technical Field
Embodiments of the present invention relate generally to panoramic image capture devices, and more particularly, to panoramic image capture devices for generating stereoscopic panoramic images.
Background
Panoramic cameras are known in the art. Such devices typically use a single rotatable camera. While suitable for static images, such devices typically produce blurred or distorted images when used to capture non-static objects. It is also known in the art to use image capture systems having multiple image capture devices, whereby multiple images may be captured substantially simultaneously and stitched together using processes known in the art. While such systems substantially eliminate the problems associated with capturing dynamic objects, they do not provide a means for producing stereoscopic images.
It is also known in the art to capture panoramic images using "fisheye" lenses. However, such lenses introduce a large amount of distortion into the resulting image and capture relatively low-quality images. It is therefore desirable to produce a panoramic image with lower distortion and higher quality.
Generally, to capture a stereoscopic image, two imaging systems are disposed close to each other, each capturing a specific image. Unfortunately, this approach cannot be generalized to producing stereoscopic panoramic images, because one image capture device would fall within the field of view of an adjacent image capture device. Accordingly, it is desirable to provide a panoramic image capture system that can be used to generate a stereoscopic pair of panoramic images for stereoscopic display of a particular scene.
Summary of The Invention
Among the advantages provided by the present invention, an image capture system produces a stereoscopic panoramic image pair.
Advantageously, the present invention provides an image capture system for generating seamless panoramic images.
Advantageously, the present invention provides an image capture system for generating a dynamic stereoscopic panoramic image.
Advantageously, the present invention provides a stereoscopic panoramic image with minimal distortion.
Advantageously, the present invention provides an imaging system for full motion, real-time, panoramic stereo imaging.
Advantageously, in a preferred example of the present invention, an imaging system is provided that includes a first image capture device, a second image capture device, and a third image capture device. Means are provided for combining at least a first portion of a first image captured using the first image capture device with a portion of a second image captured using the second image capture device to produce a first combined image. Means are also provided for combining at least a second portion of the first image with at least a portion of a third image captured using the third image capture device to produce a second combined image.
In a preferred embodiment, a plurality of images are generated using a plurality of image capture devices, a portion of each image being combined with a portion of an adjacent image to generate a first combined panoramic image. Similarly, the second portion of each image is combined with additional portions of adjacent images to produce a second combined panoramic image. Preferably, the first combined panoramic image and the second combined panoramic image are displayed in a stereoscopic orientation to produce a stereoscopic panoramic image. The imaging system of the present invention can be used to capture multiple images to produce a fully animated stereoscopic panoramic image.
Brief Description of Drawings
The invention will now be described, by way of example, with reference to the accompanying drawings, in which:
FIG. 1 illustrates a front view of an imaging system of the present invention;
FIG. 2 illustrates a diagrammatical representation of image capture areas of adjacent image capture devices of the imaging system of the present invention;
FIG. 3 illustrates a bottom perspective view of the final panoramic stereo image displayed on the screen, and polarized glasses for viewing the image;
FIGS. 4A-4B illustrate a left panoramic image and a right panoramic image;
FIGS. 5A-5B illustrate images associated with a first image buffer and a second image buffer;
FIGS. 6A-6C are flow diagrams of a conversion process used in conjunction with the imaging system of the present invention for converting a plurality of images captured with the imaging system into a first panoramic image and a second panoramic image to produce a stereoscopic panoramic image; and
FIG. 7 illustrates a perspective view of a 360-degree unobstructed panoramic stereo image created using the camera of FIG. 1.
Description of The Preferred Embodiment
Referring to fig. 1, a camera (10) is shown having a body (12) constructed of plastic or another similarly lightweight material. In a preferred embodiment, the body (12) is substantially spherical, having a diameter preferably between about 0.001 and about 500 centimeters, more preferably between about 10 and about 50 centimeters. A plurality of lenses (14) are substantially equally spaced across the surface of the body (12). The lenses (14) are preferably circular lenses having a diameter preferably between about 5 angstroms and about 10 centimeters, more preferably between about 0.5 and about 5 centimeters. In a preferred embodiment, each lens is a model BCL38C 3.8 mm microlens manufactured by CBC America, located at 55 Mall Drive, Commack, NY 11725. As shown in fig. 2, the lenses (14) are each associated with Charge Coupled Device (CCD) components (16), as is well known in the art. While the GP-CX171/LM CCD color board camera manufactured by Panasonic and available from Rock2000.com is used in the preferred embodiment, any known image capture system may be used. Thus, for purposes of this disclosure, the lenses (14) and/or the CCD components (16) may also be described as "image capture units" (26), (40), and (54). As shown in fig. 2, all image capture units (26), (40), and (54) are operatively coupled to a central processing unit (CPU) (22), which thereby receives images from the image capture devices (14) and/or (16). In a preferred embodiment, the CPU (22) is a 900 MHz Pentium 4 class personal computer equipped with the Oxygen GVX210 graphics card manufactured by 3Dlabs of 480 Potrero, Sunnyvale, CA 94086. Although the CPU may be of any type known in the art, it is preferably capable of quad buffering and of utilizing page-flipping software in a manner known in the art. The CPU (22) may be coupled to a head mounted display (24), which in the preferred embodiment is the VFX3D manufactured by Interactive Imaging Systems, Inc., located at 2166 Brighton Henrietta Town Line Road, Rochester, NY 14623.
As shown in fig. 2, the lenses (14) are discrete from one another and are offset from one another by about 20 degrees along a substantially arcuate path or line defined by the body (12). This offset, which may be from about 5 degrees to about 45 degrees along the substantially arcuate path, allows the lenses (14) and/or the image capture units (26), (40), (54) to have substantially similar focal points. Each lens (14) has a field of view of about 53 degrees, such that the fields of view of laterally adjacent lenses overlap by between about 10% and about 90%, preferably between about 50% and about 65%. As shown in fig. 2, the first image capture unit (26) is associated with an optical axis (28) that bisects an image bounded on one side by a left plane (30) and on the other side by a right plane (32). The lens (14) of the image capture unit (26) is focused on a defined image plane (34) that is divided into a left image plane (36) and a right image plane (38).
Similarly, the second image capturing unit (40) also has an optical axis (42), a left plane (44), a right plane (46), a defined image plane (48), a left image plane (50), and a right image plane (52). A third image capturing unit (54) on the right side of the first image capturing unit (26) has an optical axis (56), a left plane (58), a right plane (60), a defined image plane (62), a left image plane (64) and a right image plane (66).
As shown in fig. 2, by providing a plurality of image capture units (26), (40) and (54), the defined image planes (34), (48) and (62) are divided into sections, e.g., bisected, and the image capture units (26), (40) and (54) are oriented such that the points associated with the final panoramic image fall within the image planes defined by at least two adjacent image capture units (26), (40) and/or (54). As shown in FIGS. 5A-5B, the image planes defined by adjacent image capture units overlap vertically, preferably by about 1%-20%, more preferably by about 5%-10%, and in a preferred embodiment by about 7%. Similarly, the image planes defined by adjacent image capture units overlap horizontally, preferably by about 1%-20%, more preferably by about 5%-10%, and in a preferred embodiment by about 6%. These overlaps aid the "stitching" process described below.
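The roughly 20-degree lens spacing and 53-degree field of view described above imply the lateral overlap between adjacent lenses. The following is a back-of-envelope sketch for idealized pinhole lenses; the formula and function name are assumptions for illustration, not taken from this disclosure:

```python
def lateral_overlap(fov_deg=53.0, spacing_deg=20.0):
    """Estimate the fraction of one lens's angular field of view that is also
    seen by the laterally adjacent lens, for idealized lenses offset along an
    arc. A rough sketch, not a formula from the patent."""
    overlap_deg = fov_deg - spacing_deg   # shared angular wedge between lenses
    return max(0.0, overlap_deg / fov_deg)
```

With the preferred values, this gives (53 - 20) / 53 ≈ 62%, falling within the 10%-90% overlap range described above.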
To produce the final panoramic image (68) of the present invention, which may display an approximately 180 degree scene (e.g., a hemisphere), a first panoramic image (72) and a second panoramic image (74) are created. (FIGS. 3 and 4A-4B). Each panoramic image (72), (74) may display about a 90 degree scene (e.g., about half of a hemisphere). To create the first panoramic image (72), an image (76) associated with the left image plane (36) of the first image capturing unit (26) is combined with an image (78) associated with the left image plane (50) of the second image capturing unit (40) and an image (80) associated with the left image plane (64) of the third image capturing unit (54). (FIGS. 2, 4A and 5A). As shown in fig. 2, the associated image planes (36), (50) and (64) preferably overlap by about 0.5% -30%, more preferably by about 10% -20%, and in a preferred embodiment by about 13%, but these image planes are not parallel to each other and are not necessarily tangent to the curved surface defining the first panoramic image (72). (FIGS. 2 and 4A). Therefore, the images (76), (78), (80), (82), (84) and (86) associated with the planes (36), (50) and (64) must be transformed to remove the distortion associated with their non-parallel orientation before they are joined together to form the final panoramic image (68) as described below. (FIGS. 2, 3, 4A-4B and 5A-5B).
Once the images (76), (78), (80) associated with the left image planes (36), (50), (64) and the images (82), (84), (86) associated with the right image planes (38), (52), (66) are collected and received from all of the image capture units (26), (40) and (54), the images may be transferred to the CPU (22) via a hardwired, wireless or any other desired connection. The CPU (22) is then operable to convert the images according to the process described in FIGS. 6A-6C. As shown at block (88), the source images in the preferred embodiment are substantially rectilinear images, but could of course be any type of image obtained or received by the CPU (22) from the image capture units (26), (40) and (54). (FIGS. 2, 4A, 4B and 6A-6C). As shown in block (94), the CPU (22) then defines registration pixel pairs for the unconverted source images.
Thereafter, as shown in block (96), the CPU (22) creates an input file. The input file includes the height and width of the final panoramic image (68), source information, and registration point information. The source information includes the file name and path of the source image, the height and width of the source image in pixels, the yaw, pitch, and roll angles of the associated image capture unit, the horizontal field of view of the image capture unit (defined by the associated left and right planes), and the X-offset, Y-offset, and zoom values of the source image. The horizontal field of view is preferably between about 1 degree and about 80 degrees, more preferably between about 30 degrees and about 60 degrees, and in a preferred embodiment about 53 degrees. The registration point information includes the list index of the source image associated with the first pixel of the registration pair, the horizontal and vertical pixel positions in the first source image, the list index of the source image associated with the second pixel, and the horizontal and vertical pixel positions in the second source image.
The images (76-86) associated with the image planes (36, 38, 50, 52, 64, and 66) are substantially rectilinear standard flat-field images, while the panoramic images (72 and 74) are at least partially equirectangular, representing a mapping of pixels onto a sphere in the manner shown in FIGS. 4A-4B. Thus, once the CPU (22) builds the input file, the registration pixel pairs may be transformed to locate their positions on the final panoramic image (68), as shown in block (98).
Starting with an arbitrary source image, a vector representing the first pixel of a given registration pixel pair in three-dimensional space is defined and located on the final panoramic image (68). This is achieved by applying the following transformation to each pixel:
Define segment x: the horizontal position of the pixel in the source image is shifted so that it is relative to the center of the image, and is then compensated for the x-offset and zoom variables of the source image.
Define segment y: the vertical position of the pixel in the source image is shifted so that it is relative to the center of the image, and is then compensated for the y-offset and zoom variables of the source image.
Define segment z: using the source image variables, a z segment is determined that corresponds to the scale implied by the size of the image in pixels.
The vector is then transformed so that it corresponds to the rotation angles of the source camera. This is calculated by multiplying the vector, in turn, by each of the three standard rotation matrices: rotation about the x-axis, rotation about the y-axis, and rotation about the z-axis.
Upon completion of the matrix transformations, the pixel vector represents the global (globX, globY, globZ) position of the point in three-dimensional space. The CPU (22) then converts this position into spherical coordinates and applies them directly to the final panoramic coordinates: the yaw angle of the vector represents its horizontal panorama position newX, and its pitch angle represents its vertical panorama position newY.
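The pixel-to-panorama conversion described above — defining segments x, y, and z, rotating by the source camera's angles, then reading off yaw and pitch as newX and newY — can be sketched as follows. This is an illustrative reconstruction, not the patent's implementation; the function name, argument names, sign conventions, and the use of the half field of view to derive segment z are all assumptions:

```python
import math

def pixel_to_panorama(px, py, src_w, src_h, fov_deg,
                      yaw, pitch, roll, x_off=0.0, y_off=0.0, zoom=1.0):
    """Map a source-image pixel to (newX, newY) panorama angles in radians.

    Angles yaw/pitch/roll are in radians; offsets are in pixels. A sketch of
    the three-segment vector definition and rotations described in the text.
    """
    # Segments x and y: re-center the pixel on the image center, then
    # compensate for the source image's offset and zoom variables.
    x = (px - src_w / 2.0 + x_off) / zoom
    y = (py - src_h / 2.0 + y_off) / zoom
    # Segment z: a depth scale derived from the pixel width of the image and
    # its horizontal field of view (assumed pinhole model).
    z = (src_w / 2.0) / math.tan(math.radians(fov_deg) / 2.0)

    # Apply the three rotations: about the z-axis (roll), the x-axis (pitch),
    # and the y-axis (yaw), in that order.
    cr, sr = math.cos(roll), math.sin(roll)
    x, y = x * cr - y * sr, x * sr + y * cr
    cp, sp = math.cos(pitch), math.sin(pitch)
    y, z = y * cp - z * sp, y * sp + z * cp
    cy, sy = math.cos(yaw), math.sin(yaw)
    x, z = x * cy + z * sy, -x * sy + z * cy

    # Convert the global (x, y, z) position to spherical coordinates: the
    # vector's yaw gives the horizontal panorama position newX, its pitch
    # the vertical position newY.
    new_x = math.atan2(x, z)
    new_y = math.atan2(y, math.hypot(x, z))
    return new_x, new_y
```

With zero rotation, the center pixel of a source image maps to the panorama origin; a pure camera yaw shifts newX by the same angle.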
Once the registration pixel pairs are mapped onto the final panoramic image (68) in this manner, the CPU (22) calculates the distance between the registration pixel pairs, as shown in block (100). If the average distance of the registration pixel pairs for a given source image is not yet minimal (as in the case of the initial transformation), the yaw angle of the source image is slightly changed, as shown in block (102), after which the process returns to block (98) and the registration pixel pairs are again transformed into pixel points in the final panoramic image (68). The process continues with changing the yaw angle until the average distance of the source image registration pixel pairs is minimized. Thereafter, the pitch, roll, X-offset, Y-offset, and scaling are changed until the average distance of the associated registration pixel pairs is minimized. Once the yaw, pitch, roll, X-offset, Y-offset, and zoom of a particular source image are optimal, as shown in block (104), the conversion process is repeated for all source images until they are thus all optimal.
Once all source images have been so optimized, as shown in block (106), the average distance of all source image registration pixel pairs is calculated, and if they are not yet minimal, the yaw, pitch, roll, X, Y, and zoom of the source images are changed as shown in block (108), and processing returns to block (98), where processing continues until the distance between the registration pixel pairs is minimal on all source images.
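The iterative minimization of blocks (98)-(108) amounts to a coordinate descent over the six source-image variables, nudging each in turn and keeping only changes that reduce the average registration distance. A minimal sketch, assuming a `project` callback that maps a source pixel to panorama coordinates and a simple fixed step size; all names here are hypothetical, not from the patent:

```python
def minimize_registration_distance(params, pairs, project, step=0.01, iters=200):
    """Greedy coordinate descent over (yaw, pitch, roll, x_off, y_off, zoom).

    `params` is a dict of the six source-image variables; `pairs` is a list of
    ((px1, py1), (px2, py2)) registration pixel pairs, the second pixel already
    expressed in panorama coordinates; `project(params, px, py)` maps a source
    pixel to panorama coordinates.
    """
    def avg_distance(p):
        total = 0.0
        for a, b in pairs:
            ax, ay = project(p, *a)
            total += ((ax - b[0]) ** 2 + (ay - b[1]) ** 2) ** 0.5
        return total / len(pairs)

    best = avg_distance(params)
    for _ in range(iters):
        improved = False
        for key in ("yaw", "pitch", "roll", "x_off", "y_off", "zoom"):
            for delta in (step, -step):           # nudge each variable both ways
                trial = dict(params, **{key: params[key] + delta})
                d = avg_distance(trial)
                if d < best:                      # keep a change only if the
                    best, params = d, trial       # average distance shrinks
                    improved = True
        if not improved:                          # local minimum reached
            break
    return params, best
```

In practice the patent's process interleaves per-image optimization with a global pass over all source images; the sketch shows only the inner loop for one image.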
Once the average distance between the registration pixel pairs has been minimized across all source images, as shown at block (110), an output file is created that identifies the height and width of the first panoramic image (72) and the yaw, pitch, roll, X-offset, Y-offset and zoom conversion information for each particular source image. Thereafter, as shown at block (112), for each pixel within a given source image, a vector representing the location of that pixel in three-dimensional space is defined using the vector transformation described above.
Once the vector is defined, as shown at block (114), the vector is transformed to reflect yaw, pitch, roll, X-offset, Y-offset, and zoom information associated with the source image as defined in the output file. After the conversion of the vectors is completed, the converted vectors are associated with pixels in the final panoramic image (68), as shown at block (116). As shown at block (118), the process is repeated until all pixels in a particular source image have been converted to vectors and their positions are located on the final panoramic image (68).
Once all the pixels of a given source image have been located on the final panoramic image (68), two image buffers (90) and (92) are established, each having a height and width substantially equal to those of the final panoramic image (68), as shown in block (120). (FIGS. 3, 5A-5B and 6B). Once the image buffers (90) and (92) are established, each quadrilateral of pixels is drawn onto the appropriate image buffer (90) or (92) using the vector conversion information associated with the quadrilateral of four adjacent pixels of a particular source image, as shown at block (122). (FIGS. 5A-5B and 6C). If the pixel is in a left image plane (36), (50), or (64), the pixel is written to the left image buffer (90). If the pixel is in a right image plane (38), (52) or (66), the pixel is written to the right image buffer (92). (FIGS. 2 and 5A-5B).
Since the conversion is likely to unfold the quadrilateral of pixels in the image buffer, gaps may appear between pixels when converting them from their rectilinear positions to their equirectangular positions in the associated image buffer (90) or (92), as shown in block (124). (FIGS. 5A-5B and 6C). When the quadrilateral of pixels is located in the associated image buffer (90) or (92), a linear gradient of the corner pixel colors may be used, in a manner well known in the art, so that the resulting gaps are filled as smoothly as possible. Known linear gradient fill techniques may also be used to reduce visible seams between images. When additional source image information is applied to the image buffers (90) and (92), the alpha transparency of the newly overlapping pixels is linearly reduced over areas of the image buffers (90) and (92) that are already filled with pixel data, smoothing the resulting seams as described below.
Thus, when mapping a quadrilateral to an associated image buffer (90) or (92), the CPU (22) can eliminate gaps by interpolating internal pixel values using any method known in the art, which may include comparing a gap to its adjacent pixels, or comparing the corresponding region in the immediately preceding and succeeding frames of a motion capture, to infer the most appropriate pixel value for the gap. As shown by block (126), blocks (122) and (124) are repeated until all source image pixels have been mapped to the appropriate image buffer (90) or (92).
Once all pixels have been mapped, the first image buffer pixels are compared to the first panoramic image pixels, as shown at block (128). The CPU (22) sets the pixels in the first image buffer (90) to maximum visibility if the first panoramic image (72) has no pixels associated with the pixels in the first image buffer (90). If the first panoramic image (72) already has pixels associated with pixels in the first image buffer (90), then the existing pixels are compared to the corresponding pixels in the first image buffer (90). The pixels in the first image buffer (90) are set to maximum visibility if they have a closer distance to the center of their respective source image than the existing pixels. Conversely, if the pixels in the first image buffer (90) have a greater distance to the center of their respective source image than the existing pixels, the pixels in the first image buffer (90) are set to minimum visibility. The process is repeated to merge pixels in the second image buffer (92) into the second panoramic image (74).
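The visibility rule of block (128) — an incoming pixel wins if it lies closer to the center of its own source image than the pixel already present, since image quality is best near the optical axis — can be sketched as follows, with the (color, distance) record format assumed for illustration:

```python
def merge_pixel(panorama, buffer_px, pos):
    """Decide visibility when merging an image-buffer pixel into the panorama.

    Each entry is (color, dist), where `dist` is the pixel's distance from the
    center of its source image. The pixel nearer its own image center is kept
    at maximum visibility; the other is suppressed. The record format is an
    assumption made for this sketch, not taken from the patent.
    """
    incoming = buffer_px[pos]
    existing = panorama.get(pos)
    if existing is None or incoming[1] < existing[1]:
        panorama[pos] = incoming      # maximum visibility: incoming replaces
    # otherwise minimum visibility: the existing pixel is kept unchanged
    return panorama
```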
As shown at block (130), the overlapping edges of the images in the image buffers (90) and (92) are blended by reducing the visibility of the pixels over the overlapping regions. This averaging smooths the overlapping regions of the images once they are merged from the image buffers (90) and (92) into the panoramic images (72) and (74). Once the image buffer pixels are set to the appropriate visibility and the images in the image buffers are blended, the images in the image buffers are merged into the first and second panoramic images (72) and (74), as shown in block (132). Blocks (122) through (132) are repeated, as indicated by block (134), until all of the source images have been merged into the first and second panoramic images (72) and (74). As described above, the images (76), (78), and (80) associated with the left image planes (36), (50), and (64) of the image capture units (26), (40), and (54) are used to create the first panoramic image (72), and the images (82), (84), and (86) associated with the right image planes (38), (52), and (66) of the image capture units (26), (40), and (54) are used to create the second panoramic image (74).
Once the final panoramic images (72) and (74) are established, the panoramic images (72) and (74) may be displayed together as a final panoramic image (68), as shown at block (138). (FIGS. 3, 4A and 4B). The panoramic images (72) and (74) may be displayed with opposite polarization on a standard panoramic screen (140), such as shown in fig. 3, and viewed using glasses (142) with lenses of opposite polarization. Alternatively, the panoramic images (72) and (74) may be transmitted to the head mounted display (24) shown in fig. 1. The CPU (22) may send the plurality of panoramic images to the head mounted display (24) using known "page flipping" techniques to send and display the images on the left and right displays of the head mounted display (24), respectively, as needed. This process may be used to animate the display as a full 24-frame-per-second animation, or to display multiple visual images on each display. Thus, an imaging system (see FIG. 2) may include a first image capture unit (26) that captures a first image (36), a second image capture unit (40) that captures a second image (50), and a third image capture unit (54) that captures a third image (64); means (22) for combining a first portion of the first image (36) with a portion of the second image (50) to produce a first combined image; and means (22) for combining a second portion of the first image (36) with a portion of the third image (64) to produce a second combined image. The first portion of the first image may be any amount up to the entire first image, or may be between about 20% and about 80% of the first image. The portion of the second image may be any amount up to the entire second image, or may be between about 20% and about 80% of the second image. The first and second combined images may be repeatedly generated and displayed to convey stereoscopic motion.
The imaging system may further comprise an image capturing unit (26) for providing an image, means (22) for using a first part of said image to provide a first stereo image, means (22) for using a second part of said image to provide a second stereo image.
The head mounted display unit (24) may also be equipped with an orientation sensor (144), such as those well known in the art, to change the image provided to the head mounted display (24) as the sensor (144) moves. In this manner, a user (not shown) can look up, down, and in any direction at the final panoramic image (68) portion associated with the user's gaze vector, and have the sensation of actually looking at the three-dimensional space.
In an alternative embodiment of the invention, the camera (10) may have a plurality of image capture pairs having substantially linear capture systems oriented substantially parallel to each other. Each pair may be offset by a predetermined amount to obtain a desired stereoscopic effect. Because the images are captured in pairs, the conversion process associated with this embodiment is the same as that described above, except that instead of halving each image and sending the pixels in each half to separate image buffers, all pixels associated with the image from the "left" image capture device of each pair are sent to one image buffer, and all pixels associated with the image from the "right" image capture device are sent to the other image buffer.
In another alternative embodiment of the present invention, the camera (10) and CPU (22) may be utilized to capture 24 or more frames per second and display the final stereoscopic panorama image (68) as an animation in real time.
In yet another alternative embodiment of the present invention, computer-generated graphical information, such as that produced in a manner well known in the art, may be combined with the final panoramic images (72) and (74) in the CPU (22) to provide a seamless combination of actual images captured with the camera (10) and a digitized virtual reality image (146). (FIG. 3). This combination produces a seamless display of real and virtual panoramic stereoscopic images.
In yet another alternative embodiment of the present invention, the images captured by the camera (10) may be transformed using the above conversion process to produce a seamless 360-degree panoramic stereoscopic image. As shown in FIG. 1, the camera (10) is provided with a stanchion (148) and a transport unit, such as a remotely controlled truck (150), similar or identical to those used in association with remote control cars or the like. The images associated with the left image planes of the image capture units may be used to generate a combined image, and the images associated with the right image planes may be used to overwrite and fill the combined image so as to hide the stanchion (148), truck (150), and any other camera equipment that would otherwise be visible in the combined image. The foregoing conversion process may be used for such overwriting and filling. In this manner, only those images that do not include the undesirable information are mapped onto the final panoramic image (152), which may be displayed on a spherical display screen (154) or on a head mounted display (24). (FIGS. 1 and 7). For portions of the image that reflect the area covered by the footprint of the truck (150), the image located beneath the truck (150) may be approximated using the interpolation processes detailed above to produce the appearance of a complete unobstructed 360-degree image.
While the invention has been described with respect to its preferred embodiments, it is to be understood that the preferred embodiments are not limiting of the invention, since variations and modifications can be effected within the full intended scope of the invention, as defined by the appended claims.
Claims (46)
1. An imaging system, comprising:
a first image capturing unit that captures a first image;
a second image capturing unit that captures a second image;
a third image capturing unit that captures a third image;
means for combining a first portion of the first image with a portion of the second image to produce a first combined image; and
means for combining a second portion of the first image with a portion of the third image to produce a second combined image.
2. The imaging system of claim 1, wherein the first image capture unit, the second image capture unit, and the third image capture unit are disposed relative to each other along an arc, spaced apart from about 5 degrees to about 45 degrees.
3. The imaging system of claim 1, wherein an image plane associated with the first image overlaps an image plane associated with the second image by about 0.5% to about 30%.
4. The imaging system of claim 1, wherein an image plane associated with the first image and an image plane associated with the second image vertically overlap by about 1% to about 20%.
5. The imaging system of claim 1, wherein the first image and the second image are substantially rectilinear.
6. The imaging system of claim 1, wherein the first combined image and the second combined image are at least partially equirectangular.
7. The imaging system of claim 1, wherein the first and second combined images are displayable to provide a stereoscopic image.
8. The imaging system of claim 1, further comprising means for displaying the first combined image and the second combined image as a stereoscopic image.
9. The imaging system of claim 1, further comprising means for sequentially displaying the plurality of first and second combined images as a moving stereoscopic image.
10. The imaging system of claim 1, further comprising means for combining the first combined image with a sufficient plurality of images to produce a first combined panoramic image representing at least about 90 degrees of a scene and combining the second combined image with a sufficient plurality of other images to produce a second combined panoramic image representing about 90 degrees of the scene, and means for displaying the first combined panoramic image and the second combined panoramic image to provide a stereoscopic panoramic image.
11. The imaging system of claim 10, further comprising means for successively displaying the first set of combined panoramic images and the second set of combined panoramic images to provide a moving stereoscopic panoramic image.
12. The imaging system of claim 1, wherein the first combined image and the second combined image are combined with a digital image to produce a stereoscopic image comprising the digital image.
13. An imaging system, comprising:
an image capturing unit for providing an image;
means for using a first portion of the image to provide a first stereoscopic image; and
means for using a second portion of the image to provide a second stereoscopic image.
14. The imaging system of claim 13, further comprising means for combining the first stereoscopic image and the second stereoscopic image into a panoramic stereoscopic image.
15. The imaging system of claim 13, further comprising:
a plurality of image capturing units providing a plurality of images;
means for combining selected ones of the plurality of images with the image to produce a combined image;
means for combining the combined images into a first panoramic image and a second panoramic image; and
means for displaying the first panoramic image and the second panoramic image to provide a panoramic stereoscopic image.
16. The imaging system of claim 15, wherein the first panoramic image displays an approximately 90 degree scene.
17. The imaging system of claim 15, wherein the panoramic stereoscopic image displays an approximately 180 degree scene.
18. A method, comprising:
acquiring a first image;
acquiring a second image;
acquiring a third image;
combining a first portion of the first image with a portion of the second image to produce a first combined image;
combining a second portion of the first image with a portion of the third image to produce a second combined image; and
displaying the first combined image and the second combined image as a stereoscopic image.
19. The method of claim 18, further comprising acquiring the first image, the second image, and the third image from a plurality of image capture units arranged along an arc.
20. The method of claim 19, wherein the plurality of image capture units are disposed about 5 degrees to about 45 degrees apart in a single direction along the arc.
21. The method of claim 18, further comprising displaying a plurality of the first combined images in sequence and a plurality of the second combined images in sequence to provide a moving stereoscopic image.
22. The method of claim 21, wherein the moving stereoscopic image represents an approximately 180 degree scene.
23. The method of claim 18, further comprising:
defining a plurality of registration pixel pairs.
24. The method of claim 23, further comprising:
transforming the plurality of registration pixel pairs to minimize an image registration distance.
25. The method of claim 23, further comprising:
defining a vector representing a pixel of one of the plurality of registration pixel pairs.
26. The method of claim 25, further comprising:
converting the vector into a converted vector corresponding to a source rotation angle.
27. The method of claim 26, further comprising:
associating the converted vector with a pixel in the panoramic image.
28. The method of claim 27, further comprising:
adjusting the visibility of pixels in an image buffer by comparing a distance associated with the pixels in the panoramic image to a distance associated with the pixels in the image buffer.
29. The method of claim 21, further comprising:
making the overlapping edges of the images in the image buffer uniform by reducing the visibility of pixels in the overlap region.
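Claims 23 through 29 recite registration and blending steps without fixing an implementation. As one hedged illustration of the claim-29 blending step alone, the sketch below reduces pixel visibility linearly across an overlap region so that two co-located strips merge without a visible seam; the linear ramp, the list-of-rows image representation, and every name are assumptions for illustration, not part of the claims.

```python
def feather_overlap(left_strip, right_strip):
    """Blend two co-located overlap strips by reducing pixel visibility.

    Sketch of the claim-29 step: each output pixel is a weighted mix of
    the two strips, with the weight (visibility) of one strip falling
    linearly to zero across the overlap.  The linear ramp is an
    assumption; the claim only requires reduced visibility.
    """
    width = len(left_strip[0])
    blended = []
    for row_l, row_r in zip(left_strip, right_strip):
        row = []
        for x in range(width):
            alpha = x / (width - 1)  # 0 at the left edge, 1 at the right edge
            row.append((1 - alpha) * row_l[x] + alpha * row_r[x])
        blended.append(row)
    return blended

# Hypothetical 1x5 single-channel overlap strips with constant values.
overlap_a = [[100.0] * 5]  # strip from one image
overlap_b = [[0.0] * 5]    # co-located strip from its neighbor
print(feather_overlap(overlap_a, overlap_b)[0])  # [100.0, 75.0, 50.0, 25.0, 0.0]
```

A production blender would typically feather in both images' margins and operate per color channel; the scalar values here stand in for pixel intensities only to keep the ramp visible.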
30. An imaging system, comprising:
a first image capturing unit that captures a first image;
a second image capturing unit that captures a second image;
a third image capturing unit that captures a third image; and
a processing unit operatively coupled to the first, second and third image capture units to receive the first, second and third images, wherein a first portion of the first image may be combined with a portion of the second image to provide a first combined image, wherein a second portion of the first image may be combined with a portion of the third image to provide a second combined image, and wherein the first and second combined images may be displayed to provide a stereoscopic image.
31. The imaging system of claim 30, wherein the first, second and third image capture units are disposed approximately equidistant from each other along a substantially arcuate path.
32. The imaging system of claim 31, wherein the substantially arcuate path is defined by a substantially spheroidal surface.
33. The imaging system of claim 30, wherein the first and the second image capture units are separated from each other along an arc by an angular distance of about 5 degrees to about 45 degrees, and the second and the third image capture units are separated from each other along an arc by about the angular distance.
34. The imaging system of claim 30, wherein the field of view associated with the first image capture unit overlaps the field of view associated with the second image capture unit by an overlap amount, and the field of view associated with the second image capture unit overlaps the field of view associated with the third image capture unit by the overlap amount.
35. The imaging system of claim 34, wherein the amount of overlap is about 10% to about 90%.
36. The imaging system of claim 30, wherein the defined image plane associated with the first image capture unit overlaps the defined image plane associated with the second image capture unit by about 1% to about 20%.
37. The imaging system of claim 30, wherein the first portion of the first image is between about 20% and about 80% of the first image and the portion of the second image is between about 20% and about 80% of the second image.
38. The imaging system of claim 30, wherein a plurality of said first and said second combined images are displayed in sequence to convey motion.
39. The imaging system of claim 30, wherein a first combined image is combined with a sufficient plurality of images to produce a first combined panoramic image representing an approximately 90 degree scene, and a second combined image is combined with a sufficient plurality of other images to produce a second combined panoramic image representing an approximately 90 degree scene, and the first combined panoramic image and the second combined panoramic image are displayed to provide a stereoscopic panoramic image.
40. The imaging system of claim 39, wherein the first combined panoramic image set and the second combined panoramic image set are displayed sequentially to provide a moving stereoscopic panoramic image.
41. An imaging system, comprising:
an image capturing unit providing an image;
a processing unit coupled to the image capture unit to receive a first portion of the image to provide a first stereoscopic image and to receive a second portion of the image to provide a second stereoscopic image.
42. The imaging system of claim 41, wherein the first stereoscopic image and the second stereoscopic image are combined into a panoramic stereoscopic image.
43. The imaging system of claim 41, further comprising:
a plurality of image capture units coupled to the processing unit, the plurality of image capture units providing a plurality of images, wherein selected ones of the plurality of images are combined with at least one other image to produce a plurality of combined images, wherein the plurality of combined images are combined to provide a first panoramic image and a second panoramic image, and wherein the first panoramic image and the second panoramic image are combined to provide a panoramic stereoscopic image.
44. The imaging system of claim 43, wherein the first panoramic image displays an approximately 90 degree scene.
45. The imaging system of claim 43, wherein the panoramic stereoscopic image displays an approximately 180 degree scene.
46. The imaging system of claim 41, further comprising an imaging unit coupled to the processing unit.
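The independent claims above (13, 18, 30, and 41) describe combining a first portion of one image with a portion of a neighboring image, and a second portion with a portion of another neighbor, to form a left-eye/right-eye pair, without prescribing an implementation. The sketch below is one minimal reading of that combination, assuming equal-width single-channel images stored as lists of rows and a 50/50 split; the function name, the ordering of the joined portions, and the split ratio are illustrative assumptions.

```python
def combine_images(first_img, second_img, third_img, split=0.5):
    """Form the two combined images recited in claims 18 and 30.

    The first combined image joins a first portion of the first image
    with the complementary portion of the second image; the second
    combined image joins the remaining portion of the first image with
    a portion of the third image.  Each image is a list of pixel rows.
    """
    width = len(first_img[0])
    cut = int(width * split)
    # First combined image: first portion of the first image plus a
    # portion of the second (neighboring) image.
    first_combined = [row_a[:cut] + row_b[cut:]
                      for row_a, row_b in zip(first_img, second_img)]
    # Second combined image: second portion of the first image plus a
    # portion of the third image.
    second_combined = [row_a[cut:] + row_c[:cut]
                       for row_a, row_c in zip(first_img, third_img)]
    return first_combined, second_combined

# Hypothetical 4x8 single-channel frames filled with distinct values.
a = [[1] * 8 for _ in range(4)]
b = [[2] * 8 for _ in range(4)]
c = [[3] * 8 for _ in range(4)]
left_eye, right_eye = combine_images(a, b, c)
print(left_eye[0])   # [1, 1, 1, 1, 2, 2, 2, 2]
print(right_eye[0])  # [1, 1, 1, 1, 3, 3, 3, 3]
```

In the claimed system the two combined images would then be presented as a stereoscopic pair, with further combined images appended to widen the panorama toward the approximately 90- or 180-degree scenes recited in claims 16, 17, 22, and 39.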
Publications (1)
Publication Number | Publication Date |
---|---|
HK1091993A true HK1091993A (en) | 2007-01-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US6947059B2 (en) | Stereoscopic panoramic image capture device | |
JP4642723B2 (en) | Image generating apparatus and image generating method | |
EP1586204A1 (en) | Stereoscopic panoramic image capture device | |
JP4257356B2 (en) | Image generating apparatus and image generating method | |
CA3017827C (en) | Efficient canvas view generation from intermediate views | |
AU2008215908B2 (en) | Banana codec | |
US9883174B2 (en) | System and method for creating a navigable, three-dimensional virtual reality environment having ultra-wide field of view | |
JP6273163B2 (en) | Stereoscopic panorama | |
JP2883265B2 (en) | Image processing device | |
CN1922544A (en) | Method and apparatus for providing a combined image | |
CN1702693A (en) | Image providing method and equipment | |
WO2021050405A1 (en) | Sensor fusion based perceptually enhanced surround view | |
US20240357247A1 (en) | Image processing system, moving object, imaging system, image processing method, and storage medium | |
CN111226264A (en) | Playback apparatus and method, and generation apparatus and method | |
CN117931120B (en) | Camera image visual angle adjusting method based on GPU | |
CN114513646A (en) | Method and device for generating panoramic video in three-dimensional virtual scene | |
CN119071651A (en) | Technologies used to display and capture images | |
CN1848966A (en) | Small window stereoscopic image production and display method | |
CN112634142A (en) | Distortion correction method for ultra-wide viewing angle image | |
KR20060015460A (en) | Stereoscopic Panoramic Image Capture Device | |
HK1091993A (en) | Steroscopic panoramic image capture device | |
CN109272445A (en) | Panoramic video joining method based on Sphere Measurement Model | |
WO2019026388A1 (en) | Image generation device and image generation method | |
EP3229470B1 (en) | Efficient canvas view generation from intermediate views | |
CN1204758C (en) | Virtual image synthesis method for spatial stereoscopic image and glasses-type stereoscopic television |