Stereoscopic Omnidirectional Imaging Devices
The invention relates to omnidirectional stereo photography apparatus for stills, video or film applications.
In stereoscopy a scene must be recorded twice simultaneously, one recording having a horizontal or lateral offset from the other. This exploits our ability to view a scene with two eyes, known as binocular vision, and gives a sense of three-dimensional relief. The offset between the two viewpoints produces differences, known as parallax, in the sighting of foreground objects relative to distant objects. The lateral displacement between the two recordings is generally about the average separation between two eyes, i.e. approximately 65 mm, and is known as the interocular distance.
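The relationship between separation and parallax can be illustrated with a small sketch (the function name and the small-angle model are illustrative assumptions, not part of the specification):

```python
import math

def disparity_deg(separation_mm, distance_mm):
    """Angular parallax (in degrees) subtended by two viewpoints a
    given separation apart, for an object at the given distance.
    Simple convergence-angle model: 2 * atan((b/2) / d)."""
    return math.degrees(2 * math.atan((separation_mm / 2) / distance_mm))

# With the standard 65 mm interocular distance, a nearby object
# shows far more parallax than a distant one.
near = disparity_deg(65, 1_000)     # object 1 m away, ~3.7 degrees
far = disparity_deg(65, 100_000)    # object 100 m away, ~0.04 degrees
```

This illustrates why distant scenes yield little stereo relief at the standard separation, motivating the wider baselines discussed below.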
In the prior art much work has been carried out to find an optimum omnidirectional viewing system. For example, Nayar in US 6,118,474 teaches a system in which a camera is directed on to a parabolic or truncated parabolic reflector, using either standard or telecentric lens groupings to record the reflected scene along the image axis, but he makes no provision for recording stereo images by utilising reflectors of varying size. In Rosendahl, US 4,012,126, we see the use of hyperboloidal primary reflectors with secondary concave reflectors in confocal relation to achieve a more compact arrangement, but again no provision for stereo imaging using reflectors of different sizes. In Nalwa, US 6,115,176, we see a pyramidal reflector system utilising four-, six- or eight-sided pyramidal reflectors, with no provision for recording stereo images by using a secondary reflector set.
My GB Patent Application Nos 0018017.4 filed 21 July 2000 and 0019850.7 filed 11 August 2000 disclose omnidirectional stereoscopic viewing arrangements. My GB Patent application No. 0023786.7 discloses arrangements for omnidirectional viewing which include imaging systems displaced on the same optical axis.
In omnidirectional imaging, the problem of image separation is amplified because the all-round view of the sensors negates the use of cameras in a side-by-side configuration. The problem is further exacerbated because a 360-degree view results in image separation reversals. The present invention seeks to address problems of the former type, that of image gathering without the second image sensor being visible to the first. The problems of reversal can be dealt with by image processing. For authentic three-dimensional viewing we need to see round the side of an object recorded by the first sensor, and there can be no real substitute for the gathering of this information.
According to one aspect of the present invention, omnidirectional stereoscopic imaging apparatus comprises two imaging systems displaced on the same optical axis, one of which systems is larger than the other by a factor depending on a predetermined interocular distance.
The term "omnidirectional image" should be taken to include an image extending over a hemisphere, besides a sphere itself, and any image which is a major part of such an image. For example, a known fisheye or panoramic lens can provide a hemispherical field of view of a panoramic scene. Parabolic convex mirrors can be used to provide similar hemispherical fields of view.
Preferably, the imaging systems employ two convex reflecting means, one of which is larger than the other by a size factor determined by the amount of interocular distance required. The main aim is to displace one recording from the other consistently about a 360-degree horizontal axis, and ideally about a 360-degree vertical axis. Interocular distances greater or less than the standard 65 mm separation can still achieve credible results, for example for the purposes of hyperstereoscopy, as in, say, aerial photography. In that instance the whole subject is too distant to give useful visible differences in a pair of stereo images with the standard 65 mm separation, and an abnormally great separation is usually achieved by allowing time for the aircraft to travel an appropriate extra distance. In a fixed image capture situation the separation distance is increased during image capture.
In macro or close-up imaging, objects will be too displaced with the standard separation; consequently the three-dimensional relief should be reduced by reducing the separation during image capture. This technique is referred to as hypostereoscopy.
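A simple rule of thumb for choosing the stereo base can be sketched as follows (the proportional-scaling rule and the 2 m reference distance are illustrative assumptions, not values taken from the specification):

```python
def stereo_base_mm(subject_distance_m, standard_base_mm=65.0,
                   standard_distance_m=2.0):
    """Illustrative rule: scale the stereo base in proportion to
    subject distance, so distant scenes get a wider base
    (hyperstereoscopy) and close-ups a narrower one
    (hypostereoscopy). The 2 m reference distance is an assumption."""
    return standard_base_mm * (subject_distance_m / standard_distance_m)

aerial = stereo_base_mm(2000.0)   # very distant subject -> wide base
macro = stereo_base_mm(0.1)       # close-up subject -> narrow base
```

Under this rule an aerial subject 2 km away would call for a base of tens of metres, which is why aerial hyperstereoscopy is achieved by letting the aircraft travel between exposures, while a 10 cm macro subject calls for a base of only a few millimetres.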
Each convex reflecting means may be arranged to reflect an image of a scene onto second reflecting means, the second reflecting means being arranged to reflect the image onto image sensing means. For example, the convex reflecting means can be hemispherical, parabolic, hyperbolic, ellipsoidal, or of a polygonal type where the polygon includes a plurality of planar or curved reflecting facets surrounding a central axis through the convex reflecting means. These can be whole or frusto-convex (truncated) sections. The second reflecting means can be planar, concave or convex and any of the hemispherical, parabolic, hyperbolic, ellipsoidal shapes. Where the convex and second reflecting means have a curvature, these may be in confocal relationships. This embodiment may be enclosed by a transparent cover, or encased in a solid optic.
The two convex reflecting means may be arranged to reflect respective said images on to common second reflecting means for reflecting said images on to said image sensing means. The smaller of said convex reflecting means may be co-axially mounted on the larger of the said convex reflecting means.
At least one of said convex reflecting means may be arranged to reflect an image of a scene directly on to image sensing means.
Each convex reflecting means may have one or more apertures or light ports for receiving light reflected from the second reflecting means, whereby incident light passes to the image sensing means. For example, each convex reflecting means may be of the polygonal type, including a plurality of facets surrounding a central axis through the convex reflecting means, and wherein either a central aperture is provided in the apex of the convex reflecting means, or apertures are provided at the midpoint of each facet. If so, the convex reflecting means may comprise a first set of facets, which are the sides of the polygonal convex reflector having an axis
of symmetry, the second reflecting means including a second set of facets for reflecting light through a light port at the apex of the convex reflector, the second set of facets being arranged to reflect light which is incident on them only from the respective facets in the first set, whereby the image sensing means separately and respectively receives light from those parts of the panoramic scene reflected in the first set of facets of the convex reflector.
Instead of using convex reflectors, two fisheye lens systems can be spaced apart on the same axis, one lens system being larger than the other by the factor of the interocular distance.
At least part of the smaller of said lens systems may be co-axially mounted on the larger of the said lens systems.
The two lens systems may be arranged to focus an image of a scene on to common image sensing means. In one embodiment each lens system is separated into two parts, a first part being located so as to gather light from the scene whereby it is incident on image deflecting means, and the other part being located so as to receive light from the image deflecting means. The first part of the smaller lens system may be mounted on the first part of the larger lens system. The image deflecting means may be located within a central aperture of the first part of the larger lens system, the first part of the smaller lens system being mounted on said image deflecting means.
Alternatively, each lens system may be arranged to focus an image of a scene on to respective image sensing means.
According to another aspect of the present invention, omnidirectional imaging apparatus comprises two imaging systems displaced on the same optical axis by the interocular distance.
In the latter aspect, it is not necessary for the size of the two imaging systems to differ, since they are separated by the interocular distance.
The two imaging systems may comprise two convex reflectors, each convex reflector preferably being arranged to reflect an image of a scene onto a second reflector, the second reflector being arranged to reflect the image onto an image sensing means.
In one arrangement each imaging system comprises means for focussing light of a different respective wavelength on to a respective image sensor. For example, one imaging system may comprise a lens system formed from Crown optical glass for focussing visible light, and the other imaging system may comprise a lens system formed from germanium for focussing infra red light. This can render the apparatus particularly suitable for night surveillance.
According to a further aspect of the present invention, omnidirectional imaging apparatus comprises means having a surface for reflecting an omnidirectional image of a scene, means for deriving two views of the reflecting surface, which views are separated by the interocular distance.
The means for deriving two views of the reflecting surface can be a beam splitter, or a means of providing a binocular view, for example, where reflecting surfaces or prisms direct light on the respective right and left hand paths to lenses which focus the respective right and left images on a sensing surface.
The apparatus may be enclosed in a transparent cover with an air gap or fully encased in a solid glass optic, such as crown optical glass, germanium, or plastics material.
Further reference on stereo photography can be obtained from Advanced Photography by M.J. Langford, ISBN 0 240 51029 1.
Apart from film and entertainment purposes, the gathering of stereo or three-dimensional information has applications in security and defence, e.g. for range finding and targeting.
Preferred embodiments of the invention will now be described with reference to the accompanying drawings, in which:
Figure 1 shows an embodiment with small and large convex reflectors separated on the same optic axis;
Figure 2 shows a generally similar embodiment, which is used for 360 x 360 viewing;
Figs. 3, 4 and 5 show modifications of the arrangement shown in Fig. 2;
Fig. 6 shows an arrangement based on using flat reflecting surfaces in pyramidal or polygonal convex reflectors;
Fig. 7 shows a 360 x 360 version of Fig. 6;
Fig. 8 shows an alternative embodiment with convex reflectors of the same size spaced apart by the interocular distance;
Figs. 9 and 10 show binocular arrangements for viewing a reflecting surface, useful for 360 x 180 and 360 x 360 viewing;
Figs. 11 and 12 show similar viewing arrangements but using fisheye lenses of different sizes spaced apart on the optic axis;
Fig. 13 shows another arrangement based on using flat reflecting surfaces in pyramidal or polygonal convex reflectors;
Figs. 14, 15a, 15b and 15c show arrangements using co-axially mounted fisheye lenses of different sizes;
Figs. 16a and 16b show 360 x 360 versions of Figs. 15a and 15b respectively;
Figs. 17a and 17b show respective modifications of the arrangement shown in Fig. 15 with separate image sensors for the two lens systems;
Fig. 18 shows another arrangement using a combination of fisheye lenses and image deflecting means;
Fig.19 shows a 360 x 360 version of Fig. 18;
Fig. 20 shows a modification of the arrangement shown in Fig. 1;
Fig 21 shows a 360 x 360 version of Fig. 20;
Fig. 22 shows the view from each of the back to back arrangements of Fig. 21 (in a flat display plane) and the corresponding developed left and right upper and lower fields of view in a flat plane after mapping the image signal data from polar to Cartesian co-ordinates;
Fig. 23 shows the view from each of the back to back arrangements of any of Figs. 2 to 4 (in a flat display plane) and the corresponding developed left and right upper and lower fields of view in a flat plane after mapping the image signal data from polar to Cartesian co-ordinates;
Fig. 24 shows the view from each of the back to back arrangements of Fig. 12 (in a flat display plane) and the corresponding developed left and right upper and lower fields of view in a flat plane after mapping the image signal data from polar to Cartesian co-ordinates; and
Fig. 25 shows the view from each of the back to back arrangements of Fig. 16 (in a flat display plane) and the corresponding developed left and right upper and lower fields of view in a flat plane after mapping the image signal data from polar to
Cartesian co-ordinates.
In some of the embodiments to be described, two differently sized mirrors are located on the same optical axis with a size difference equal to the interocular distance of 65 mm. This figure may vary for the purpose of hyperstereoscopy or hypostereoscopy for subjects of varying distance. There would thus be a 65 mm difference in the size of the two mirrors in the apparatus, measured from the central axis to the edges. The recorded scene strikes the larger mirror from a wider or horizontally displaced viewpoint, allowing image artefacts unseen by the smaller mirror, with its narrower viewing arc, to be recorded. The signals of the different cameras are recorded and displayed at the same size and resolution; however, there are clearly visible differences in the scene, giving the output genuine three-dimensional relief.
The use of a secondary reflector in each hemisphere reduces the vertical shift by bringing the two interocular (right and left) mirror sets closer together.
Parabolic, truncated parabolic or hyperboloidal reflectors of differing sizes can be used to achieve interocular distance over 360 degrees horizontally and up to 210 degrees vertically, with the use of standard lens groupings or telecentric lens groupings. Other embodiments include the use of pyramidal reflectors with four, six, eight or any appropriate number of sides.
Other methods of stereoscopic omnidirectional imaging use horizontally arranged systems, where interocular distance can be created in a 360x360 back to back embodiment by spacing the reflector sets at interocular distance, rather than using mirrors of differing size.
Other embodiments image stereo scenes over 360 degrees on the horizontal axis by 360 degrees on the vertical axis, utilising parabolic, truncated parabolic or hyperboloidal reflectors with standard lens groupings or telecentric lens groupings. The panospherical output is obtained by computer image processing involving spatial transformations and co-ordinate image mapping, providing the viewer with a flat-plane output for navigable display. Reference can be made to my co-pending applications. Further reference can be made to Digital Image Warping by George Wolberg, ISBN 0-8186-8944-7, and Space Image Processing by Julio Sanchez, ISBN 0-8493-3113-7.
For stills image processing with the parabolic type reflectors, the two images can be processed individually using Adobe Photoshop, utilising the Polar Co-ordinates filter with the polar-to-rectangular option checked. The images can thus be translated individually into plane viewable images, for further use with either anaglyphic or polarising display means.
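The same polar-to-rectangular translation can be performed programmatically. A minimal sketch, on a toy greyscale image held as a list of lists (the function name, sampling scheme and image sizes are illustrative assumptions):

```python
import math

def unwrap_polar(image, out_w, out_h):
    """Map an annular (circular) image, indexed image[y][x], to a
    flat panoramic strip by sampling along radii: output column ->
    angle, output row -> radius ('polar to rectangular')."""
    size = len(image)
    cx = cy = size / 2.0
    max_r = size / 2.0
    out = []
    for row in range(out_h):
        r = max_r * (1 - row / out_h)   # top of strip = rim of circle
        line = []
        for col in range(out_w):
            theta = 2 * math.pi * col / out_w
            x = min(size - 1, max(0, int(cx + r * math.cos(theta))))
            y = min(size - 1, max(0, int(cy + r * math.sin(theta))))
            line.append(image[y][x])
        out.append(line)
    return out

# A tiny 8x8 test image whose pixel value encodes its position.
img = [[x + 10 * y for x in range(8)] for y in range(8)]
strip = unwrap_polar(img, out_w=16, out_h=4)
```

A production system would use interpolated sampling rather than nearest-pixel rounding, but the geometry is the same as the Photoshop filter's polar-to-rectangular option.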
Turning now to the drawings, Figure 1 shows an embodiment where the difference in mirror size is achieved with parabolic type reflectors 1a, 1b, capable of recording an image hemisphere of 360 by approximately 210 degrees. The drawing shows a primary mirror (1a, 1b) and a secondary mirror (2a, 2b) for each image (left and right). The secondary mirrors may be of plane or concave form. The top camera 3a (including, for example, a lens focussing light onto a CCD) and its reflectors may be raised to allow sight over the bottom reflectors. The mirror 2b and its backing shield the upper mirror assembly 1a, 2a.
In Figure 2 similar reference numerals identify similar parts, and these are the same above and below the horizontal plane which divides the back to back convex reflectors 1b. The back to back configuration records stereo images over the full vertical axis as well as the horizontal, using plane secondary mirrors. The same applies to Figure 3, except that the plane secondary reflectors are replaced with concave secondary reflectors.
Figure 3 also shows a combination utilising a camera 3c directed straight at the top reflector 1a, negating the use of a secondary reflector for the smaller mirror.
Figures 4 and 5 replace the secondary reflectors with cameras 3d, which are directed at the primary reflectors 2a in each image hemisphere, giving a 360 x 360 stereo view.
In Figure 5 we see a 360 by up to 210 degree single hemisphere embodiment. This embodiment increases the vertical shift.
Figure 6 shows a pyramidal mirror assembly, with a means of gathering image information from each mirror face 1d, 2d. These faces or facets are part of a pyramidal or polygonal structure, only six such faces being present in Fig. 6 to simplify the drawing. These facets reflect light upwardly (or downwardly) into respective reflecting systems 5a, 5b, which turn the light through two right angles before directing it through lens systems 6a, 6b onto a part of the surface of CCDs 7a, 7b. Three such ray paths can be seen in the cross section, corresponding with three of the six sides or facets of this structure. The apparatus is shown with a mirror assembly transmitting each scene into the centre of the apparatus. Other embodiments may use prisms or fibre optical means, on, say, a four-, six- or eight-sided mirror, although it will be appreciated that the mirror may have any appropriate number of sides. Figure 7 shows a back to back embodiment to allow for greater fields of view.
Figure 8 shows an embodiment in which, if used horizontally, the interocular distance can be created in a 360 x 360 back to back arrangement by spacing the reflector sets at the interocular distance, rather than by using mirrors of differing size. In this embodiment, parabolic convex reflectors 1a, 1b reflect light onto respective planar mirrors 2a, 2b, which in turn reflect light onto cameras 3a, 3b.
Figures 9 and 10 show respective 360 x 180 and 360 x 360 arrangements in which a beam splitter or binocular means 8a, 8b views an omnidirectional image in the reflecting surface of a convex reflector 2a, 2b. The separation between the ray paths is the interocular distance. The beam splitter or binocular arrangement can be fitted with means for adjusting the spacing between the ray paths, so as to provide an interocular distance suitable for viewing the particular panoramic scene. For example, the scene may be near or far and the distance can be adjusted accordingly. A device fitted with a dual camera assembly can include a worm drive or adjustable screw to allow for hyperstereoscopy or hypostereoscopy, adjusting the distance between the two lenses for any extremes in the distances of the objects viewed. The normal position would use the standard interocular separation of 65 mm. To provide for 360-degree imaging on each horizontal axis, the beam splitter described in Figure 9 could be utilised in each of the four lenses. The left and right images can be received on a CCD as in the previous embodiments.
In Figs. 11 and 12, differently sized fisheye lens systems 10a, 10b are separated on the same optical axis, one lens being larger than the other by a factor related to the interocular distance. In this embodiment, the lens systems focus their respective omnidirectional images on respective imaging devices, such as CCDs 11a, 11b.
Figure 13 shows a modification of the back to back pyramidal type of structure shown in Figure 7, where the inclined outer reflecting faces or facets 12a, 12b of a convex regular pyramid reflect light incident from the surrounding scene on to upper reflectors 13a, 13b, which in turn reflect light downwardly through apertures 14a, 14b in the inclined faces or facets 12a, 12b. Located beneath each aperture is a camera or CCD sensing system 15a, 15b which captures that part of the image reflected by the respective facet and the overhead part of reflector 13a, 13b. Each aperture can be open, but in order to hide the camera lens from view the aperture may be, for example, semi-silvered so that it is partly transparent, or it can be clear and fully transparent.
Figures 14 and 15a show modifications of the fisheye lens systems shown in Figures 11 and 12. Differently sized fisheye lens systems are provided on the same optical axis, one lens system being larger than the other by a factor related to the interocular distance. In this embodiment, the height of the apparatus is reduced by moulding or otherwise mounting the wide angle, or fisheye, lens 21b and part 22b of the secondary lens grouping of the smaller lens system on the fisheye lens 21a of the larger lens system, such that part 22b of the secondary lens grouping of the smaller lens system lies within a meniscus void 23 formed in the fisheye lens 21a of the larger lens system. Opaque material 24 is located between the lens systems in order to prevent light crossover between the lens systems. Light focussed by the lenses 21b, 22b passes through the void 23 with the assistance of a relay lens indicated at 22c and into a tube 25 containing, for example, fibre optic means or a relay lens system and the remainder of the secondary lens grouping of the smaller lens system, for subsequent capture at the centre of the image sensor 26. Light focussed by lenses 21a, 22a of the larger lens system passes into lens grouping 25' surrounding tube 25, to be captured on the same image sensor 26. Figure 16a shows a back to back embodiment to allow for greater fields of view. Figure 15b shows a similar arrangement to Figure 15a, in which all of the lenses of the secondary lens grouping of the smaller lens system lie within the void 23. In this embodiment tube 25 comprises a fibre optic bundle or optical glass rods, or relay lens and secondary groupings, as required. Figure 16b shows a back to back embodiment to allow for greater fields of view. Figure 15c shows a similar arrangement to Figure 15a, in which lens grouping 25' is common to both the larger and smaller lens systems.
Figure 17a shows an embodiment where a beam splitter 27, for example a planar reflector angled at 45 degrees to the optical axis of the apparatus, is located between the tube 25 and the sensor 26 in order to redirect the image captured by the smaller lens system 21b, 22b to a separate image sensor 28. In an alternative arrangement, as shown in Figure 17b, the light focussed by the smaller lens system 21b, 22b passes through a central aperture in the sensor 26 on to an additional sensor 28 located beneath sensor 26. Each image sensor may comprise a CCD array with a plane, convex or concave image receiving surface to reduce optical distortion. A polarising filter may be located optically in front of each sensor to reduce the incidence of reflections from a housing on the sensors.
In the embodiment shown in Figure 18, the larger and smaller lens systems share a common secondary lens grouping 31 and image sensor 32, the smaller fisheye lens 21b being mounted on an insert 30 located within the void of the larger fisheye lens 21a. The outer surface of the insert 30 is reflective in order to deflect light focussed by each of the fisheye lenses 21a, 21b on to respective portions of the secondary lens grouping 31 for focussing on the image sensor 32. Figure 19 shows a back to back embodiment to allow for greater fields of view.
Figure 20 shows a modification of the embodiment shown in Figure 1, which modification includes a primary parabolic mirror (1a, 1b) for each image (left and right) and a common plane or concave mirror 2 focussed on to a CCD and lens system 3.
Figure 21 shows a back to back embodiment to allow for greater fields of view, and Figure 22 illustrates a mapping operation on the image signal data received from the image sensors of the arrangement shown in Figure 21, for transformation of the data into a Cartesian co-ordinate system for output to a display device. As described in my co-pending International application no. PCT/GB01/01115, the contents of which are incorporated herein by reference, each of the circular images is notionally divided into an array of pixels in a polar co-ordinate system. Each of these pixels is then mapped, using look-up tables stored in the image processing apparatus which also compensate for distortion resulting from the mapping, into a Cartesian co-ordinate system for display in a rectangular display. In this mapping technique, as shown in Figure 22, the four sectors numbered 1-4 and 5-8 in the right hand upper and lower hemispheres of view map to rectangular areas 1-4 and 5-8, and the four sectors A-D and E-H in the left hand upper and lower hemispheres of view map to rectangular areas A-D and E-H.
Mapping operations suitable for some of the other embodiments described above are illustrated in Figures 23 to 25. Figure 23 illustrates a mapping operation on the image signal data received from the image sensors of the arrangement shown in any of Figures 2 to 4, for transformation of the data into a Cartesian co-ordinate system for output to a display device. Figure 24 illustrates a mapping operation on the image signal data received from the image sensors of the arrangement shown in Figure 12, whilst Figure 25 illustrates a mapping operation on the image signal data received from the image sensors of the arrangement shown in either of Figures 16a and 16b. As in Figure 22, which is suitable for the embodiment shown in Figure 21, the four sectors numbered 1-4 and 5-8 in the right hand upper and lower hemispheres of view map to rectangular areas 1-4 and 5-8, and the four sectors A-D and E-H in the left hand upper and lower hemispheres of view map to rectangular areas A-D and E-H.
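The look-up-table approach described above can be sketched as follows: the polar-to-Cartesian correspondence is computed once, then each video frame is remapped by simple indexing (the function names, table layout and toy sizes are illustrative assumptions; a real implementation would also fold the distortion compensation into the table):

```python
import math

def build_lut(src_size, out_w, out_h):
    """For each output (Cartesian) pixel, record the index of the
    source (polar) pixel it samples: column -> angle, row -> radius.
    Returns a flat table of source indices."""
    cx = cy = src_size / 2.0
    lut = []
    for row in range(out_h):
        r = (src_size / 2.0) * (1 - row / out_h)
        for col in range(out_w):
            theta = 2 * math.pi * col / out_w
            x = min(src_size - 1, max(0, int(cx + r * math.cos(theta))))
            y = min(src_size - 1, max(0, int(cy + r * math.sin(theta))))
            lut.append(y * src_size + x)
    return lut

def apply_lut(flat_frame, lut):
    """Remap one flattened frame through the precomputed table."""
    return [flat_frame[i] for i in lut]

lut = build_lut(src_size=8, out_w=16, out_h=4)
frame = list(range(64))          # a toy 8x8 frame, flattened
mapped = apply_lut(frame, lut)
```

Precomputing the table means the per-frame cost is a single gather operation, which is what makes real-time video remapping practical.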
Drawings are not to scale, and specific lens groupings and mountings are only indicated, not detailed. Further references can be obtained from publications such as Modern Optical Engineering by Warren Smith, ISBN 0 07 136360 2, and Modern Lens Design by Warren Smith, ISBN 0 070591784. Once specific conjugate distances are established, software programs such as Code V from Optical Research Associates, or Zemax from Focus Software Inc, can be utilised to prepare specific or individual prescriptions.
The apparatus may use transparencies to protect the lenses and reflectors, or may use a solid optic between primary and secondary mirrors. The shape of such coverings should prescribe symmetry that minimises coma or astigmatism.
In the stereo display, when viewed with anaglyphic glasses (a red filter over the right eye and a green filter over the left), if the output signal is displayed separately with each side toned red and green, a perception of depth is given to the viewer. Another known art is the use of polarising glasses for the viewer and polarising filters for the dual projectors. Stereo computer displays by such manufacturers as Philips or Sharp are another possible alternative. Whilst both these methods are in the background art, the difference between them and this invention lies in the camera designs. The invention can be embodied to allow up to 360 x 360 degree image capture in real time, or live relay or transmission of video or film images suitable for multicast applications, or for range finding and targeting, allowing the stereo image scene to be viewed in the central annular viewing area with minimal optical aberrations or distortions, which are predominant in other systems.
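At the pixel level, one common anaglyph scheme combines the two toned views by channel selection; a minimal sketch (the function name is illustrative, and the channel assignment follows the common red/cyan convention, one of several used in practice):

```python
def anaglyph_pixel(left_rgb, right_rgb):
    """Combine corresponding pixels from the left and right views
    into one red/cyan anaglyph pixel: one view supplies the red
    channel, the other the green and blue channels, so coloured
    filter glasses deliver a different view to each eye."""
    lr, lg, lb = left_rgb
    rr, rg, rb = right_rgb
    return (lr, rg, rb)

# A left-view pixel and the corresponding right-view pixel.
combined = anaglyph_pixel((200, 50, 50), (60, 180, 170))
```

Applying this to every pixel pair of the two remapped panoramas produces a single image viewable with filter glasses on an ordinary display.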
Fibre optics or glass rods may be used in any of the described embodiments to relay the image scene, as described in my co-pending International application no.
PCT/GB01/01115, the contents of which are incorporated herein by reference. Means for combining the various scenes on to a single or multiple image sensors may be provided, as also described in my International application no. PCT/GB01/01115.
A useful side effect of the above-described stereoscopic imaging apparatus is its ability to perceptively "overlay" information from two sensors reading different wavelength information. Where separate left and right image sensors in each hemisphere are used, stereoscopy can be used to overlay invisible image data such as infra red. This is made possible by a combination of a visible spectrum sensor and an invisible spectrum sensor or any other image intensifying means, and by combining the use of separate and different lens materials, such as germanium for infra red wavelengths and Crown optical glass for visible wavelengths. When the stereo image is viewed, the invisible wavelength data is seen to be overlaid on the display, for example an infra red (or heat identifying) image overlaid on a colour visible wavelength video image. Alternatively, the two image data streams can provide a separate display of the image captured by each image sensor.
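Such an overlay can be sketched as a simple per-pixel blend of a normalised thermal reading over the visible image (the function, the red-tinting scheme and the alpha parameter are illustrative assumptions, not the specification's method):

```python
def overlay_thermal(visible_rgb, thermal_intensity, alpha=0.5):
    """Blend a normalised thermal reading (0.0 to 1.0) over a
    visible RGB pixel by pushing hot areas toward red; alpha
    controls how strongly the thermal layer shows through."""
    r, g, b = visible_rgb
    hot = int(255 * thermal_intensity)
    blend = lambda v, t: int((1 - alpha) * v + alpha * t)
    return (blend(r, hot), blend(g, 0), blend(b, 0))

# A visible-light pixel with a fairly hot thermal reading.
pixel = overlay_thermal((100, 120, 140), 0.8)
```

With alpha set to zero the visible image is shown unmodified, corresponding to the alternative of displaying each sensor's image separately.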