WO2004070430A2 - Multiple angle display produced from remote optical sensing devices - Google Patents
Multiple angle display produced from remote optical sensing devices
- Publication number
- WO2004070430A2 (PCT/US2003/005635)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- pixel array
- ground area
- satellite
- camera
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C11/00—Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
- G01C11/04—Interpretation of pictures
- G01C11/06—Interpretation of pictures by comparison of two or more pictures of the same area
Definitions
- This invention relates generally to the collection and presentation of optical information and, more particularly, to the acquisition, processing, and hard copy presentation of optical information obtained from a plurality of viewing or sensing angles.
- optical information is integral to a variety of activities. These activities include, without limitation, airborne and satellite surveillance and monitoring of areas of interest.
- the collected information is typically digitized on the platform on which the cameras or other optical sensors are mounted, pre-processed and sent by downlink for further processing.
- the information is often formatted and printed for visual inspection as well. For example, aerial photographs may be studied by skilled persons, both to identify items missed by computer recognition methods and to obtain further information not conveniently or accurately obtained by computer methods.
- the 2-dimensional picture has shortcomings. One is that it is 2-dimensional, which has aesthetic and related functional drawbacks. More particularly, the viewer does not obtain a sense of depth from a 2-dimensional picture, and this failure may cause a misinterpretation of information that a 3-dimensional view would have provided.
- Another shortcoming of existing art airborne and satellite surveillance systems, and the hard copy images they produce, is that the images show the photographed ground area only as seen from the one position and viewing angle at which it was originally obtained. For example, a photograph of a ten-foot diameter hole obtained by overflying it with a camera looking down at an angle of 45 degrees with respect to the flight path may fail to present an image of the contents of the hole.
- One possible solution to the above example problem is to fly over the item of interest, i.e., the hole, twice, with the camera looking straight down on the second flyover.
- Other possible solutions include mounting a plurality of cameras on the airborne or satellite platform, or mounting a camera on a steerable gimbal, thereby obtaining a plurality of pictures of a particular ground area, each from a different viewing angle.
- the viewer may have a hard copy of a first picture of a ground area, taken from a first airborne surveillance viewing angle, in which a building of interest is situated in, for example, the upper left corner of the copy.
- a second picture of the same ground area, taken from a second viewing angle may show the building in its upper right corner.
- Still another problem, which relates to the previously identified problem is that the viewer must change his or her visual focus continually, namely by looking at the pictures taken from one viewing angle and then looking at the pictures taken from another viewing angle. This can be inconvenient. It also increases the probability of human error, as the user must remember how something looked from one viewing angle as he or she shifts attention to another hard copy showing how the item appeared from another viewing angle.
- the existing art does provide a type of stereoscopic visual surveillance method, in which two frames of information are captured via satellite and transmitted to, for example, the National Reconnaissance Office (NRO).
- Printable images of the left and right frames are then generated, one being polarized orthogonal to the other, arranged above one another and printed.
- the user wears polarizing glasses, whereby his or her left eye sees the left image and his or her right eye sees the right image. The user thus sees an apparent three-dimensional image.
- each polarized image pair shows, and is limited to, the ground area of interest as seen from an oblique viewing angle.
- a typical stereoscopic image is formed by mounting two cameras on a satellite. One camera looks ahead of the satellite, at a depression angle toward the earth. The other camera looks behind the satellite, at the same depression angle. Therefore, the left image and the right image are each obtained by looking at the ground area of interest at an oblique viewing angle. For this reason the prior art stereoscopic image does not include a direct look-down angle. This can have significant consequences.
- FIG. 1 shows a simulated example of an image of a building 2 as if viewed in its original polarized format through polarized glasses.
- FIG. 4 is a microlens hard copy of the same building, displaying two three-dimensional views generated and fixed in hard copy in accordance with a method of the present invention.
- FIG. 1 does not reveal any object proximal to the building 2.
- One of the two viewing angles provided by FIG. 4, though, reveals a missile 4.
- the missile 4 cannot be seen in FIG. 1 because it is close against a side wall of the building 2, and the FIG. 1 stereoscopic image was obtained from oblique viewing angles.
- the present invention advances the art and overcomes the problems identified above by placing on a single microlens sheet images of an object or ground area as seen from a remote distance at multiple viewing angles, such that the viewer can move the sheet to see the object from any of the viewing angles.
- the original images are taken from the remote distance, and then processed, formatted and fixed to the microlens sheet such that the user will see a plurality of three-dimensional images.
- the microlens sheet may comprise a plurality of semi-cylindrical or similar cross- section transparent lenses, lying in a plane and extending parallel to one another.
- the original images are obtained by, for example, one or more optical detectors mounted on an aerial or space-borne platform.
- the optical detectors may detect visible or non-visible frequency bands, or combinations of the same.
- the optical detectors may be steerable or pointable, either by commands entered local to the platform or by uplink.
- the optical detectors may include a telephoto lens, with or without a zoom feature.
- a first embodiment of the invention is a method including flying the platform, or orbiting it, along a platform path over a ground area of interest.
- a first detection image is detected by the optical detector from a first position on the platform path such that the ground area of interest is in the first image's field of view.
- a second detection image is detected by the optical detector when the platform is at a second position on the platform path.
- a third detection image is detected.
- the second detection image and the third detection image are detected such that the ground area of interest is in the field of view of each.
- a first digital pixel array representing the first detection image is input to a data processor.
- a second digital pixel array representing the second detection image, and a third digital pixel array representing the third detected image are input to the data processor.
- the data processor generates an output interphased or interlaced digital pixel array based on the first digital pixel array, the second digital pixel array, and the third pixel array.
- a visible interphased image is printed on a printable surface of a hard copy sheet, the printed image being based on the output interphased digital pixel array, and a microlens sheet is overlaid onto the printable surface.
- the output interphased digital pixel array is generated, and the visible interphased image is printed such that when the microlens sheet is overlaid the user sees, from a first viewing position, a first three-dimensional image based on the first and second detected images and, from a second viewing position, sees a second three-dimensional image based on the second and third detected images.
- the first three-dimensional image shows a surveillance line of sight extending from the ground area of interest to a point on the platform path halfway between the first position and the second position.
- the second three-dimensional image shows a surveillance line of sight extending from the ground area of interest to a point on the platform path halfway between the second position and the third position. Since the first three-dimensional image and the second three-dimensional image each include a direct down view, each provides a view into holes, between buildings and the like.
- the microlens sheet comprises a plurality of semi-cylindrical or similar cross-section transparent lenses, lying in a plane and extending parallel to one another.
- a rotation axis lies in the plane and extends in a direction parallel to the lenses.
- the first orientation and position includes a first rotation of the hard copy sheet about the rotation axis and the second orientation and position includes a second rotation of the hard copy sheet about the rotation axis.
- the microlens sheet comprises a plurality of lenses, each having a circular or elliptical circumference, and each having a hemispherical or aspherical cross-section.
- a first rotation axis lies in the plane, and a second rotation axis lies in the plane and extends normal to the first rotation axis.
- the first orientation and position includes a first rotation of the hard copy sheet about the first rotation axis and the second orientation and position includes a second rotation of the hard copy sheet about the first rotation axis.
- the microlens sheet comprises a plurality of lenses, each having a circular or elliptical circumference, and each having a hemispherical or aspherical cross-section, according to the previously identified aspect.
- a fourth detection image is detected from a fourth viewing position spaced laterally in a first direction from the first platform path, with the ground area of interest being in the fourth detection image's field of view.
- a fourth digital pixel array representing the fourth detection image is input to the data processor.
- the data processor generates an output interphased or interlaced digital pixel array based on the first digital pixel array, the second digital pixel array, the third pixel array, and the fourth pixel array.
- the output interphased digital pixel array is further generated, the visible interphased image is printed, and the microlens sheet is overlaid such that when the user rotates the microlens sheet between a first rotation position and a second rotation position about the second rotation axis, the user sees a visual three-dimensional image based, at least in part, on the fourth detected image.
- a further embodiment of the invention is a method including flying the platform along a platform path above a ground area of interest.
- a first left detection image is detected by the optical detector from a first position on the platform path.
- a first right detection image is detected by the optical detector from a second position on the platform path.
- a second left detection image is detected by the optical detector from a third position on the platform path.
- a second right detection image is detected by the optical detector from a fourth position on the platform path.
- a first digital pixel array representing the first detection image is input to a data processor.
- a second digital pixel array representing the second detection image, and a third digital pixel array representing the third detected image are input to the data processor.
- the data processor generates an output interphased or interlaced digital pixel array based on the first digital pixel array, the second digital pixel array, and the third pixel array.
- a visible interphased image is printed on a printable surface of a hard copy sheet, the printed image being based on the output interphased digital pixel array, and a microlens sheet is overlaid onto the printable surface.
- the output interphased digital pixel array is generated, and the visible interphased image is printed such that when the microlens sheet is overlaid the user sees, from a first viewing position, a first three-dimensional image based on the first and second detected images and, from a second viewing position, sees a second three- dimensional image based on the second and third detected images.
- the platform path is any of a curved path, semicircular or circular path, or combination of such paths, about a surveillance axis extending normal to the ground area of interest.
- the microlens sheet comprises a plurality of lenses, each having a circular or elliptical circumference, and each having a hemispherical or aspherical cross- section, according to the previously identified aspect.
- a fourth detection image is detected when the platform is at a fourth position on the platform path. The fourth detection image is such that the ground area of interest is in its field of view.
- a fourth digital pixel array representing the fourth detection image is input to the data processor.
- the data processor generates an output interphased or interlaced digital pixel array based on the first digital pixel array, the second digital pixel array, the third pixel array, and the fourth pixel array.
- the output interphased digital pixel array is further generated, the visible interphased image is printed, and the microlens sheet is overlaid such that when the user rotates the microlens sheet between a first rotation position and a second rotation position about the second rotation axis, the user sees a visual three-dimensional image based, at least in part, on the fourth detected image.
- An objective of the present invention is to convert remotely acquired images into one or more hard copy motion, or hard copy multidimensional images and/or display devices, to dramatically improve visual intelligence.
- the hard copy information can then be used, for example, in briefings and distributed to those involved, e.g., pilots, special ops, before carrying out a mission.
- the micro lens sheet has been specifically designed to map special algorithms that maintain the geometric and physical properties of light waves. This is accomplished by interphasing the light waves in the precise formulas that are designed to fit the specific requirements of the particular discipline being used.
- the micro lenses are also designed to transmit back to the human visual system the light waves that closely replicate the original scene or object.
- the micro lens sheet thereby serves as a method of storage and translation of reconnaissance information.
- the micro lens sheet permits the viewer to see a multidimensional projection or a "360 degree look-around" view including, for example, height of objects including, for example, buildings, mountains, fires, etc. without the aid of further processing and display equipment.
- FIG. 1 is an example picture of a building generated for viewing through a polarization-based three-dimensional viewing system.
- FIG. 2 depicts an example surveillance system according to the present invention.
- FIG. 3 shows an example flow chart of a first method according to the present invention.
- FIG. 4 is a microlens hard copy showing two three-dimensional views of the building shown in FIG. 1, generated in accordance with the methods of the present invention.
- FIG. 2 shows an example of a surveillance system for carrying out the method of the present invention.
- the FIG. 2 system comprises a low earth orbit (LEO) satellite 10, shown at position 12a, position 12b and position 12c along an orbit path 14.
- Mounted on the satellite 10 are a forward-looking camera 16, a nadir camera 18, and a rearward-looking camera 20.
- the forward camera 16 has a line of sight 16L pointing down from the orbit tangent line TL at an angle THETA.
- the line of sight 18L of the nadir camera 18 points directly down, and the rearward camera 20 has a line of sight 20L that points down at an angle of minus THETA.
- the forward camera 16 is shown at position 12a as covering a field of view FV on the ground.
- the nadir camera 18 has a field of view covering the same area FV.
- the field of view of the rearward camera 20 is the area FV.
- the above-described example platform is known in the art and therefore a further detailed description is not necessary.
- Other platforms and arrangements of cameras are known in the art and may be used.
- the cameras 16, 18 and 20 could be for bandwidths other than visual near-infrared (VNIR), examples being panchromatic and shortwave infrared (SWIR). Additional cameras may be mounted on the platform as well.
- Also shown in FIG. 2 is a ground station 30, a processing station 32, a communication link 34 between the ground station 30 and the processing station 32, and an inkjet printer 36.
- An uplink 38 carries command and control signals from the ground station 30 to the satellite 10.
- a downlink 40 carries camera sensor data, described below, and satellite status information.
- the ground station 30, the uplink 38 and the downlink 40 are in accordance with the known art of satellite communication and control and, therefore, description is not necessary for understanding or practicing this invention.
- FIG. 3 shows an example flow chart for a first embodiment of the invention, and an example operation will be described in reference to the system illustrated by FIG. 2.
- the example describes only generating and displaying, according to this invention, a multi-view image of one ground area, labeled FV.
- a multi-view image would likely be generated of a plurality of ground areas LV and, as will be understood by one of ordinary skill in the field of satellite and airborne surveillance, such images can be obtained by repeating the FIG. 3 method.
- UPDATE commands are first sent by uplink at block 100.
- block 100 is for illustrative purposes only, and need not be repeated each time that image sensing data is to be collected.
- the timing, specific content, protocols, and signal characteristics of the UPDATE commands are specific to the particular kind and configuration of the satellite 10 and the ground station 30, and such commands are readily implemented by persons skilled in the art of satellite controls.
- SENSOR LEFT data is collected by the ground station 30 from camera 16 when the satellite is at position 12a.
- SENSOR CENTER data is collected from camera 18.
- SENSOR RIGHT data is collected from camera 20. It will be understood that blocks 102, 104 and 106 are not necessarily performed as separate data collection steps.
- the SENSOR LEFT, SENSOR CENTER and SENSOR RIGHT data may be multiplexed onto a single data stream and continually collected during a time interval that includes the times that the satellite is at positions 12a, 12b and 12c. Further, the collection is not necessarily performed at the ground station 30, because other ground receiving stations (not shown) may receive the data downlink from the satellite 10. Such arrangements of ground stations and data collection stations are known in the art. Still further, the collection steps 102, 104 and 106 may include retransmission through ground repeaters (not shown), as well as encryption and decryption, and land-line transmissions. These data transfer methods and protocols are known in the art.
- step 108 formats the data, sends it over the link 34 and inputs it to a data processor, shown as item 32 in FIG. 2.
- the link 34 may be the Internet and, accordingly, the formatting, transfer and input may further include data and data network transmissions such as, for example, a File Transfer Protocol (FTP) transfer. Further, the link 34 is shown for purposes of example only.
- the data processor 32 may be local to the ground station 30, or to any other ground receiving station.
- the data processor 32 can be any of a large variety of standard commercially available general purpose programmable digital computers (not shown) having, for example, a standard protocol digital input port, a microprocessor, operating system storage, operating system software stored in same, application program storage, data storage, a standard protocol digital output port and, preferably, a user interface, and a video screen.
- An example computer is a Dell® model Optiplex® GX 150 having a 1 GHz Intel® Pentium® III or Celeron® microprocessor, 528 MByte RAM, a 60 GByte hard drive, a 19 inch conventional cathode ray tube (CRT) video display, and a standard keyboard and mouse for user entry of data and commands, running under Microsoft Windows 2000® or Windows XP® operating system.
- step 110 reformats the SENSOR LEFT, SENSOR CENTER and SENSOR RIGHT data into three M x N pixel arrays, which are labeled for reference purposes as LeftPixelArray, CenterPixelArray and RightPixelArray.
- the step 110 reformatting is based on a predetermined, user-input MicroLensData which characterizes physical parameters of the microlens sheet on which the final image set will be printed.
- Step 110 may also be based on a PrinterResData characterizing performance parameters of the printer 36, particularly the printer's resolution in, for example, dots per inch (DPI).
- Step 110 uses this microlens data, such as the sheet's lenses-per-inch (LPI) pitch, together with the PrinterResData, to convert the SENSOR LEFT, SENSOR RIGHT and SENSOR CENTER data into N x M pixel arrays LeftPixelArray, CenterPixelArray and RightPixelArray, with N and M selected to place an optimal number of printed pixels under each lenticule.
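The sizing logic described above can be sketched as follows. This is an illustrative reading of step 110, not the patent's actual formulas: the function names and the 600 DPI / 60 LPI values are assumptions chosen only to show how printer resolution and lens pitch jointly determine the interphased array dimensions.

```python
# Hypothetical sketch of the step-110 sizing calculation: given the
# microlens sheet's lens pitch (lenses per inch, LPI) and the printer
# resolution (dots per inch, DPI), determine how many printed pixel
# columns fit under each lenticule and how wide the output array is.

def pixels_per_lenticule(printer_dpi: int, lens_lpi: float) -> int:
    # Printed dots available under one lenticule; this must be at
    # least one dot per interlaced view.
    return int(printer_dpi / lens_lpi)

def array_width(sheet_width_in: float, lens_lpi: float, n_views: int) -> int:
    # One pixel column per view per lenticule across the sheet width.
    return int(sheet_width_in * lens_lpi) * n_views

dots = pixels_per_lenticule(printer_dpi=600, lens_lpi=60.0)
cols = array_width(sheet_width_in=8.5, lens_lpi=60.0, n_views=2)
print(dots, cols)  # 10 dots per lenticule, 1020 interphased columns
```

In this toy configuration each lenticule covers ten printed dots, so two interlaced views leave five dots per view per lenticule; a real implementation would fold the camera pixel counts into the same calculation to choose N and M.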
- the pixel resolution of the nadir camera 18 may differ from the pixel resolution of the forward and rearward looking cameras 16 and 20.
- the pixel resolution may differ in terms of the number of pixels generated by the camera, and by the ground area represented by each pixel.
- One reason for the latter is that the image field of the forward and rearward cameras 16 and 20, in terms of the total ground area, is typically larger than that covered by the nadir camera 18.
- each pixel generated by the nadir camera 18 may represent 5 meters by 5 meters of ground area, while each pixel generated by the cameras 16 and 20 may represent, for example, 8 meters by 8 meters.
- To reconcile these resolution differences, image mapping algorithms may be used. Such algorithms are well known in the art.
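One simple mapping of this kind can be sketched with nearest-neighbour resampling. The patent does not specify an algorithm, so this is an assumed minimal approach; the 8 m and 5 m ground sample distances echo the example above, and a production system would instead use full orthorectification.

```python
import numpy as np

def resample_to_common_grid(img: np.ndarray, src_gsd: float, dst_gsd: float) -> np.ndarray:
    # Nearest-neighbour resampling of an image from a source ground
    # sample distance (metres per pixel) to a destination one, so that
    # the oblique and nadir arrays cover the ground at the same scale.
    h, w = img.shape
    new_h = int(round(h * src_gsd / dst_gsd))
    new_w = int(round(w * src_gsd / dst_gsd))
    rows = np.minimum((np.arange(new_h) * dst_gsd / src_gsd).astype(int), h - 1)
    cols = np.minimum((np.arange(new_w) * dst_gsd / src_gsd).astype(int), w - 1)
    return img[np.ix_(rows, cols)]

oblique = np.arange(100, dtype=np.uint8).reshape(10, 10)  # toy 8 m/pixel image
matched = resample_to_common_grid(oblique, src_gsd=8.0, dst_gsd=5.0)
print(matched.shape)  # (16, 16): 10 pixels at 8 m become 16 pixels at 5 m
```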
- the " Images represented by LeftPixelArray, CenterPixelArray and RightPixelArray are of the same ground area LV, viewed from the three positions shown in FIG. 2 as 12a, 12b, and 12c.
- the difference between the LeftPixelArray and CenterPixelArray is a parallax equivalent to a viewer having his or her left eye at position 12a and his or her right eye at position 12b and looking at the LV area.
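The magnitude of that parallax can be estimated with the standard vertical-photograph approximation, disparity ≈ baseline × height / altitude. This is a toy calculation under assumed numbers (a 200 km baseline between positions 12a and 12b, a 500 km orbit altitude, a 30 m building, 5 m pixels), not values from the patent:

```python
def disparity_pixels(baseline_m: float, altitude_m: float,
                     height_m: float, gsd_m: float) -> float:
    # Apparent shift, in pixels, of a point of height `height_m`
    # between two views whose detection positions are `baseline_m`
    # apart at `altitude_m`, with `gsd_m` metres of ground per pixel.
    # Uses the h << H stereo-parallax approximation d = B*h/H.
    ground_shift_m = baseline_m * height_m / altitude_m
    return ground_shift_m / gsd_m

print(disparity_pixels(200_000, 500_000, 30.0, 5.0))  # 2.4 pixels
```

It is this per-object shift between the left and center arrays that the microlens sheet later presents separately to the viewer's two eyes, producing the depth impression.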
- step 112 generates a 3dView1 and a 3dView2 image, the first being a 2N x 2M pixel array representing a rasterization and interlacing of the LeftPixelArray and CenterPixelArray, and the second being a pixel array of the same format representing a rasterization and interlacing of the CenterPixelArray and RightPixelArray.
- the 3dView1 pixels are spaced such that when overlaid by the microlens sheet the light from pixels corresponding to the LeftPixelArray is diffracted in one direction and the light from pixels corresponding to the CenterPixelArray is diffracted in another direction.
- the 3dView2 pixels are spaced such that when overlaid by the microlens sheet the light from pixels corresponding to the CenterPixelArray is diffracted in one direction and the light from pixels corresponding to the RightPixelArray is diffracted in another direction.
- the optical physics of the diffraction are known in the lenticular arts. An example description is found in U.S. Patent No. 6,091,482.
- Utilizing mathematical and ray-trace models of the microlens sheet, step 112 generates 3dView1 to have a pixel spacing, relative to the lenses of the microlens sheet, such that when a user views the microlens sheet from a first viewing direction light from the LeftPixelArray pixels impinges on the viewer's left eye, and light from the CenterPixelArray pixels impinges on the viewer's right eye. Therefore, when the viewer looks at the microlens sheet from the first viewing direction the viewer sees a three-dimensional image of the LV area, as if seen from the VP1 position.
- step 112 generates 3dView2 such that the pixels are spaced relative to the lenses of the microlens sheet so that when a user views the microlens sheet from a second viewing direction light from the CenterPixelArray pixels impinges on the viewer's left eye, and light from the RightPixelArray pixels impinges on the viewer's right eye. Therefore, when the viewer looks at the microlens sheet from the second viewing direction the viewer sees a three-dimensional image of the LV area, as if seen from the VP2 position.
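The core of the step-112 interlacing can be sketched as a column interleave of two views. This is a deliberately simplified stand-in for the patent's interphasing: the function name is hypothetical, and a real pipeline would also register the pixel columns to the lenticule pitch and printer DPI as described above.

```python
import numpy as np

def interphase_pair(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    # Interleave the columns of two views so that alternate printed
    # pixel columns sit under the two flanks of each semi-cylindrical
    # lenticule: even columns carry the left-eye view and odd columns
    # the right-eye view.
    assert left.shape == right.shape
    h, w = left.shape
    out = np.empty((h, 2 * w), dtype=left.dtype)
    out[:, 0::2] = left
    out[:, 1::2] = right
    return out

left = np.full((4, 3), 10, dtype=np.uint8)    # stand-in for LeftPixelArray
center = np.full((4, 3), 20, dtype=np.uint8)  # stand-in for CenterPixelArray
view1 = interphase_pair(left, center)         # analogous to 3dView1
print(view1.shape)        # (4, 6)
print(view1[0].tolist())  # [10, 20, 10, 20, 10, 20]
```

Pairing CenterPixelArray with RightPixelArray in the same way would yield the 3dView2 analogue, so that tilting the sheet selects which interleaved pair reaches the viewer's eyes.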
- the pixels for 3dView1, 3dView2, 3dView3 and 3dView4 are printed and overlaid with a microlens sheet comprising a plurality of circular footprint lenses, each having a hemispherical or aspherical cross section.
- the microlens sheet has a first rotation axis extending in a direction in the plane of the microlenses and a second rotation axis extending in the same plane but perpendicular to the first rotation axis.
- the spacing of the 3dView1, 3dView2, 3dView3 and 3dView4 pixels, with respect to the microlens sheet lenses, is such that when the viewer's line of sight is at a first rotation about the first axis the viewer sees a three-dimensional image corresponding to 3dView1.
- when the viewer's line of sight is at a second rotational position about the first axis, he or she sees an image corresponding to the 3dView2 image.
- when the user rotates the microlens sheet to a particular position about the second axis, the viewer sees a three-dimensional image corresponding to the 3dView3 image.
- at another rotational position about the second axis, the viewer sees a three-dimensional image corresponding to the 3dView4 image.
- the pixels can be printed directly on a microlens sheet having a printable back surface.
- the viewer is provided with a single hard copy which, with respect to a building, shows the building's left face, right face, front face and back face, each view being three-dimensional. Further, referring to FIG. 2, provided that the images on which the three-dimensional images are based are obtained from a viewing position such as 12b, the viewer will have the option of rotating the hard copy so that he or she looks directly down at the ground area of interest.
- the present invention thereby presents the user, by way of a single hard copy, with the two three-dimensional views, one as if seeing the LV area from VP1 and the other as if seeing the LV area from VP2.
- the user does not have to wear glasses to see any of the 3-dimensional pictures. Instead, with the unaided eye the user can see multiple views of an area or object using only a single hard copy.
- the prior art provides only a single three-dimensional view, which the user must wear special glasses to see. Therefore, with the present invention the user does not have to wear special glasses and does not have to keep track of, and look back and forth between, a plurality of pictures when studying an area or item of interest.
- the hard copies can be any viewable size such as, for example, 8 ½" by 11", paper size "A4", large poster-size sheets, or 3" by 5" cards.
- FIG. 4 is a microlens hard copy of a simulated ground area imaged in accordance with the above-described invention.
- the simulated ground area includes a building 2.
- the FIG. 4 microlens hard copy provides two three-dimensional views of the ground area. In one of the two three-dimensional views a missile, labeled as item 4, is seen located against a side wall of the building.
- the FIG. 1 simulation of an existing art stereoscopic image of the building 2 does not show the missile 4. It does not because the viewing angle would have to be a nadir angle, such as that obtained from the nadir camera 18.
- Such a nadir image of the building 2 is included in one of the two three-dimensional views of the building 2 provided by FIG. 4.
- To see the missile 4 using the existing art, the user would have to be given a hard copy separate from FIG. 1. Therefore, as can be seen, the user would have to wear polarizing glasses to see what is shown by FIG. 1, and then look at a separate image, if one was provided, to see the missile 4.
- the present invention solves these problems by providing the user with a hard copy showing multiple three-dimensional views of the area of interest, and the user can inspect each of these, in three-dimensional viewing, by simply rotating the hard copy.
- the example above was described using three image-taking positions, namely 12a, 12b and 12c, and generating two three-dimensional images as a result.
- a larger number of image-taking positions may be used.
- the above-described example used images taken from positions 12a and 12b to generate the three-dimensional image along the line of sight VP1, and the image taken from position 12b again, paired with the image taken from position 12c, to generate the three-dimensional image along view line VP2.
- the second three-dimensional image could have used additional viewing positions, each spaced in the orbit direction beyond positions 12b and 12c.
- the above-described example obtained images by orbiting a single satellite in a planar orbit forming an arc over the imaged area.
- a 360 degree viewing angle hard copy may be generated by using two satellites, with their respective orbits crossing over one another above the area of interest, at an angle preferably close to ninety degrees.
- the first satellite would obtain three images representing, respectively, the area of interest as seen from a first, second and third position along that satellite's orbit. From these three images two left-right images would be generated, such as the 3dView1 and 3dView2 images described above.
- the second satellite obtains three images representing, respectively, the area of interest as seen from a first, second and third position along that satellite's orbit. From these three images two additional left-right images are generated, and these may be labeled as 3dView3 and 3dView4.
- FIG. 2 shows the satellite 10 as an example platform.
- the present invention further contemplates use of an airborne platform.
- the airborne platform can be manned or unmanned.
- the cameras can be gimbal mounted, with automatic tracking and stabilization, as known in the art of airborne surveillance.
- Such a system may fly the platform in a circular path around a ground area of interest and obtain a plurality of detection images from various points along the path.
- the image data may be downloaded during flight or stored on board the platform for later retrieval.
- Two or more pairs of the detection images would be used to generate left-eye and right-eye images, each pair having the parallax information for a three-dimensional view from a viewing angle halfway between the position from which the left-eye image was detected and the position from which the right-eye image was detected.
- Registration and alignment of the left-eye and right-eye images may be performed, using known image processing techniques, before interphasing their respective pixel arrays for printing and viewing through a microlens sheet.
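One common way to interphase two registered pixel arrays for viewing through a lenticular or microlens sheet is to alternate columns of the left-eye and right-eye images, so each lenslet covers one column from each view. The following is a minimal sketch under that assumption; the two-view, nested-list representation and the function name are illustrative, not the application's stated method.

```python
def interphase(left, right):
    """Interleave columns of two equally sized pixel arrays (lists of rows).

    Produces an output twice as wide, with left-eye and right-eye columns
    alternating so a microlens sheet can direct each to the proper eye.
    """
    assert len(left) == len(right) and len(left[0]) == len(right[0])
    out = []
    for lrow, rrow in zip(left, right):
        row = []
        for lpix, rpix in zip(lrow, rrow):
            row.extend([lpix, rpix])  # one column per view under each lenslet
        out.append(row)
    return out

left = [[10, 20], [30, 40]]
right = [[11, 21], [31, 41]]
print(interphase(left, right))  # [[10, 11, 20, 21], [30, 31, 40, 41]]
```

Registration must precede this step: any residual misalignment between the two arrays shows up directly as parallax error in the printed hard copy.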
- the present invention further contemplates use of multiple platforms for obtaining the plurality of detection images of a particular ground area of interest.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Stereoscopic And Panoramic Photography (AREA)
- Closed-Circuit Television Systems (AREA)
Abstract
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| AU2003303864A AU2003303864A1 (en) | 2002-03-01 | 2003-02-28 | Multiple angle display produced from remote optical sensing devices |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US36109902P | 2002-03-01 | 2002-03-01 | |
| US60/361,099 | 2002-03-01 |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| WO2004070430A2 true WO2004070430A2 (fr) | 2004-08-19 |
| WO2004070430A3 WO2004070430A3 (fr) | 2004-10-21 |
Family
ID=32849598
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/US2003/005635 Ceased WO2004070430A2 (fr) | 2002-03-01 | 2003-02-28 | Ecran a plusieurs angles produit a partir de dispositifs de detection optiques situes a distance |
Country Status (2)
| Country | Link |
|---|---|
| AU (1) | AU2003303864A1 (fr) |
| WO (1) | WO2004070430A2 (fr) |
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20110304614A1 (en) * | 2010-06-11 | 2011-12-15 | Sony Corporation | Stereoscopic image display device and stereoscopic image display method |
| WO2014171867A1 (fr) * | 2013-04-19 | 2014-10-23 | Saab Ab | Procédé et système pour analyser des images en provenance de satellites |
Family Cites Families (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US5905593A (en) * | 1995-11-16 | 1999-05-18 | 3-D Image Technology | Method and apparatus of producing 3D video by correcting the effects of video monitor on lenticular layer |
| US5600402A (en) * | 1995-05-04 | 1997-02-04 | Kainen; Daniel B. | Method and apparatus for producing three-dimensional graphic images using a lenticular sheet |
- 2003
- 2003-02-28 AU AU2003303864A patent/AU2003303864A1/en not_active Abandoned
- 2003-02-28 WO PCT/US2003/005635 patent/WO2004070430A2/fr not_active Ceased
Also Published As
| Publication number | Publication date |
|---|---|
| AU2003303864A1 (en) | 2004-08-30 |
| AU2003303864A8 (en) | 2004-08-30 |
| WO2004070430A3 (fr) | 2004-10-21 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US6894809B2 (en) | Multiple angle display produced from remote optical sensing devices | |
| US9294755B2 (en) | Correcting frame-to-frame image changes due to motion for three dimensional (3-D) persistent observations | |
| EP1639405B1 (fr) | SYSTEME DE CAMERAS 3D/360o NUMERIQUES | |
| EP2302941B1 (fr) | Système et procédé pour la création d'une vidéo stéréoscopique 3D | |
| US5124915A (en) | Computer-aided data collection system for assisting in analyzing critical situations | |
| CA2737451C (fr) | Procede et appareil d'affichage d'images stereographiques d'une region | |
| US4807024A (en) | Three-dimensional display methods and apparatus | |
| US20180295335A1 (en) | Stereographic Imaging System Employing A Wide Field, Low Resolution Camera And A Narrow Field, High Resolution Camera | |
| US9536320B1 (en) | Multiple coordinated detectors for examination and ranging | |
| EP2659680B1 (fr) | Procédé et appareil pour réaliser une monovision dans un système multivue | |
| US20100295927A1 (en) | System and method for stereoscopic imaging | |
| CN111541887B (zh) | 一种裸眼3d视觉伪装系统 | |
| EP1999685B1 (fr) | Procédé et système de stockage d'informations 3d | |
| US10084966B2 (en) | Methods and apparatus for synchronizing multiple lens shutters using GPS pulse per second signaling | |
| US6781707B2 (en) | Multi-spectral display | |
| KR102126159B1 (ko) | 주사형 전방위 카메라 및 주사형 입체 전방위 카메라 | |
| CA2429176A1 (fr) | Imagerie combinee en deux/trois dimensions en couleurs | |
| WO2021149484A1 (fr) | Dispositif de génération d'image, procédé de génération d'image et programme | |
| WO2018118751A1 (fr) | Systèmes et procédés d'imagerie stéréoscopique | |
| US20180174270A1 (en) | Systems and Methods For Mapping Object Sizes and Positions Onto A Cylindrical Panorama Using A Pivoting Stereoscopic Camera | |
| Schultz et al. | System for real-time generation of georeferenced terrain models | |
| WO2004070430A2 (fr) | Ecran a plusieurs angles produit a partir de dispositifs de detection optiques situes a distance | |
| Buchroithner et al. | Three in one: Multiscale hardcopy depiction of the Mars surface in true-3D | |
| Ondrejka et al. | Note on the stereo interpretation of nimbus ii apt photography | |
| US6603503B1 (en) | Methods, systems and devices for displaying live 3-D, parallax and panoramic images |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| AK | Designated states |
Kind code of ref document: A2 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ OM PH PL PT RO RU SD SE SG SK SL TJ TM TN TR TT TZ UA UG UZ VN YU ZA ZM ZW |
|
| AL | Designated countries for regional patents |
Kind code of ref document: A2 Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PT SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
| 122 | Ep: pct application non-entry in european phase | ||
| NENP | Non-entry into the national phase |
Ref country code: JP |
|
| WWW | Wipo information: withdrawn in national office |
Country of ref document: JP |