HK1182192B - Self-calibrated, remote imaging and data processing system
- Publication number: HK1182192B (application HK13109329.8A)
- Authority: HK (Hong Kong)
- Prior art keywords: imaging sensor, sensor, imaging, image, rigid mounting
Description
Cross Reference to Related Applications
This application is a continuation-in-part of U.S. patent application serial No. 11/581,235, filed in 2006, which claims priority to U.S. patent application serial No. 10/664,737, filed on September 18, 2003, which in turn claims priority to U.S. provisional patent application serial No. 60/412,504, filed on September 20, 2002, for a "vehicle-based data collection and processing system".
Technical Field
The present invention relates generally to the field of remote imaging technology and, more particularly, to a system for rendering high resolution, high accuracy, low distortion digital images over an extremely large field of view.
Background
Remote sensing and imaging encompass a broad range of technologies with many different and extremely important practical applications, such as geological mapping and analysis and meteorological forecasting. Aerial and satellite-based photography and imaging are particularly useful remote imaging techniques that have become extremely dependent in recent years on the collection and processing of digital image data, including spectral, spatial, elevation, and vehicle position and orientation parameters. Spatial data characterizing real-world man-made improvements and their locations, roads and highways, environmental hazards and conditions, utility infrastructure (e.g., telephone lines, pipelines), and geophysical features can now be collected, processed, and transmitted in digital format to conveniently provide highly accurate mapping and monitoring data for various applications (e.g., dynamic GPS mapping). Elevation data may be used to improve the spatial and positional accuracy of the overall system, and may be obtained from existing Digital Elevation Model (DEM) datasets or collected, together with the spectral sensor data, from active Doppler-based measurement or from passive stereographic calculations.
The main challenges facing remote sensing and imaging applications are spatial resolution and spectral fidelity. Photographic problems such as spherical aberration, astigmatism, field curvature, distortion, and chromatic aberration are well-known problems that must be addressed in any sensor/imaging application. Some applications require very high image resolution, typically with tolerances of several inches. Depending on the particular system used (e.g., aircraft, satellite, or spacecraft), the actual digital imaging device may be located anywhere from a few feet to miles from its target, resulting in a very large scale factor. Providing images with such large scale factors, yet with resolution tolerances of several inches, poses a challenge for even the most robust imaging systems. Thus, conventional systems typically must compromise between resolution quality and the size of the target region that can be imaged. If the system is designed to provide high-resolution digital images, the field of view (FOV) of the imaging device is generally small. If the system provides a larger FOV, the resolution of the spectral and spatial data is typically reduced and distortion increases.
Orthoimaging is one approach that has been used in an attempt to address this problem. In general, orthoimaging renders a composite image of a target by compiling distinct sub-images of the target. Typically, in an aerial imaging application, a digital imaging device with limited range and resolution sequentially records images of fixed sections of a target area. These images are then aligned in sequence to render a composite image of the target region.
Typically, such a rendering process is very time consuming and labor intensive. In many cases, the process requires iterative steps that noticeably degrade image quality and resolution, especially where thousands of sub-images are rendered. Where the imaging data can be processed automatically, the data are typically transformed and sampled repeatedly, reducing color fidelity and image sharpness with each successive operation. If an automated correction or equalization system is employed, such a system may be susceptible to image anomalies (e.g., unusually bright or dark objects), leading to over-correction or under-correction and unreliable interpretation of the image data. Where manual compilation of images is required or desired, the time and labor costs are considerable.
Thus, there is a need for an orthoimage rendering system that provides efficient and versatile imaging of extremely large FOVs and associated datasets while maintaining image quality, accuracy, positional accuracy, and clarity. In addition, automated algorithms should be applied extensively at every stage of planning, collection, navigation, and processing.
Disclosure of Invention
The present invention relates to remote data collection and processing systems that utilize various sensors. The system may include a computer console unit that controls the operation of the vehicle and the system in real time. The system may also include a global positioning system linked to and in communication with the computer console. In addition, a camera and/or camera array assembly may be used to generate an image of the target viewed through an aperture. The camera array assembly is communicatively connected to the computer console. The camera array assembly has a mounting frame, and a first imaging sensor centrally coupled to the frame and having a first focal axis passing through the aperture. The camera array assembly also has a second imaging sensor coupled to the frame and offset from the first imaging sensor along an axis, the second imaging sensor having a second focal axis passing through the aperture and intersecting the first focal axis within an intersection area. The camera array assembly further has a third imaging sensor coupled to the frame and offset from the first imaging sensor along the axis, opposite the second imaging sensor, the third imaging sensor having a third focal axis passing through the aperture and intersecting the first focal axis within the intersection area. In this manner, any number of cameras from 1 to n can be used, where "n" can be any odd or even number.
The system may also include an Attitude Measurement Unit (AMU), such as an inertial, optical, or similar measurement unit, communicatively connected to the computer console and the camera array assembly. The AMU can determine the yaw, pitch, and/or roll of the aircraft at any instant in time, and successive DGPS positions can be used to measure the vehicle heading relative to true north. The AMU data are combined with the precise DGPS data to produce a robust, real-time AMU system. The system may also include a mosaic module housed within the computer console. The mosaic module includes a first component that performs initial processing on an input image. The mosaic module further includes a second component that determines the geographic boundaries of the input image, the second component cooperatively engaged with the first component. The mosaic module also includes a third component that renders the input image, with accurate geographic positioning, into the composite image. The third component cooperatively engages the first and second components. Also included in the mosaic module is a fourth component that equalizes the color of the input images rendered into the composite image. The fourth component may be cooperatively engaged with the first, second, and third components. Additionally, the mosaic module may include a fifth component that blends the boundaries between adjacent input images rendered into the composite image. The fifth component may cooperatively engage the first, second, third, and fourth components.
A sixth component, an optional forward-oblique and/or rear-oblique camera array system, may be implemented that collects oblique image data and merges that image data with attitude and position measurements to generate a digital elevation model using stereographic techniques. The generation of the digital elevation model may be performed in real time on board the aircraft, or post-processed at a later time. The sixth component works cooperatively with the other components. All components may be mounted to a rigid platform to provide co-registration of the sensor data. Vibration, turbulence, and other forces can act on the vehicle in ways that create errors in the alignment relationships between the sensors. Mounting each sensor to a common rigid platform can provide significant advantages over other systems that do not use such a co-registration architecture.
In addition, the present invention may employ a degree of lateral oversampling to improve output quality, and/or co-mounted, co-registered oversampling, to overcome physical pixel resolution limitations.
Drawings
For a better understanding of the present invention, and to show by way of example how the same may be carried into effect, reference will now be made to the detailed description of the invention along with the accompanying figures in which corresponding reference numerals in the different figures refer to corresponding parts and in which:
FIG. 1 illustrates a vehicle-based data collection and processing system of the present invention;
FIG. 1A illustrates a portion of the vehicle-based data collection and processing system of FIG. 1;
FIG. 1B illustrates a portion of the vehicle-based data collection and processing system of FIG. 1;
FIG. 2 illustrates the vehicle-based data collection and processing system of FIG. 1, while showing the camera array assembly of the present invention in greater detail;
FIG. 3 illustrates a camera array assembly in accordance with aspects of the present invention;
FIG. 4 illustrates one embodiment of an imaging mode retrieved with the camera array assembly of FIG. 1;
FIG. 5 depicts an imaging mode illustrating certain aspects of the present invention;
FIG. 6 illustrates image stripes in accordance with the present invention;
FIG. 7 illustrates another embodiment of an image stripe in accordance with the present invention;
FIG. 8 illustrates one embodiment of an imaging process in accordance with the present invention;
FIG. 9 illustrates how photographs taken with a camera array assembly can be aligned to obtain a single frame;
FIG. 10 is a block diagram of processing logic according to some embodiments of the invention;
FIG. 11 is an illustration of lateral oversampling from a vehicle top view, in accordance with some embodiments of the present invention;
FIG. 12 is an illustration of lateral oversampling from a vehicle top view, in accordance with some embodiments of the present invention;
FIG. 13 is an illustration of flight path oversampling from a vehicle overhead view, in accordance with some embodiments of the invention;
FIG. 14 is an illustration of flight path oversampling from a vehicle overhead view, in accordance with some embodiments of the invention;
FIG. 15 is an enlarged view from above of the vehicle, according to some embodiments of the invention;
FIG. 16 is a progressively enlarged illustration looking down from the vehicle, in accordance with some embodiments of the invention;
FIG. 17 is a progressively enlarged illustration looking down from the vehicle in accordance with some embodiments of the invention;
FIG. 18 is a schematic diagram of a system architecture according to some embodiments of the invention;
FIG. 19 is an illustration of lateral co-mounted, co-registered oversampling in side-by-side overlapping sub-pixel regions of a single camera array, looking down from a vehicle, in accordance with some embodiments of the invention;
FIG. 20 is an illustration of lateral co-mounted, co-registered oversampling in side-by-side overlapping sub-pixel regions of two overlapping camera arrays, looking down from a vehicle, in accordance with some embodiments of the present invention;
FIG. 21 is an illustration of forward and lateral co-registered oversampling in side-by-side overlapping sub-pixel regions of two stereo camera arrays, looking down from a vehicle, in accordance with some embodiments of the present invention.
Detailed Description
While the making and using of various embodiments of the present invention are discussed in detail below, it should be appreciated that the present invention provides many applicable inventive principles that can be embodied in a wide variety of specific contexts. The specific embodiments discussed herein are merely illustrative of specific ways to make and use the invention, and do not delimit the scope of the present invention.
A vehicle-based data collection and processing system 100 of the present invention is shown in fig. 1, 1A and 1B. Additional aspects and embodiments of the invention are shown in fig. 2 and 18. The system 100 includes one or more computer consoles 102. The computer console includes one or more computers 104 for controlling vehicle and system operations. Examples of computer console functions include controlling the digital color sensor systems that may be associated with the data collection and processing system, providing display data to the pilot, coordinating satellite-generated GPS Pulse Per Second (PPS) event triggers (which may be more than 20 pulses per second), data logging, sensor control and adjustment, checking and alarming for error events, recording and indexing photographs, storing and processing data, flight planning capability that automates vehicle navigation, and real-time display of data and related information. The communication interface between the control computer console and the vehicle autopilot control provides the ability to actually control the flight path of the vehicle in real time. This results in more precise control of the vehicle's path than is possible with manual control. All of these functions can be accomplished by using various computer programs that are synchronized with the GPS PPS signals and that take into account the various electrical latencies of the measurement devices. In one embodiment, the computer is embedded within the sensor.
One or more differential global positioning systems 106 are incorporated into the system 100. The global positioning systems 106 are used to navigate and determine precise flight paths during vehicle and system operation. To accomplish this, the global positioning systems 106 are communicatively linked to the computer console 102 so that information from the global positioning systems 106 can be acquired and processed without interrupting flight. One or more GPS units may be located at known survey points to provide a record of each sub-second's GPS satellite-based errors so that the accuracy of the system 100 can be corrected after the fact (backwards). Alternatively, GPS and/or ground-based positioning services may be used that eliminate the need for ground-based control points altogether. This technique results in greatly improved, sub-second-by-sub-second positional accuracy of the data capture vehicle.
One or more AMUs 108, which provide real-time yaw, pitch, and roll information used to accurately determine the attitude of the vehicle at the instant of data capture, are also communicatively linked to the computer console 102. Current attitude measurement units (AMUs) (e.g., the Applanix POS AV) utilize three high-performance fiber-optic gyroscopes, one each for yaw, pitch, and roll measurement. AMUs from other manufacturers, and AMUs using other inertial measurement devices, may also be used. Additionally, an AMU may be used to determine the instantaneous attitude of the vehicle and to make the system more tolerant of statistical errors in AMU readings. One or more multi-frequency DGPS receivers 110 may be connected to the AMU. The multi-frequency DGPS receivers 110 can be combined with the yaw, pitch, and roll attitude data of the AMU to more accurately determine the position of the remote sensor platform in three-dimensional space. In addition, the direction of true north can be determined from vectors created from successive DGPS positions recorded in synchronization with the GPS PPS signal.
One or more camera array assemblies 112 for generating images of targets viewed through the aperture may also be communicatively coupled to the one or more computer consoles 102. The camera array assembly 112, which will be described in greater detail below, provides a high resolution, high precision progressive scan or line scan, color digital photography capability to the data collection and processing system.
The system may also include a DC power and conditioning device 114 to condition DC power and to convert DC power to AC power for the system. The system may also include a navigation display 116 that graphically presents the position of the vehicle and the flight plan for use by the pilot of the vehicle (on board or remote), enabling precise flight paths in both the horizontal and vertical planes. The system may also include an EMU module 118, comprising LIDAR, SAR, or a forward- and rear-oblique camera array, for capturing three-dimensional elevation/geomorphic data. The EMU module 118 may include a laser unit 120, an EMU control unit 122, and an EMU control computer 124. Temperature-controlling devices, such as solid-state cooling modules, may also be deployed as needed to provide the appropriate thermal environment for the system.
The system may also include a mosaic module (not shown) housed within the computer console 102. The mosaic module, described in more detail below, provides the system the ability to gather the data acquired using the global positioning system 106, the AMU 108, and the camera system 112 and process that data into usable orthographic maps.
The system 100 may also include self-locking track technology that provides the ability to micro-correct the positional accuracy of adjacent tracks, achieving an accuracy that exceeds the native accuracy of the AMU and DGPS sensors alone.
A complete flight planning methodology is used to micro-plan all aspects of a mission. The inputs are the various mission parameters (latitude/longitude, resolution, color, accuracy, etc.) and the outputs are detailed online digital maps and data files that are stored on board the data collection vehicle and used for real-time navigation and alarms. The ability to connect the flight planning data directly to the autopilot is an additional integration capability. A computer program may be used that automatically controls the flight path, attitude adjustments, graphical display, and moving maps of the flight path, checks for alarm conditions and corrective actions, notifies the pilot and/or crew of the status of the overall system, and provides fail-safe operations and controls. Safe operating parameters may be continuously monitored and reported. While current systems use crew members, the system may be designed to be equally well suited for use in an unmanned vehicle.
Figure 2 shows another depiction of the present invention. In FIG. 2, the camera array assembly 112 is shown in more detail. As shown, the camera array assembly 112 allows images to be acquired from rear-oblique, forward-oblique, and nadir positions. Fig. 3 depicts a camera array assembly of the present invention in more detail. Fig. 3 shows a camera array assembly 300 airborne above a target 302 (e.g., terrain). For illustrative purposes, the relative sizes of assembly 300, and the relative distance between assembly 300 and terrain 302, are not depicted to scale in fig. 3. The camera array assembly 300 includes a housing 304 within which imaging sensors 306, 308, 310, 312, and 314 are disposed along a concave curvilinear axis 316. The radius of curvature of axis 316 may vary widely or slightly, providing the ability to effect very subtle or very pronounced degrees of concavity in axis 316. Alternatively, axis 316 may be completely linear, having no curvature at all. The imaging sensors 306, 308, 310, 312, and 314 are coupled to the housing 304, either directly or indirectly, via couplings 318. Coupling 318 may comprise any number of fixed or dynamic, permanent or temporary connection devices. For example, coupling 318 may comprise a simple weld, a removable clamp, or an electromechanically controlled universal joint.
Additionally, the system 100 may have a real-time, on-board navigation system to provide a visual biofeedback display to the vehicle pilot, or a remote display in the case of operations in an unmanned vehicle. The pilot can adjust the position of the vehicle in real time to provide a more accurate flight path. The pilot may be located on board the vehicle or at a remote location, using the flight display to control the vehicle via a communication link.
The system 100 may also employ a highly fault-tolerant, software-interleaved disk storage methodology that allows one or two disk drives to fail and still not lose the target data stored on the drives. This software-interleaved disk storage methodology provides superior fault tolerance and portability compared with hardware methodologies such as RAID-5.
The system 100 may also incorporate a methodology that allows a short calibration step just prior to mission data capture. This calibration methodology adjusts the camera settings, principally the exposure time, based on sampling the ambient light intensity and setting near-optimal values just before the region of interest is reached. A moving average algorithm is then used to make second-by-second camera adjustments in order to deliver improved, consistent photographic results. This improves the color processing of the orthographic maps. Additionally, the calibration may be used to check or determine the precise spatial position of each sensor device (camera, DGPS, AMU, EMU, etc.). In this way, changes in the spatial position of these devices can be accounted for, maintaining the accuracy metrics of the overall system.
Additionally, the system 100 may incorporate a methodology that allows the precise position and attitude of each sensor device (camera, DGPS, AMU, EMU, etc.) on the vehicle to be calibrated by flying over an area containing multiple known, visible, and highly accurate geographic positions. A program takes this data as input and outputs micro-positional data that are then used to precisely process the orthographic maps.
As shown in fig. 3, the housing 304 comprises a simple enclosure within which imaging sensors 306, 308, 310, 312, and 314 are disposed. Although fig. 3 depicts a 5-camera array, the system works equally well with any number of camera sensors, from 1 upward. Sensors 306-314 are coupled, via couplings 318, either collectively to a single transverse cross member, or individually to lateral cross members disposed between opposing side walls of the housing 304. In an alternative embodiment, the housing 304 may itself comprise only a supporting cross member of concave curvature to which the imaging sensors 306-314 are coupled via members 318. In other embodiments, the housing 304 may comprise a hybrid combination of enclosure and supporting cross members. The housing 304 also contains an aperture 320 formed in its surface, between the imaging sensors and the target 302. Depending on the particular type of host aircraft, the aperture 320 may comprise only a void, or it may comprise a protective screen or window to maintain environmental integrity within the housing 304. In the case where a protective transparent plate is used for any sensor, the plate may be treated with special coatings to improve the quality of the sensor data. Optionally, the aperture 320 may contain a lens or other optical device to enhance or alter the nature of the images recorded by the sensors. The aperture 320 is formed with a size and shape sufficient to provide the imaging sensors 306-314 with proper lines of sight to a target area 322 on the terrain 302.
The imaging sensors 306-314 are disposed within or along the housing 304 such that the focal axes of all the sensors converge and intersect each other within an intersection area bounded by the aperture 320. Depending on the type of image data being collected, the specific imaging sensors used, and other optics or equipment employed, it may be necessary or desirable to offset the point of intersection or convergence above or below the aperture 320. The imaging sensors 306-314 are separated from each other at angular intervals. The exact angle of displacement between the imaging sensors may vary widely depending on the number of imaging sensors utilized and the type of imaging data being collected. The angular displacement between the imaging sensors may also be unequal, if required, so as to provide a desired image offset or alignment. Depending on the number of imaging sensors utilized, and the particular configuration of the array, the focal axes of all the imaging sensors may intersect at exactly the same point, or may intersect at a plurality of points, all in close proximity to each other and within the intersection area defined by the aperture 320.
As shown in fig. 3, the imaging sensor 310 is centrally disposed within the housing 304 along the axis 316. The imaging sensor 310 has a focal axis 324 directed orthogonally from the housing 304 so as to align the sensor's line of sight with an image area 326 of the region 322. The imaging sensor 308 is disposed within the housing 304 along the axis 316, adjacent to the imaging sensor 310. The imaging sensor 308 is aligned such that its line of sight coincides with an image area 328 of the region 322 and such that its focal axis 330 converges with and intersects the axis 324 within the area bounded by the aperture 320. The imaging sensor 312 is disposed within the housing 304 adjacent to the imaging sensor 310, on the opposite side of the axis 316 from the imaging sensor 308. The imaging sensor 312 is aligned such that its line of sight coincides with an image area 332 of the region 322 and such that its focal axis 334 converges with and intersects the axes 324 and 330 within the area bounded by the aperture 320. The imaging sensor 306 is disposed within the housing 304 along the axis 316, adjacent to the sensor 308. The imaging sensor 306 is aligned such that its line of sight coincides with an image area 336 of the region 322 and such that its focal axis 338 converges with and intersects the other focal axes within the area bounded by the aperture 320. The imaging sensor 314 is disposed within the housing 304 adjacent to the sensor 312, on the opposite side of the axis 316 from the sensor 306. The imaging sensor 314 is aligned such that its line of sight coincides with an image area 340 of the region 322 and such that its focal axis 344 converges with and intersects the other focal axes within the area bounded by the aperture 320.
The imaging sensors 306-314 may comprise a number of digital imaging devices including, for example, individual area scan cameras, line scan cameras, infrared sensors, hyperspectral and/or seismic sensors. Each sensor may comprise an individual imaging device, or may itself comprise an imaging array. The imaging sensors 306-314 may all be of a homogeneous nature, or may comprise a combination of varied imaging devices. For ease of reference, the imaging sensors 306-314 are hereafter referred to as cameras 306-314, respectively.
In large film or digital cameras, lens distortion is generally a source of imaging problems. Each individual lens must be carefully calibrated to determine an accurate distortion factor. In one embodiment of the present invention, a small-size digital camera with a lens angular width of 17 ° is utilized. This effectively and affordably mitigates perceptible distortion.
The cameras 306-314 are disposed within the housing 304 along the axis 316 such that each camera's focal axis converges upon the aperture 320, crosses the focal axis 324, and aligns its field of view with a target area opposite its respective position in the array, resulting in a "squint" retinal relationship between the cameras and the imaging target. The camera array assembly 300 is configured such that the adjoining borders of the image areas 326, 328, 332, 336, and 340 overlap slightly.
If the couplings 318 are permanent and fixed (e.g., welded), then the spatial relationship between the aperture 320, the cameras, and their lines of sight remains fixed, as does the spatial relationship between the image areas 326, 328, 332, 336, and 340. Such a configuration may be desirable in, for example, satellite surveillance applications where the camera array assembly 300 will remain at an essentially fixed distance from the region 322. The cameras are positioned and aligned such that the areas 326, 328, 332, 336, and 340 provide full imaging coverage of the region 322. If the couplings 318 are temporary or adjustable, however, it may be desirable to selectively adjust, either manually or by remote automation, the position or alignment of the cameras so as to shift, narrow, or widen the areas 326, 328, 332, 336, and 340, thereby enhancing or altering the quality of the images collected by the camera array assembly 300.
In one embodiment, a plurality, i.e., at least two, of rigid mounting units are affixed to the same rigid mounting plate. A mounting unit is any rigid structure to which at least one imaging sensor can be affixed. The mounting unit is preferably a housing that encloses the imaging sensor, but may be any rigid structure, including a strut, tripod, or the like. For the purposes of this disclosure, an imaging sensor means any device capable of receiving and processing active or passive radiant energy, i.e., light, sound, heat, gravity, and the like, from a target area. In particular, imaging sensors may include any number of digital cameras, including those that utilize a red-blue-green filter, a push broom filter, or a hyperspectral filter, as well as LIDAR sensors, infrared sensors, heat-sensing sensors, gravimeters, and the like. Imaging sensors do not include attitude-measuring sensors, such as gyroscopes, GPS devices, and the like, that are used to position the vehicle using satellite and/or inertial data. Preferably, the plurality of sensors are different sensors.
In embodiments where the imaging sensor is a camera, LIDAR or similar imaging sensor, the mounting unit preferably has an aperture through which light and/or energy can pass. The mounting plate is preferably planar, but may be non-planar. In embodiments where the imaging sensor is a camera, LIDAR or similar imaging sensor, the mounting plate preferably has an aperture that is aligned with an aperture of the mounting unit through which light and/or energy may pass.
Rigid structures are those that flex less than about 0.01°, preferably less than about 0.001°, and most preferably less than about 0.0001°, in use. Preferably, a rigid structure is one that flexes less than about 0.01°, preferably less than about 0.001°, and most preferably less than about 0.0001°, while secured to an aircraft during normal, i.e., non-turbulent, flight. Objects are rigidly affixed to each other if, during normal operation, they flex relative to each other by less than about 0.01°, preferably less than about 0.001°, and more preferably less than about 0.0001°.
The camera 310 is designated as the primary camera. The image plane 326 of the camera 310 serves as a reference plane. The orientations of the other cameras 306, 308, 312, and 314 are measured relative to the reference plane. The relative orientation of each camera is measured using the yaw, pitch and roll angles required to rotate the image plane of the camera parallel to the reference plane. The sequence of rotations is preferably yaw, pitch and roll.
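As a rough illustration of the relative-orientation measurement just described, the sketch below composes yaw, pitch, and roll rotations in that order to produce the rotation that would bring a camera's image plane parallel to the reference plane. The axis conventions and function name are illustrative assumptions, not part of the disclosure; the actual conventions are fixed by the AMU and the calibration procedure.

```python
import numpy as np

def camera_to_reference_rotation(yaw, pitch, roll):
    """Compose yaw, then pitch, then roll (radians) into a single rotation
    matrix, matching the preferred rotation sequence described above.
    Axis conventions here are illustrative assumptions."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0.0], [sy, cy, 0.0], [0.0, 0.0, 1.0]])   # yaw about z
    Ry = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])   # pitch about y
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cr, -sr], [0.0, sr, cr]])   # roll about x
    return Rx @ Ry @ Rz  # yaw applied first, then pitch, then roll
```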
The imaging sensors fixed to the mounting unit may not be aligned in the same plane. The mounting of the imaging sensor may instead be angularly offset relative to the mounting angle of a first sensor fixed to the first mounting unit, preferably a standard nadir camera of the first mounting unit. Thus, the imaging sensors may be co-registered to calibrate the physical mounting angle offset of each imaging sensor relative to each other. In one embodiment, a plurality, i.e. at least two, of the rigid mounting units are secured to the same rigid mounting plate and are co-registered. In one embodiment, the cameras 306 and 314 are secured to the rigid mounting unit and are co-registered. In this embodiment, the geometric center point of the AMU, preferably a gyroscope, is determined using the GPS and inertial data. The physical position of a first sensor, preferably a standard nadir camera, affixed to the first mounting unit, is calculated relative to a reference point, preferably the geometric center point of the AMU. Likewise, the physical positions of all remaining sensors within all mounting units are calculated either directly, or indirectly with respect to the same datum.
The boresight angle of a sensor is defined as the angle from the geometric center of that sensor to the reference plane. Preferably, the reference plane is orthogonal to the target area. The boresight angle of the first sensor can be determined using a ground target point. The boresight angles of subsequent sensors are preferably calculated with reference to the boresight angle of the first sensor. The sensors are preferably calibrated using known ground targets, which are preferably photo-identifiable, or alternatively using self-locking tracks or any other method as disclosed in U.S. patent application publication No. 2004/0054488 A1, now U.S. patent No. 7,212,938 B2, the disclosure of which is hereby incorporated by reference in its entirety.
The imaging sensor in the second mounting unit may be any imaging sensor, and is preferably a LIDAR. Alternatively, the second imaging sensor may be a digital camera or an array of digital cameras. In one embodiment, the boresight angle of the sensor affixed to the second mounting unit is calculated with reference to the boresight angle of the first sensor. The physical offset of the imaging sensor in the second mounting unit may be calibrated with reference to the boresight angle of the first sensor in the first mounting unit.
In this manner, all sensors are calibrated at substantially the same time (epoch) under substantially the same atmospheric conditions using the same GPS signal, the same ground target. This significantly reduces the composite error that is realized when each sensor is calibrated individually, using different GPS signals, against different ground targets, and under different atmospheric conditions.
Referring now to FIG. 4, images of areas 336, 328, 326, 332, and 340, taken with cameras 306-314 respectively, are illustrated from overhead. Again, because of the "squint" arrangement, the image of area 336 is taken with camera 306, the image of area 340 is taken with camera 314, and so on. In one embodiment of the present invention, the images other than the image taken with center camera 310 take on a trapezoidal shape after perspective transformation. The cameras 306-314 are arrayed along an axis 316 that, in most applications, is pointed vertically downward. In an alternative embodiment, a second array of cameras, configured similarly to the array of cameras 306-314, is aligned with respect to the first camera array so as to have an oblique field of view providing a "heads-up" perspective. The tilt angle of the heads-up camera array assembly relative to horizontal may vary depending on mission objectives and parameters, but angles of 25-45 degrees are typical. The present invention similarly contemplates other alternative embodiments in which the mounting of the camera arrays is varied. In all such embodiments, the relative positions and attitudes of the cameras are precisely measured and calibrated to facilitate image processing in accordance with the present invention.
In one embodiment of the invention, an external mechanism (e.g., a GPS timing signal) is used to trigger the cameras simultaneously, thereby capturing an array of input images. The mosaic module then renders the individual input images from such an array into an orthorectified composite image (or "mosaic") without any visible seams between adjacent images. The mosaic module performs a set of tasks including: determining the geographic boundaries and dimensions of each input image; projecting each input image onto the mosaic with accurate geographic positioning; equalizing the color of the images across the mosaic; and blending adjacent input images at their shared seams. The exact order in which these tasks are performed may vary depending on the size and nature of the input image data. In some embodiments, the mosaic module performs only a single transformation on an original input image during mosaicing. That transformation can be represented by a 4 x 4 matrix. By combining multiple transformation matrices into a single matrix, processing time is reduced and the sharpness of the original input image is preserved.
During mapping of the input images to the mosaic, particularly when mosaicing is performed at higher resolutions, certain pixels in the mosaic (i.e., output pixels) may not be mapped to by any pixels in the input images (i.e., input pixels). Warped lines can appear as artifacts in the mosaic. Some embodiments of the present invention overcome this problem with a supersampling system, in which each input pixel and each output pixel is further divided into an n x m grid of sub-pixels. The transformation is performed from sub-pixel to sub-pixel. The final value of an output pixel is the average value of its sub-pixels for which there is a corresponding input sub-pixel. Larger n and m values produce mosaics of higher resolution, but require additional processing time.
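The following sketch illustrates the two ideas above: collapsing a chain of 4 x 4 transformation matrices into a single matrix, and supersampling each output pixel over an n x m sub-pixel grid. The function names, the nearest-neighbor lookup, and the treatment of the transform as a mapping from mosaic coordinates back to source-image coordinates are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

def combine_transforms(matrices):
    """Collapse a chain of 4x4 homogeneous transforms into one matrix, so the
    original image is resampled only once and its sharpness is preserved."""
    combined = np.eye(4)
    for m in matrices:
        combined = m @ combined
    return combined

def supersampled_output_pixel(src, out_to_src, out_x, out_y, n=4, m=4):
    """Value of one mosaic (output) pixel: divide it into an n x m grid of
    sub-pixels, map each sub-pixel center back into the source image with the
    combined transform, and average the source values that land inside it."""
    h, w = src.shape[:2]
    samples = []
    for i in range(n):
        for j in range(m):
            ox = out_x + (j + 0.5) / m          # sub-pixel center, output coords
            oy = out_y + (i + 0.5) / n
            sx, sy, _, s = out_to_src @ np.array([ox, oy, 0.0, 1.0])
            sx, sy = sx / s, sy / s
            if 0 <= int(sy) < h and 0 <= int(sx) < w:
                samples.append(src[int(sy), int(sx)])
    return np.mean(samples, axis=0) if samples else None
```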
In processing image data, the mosaic module may utilize the following information: the spatial location (e.g., x, y, z coordinates) of the focal point of each camera at the time of capturing the input image; the pose (i.e., yaw, pitch, roll) of the image plane of each camera relative to the ground plane of the target area at the time of capturing the input image; the field of view of each camera (i.e., along and across tracks); and a Digital Terrain Model (DTM) of the area. The gestures may be provided by an AMU associated with the system. From the information obtained with the LIDAR module 118, a Digital Terrain Model (DTM) or a Digital Surface Model (DSM) may be created. LIDAR, like the more common radar, may be considered a LIDAR. In radar, radio waves are transmitted into the atmosphere, which scatters some energy back to the radar's receiver. LIDAR also transmits and receives electromagnetic radiation, but at higher frequencies because it operates in the ultraviolet, visible, and infrared regions of the electromagnetic spectrum. In operation, the LIDAR emits light to a target area. The emitted light interacts with and is thereby altered by the target area. Some of the light is reflected/scattered back to the LIDAR instrument where it can be analyzed. The change in the properties of the light enables some properties of the target area to be determined. The time it takes for the light to travel to the target area and then return to the LIDAR device is used to determine the distance to the target.
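The time-of-flight ranging just described reduces to a single expression: the round-trip travel time multiplied by the speed of light, divided by two. A minimal sketch with an illustrative numeric check:

```python
def lidar_range_m(round_trip_time_s, c=299_792_458.0):
    """One-way distance to the target from the pulse round-trip time."""
    return c * round_trip_time_s / 2.0

# A round trip of about 6.67 microseconds corresponds to a target roughly 1 km away.
print(lidar_range_m(6.67e-6))  # ~1000 m
```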
DTM and DSM data sets can also be captured from the camera array assembly. Conventional means of obtaining elevation data, such as stereo photography techniques, may also be used.
There are currently three basic types of LIDAR: range finders, differential absorption LIDAR (DIAL), and Doppler LIDAR. Range finder LIDAR is the simplest type and is used to measure the distance from the LIDAR device to a solid or hard target. DIAL LIDAR is used to measure chemical concentrations (e.g., ozone, water vapor, pollutants) in the atmosphere. A DIAL LIDAR uses two different laser wavelengths, selected so that one wavelength is absorbed by the molecule of interest while the other is not. The difference in intensity of the two return signals can be used to deduce the concentration of the molecule being investigated. Doppler LIDAR is used to measure the velocity of a target. When the light transmitted from the LIDAR hits a target moving toward or away from the LIDAR, the wavelength of the light reflected/scattered off the target is changed slightly. This is known as the Doppler shift, hence Doppler LIDAR. If the target is moving away from the LIDAR, the return light will have a longer wavelength (sometimes referred to as a red shift); if the target is moving toward the LIDAR, the return light will have a shorter wavelength (a blue shift). The target can be either a hard target or an atmospheric target (e.g., microscopic dust and aerosol particles carried by the wind).
The focal point of each camera is preferably used as the center of the perspective transformation. Its position in space may be determined, for example, by a multi-frequency carrier-phase post-processed GPS system mounted on the host aircraft. The three-dimensional offsets of each camera's focal point are preferably carefully measured relative to the center of the GPS antenna. These offsets may be combined with the position of the GPS antenna and the orientation of the host aircraft to determine the exact position of the camera's focal point. The position of the GPS antenna is preferably determined by post-processing the collected GPS data against similar ground-based GPS antennas deployed at precisely surveyed points.
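A minimal sketch of the offset combination described above: the measured antenna-to-focal-point offset, expressed in the vehicle body frame, is rotated by the aircraft's orientation and added to the GPS antenna position. The frame names and function signature are assumptions for illustration only.

```python
import numpy as np

def camera_focal_point(antenna_position, lever_arm_body, body_to_world):
    """Focal-point position = GPS antenna position + (vehicle attitude rotation
    applied to the carefully measured antenna-to-focal-point offset)."""
    return np.asarray(antenna_position) + body_to_world @ np.asarray(lever_arm_body)

# Example: a 3x3 attitude rotation from the AMU and a measured offset in meters.
R = np.eye(3)                        # level attitude, for illustration only
offset = np.array([1.20, -0.35, 0.90])
print(camera_focal_point([1000.0, 2000.0, 500.0], offset, R))
```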
One or more AMUs (e.g., the Applanix POS AV) are preferably mounted on the vehicle to determine attitude. The attitude of the AMU reference plane relative to the ground plane of the target area is preferably measured and recorded at short intervals, with an accuracy better than 0.01°. The attitude of the AMU reference plane may be defined as the series of rotations that can be performed about the axes of this plane to make it parallel to the ground plane. The term "alignment" may also be used to describe this operation.
It is desirable to accurately calibrate the attitude of the center camera 310 (i.e., its image plane) relative to the AMU. The attitude of each of the other cameras relative to the center camera 310 is also preferably carefully calibrated. This dependent calibration is more efficient than calibrating each camera directly. When the camera array assembly 300 is remounted, only the center camera 310 needs to be recalibrated. In effect, a series of two transformations is applied to an input image from the center camera 310. First, the center camera's image plane is aligned to the AMU plane. Then, the AMU plane is aligned to the ground plane. These transformations may, however, be combined into a single operation by multiplying their respective transformation matrices. For images from each of the other cameras, an additional transformation is first performed to align the image with the center camera's image plane.
The position of the focal point of the center camera 310 may be determined as described above. The x and y components of this position preferably determine the position of the mosaic's nadir point 400 on the ground. The field of view (FOV) of each camera is known, so the dimensions of each input image can be determined from the z component of that camera's focal point. The average elevation of the ground is preferably determined by averaging the elevations of the points in the DTM of the area, and each input image is then projected onto an imaginary horizontal plane at this elevation. Relief displacement is then applied, preferably using the DTM of the area. DTMs can be obtained from many sources, including: the USGS 30-meter or 10-meter DTMs available for most of the United States; commercial DTMs; or DTMs obtained by a LIDAR or SAR EMU device mounted on the host aircraft that captures data concurrently with the cameras.
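For the nadir-looking case, the footprint of an input image on the imaginary horizontal plane follows directly from the camera's height above that plane and its along- and across-track fields of view. The sketch below is a simplified illustration of that geometry under those stated assumptions and is not the patent's projection procedure; oblique cameras and relief displacement require the fuller treatment described above.

```python
import numpy as np

def nadir_footprint(height_above_plane_m, fov_along_deg, fov_across_deg):
    """Ground footprint (along-track, across-track, in meters) of a nadir image
    projected onto a horizontal plane at the area's average elevation."""
    along = 2.0 * height_above_plane_m * np.tan(np.radians(fov_along_deg) / 2.0)
    across = 2.0 * height_above_plane_m * np.tan(np.radians(fov_across_deg) / 2.0)
    return along, across

print(nadir_footprint(2000.0, 17.0, 17.0))  # roughly 598 m x 598 m at 2 km altitude
```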
Besides being geographically correctly placed, the resulting composite image also needs to have radiometric consistency throughout and no visible seams at the joints between two adjacent images. The present invention provides a number of techniques for achieving this goal.
One characteristic of a conventional camera is the exposure time (i.e., the time the shutter is open to collect light onto the image plane). The longer the exposure time, the brighter the resulting image. The exposure time must be adapted to changes in ambient light caused by conditions such as cloud coverage and the angle and position of the sun relative to the camera. The optimal exposure time may also depend on the camera's orientation with respect to the light source (e.g., a camera facing a sunlit object typically receives more ambient light than one facing a shadowed object). The exposure time is adjusted so that the average intensity of an image is kept within a certain desired range. For example, in a 24-bit color image, each of the red, green, and blue components can have an intensity value from 0 to 255. In most cases, however, it is desirable to keep the average intensity near a mean value (i.e., 127).
In the present invention, the exposure control module controls the exposure time of each camera or imaging sensor. It examines each input image and calculates the average image intensity. Based on the moving average (i.e., the average intensity of at least X images), the exposure control module determines whether to increase or decrease the exposure time. The exposure control module may use a longer moving average to achieve a slower response to changes in lighting conditions and as a result is less sensitive to abnormally dark or bright images (e.g., asphalt roads or water). The exposure control module controls the exposure time of each camera individually.
In systems where cameras are mounted without forward-motion compensation mechanisms, there must be a maximum limit on the exposure time. Setting the exposure time to a value larger than this maximum may cause motion-induced blurring. For example, assume the cameras are mounted on an airplane traveling at 170 miles per hour (or about 3 inches per millisecond), and assume the desired pixel resolution is 6 inches. The forward motion during image capture should be limited to half of a pixel size, in this case equal to 3 inches. Thus, in this example, the maximum exposure time is 1 millisecond.
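The worked example above can be expressed as a short calculation; the numbers below reproduce the 170 mph, 6-inch case, and the half-pixel motion budget is the assumption stated in the text.

```python
def max_exposure_ms(ground_speed_mph, pixel_size_in, motion_budget=0.5):
    """Longest exposure (in milliseconds) that keeps forward image motion below
    the given fraction of one pixel."""
    inches_per_ms = ground_speed_mph * 5280 * 12 / 3_600_000  # mph -> inches per ms
    return pixel_size_in * motion_budget / inches_per_ms

print(max_exposure_ms(170, 6))  # ~1.0 ms, as in the example above
```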
When the imaging quality is being controlled, it is useful to be able to determine whether a change in light intensity is caused by a change in ambient light or by the presence of unusually bright or dark objects (e.g., reflecting bodies of water, metal roofs, asphalt, etc.). Certain applications of the invention involve aerial photography or surveillance. It is observed that aerial images of the ground typically contain crops and vegetation, which have more consistent reflectivity than water bodies or man-made structures such as roads and buildings. Of course, images of crops and vegetation are usually green-dominant (i.e., the green component is the largest of the red, green, and blue values). Thus, by focusing on green-dominant pixels, the intensity evaluation can be made more accurate.
The exposure control module calculates the average intensity of an image by selecting only the green-dominant pixels. For example, if an image has 1 million pixels and 300,000 of them are green-dominant, only those 300,000 green-dominant pixels are included in the calculation of the average intensity. This results in an imaging process that is less susceptible to biasing caused by man-made structures and bodies of water, whose pixels are usually not green-dominant. As previously noted, it is desirable to maintain an intensity value of about 127. When the intensity value is above 127 (i.e., overexposed), the exposure time is reduced so that less light is captured. Similarly, when the intensity value is below 127 (i.e., underexposed), the exposure time is increased so that more light is captured. For example, consider a system flying over a target terrain area having many white roofs, whose intensities are very high. The average intensity of the captured images would tend to be high. In many conventional systems, the exposure time would be reduced in order to compensate. In such an example, however, reducing the exposure time is not appropriate, because the bright roofs are biasing the average intensity of the images. Reducing the exposure time would result in images in which the ground is darker than it should be. In contrast, if only the green-dominant pixels are processed in accordance with the present invention, the pixels representing the overly bright roofs do not bias the average intensity, and the exposure time is not altered.
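A minimal sketch of the green-dominant exposure control described above. The moving-average window, the proportional adjustment step, and the class interface are illustrative assumptions; only the selection of green-dominant pixels and the target mean of 127 come from the text.

```python
import numpy as np
from collections import deque

class ExposureController:
    """Average only green-dominant pixels, smooth over the last few images,
    and nudge the exposure time toward a target mean intensity of 127."""
    def __init__(self, window=8, target=127.0, gain=0.05):
        self.history = deque(maxlen=window)   # moving average of recent frames
        self.target = target
        self.gain = gain

    def next_exposure(self, image_rgb, exposure_ms):
        r, g, b = image_rgb[..., 0], image_rgb[..., 1], image_rgb[..., 2]
        green_dominant = (g >= r) & (g >= b)
        if green_dominant.any():
            self.history.append(float(image_rgb[green_dominant].mean()))
        avg = float(np.mean(self.history)) if self.history else self.target
        # over-exposed (avg > target) -> shorten exposure; under-exposed -> lengthen
        return exposure_ms * (1.0 + self.gain * (self.target - avg) / self.target)
```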
The exposure control module thus reduces the intensity differences between input images. Nonetheless, further processing is provided to enhance tonal balance. There are many factors (e.g., lens physics, atmospheric conditions, the spatial/positional relationships of the imaging devices) that cause light to be received unevenly across the image plane. More light is received at the center of a camera or sensor than at the edges.
The mosaic module of the present invention addresses this problem with an anti-vignetting function, which is now illustrated with reference to fig. 5. A number of focal columns 500, 502, 504, 506, and 508 converge from the image plane 509, cross through the focal point 510, and extend to the imaging target area 512 (e.g., terrain). The columns 500-508 may comprise individual resolution columns of a single camera or sensor, or may represent the focal axes of a number of independent cameras or sensors. For reference purposes, the column 504 serves as the axis, and the point 513 at which the column 504 intersects the image plane 509 serves as the principal point. The exposure control module applies an anti-vignetting function, multiplying the original intensity of an input pixel by a column-dependent anti-vignetting factor. Because the receiving surface is represented as a plane with a coordinate system, each column will have a number of resolution rows (not shown). For a pixel p at column x and row y, this relationship can be expressed as follows:
<adjusted intensity> = <initial intensity> × f(x);
where f(x) is a function of the form:
f(x) = cos(off-axis angle)^4.
The off-axis angle 514 is zero for the center column 504, larger for columns 502 and 506, and larger still for columns 500 and 508. The overall field of view angle 516 (the FOVx angle) is depicted between columns 504 and 508.
The function f(x) can be approximated by a number of line segments between columns. For a point on a line segment between any given columns c1 and c2, an adjustment factor is computed as follows:
<adjustment factor at column c> = f(c1) + [f(c2) - f(c1)] × (c - c1)/(c2 - c1);
where f(c1) and f(c2) are the values of the f function at the off-axis angles of columns c1 and c2, respectively.
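A small sketch of the anti-vignetting adjustment as stated above: the cosine-to-the-fourth factor is evaluated at a few anchor columns and interpolated linearly between them, then multiplied into the original intensities. The anchor columns, angles, and function names are illustrative assumptions.

```python
import numpy as np

def anti_vignetting_factors(columns, anchor_columns, anchor_angles_deg):
    """Piecewise-linear approximation of f(x) = cos(off-axis angle)^4 between
    anchor columns (anchor_columns must be in increasing order)."""
    f_anchor = np.cos(np.radians(anchor_angles_deg)) ** 4
    return np.interp(columns, anchor_columns, f_anchor)

def apply_anti_vignetting(row_intensities, anchor_columns, anchor_angles_deg):
    """adjusted intensity = initial intensity x f(x), per the formula above."""
    cols = np.arange(len(row_intensities))
    return row_intensities * anti_vignetting_factors(cols, anchor_columns, anchor_angles_deg)
```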
Each set of input images needs to be stitched into a mosaic image. Even though the exposure control module regulates the amount of light each camera or sensor receives, the resulting input images may still differ in intensity. The present invention provides an intensity equalization module that compares the overlapping areas between adjacent input images to further equalize their relative intensities. Because adjoining input images are taken simultaneously, the overlapping areas should, in theory, have identical intensities in both input images. However, due to various factors, the intensity values usually differ. Some factors that can cause this intensity difference include, for example, an abnormally bright or dark object present in the field of view of only one particular camera, biasing that camera's exposure control, or the cameras' boresight angles being different (i.e., cameras that are more slanted receive less light than those that are more vertical).
To equalize two adjacent images, one image is selected as the reference image and the other as the secondary image. A correlation vector (FR, FG, FB) is determined using, for example, the following process. Let V be a 3 x 1 vector containing the R, G, and B values of a pixel.
The correlation matrix C can then be derived as a 3 x 3 diagonal matrix, C = diag(FR, FG, FB);
where FR = AvgIr / AvgIn; AvgIr = the average red intensity of the overlapping region in the reference image; AvgIn = the average red intensity of the overlapping region in the new image; and FG and FB are derived similarly.
The correlation matrix scales the pixel values of the secondary image so that the average intensity of the overlapping area of the secondary image becomes equal to the average intensity of the overlapping area of the reference image. The secondary image can be equalized to the reference image by multiplying its pixel values by the correlation matrix.
Thus, in one embodiment of the equalization process according to the present invention, the center image is considered the reference image. The reference image is first copied to the composite image (or mosaic). The reference image and an adjoining image (e.g., the image adjoining it to the left) are correlated to compute a Balanced Correlation Matrix (BCM). The BCM is then multiplied by vectors representing the pixels of the adjoining image so that the intensity of the overlapping area becomes identical in both images. One embodiment of this relationship can be expressed as:
Let I(center) = the average intensity of the overlapping area in the center image;
I(adjoining) = the average intensity of the overlapping area in the adjoining image; then
Equalization factor = I(center) / I(adjoining).
The equalization factor for each color channel (i.e., red, green, and blue) is computed separately. These three values form the BCM. The now-equalized adjoining image is copied to the mosaic. A smooth transition at the border of the copied image is provided by "feathering" with a mask. The mask has the same dimensions as the adjoining image and contains a number of elements. Each element in the mask indicates the weight of the corresponding adjoining-image pixel in the mosaic. The weight is zero for pixels at the boundary (i.e., the output value is taken from the reference image), and increases gradually in the direction of the adjoining image until it becomes 1 after the selected blend width has been reached. Beyond the blend area, the mosaic is determined entirely by the pixels of the adjoining image. Similarly, the overlaps between all other constituent input images are analyzed and processed to compute the correlation vectors and to equalize the intensities of the images.
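A minimal sketch of the Balanced Correlation Matrix and the feathering mask described above. The reshaping, the direction of the linear ramp, and the function names are illustrative assumptions; the per-channel ratio of overlap-area averages and the 0-to-1 ramp over the blend width follow the text.

```python
import numpy as np

def balanced_correlation_matrix(reference_overlap, adjoining_overlap):
    """Diagonal matrix of per-channel equalization factors:
    I(reference overlap) / I(adjoining overlap) for R, G, and B."""
    ref_avg = reference_overlap.reshape(-1, 3).mean(axis=0)
    adj_avg = adjoining_overlap.reshape(-1, 3).mean(axis=0)
    return np.diag(ref_avg / adj_avg)

def equalize(adjoining_image, bcm):
    """Multiply each pixel's (R, G, B) vector by the balancing matrix."""
    flat = adjoining_image.reshape(-1, 3) @ bcm.T
    return flat.reshape(adjoining_image.shape)

def feather_mask(height, width, blend_width):
    """Weights of 0 at the shared boundary (take the reference image), rising
    linearly to 1 across the blend width (take the adjoining image); the ramp
    is assumed here to run along the image columns."""
    ramp = np.clip(np.arange(width) / float(blend_width), 0.0, 1.0)
    return np.tile(ramp, (height, 1))
```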
The correlation matrix is determined using, for example, the following process with reference to fig. 6. Fig. 6 depicts a strip 600 formed in accordance with the present invention. Base mosaic 602 and new mosaic 604 added along path (or track) 606 overlap each other in area 608. Let V be the vector representing the R, G and B values of the pixel:
let h be the transition width of the region 608, y be the distance along the trajectory 606 from the boundary 610 of the overlap region to point A, the pixel value of which is denoted by V. Let C be the correlation matrix:
the equilibrium value of V (referred to as V') is:
v ═ y/h.I + (1-y/h) C ] × V, when 0< y < h;
v' = V, when y > = h;
wherein I is an identity matrix
Note that the "feathering" technique is also used in conjunction with the gradient to minimize seam visibility.
When mosaics are long, the intensity differences in the overlapping areas may vary from one end of the mosaic to the other. Computing a single correlation vector that avoids creating visible seams may not be possible. The mosaic may be divided into a number of segments corresponding to the positions of the original input images that make up the mosaic. The process described above is applied to each segment separately to provide better local color consistency.
Under this segmented algorithm, however, pixels at the border between two segments may create a vertical seam (assuming a north-south flight line). To avoid this problem, the equalization factor for each pixel in this region must be made to transition gradually from that of one segment to that of the other. This is now explained with reference to fig. 7.
Fig. 7 depicts a strip 700 formed in accordance with the present invention. Base mosaic 702 and new segment 704 overlap in region 706. Mosaic 702 and another new segment 708 overlap in region 710. Segments 704 and 708 overlap in region 712, and regions 706, 710, and 712 all overlap and coincide in region 714. For ease of illustration, point 716 serves as the origin of y-axis 718 and x-axis 720. Movement along the y-axis 718 represents movement along the trajectory of the imaging system. Point 716 is located on the lower left side of region 714.
According to the invention, the size of the strip is determined by the minimum and maximum x and y values of the constituent mosaics. The output strip is initialized to the background color. The first mosaic is transferred to the strip. The next mosaic (along the track) is then processed. The intensity values of the overlapping areas of the new mosaic and the first mosaic are correlated separately for each color channel. The new mosaic is divided into a number of segments corresponding to the initial input images that make up the mosaic. A mask matrix containing a number of mask elements is created for the new mosaic. Each mask element contains the correlation matrix for the corresponding pixel in the new mosaic. All elements in the mask are initialized to 1. The size of the mask may be limited to only the transition region of the new mosaic. A correlation matrix is calculated for the center segment. The mask area corresponding to the center segment is processed. The values of the elements at the edge of the overlap region are set to the correlation vector. Then, moving away from the first mosaic along the strip, the elements of the correlation matrix are increased or decreased (depending on whether they are less than or greater than 1) until they become 1 at a predetermined transition distance. The regions of the mask corresponding to the segments adjoining the center segment are then processed similarly. However, the region 714 formed by the overlap of the first mosaic with the center segment and an adjoining segment of the new image requires special handling. Because the correlation matrix of the adjoining segment may differ from that of the center segment, a seam can appear at the boundary of the two segments within the overlap region 714 with the first mosaic. The corner is therefore affected by the correlation matrices of both segments. For a mask cell A located a distance x from the border of the center segment and a distance y from the overlapping edge, the correlation matrix is a distance-weighted average of the matrices of the two segments, evaluated as follows:
for pixel a (x, y) in region 714, which is a distance x from the border of the center segment, its equalization factor is calculated in the form of a distance weighted value of the values calculated with the two segments;
V1 is an equalized RGB vector based on segment 704;
V2 is an equalized RGB vector based on segment 708;
V′ is the combined (final) equalized RGB vector;
V′ = ((d − x)/d)·V1 + (x/d)·V2;
Wherein
The x-axis is a straight line through the bottom of the overlap region;
the y-axis is the line passing through the left side of the overlap region between segments 704 and 708;
h is the transition width; and
d is the width of the overlap between segment 704 and segment 708.
Mask regions corresponding to other contiguous segments are similarly calculated.
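A minimal sketch of this distance-weighted treatment of region 714 follows; the vector shapes and parameter names are assumptions, and only the weighting rule V′ = ((d − x)/d)·V1 + (x/d)·V2 is taken from the text above.

```python
import numpy as np

def corner_equalized(V1, V2, x, d):
    """Distance-weighted combination of the two segment-based equalized RGB
    vectors for a pixel a distance x from the center-segment border, where d
    is the width of the overlap between the two segments."""
    V1 = np.asarray(V1, dtype=float)
    V2 = np.asarray(V2, dtype=float)
    w = np.clip(x / d, 0.0, 1.0)   # weight toward the adjoining segment
    return (1.0 - w) * V1 + w * V2
```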
Furthermore, in accordance with the present invention, a color fidelity (i.e., white balance) filter is applied. This multiplies the R and B components by a determinable factor to enhance color fidelity; the factor can be determined by calibrating the camera and lens. The color fidelity filter ensures that the colors in the image maintain their fidelity as perceived directly by the human eye. In an image-capturing apparatus, the red, green, and blue light-receiving elements have different sensitivities to the colors they are meant to capture. A "white balance" process is applied in which an image of a white object is captured. In theory, the pixels in the image of the white object should have identical R, G and B values. In practice, however, due to the different sensitivities and other factors, the average values of R, G and B may be avgR, avgG, and avgB, respectively. To equalize the color components, the R and B values of the pixels are multiplied by the following ratios:
the R value is multiplied by the ratio avgG/avgR; and
the B value is multiplied by the ratio avgG/avgB.
The end result is that the image of the white object has equal R, G, and B components.
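The white-balance correction described above amounts to scaling the R and B channels by ratios measured from an image of a white object; the sketch below assumes H x W x 3 float arrays and is intended only to illustrate those ratios.

```python
import numpy as np

def white_balance(image, white_patch):
    """Scale R and B so that a captured white patch ends up with equal
    average R, G and B values (G is left unchanged)."""
    avg_r, avg_g, avg_b = white_patch.reshape(-1, 3).mean(axis=0)
    corrected = image.astype(float)
    corrected[..., 0] *= avg_g / avg_r   # R channel scaled by avgG/avgR
    corrected[..., 2] *= avg_g / avg_b   # B channel scaled by avgG/avgB
    return corrected
```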
In most applications, a strip typically covers a large area of non-water surface. Anomalies such as highly reflective surfaces are therefore unlikely to skew the average intensity of the strip. The present invention provides an intensity normalization module that normalizes the average intensity of each strip so that the mean and standard deviation have desired values. For example, a mean of 127 is the norm in photogrammetry, and a standard deviation of 51 helps spread the intensity values over an optimal range for visual perception of image features. Each strip is acquired under different lighting conditions and therefore has a different imaging data profile (i.e., average intensity and standard deviation). This module normalizes the strips so that all strips have the same mean and standard deviation, which enables the strips to be stitched together without visible seams.
This intensity normalization involves the calculation of the average intensity for each channel R, G and B, as well as for all channels. The total standard deviation is then calculated. Each R, G and B value for each pixel is transformed into a new mean and standard deviation:
new value = new mean + (old value − old mean) × (new standard deviation / old standard deviation)
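A hedged sketch of this per-strip normalization follows, assuming float imagery and the target mean of 127 and standard deviation of 51 mentioned above.

```python
import numpy as np

def normalize_strip(strip, new_mean=127.0, new_std=51.0):
    """Shift and scale every pixel so the strip has the desired mean and
    standard deviation: new = new_mean + (old - old_mean) * (new_std / old_std)."""
    strip = np.asarray(strip, dtype=float)
    old_mean = strip.mean()
    old_std = strip.std()
    return new_mean + (strip - old_mean) * (new_std / old_std)
```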
Thereafter, a plurality of adjacent strips are combined, thereby generating a tiled mosaic of the region of interest. The finished tiles correspond to USGS quads or quarter-quads. Stitching strips into a mosaic is similar to stitching mosaics together into a strip, with the strip now playing the role of the mosaic. A problem can arise if the seam line between two strips passes through an elevated structure such as a building or bridge. This classical problem in photogrammetry arises from the parallax caused by the same object being viewed from two different perspectives. For example, when imaging a building, one strip may present a view from one side of the building while another strip presents a view from the other side. After the images are stitched together, the resulting mosaic may look like a conical tent. To address this issue, a terrain-guided mosaicing process may be implemented to guide the placement of seam lines. For example, LIDAR or DEM data collected with or analyzed from the image data may be processed to determine the configuration and shape of the seam lines when the images are mosaiced together. Thus, in some mosaiced images the seam line may not be straight, instead weaving back and forth to bypass elevated structures.
Referring now to FIG. 8, one embodiment of an imaging process 800 in accordance with the present invention, as described above, is illustrated. The process 800 begins with one or more series 802 of collected raw images. The images 802 are then processed through a white balancing process 804, transforming them into a series of intermediate images. The series 802 is then processed by an anti-vignetting function 806 before proceeding to the orthorectification process 808. As previously described, orthorectification relies on position and attitude data 810 from the imaging sensor system or platform, and on DTM data 812. DTM data 812 may be generated from the position data 810 and from, for example, USGS DTM data 814 or LIDAR data 816. The series 802, now orthorectified, continues with color balancing 818. After color balancing, the series 802 is converted into a composite image 822 by mosaicing module 820. During this conversion, module 820 performs the mosaicing and feathering processes. Now, in step 824, one or more composite images 822 are further combined into image strips 826 by mosaicing with gradient and feathering. The image strips are processed through intensity normalization 828. The now-normalized strips 828 are then mosaiced together in step 830, again by mosaicing with gradient and feathering, rendering the final tiled mosaic 832. The mosaicing performed in step 830 may include terrain-guided mosaicing, relying on DTM data 812 or LIDAR data 816.
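The ordering of these stages can be summarized in a short, hedged sketch. Every stage below is supplied by the caller as a callable, since the figure only names the steps; none of the function names are taken from an actual implementation.

```python
def run_imaging_process(raw_images, stages):
    """Apply the FIG. 8 stages in order. `stages` is assumed to be a dict of
    caller-supplied callables: white_balance, anti_vignette, orthorectify,
    color_balance, mosaic, to_strips, normalize, and tile."""
    images = [stages["white_balance"](img) for img in raw_images]
    images = [stages["anti_vignette"](img) for img in images]
    images = [stages["orthorectify"](img) for img in images]   # uses position/attitude and DTM data
    images = stages["color_balance"](images)
    composites = stages["mosaic"](images)                       # mosaicing and feathering
    strips = stages["to_strips"](composites)                    # combine composites into strips
    strips = [stages["normalize"](s) for s in strips]           # intensity normalization
    return stages["tile"](strips)                               # final (terrain-guided) tiling
```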
FIG. 9 illustrates how photographs taken with the camera array assembly can be aligned to obtain a single frame. This example shows, in a plan view from the vehicle, a photograph pattern using orthorectified data from 5 cameras.
Fig. 10 is a block diagram of processing logic according to some embodiments of the invention. As shown in block diagram 1000, the processing logic accepts one or more inputs, including elevation measurements 1002, attitude measurements 1004, and/or photo and sensor imagery 1006. Some inputs may pass through an initial processing step prior to analysis, as shown in block 1008, where the attitude measurements are combined with data from ground control points. The elevation measurements 1002 and attitude measurements 1004 may be combined to produce processed elevation data 1010. The processed elevation data 1010 may then be used to generate an elevation DEM 1014 and a DTM 1016. Similarly, the attitude measurements 1004 and sensor imagery 1006 may be combined to produce a geo-referenced image 1012, which then undergoes image processing 1018, which may include color balancing and gradient filtering.
Depending on the data set to be used (1020), either the DTM 1016 or a USGS DEM 1022 is combined with the processed image 1018 to produce an orthorectified image 1024. The orthorectified image 1024 then feeds into the self-locking flight lines 1026. The projection mosaic 1028 is then balanced to produce the final photo output 1030.
The invention may employ a degree of lateral oversampling to improve output quality. Fig. 11 is an illustration of a lateral oversampling pattern 1100 looking down from a vehicle, showing minimal lateral oversampling, in accordance with some embodiments of the present invention. In this illustration, the central nadir region 1102 assigned to the center camera overlaps only slightly with the left nadir region 1104 and the right nadir region 1106, so that overlap is minimized. Fig. 12 is an illustration of a lateral oversampling pattern 1200 looking down from a vehicle, showing a greater degree of lateral oversampling, in accordance with some embodiments of the present invention. In this illustration, the central nadir region 1202 shows a greater degree of overlap with the left nadir region 1204 and the right nadir region 1206.
In addition to the use of lateral overlap as shown in FIGS. 11 and 12, the present invention may also employ course oversampling. FIG. 13 is an illustration of a flight line oversampling pattern 1300 looking down from a vehicle showing some degree of flight line oversampling, but minimal lateral oversampling, in accordance with some embodiments of the present invention. The central nadir regions 1302 and 1304 overlap each other along the flight line, but do not laterally overlap the left nadir regions 1306 and 1308, or the right nadir regions 1310 and 1312.
FIG. 14 is an illustration of flight-line oversampling looking down from a vehicle, showing significant flight-line oversampling as well as significant lateral oversampling, in accordance with some embodiments of the present invention. It can be seen that each of the central nadir regions 1402-1406 significantly overlaps the others, as well as the left nadir regions 1408-1412 and right nadir regions 1414-1418. The left nadir regions 1408-1412 overlap each other, as do the right nadir regions 1414-1418. Thus, each point on the surface is sampled at least twice and, in some cases, up to four times. This technique takes advantage of the fact that, in areas of the image covered two or more times by different camera sensors, a doubling of image resolution is possible in both the lateral (cross-track) and flight-line (along-track) directions, for an overall increase in resolution of about four times. In practice, the improvement in image/sensor resolution is slightly less than double in each dimension, approximately 40% per dimension, or 1.4 × 1.4 ≈ 2 times. This is due to statistical variations in sub-pixel alignment/orientation; in practice, the pixel grid is rarely offset from the overlaid pixel grid by exactly half a pixel. A full four-fold improvement in image resolution can be achieved if extremely precise lateral camera sensor alignment is maintained at the sub-pixel level.
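The claimed benefit of half-pixel-offset sampling can be illustrated numerically; the test spacing, the exact half-pixel offset, and the one-dimensional simplification below are assumptions chosen only to show the sampling-density argument, not a model of the actual sensors.

```python
import numpy as np

# One sensor samples the ground every 1.0 unit along a direction; a second,
# co-registered sensor sees the same ground offset by half a pixel.
x_a = np.arange(0.0, 100.0, 1.0)              # first sensor's sample positions
x_b = x_a + 0.5                               # second sensor, half-pixel offset
combined = np.sort(np.concatenate([x_a, x_b]))

spacing_single = np.diff(x_a).mean()          # 1.0 ground unit per sample
spacing_combined = np.diff(combined).mean()   # 0.5 ground unit per sample
print(spacing_single / spacing_combined)      # -> 2.0: sampling density doubled per dimension
```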
Fig. 15 is an illustration of a progressive magnification mode 1500 from a top view of a vehicle, in accordance with some embodiments of the invention. The central nadir region 1502 is bounded at its left and right edges by a left inner nadir region 1504 and a right inner nadir region 1506, respectively. The left inner nadir area 1504 is bounded at its left edge by a left outer nadir area 1508, while the right inner nadir area 1506 is bounded at its right edge by a right outer nadir area 1510. Note that these regions exhibit minimal overlap and oversampling from one region to another.
Fig. 16 is an illustration of a progressive magnification mode 1600 looking down from a vehicle, in accordance with some embodiments of the invention. Central nadir region 1602 is bounded at its left and right edges by left inner nadir region 1604 and right inner nadir region 1606, respectively. Left inner nadir region 1604 is bounded at its left edge by left outer nadir region 1608, and right inner nadir region 1606 is bounded at its right edge by right outer nadir region 1610. Note that as described above, these regions exhibit minimal overlap and oversampling from one region to another. Within each nadir region 1604-1610, there is a central image region 1614-1620, represented by gray shading.
Fig. 17 is an illustration of a progressive magnification mode 1700 looking down from a vehicle, in accordance with some embodiments of the invention. At the center of the pattern 1700, the left internal nadir area 1702 and the right internal nadir area 1704 overlap at the center. The left middle nadir area 1706 and the right middle nadir area 1708 are partially disposed outside of the areas 1702 and 1704, respectively, sharing approximately 50% overlap area with the corresponding adjacent areas, respectively. Left outer nadir region 1710 and right outer nadir region 1712 are disposed partially outside of regions 1706 and 1708, respectively, sharing approximately 50% overlap with corresponding adjacent regions, respectively. The central image region 1714 is disposed in the center of the pattern 1700, consisting of the central portion of nadir region 1702-1712.
FIG. 18 depicts a schematic diagram of the architecture of a system 1800, according to some embodiments of the invention. The system 1800 may include one or more GPS satellites 1802 and one or more SATCOM satellites 1804. One or more GPS location systems 1806 may also be included, the one or more GPS location systems 1806 operatively connected to one or more modules 1808, the one or more modules 1808 collecting LIDAR, GPS, and/or X, Y, Z location data and then providing such information to one or more data capture system applications 1812. One or more data capture system applications 1812 may also receive spectral data from the camera array 1822. The DGPS 1810 may communicate with one or more SATCOM satellites 1804 via a wireless communication link 1826. The one or more SATCOM satellites 1804, in turn, may communicate with one or more data capture system applications 1812.
One or more data capture system applications 1812 may interface with an autopilot 1816, an SSD 1814, and/or a real-time stitching system 1820, which may also interact with each other. The SSD 1814 may be operatively connected to the real-time DEM 1818. Finally, the real-time DEM 1818 and the real-time stitching system 1820 may be coupled to a storage device such as disk array 1824.
The invention can overcome the limits of physical pixel resolution by employing a degree of co-mounted, co-registered oversampling. Fig. 19 is an illustration of a lateral co-mounted, co-registered oversampling configuration 1900 for a single camera array 112, looking down from a vehicle and showing minimal lateral oversampling, in accordance with some embodiments of the present invention. The cameras overlap a few degrees in the vertical side overlap regions 1904 and 1908. Although fig. 19 depicts a 3-camera array, these sub-pixel calibration techniques apply equally when using camera arrays of from 2 up to any number of calibrated camera sensors.
Similar to the imaging sensors in fig. 3 and 4, the camera sensors may be co-registered to calibrate the physical mount angle offset of each sensor relative to each other and/or to the nadir camera. This provides an initial "approximate" calibration. These initial calibration parameters may be entered into the on-board computer system 104 in the system 100 and updated during flight using oversampling techniques.
Referring now to FIG. 19, rectangles A, B and C represent image areas 1902, 1906, and 1910 from the 3-camera array C-B-A (not shown). Images of areas 1902, 1906, and 1910, taken with cameras A-C (not shown), respectively, are illustrated in a top view. Again, similar to figs. 3 and 4, due to the "squint" arrangement, the image of area 1902 is taken by the right camera A, the image of area 1906 is taken by the center/nadir camera B, and the image of area 1910 is taken by the left camera C. The cameras A-C form an array (not shown) that is directed vertically downward in most applications.
In FIG. 19, the shaded regions labeled A/B and B/C side-by-side overlap represent image overlap regions 1904 and 1908, respectively. The left image overlap region 1904 is where the right camera A overlaps the center/nadir camera B, and the right image overlap region 1908 is where the left camera C overlaps the center/nadir camera B. In these side-by-side overlap regions 1904 and 1908, the camera sensor grid bisects each pixel in the overlap regions 1904 and 1908, which in effect quadruples the image resolution in these regions 1904 and 1908 by means of the co-mounted, co-registered oversampling mechanism. In practice, the improvement in image/sensor resolution is doubled in each dimension, or 2 × 2 = 4 times. This four-fold increase in image resolution also quadruples the accuracy of alignment between adjacent cameras.
Furthermore, this fourfold increase in alignment accuracy between adjacent cameras improves the system 100 alignment accuracy of all sensors secured to the rigid mounting plate. As described above, the camera and the sensor are fixed to the rigid mounting unit, and the rigid mounting unit is fixed to the rigid mounting plate. In particular, as the angular alignment of adjacent cameras fixed to the rigid mounting unit is improved, the angular alignment of the other sensors is also improved. This increase in the accuracy of the alignment of the other sensors fixed to the rigid mounting plate also improves the image resolution of these sensors.
A laterally co-mounted, co-registered oversampling configuration 2000 with respect to two overlapping camera arrays 112 is illustrated in fig. 20. In particular, fig. 20 is an illustration of a lateral co-mounted, co-registered oversampling configuration 2000 of two overlapping camera arrays 112 viewed from the vehicle in plan, showing maximum lateral oversampling, in accordance with some embodiments of the present invention. Adjacent cameras overlap several degrees in vertical side-by-side overlap regions 2006, 2008, 2014 and 2016, and corresponding cameras overlap completely in image regions 2002, 2010, 2018 and 2004, 2012, 2020. Although fig. 20 depicts two 3-camera arrays, these sub-pixel calibration techniques are equally applicable when using two overlapping camera arrays with many camera sensors from 2 to any number of calibrated cameras.
Similar to the imaging sensors in fig. 3 and 4, the camera sensors may be co-registered to calibrate the physical mount angle offset of each sensor relative to each other and/or to the nadir camera. In this embodiment, a plurality, i.e., at least two, of the rigid mounting units are secured to the rigid mounting plate and are in co-registration. This provides an initial "approximate" calibration. These initial calibration parameters may be entered into the on-board computer system 104 in the system 100 and updated during flight.
Referring now to fig. 20, the rectangles labeled A, B and C represent image areas 2002, 2010, 2018 and 2004, 2012, 2020 from two overlapping 3-camera arrays C-B-A (not shown), respectively. The images of areas 2002, 2010, 2018 and 2004, 2012, 2020, taken with cameras A-C (not shown) and the overlapping cameras A'-C' (not shown), respectively, are illustrated in a top view. Again, similar to figs. 3 and 4, due to the "squint" arrangement, the image of area 2002 is taken by the right camera A, the image of area 2010 is taken by the center/nadir camera B, and the image of area 2018 is taken by the left camera C. Further, the image of area 2004 is captured by the right camera A', the image of area 2012 is captured by the center camera B', and the image of area 2020 is captured by the left camera C'. The cameras A-C and the overlapping cameras A'-C' form arrays (not shown) that are directed vertically downward in most applications.
In FIG. 20, the shaded regions labeled A/B and B/C side-by-side overlap represent the two sets of overlapping image overlap regions 2006, 2008 and 2014, 2016, respectively. The left image overlap regions 2006, 2008 are where the right camera A overlaps the center/nadir camera B, and where the right camera A' overlaps the center camera B', respectively. The right image overlap regions 2014, 2016 are where the left camera C overlaps the center/nadir camera B, and where the left camera C' overlaps the center camera B'. In these side-by-side overlap regions 2006, 2008 and 2014, 2016, the camera sensor grid bisects each pixel in the overlap regions 2006, 2008 and 2014, 2016, respectively, which in effect quadruples the image resolution in these regions 2006, 2008 and 2014, 2016 via the co-mounted, co-registered oversampling mechanism. In practice, the improvement in image/sensor resolution is doubled in each dimension, or 2 × 2 = 4 times. This four-fold increase in image resolution quadruples the accuracy of alignment between adjacent cameras, as described above.
By having two overlapping camera arrays, the image resolution is in fact quadrupled again for overlapping side-overlapping overlap regions 2006, 2008 and 2014, 2016. This yields a surprising total 64-fold improvement in system 100 calibration and camera alignment.
In the overlapping side overlap regions 2006 and 2008, the overlapping camera sensor grids bisect each pixel in regions 2006 and 2008, which in effect quadruples the image resolution in these regions 2006 and 2008 by means of the co-mounted, co-registered oversampling mechanism. Similarly, in the overlapping side overlap regions 2014 and 2016, the overlapping camera sensor grids bisect each pixel in regions 2014 and 2016, which in effect quadruples the image resolution in these regions 2014 and 2016. In practice, the improvement in image/sensor resolution is again doubled in each dimension, or 2 × 2 × 2 × 2 × 2 × 2 = 64 times overall. This total 64-fold improvement in image resolution also increases the alignment accuracy between adjacent cameras by a factor of 64.
This 64-fold improvement in alignment accuracy between adjacent and corresponding cameras improves the system 100 alignment accuracy of all sensors secured to the rigid mounting plate. The cameras A-C, and optionally other sensors, are fixed to a first rigid mounting unit, and the cameras A'-C', and optionally other sensors, are fixed to a second rigid mounting unit, with both the first and second rigid mounting units fixed to the rigid mounting plate. In particular, as the angular alignment of adjacent and/or corresponding cameras fixed to the first and/or second rigid mounting units is improved, the angular alignment of the other sensors is also improved. This increase in the alignment accuracy of the other sensors fixed to the rigid mounting plate also improves the image resolution of those sensors.
By having two overlapping camera arrays, the image resolution is effectively quadrupled for the entire image, not just for the A/B and B/C side-by-side overlap areas. Referring now to fig. 20, the overlapping grid detail labeled "overlapping grid 4x" represents overlap regions 2022 and 2024 in the right image areas 2018 and 2020, respectively. In the overlap regions 2022 and 2024, the overlapping camera sensor grids bisect each pixel in regions 2022 and 2024, which in effect quadruples the image resolution in these regions 2022 and 2024 by the co-mounted, co-registered oversampling mechanism. In practice, the improvement in image resolution is doubled in each dimension, or 2 × 2 = 4 times.
In a preferred embodiment, one camera array is monochromatic and the other camera array is red-green-blue. Even if each array covers a different color band, simple image processing techniques can be used, thereby allowing all color bands to achieve the benefit of increased resolution. Another advantage provided by these techniques is that in the case where one camera array is red-green-blue and the other overlapping camera array is infrared or near infrared (or some other bandwidth), this results in a superior multispectral image.
Thus, all of the improvements identified for the embodiment of fig. 19 discussed above (i.e., 4 times) are compatible with the embodiment of fig. 20, however, with two overlapping camera arrays, an additional significant increase in the calibration accuracy and overall image resolution of the system 100 (i.e., 64 times) can be achieved.
Fig. 21 is an illustration of a forward and laterally co-mounted, co-registered oversampling configuration 2100 for two camera arrays 112, looking down from a vehicle, in accordance with some embodiments of the invention. In particular, fig. 21 shows the configuration 2100 of two overlapping camera arrays 112 viewed from above the vehicle, with minimal forward and minimal lateral oversampling. Adjacent cameras overlap a few degrees in the vertical side overlap regions 2104, 2108, 2124, and 2128, and corresponding cameras overlap a few degrees along the horizontal forward overlap regions 2112, 2116, and 2120. Although fig. 21 depicts two 3-camera arrays, these sub-pixel calibration techniques apply equally when using two overlapping camera arrays with from 2 up to any number of calibrated camera sensors.
Similar to the imaging sensors in fig. 3 and 4, the camera sensors may be co-registered to calibrate the physical mount angle offset of each sensor relative to each other and/or to the nadir camera. In this embodiment, a plurality, i.e., at least two, of the rigid mounting units are secured to the rigid mounting plate and are in co-registration. This provides an initial "approximate" calibration. These initial calibration parameters may be entered into the on-board computer system 104 in the system 100 and updated during flight.
Referring now to FIG. 21, the rectangles labeled A, B and C represent image areas 2102, 2106, and 2110 from the 3-camera array C-B-A (not shown), and the rectangles labeled D, E and F represent image areas 2122, 2126, and 2130 from the 3-camera array F-E-D (not shown). The images of areas 2102, 2106 and 2110, taken with cameras A-C (not shown), and of areas 2122, 2126 and 2130, taken with cameras D-F (not shown), respectively, are illustrated in a top view. Again, similar to figs. 3 and 4, due to the "squint" arrangement, the rear left image of region 2102 is captured by the rear right camera A, the rear center image of region 2106 is captured by the rear center/nadir camera B, and the rear right image of region 2110 is captured by the rear left camera C. Further, the front left image of area 2122 is captured by the front right camera D, the front center image of area 2126 is captured by the front center camera E, and the front right image of area 2130 is captured by the front left camera F. Cameras A-C and the overlapping cameras D-F form arrays (not shown) that are directed vertically downward in most applications.
In fig. 21, the vertically shaded areas represent the 4 side image overlap regions 2104, 2108, 2124, and 2128. The rear left image overlap region 2104 is where the rear right camera A overlaps the rear center/nadir camera B, and the rear right image overlap region 2108 is where the rear left camera C overlaps the rear center/nadir camera B. The front left image overlap region 2124 is where the front right camera D overlaps the front center camera E, and the front right image overlap region 2128 is where the front left camera F overlaps the front center camera E.
Referring now to FIG. 21, the overlapping grid detail labeled "side overlap region 4:1" represents the side overlap regions 2104, 2108 and 2124, 2128. In these side overlap regions 2104, 2108 and 2124, 2128, the camera sensor grid bisects each pixel in the overlap regions 2104, 2108, 2124 and 2128, which in effect quadruples the image resolution in these regions 2104, 2108, 2124 and 2128 by means of the co-mounted, co-registered oversampling mechanism. In practice, the improvement in image/sensor resolution is doubled in each dimension, or 2 × 2 = 4 times. This four-fold increase in image resolution quadruples the accuracy of alignment between adjacent cameras, as described above.
This four-fold increase in alignment accuracy between adjacent cameras improves the system 100 alignment accuracy of all sensors secured to the rigid mounting plate. The cameras A-C, and optionally other sensors, are fixed to a first rigid mounting unit, and the cameras D-F, and optionally other sensors, are fixed to a second rigid mounting unit, with both the first and second rigid mounting units fixed to the rigid mounting plate. In particular, as the angular alignment of adjacent cameras secured to the first or second rigid mounting unit is improved, the angular alignment of the other sensors secured to those mounting units is also improved. This increase in the alignment accuracy of the other sensors fixed to the rigid mounting plate also improves the image resolution of those sensors.
Similarly, the horizontally shaded regions represent the 3 forward image overlap regions 2112, 2116, and 2120. The left forward image overlap region 2112 is where the rear right camera A overlaps the front right camera D, the center forward image overlap region 2116 is where the rear center/nadir camera B overlaps the front center camera E, and the right forward image overlap region 2120 is where the rear left camera C overlaps the front left camera F.
Referring now to fig. 21, the overlapping grid detail labeled "forward overlap region 4:1" represents the forward overlap regions 2112, 2116 and 2120. In these forward overlap regions 2112, 2116 and 2120, the camera sensor grid bisects each pixel in the overlap regions 2112, 2116 and 2120, which in effect quadruples the image resolution in these regions 2112, 2116 and 2120 by means of the co-mounted, co-registered oversampling mechanism. In practice, the improvement in image/sensor resolution is doubled in each dimension, or 2 × 2 = 4 times. This four-fold increase in image resolution quadruples the accuracy of alignment between corresponding cameras.
This four-fold increase in alignment accuracy between corresponding cameras improves the system 100 alignment accuracy of all sensors secured to the rigid mounting plate. The cameras A-C, and optionally other sensors, are fixed to a first rigid mounting unit, and the cameras D-F, and optionally other sensors, are fixed to a second rigid mounting unit, with both the first and second rigid mounting units fixed to the rigid mounting plate. In particular, as the angular alignment of corresponding cameras fixed to the first or second rigid mounting unit is improved, the angular alignment of the other sensors is also improved. This increase in the alignment accuracy of the other sensors fixed to the rigid mounting plate also improves the image resolution of those sensors.
Similar to the overlapping side overlap regions 2006, 2008 and 2014, 2016 in FIG. 20, the intersecting forward and side overlap regions 2114 and 2118 in FIG. 21 produce a total 64-fold improvement in system calibration and camera alignment. Referring now to FIG. 21, the intersecting grid detail labeled "quadruple overlap region 64:1" represents overlap region 2118, where the forward and side overlaps intersect. In the intersecting forward and side overlap regions 2114 and 2118, the overlapping camera sensor grids bisect each pixel in the intersecting regions 2114 and 2118, which in effect quadruples the image resolution in these regions 2114 and 2118 by means of the co-mounted, co-registered oversampling mechanism. In practice, the improvement in image/sensor resolution is doubled again in each dimension, or 2 × 2 × 2 × 2 × 2 × 2 = 64 times overall. This total 64-fold improvement in image resolution also increases the alignment accuracy between adjacent cameras by a factor of 64.
This 64-fold improvement in alignment accuracy between adjacent and corresponding cameras improves the system 100 alignment accuracy of all sensors secured to the rigid mounting plate. The cameras A-C, and optionally other sensors, are fixed to a first rigid mounting unit, and the cameras D-F, and optionally other sensors, are fixed to a second rigid mounting unit, with both the first and second rigid mounting units fixed to the rigid mounting plate. In particular, as the angular alignment of adjacent and/or corresponding cameras fixed to the first and/or second rigid mounting units is improved, the angular alignment of the other sensors is also improved. This increase in the alignment accuracy of the other sensors fixed to the rigid mounting plate also improves the image resolution of those sensors.
In a preferred embodiment, one camera array is monochromatic and the other camera array is red-green-blue. Even if each array covers a different color band, simple image processing techniques can be used, thereby allowing all color bands to achieve the benefit of increased resolution. Another advantage provided by these techniques is that in the case where one camera array is red-green-blue and the other overlapping camera array is infrared or near infrared (or some other bandwidth), this results in a superior multispectral image.
As shown in figs. 19-21, these techniques can be used to overcome the resolution limit imposed on camera systems by the inability of optical glass to resolve "very small" objects. In particular, there are known physical limits on the ability of the optical glass in camera lenses to resolve very small objects, commonly referred to as the resolving limit of the glass. For example, if 1-millimeter pixels are required from an altitude of 10,000 feet, a telephoto lens with extremely high magnification would be needed, yielding a ground coverage width of only about 100 feet. This is because the resolving power of even the clearest glass does not allow an image resolution of 1-millimeter pixels at an altitude of 10,000 feet, regardless of how many pixels (e.g., billions) the CCD sensor can produce. This example illustrates that there are physical limits on the pixel resolution of the glass, as well as pixel density limits on the imaging sensor.
The system 100 imaging sensor alignment in rigid mounting units secured to a rigid mounting plate, together with the associated calibration techniques described above, provides a unique solution to this problem. By using these techniques, the resolving limit of the glass can be effectively overcome. For example, a single camera array yields the benefit of 1-fold (i.e., no) oversampling. However, two overlapping camera arrays yield an overall 4-fold improvement in image resolution and in overall geospatial horizontal and vertical accuracy. Furthermore, 3 overlapping camera arrays yield a 16-fold overall improvement, 4 overlapping camera arrays yield a 64-fold overall improvement, and so on.
From these examples, it can be deduced that the equation for the overall improvement is as follows:
Overall improvement = 4^N
Where N is the number of overlapping camera arrays.
If there are 4 camera arrays, then there are 3 overlapping camera arrays (i.e., N = 3). Thus, the 4-camera array provides a 64-fold overall improvement in image resolution, as well as in overall geospatial horizontal and vertical accuracy (i.e., 4^3 = 64 times).
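Evaluating this relationship is straightforward; the short sketch below simply computes 4^N for a few array counts, with N taken, as in the text, to be one less than the total number of camera arrays.

```python
def overall_improvement(num_camera_arrays):
    """Overall resolution/accuracy improvement factor, where N is the number
    of overlapping camera arrays (one less than the total array count)."""
    n_overlapping = num_camera_arrays - 1
    return 4 ** n_overlapping

for arrays in (1, 2, 3, 4):
    print(arrays, "camera array(s) ->", overall_improvement(arrays), "x")
# 1 -> 1x, 2 -> 4x, 3 -> 16x, 4 -> 64x, matching the progression above
```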
In addition, these sub-pixel calibration techniques may be combined with self-locking track techniques as disclosed in U.S. patent application publication No.2004/0054488A1 (now U.S. patent No.7,212,938B2), the disclosure of which is incorporated herein by reference in its entirety.
In addition to forward and/or lateral co-mounted, co-registered oversampling as shown in figs. 19-21, the present invention may also employ flight-line oversampling, as shown in figs. 13-17, to further improve image resolution. As shown in figs. 13-17, the flight lines are parallel to one another and their image areas overlap. These overlapping image areas can be used to calibrate the sensors by applying stereo photography techniques to the along-track and cross-track parallax of the images in adjacent flight lines.
In one embodiment, a self-locking track may include any pattern that produces at least 3 substantially parallel travel routes in a set of 3 or more travel routes. Furthermore, at least one of the travel routes should be in a direction opposite to the other substantially parallel travel routes. In a preferred embodiment, the travel pattern includes at least one pair of travel routes in matching directions and at least one pair of travel routes in opposite directions.
When using self-locking tracks in the opposite direction, the observable position error can be doubled in some image areas. Thus, the self-locking course technique includes algorithms that significantly reduce these position errors. This reduction in position error is particularly important in the outer, or left-most "wing" and right-most "wing" image areas where the greatest position error occurs.
In one embodiment, these position improvements may be achieved by using a pattern matching technique that automatically matches a pixel pattern region obtained from one flight line (e.g., flown north/south) with the same pixel pattern region obtained from an adjacent flight line. In a preferred embodiment, latitude/longitude coordinates from one or more GPS positioning systems may be used to speed up the pattern matching process.
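One plausible realization of this pattern matching is normalized cross-correlation over a small search window centered on the GPS-predicted location in the adjacent flight line; the function name, array shapes, and brute-force search below are illustrative assumptions rather than the patented method.

```python
import numpy as np

def match_offset(template, search_area):
    """Locate `template` inside `search_area` by normalized cross-correlation.
    Both are assumed to be small 2-D float arrays; `search_area` would be cut
    from the adjacent flight line around the GPS-predicted position."""
    th, tw = template.shape
    t = (template - template.mean()) / (template.std() + 1e-12)
    best_score, best_offset = -np.inf, (0, 0)
    for dy in range(search_area.shape[0] - th + 1):
        for dx in range(search_area.shape[1] - tw + 1):
            patch = search_area[dy:dy + th, dx:dx + tw]
            p = (patch - patch.mean()) / (patch.std() + 1e-12)
            score = float((t * p).mean())
            if score > best_score:
                best_score, best_offset = score, (dy, dx)
    return best_offset, best_score
```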
Similarly, these sub-pixel calibration and self-locking track techniques can be combined with stereographic techniques, which are extremely dependent on the positional accuracy of each pixel relative to all other pixels. In particular, these techniques improve the stereophotographic image resolution, as well as overall geographic horizontal and vertical accuracy, particularly in the leftmost "wing" and rightmost "wing" image regions where the greatest positional errors occur. Furthermore, stereographic techniques are used to match known elevation data with the improved stereographic data set. Thus, the combined sub-pixel calibration, self-locking track, and stereo photography techniques provide a greatly improved digital elevation model, which results in superior images.
In addition, these sub-pixel calibration and self-locking track techniques can be used to provide dynamic real-time calibration of the system 100. In particular, these techniques provide the ability to quickly "roll-on" one or more camera array assemblies 112 to the system 100, immediately begin collecting image data of the target area, and quickly generate high quality images because, as described above, the individual sensors have been initially calibrated in a rigid mounting unit secured to a rigid mounting plate. In particular, the camera sensors are co-registered to calibrate the physical mounting angle offset of each sensor relative to each other and/or to the nadir camera. In one embodiment, a plurality, i.e., at least two, of the rigid mounting units are secured to the rigid mounting plate and are in co-registration. This provides an initial "approximate" calibration. These initial calibration parameters may be entered into the on-board computer system 104 in the system 100 and updated during flight using oversampling techniques, as described above.
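The mount-angle offsets determined by this co-registration can be thought of as small rotations applied to each sensor's focal axis relative to the reference (nadir) camera; the rotation order, the parameter names, and the vector form below are assumptions made only to illustrate how such a calibrated offset might be applied.

```python
import numpy as np

def rotation_matrix(roll, pitch, yaw):
    """Rotation built from roll/pitch/yaw angles in radians (applied about
    x, then y, then z); used here only to illustrate a mount-angle offset."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def corrected_boresight(nominal_axis, mount_offset_rpy):
    """Rotate a sensor's nominal focal axis by its calibrated mount-angle
    offset (roll, pitch, yaw) relative to the reference camera."""
    return rotation_matrix(*mount_offset_rpy) @ np.asarray(nominal_axis, float)
```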
In one embodiment, the system 100 includes a real-time self-calibration system to update the calibration parameters. In particular, the on-board computer 104 software contains real-time software "daemon" (i.e., background closed-loop monitoring software) to continuously monitor and update calibration parameters using joint installation, joint registration oversampling, and lane oversampling techniques, as described above. In a preferred embodiment, real-time daemon combines sub-pixel calibration, self-locking track and stereophotography techniques to improve stereophotographic image resolution, as well as overall geographic horizontal and vertical accuracy. In particular, stereographic techniques are used to match known elevation data with the improved stereographic data set. Thus, the combined sub-pixel calibration, self-locking track, and stereo photography techniques provide a greatly improved digital elevation model, resulting in superior images.
In one embodiment, system 100 includes a real-time GPS data system to provide GPS input data. The calibration accuracy is driven by input data from electronic devices such as GPS and IMU, and by calibration software augmented with industry standard GPS and IMU software systems. Thus, a key element of the real-time self-calibrating system is real-time GPS input data over a potentially low bandwidth communication channel, such as a satellite phone, cellular phone, RF modem, or similar device. Potential sources of real-time GPS input data include project-controlled point-to-point (ad-hoc) stations, fixed broadcast GPS location (or the like), or inertial navigation via an onboard IMU.
The modules, algorithms, and processes described above may be implemented using a variety of techniques and structures. Embodiments of the invention may include functional examples of software or hardware, or a combination of both. Furthermore, the modules and processes of the present invention may be combined together in a single functional instance (e.g., a software program) or may comprise separate functional devices (e.g., multiple networked processors/memory blocks) that are operatively associated. The present invention encompasses all such implementations.
The embodiments and examples set forth herein are presented to best explain the present invention and its practical application and to thereby enable those skilled in the art to make and utilize the invention. Those skilled in the art, however, will recognize that the foregoing description and examples have been presented for the purpose of illustration only. The description is not intended to be exhaustive or to limit the invention to the precise form disclosed. Many modifications and variations are possible in light of the above teaching without departing from the spirit and scope of the following claims.
Claims (35)
1. A system for generating a map, comprising:
a global positioning receiver;
a vehicle in line with the target area;
an elevation measurement unit in communication with the vehicle;
a global positioning antenna in communication with the vehicle;
an attitude measurement unit in communication with the vehicle;
an imaging sensor system disposed on the vehicle, comprising:
a rigid mounting plate secured to the vehicle;
a first rigid mounting unit secured to the mounting plate, the first rigid mounting unit having at least two imaging sensors disposed therein, wherein a first imaging sensor and a second imaging sensor both have focal axes passing through apertures in the first rigid mounting unit and the mounting plate, wherein the first imaging sensor and the second imaging sensor both produce a first data array of pixels, wherein each data array of pixels is at least two-dimensional, wherein the first imaging sensor and the second imaging sensor are offset to have a first image overlap region in the target area, wherein in the first image overlap region first imaging sensor image data bisects second imaging sensor image data;
a second rigid mounting unit secured to the mounting plate, the second rigid mounting unit having a third imaging sensor disposed therein, wherein the third imaging sensor has a focal axis passing through the second rigid mounting unit and the aperture in the mounting plate, wherein the third imaging sensor produces a third data array of pixels; and
a computer in communication with the elevation measurement unit, the global positioning antenna, the attitude measurement unit, the first imaging sensor, and the second imaging sensor; the computer correlates at least a portion of the image area from the first and second imaging sensors and a portion of the target area based on input from one or more of the elevation measurement unit, the global positioning antenna, and the attitude measurement unit.
2. The system of claim 1, wherein said third data array of pixels is at least two-dimensional.
3. The system of claim 2, further comprising:
a fourth imaging sensor disposed within the second rigid mounting unit, wherein the fourth imaging sensor has a focal axis passing through the second rigid mounting unit and an aperture in the mounting plate, wherein the fourth imaging sensor produces a fourth data array of pixels, wherein the fourth data array of pixels is at least two-dimensional, wherein the third imaging sensor and the fourth imaging sensor are offset to have a second image overlap region in the target area, wherein third imaging sensor image data bisects fourth imaging sensor image data in the second image overlap region.
4. The system of claim 3, wherein a first sensor array comprising the first imaging sensor and the second imaging sensor, and a second sensor array comprising the third imaging sensor and a fourth imaging sensor, are offset to have a third image overlap region in the target area, wherein in the third image overlap region the first imaging sensor array image data bisects the second imaging sensor array image data.
5. The system of claim 3, wherein the first imaging sensor array image data completely overlaps the second imaging sensor array image data.
6. The system of claim 1, wherein in operation, the first rigid mounting unit and the mounting plate bend less than 0.01 °.
7. The system of claim 6, wherein in operation, the first rigid mounting unit and the mounting plate bend less than 0.001 °.
8. The system of claim 7, wherein in operation, the first rigid mounting unit and the mounting plate bend less than 0.0001 °.
9. The system of claim 2, wherein the third imaging sensor is selected from the group consisting of a digital camera, a LIDAR, an infrared sensor, a heat-sensing sensor, and a gravitometer.
10. The system of claim 2, wherein the third imaging sensor is selected from the group consisting of a digital camera with a hyperspectral filter and a LIDAR.
11. The system of claim 1, wherein the first imaging sensor is calibrated with respect to one or more attitude measurement devices selected from the group consisting of a gyroscope, an IMU, and a GPS.
12. The system of claim 1, wherein the first imaging sensor and the second imaging sensor are selected from the group consisting of a digital camera, a LIDAR, an infrared sensor, a heat-sensing sensor, and a gravitometer.
13. The system of claim 2, wherein the first imaging sensor and the second imaging sensor are digital cameras and the third imaging sensor is a LIDAR.
14. The system of claim 3, wherein the third imaging sensor and the fourth imaging sensor are selected from the group consisting of a digital camera, a LIDAR, an infrared sensor, a heat-sensing sensor, and a gravitometer.
15. The system of claim 3, wherein the first imaging sensor and the second imaging sensor are digital cameras and the third imaging sensor is a LIDAR.
16. An imaging sensor system, comprising:
a rigid mounting plate secured to the vehicle in alignment with the target area;
a first rigid mounting unit secured to the mounting plate, the first rigid mounting unit having at least two imaging sensors disposed therein, wherein a first imaging sensor and a second imaging sensor both have focal axes passing through apertures in the first rigid mounting unit and the mounting plate, wherein the first imaging sensor and the second imaging sensor both produce a first data array of pixels, wherein each data array of pixels is at least two-dimensional, wherein the first imaging sensor and the second imaging sensor are offset to have a first image overlap region at a target area, wherein in the first image overlap region first imaging sensor image data bisects second imaging sensor image data;
a second rigid mounting unit secured to the mounting plate, the second rigid mounting unit having a third imaging sensor disposed therein, wherein the third imaging sensor has a focal axis passing through the second rigid mounting unit and the aperture in the mounting plate, wherein the third imaging sensor produces a third data array of pixels.
17. The system of claim 16, wherein said third data array of pixels is at least two-dimensional.
18. The system of claim 17, further comprising:
a fourth imaging sensor disposed within the second rigid mounting unit, wherein the fourth imaging sensor has a focal axis passing through an aperture in the second rigid mounting unit and the mounting board, wherein the fourth imaging sensor produces a fourth data array of pixels, wherein the fourth data array of pixels is at least two-dimensional, wherein the third imaging sensor and the fourth imaging sensor are aligned and offset to have a second image overlap region in the target area, wherein third imaging sensor image data bisects fourth imaging sensor image data in the second image overlap region.
19. The system of claim 18, wherein a first sensor array comprising the first imaging sensor and the second imaging sensor, and a second sensor array comprising the third imaging sensor and the fourth imaging sensor, are offset to have a third image overlap region in the target area, wherein in the third image overlap region the first imaging sensor array image data bisects the second imaging sensor array image data.
20. The system of claim 18, wherein the first imaging sensor array image data completely overlaps the second imaging sensor array image data.
21. The system of claim 16, wherein in operation, the first rigid mounting unit and the mounting plate bend less than 0.01 °.
22. The system of claim 21 wherein, in operation, said first rigid mounting unit and said mounting plate bend less than 0.001 °.
23. The system of claim 22, wherein in operation, the first rigid mounting unit and the mounting plate bend less than 0.0001 °.
24. The system of claim 17, wherein the third imaging sensor is selected from the group consisting of a digital camera, a LIDAR, an infrared sensor, a heat-sensing sensor, and a gravitometer.
25. The system of claim 17, wherein the third imaging sensor is selected from the group consisting of a digital camera with a hyperspectral filter and a LIDAR.
26. The system of claim 16, wherein the first imaging sensor is calibrated with respect to one or more attitude measurement devices selected from the group consisting of a gyroscope, an IMU, and a GPS.
27. The system of claim 16, wherein the first imaging sensor and the second imaging sensor are selected from the group consisting of a digital camera, a LIDAR, an infrared sensor, a heat-sensing sensor, and a gravitometer.
28. The system of claim 17, wherein the first imaging sensor and the second imaging sensor are digital cameras and the third imaging sensor is a LIDAR.
29. The system of claim 18, wherein the third imaging sensor and the fourth imaging sensor are selected from the group consisting of a digital camera, a LIDAR, an infrared sensor, a heat-sensing sensor, and a gravitometer.
30. The system of claim 18, wherein the first imaging sensor and the second imaging sensor are digital cameras and the third imaging sensor is a LIDAR.
31. A method of calibrating an imaging sensor, comprising the steps of:
setting up the system of claim 1;
performing an initial calibration of an imaging sensor, comprising:
determining a position of an attitude measurement unit;
determining a position of a first imaging sensor within a first rigid mount unit relative to the attitude measurement unit;
determining a position of a second imaging sensor within the first rigid mount unit relative to the attitude measurement unit;
calibrating the first imaging sensor against a target area and determining a boresight angle of the first imaging sensor; and
calculating a position of one or more subsequent imaging sensors within the first rigid mounting unit relative to the first imaging sensor; and
calibrating the one or more subsequent imaging sensors using the boresight angle of the first imaging sensor; and
updating at least one initial calibration parameter of the first imaging sensor against a target area and a boresight angle of the first imaging sensor using an oversampling technique;
updating a position of one or more subsequent imaging sensors within the first rigid mounting unit relative to the first imaging sensor using an oversampling technique; and
updating at least one calibration parameter of one or more subsequent imaging sensors within the first rigid mount unit with the updated boresight angle of the first imaging sensor.
32. The method of claim 31, wherein the initial calibration step further comprises the steps of:
calibrating the second imaging sensor using the updated boresight angle of the first imaging sensor;
calculating a position of one or more subsequent imaging sensors within the first rigid mounting unit relative to the first imaging sensor; and
calibrating the one or more subsequent imaging sensors within the first rigid mount unit using the updated boresight angle of the first imaging sensor.
33. The method of claim 32, further comprising the steps of:
updating a position of a second imaging sensor within the first rigid mounting unit relative to the first imaging sensor using an oversampling technique;
updating a position of one or more subsequent imaging sensors within a first rigid mounting unit relative to the first imaging sensor using an oversampling technique; and
updating at least one calibration parameter of one or more subsequent imaging sensors within the first rigid mount unit with the updated boresight angle of the first imaging sensor.
34. The method of claim 31, further comprising the steps of:
updating a calibration of the first imaging sensor against a target area and a boresight angle of the first imaging sensor using a course oversampling technique;
updating a position of the one or more subsequent imaging sensors within the first rigid mounting unit relative to the first imaging sensor using a course oversampling technique; and
updating at least one calibration parameter of one or more subsequent imaging sensors with the updated view axis angle of the first imaging sensor.
35. The method of claim 34, further comprising the steps of:
updating a position of a second imaging sensor within the first rigid mount unit relative to the first imaging sensor using a course oversampling technique;
updating a position of the one or more subsequent imaging sensors within the first rigid mounting unit relative to the first imaging sensor using a course oversampling technique; and
updating at least one calibration parameter of the one or more subsequent imaging sensors within the first rigid mount unit with the updated boresight angle of the first imaging sensor.
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US12/798,899 | 2010-04-13 | | |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| HK1182192A HK1182192A (en) | 2013-11-22 |
| HK1182192B true HK1182192B (en) | 2017-09-08 |