SYNTHETIC ELECTRONIC IMAGING SYSTEM
FIELD OF THE INVENTION
The present invention relates to imaging systems for moving vehicles and in particular to ball turret imaging devices.
BACKGROUND OF THE INVENTION
A conventional ball turret imaging system consists of a camera having mechanised optical zoom capability mounted on an inertially stabilised platform. This platform is also capable of being adjusted to a predetermined elevation and azimuthal direction. These systems have been employed on a wide variety of vehicles ranging from airborne vehicles such as aircraft to land vehicles such as tanks, armoured personnel carriers and the like.
The important features of a conventional ball turret imaging system include the ability to select a direction of view and to maintain that direction of view whilst the orientation of the vehicle is changing. Typically a Region of Interest (ROI) is chosen by an operator of the system corresponding to a visual feature which is either worthy of further inspection or may indicate a target. Another important capability is that once the camera is locked onto a given ROI, features within this region can be examined in more detail by employing the zoom capability of the camera. Clearly these systems have many applications in the fields of surveillance, navigation and targeting.
One increasingly important application of conventional ball turret imaging systems is their use in an Unmanned Aerial Vehicle (UAV). These are airborne vehicles specifically designed to fulfil a battlefield surveillance role. As these vehicles are unmanned, they are either remotely piloted or follow a preset course. An operator is able, in real time, to use the ball turret imaging system to view features of interest at a variable resolution from a location remote to the UAV. As conventional ball turret systems are expensive, they represent a significant proportion of the capital cost of a UAV. In addition, their weight makes them unsuitable for installation in smaller UAVs, which are gaining in popularity due to their relatively low cost and higher manoeuvrability. A further disadvantage of conventional ball turret imaging systems is their reliance on an inertially stabilised platform, which is a complicated mechanical system requiring ongoing calibration and maintenance.
It is an object of the present invention to provide an alternative imaging system capable of substantially reproducing the characteristics of current conventional ball turret imaging systems.
SUMMARY OF THE INVENTION
In a first aspect the present invention accordingly provides a system for providing compensated image data corresponding to a region of interest within a field of view of an object having a variable orientation, said system including: a plurality of image capture means located on said object for providing image data corresponding to said field of view; orientation measurement means for measuring an orientation of said object; selection means for selecting an image data portion of said image data corresponding to said region of interest; and compensating means for compensating said image data portion for said orientation of said object to provide said compensated image data.
Combining the increased field of view that can be observed by this system due to the plurality of image capture means with the ability to compensate any region of interest within this field of view for the orientation of the object provides an effective replacement for a ball turret imaging system which does not require an inertially stabilised platform or complicated optics.
Preferably, said system further includes control means to adjust a size of said region of interest, thereby adjusting a number of pixels selected in said image data portion from said image data.
As items of interest within the field of view may vary in size, this capability allows the size of the corresponding region of interest being viewed by the system to be varied accordingly.
Preferably, said plurality of image capture means provide image data at a first pixel resolution and said system further includes image resolution adjusting means to adjust the resolution of said compensated image data to a second pixel resolution.
This feature allows for the further manipulation, storage or display of the compensated image data at various resolutions as required.
Preferably, said system further includes display means to display said compensated image data at said second pixel resolution.
Displaying the compensated image data is an effective way to view and inspect the region of interest.
Preferably, said display means further includes a second display to display said image data corresponding to said field of view at a predetermined third pixel resolution.
This allows the whole field of view to be displayed providing further information about the location of the region of interest within the total field of view to an operator of the system.
Preferably, said system further includes remote communication means to allow an operator of said system to operate said control means and view said display means at a location remote from said object.
This provides a number of advantages as the object can then be used to controllably view a region of interest remote from the operator. Thus this system can be used in remote controlled devices such as UAVs and the like.
Preferably, said plurality of image capture means are located at positions separate from each other on said object.
By not being restrained to fixing the plurality of image capture means at a single location on the object the system can be deployed in a more flexible manner.
Preferably, said plurality of image capture means each provides image data at an adjustable frame rate.
This allows the image capture means which are viewing the region of interest to provide their data at a higher frame rate than those viewing the remaining region, thereby reducing the total bandwidth required for the system.
Preferably, said system further includes data recording means to record said image data.
This allows for further off-line inspection of the high resolution image data at a later time.
In a second aspect the present invention accordingly provides a method for providing compensated image data corresponding to a region of interest within a field of view of an object having a variable orientation, said method including:
capturing image data corresponding to said field of view from a plurality of image capture means located on said object; measuring an orientation of said object; selecting an image data portion of said image data corresponding to said region of interest; and compensating said image data portion for said orientation of said object to provide said compensated image data.
BRIEF DESCRIPTION OF THE DRAWINGS
A preferred embodiment of the present invention will be discussed with reference to the accompanying drawings wherein:
FIGURE 1 is a functional block diagram of a synthetic ball turret imaging system according to a preferred embodiment of the present invention;
FIGURE 2 is a functional block diagram detailing the data processing modules illustrated in Figure 1;
FIGURE 3 is a diagram illustrating the image viewing capabilities of the present invention; and
FIGURE 4 is a diagram depicting the image processing algorithm according to a preferred embodiment of the present invention.
In the following description, like reference characters designate like or corresponding parts throughout the several views of the drawings.
DESCRIPTION OF PREFERRED EMBODIMENT
Referring now to Figure 1, there is shown a functional block diagram of a synthetic ball turret imaging system 100 according to a preferred embodiment of the present invention, optimised for use with a UAV. As would be apparent to those skilled in the art, the present invention may be generally applied to those situations where there is an imaging requirement for an object or platform having a variable orientation.
Imaging system 100 includes camera system module (CSM) 110, frame capture module (FCM) 120, orientation sensing module (OSM) 130, output frame module (OFM) 140, remote communications module (RCM) 150 and imaging control module (ICM) 160. A remote ground station 200 is used to control and view information from imaging system 100.
CSM 110 includes six high resolution cameras each having individual fields of view 101 to 106 which when arranged as two rows of three cameras cover an overall Field Of View (FOV) of approximately 90° x 45°. Whilst in this embodiment six cameras have been used, clearly the number and arrangement of cameras can be adapted to the particular viewing circumstances as required. For example, where an overall FOV of 360° x 360° is envisaged, the number and arrangement of cameras required will depend on the individual FOV of each camera being employed in the system. Each camera has a maximum resolution of 3.2 Megapixels corresponding to an image size of 2048 x 1536 pixels and is capable of outputting these images at standard video rates of 25 frames per second at this resolution and at even higher rates for lower resolutions.
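By way of illustration only, the following sketch shows how the per-camera figures given above combine into an overall mosaic size and an approximate per-camera FOV; the variable names and the assumption of zero overlap between adjacent cameras are illustrative and do not form part of the described embodiment.

```python
# Illustrative sketch only: combined mosaic size and approximate per-camera FOV
# for a 3 x 2 camera array covering roughly 90 deg x 45 deg.  Zero overlap
# between adjacent cameras is assumed purely for simplicity.
CAM_COLS, CAM_ROWS = 3, 2                  # two rows of three cameras
CAM_W, CAM_H = 2048, 1536                  # pixels per camera (approx. 3.2 MP)
TOTAL_FOV_AZ, TOTAL_FOV_EL = 90.0, 45.0    # overall coverage in degrees

mosaic_w = CAM_COLS * CAM_W                # combined mosaic width in pixels
mosaic_h = CAM_ROWS * CAM_H                # combined mosaic height in pixels
cam_fov_az = TOTAL_FOV_AZ / CAM_COLS       # approximate per-camera azimuth FOV
cam_fov_el = TOTAL_FOV_EL / CAM_ROWS       # approximate per-camera elevation FOV

print(f"mosaic: {mosaic_w} x {mosaic_h} pixels")
print(f"per-camera FOV: {cam_fov_az:.1f} deg x {cam_fov_el:.1f} deg (no overlap assumed)")
```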
As is well known to those skilled in the art, the capabilities of digital cameras are constantly improving in terms of their pixel resolution and output rates, and as would also be appreciated by those skilled in the art, the present invention is not to be limited to the camera type described in this embodiment but is equally applicable to cameras having much higher resolutions and output rates.
CSM 110 has in this preferred embodiment been optimised for detecting information in the visible range of wavelengths. However, the present invention is also applicable to any electromagnetic imaging device or image capture means which produces data in pixellated form. This includes, but is not limited to, thermal imaging cameras, X-ray cameras and other imaging systems. In addition, the camera subsystem may include standard analogue cameras in combination with a frame
grabber device. The cameras may be mounted at different locations on the vehicle, such as for example the fore and aft positions of a UAV, with the requirement that the individual viewing regions of each of the cameras are located to provide image data that substantially corresponds to the overall FOV being covered.
As best seen in Figure 2, FCM 120 includes sufficient framegrabber 122 capacity and associated RAM 124 to store the pixellated frame data being generated in real time by individual cameras 101 to 106 associated with CSM 110. FCM 120 also generates an individually timed frame synchronisation signal to control acquisition of a composite image through timing signal generator 121 which is programmable 161A from master CPU 161 located in ICM 160. The frame synchronisation signal is generated relative to a master reference signal to facilitate the synchronisation of frames and orientation data from multiple sources. All image data is subsequently stored in data storage device 123, which in this preferred embodiment includes a plurality of high speed large capacity SCSI hard disks, along with both the orientation and timing information. This provides the capability of downloading 161B and analysing post-mission stabilised video, thereby allowing features or regions of interest different from those first viewed in real time during the mission to be selected for detailed analysis.
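A minimal sketch of one possible record layout for this time-stamped storage is given below; the field and function names are assumptions introduced only for illustration and are not taken from the embodiment.

```python
# Minimal sketch, assuming a simple per-frame record: each captured frame is
# stored together with its synchronisation timestamp and the vehicle
# orientation valid at capture time, so that stabilised video can be
# reconstructed after the mission.  All field names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class FrameRecord:
    camera_id: int      # which of cameras 101 to 106 produced the frame
    frame_time: float   # seconds relative to the master reference signal
    yaw: float          # vehicle orientation at capture time, degrees
    pitch: float
    roll: float
    pixels: bytes       # raw pixellated frame data

def store(record: FrameRecord, mission_log: list) -> None:
    """Append a time-stamped frame record to the mission log, standing in for
    the SCSI disk array described above."""
    mission_log.append(record)

log: list = []
store(FrameRecord(101, 0.040, yaw=1.2, pitch=-0.4, roll=0.1, pixels=b""), log)
print(len(log))
```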
ICM 160 receives information from FCM 120, RCM 150, and OSM 130 and provides image data equivalent to the ROI being viewed and low resolution full FOV information to OFM 140. Referring again to Figure 2, which depicts in detail the information flow between FCM 120 and ICM 160, ICM 160 includes master CPU 161 which processes incoming orientation information from OSM 130 and geospatial FOV information and ROI selection information received from RCM 150, which in turn has been relayed from ground station 200. Geospatial FOV information received from RCM 150 is limited to the bounds of total coverage provided by camera array 101 to 106 associated with CSM 110. Master CPU 161 is also responsible for mathematically compensating and computing the positions of those
pixels in the FCM 120 image RAM 124 that form the required ROI after adjustment for the inertial displacement of imaging system 100. These pixels may then be further subsampled or averaged depending on any bandwidth limitations in imaging system 100.
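The following sketch, offered only as an illustration of the addressing and subsampling just described, selects a block of pixels from a stored mosaic after applying an orientation-derived offset and then subsamples it; the function, its parameters and the clamping behaviour are assumptions rather than the embodiment itself.

```python
# Illustrative sketch: select the block of pixels in the stored mosaic that
# forms the requested ROI after an orientation-derived offset, then subsample
# it to respect a bandwidth limit.  Names, clamping and the subsampling step
# are assumptions for illustration only.
import numpy as np

def extract_roi(mosaic: np.ndarray, centre_row: int, centre_col: int,
                roi_h: int, roi_w: int, d_row: int, d_col: int,
                step: int = 1) -> np.ndarray:
    """Return the ROI centred at (centre_row, centre_col), shifted by the
    compensation offset (d_row, d_col) and subsampled by 'step'."""
    r0 = centre_row + d_row - roi_h // 2
    c0 = centre_col + d_col - roi_w // 2
    r0 = max(0, min(r0, mosaic.shape[0] - roi_h))   # clamp to the mosaic bounds
    c0 = max(0, min(c0, mosaic.shape[1] - roi_w))
    return mosaic[r0:r0 + roi_h:step, c0:c0 + roi_w:step]

mosaic = np.zeros((3072, 6144), dtype=np.uint8)      # 2 x 3 array of 1536 x 2048
roi = extract_roi(mosaic, 1500, 3000, 900, 1200, d_row=12, d_col=-7, step=2)
print(roi.shape)   # (450, 600)
```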
Stored frame data in the FCM 120 image RAM 124 is randomly addressable by master CPU 161 to permit discrete access 161C to pixels required in the output to achieve real time adjustment for orientation, pixel sampling and data transfer to OFM 140. In those circumstances where real time system performance is compromised by the burden of processing data from multiple cameras in CSM 110, ICM 160 is able to multiplex data from individual cameras at discrete rates due to each frame incorporating time synchronisation information. Additionally, individual timing signals may be sent to different cameras 101 to 106 via timing signal generator 121 to change individual acquisition rates as required.
Referring back to Figure 1, OSM 130 provides absolute vehicle orientation information in the form of yaw, pitch and roll data to ICM 160. Additionally, angular rate information may be sent to ICM 160 for interpolation purposes. Typically for a UAV, an Inertial Measurement Unit (IMU) is used. An IMU directly measures angular and linear accelerations which are then further processed to calculate orientation and translation information according to a specific reference frame. This information is then provided at rates equal to or greater than the camera frame rate to ICM 160. As would be appreciated by those skilled in the art, other orientation measurement means are contemplated to be within the scope of the invention. Additionally, the system may be self-contained such that the orientation measurement means is fixedly mounted with respect to the plurality of cameras, with both of these systems then independently movably mounted to the vehicle and hence capable of variable orientation with respect to the vehicle.
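As a sketch of the interpolation suggested above, orientation samples could be brought to a frame timestamp with a simple first-order model; the model, units and all names are assumptions and do not describe any particular IMU.

```python
# Minimal sketch, assuming a first-order model: the most recent IMU orientation
# sample is extrapolated to a camera frame timestamp using the reported angular
# rates.  Units, names and the model itself are illustrative assumptions.

def interpolate_orientation(sample_time, yaw, pitch, roll,
                            yaw_rate, pitch_rate, roll_rate, frame_time):
    """Extrapolate (yaw, pitch, roll) in degrees from the IMU sample time to
    the camera frame time, using angular rates in degrees per second."""
    dt = frame_time - sample_time
    return (yaw + yaw_rate * dt,
            pitch + pitch_rate * dt,
            roll + roll_rate * dt)

# IMU sample taken 8 ms before the frame while the vehicle rolls at 5 deg/s
print(interpolate_orientation(0.032, 1.0, -0.5, 2.0, 0.0, 0.0, 5.0, 0.040))
```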
OFM 140 receives compensated image information from ICM 160 and relays this information to ground station 200 via RCM 150, which incorporates a radio or other suitable telecommunications link to ground station 200. The image information consists of a video signal of stabilised image data of approximately 500 x 500 pixel resolution delivered at approximately 25 frames per second, and a further low resolution 640 x 480 image of the full field of view which is updated at a lower rate according to mission requirements. Thus the bandwidth required for RCM 150 is comparable to that required for standard analogue television signals.
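The following arithmetic, given purely as a rough illustration, compares the pixel rate of the two downlinked streams with that of a standard-definition television raster; the once-per-second overview update rate and the comparison raster are assumptions.

```python
# Rough illustrative arithmetic only: pixel rate of the two streams relayed via
# RCM 150 compared with a standard-definition 25 fps television raster.  The
# once-per-second overview update and the SD raster figure are assumptions.
roi_pixels_per_s = 500 * 500 * 25       # stabilised ROI video stream
fov_pixels_per_s = 640 * 480 * 1        # full-FOV overview, assumed ~1 update/s
sd_tv_pixels_per_s = 720 * 576 * 25     # indicative SD television raster

total = roi_pixels_per_s + fov_pixels_per_s
print(f"downlink ~{total / 1e6:.1f} Mpixel/s "
      f"vs ~{sd_tv_pixels_per_s / 1e6:.1f} Mpixel/s for SD television")
```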
Ground station 200 includes user input device 210 which allows an operator of imaging system 100 to select the ROI image data portion for viewing from the total FOV image data provided by the system. In addition, it allows the operator to zoom in on the ROI to inspect this region in greater detail. In practice user input device 210 will consist of software implemented on a ground station computer whereby the operator will be presented with a first low resolution image (approximately 640 x 480 pixels) on computer display 220 corresponding to the entire FOV of the imaging system 100, with an indication of the chosen ROI displayed on the low resolution image. On a separate video monitor 230 a composite video picture is displayed corresponding to the ROI shown on the computer display. The operator of the ground station is able to select, resize and change the location of the ROI on the computer display 220 and this is reflected in the composite video display 230.
Referring now to Figure 3 which illustrates a preferred embodiment of the present invention in operation, an operator at ground station 200 selects 300 a first ROI image data portion 330 on computer display 220 from the entire FOV of imaging system 100. As depicted, each single camera view corresponds to a maximum pixel resolution of approximately 2000 x 1500 pixels. The six cameras (101 to 106) thereby form a combined potential image size of 6000 x 3000 pixels corresponding to the total FOV. In practice this will be reduced somewhat due to overlapping of
individual viewing regions of the cameras. It is also likely that the displayed FOV will be somewhat smaller than the total FOV of the camera array to allow for compensation of any edge effects.
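A sketch of how a rectangle drawn on the low resolution overview might be mapped back to full-resolution mosaic coordinates is given below; the uniform scaling, which ignores camera overlap, and all names are assumptions.

```python
# Illustrative sketch: map a ROI rectangle drawn on the 640 x 480 overview
# display back to pixel coordinates in the full 6000 x 3000 camera mosaic.
# A uniform scale with no allowance for camera overlap is assumed.
OVERVIEW_W, OVERVIEW_H = 640, 480
MOSAIC_W, MOSAIC_H = 6000, 3000

def overview_to_mosaic(x, y, w, h):
    """Scale an overview-display rectangle (x, y, w, h) to mosaic pixels."""
    sx = MOSAIC_W / OVERVIEW_W
    sy = MOSAIC_H / OVERVIEW_H
    return (round(x * sx), round(y * sy), round(w * sx), round(h * sy))

# Operator drags a 128 x 144 box starting at (200, 150) on the overview
print(overview_to_mosaic(200, 150, 128, 144))   # -> (1875, 938, 1200, 900)
```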
Illustrated in Figure 4 is a functional block view depicting the image processing algorithm according to a preferred embodiment of the present invention. Vehicle orientation vector Θ is provided at a given update rate by the IMU which forms an integral part of OSM 130. The image processing algorithm calculates the change in orientation value ΔΘ 402 from the current measured orientation of the UAV Θcurrent 400 and stored value Θlast 401 corresponding to the last measured orientation of the UAV. As image data is measured and stored as a two dimensional pixel array, with relative pixel location essentially corresponding to a given viewing direction, knowing the change in orientation ΔΘ 402 of the UAV in real time allows the equivalent two dimensional pixel location offset array to be calculated 403 and subtracted from the original image array in real time, thereby providing compensated or stabilised image data.
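A minimal sketch of this compensation step is given below, under a small-angle approximation in which the yaw and pitch change between orientation samples is converted into a whole-pixel shift of the image array (roll is ignored for brevity); the linear degrees-to-pixels scale, the wrap-around behaviour of the shift and all names are assumptions rather than the algorithm of the embodiment.

```python
# Minimal sketch of the compensation step of Figure 4 under a small-angle
# approximation: the yaw/pitch change between orientation samples is converted
# into a whole-pixel shift of the image array.  The linear degrees-to-pixels
# scale and all names are illustrative assumptions; roll is ignored.
import numpy as np

DEG_PER_PIXEL_AZ = 90.0 / 6000.0    # overall azimuth FOV over mosaic width
DEG_PER_PIXEL_EL = 45.0 / 3000.0    # overall elevation FOV over mosaic height

def compensate(frame: np.ndarray, theta_current, theta_last) -> np.ndarray:
    """Shift 'frame' to cancel the yaw/pitch change between two orientation
    samples, each given as (yaw_deg, pitch_deg)."""
    d_yaw = theta_current[0] - theta_last[0]       # delta-theta, azimuth
    d_pitch = theta_current[1] - theta_last[1]     # delta-theta, elevation
    shift_cols = int(round(d_yaw / DEG_PER_PIXEL_AZ))
    shift_rows = int(round(d_pitch / DEG_PER_PIXEL_EL))
    # np.roll stands in for re-addressing the frame RAM; edges wrap here,
    # whereas a real system would draw on the margin of the wider FOV.
    return np.roll(frame, shift=(shift_rows, shift_cols), axis=(0, 1))

frame = np.arange(12, dtype=np.uint8).reshape(3, 4)
print(compensate(frame, theta_current=(0.030, 0.0), theta_last=(0.0, 0.0)))
```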
For a UAV, the objects in the field of view are sufficiently far from the relevant camera that the linear translation between each measured frame is so small as to be essentially undetectable, and it is therefore ignored for data processing purposes. However, this effect can be compensated for if required, as general linear motion will also cause a given camera pixel to view a different direction with time, which again corresponds to a pixel location offset effect. In one example system, a displayed FOV could be dynamically changing with linear motion of the vehicle in question or alternatively compensated to appear static, so that the image displayed is compensated for both the motion and orientation of the vehicle. Clearly, the degree to which this can be accomplished will be determined by the number of cameras employed in imaging system 100.
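The following sketch illustrates only the geometry behind this remark: at long stand-off ranges the per-frame pixel offset caused by linear motion is sub-pixel and can be ignored, while it grows for closer features. The simple geometry, sample figures and names are assumptions.

```python
# Illustrative sketch of the translation effect noted above: the apparent
# per-frame pixel shift of a feature caused by the vehicle's own linear motion.
# The simple geometry, sample figures and names are assumptions only.
import math

def translation_pixel_offset(speed_mps, range_m, dt_s, deg_per_pixel):
    """Pixels a feature appears to move in one frame interval because of the
    vehicle's motion relative to it (small-angle approximation)."""
    angular_shift_deg = math.degrees(speed_mps * dt_s / range_m)
    return angular_shift_deg / deg_per_pixel

# 30 m/s vehicle, feature 10 km away, 40 ms frame interval, 0.015 deg per pixel
print(f"{translation_pixel_offset(30.0, 10000.0, 0.04, 0.015):.2f} px per frame")
```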
Referring back to Figure 3, first selected ROI image data portion 330 corresponds approximately to a region of 1200 x 900 pixels as viewed by the six cameras. As composite video feed ROI 230 is displayed at 500 x 500, the image shown will be a pixel averaged version of the original 1200 x 900 image. As would be apparent to those skilled in the art, the exact details of the averaging method can be varied according to the requirements of the imaging system. ROI 330 can then be zoomed 310 to inspect a feature in more detail, resulting in a new zoomed ROI 340 which corresponds to a viewing area of 500 x 500 pixels as viewed by the six cameras. As the corresponding composite video feed is displayed at this resolution, zoomed ROI 340 corresponds to the maximum resolution of the imaging system. ROI 340 can then be repositioned 320 to shifted ROI 350 to explore other regions at this high resolution.
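One possible averaging scheme, offered only as a sketch of the kind of pixel averaging mentioned above while the description leaves the exact method open, is a box average over unequal integer blocks; the function and its details are assumptions.

```python
# Illustrative sketch of one possible averaging scheme: a box average over
# unequal integer blocks, reducing a 1200 x 900 ROI to the 500 x 500 composite
# video resolution.  The method and names are assumptions only.
import numpy as np

def box_average(img: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
    """Average-resample 'img' (H x W) down to (out_h x out_w)."""
    h, w = img.shape
    row_edges = (np.arange(out_h) * h) // out_h      # start row of each block
    col_edges = (np.arange(out_w) * w) // out_w      # start column of each block
    sums = np.add.reduceat(img.astype(np.float64), row_edges, axis=0)
    sums = np.add.reduceat(sums, col_edges, axis=1)
    row_counts = np.diff(np.append(row_edges, h))    # rows per block
    col_counts = np.diff(np.append(col_edges, w))    # columns per block
    return sums / np.outer(row_counts, col_counts)

roi = np.random.randint(0, 256, size=(900, 1200))
print(box_average(roi, 500, 500).shape)   # (500, 500)
```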
As it is likely that any given ROI will only be viewed by a subset of the six cameras, this subset can be determined and, as described earlier, high rate processing and image compensation performed only with respect to the data captured from this subset, whilst the remaining cameras are sampled or driven at a lower rate. This will greatly reduce the processing burden.
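As a sketch of how this subset might be determined, the following code tests which camera tiles of an idealised 3 x 2 layout intersect the current ROI rectangle and assigns them a higher acquisition rate; the tile layout, the rates and all names are assumptions.

```python
# Illustrative sketch: determine which of the six camera tiles intersect the
# current ROI so that only that subset is driven at the full frame rate.
# The idealised tile layout, the rates and all names are assumptions.
CAM_W, CAM_H = 2000, 1500          # approximate pixels per camera tile
LAYOUT = {101: (0, 0), 102: (0, 1), 103: (0, 2),   # camera id -> (row, column)
          104: (1, 0), 105: (1, 1), 106: (1, 2)}

def cameras_for_roi(x, y, w, h):
    """Return ids of cameras whose tile overlaps the ROI (x, y, w, h),
    expressed in mosaic pixel coordinates."""
    hit = []
    for cam, (row, col) in LAYOUT.items():
        tx, ty = col * CAM_W, row * CAM_H
        if x < tx + CAM_W and x + w > tx and y < ty + CAM_H and y + h > ty:
            hit.append(cam)
    return sorted(hit)

active = cameras_for_roi(1875, 938, 1200, 900)
rates = {cam: (25 if cam in active else 5) for cam in LAYOUT}   # frames per second
print(active)    # [101, 102, 104, 105]
print(rates)
```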
Also, as described previously, image data from the individual cameras, plus associated time-stamped orientation and position information, can be captured and stored on board the UAV. This allows for post-processing of the data in a similar manner to that of the real-time inspection.
Although a preferred embodiment of the method and system of the present invention has been described in the foregoing detailed description, it will be understood that the invention is not limited to the embodiment disclosed, but is capable of numerous rearrangements, modifications and substitutions without departing from the scope of the invention as set forth and defined by the following claims.