
WO2005048605A1 - Synthetic electronic imaging system - Google Patents


Info

Publication number: WO2005048605A1 (application PCT/AU2004/001553)
Authority: WIPO (PCT)
Prior art keywords: image data, orientation, providing, compensated image, view
Other languages: French (fr)
Inventors: Michael A. Lucas, Kym Ide, Brian Jarvis
Original assignee: The Commonwealth Of Australia
Priority claimed from: AU2003906274A
Application filed by: The Commonwealth Of Australia

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 Television systems
    • H04N 7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N 7/181 Closed-circuit television [CCTV] systems for receiving images from a plurality of remote sources

Definitions

  • An operator at ground station 200 selects 300 a first ROI image data portion 330 on computer display 220 from the entire FOV of imaging system 100.
  • Each single camera view corresponds to a maximum pixel resolution of approximately 2000 x 1500 pixels.
  • The six cameras (101 to 106) thereby form a combined potential image size of 6000 x 3000 pixels corresponding to the total FOV. In practice this will be reduced somewhat due to overlapping of the individual viewing regions of the cameras. It is also likely that the FOV displayed will be somewhat smaller than the total FOV of the camera array to allow for compensation of any edge effects.
  • Illustrated in Figure 4 is a functional block view depicting the image processing algorithm according to a preferred embodiment of the present invention.
  • Vehicle orientation vector θ is provided at a given update rate by the IMU which forms an integral part of OSM 130.
  • The image processing algorithm calculates the change in orientation value Δθ 402 from the current measured orientation of the UAV θcurrent 400 and the stored value θlast 401 corresponding to the last measured orientation of the UAV.
  • As image data is measured and stored as a two dimensional pixel array, with relative pixel location essentially corresponding to a given viewing direction, knowing the change in orientation Δθ 402 of the UAV in real time allows the equivalent two dimensional pixel location offset array to be calculated 403 and subtracted from the original image array in real time, thereby providing compensated or stabilised image data.
  • For a UAV, the objects in the field of view are sufficiently far from the relevant camera that the linear translation between each measured frame is so small as to be essentially undetectable and is therefore ignored for data processing purposes.
  • However, this effect can be compensated for if required, as general linear motion will also cause a given camera pixel to view a different direction over time, which again corresponds to a pixel location offset effect.
  • Thus a displayed FOV could be left to change dynamically with the linear motion of the vehicle in question, or alternatively compensated to appear static so that the image displayed is compensated for both the motion and the orientation of the vehicle.
  • The degree to which this can be accomplished will be determined by the number of cameras employed in imaging system 100.
  • First selected ROI image data portion 330 corresponds approximately to a region of 1200 x 900 pixels as viewed by the six cameras.
  • As ROI 330 is displayed at 500 x 500 on video monitor 230, the image shown will be a pixel averaged version of the original 1200 x 900 image.
  • The exact details of the averaging method can be varied according to the requirements of the imaging system.
  • ROI 330 can then be zoomed 310 to inspect a feature in more detail, resulting in a new zoomed ROI 340 which corresponds to a viewing area of 500 x 500 pixels as viewed by the six cameras.
  • Zoomed ROI 340 therefore corresponds to the maximum resolution of the imaging system.
  • ROI 340 can then be repositioned 320 to shifted ROI 350 to explore other regions at this high resolution.
  • Where only a subset of the cameras covers the current ROI, this subset can be determined and, as described earlier, high rate processing and image compensation performed only with respect to the data captured from this subset, whilst the remaining cameras are sampled or driven at a lower rate. This will greatly reduce the processing burden.
  • Alternatively, image data from the individual cameras, plus associated time stamped orientation and position information, can be captured and stored on board the UAV. This allows for post-processing of the data in a similar manner to that of the real-time inspection.
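The orientation compensation described in the bullets above can be sketched in a few lines of Python. The function name, the single pixels-per-degree scale factor, and the use of a plain array shift (ignoring roll) are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

def compensate_frame(frame, d_yaw_deg, d_pitch_deg, px_per_deg):
    """Shift a stored frame by the pixel offset equivalent to the change
    in vehicle orientation (small-angle sketch; roll is ignored)."""
    dx = int(round(d_yaw_deg * px_per_deg))    # yaw maps to a horizontal offset
    dy = int(round(d_pitch_deg * px_per_deg))  # pitch maps to a vertical offset
    # np.roll stands in for re-addressing pixels in image RAM
    return np.roll(frame, shift=(-dy, -dx), axis=(0, 1))
```

In a real system the offsets would be applied when addressing image RAM rather than by copying the array, but the arithmetic is the same.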

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Studio Devices (AREA)

Abstract

A system and method for providing compensated image data corresponding to a region of interest within a field of view of an object having a variable orientation is described. The system includes a plurality of image capture means located on the object for providing image data corresponding to the field of view, and orientation measurement means for measuring an orientation of the object. Selection means are also provided for selecting an image data portion of the image data corresponding to the region of interest, together with compensating means for compensating the image data portion for the orientation of the object to provide the compensated image data.

Description

SYNTHETIC ELECTRONIC IMAGING SYSTEM
FIELD OF THE INVENTION
The present invention relates to imaging systems for moving vehicles and in particular to ball turret imaging devices.
BACKGROUND OF THE INVENTION
A conventional ball turret imaging system consists of a camera having mechanised optical zoom capability mounted on an inertially stabilised platform. This platform is also capable of being adjusted to a predetermined elevation and azimuthal direction. These systems have been employed on a wide variety of vehicles ranging from airborne vehicles such as aircraft to land vehicles such as tanks, armoured personnel carriers and the like.
The important features of a conventional ball turret imaging system include the ability to select a direction of view and maintain that direction of view whilst the orientation of the vehicle is changing. Typically a Region of Interest (ROI) is chosen by an operator of the system corresponding to a visual feature which is either worthy of further inspection or may indicate a target. Another important capability is that once the camera is locked onto a given ROI further details in this region can be examined in more detail by employing the zoom capability of the camera. Clearly these systems have many applications in the fields of surveillance, navigation and targeting.
One increasingly important application of conventional ball turret imaging systems is their use in an Unmanned Aerial Vehicle (UAV). These are airborne vehicles specifically designed to fulfil a battlefield surveillance role. As these vehicles are unmanned they are either remotely piloted or follow a preset course. An operator in real time is able to use the ball turret imaging system to view features of interest at a variable resolution from a location remote to the UAV. As conventional ball turret systems are expensive they represent a significant proportion of the capital cost of an UAV. In addition they are unsuitable due to their weight for installation in smaller UAVs which are gaining in popularity due to their relatively low cost and higher manoeuvrability. One other disadvantage of conventional ball turret imaging systems is their reliance on an inertially stabilised platform which is a complicated mechanical system requiring ongoing calibration and maintenance.
It is an object of the present invention to provide an alternative imaging system capable of substantially reproducing the characteristics of current conventional ball turret imaging systems.
SUMMARY OF THE INVENTION
In a first aspect the present invention accordingly provides a system for providing compensated image data corresponding to a region of interest within a field of view of an object having a variable orientation, said system including: a plurality of image capture means located on said object for providing image data corresponding to said field of view; orientation measurement means for measuring an orientation of said object; selection means for selecting an image data portion of said image data corresponding to said region of interest; and compensating means for compensating said image data portion for said orientation of said object to provide said compensated image data.
Combining the increased field of view that can be observed by this system due to the plurality of image capture means with the ability to compensate any region of interest within this field of view for the orientation of the object provides an effective replacement for a ball turret imaging system which does not require an inertially stabilised platform or complicated optics.
Preferably, said system further includes control means to adjust a size of said region of interest thereby adjusting a number of pixels selected in said image data portion from said imaging data.
As items of interest within the field of view may vary in size this capability provides for the ability to vary the size of the corresponding region of interest being viewed by the system.
Preferably, said plurality of camera means provide image data at a first pixel resolution and said system further includes image resolution adjusting means to adjust the resolution of said compensated image data to a second pixel resolution.
This feature allows for the further manipulation, storage or display of the compensated image data at various resolutions as required.
Preferably, said system further includes display means to display said image at said second pixel resolution.
Displaying the compensated image data is an effective way to view and inspect the region of interest.
Preferably, said display means further includes a second display to display said image data corresponding to said field of view at a predetermined third pixel resolution.
This allows the whole field of view to be displayed providing further information about the location of the region of interest within the total field of view to an operator of the system.
Preferably, said system further includes remote communication means to allow an operator of said system to operate said control means and view said display means at a location remote from said object.
This provides a number of advantages as the object can then be used to controllably view a region of interest remote from the operator. Thus this system can be used in remote controlled devices such as UAVs and the like.
Preferably, each of said plurality of image capture means are located at separate positions from each other on said object.
By not being restrained to fixing the plurality of image capture means at a single location on the object the system can be deployed in a more flexible manner.
Preferably, said plurality of image capture means each provides image data at an adjustable frame rate.
This allows the image capture means which are viewing the region of interest to provide their data at a higher frame rate than those viewing the remaining region thereby reducing the total bandwidth required for the system.
Preferably, said system further includes data recording means to record said imaging data.
This allows for further off-line inspection of the high resolution imaging data at a later time.
In a second aspect the present invention accordingly provides a method for providing compensated image data corresponding to a region of interest within a field of view of an object having a variable orientation, said method including: capturing image data corresponding to said field of view from a plurality of image capture means located on said object; measuring an orientation of said object; selecting an image data portion of said image data corresponding to said region of interest; and compensating said image data portion for said orientation of said object to provide said compensated image data.
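The four claimed steps (capture, measure, select, compensate) might be sketched as follows; the array-based mosaic, the yaw/pitch inputs and the pixels-per-degree scale are hypothetical simplifications of the claim language, not the patent's implementation:

```python
import numpy as np

def stabilised_roi(frames, yaw_deg, pitch_deg, roi, px_per_deg=10.0):
    """Sketch of the claimed method: combine captured frames into one
    field of view, then return the ROI offset by the pixel equivalent
    of the measured orientation (hypothetical parameter names)."""
    mosaic = np.hstack(frames)              # step 1: image data for the field of view
    dx = int(round(yaw_deg * px_per_deg))   # step 2: measured orientation -> offset
    dy = int(round(pitch_deg * px_per_deg))
    r0, c0, h, w = roi                      # step 3: selected image data portion
    return mosaic[r0 + dy : r0 + dy + h,    # step 4: compensated image data
                  c0 + dx : c0 + dx + w]
```

Here two side-by-side cameras are modelled by horizontally stacking their frames; a real layout would tile both rows and columns.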
BRIEF DESCRIPTION OF THE DRAWINGS A preferred embodiment of the present invention will be discussed with reference to the accompanying drawings wherein:
FIGURE 1 is a functional block diagram of a synthetic ball turret imaging system according to a preferred embodiment of the present invention;
FIGURE 2 is a functional block diagram detailing the data processing modules illustrated in Figure 1;
FIGURE 3 is a diagram illustrating the image viewing capabilities of the present invention; and
FIGURE 4 is a diagram depicting the image processing algorithm according to a preferred embodiment of the present invention. In the following description, like reference characters designate like or corresponding parts throughout the several views of the drawings.
DESCRIPTION OF PREFERRED EMBODIMENT
Referring now to Figure 1, there is shown a functional block diagram of a synthetic ball turret imaging system 100 according to a preferred embodiment of the present invention optimised for use with an UAV. As would be apparent to those skilled in the art, the present invention may be generally applied to those situations where there is an imaging requirement for an object or platform having a variable orientation. Imaging system 100 includes camera system module (CSM) 110, frame capture module (FCM) 120, orientation sensing module (OSM) 130, output frame module (OFM) 140, remote communications module (RCM) 150 and imaging control module (ICM) 160. A remote ground station 200 is used to control and view information from imaging system 100.
CSM 110 includes six high resolution cameras each having individual fields of view 101 to 106 which when arranged as two rows of three cameras cover an overall Field Of View (FOV) of approximately 90° x 45°. Whilst in this embodiment six cameras have been used, clearly the number and arrangement of cameras can be adapted to the particular viewing circumstances as required. For example, where an overall FOV of 360° x 360° is envisaged, the number and arrangement of cameras required will depend on the individual FOV of each camera being employed in the system. Each camera has a maximum resolution of 3.2 Megapixels corresponding to an image size of 2048 x 1536 pixels and is capable of outputting these images at standard video rates of 25 frames per second at this resolution and at even higher rates for lower resolutions.
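As a rough sizing illustration (the document does not state each camera's individual FOV, so the figures below are assumptions), the camera grid needed for a target coverage follows from dividing the overall FOV by the per-camera FOV, ignoring the overlap noted later:

```python
import math

def cameras_needed(total_h_deg, total_v_deg, cam_h_deg, cam_v_deg):
    """Minimum (rows, cols) grid of cameras whose individual FOVs tile
    the target FOV; no overlap margin is included in this sketch."""
    cols = math.ceil(total_h_deg / cam_h_deg)
    rows = math.ceil(total_v_deg / cam_v_deg)
    return rows, cols
```

Per-camera FOVs of roughly 30° x 22.5° would reproduce the embodiment's two rows of three cameras for 90° x 45° coverage.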
As is well known to those skilled in the art the capabilities of digital cameras are constantly improving in terms of their pixel resolution and output rates and as would also be appreciated by those skilled in the art the present invention is not to be limited to the camera type described in this embodiment but is equally applicable to cameras having much higher resolutions and output rates.
CSM 110 has in this preferred embodiment been optimised for detecting information in the visible range of wavelengths. However, the present invention is also applicable to any electromagnetic imaging device or image capture means which produces data in pixellated form. This includes, but is not limited to, thermal imaging cameras, X-ray cameras and other imaging systems. In addition, the camera subsystem may include standard analogue cameras in combination with a frame grabber device. The cameras may be mounted at different locations on the vehicle, such as for example fore and aft positions of an UAV, with the requirement that the individual viewing regions of each of the cameras are located to provide image data that substantially corresponds to the overall FOV being covered.
As best seen in Figure 2, FCM 120 includes sufficient framegrabber 122 capacity and associated RAM 124 to store the pixellated frame data being generated in real time by individual cameras 101 to 106 associated with CSM 110. FCM 120 also generates an individually timed frame synchronisation signal to control acquisition of a composite image through timing signal generator 121 which is programmable 161A from master CPU 161 located in ICM 160. The frame synchronisation signal is generated relative to a master reference signal to facilitate the synchronisation of frames and orientation data from multiple sources. All image data is subsequently stored in data storage device 123, which in this preferred embodiment includes a plurality of high speed large capacity SCSI hard disks, along with both the orientation and timing information. This provides the capability of downloading 161B and analysing post mission stabilised video thereby providing the capability of selecting different features or regions of interest from those first viewed in real time during the mission for detailed analysis.
ICM 160 receives information from FCM 120, RCM 150, and OSM 130 and provides image data equivalent to the ROI being viewed and low resolution full FOV information to OFM 140. Referring again to Figure 2, which depicts in detail the information flow between FCM 120 and ICM 160, ICM 160 includes master CPU 161 which processes incoming orientation information from OSM 130 and geospatial FOV information and ROI selection information received from RCM 150 which in turn has been relayed from ground station 200. Geospatial FOV information is limited from RCM 150 to the bounds of total coverage provided by camera array 101 to 106 associated with CSM 110. Master CPU 161 is also responsible for mathematically compensating and computing the positions of those pixels in the FCM 120 image RAM 124 that form the required ROI after adjustment for the inertial displacement of imaging system 100. These pixels may then be further subsampled or averaged depending on any bandwidth limitations in imaging system 100.
Stored frame data in the FCM 120 image RAM 124 is randomly addressable by master CPU 161 to permit discrete access 161C to pixels required in the output to achieve real time adjustment for orientation, pixel sampling and data transfer to OFM 140. In those circumstances where real time system performance is compromised by the burden of processing data from multiple cameras in CSM 110, ICM 160 is able to multiplex data from individual cameras at discrete rates due to each frame incorporating time synchronisation information. Additionally, individual timing signals may be sent to different cameras 101 to 106 via timing signal generator 121 to change individual acquisition rates as required.
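Random addressing of ROI pixels in image RAM amounts to computing a flat address for each required (row, column) pair. A minimal sketch, assuming row-major storage and one addressable unit per pixel (both assumptions; the patent does not specify the memory layout):

```python
def roi_addresses(row0, col0, height, width, frame_width, base=0):
    """Yield flat image-RAM addresses for every pixel of a rectangular
    ROI, assuming row-major layout and one unit per pixel."""
    for r in range(row0, row0 + height):
        for c in range(col0, col0 + width):
            yield base + r * frame_width + c
```

Because each address is computed independently, only the pixels belonging to the compensated ROI need ever be fetched, which is the point of random addressability here.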
Referring back to Figure 1, OSM 130 provides absolute vehicle orientation information in the form of yaw, pitch and roll data to ICM 160. Additionally, angular rate information may be sent to ICM 160 for interpolation purposes. Typically for a UAV, an Inertial Measurement Unit (IMU) is used. An IMU directly measures angular and linear accelerations which are then further processed to calculate orientation and translation information according to a specific reference frame. This information is then provided at rates equal to or greater than the camera frame rate to ICM 160. As would be appreciated by those skilled in the art, other orientation measurement means are contemplated to be within the scope of the invention. Additionally, the system may be self contained such that the orientation measurement means is fixedly mounted with respect to the plurality of cameras with both of these systems then independently movably mounted to the vehicle and hence capable of variable orientation with respect to the vehicle. OFM 140 receives compensated image information from ICM 160 and relays this information to ground station 200 via RCM 150 which incorporates a radio or other suitable telecommunications link to ground station 200. The image information consists of a video signal of stabilised image data of approximately 500 x 500 pixel resolution delivered at approximately 25 frames per second and a further low resolution 640 x 480 image of the full field of view which is updated at a lower rate according to mission requirements. Thus the bandwidth required for RCM 150 is comparable to that required for standard analogue television signals.
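The bandwidth claim above can be checked with simple pixel-rate arithmetic. This sketch assumes (beyond what the specification states) a 1 Hz refresh for the low-resolution full-FOV image and a PAL-style 720 x 576 frame at 25 fps as the analogue television reference:

```python
# Rough pixel-rate check for the downlink sizing described above.
# Assumed values (not from the specification): 1 Hz full-FOV refresh,
# PAL-style 720 x 576 at 25 fps as the analogue TV reference.

roi_rate = 500 * 500 * 25      # stabilised ROI stream, pixels/s
fov_rate = 640 * 480 * 1       # low-rate full field-of-view image, pixels/s
total = roi_rate + fov_rate    # combined downlink pixel rate

tv_rate = 720 * 576 * 25       # standard-definition analogue reference

# total comes out below the ~10.4 Mpixel/s of the analogue reference,
# consistent with "comparable to standard analogue television signals".
print(total, tv_rate, total / tv_rate)
```

Under these assumptions the combined stream is around 6.6 Mpixels per second, comfortably within the capacity of a link sized for an analogue television signal.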
Ground station 200 includes user input device 210 which allows an operator of imaging system 100 to select the ROI image data portion for viewing from the total FOV image data provided by the system. In addition, it will allow the operator to further zoom in on the ROI to further inspect this region at higher detail. In practice user input device 210 will consist of software implemented on a ground station computer whereby the operator will be presented with a first low resolution image (approximately 640 x 480 pixels) on computer display 220 corresponding to the entire FOV of the imaging system 100 with an indication of the chosen ROI displayed on the low resolution image. On a separate video monitor 230 a composite video picture is displayed corresponding to the ROI shown on the computer display. The operator of the ground station is able to select, resize and change the location of the ROI on the computer display 220 and this is reflected in the composite video display 230.
Referring now to Figure 3, which illustrates a preferred embodiment of the present invention in operation, an operator at ground station 200 selects 300 a first ROI image data portion 330 on computer display 220 from the entire FOV of imaging system 100. As depicted, each single camera view corresponds to a maximum pixel resolution of approximately 2000 x 1500 pixels. The six cameras (101 to 106) thereby form a combined potential image size of 6000 x 3000 pixels corresponding to the total FOV. In practice this will be reduced somewhat due to overlapping of the individual cameras' viewing regions. Also it is likely that the FOV displayed will be somewhat smaller than the total FOV of the camera array to allow for compensation of any edge effects.
Illustrated in Figure 4 is a functional block view depicting the image processing algorithm according to a preferred embodiment of the present invention. Vehicle orientation vector Θ is provided at a given update rate by the IMU which forms an integral part of OSM 130. The image processing algorithm calculates change in orientation value ΔΘ 402 from the current measured orientation of the UAV Θcurrent 400 and stored value Θlast 401 corresponding to the last measured orientation of the UAV. As image data is measured and stored as a two-dimensional pixel array with relative pixel location essentially corresponding to a given viewing direction, by knowing the change in orientation ΔΘ 402 of the UAV in real time, the equivalent two-dimensional pixel location offset array can be calculated 403 and subtracted from the original image array in real time thereby providing compensated or stabilised image data.
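The per-frame compensation step described above can be sketched as follows. This is a minimal illustration, not the patented implementation: it assumes small angular changes so that yaw and pitch map linearly to pixel shifts through a hypothetical pixels-per-radian scale, and it omits roll (which would appear as an image rotation rather than a translation):

```python
# Minimal sketch of the ΔΘ-to-pixel-offset stabilisation step.
# Assumptions (not from the specification): small-angle linear mapping,
# hypothetical angular resolution, roll compensation omitted for brevity.

PIXELS_PER_RADIAN = 1900.0   # hypothetical per-camera angular resolution

def pixel_offset(theta_current, theta_last, scale=PIXELS_PER_RADIAN):
    """Convert the change in (yaw, pitch) between frames into a
    (dx, dy) pixel offset, i.e. the ΔΘ 402 -> offset 403 step."""
    d_yaw = theta_current[0] - theta_last[0]
    d_pitch = theta_current[1] - theta_last[1]
    return round(d_yaw * scale), round(d_pitch * scale)

def stabilised_origin(origin, theta_current, theta_last):
    """Shift the ROI's top-left read-out position so the displayed scene
    stays fixed while the vehicle rotates under it."""
    dx, dy = pixel_offset(theta_current, theta_last)
    return origin[0] - dx, origin[1] - dy
```

For example, a 1 mrad yaw change between frames shifts the read-out window by roughly two pixels at this assumed angular resolution, which is subtracted from the ROI origin before pixels are fetched from image RAM 124.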
For a UAV, the objects in the field of view are sufficiently far from the relevant camera so that the linear translation between each measured frame is so small as to be essentially undetectable and is therefore ignored for data processing purposes. However, this effect can be compensated if required as general linear motion will also cause a given camera pixel to view a different direction with time which again corresponds to a pixel location offset effect. In one example system, a displayed FOV could be dynamically changing with linear motion of the vehicle in question or alternatively compensated to appear static so that the image displayed is compensated for both the motion and orientation of the vehicle. Clearly, the degree to which this can be accomplished will be determined by the number of cameras employed in imaging system 100. Referring back to Figure 3, first selected ROI image data portion 330 corresponds approximately to a region of 1200 x 900 pixels as viewed by the six cameras. As composite video feed ROI 230 is displayed at 500 x 500 the image shown will be a pixel averaged version of the original 1200 x 900 image. As would be apparent to those skilled in the art, the exact details of the averaging method can be varied according to the requirements of the imaging system. ROI 330 can then be zoomed 310 to inspect a feature in more detail resulting in a new zoomed ROI 340 which corresponds to a viewing area of 500 x 500 pixels as viewed by the six cameras. As the corresponding composite video feed is displayed at this resolution zoomed ROI 340 corresponds to the maximum resolution of the imaging system. ROI 340 can then be repositioned 320 to shifted ROI 350 to explore other regions at this high resolution.
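The pixel-averaging step can be illustrated with a simple block mean. Note the ratios quoted above (1200 x 900 down to 500 x 500) are non-integer, so a practical system would resample or interpolate; an integer reduction factor is shown here purely for clarity, and the function name is illustrative rather than from the specification:

```python
# Sketch of the pixel-averaging method: reduce a high-resolution ROI to
# display resolution by averaging non-overlapping fx x fy blocks of source
# pixels. Integer factors only; real non-integer ratios need resampling.

def block_average(image, fx, fy):
    """Downsample a 2-D list of pixel values by averaging fx x fy blocks."""
    h, w = len(image), len(image[0])
    out = []
    for by in range(h // fy):
        row = []
        for bx in range(w // fx):
            total = 0
            for y in range(by * fy, (by + 1) * fy):
                for x in range(bx * fx, (bx + 1) * fx):
                    total += image[y][x]
            row.append(total / (fx * fy))
        out.append(row)
    return out
```

For instance, averaging a 4 x 4 image with 2 x 2 blocks yields a 2 x 2 image in which each output pixel is the mean of four source pixels, trading resolution for reduced downlink bandwidth.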
As it is likely that any given ROI will only be viewed by a selection of the six cameras, this subset can be determined and as described earlier high rate processing and image compensation only performed with respect to the data captured from this subset whilst the remaining cameras are sampled or driven at a lower rate. This will greatly reduce the processing burden.
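Determining which cameras actually see a given ROI reduces to a rectangle-intersection test if each camera's coverage is treated as a region of the stitched mosaic. The footprint layout below is a hypothetical 3 x 2 arrangement consistent with the 6000 x 3000 pixel combined image described earlier, not a layout taken from the specification:

```python
# Sketch of selecting the camera subset whose coverage intersects the ROI.
# Assumption (not from the specification): each camera covers a known
# axis-aligned rectangle of the 6000 x 3000 stitched mosaic, 3 cols x 2 rows.

CAMERA_FOOTPRINTS = {
    1: (0, 0, 2000, 1500),       2: (2000, 0, 4000, 1500),
    3: (4000, 0, 6000, 1500),    4: (0, 1500, 2000, 3000),
    5: (2000, 1500, 4000, 3000), 6: (4000, 1500, 6000, 3000),
}

def cameras_for_roi(roi):
    """Return ids of cameras whose footprint overlaps the ROI.
    Rectangles are (x0, y0, x1, y1) with exclusive upper bounds."""
    rx0, ry0, rx1, ry1 = roi
    hits = []
    for cam_id, (x0, y0, x1, y1) in CAMERA_FOOTPRINTS.items():
        if rx0 < x1 and rx1 > x0 and ry0 < y1 and ry1 > y0:
            hits.append(cam_id)
    return hits
```

An ROI straddling the centre of the mosaic, say (1500, 1000, 2700, 1900), intersects cameras 1, 2, 4 and 5; the remaining two cameras could then be sampled or driven at a lower rate as the passage describes, reducing the processing burden.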
Also as described previously, image data from the individual cameras plus associated time stamped orientation and position information can be captured and stored on board the UAV. This allows for post-processing of the data in a similar manner to that of the real-time inspection.
Although a preferred embodiment of the method and system of the present invention has been described in the foregoing detailed description, it will be understood that the invention is not limited to the embodiment disclosed, but is capable of numerous rearrangements, modifications and substitutions without departing from the scope of the invention as set forth and defined by the following claims.

Claims

THE CLAIMS DEFINING THE INVENTION ARE AS FOLLOWS:
1. A system for providing compensated image data corresponding to a region of interest within a field of view of an object having a variable orientation, said system including: a plurality of image capture means located on said object for providing image data corresponding to said field of view; orientation measurement means for measuring an orientation of said object; selection means for selecting an image data portion of said image data corresponding to said region of interest; and compensating means for compensating said image data portion for said orientation of said object to provide said compensated image data.
2. A system for providing compensated image data as claimed in claim 1, wherein said system further includes control means to adjust a size of said region of interest thereby adjusting a number of pixels selected in said image data portion from said image data.
3. A system for providing compensated image data as claimed in claim 1 or 2, wherein said plurality of image capture means provide image data at a first pixel resolution and said system further includes image resolution adjusting means to adjust the resolution of said compensated image data to a second pixel resolution.
4. A system for providing compensated image data as claimed in claim 3, wherein said system further includes display means to display said image at said second pixel resolution.
5. A system for providing compensated image data as claimed in claim 4, wherein said display means further includes a second display to display said image data corresponding to said field of view at a predetermined third pixel resolution.
6. A system for providing compensated image data as claimed in claim 4 or 5, wherein said system further includes remote communication means to allow an operator of said system to operate said control means and view said display means at a location remote from said object.
7. A system for providing compensated image data as claimed in any one of the preceding claims, wherein each of said plurality of image capture means are located at separate positions from each other on said object.
8. A system for providing compensated image data as claimed in any one of the preceding claims wherein said plurality of image capture means each provides image data at an adjustable frame rate for each camera.
9. A system for providing compensated image data as claimed in any one of the preceding claims wherein said system further includes data recording means to record said image data.
10. A method for providing compensated image data corresponding to a region of interest within a field of view of an object having a variable orientation, said method including: capturing image data corresponding to said field of view from a plurality of image capture means located on said object; measuring an orientation of said object; selecting an image data portion of said image data corresponding to said region of interest; and compensating said image data portion for said orientation of said object to provide said compensated image data.
11. A method for providing compensated image data as claimed in claim 10, wherein said method further includes the step of adjusting a size of said region of interest thereby adjusting a number of pixels selected in said image data portion from said image data.
12. A method for providing compensated image data as claimed in claim 10 or 11, wherein said image data is captured at a first pixel resolution and said method further includes the step of adjusting the image resolution of said compensated image data to a second pixel resolution.
13. A method for providing compensated image data as claimed in claim 12, wherein said method further includes the step of displaying said image at said second pixel resolution.
14. A method for providing compensated image data as claimed in claim 13, wherein said method further includes the step of displaying said image data corresponding to said field of view at a predetermined third pixel resolution.
15. A method for providing compensated image data as claimed in claim 13 or 14, wherein said steps of displaying and adjusting occur at a location remote from said object.
16. A method for providing compensated image data as claimed in any one of the claims 10 to 15 wherein said step of capturing image data further includes the step of adjusting the capture frame rate for each image capture means.
17. A method for providing compensated image data as claimed in any one of the claims 10 to 16 wherein said method further includes the step of recording said image data.
PCT/AU2004/001553 2003-11-14 2004-11-12 Synthetic electronic imaging system WO2005048605A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
AU2003906274A AU2003906274A0 (en) 2003-11-14 Synthetic electronic imaging system
AU2003906274 2003-11-14

Publications (1)

Publication Number Publication Date
WO2005048605A1 true WO2005048605A1 (en) 2005-05-26

Family

ID=34578148

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/AU2004/001553 WO2005048605A1 (en) 2003-11-14 2004-11-12 Synthetic electronic imaging system

Country Status (1)

Country Link
WO (1) WO2005048605A1 (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030048357A1 (en) * 2001-08-29 2003-03-13 Geovantage, Inc. Digital imaging system for airborne applications
WO2004068403A2 (en) * 2003-01-17 2004-08-12 The Insitu Group Cooperative nesting of mechanical and electronic stabilization for an airborne camera system

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2487909A1 (en) * 2011-02-10 2012-08-15 BAE Systems PLC Image capturing
WO2012107752A1 (en) * 2011-02-10 2012-08-16 Bae Systems Plc Image capturing
AU2012215185B2 (en) * 2011-02-10 2015-07-30 Bae Systems Plc Image capturing
US9561869B2 (en) 2011-02-10 2017-02-07 Bae Systems Plc Image capturing
US9571733B2 (en) 2011-02-10 2017-02-14 Bae Systems Plc Image capturing
CN103693205A (en) * 2013-12-30 2014-04-02 广东电网公司电力科学研究院 Pod stabilized platform control method based on backlash estimation and compensation
WO2017161198A1 (en) * 2016-03-17 2017-09-21 Flir Systems, Inc. Rotation-adaptive video analytics camera and method
US10735659B2 (en) 2016-03-17 2020-08-04 Flir Systems, Inc. Rotation-adaptive video analytics camera and method

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
122 Ep: pct application non-entry in european phase