US20100104185A1 - Methods and systems for the detection of the insertion, removal, and change of objects within a scene through the use of imagery - Google Patents
- Publication number
- US20100104185A1 (U.S. application Ser. No. 11/383,914)
- Authority
- US
- United States
- Prior art keywords
- image
- likelihood
- change
- region
- interest
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/277—Analysis of motion involving stochastic approaches, e.g. using Kalman filters
-
- F—MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
- F41—WEAPONS
- F41H—ARMOUR; ARMOURED TURRETS; ARMOURED OR ARMED VEHICLES; MEANS OF ATTACK OR DEFENCE, e.g. CAMOUFLAGE, IN GENERAL
- F41H11/00—Defence installations; Defence devices
- F41H11/12—Means for clearing land minefields; Systems specially adapted for detection of landmines
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/55—Depth or shape recovery from multiple images
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N1/00—Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
- H04N1/387—Composing, repositioning or otherwise geometrically modifying originals
- FIG. 1 is an image collection and analysis system in accordance with an embodiment of the present invention;
- FIG. 2 is a schematic representation of the image collection and analysis system of FIG. 1 ;
- FIG. 3 is a flowchart of a method of detecting the insertion, removal and change of an object of interest through two images containing a common area of interest in accordance with an embodiment of the present invention;
- FIG. 4 is a flowchart of a scene registration process in accordance with an embodiment of the present invention;
- FIG. 5 shows a sample pair of first and second images, their transformed counterparts, and an alignment of the transformed images typically produced by the scene registration process of FIG. 4 ;
- FIG. 6 shows the development of the General Pattern Change (GPC) likelihood in accordance with an embodiment of the invention;
- FIG. 7 shows the development of the GPC likelihoods over the common area between the two transformed and aligned images shown in FIG. 5 in accordance with an embodiment of the invention;
- FIG. 8 is a flowchart of the GPC likelihood development process shown in FIG. 7 in accordance with an embodiment of the invention;
- FIG. 9 is a flow chart of the region identification process for an embodiment where the object of interest size is known a priori;
- FIG. 10 is a flow chart of the region identification process for an embodiment where the object of interest size is not known a priori;
- FIG. 11 shows the development of the region partitioning likelihood in accordance with an embodiment of the present invention;
- FIG. 12 and FIG. 13 are flow charts of the region prioritization process for alternate embodiments in accordance with the present invention;
- FIG. 14 is a flow chart for an embodiment to assist a human operator in accordance with the present invention;
- FIG. 15 illustrates a computing device configured in accordance with an embodiment of the present invention; and
- FIG. 16 shows a variety of sensor platforms that may be used in systems in accordance with alternate embodiments of the invention.
- the present invention relates to systems and methods for detecting the insertion, removal, and change of objects of interest through the comparison of two or more images containing a common area of interest.
- Many specific details of certain embodiments of the invention are set forth in the following description and in FIGS. 1 through 16 to provide a thorough understanding of such embodiments.
- One skilled in the art, however, will understand that the present invention may have additional embodiments, or that the present invention may be practiced without several of the details described in the following description.
- embodiments of systems and methods for detecting the insertion, removal, and change of objects of interest through a comparison of two or more images containing a common area of interest in accordance with the present invention may identify and prioritize image regions within the images based on changes in feature content over a period of time in a manner which is consistent with the insertion, removal and change of an object of interest, such as an Improvised Explosive Device (IED), or for detecting new facilities, capabilities, movements, or strategic thrusts by hostile parties, or for various non-military applications.
- Such embodiments may advantageously detect relevant changes in feature content within images which have dissimilar sensor view points, sensor spectrums, scene composition, or period of time covered by the imagery.
- FIG. 1 is an image collection and analysis system 100 in accordance with an embodiment of the present invention.
- FIG. 2 is a schematic representation of the image collection and analysis system 100 of FIG. 1 .
- the system 100 includes an acquisition system 110 and an analysis system 120 .
- the acquisition system 110 includes a platform 112 having an image acquisition component 114 coupled to a transmitter 116 .
- the platform 112 is an aircraft, and more specifically an Unmanned Aerial Vehicle (UAV).
- the platform 112 may be any suitable stationary or moveable platform.
- the image acquisition component 114 may be any suitable type of image acquisition device, including, for example, visible wavelength sensors (e.g. photographic systems), infrared sensors, laser radar systems, radar systems, or any other suitable sensors or systems.
- the analysis system 120 includes a receiver 122 coupled to a computer system 124 .
- the computer system 124 is configured to perform a method of detecting changes between images in accordance with embodiments of the present invention, as described more fully below. A particular embodiment of a suitable computer system 124 is described more fully below with reference to FIG. 15 .
- the acquisition system 110 is positioned such that the image acquisition component 114 may acquire one or more images of an area of interest 102 .
- the one or more images may be transmitted by the transmitter 116 to the receiver 122 of the analysis system 120 for processing by the computer system 124 .
- images of the area of interest 102 may be provided by the acquisition system 110 in a real-time manner to the analysis system 120 .
- the transmitter 116 and receiver 122 may be eliminated, and the images acquired by the image acquisition component 114 may be communicated to the computer system 124 either directly via a direct signal link, or may be stored within a suitable storage media (e.g. RAM, ROM, portable storage media, etc.) by the image acquisition component 114 and uploaded to the computer system 124 at a later time.
- FIG. 16 shows a variety of sensor platforms that may be used in place of the UAV 112 in image collection and analysis systems in accordance with alternate embodiments of the invention. More specifically, in alternate embodiments, sensor platforms may include satellites or other space-based platforms 602 , manned aircraft 604 , land-based vehicles 608 , or any other suitable platform embodiments.
- FIG. 3 is a flowchart of a method 300 of detecting the insertion, removal, and change of objects of interest through the use of two or more images containing a common area of interest in accordance with an embodiment of the present invention.
- exemplary methods and processes are illustrated as a collection of blocks in a logical flow graph, which represent a sequence of operations that can be implemented in hardware, software, or a combination thereof.
- the blocks represent computer-executable instructions that, when executed by one or more processors, perform the recited operations.
- computer-executable instructions include routines, programs, objects, components, data structures, and the like that perform particular functions or implement particular abstract data types.
- the method 300 includes acquiring at least two images at a block 302 .
- One or more of the images may be stored images that have been acquired in the past and are retrieved from a suitable storage media, or may be images that are acquired in a real-time manner.
- the images may be acquired using similar or dissimilar (i.e. cross-spectral) sensor types, including visible wavelength sensors (e.g. photographic systems), infrared sensors, laser radar systems, radar systems, or any other suitable sensors or systems.
- a scene registration process is performed.
- the scene registration process 304 aligns all of the pixels representing a physical area which is common to the first and second images.
- the scene registration process (block 304 ) comprises some or all of the acts described, for example, in U.S. Pat. Nos. 5,809,171, 5,890,808, 5,946,422, 5,982,930, 5,982,945 issued to Neff et al., which patents are incorporated herein by reference.
- the scene registration process 304 includes the acts shown in FIGS. 4 and 5 .
- the order in which the scene registration process 304 is described is not intended to be construed as a limitation, and any number of the described method blocks can be combined in any order to implement the method, or an alternate method.
- the image pixel values are partitioned into a set of labels at a block 1001 . This process may include one-to-one and many-to-one pixel value transformations such as linear rescaling, equalization, and feature extraction.
- the original or re-labeled images are transformed into a common reference frame and may produce both a forward and inverse transform which maps the pixel locations in the original image to those in the transformed image and vice versa.
- the common reference frame may be the original view point of either image or another advantageous view point all together.
- the image patterns of the transformed images are aligned, which may produce a mathematical transform, a set of transformed images, or both. This alignment accounts for all of the spatial effects due to translation, rotation, scale, and skew; any spectral artifacts such as shadowing and layover; and other distortions present within the transformed images that were not removed by the previous blocks, such as terrain elevation and object height errors.
- all of the transformations, transformed images, and alignment parameters are saved at a block 1004 for use in the feature content analysis process (block 310 ).
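The registration steps above (relabel pixel values, transform both images into a common reference frame with forward and inverse mappings, then align) can be sketched with a simple affine model. This is an illustrative sketch only; the 2x3 affine parameterization and the function names are assumptions, not the patented registration method.

```python
import numpy as np

def make_affine(scale=1.0, theta=0.0, tx=0.0, ty=0.0):
    """Build a 2x3 affine transform combining rotation, scale, and
    translation, one simple model of the spatial effects to remove."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[scale * c, -scale * s, tx],
                     [scale * s,  scale * c, ty]])

def forward_map(points, A):
    """Map Nx2 pixel locations from the original image into the
    common reference frame (the forward transform of block 1002)."""
    pts = np.hstack([points, np.ones((len(points), 1))])
    return pts @ A.T

def inverse_map(points, A):
    """Map locations in the common reference frame back to the
    original image (the inverse transform)."""
    M = np.vstack([A, [0.0, 0.0, 1.0]])   # promote to 3x3
    pts = np.hstack([points, np.ones((len(points), 1))])
    return (pts @ np.linalg.inv(M).T)[:, :2]
```

A pixel mapped forward and then inverse-mapped returns to its original location, which is the round-trip property the saved transforms of block 1004 rely on.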
- the method 300 further includes a feature content analysis process at a block 310 .
- the feature content analysis process 310 indicates how the image features representing a common physical location have changed over time.
- the feature content analysis process 310 uses the mathematical transformations and/or the transformed images produced in FIG. 4 .
- the feature content analysis process 310 may use a General Pattern Change (GPC) likelihood algorithm, such as the GPC likelihood process 2014 schematically shown in FIG. 6 to determine the likelihood of change for every pixel in the first and second images.
- FIG. 6 shows a development of a General Pattern Change (GPC) likelihood in accordance with an embodiment of the invention. Equation 1 is an example of a GPC likelihood 2028 .
- the GPC likelihood process includes determining the number of occurrences where a pixel having value i in the common image overlap area of the sensed image overlaps a pixel having value j in the common image overlap area of the reference image, for all of the pixel values within the common image overlap area at a block 2020 .
- a number of pixels in the common image overlap area of the sensed image having value i is determined, and a number of pixels in the common image overlap area of the reference image having gray level j is determined at a block 2024 .
- a total number of pixels in the common image overlap is determined at a block 2026 .
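Blocks 2020 through 2026 amount to building a joint histogram of co-located pixel values over the common overlap, together with its marginals and total count. Equation 1 itself is not reproduced in this text; one statistic that can be formed from exactly these counts is the mutual information between the two images' pixel values, used below purely as a hedged stand-in for the GPC likelihood, not as the patent's actual formula.

```python
import numpy as np

def gpc_counts(sensed, reference, levels):
    """Count joint occurrences n_ij (pixel value i in the sensed image
    over pixel value j in the reference image), the marginal counts,
    and the total pixel count in the overlap (blocks 2020-2026)."""
    joint = np.zeros((levels, levels))
    for i, j in zip(sensed.ravel(), reference.ravel()):
        joint[i, j] += 1
    n_i = joint.sum(axis=1)      # occurrences of value i in sensed
    n_j = joint.sum(axis=0)      # occurrences of value j in reference
    total = joint.sum()          # total pixels in the overlap
    return joint, n_i, n_j, total

def change_likelihood(joint, n_i, n_j, total):
    """Mutual information built from the counts above; an assumed
    stand-in for Equation 1. Values near zero mean the two images'
    patterns are statistically unrelated in this neighborhood."""
    p_ij, p_i, p_j = joint / total, n_i / total, n_j / total
    mi = 0.0
    for a in range(joint.shape[0]):
        for b in range(joint.shape[1]):
            if p_ij[a, b] > 0:
                mi += p_ij[a, b] * np.log(p_ij[a, b] / (p_i[a] * p_j[b]))
    return mi
```

Identical images give the maximum score for their value distribution, while images whose values co-occur at chance give a score of zero, so a change statistic can be derived by inverting or thresholding this quantity.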
- the likelihood of change for every pixel in the first and second image is determined by calculating the GPC likelihood using the pixels within an object of interest sized polygon centered on each corresponding pixel in the first and second image.
- the likelihood of change for every pixel in the first and second image is determined by calculating the GPC likelihood using the pixels within a minimally sized polygon centered on each corresponding pixel in the first and second image.
- FIG. 7 schematically shows a GPC likelihood process 2016 where the polygon is either minimally sized or the size of an object of interest (block 2018 ) in accordance with alternate embodiments of the present invention.
- the object of interest sized polygon can be a simple rectangle.
- the GPC likelihood is calculated (block 2017 ) for a pixel location within the transformed and aligned versions of the first and second images using the pixels within a rectangle centered on the pixel location. This process is repeated for every pixel location within an area which is common to the first and second images to produce a set of GPC likelihoods (block 2019 ).
- the GPC likelihood process 2030 receives a data set from the scene registration process 304 (either the mathematical transformations and/or the transformed images produced in FIG. 4 ).
- the center of a minimally sized neighborhood polygon is placed at an offset location relative to the transformed and aligned imagery, one of the set of offset locations which encompass the common image overlap.
- the center of an object of interest sized neighborhood polygon is placed at an offset location relative to the transformed and aligned imagery, one of the set of offset locations which encompass the common image overlap.
- the image pixels from the transformed and aligned imagery that are within the polygon at the current offset are selected.
- the GPC likelihood is determined for the selected pixels.
- the offset and the GPC likelihood are stored.
- the next polygon offset is selected if any additional offsets remain in the set of offsets. Otherwise the process is completed and the set of GPC likelihoods are available for use.
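The per-offset loop of FIG. 8 (place the polygon, select the covered pixels from both transformed images, score them, store the result, advance to the next offset) can be sketched as a sliding window. Here a square window and a mean absolute difference stand in for the polygon and the GPC likelihood; both are assumptions made for illustration.

```python
import numpy as np

def sliding_window_likelihoods(img1, img2, win):
    """For each offset in the common overlap, center a win x win
    window, select the covered pixels from both images, and score
    them. Mean absolute difference is a placeholder for the GPC
    likelihood computation of block 2017."""
    h, w = img1.shape
    half = win // 2
    out = np.zeros((h, w))
    for r in range(half, h - half):
        for c in range(half, w - half):
            p1 = img1[r - half:r + half + 1, c - half:c + half + 1]
            p2 = img2[r - half:r + half + 1, c - half:c + half + 1]
            out[r, c] = np.abs(p1.astype(float) - p2.astype(float)).mean()
    return out
```

The result is one likelihood per pixel location, the "set of GPC likelihoods" consumed by the region identification process.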
- an image region identification process is performed which groups the GPC likelihoods into regions. More specifically, the region identification process 312 spatially partitions the set of GPC likelihoods, created by the feature content analysis process, into a set of variously sized regions where the region sizes are determined by the objects within the imagery.
- a region score may be determined for each location by applying a region scoring function to all of the GPC likelihoods within an object-sized polygon centered on each location.
- In an alternate embodiment, shown in FIG. 10 , a region score may be determined for each location by applying a scoring function to all of the GPC likelihoods within each polygon from a set of polygons with various shapes and sizes. In either embodiment, the resulting regions which overlap by more than a pre-defined amount can be removed by selecting those with the larger region score.
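The overlap-removal rule just described (when two regions overlap by more than a pre-defined amount, keep the one with the larger region score) is essentially a greedy non-maximum suppression. The following is a sketch under that reading; the rectangle representation and the overlap measure are assumptions.

```python
def overlap_fraction(a, b):
    """Fraction of the smaller rectangle covered by the intersection.
    Rectangles are (row, col, height, width)."""
    r1, c1 = max(a[0], b[0]), max(a[1], b[1])
    r2 = min(a[0] + a[2], b[0] + b[2])
    c2 = min(a[1] + a[3], b[1] + b[3])
    inter = max(0, r2 - r1) * max(0, c2 - c1)
    return inter / min(a[2] * a[3], b[2] * b[3])

def suppress_overlaps(regions, scores, max_overlap=0.5):
    """Visit regions in descending score order; drop any region that
    overlaps an already-kept region by more than max_overlap.
    Returns the indices of the kept regions."""
    order = sorted(range(len(regions)), key=lambda i: -scores[i])
    kept = []
    for i in order:
        if all(overlap_fraction(regions[i], regions[k]) <= max_overlap
               for k in kept):
            kept.append(i)
    return kept
```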
- FIG. 9 is a flow chart 2050 of a region identification process in accordance with an alternate embodiment of the invention.
- the region identification process 2050 may be used with a polygonal shape 2062 that remains constant over the entire data set (e.g. a transformed image) 2064 , as shown in the upper portion of FIG. 9 .
- the region identification process 2050 includes creating a relative offset between the polygonal shape and the GPC likelihoods at a block 2052 .
- GPC likelihoods within the polygonal shape are selected at a block 2054 .
- the region score for the polygonal shape at the offset location is determined.
- the polygon offset location and the region score are stored.
- the scoring function used to determine each region score may calculate any meaningful statistic such as an average, a maximum, or a standard deviation.
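A region scoring function of the kind described (an average, a maximum, or a standard deviation over the GPC likelihoods inside a polygon) might look like the following. The rectangular region and the dictionary of statistics are illustrative choices, not the patent's implementation.

```python
import numpy as np

# Candidate scoring statistics named in the text.
SCORERS = {"average": np.mean, "maximum": np.max, "stddev": np.std}

def region_score(likelihoods, top, left, height, width,
                 statistic="average"):
    """Apply the chosen statistic to the GPC likelihoods that fall
    inside one rectangular region at the given offset."""
    patch = likelihoods[top:top + height, left:left + width]
    return float(SCORERS[statistic](patch))
```

In the constant-shape embodiment of FIG. 9 this function would be evaluated at every offset of a single polygon; in the FIG. 10 embodiment it would also loop over a set of polygon shapes.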
- the set of GPC likelihoods is spatially partitioned into an arbitrarily shaped set of polygonal regions having dimensions less than or equal to the dimensions of the common area of interest based on the spatial placement, grouping, size, or any statistical grouping of the GPC likelihoods using, in one particular embodiment, the Region Partitioning likelihood 2012 shown in Equation 2 and also in FIG. 11 . After the regions have been assigned and their scores determined, all regions that overlap more than a predefined amount with an image region having a larger region score are then removed.
- FIG. 10 is a flow chart 2070 of an embodiment of a region identification process in which the polygonal shape does not remain constant over the entire data set 2088 , as shown in the upper portion of FIG. 10 .
- the region identification process 2070 includes selecting a polygonal shape at a block 2072 , and creating a relative offset between the polygonal shape and the GPC likelihood at a block 2074 . GPC likelihoods within the polygonal shape are selected at a block 2076 .
- the region score for the polygonal shape at the offset location is determined.
- the polygon, the offset and the region score are stored.
- the region partitioning process 2000 receives a set of GPC likelihoods from the feature analysis process 310 at a block 2002 .
- the GPC likelihood region partitioning process 2000 performs a first sample region partitioning process.
- the first sample region partitioning process 2004 includes selecting a first polygonal shape R 1 , placing the first polygonal shape R 1 at a first location 2005 a, and computing the GPC likelihood at the first location 2005 a according to a known region partitioning likelihood expression 2012 , shown in Equation (2) and in FIG. 11 , where N Ri is the number of values in region i.
- the first sample region partitioning process 2004 continues successively positioning the first polygonal shape R 1 and computing the region partitioning likelihood at all successive locations 2005 a - 2005 x across the data set.
- a second sample region partitioning process (block 2006 ) selects a second polygonal shape R 2 , and successively positions the second polygonal shape R 2 and computes the region partitioning likelihood at all successive locations 2007 a - 2007 x across the data set.
- the region partitioning likelihood process 2000 continues in this manner through an n th sample region partitioning process 2008 in which an n th polygonal shape R n is positioned and the region partitioning likelihood is computed at all successive locations 2009 a - 2009 x across the data set.
- a partition 2011 with a largest region likelihood is determined at a block 2010 .
- the data set is then partitioned into a mosaic of various regions of GPC likelihood ( 2014 ) based on the region partitioning processes 2004 , 2006 , 2008 .
- a segmentation process could be used to perform the partitioning of the data set into a mosaic of various regions.
- a region partitioning process is performed at a block 314 .
- the region partitioning process 314 partitions the image regions produced by the image region identification process 312 into a set of partitions according to their image region scores.
- the region partitioning process 314 sorts the image regions in descending order according to their image region scores at a block 1202 .
- the process 314 then assigns the first N sorted image regions into one partition at a block 1204 , determines whether a next partition of image regions is needed at a block 1206 , and continues sorting the next M sorted regions into another partition and so on until all of the image regions have been assigned. After all image regions have been sorted into partitions, the process 314 applies image region assignments at a block 1208 .
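The sort-and-assign scheme of blocks 1202 through 1208 can be sketched as follows. Representing partitions as lists of region indices and passing the partition sizes (N, M, ...) as a tuple are assumptions made for illustration.

```python
def partition_regions(scores, sizes):
    """Sort image regions by score in descending order (block 1202),
    assign the first N to one partition, the next M to another, and
    so on (blocks 1204-1206). Any leftover regions form a final
    partition. Returns a list of partitions of region indices."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    partitions, start = [], 0
    for n in sizes:
        partitions.append(order[start:start + n])
        start += n
    if start < len(order):
        partitions.append(order[start:])
    return partitions
```

The first partition then holds the highest-priority regions for an analyst to review.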
- a region partitioning process 1300 would determine the likelihood for each possible set of image region score partitions and then select the set of partitions with the largest likelihood.
- the partition set likelihoods would be the region partitioning likelihood 2012 as shown in Equation 2 and in FIG. 11 .
- the region partitioning process 1300 determines all possible partition sets at a block 1302 , and selects a partition set at a block 1304 .
- the process 1300 determines a partition set likelihood and saves the likelihood and the associated partitions at a block 1306 .
- the process 1300 determines whether a next partition set is needed, and if so, returns to block 1304 to select a next partition set, and blocks 1306 through 1308 are repeated. If no additional partition sets are needed, then the process 1300 applies an image region assignment with the largest likelihood at a block 1310 .
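The exhaustive search of FIG. 13 can be sketched by enumerating every contiguous split of the score-sorted regions and keeping the split with the largest likelihood. The caller-supplied likelihood function below is a stand-in; the patent's Equation 2 is not reproduced in this text.

```python
from itertools import combinations

def all_partition_sets(n):
    """Yield every way to split indices 0..n-1 into contiguous
    groups (block 1302): choose cut points between elements."""
    for k in range(n):
        for cuts in combinations(range(1, n), k):
            bounds = (0,) + cuts + (n,)
            yield [list(range(bounds[i], bounds[i + 1]))
                   for i in range(len(bounds) - 1)]

def best_partition_set(scores, likelihood):
    """Score every partition set (blocks 1304-1308) and keep the one
    with the largest likelihood (block 1310)."""
    best, best_l = None, float("-inf")
    for pset in all_partition_sets(len(scores)):
        val = likelihood([[scores[i] for i in p] for p in pset])
        if val > best_l:
            best, best_l = pset, val
    return best, best_l
```

As a usage example, a likelihood that rewards tight groups and penalizes extra partitions groups the two close scores below together and isolates the outlier.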
- an optional graphical overlay process may be performed at a block 316 to enable visual inspection of the identified prioritized regions of the first and second images.
- An optional operator interface process may also be performed at a block 318 to enable a user to adjust various parameters of the process 300 .
- a determination is made whether to repeat the analysis process 300 .
- FIG. 15 illustrates a computing device 500 configured in accordance with an embodiment of the present invention.
- the computing device 500 may be used, for example, as the computer system 124 of the analysis system 120 of FIG. 1 .
- the computing device 500 includes at least one processing unit 502 and system memory 504 .
- the system memory 504 may be volatile (such as RAM), non-volatile (such as ROM and flash memory) or some combination of the two.
- the system memory 504 typically includes an operating system 506 , one or more program modules 508 , and may include program data 510 .
- the program modules 508 may include the process modules 509 that realize one or more of the processes described herein. Other modules described herein may also be part of the program modules 508 . As an alternative, the process modules 509 , as well as the other modules, may be implemented as part of the operating system 506 , or may be installed on the computing device and stored in other memory (e.g., non-removable storage 522 ) separate from the system memory 504 .
- the computing device 500 may have additional features or functionality.
- the computing device 500 may also include additional data storage devices (removable and/or non-removable) such as, for example, magnetic disks, optical disks, or tape.
- additional storage is illustrated in FIG. 15 by removable storage 520 and non-removable storage 522 .
- Computer storage media may include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information, such as computer readable instructions, data structures, program modules, or other data.
- the system memory 504 , removable storage 520 and non-removable storage 522 are all examples of computer storage media.
- computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by computing device 500 . Any such computer storage media may be part of the device 500 .
- Computing device 500 may also have input device(s) 524 such as keyboard, mouse, pen, voice input device, and touch input devices.
- Output device(s) 526 such as a display, speakers, and printer, may also be included. These devices are well known in the art and need not be discussed at length.
- the computing device 500 may also contain a communication connection 528 that allows the device to communicate with other computing devices 530 , such as over a network.
- Communication connection(s) 528 is one example of communication media.
- Communication media may typically be embodied by computer readable instructions, data structures, program modules, or other data in a modulated data signal, such as a carrier wave or other transport mechanism, and includes any information delivery media.
- program modules include routines, programs, objects, components, data structures, and so forth that perform particular tasks or implement particular abstract data types.
- program modules and the like may be executed as native code or may be downloaded and executed, such as in a virtual machine or other just-in-time compilation execution environment.
- functionality of the program modules may be combined or distributed as desired in various embodiments.
- An implementation of these modules and techniques may be stored on or transmitted across some form of computer readable media.
Description
- This patent application is related to the following co-pending, commonly-owned U.S. patent applications: U.S. patent application Ser. No. (t.b.d.) entitled “Methods and Systems for Data Link Front End Filters for Sporadic Updates” filed on May 17, 2006 under Attorney Docket No. BO1-0201US; U.S. patent application Ser. No. (t.b.d.) entitled “Multiple Moving Target Detection” filed on May 17, 2006 under Attorney Docket No. BO1-0198US; U.S. patent application Ser. No. (t.b.d.) entitled “Route Search Planner” filed on May 17, 2006 under Attorney Docket No. BO1-0199US; and U.S. patent application Ser. No. (t.b.d.) entitled “Sensor Scan Planner” filed on May 17, 2006 under Attorney Docket No. BO1-0200US, which applications are incorporated herein by reference.
- This invention relates to image analysis and, more specifically, to the detection of the insertion, removal, and change of objects within a scene through the use of imagery.
- The detonation of Improvised Explosive Devices (IEDs) is a new and ongoing threat to both occupation ground forces and innocent civilians in war zones. IEDs can be constructed at a remote location and then transported and installed within a short period of time by a minimum number of opposition forces. To escape detection, IEDs are typically embedded into and appear as part of their local surroundings. Once installed, IEDs can be detonated autonomously or manually by an operator hidden nearby.
- The current methods used to detect IEDs prior to their detonation require one or more human image analysts to manually conduct a detailed and thorough review of an extensive database of imagery collected by one or more Unmanned Aerial Vehicles (UAVs) or by other imaging means. Given the small size and camouflaged appearance of IEDs, the required image analyses may be tedious and can be overwhelming to a given set of image analysts. Therefore, there exists an unmet need for quickly and accurately determining the insertion of an IED into an area of interest through an analysis of multiple images containing a common area of interest.
- The present invention provides systems and methods for detecting the insertion, removal and change of objects of interest through the comparison of two or more images containing a common area of interest. Embodiments of the present invention may advantageously provide an autonomous capability to reduce the time required for image analysts to review an imagery database by emphasizing image regions that have an increased likelihood of containing the insertion, removal, and change of an object of interest. Embodiments of the present invention may be used to detect a variety of objects of interest in a variety of circumstances and applications, such as detecting an IED, or detecting new facilities, capabilities, movements, or strategic thrusts by hostile parties, or for non-military applications, such as for search and rescue, or for conducting research into environmental changes or wildlife habits.
- In one embodiment, a method for detecting at least one of insertion, removal, and change of objects of interest through the comparison of a first image and a second image containing a common area of interest includes performing a scene registration including aligning image patterns in the first image to those in the second image; performing a feature content analysis to determine a likelihood of change for each pixel in the first and second images; performing a region identification to group pixels within the first and second images into one or more image regions based upon their likelihood of change; and performing an image region partitioning to prioritize the one or more image regions according to an image region score for each of the one or more image regions, the image region score being indicative of at least one of insertion, removal, and change of an object of interest within the common area of interest.
- In a further embodiment, a method for detecting at least one of insertion, removal, and change of objects of interest through the comparison of a first image and a second image containing a common area of interest includes determining a likelihood of change for each of a plurality of portions of the first and second images; grouping the plurality of portions into one or more image regions based upon their likelihood of change; and prioritizing the one or more image regions according to an image region score for each of the one or more image regions, the image region score being indicative of at least one of insertion, removal, and change of an object of interest within the common area of interest.
- In yet another embodiment, an image analysis system for detecting at least one of insertion, removal, and change of objects of interest through the comparison of a first image and a second image containing a common area of interest includes a first component configured to determine a likelihood of change for a plurality of portions of the first and second images; a second component configured to group the plurality of portions into one or more image regions based upon their likelihood of change; and a third component configured to prioritize the one or more image regions according to an image region score for each of the one or more image regions, the image region score being indicative of at least one of insertion, removal, and change of an object of interest within the common area of interest.
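For illustration, the three components summarized above can be pictured as a short pipeline. Everything in the sketch below is an assumption for demonstration purposes: the function names, the use of NumPy arrays, and the stand-in statistics (absolute difference for the likelihood of change, block tiling for region grouping, mean likelihood for the image region score) are not part of the disclosed method.

```python
import numpy as np

def likelihood_of_change(first, second):
    """First component (stand-in statistic): per-pixel likelihood of change,
    here simply the normalized absolute difference of the two images."""
    diff = np.abs(first.astype(float) - second.astype(float))
    return diff / max(diff.max(), 1e-9)

def group_into_regions(likelihood, block=2):
    """Second component (stand-in rule): group pixels into non-overlapping
    block x block image regions given as (row, col, height, width)."""
    h, w = likelihood.shape
    return [(r, c, block, block)
            for r in range(0, h - block + 1, block)
            for c in range(0, w - block + 1, block)]

def prioritize(likelihood, regions):
    """Third component: compute an image region score (mean likelihood here)
    for each region and sort the regions in descending score order."""
    scored = [(likelihood[r:r + h, c:c + w].mean(), (r, c, h, w))
              for (r, c, h, w) in regions]
    return sorted(scored, key=lambda s: s[0], reverse=True)

first = np.zeros((4, 4))
second = np.zeros((4, 4))
second[0:2, 0:2] = 1.0                  # simulate an inserted object
like = likelihood_of_change(first, second)
ranked = prioritize(like, group_into_regions(like))
assert ranked[0][1] == (0, 0, 2, 2)     # the changed region is ranked first
```

An analyst-facing system would then present the highest-ranked regions first, which is the prioritization behavior the embodiments describe.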
- Embodiments of the present invention are described in detail below with reference to the following drawings.
-
FIG. 1 is an image collection and analysis system in accordance with an embodiment of the present invention; -
FIG. 2 is a schematic representation of the image collection and analysis system of FIG. 1; -
FIG. 3 is a flowchart of a method of detecting the insertion, removal and change of an object of interest through two images containing a common area of interest in accordance with an embodiment of the present invention; -
FIG. 4 is a flowchart of a scene registration process in accordance with an embodiment of the present invention; -
FIG. 5 shows a sample pair of first and second images, their transformed counterparts, and an alignment of the transformed images typically produced by the scene registration process of FIG. 4; -
FIG. 6 shows the development of the General Pattern Change likelihood in accordance with an embodiment of the invention; -
FIG. 7 shows the development of the GPC likelihoods over the common area between the two transformed and aligned images shown in FIG. 5 in accordance with an embodiment of the invention; -
FIG. 8 is a flowchart of a GPC likelihood development process shown in FIG. 7 in accordance with an embodiment of the invention; -
FIG. 9 is a flow chart of the region identification process for an embodiment where the object of interest size is known a priori; -
FIG. 10 is a flow chart of the region identification process for an embodiment where the object of interest size is not known a priori; -
FIG. 11 shows the development of the region partitioning likelihood in accordance with an embodiment of the present invention; -
FIG. 12 andFIG. 13 are flow charts of the region prioritization process for alternate embodiments in accordance with the present invention; -
FIG. 14 is a flow chart for an embodiment to assist a human operator in accordance with the present invention; -
FIG. 15 illustrates a computing device configured in accordance with an embodiment of the present invention; and -
FIG. 16 shows a variety of sensor platforms that may be used in systems in accordance with alternate embodiments of the invention. - The present invention relates to systems and methods for detecting the insertion, removal, and change of objects of interest through the comparison of two or more images containing a common area of interest. Many specific details of certain embodiments of the invention are set forth in the following description and in
FIGS. 1 through 16 to provide a thorough understanding of such embodiments. One skilled in the art, however, will understand that the present invention may have additional embodiments, or that the present invention may be practiced without several of the details described in the following description. - In general, embodiments of systems and methods for detecting the insertion, removal, and change of objects of interest through a comparison of two or more images containing a common area of interest in accordance with the present invention may identify and prioritize image regions within the images based on changes in feature content over a period of time in a manner which is consistent with the insertion, removal and change of an object of interest, such as an Improvised Explosive Device (IED), or for detecting new facilities, capabilities, movements, or strategic thrusts by hostile parties, or for various non-military applications. Such embodiments may advantageously detect relevant changes in feature content within images which have dissimilar sensor view points, sensor spectrums, scene composition, or period of time covered by the imagery.
-
FIG. 1 is an image collection and analysis system 100 in accordance with an embodiment of the present invention. FIG. 2 is a schematic representation of the image collection and analysis system 100 of FIG. 1. In this embodiment, the system 100 includes an acquisition system 110 and an analysis system 120. The acquisition system 110 includes a platform 112 having an image acquisition component 114 coupled to a transmitter 116. In the embodiment shown in FIG. 1, the platform 112 is an aircraft, and more specifically an Unmanned Aerial Vehicle (UAV). In alternate embodiments, the platform 112 may be any suitable stationary or moveable platform. Similarly, the image acquisition component 114 may be any suitable type of image acquisition device, including, for example, visible wavelength sensors (e.g. photographic systems), infrared sensors, laser radar systems, radar systems, or any other suitable sensors or systems. In the embodiment shown in FIGS. 1 and 2, the analysis system 120 includes a receiver 122 coupled to a computer system 124. The computer system 124 is configured to perform a method of detecting changes between images in accordance with embodiments of the present invention, as described more fully below. A particular embodiment of a suitable computer system 124 is described more fully below with reference to FIG. 15. - In operation, the
acquisition system 110 is positioned such that the image acquisition component 114 may acquire one or more images of an area of interest 102. The one or more images may be transmitted by the transmitter 116 to the receiver 122 of the analysis system 120 for processing by the computer system 124. Thus, images of the area of interest 102 may be provided by the acquisition system 110 in a real-time manner to the analysis system 120. In alternate embodiments, the transmitter 116 and receiver 122 may be eliminated, and the images acquired by the image acquisition component 114 may be communicated to the computer system 124 either directly via a direct signal link, or may be stored within a suitable storage media (e.g. RAM, ROM, portable storage media, etc.) by the image acquisition component 114 and uploaded to the computer system 124 at a later time. - Although the image collection and
analysis system 100 shown in FIG. 1 is depicted as having a platform 112 that is an Unmanned Aerial Vehicle (UAV), it will be appreciated that a variety of alternate embodiments of acquisition systems may be conceived, and that the invention is not limited to the particular embodiment described above. For example, FIG. 16 shows a variety of sensor platforms that may be used in place of the UAV 112 in image collection and analysis systems in accordance with alternate embodiments of the invention. More specifically, in alternate embodiments, sensor platforms may include satellites or other space-based platforms 602, manned aircraft 604, land-based vehicles 608, or any other suitable platform embodiments. -
FIG. 3 is a flowchart of a method 300 of detecting the insertion, removal, and change of objects of interest through the use of two or more images containing a common area of interest in accordance with an embodiment of the present invention. In the following discussion, exemplary methods and processes are illustrated as a collection of blocks in a logical flow graph, which represent a sequence of operations that can be implemented in hardware, software, or a combination thereof. In the context of software, the blocks represent computer-executable instructions that, when executed by one or more processors, perform the recited operations. Generally, computer-executable instructions include routines, programs, objects, components, data structures, and the like that perform particular functions or implement particular abstract data types. The order in which the operations are described is not intended to be construed as a limitation, and any number of the described blocks can be combined in any order and/or in parallel to implement the process. Furthermore, removal of one or more of the listed operations, or the addition of additional operations, does not depart from the scope of the invention. - As shown in
FIG. 3, in this embodiment, the method 300 includes acquiring at least two images at a block 302. One or more of the images may be stored images that have been acquired in the past and are retrieved from a suitable storage media, or may be images that are acquired in a real-time manner. The images may be acquired using similar or dissimilar (i.e. cross-spectral) sensor types, including visible wavelength sensors (e.g. photographic systems), infrared sensors, laser radar systems, radar systems, or any other suitable sensors or systems. At a block 304, a scene registration process is performed. The scene registration process 304 aligns all of the pixels representing a physical area which is common to the first and second images. In one particular set of embodiments, the scene registration process (block 304) comprises some or all of the acts described, for example, in U.S. Pat. Nos. 5,809,171, 5,890,808, 5,946,422, 5,982,930, and 5,982,945, issued to Neff et al., which patents are incorporated herein by reference. - Alternately, in another embodiment, the
scene registration process 304 includes the acts shown in FIGS. 4 and 5. Again, it will be appreciated that the order in which the scene registration process 304 is described is not intended to be construed as a limitation, and any number of the described method blocks can be combined in any order to implement the method, or an alternate method. In this embodiment, the image pixel values are partitioned into a set of labels at a block 1001. This process may include all of the one-to-one and many-to-one pixel value transformations, such as linear rescaling, equalization, and feature extraction. At a block 1002, the original or re-labeled images are transformed into a common reference frame, which may produce both a forward and an inverse transform mapping the pixel locations in the original image to those in the transformed image and vice versa. The common reference frame may be the original view point of either image or another advantageous view point altogether. - At
block 1003, the image patterns of the transformed images are aligned, which may produce a mathematical transform, a set of transformed images, or both, accounting for all of the spatial effects due to translation, rotation, scale, and skew; any spectral artifacts such as shadowing and layover; and other distortions present within the transformed images that were not removed by the previous blocks, such as terrain elevation and object height errors. When produced, all of the transformations, transformed images, and alignment parameters are saved at a block 1004 for use in the feature content analysis process (block 310). - With continued reference to
FIG. 3, the method 300 further includes a feature content analysis process at a block 310. The feature content analysis process 310 indicates how the image features representing a common physical location have changed over time. The feature content analysis process 310 uses the mathematical transformations and/or the transformed images produced in FIG. 4. In one embodiment, the feature content analysis process 310 may use a General Pattern Change (GPC) likelihood algorithm, such as the GPC likelihood process 2014 schematically shown in FIG. 6, to determine the likelihood of change for every pixel in the first and second images. -
FIG. 6 shows a development of a General Pattern Change (GPC) likelihood in accordance with an embodiment of the invention. Equation 1 below is an example of a GPC likelihood 2028.
- In the embodiment shown in
FIG. 6, the GPC likelihood process includes determining the number of occurrences where a pixel having value i in the common image overlap area of the sensed image overlaps a pixel having value j in the common image overlap area of the reference image, for all of the pixel values within the common image overlap area, at a block 2020. At a block 2022, a number of pixels in the common image overlap area of the sensed image having value i is determined, and a number of pixels in the common image overlap area of the reference image having gray level j is determined at a block 2024. Next, a total number of pixels in the common image overlap area is determined at a block 2026. - In one embodiment, the likelihood of change for every pixel in the first and second images is determined by calculating the GPC likelihood using the pixels within an object of interest sized polygon centered on each corresponding pixel in the first and second images. In an alternate embodiment, where the object of interest sized polygon is not known a priori, the likelihood of change for every pixel in the first and second images is determined by calculating the GPC likelihood using the pixels within a minimally sized polygon centered on each corresponding pixel in the first and second images.
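The counting steps of blocks 2020 through 2026 can be sketched as follows, assuming the sensed and reference images are already transformed, aligned, and re-labeled with small integer pixel values. The function name and the NumPy joint-count representation are illustrative assumptions, and Equation 1 itself is not reproduced here.

```python
import numpy as np

def joint_counts(sensed, reference):
    """Counts over the common image overlap: n_ij[i, j] is the number of
    places where value i in the sensed image overlaps value j in the
    reference image (block 2020); the row and column sums give the per-value
    counts of blocks 2022 and 2024; the grand total is the overlap pixel
    count of block 2026."""
    s = np.asarray(sensed).ravel()
    r = np.asarray(reference).ravel()
    levels = int(max(s.max(), r.max())) + 1
    n_ij = np.zeros((levels, levels), dtype=int)
    np.add.at(n_ij, (s, r), 1)          # accumulate co-occurrences
    n_i = n_ij.sum(axis=1)              # sensed-image value counts
    n_j = n_ij.sum(axis=0)              # reference-image value counts
    return n_ij, n_i, n_j, s.size

sensed = np.array([[0, 0], [1, 2]])
reference = np.array([[0, 1], [1, 2]])
n_ij, n_i, n_j, n = joint_counts(sensed, reference)
assert n == 4
assert n_i.tolist() == [2, 1, 1] and n_j.tolist() == [1, 2, 1]
```

Whatever specific form the GPC likelihood takes, these four quantities are the inputs it is built from.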
FIG. 7 schematically shows a GPC likelihood process 2016 where the polygon is either minimally sized or the size of an object of interest (block 2018) in accordance with alternate embodiments of the present invention. As shown in FIG. 7, the object of interest sized polygon can be a simple rectangle. - Furthermore, in
FIG. 7, the GPC likelihood is calculated (block 2017) for a pixel location within the transformed and aligned versions of the first and second images using the pixels within a rectangle centered on the pixel location. This process is repeated for every pixel location within an area which is common to the first and second images to produce a set of GPC likelihoods (block 2019). - As shown in
FIG. 8, the GPC likelihood process 2030 receives a data set from the scene registration process 304 (either the mathematical transformations and/or the transformed images produced in FIG. 4). At a block 2032, in one embodiment, the center of a minimally sized neighborhood polygon is placed at an offset location relative to the transformed and aligned imagery, one of the set of offset locations which encompass the common image overlap. In an alternate embodiment, the center of an object of interest sized neighborhood polygon is placed at an offset location relative to the transformed and aligned imagery, one of the set of offset locations which encompass the common image overlap. At a block 2034, the image pixels from the transformed and aligned imagery that are within the polygon at the current offset are selected. At a block 2036, the GPC likelihood is determined for the selected pixels. At a block 2037, the offset and the GPC likelihood are stored. At a block 2038, the next polygon offset is selected if any additional offsets remain in the set of offsets. Otherwise, the process is completed and the set of GPC likelihoods is made available for use. - As further shown in
FIG. 3, at a block 312, an image region identification process is performed which groups the GPC likelihoods into regions. More specifically, the region identification process 312 spatially partitions the set of GPC likelihoods, created by the feature content analysis process, into a set of variously sized regions, where the region sizes are determined by the objects within the imagery. In an embodiment shown in FIG. 9, where the approximate size of the object of interest is known a priori, a region score may be determined for each location by applying a region scoring function to all of the GPC likelihoods within an object-sized polygon centered on each location. In an alternate embodiment, shown in FIG. 10, where the object of interest size is not known a priori, a region score may be determined for each location by applying a scoring function to all of the GPC likelihoods within each polygon from a set of polygons with various shapes and sizes. In either embodiment, the resulting regions which overlap by more than a pre-defined amount can be removed by selecting those with the larger region score. - Alternately,
FIG. 9 is a flow chart 2050 of a region identification process in accordance with an alternate embodiment of the invention. The region identification process 2050 may be used with a polygonal shape 2062 that remains constant over the entire data set (e.g. a transformed image) 2064, as shown in the upper portion of FIG. 9. In this embodiment, the region identification process 2050 includes creating a relative offset between the polygonal shape and the GPC likelihoods at a block 2052. GPC likelihoods within the polygonal shape are selected at a block 2054. At a block 2056, the region score for the polygonal shape at the offset location is determined. At a block 2057, the polygon offset location and the region score are stored. At a determination block 2058, a determination is made whether the region scores have been determined across the entirety of the data set, or whether another offset is needed. If another offset is needed, then the process 2050 stores the offset and region score at a block 2060, and repeats the actions described in blocks 2052 through 2056 for a next offset value. If another offset is not needed, then the process 2050 removes overlapping regions at a block 2062, and makes the non-overlapping regions available. -
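The overlap-removal step just described can be sketched as a greedy suppression that keeps the higher-scoring of any two heavily overlapping regions. The assumptions here are illustrative, not taken from the disclosure: regions are axis-aligned rectangles, overlap is measured as intersection area over the smaller region's area, and the 0.5 default threshold stands in for the pre-defined amount.

```python
def overlap_fraction(a, b):
    """Intersection area divided by the smaller region's area, for
    axis-aligned rectangles given as (row, col, height, width)."""
    (ar, ac, ah, aw), (br, bc, bh, bw) = a, b
    dr = max(0, min(ar + ah, br + bh) - max(ar, br))
    dc = max(0, min(ac + aw, bc + bw) - max(ac, bc))
    return (dr * dc) / min(ah * aw, bh * bw)

def remove_overlapping(regions, scores, max_overlap=0.5):
    """Whenever two regions overlap by more than the pre-defined amount,
    keep the one with the larger region score (greedy suppression)."""
    order = sorted(range(len(regions)), key=lambda i: scores[i], reverse=True)
    kept = []
    for i in order:
        if all(overlap_fraction(regions[i], regions[j]) <= max_overlap
               for j in kept):
            kept.append(i)
    return [regions[i] for i in kept]

regions = [(0, 0, 4, 4), (1, 1, 4, 4), (10, 10, 4, 4)]
scores = [0.9, 0.8, 0.5]
assert remove_overlapping(regions, scores) == [(0, 0, 4, 4), (10, 10, 4, 4)]
```

In the toy example, the second region overlaps the first by 9/16 of its area and is suppressed in favor of the higher-scoring first region, while the distant third region survives.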
FIG. 10 , the set of GPC likelihoods is spatially partitioned into an arbitrarily shaped set of polygonal regions having dimensions less than or equal to the dimensions of the common area of interest based on the spatial placement, grouping, size, or any statistical grouping of the GPC likelihoods using, in one particular embodiment, theRegion Partitioning likelihood 2012 shown in Equation 2 and also inFIG. 11 . After the regions have been assigned and their scores determined, all regions that overlap more than a predefined amount with an image region having a larger region score are then removed. - More specifically,
FIG. 10 is a flow chart 2070 of an embodiment of a region identification process in which the polygonal shape does not remain constant over the entire data set 2088, as shown in the upper portion of FIG. 10. In this embodiment, the region identification process 2070 includes selecting a polygonal shape at a block 2072, and creating a relative offset between the polygonal shape and the GPC likelihood at a block 2074. GPC likelihoods within the polygonal shape are selected at a block 2076. At a block 2078, the region score for the polygonal shape at the offset location is determined. At a block 2079, the polygon, the offset, and the region score are stored. At a determination block 2080, a determination is made whether the region scores have been determined across the entirety of the data set, or whether another offset is needed. If another offset is needed, then the process 2070 repeats the actions described in blocks 2074 through 2080 for a next offset value. If another offset is not needed, then the process 2070 proceeds to a determination block 2084, where a determination is made whether another polygonal shape is to be analyzed. If so, the process 2070 returns to block 2072, selects another polygonal shape, and repeats blocks 2074 through 2084. Eventually, once it is determined at block 2084 that there are no additional polygonal shapes to analyze, the process 2070 removes overlapping regions at a block 2086, and ends. - As shown in
FIG. 11, in this embodiment, the region partitioning process 2000 receives a set of GPC likelihoods from the feature analysis process 310 at a block 2002. At a block 2004, the GPC likelihood region partitioning process 2000 performs a first sample region partitioning process. The first sample region partitioning process 2004 includes selecting a first polygonal shape R1, placing the first polygonal shape R1 at a first location 2005a, and computing the GPC likelihood at the first location 2005a according to a known region partitioning likelihood expression 2012, as shown below in Equation (2): -
-
- Where f(a,b) = 1 when a = b, and f(a,b) = 0 when a ≠ b
- NR = number of regions
- NRi = number of values in region i
- Rimin = minimum value in region i
- Rimax = maximum value in region i
- The first sample region partitioning process 2004 continues successively positioning the first polygonal shape R1 and computing the region partitioning likelihood at all successive locations 2005a-2005x across the data set. Similarly, a second sample region partitioning process (block 2006) selects a second polygonal shape R2, and successively positions the second polygonal shape R2 and computes the region partitioning likelihood at all successive locations 2007a-2007x across the data set. The region partitioning likelihood process 2000 continues in this manner through an nth sample region partitioning process 2008 in which an nth polygonal shape Rn is positioned and the region partitioning likelihood is computed at all successive locations 2009a-2009x across the data set.
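The sampling loops just described (positioning each polygonal shape R1 through Rn at successive locations and evaluating the region partitioning likelihood at each placement) can be sketched as below. Because Equation 2 is not reproduced in this text, the likelihood is left as a caller-supplied function, and the rectangular shape representation and names are assumptions for illustration.

```python
def sample_placements(data_shape, shapes, likelihood_fn):
    """Evaluate a region partitioning likelihood for every placement of every
    polygonal shape (here, (height, width) rectangles) across the data set,
    returning the (likelihood, placement) pair with the largest likelihood."""
    rows, cols = data_shape
    best = None
    for h, w in shapes:                      # shapes R1 ... Rn
        for r in range(rows - h + 1):        # successive locations
            for c in range(cols - w + 1):
                score = likelihood_fn(r, c, h, w)
                if best is None or score > best[0]:
                    best = (score, (r, c, h, w))
    return best

# Toy likelihood that favors large placements near the origin
# (purely illustrative; it is not the Equation 2 expression).
best = sample_placements((4, 4), [(2, 2), (3, 3)],
                         lambda r, c, h, w: h * w - (r + c))
assert best == (9, (0, 0, 3, 3))
```

The partition with the largest likelihood found this way is the one carried forward to the mosaic-building step.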
- Referring again to
FIG. 11, following the region partitioning processes 2004, 2006, 2008, a partition 2011 with a largest region likelihood is determined at a block 2010. At a block 2014, the data set is then partitioned into a mosaic of various regions of GPC likelihood (2014) based on the region partitioning processes 2004, 2006, 2008. In an alternate embodiment, a segmentation process could be used to perform the partitioning of the data set into a mosaic of various regions. - Referring again to
FIG. 3, a region partitioning process is performed at a block 314. The region partitioning process 314 partitions the image regions produced by the image region identification process 312 into a set of partitions according to their image region scores. In one embodiment, as shown in FIG. 12, the region partitioning process 314 sorts the image regions in descending order according to their image region scores at a block 1202. The process 314 then assigns the first N sorted image regions into one partition at a block 1204, determines whether a next partition of image regions is needed at a block 1206, and continues sorting the next M sorted regions into another partition, and so on, until all of the image regions have been assigned. After all image regions have been sorted into partitions, the process 314 applies image region assignments at a block 1208. - Alternately, in a more general embodiment as shown in
FIG. 13, a region partitioning process 1300 would determine the likelihood for each possible set of image region score partitions and then select the set of partitions with the largest likelihood. In one particular embodiment, the partition set likelihoods would be the region partitioning likelihood 2012 as shown in Equation 2 and in FIG. 11. - More specifically, as shown in
FIG. 13, the region partitioning process 1300 determines all possible partition sets at a block 1302, and selects a partition set at a block 1304. The process 1300 then determines a partition set likelihood and saves the likelihood and the associated partitions at a block 1306. At a block 1308, the process 1300 determines whether a next partition set is needed, and if so, returns to block 1304 to select a next partition set, and blocks 1306 through 1308 are repeated. If no additional partition sets are needed, then the process 1300 applies an image region assignment with the largest likelihood at a block 1310. - In an alternate embodiment designed to assist a human operator in detecting the insertion, removal, and change of an object within a scene through the use of imagery, referring again to
FIG. 3 and FIG. 14, after the region partitioning process is performed (block 314), an optional graphical overlay process may be performed at a block 316 to enable visual inspection of the identified prioritized regions of the first and second images. An optional operator interface process may also be performed at a block 318 to enable a user to adjust various parameters of the process 300. Finally, at a block 320, a determination is made whether to repeat the analysis process 300. -
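Returning to the score-ordered partitioning of FIG. 12: sorting the image regions by descending image region score and assigning the first N to one partition, the next M to the next, and so on, can be sketched as follows. The `sizes` parameter and the handling of any remainder are illustrative assumptions rather than details taken from the disclosure.

```python
def partition_by_score(scores, sizes):
    """Sort regions in descending score order and assign the first N region
    indices to one partition, the next M to the next, and so on; any
    remaining regions fall into a final partition."""
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    partitions, start = [], 0
    for n in sizes:
        partitions.append(order[start:start + n])
        start += n
    if start < len(order):
        partitions.append(order[start:])
    return partitions

# Four regions, with the top two scores grouped first, then the next one.
assert partition_by_score([0.2, 0.9, 0.5, 0.1], (2, 1)) == [[1, 2], [0], [3]]
```

In an operator-assist setting, the first partition would be the set of regions shown to the analyst first.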
FIG. 15 illustrates a computing device 500 configured in accordance with an embodiment of the present invention. The computing device 500 may be used, for example, as the computer system 124 of the analysis system 120 of FIG. 1. In a very basic configuration, the computing device 500 includes at least one processing unit 502 and system memory 504. Depending on the exact configuration and type of computing device 500, the system memory 504 may be volatile (such as RAM), non-volatile (such as ROM and flash memory), or some combination of the two. The system memory 504 typically includes an operating system 506, one or more program modules 508, and may include program data 510. - For the present methods of detecting the insertion, removal, and change of objects of interest through a comparison of images containing a common area of interest, the
program modules 508 may include the process modules 509 that realize one or more of the processes described herein. Other modules described herein may also be part of the program modules 508. As an alternative, the process modules 509, as well as the other modules, may be implemented as part of the operating system 506, or may be installed on the computing device and stored in other memory (e.g., non-removable storage 522) separate from the system memory 504. - The computing device 500 may have additional features or functionality. For example, the computing device 500 may also include additional data storage devices (removable and/or non-removable) such as, for example, magnetic disks, optical disks, or tape. Such additional storage is illustrated in
FIG. 15 by removable storage 520 and non-removable storage 522. Computer storage media may include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information, such as computer readable instructions, data structures, program modules, or other data. The system memory 504, removable storage 520, and non-removable storage 522 are all examples of computer storage media. Thus, computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by the computing device 500. Any such computer storage media may be part of the device 500. The computing device 500 may also have input device(s) 524, such as a keyboard, mouse, pen, voice input device, and touch input devices. Output device(s) 526, such as a display, speakers, and printer, may also be included. These devices are well known in the art and need not be discussed at length. - The computing device 500 may also contain a
communication connection 528 that allows the device to communicate with other computing devices 530, such as over a network. Communication connection(s) 528 is one example of communication media. Communication media may typically be embodied by computer readable instructions, data structures, program modules, or other data in a modulated data signal, such as a carrier wave or other transport mechanism, and includes any information delivery media. - Various modules and techniques may be described herein in the general context of computer-executable instructions, such as program modules, executed by one or more computers or other devices. Generally, program modules include routines, programs, objects, components, data structures, and so forth for performing particular tasks or implementing particular abstract data types. These program modules and the like may be executed as native code or may be downloaded and executed, such as in a virtual machine or other just-in-time compilation execution environment. Typically, the functionality of the program modules may be combined or distributed as desired in various embodiments. An implementation of these modules and techniques may be stored on or transmitted across some form of computer readable media.
- While preferred and alternate embodiments of the invention have been illustrated and described, as noted above, many changes can be made without departing from the spirit and scope of the invention. Accordingly, the scope of the invention is not limited by the disclosure of these preferred and alternate embodiments. Instead, the invention should be determined entirely by reference to the claims that follow.
Claims (20)
Priority Applications (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US11/383,914 US7702183B1 (en) | 2006-05-17 | 2006-05-17 | Methods and systems for the detection of the insertion, removal, and change of objects within a scene through the use of imagery |
| GB0709142A GB2439627B (en) | 2006-05-17 | 2007-05-11 | Methods and systems for the detection of the insertion,removal, and change of objects within a scene through the use of imagery |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US11/383,914 US7702183B1 (en) | 2006-05-17 | 2006-05-17 | Methods and systems for the detection of the insertion, removal, and change of objects within a scene through the use of imagery |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| US7702183B1 US7702183B1 (en) | 2010-04-20 |
| US20100104185A1 true US20100104185A1 (en) | 2010-04-29 |
Family
ID=38219285
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US11/383,914 Active 2027-03-31 US7702183B1 (en) | 2006-05-17 | 2006-05-17 | Methods and systems for the detection of the insertion, removal, and change of objects within a scene through the use of imagery |
Country Status (2)
| Country | Link |
|---|---|
| US (1) | US7702183B1 (en) |
| GB (1) | GB2439627B (en) |
Cited By (13)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20080270976A1 (en) * | 2007-04-27 | 2008-10-30 | International Business Machines Corporation | Management of graphical information notes |
| US20100274487A1 (en) * | 2006-05-17 | 2010-10-28 | Neff Michael G | Route search planner |
| US8928695B2 (en) * | 2012-10-05 | 2015-01-06 | Elwha Llc | Formatting of one or more persistent augmentations in an augmented view in response to multiple input factors |
| US9077647B2 (en) | 2012-10-05 | 2015-07-07 | Elwha Llc | Correlating user reactions with augmentations displayed through augmented views |
| US9105126B2 (en) | 2012-10-05 | 2015-08-11 | Elwha Llc | Systems and methods for sharing augmentation data |
| US9111383B2 (en) | 2012-10-05 | 2015-08-18 | Elwha Llc | Systems and methods for obtaining and using augmentation data and for sharing usage data |
| US9141188B2 (en) | 2012-10-05 | 2015-09-22 | Elwha Llc | Presenting an augmented view in response to acquisition of data inferring user activity |
| US9671863B2 (en) | 2012-10-05 | 2017-06-06 | Elwha Llc | Correlating user reaction with at least an aspect associated with an augmentation of an augmented view |
| US10269179B2 (en) | 2012-10-05 | 2019-04-23 | Elwha Llc | Displaying second augmentations that are based on registered first augmentations |
| US10423169B2 (en) * | 2016-09-09 | 2019-09-24 | Walmart Apollo, Llc | Geographic area monitoring systems and methods utilizing computational sharing across multiple unmanned vehicles |
| US10507918B2 (en) | 2016-09-09 | 2019-12-17 | Walmart Apollo, Llc | Systems and methods to interchangeably couple tool systems with unmanned vehicles |
| US10514691B2 (en) | 2016-09-09 | 2019-12-24 | Walmart Apollo, Llc | Geographic area monitoring systems and methods through interchanging tool systems between unmanned vehicles |
| US10520953B2 (en) | 2016-09-09 | 2019-12-31 | Walmart Apollo, Llc | Geographic area monitoring systems and methods that balance power usage between multiple unmanned vehicles |
Families Citing this family (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| FR2939188B1 (en) * | 2008-12-02 | 2012-12-28 | Mbda France | METHOD AND SYSTEM FOR DETECTING IMPROVISED OR SIMILAR EXPLOSIVE DEVICES |
| JP5428618B2 (en) * | 2009-07-29 | 2014-02-26 | ソニー株式会社 | Image processing apparatus, imaging apparatus, image processing method, and program |
| US9607370B2 (en) | 2014-01-15 | 2017-03-28 | The Boeing Company | System and methods of inspecting an object |
| US9934431B2 (en) * | 2016-07-27 | 2018-04-03 | Konica Minolta Laboratory U.S.A., Inc. | Producing a flowchart object from an image |
| US10169663B2 (en) | 2016-09-01 | 2019-01-01 | The Boeing Company | Scene change detection via multiple sensors |
| US11749074B2 (en) * | 2019-12-13 | 2023-09-05 | Sony Group Corporation | Rescue support in large-scale emergency situations |
| US12120180B2 (en) | 2021-09-10 | 2024-10-15 | Toyota Motor Engineering & Manufacturing North America, Inc. | Determining an existence of a change in a region |
| TWI797787B (en) * | 2021-10-21 | 2023-04-01 | 炳碩生醫股份有限公司 | Device for controlling raman spectrometer |
Citations (15)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US3799676A (en) * | 1972-05-26 | 1974-03-26 | Us Air Force | Optical tracking system |
| US4133003A (en) * | 1977-10-11 | 1979-01-02 | Rca Corporation | Raster registration system for a television camera |
| US5150426A (en) * | 1990-11-20 | 1992-09-22 | Hughes Aircraft Company | Moving target detection method using two-frame subtraction and a two quadrant multiplier |
| US5436672A (en) * | 1994-05-27 | 1995-07-25 | Symah Vision | Video processing system for modifying a zone in successive images |
| US5453840A (en) * | 1991-06-10 | 1995-09-26 | Eastman Kodak Company | Cross correlation image sensor alignment system |
| US5581637A (en) * | 1994-12-09 | 1996-12-03 | Xerox Corporation | System for registering component image tiles in a camera-based scanner device transcribing scene images |
| US5672872A (en) * | 1996-03-19 | 1997-09-30 | Hughes Electronics | FLIR boresight alignment |
| US5809171A (en) * | 1996-01-05 | 1998-09-15 | Mcdonnell Douglas Corporation | Image processing method and apparatus for correlating a test image with a template |
| US6154567A (en) * | 1998-07-01 | 2000-11-28 | Cognex Corporation | Pattern similarity metric for image search, registration, and comparison |
| US6173087B1 (en) * | 1996-11-13 | 2001-01-09 | Sarnoff Corporation | Multi-view image registration with application to mosaicing and lens distortion correction |
| US20030222789A1 (en) * | 2002-03-05 | 2003-12-04 | Leonid Polyakov | System of and method for warning about unauthorized objects attached to vehicle bottoms and/or adjoining areas |
| US20040006424A1 (en) * | 2002-06-28 | 2004-01-08 | Joyce Glenn J. | Control system for tracking and targeting multiple autonomous objects |
| US6798897B1 (en) * | 1999-09-05 | 2004-09-28 | Protrack Ltd. | Real time image registration, motion detection and background replacement using discrete local motion estimation |
| US20060058954A1 (en) * | 2003-10-08 | 2006-03-16 | Haney Philip J | Constrained tracking of ground objects using regional measurements |
| US7103234B2 (en) * | 2001-03-30 | 2006-09-05 | Nec Laboratories America, Inc. | Method for blind cross-spectral image registration |
Family Cites Families (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| DE19729483A1 (en) | 1997-07-10 | 1999-01-14 | Bodenseewerk Geraetetech | Air-born land mine retrieval method |
| US6732050B2 (en) | 2001-05-23 | 2004-05-04 | Nokia Mobile Phones Ltd | Two-stage interacting multiple models filter for use in a global positioning system |
| EP1293925A1 (en) | 2001-09-18 | 2003-03-19 | Agfa-Gevaert | Radiographic scoring method |
| IL155034A0 (en) | 2003-03-23 | 2004-06-20 | M A M D Digital Data Proc Syst | Automatic aerial digital photography and digital data processing systems |
| JP2004325165A (en) | 2003-04-23 | 2004-11-18 | Mitsubishi Electric Corp | Foreign object detection device and method, and mine detection device |
Application filing events:
- 2006-05-17: US application US11/383,914 filed; granted as US7702183B1 (status: Active)
- 2007-05-11: GB application GB0709142A filed; granted as GB2439627B (status: Active)
Patent Citations (19)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US3799676A (en) * | 1972-05-26 | 1974-03-26 | Us Air Force | Optical tracking system |
| US4133003A (en) * | 1977-10-11 | 1979-01-02 | Rca Corporation | Raster registration system for a television camera |
| US5150426A (en) * | 1990-11-20 | 1992-09-22 | Hughes Aircraft Company | Moving target detection method using two-frame subtraction and a two quadrant multiplier |
| US5453840A (en) * | 1991-06-10 | 1995-09-26 | Eastman Kodak Company | Cross correlation image sensor alignment system |
| US5436672A (en) * | 1994-05-27 | 1995-07-25 | Symah Vision | Video processing system for modifying a zone in successive images |
| US5581637A (en) * | 1994-12-09 | 1996-12-03 | Xerox Corporation | System for registering component image tiles in a camera-based scanner device transcribing scene images |
| US5890808A (en) * | 1996-01-05 | 1999-04-06 | Mcdonnell Douglas Corporation | Image processing method and apparatus for correlating a test image with a template |
| US5809171A (en) * | 1996-01-05 | 1998-09-15 | Mcdonnell Douglas Corporation | Image processing method and apparatus for correlating a test image with a template |
| US5946422A (en) * | 1996-01-05 | 1999-08-31 | Mcdonnell Douglas Corporation | Image processing method and apparatus for correlating a test image with a template |
| US5982930A (en) * | 1996-01-05 | 1999-11-09 | Mcdonnell Douglas Corporation | Image processing method and apparatus for correlating a test image with a template |
| US5982945A (en) * | 1996-01-05 | 1999-11-09 | Mcdonnell Douglas Corporation | Image processing method and apparatus for correlating a test image with a template |
| US5672872A (en) * | 1996-03-19 | 1997-09-30 | Hughes Electronics | FLIR boresight alignment |
| US6173087B1 (en) * | 1996-11-13 | 2001-01-09 | Sarnoff Corporation | Multi-view image registration with application to mosaicing and lens distortion correction |
| US6154567A (en) * | 1998-07-01 | 2000-11-28 | Cognex Corporation | Pattern similarity metric for image search, registration, and comparison |
| US6798897B1 (en) * | 1999-09-05 | 2004-09-28 | Protrack Ltd. | Real time image registration, motion detection and background replacement using discrete local motion estimation |
| US7103234B2 (en) * | 2001-03-30 | 2006-09-05 | Nec Laboratories America, Inc. | Method for blind cross-spectral image registration |
| US20030222789A1 (en) * | 2002-03-05 | 2003-12-04 | Leonid Polyakov | System of and method for warning about unauthorized objects attached to vehicle bottoms and/or adjoining areas |
| US20040006424A1 (en) * | 2002-06-28 | 2004-01-08 | Joyce Glenn J. | Control system for tracking and targeting multiple autonomous objects |
| US20060058954A1 (en) * | 2003-10-08 | 2006-03-16 | Haney Philip J | Constrained tracking of ground objects using regional measurements |
Cited By (24)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US9127913B2 (en) * | 2006-05-17 | 2015-09-08 | The Boeing Company | Route search planner |
| US20100274487A1 (en) * | 2006-05-17 | 2010-10-28 | Neff Michael G | Route search planner |
| US8584091B2 (en) * | 2007-04-27 | 2013-11-12 | International Business Machines Corporation | Management of graphical information notes |
| US20080270976A1 (en) * | 2007-04-27 | 2008-10-30 | International Business Machines Corporation | Management of graphical information notes |
| US9674047B2 (en) | 2012-10-05 | 2017-06-06 | Elwha Llc | Correlating user reactions with augmentations displayed through augmented views |
| US9671863B2 (en) | 2012-10-05 | 2017-06-06 | Elwha Llc | Correlating user reaction with at least an aspect associated with an augmentation of an augmented view |
| US9105126B2 (en) | 2012-10-05 | 2015-08-11 | Elwha Llc | Systems and methods for sharing augmentation data |
| US9111383B2 (en) | 2012-10-05 | 2015-08-18 | Elwha Llc | Systems and methods for obtaining and using augmentation data and for sharing usage data |
| US9111384B2 (en) | 2012-10-05 | 2015-08-18 | Elwha Llc | Systems and methods for obtaining and using augmentation data and for sharing usage data |
| US8941689B2 (en) * | 2012-10-05 | 2015-01-27 | Elwha Llc | Formatting of one or more persistent augmentations in an augmented view in response to multiple input factors |
| US9141188B2 (en) | 2012-10-05 | 2015-09-22 | Elwha Llc | Presenting an augmented view in response to acquisition of data inferring user activity |
| US9448623B2 (en) | 2012-10-05 | 2016-09-20 | Elwha Llc | Presenting an augmented view in response to acquisition of data inferring user activity |
| US8928695B2 (en) * | 2012-10-05 | 2015-01-06 | Elwha Llc | Formatting of one or more persistent augmentations in an augmented view in response to multiple input factors |
| US9077647B2 (en) | 2012-10-05 | 2015-07-07 | Elwha Llc | Correlating user reactions with augmentations displayed through augmented views |
| US10180715B2 (en) | 2012-10-05 | 2019-01-15 | Elwha Llc | Correlating user reaction with at least an aspect associated with an augmentation of an augmented view |
| US10254830B2 (en) | 2012-10-05 | 2019-04-09 | Elwha Llc | Correlating user reaction with at least an aspect associated with an augmentation of an augmented view |
| US10269179B2 (en) | 2012-10-05 | 2019-04-23 | Elwha Llc | Displaying second augmentations that are based on registered first augmentations |
| US10713846B2 (en) | 2012-10-05 | 2020-07-14 | Elwha Llc | Systems and methods for sharing augmentation data |
| US10665017B2 (en) | 2012-10-05 | 2020-05-26 | Elwha Llc | Displaying in response to detecting one or more user behaviors one or more second augmentations that are based on one or more registered first augmentations |
| US10514691B2 (en) | 2016-09-09 | 2019-12-24 | Walmart Apollo, Llc | Geographic area monitoring systems and methods through interchanging tool systems between unmanned vehicles |
| US10520938B2 (en) | 2016-09-09 | 2019-12-31 | Walmart Apollo, Llc | Geographic area monitoring systems and methods through interchanging tool systems between unmanned vehicles |
| US10520953B2 (en) | 2016-09-09 | 2019-12-31 | Walmart Apollo, Llc | Geographic area monitoring systems and methods that balance power usage between multiple unmanned vehicles |
| US10507918B2 (en) | 2016-09-09 | 2019-12-17 | Walmart Apollo, Llc | Systems and methods to interchangeably couple tool systems with unmanned vehicles |
| US10423169B2 (en) * | 2016-09-09 | 2019-09-24 | Walmart Apollo, Llc | Geographic area monitoring systems and methods utilizing computational sharing across multiple unmanned vehicles |
Also Published As
| Publication number | Publication date |
|---|---|
| GB2439627B (en) | 2011-10-26 |
| GB2439627A (en) | 2008-01-02 |
| US7702183B1 (en) | 2010-04-20 |
| GB0709142D0 (en) | 2007-06-20 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US7702183B1 (en) | Methods and systems for the detection of the insertion, removal, and change of objects within a scene through the use of imagery | |
| Peng et al. | Wild animal survey using UAS imagery and deep learning: modified Faster R-CNN for kiang detection in Tibetan Plateau | |
| US7505608B2 (en) | Methods and apparatus for adaptive foreground background analysis | |
| US7227975B2 (en) | System and method for analyzing aerial photos | |
| Palace et al. | Amazon forest structure from IKONOS satellite data and the automated characterization of forest canopy properties | |
| Ometto et al. | A biomass map of the Brazilian Amazon from multisource remote sensing | |
| Lizarazo et al. | Automatic mapping of land surface elevation changes from UAV-based imagery | |
| Uezato et al. | A novel spectral unmixing method incorporating spectral variability within endmember classes | |
| Aguilar et al. | Optimizing multiresolution segmentation for extracting plastic greenhouses from WorldView-3 imagery | |
| Stothard et al. | Application of UAVs in the mining industry and towards an integrated UAV-AI-MR technology for mine rehabilitation surveillance | |
| Baur et al. | How to implement drones and machine learning to reduce time, costs, and dangers associated with landmine detection | |
| US7450761B2 (en) | Spectral geographic information system | |
| Wolfe et al. | Hyperspectral analytics in ENVI | |
| Wolfe et al. | Hyperspectral analytics in ENVI: target detection and spectral mapping methods | |
| Inamdar et al. | Implementation of the directly-georeferenced hyperspectral point cloud | |
| Koc-San et al. | A model-based approach for automatic building database updating from high-resolution space imagery | |
| JP2007128141A (en) | System and method for determining road lane number in road image | |
| CN116228782B (en) | Wheat Tian Sui number counting method and device based on unmanned aerial vehicle acquisition | |
| Potter | Mobile laser scanning in forests: Mapping beneath the canopy | |
| EP4102472B1 (en) | Land type-based segmentation for prioritization of search areas | |
| Atarita | Hyperspectral Imaging Simulator and Applications for Unmanned Aerial Vehicles | |
| EP4425445A1 (en) | Remote aerial minefield survey | |
| US20240428579A1 (en) | Methods and systems for image processing | |
| Becker et al. | Reconnaissance of coastal areas using simulated EnMAP data in an ERDAS IMAGINE environment | |
| Loghin | Potential of very high resolution satellite imagery for 3D reconstruction and classification |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | AS | Assignment | Owner name: BOEING COMPANY, THE, ILLINOIS. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNORS: JOHNSON, TED L.; NEFF, MICHAEL G.; REEL/FRAME: 017670/0790. Effective date: 20060517 |
| | FEPP | Fee payment procedure | Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
| | STCF | Information on status: patent grant | Free format text: PATENTED CASE |
| | FPAY | Fee payment | Year of fee payment: 4 |
| | MAFP | Maintenance fee payment | Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552). Year of fee payment: 8 |
| | MAFP | Maintenance fee payment | Free format text: PAYMENT OF MAINTENANCE FEE, 12TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1553); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY. Year of fee payment: 12 |