
IL165556A - System and method for automatic detection of inconstancies of objects in a sequence of images - Google Patents

System and method for automatic detection of inconstancies of objects in a sequence of images

Info

Publication number
IL165556A
IL165556A
Authority
IL
Israel
Prior art keywords
images
image
segment
inconstancy
segments
Prior art date
Application number
IL165556A
Other languages
Hebrew (he)
Other versions
IL165556A0 (en)
Inventor
Yair Shimoni
Chen Brestel
Original Assignee
Yair Shimoni
Electro Optics Ind Ltd
Chen Brestel
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yair Shimoni, Electro Optics Ind Ltd, Chen Brestel filed Critical Yair Shimoni
Priority to IL165556A priority Critical patent/IL165556A/en
Priority to PCT/IL2005/001298 priority patent/WO2006059337A2/en
Publication of IL165556A0 publication Critical patent/IL165556A0/en
Publication of IL165556A publication Critical patent/IL165556A/en

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/13Satellite images
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30241Trajectory

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Astronomy & Astrophysics (AREA)
  • Remote Sensing (AREA)
  • Image Analysis (AREA)

Description

SYSTEM AND METHOD FOR AUTOMATIC DETECTION OF INCONSTANCIES OF OBJECTS IN A SEQUENCE OF IMAGES Ref. 002427IL Applicant: ELOP Electro-Optics Industries Ltd.
Borochov, Korakh, Eliezri & Co.
SYSTEM AND METHOD FOR AUTOMATIC DETECTION OF INCONSTANCIES OF OBJECTS IN A SEQUENCE OF IMAGES FIELD OF THE DISCLOSED TECHNIQUE The disclosed technique relates to image processing, in general, and to methods of detecting inconstancies of objects in an image, in particular.
BACKGROUND OF THE DISCLOSED TECHNIQUE It is common practice to observe aerial and satellite images to determine the appearance or disappearance of an object or objects in a sequence of images of a scene, including disappearance in one part of the scene and appearance in another part. It is often desired to use an automatic technique to detect such appearances or disappearances. Detecting the appearance or disappearance of an object or objects in a sequence of images of a scene will be referred to hereinafter as inconstancy detection or detecting inconstancies.
Some image sequences may be acquired differently. These differences may consist of the time interval over which the images of a scene were acquired; this interval may be on the order of days or even years. The type of image, such as aerial or satellite, may differ. The devices by which the images were acquired, such as cameras, may differ as well. Image sequences that were acquired differently may cause complex scenes, which may include rocks, bushes, buildings and vehicles, to appear very different in two different images. These differences may be semantic (i.e., meaningful objects may have appeared or disappeared). They may also arise from different light conditions, changes in shadows, changes of viewpoint, seasonal changes of the scene and distortions of the scene. Some image sequences may be acquired by different imaging devices. These devices may employ different types of sensors: sensors operative at different spectrums (e.g., one imaging device is operative at the visible light spectrum and another imaging device is operative at the infrared spectrum), sensors operative at different resolutions, sensors from different vendors or sensors of different models. These differences, in image acquisition devices or in image acquisition time, may result in discrepancies in the images.
Inconstancy detection relates closely to the field of change detection, known in the art. Change detection is directed to comparing different images of the same scene and detecting regions in these images in which change has occurred. These different images are either acquired at different times or by different image acquisition devices. Techniques for detecting changes between two images, which are known in the art, are based on examining the properties of individual picture elements (i.e., pixels). A decision is made whether change has occurred in a pixel. The publication "Image Change Detection Algorithms: A Systematic Survey" by Radke et al. provides an overview of such techniques. It can be found at the following address: http://www.ecse.rpi.edu/homepages/rjradke/papers/radketip04.pdf It is often advantageous to detect changes based on examining a region or regions in the images. Image segmentation is a technique that divides the image into regions (i.e., segments) based on colour, texture, brightness and other properties. Each segment may represent a meaningful object in the image, such as a building, a car or a field. Multi-scale segmentation is a technique of creating multiple segmentations of the image, each at a different scale (i.e., the segmentation is finer or coarser with respect to segment size). These techniques measure the differences between a segment in one image and a corresponding segment in another image. The system may interpret (i.e., classify) the differences as changes, for example in vegetation growth.
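The pixel-based approach surveyed by Radke et al. can be sketched minimally as follows. The two images are assumed to be co-registered greyscale NumPy arrays, and the threshold value is illustrative, not taken from the publication:

```python
import numpy as np

def pixel_change_mask(img_a, img_b, threshold=30):
    """Flag pixels whose absolute intensity difference exceeds a threshold.

    img_a, img_b: 2-D arrays of the same co-registered scene.
    Returns a boolean mask marking pixels in which change is declared.
    """
    diff = np.abs(img_a.astype(np.float64) - img_b.astype(np.float64))
    return diff > threshold
```

A per-pixel decision of this kind is sensitive to light-level and sensor discrepancies, which is precisely the weakness that region-based (segment-based) comparison is meant to address.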
European patent 1217580 issued to Kim et al., entitled "Method And Apparatus For Measuring Colour-Texture Distance And Image Segmentation Based On Said Measure", is directed to a system and method for multi-scale segmentation. The system pre-processes the image and calculates a colour measure and a texture measure for each pixel. The system calculates a colour distance and a texture distance between two pixels, and adds the colour distance and the texture distance to form the colour-texture distance between the two pixels. The system considers two pixels as belonging to the same segment if their colour-texture distance does not exceed a certain threshold. A large threshold value will cause a coarser segmentation (i.e., the segments will be larger). The system creates an image graph describing the relationship between the segments. The image graph further contains information about each segment. The system refines the segmentation by merging neighbouring segments with similar colour-texture distances based on a second threshold, and updates the image graph.
The publication "Comparison Of Object Oriented Classification Techniques And Standard Image Analysis For The Use Of Change Detection Between SPOT Multispectral Satellite Images And Aerial Photos" by G. Willhauck, in ISPRS Vol. XXXIII, 2000, describes the use of multi-scale segmentation for the purpose of change detection. Specifically, the publication is directed at a method for detecting the areas of the temperate forest in Tierra del Fuego, Argentina, deforested since nineteen sixty. The technique uses commercial computer software, three recent satellite images and one aerial photo from nineteen sixty. The technique uses the computer software to segment the aerial image from nineteen sixty into two regions, and classifies the segments as forest and non-forest. The software segments the recent satellite images at a finer scale, based on the coarser segmentation of the aerial image. The software classifies segments of non-forest originating from a forest segment as deforested areas. Thus, changes between the image acquired in nineteen sixty and the current images are detected.
SUMMARY OF THE PRESENT DISCLOSED TECHNIQUE It is an object of the disclosed technique, to provide a novel system and method for detecting inconstancies between images of the same scene. In accordance with an aspect of the disclosed technique, there is thus provided a system for detecting inconstant representations of objects across a plurality of images of substantially the same scene. The system includes an image segmentor and an inconstancy detector. The image segmentor is coupled with the inconstancy detector. The image segmentor segments each of the images, into a plurality of segments, thereby producing a respective segmentation representation. The inconstancy detector detects segment inconstancy between the segmentation representation of one of the images and the segmentation representation of at least another image, thereby identifying inconstant segments. Segment inconstancy is defined by the existence of a certain segment, at a certain location, in one image and the inexistence of a substantially similar segment or of a group of pixels which can be used to define a substantially similar segment, at essentially the same location, in the other images.
According to another aspect of the disclosed technique there is thus provided a method for detecting inconstant representations of objects across a plurality of images of substantially the same scene. The method includes the procedures of segmenting each of the images into a plurality of segments, thereby producing a respective segmentation representation, and detecting segment inconstancy. Segment inconstancy is detected between the segmentation representation of one of the images and the segmentation representation of at least another image. The procedure of detecting segment inconstancy identifies inconstant segments. Segment inconstancy is defined by the existence of a certain segment, at a certain location, in one image and the inexistence of a substantially similar segment, or of a group of pixels which can be used to define a substantially similar segment, at essentially the same location, in the other images.
According to a further aspect of the disclosed technique there is thus provided a system for detecting inconstant representations of objects across a plurality of images of substantially the same scene. The system includes an image segmentor, a segment identifier and an inconstancy detector. The segment identifier is coupled with the image segmentor and with the inconstancy detector. The image segmentor segments each of the images into a plurality of segments, thereby producing a respective segmentation representation. The segment identifier identifies, in each of the images, segments of interest with essentially the same segment characteristics. The inconstancy detector detects inconstancy between the segmentation representation of one of the images and the segmentation representation of at least another image, thereby identifying inconstant segments. Segment inconstancy is defined by the existence of a certain segment of interest, at a certain location, in one image and the inexistence of a substantially similar segment, or of a group of pixels which can be used to define a respective segment substantially similar to the certain segment of interest, at essentially the same location, in the other images.
According to another aspect of the disclosed technique there is thus provided a method for detecting inconstant representations of objects across a plurality of images of substantially the same scene. The method includes the procedures of segmenting each of the images into a plurality of segments, thereby producing a respective segmentation representation, identifying segments of interest with essentially the same segment characteristics in each of the images, and detecting segment inconstancy. Segment inconstancy is detected between the segmentation representation of one of the images and the segmentation representation of at least another image. The procedure of detecting segment inconstancy identifies inconstant segments. Segment inconstancy is defined by the existence of a certain segment, at a certain location, in one image and the inexistence of a substantially similar segment, or of a group of pixels which can be used to define a substantially similar segment, at essentially the same location, in the other images.
BRIEF DESCRIPTION OF THE DRAWINGS The disclosed technique will be understood and appreciated more fully from the following detailed description taken in conjunction with the drawings in which: Figure 1 is a schematic illustration of a system for detecting inconstancy between images, constructed and operative in accordance with an embodiment of the disclosed technique; Figure 2 is a schematic illustration of a method for detecting inconstancy between images, constructed and operative in accordance with another embodiment of the disclosed technique; Figure 3 is a schematic illustration of a system for detecting inconstancy between images, constructed and operative in accordance with a further embodiment of the disclosed technique; Figure 4 is a schematic illustration of a method for detecting inconstancy between images, constructed and operative in accordance with another embodiment of the disclosed technique; Figures 5A and 6A are illustrations of two images, each acquired at a different time, to be analyzed, according to the disclosed technique; Figures 5B and 6B are illustrations of respective segmentations of the images of Figures 5A and 6A, at certain segmentation levels, according to the disclosed technique; Figures 5C and 6C provide object images which demonstrate the results of inconstancy analysis and detection, according to the disclosed technique; and Figure 7 is the same image as in Figure 6C, with the circles from Figure 5C superimposed, according to the disclosed technique.
DETAILED DESCRIPTION OF THE EMBODIMENTS The disclosed technique overcomes the disadvantages of the prior art by providing a system and method for automatically detecting inconstancies of selected segments, representing objects in a scene, in a sequence of multi-scale segmented images. The disclosed technique compares selected segments in one image with segments from multiple segmentation scales in another image.
The system according to the disclosed technique, detects inconstancies between images, which were acquired at different time instances or by different imaging devices. The system may further detect inconstancies between an image, of substantially the same scene, acquired at one time instance with a first imaging device, and images acquired at other time instances with at least a second imaging device. For example, one image may be acquired by a camera employing a first type of sensor (e.g., a visible light sensor) at a first point in time, and another image may be acquired by a camera employing a second type of sensor (e.g., an infrared sensor) at a second point in time. The different imaging devices may employ different types of sensors. These different types of sensors may be sensors operative at different spectrums (e.g., one imaging device is operative at the visible light spectrum and another imaging device is operative at the infrared spectrum), sensors that are operative with different resolutions, sensors from different vendors or sensors of different models.
The disclosed technique may be adapted for civil or military purposes (e.g., detecting changes in general scenery), medical purposes (e.g., detecting changes in tissues), industrial purposes (e.g., detecting the appearance over time of failures such as cracks in structures using X-ray imaging), and the like. Accordingly, the types of imaging system (i.e., and the respective types of sensors used) which may be employed by the disclosed technique for acquiring these images, can be visible light imaging systems, near infrared imaging systems, microbolometer imaging systems, ultraviolet imaging systems, X-ray imaging systems, MRI imaging systems, ultrasound imaging systems, and the like. The different images, in the image sequences, may include discrepancies in image properties, such as contrast, luminance, chrominance, resolution, and the like. However, actual objects which appear in the same place in the scene, in different images acquired by different imaging devices, are likely to result in substantially similar segments in the respective segmented images, regardless of the acquiring imaging device. The similarities of the segments are based on certain segment characteristics (e.g., color, size, shape and texture).
A system, according to an embodiment of the disclosed technique, initially co-registers the images, to approximately align the image coordinate systems. The system may further smooth the images. The system segments the images multiple times, each at a different scale. Each segment may represent a meaningful object in the scene. The system attempts to identify, for each segment at each segmentation scale, a corresponding segment from any of the segmentation scales in the other images. The system disregards segments which agree in location, shape and size. The system further disregards segments in one image for which the pixels of the segment are correlated with a group of pixels which can be used to define a substantially similar segment at essentially the same location in the other images (i.e., a segment does not necessarily exist in the respective area of the other image). The system retains the segments for which a corresponding segment was not identified. The system further retains segments for which the pixels of the segment are not correlated with a group of pixels in the respective area of the other images. The system selects segments from the retained segments according to color, texture, shape and size criteria. The system declares the selected segments inconstant.
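The segment-matching step described above might be sketched as follows, assuming a hypothetical segment representation holding a centroid and a pixel count; the representation and the tolerance values are assumptions for illustration, not part of the disclosed system:

```python
def find_corresponding_segment(seg, other_scales, loc_tol=10.0, size_tol=0.3):
    """Search every segmentation scale of the other image for a segment
    whose centroid and pixel count roughly match `seg`.

    seg: dict with 'centroid' (x, y) and 'area' (pixel count).
    other_scales: list of segment lists, one list per segmentation scale.
    Returns the first match, or None; an unmatched segment is a
    candidate inconstancy, pending the pixel-correlation check.
    """
    sx, sy = seg['centroid']
    for scale in other_scales:
        for cand in scale:
            cx, cy = cand['centroid']
            close = ((sx - cx) ** 2 + (sy - cy) ** 2) ** 0.5 <= loc_tol
            similar = abs(cand['area'] - seg['area']) <= size_tol * seg['area']
            if close and similar:
                return cand
    return None
```

Searching all scales of the other image is what distinguishes this scheme from single-scale change detection: an object segmented coarsely in one image may only appear at a finer scale in the other.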
The system described above is able to find inconstancies between images which were acquired by different imaging devices, different sensors or different sensor types. The different images may include many differences in contrast, in light level, in resolution and in other image properties, but real objects in the scene cause the segmentor to create similar segments regardless of the imaging device.
According to another embodiment of the disclosed technique, after segmenting the images, the system selects segments of interest according to color, size, shape and texture criteria. The system disregards segments to which a corresponding segment was identified in the other images. The system further disregards segments in one image, for which the pixels of the segment are correlated with a group of pixels, which can be used to define a substantially similar segment at essentially the same location in the other images.
Reference is now made to Figure 1, which is a schematic illustration of a system, generally referenced 100, constructed and operative in accordance with an embodiment of the disclosed technique. System 100 receives a sequence of images as its input and outputs a sequence of images indicating the inconstant objects in the scene. System 100 includes a pre-processor 102, an image segmentor 104, an inconstancy detector 106 and a segments identifier 108. Pre-processor 102 is coupled with image segmentor 104. Image segmentor 104 is coupled with inconstancy detector 106. Inconstancy detector 106 is coupled with segments identifier 108.
Pre-processor 102 receives a sequence of images and performs operations (e.g., co-registering, smoothing and enhancing) which are required to prepare the images for inconstancy detection. Pre-processor 102 provides the prepared images to image segmentor 104. Image segmentor 104 segments the images at multiple segmentation scales and provides the segmented images to inconstancy detector 106. Inconstancy detector 106 declares each segment constant or inconstant, and provides the segmented images, with the inconstant segments designated, to segments identifier 108. Segments identifier 108 identifies segments of interest from each of the images and provides a representation indicating the inconstant segments.
Reference is now made to Figure 2, which is a schematic illustration of a method for detecting inconstancy between images, operative in accordance with another embodiment of the disclosed technique. In procedure 120, a plurality of images to be compared for inconstancies are pre-processed. Such pre-processing is directed at preparing the images for inconstancy detection. For example, a pre-processing sub-procedure may include co-registration, which is aimed at aligning the image coordinate systems. Pre-processing may further include a smoothing sub-procedure, aimed at removing noise artifacts from the images. Pre-processing may further include distortion correction, enhancement and any other operation required to prepare the images for inconstancy detection. With reference to Figure 1, pre-processor 102 pre-processes a plurality of images to be compared for inconstancies.
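The smoothing sub-procedure might look like the following sketch, which uses a simple 3x3 box filter as a stand-in for whatever smoothing the system actually applies:

```python
import numpy as np

def smooth(image, passes=1):
    """Suppress noise with one or more 3x3 mean-filter passes.

    A box filter is an illustrative stand-in; the disclosed technique
    does not specify a particular smoothing operator. Edge pixels are
    handled by replicating the border.
    """
    img = np.asarray(image, dtype=np.float64)
    for _ in range(passes):
        padded = np.pad(img, 1, mode='edge')
        # Average the 9 shifted views of the padded image.
        img = sum(padded[i:i + img.shape[0], j:j + img.shape[1]]
                  for i in range(3) for j in range(3)) / 9.0
    return img
```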
In procedure 122, a plurality of prepared images are segmented. Such segmentation is aimed at dividing the images into regions (i.e., segments). The images are segmented a plurality of times, each at a different scale. A segment at any scale may represent a meaningful object in the scene. With reference to Figure 1, image segmentor 104 segments the prepared images at a plurality of different scales. In procedure 124, for each segment in one image, an attempt is made to identify a corresponding segment in the other images. The attempt to identify a corresponding segment is made at any of the segmentation scales. A segment is identified if a corresponding segment in the other images exists in essentially the same location, with essentially the same segment characteristics as the selected segment. With reference to Figure 1, inconstancy detector 106 attempts to identify a corresponding segment in the other images.
In procedure 126, for each segment in one image, an attempt is made to identify, in the other images, a group of pixels, correlated with the pixels of the segment, which can be used to define a substantially similar segment at essentially the same location. A segment is identified if a group of pixels in the other images, correlated with the pixels of the segment, exists in essentially the same location as the segment. With reference to Figure 1, inconstancy detector 106 attempts to identify such a group of pixels in the other images.
In procedure 128, the segments in one image for which a corresponding segment, agreeing in location, color, shape, size and texture, does not exist in the other images, are retained. The segments in one image for which neither a corresponding segment nor a group of pixels correlated with the pixels of the segment was identified in the other images are further retained. With reference to Figure 1, inconstancy detector 106 retains the segments in one image for which a corresponding segment in the other images was not identified. In procedure 130, segments of interest are selected from the retained segments. The segments are selected according to segment characteristics, which may include color, size, shape and texture. With reference to Figure 1, segments identifier 108 selects the segments of interest.
In procedure 132, the selected segments are declared inconstant. With reference to Figure 1 , segments identifier 108 declares the selected segments inconstant.
In procedure 134, the inconstant objects are represented. The inconstant objects may be represented as a list, marked on an image or a map, alerted for, displayed on a video monitor or saved in a computer memory. With reference to Figure 1, segments identifier 108 provides a representation of the inconstant objects.
In order to reduce computational complexity, the system, according to a further embodiment of the disclosed technique, may first select segments of interest. The system then detects if these segments of interest are constant or not.
Reference is now made to Figure 3, which is a schematic illustration of a system, generally referenced 160, constructed and operative in accordance with a further embodiment of the disclosed technique. System 160 receives a sequence of images as its input and outputs a sequence of images indicating the inconstant objects in the scene. System 160 includes a pre-processor 162, an image segmentor 164, a segments identifier 166 and an inconstancy detector 168. Pre-processor 162 is coupled with image segmentor 164. Image segmentor 164 is coupled with segments identifier 166. Segments identifier 166 is coupled with inconstancy detector 168.
Pre-processor 162 receives a sequence of images and performs operations (e.g., co-registering, smoothing and enhancing) which are required to prepare the images for inconstancy detection. Pre-processor 162 provides the prepared images to image segmentor 164. Image segmentor 164 segments the images at multiple segmentation scales and provides the segmented images to segments identifier 166. Segments identifier 166 identifies segments of interest from each of the images and provides the segmented images, with the segments of interest designated, to inconstancy detector 168. Inconstancy detector 168 declares each segment constant or inconstant, and provides a representation indicating the inconstant segments.
Reference is now made to Figure 4, which is a schematic illustration of a method for detecting inconstancy between images, operative in accordance with another embodiment of the disclosed technique.
In procedure 180, a plurality of images to be compared for inconstancies are pre-processed. Such pre-processing is directed at preparing the images for inconstancy detection. For example, a pre-processing sub-procedure may include co-registration, which is aimed at aligning the image coordinate systems. Pre-processing may further include a smoothing sub-procedure, aimed at removing noise artifacts from the images. Pre-processing may further include distortion correction, enhancement and any other operation required to prepare the images for inconstancy detection. With reference to Figure 3, pre-processor 162 pre-processes a plurality of images to be compared for inconstancies.
In procedure 182, a plurality of prepared images are segmented. Such segmentation is aimed at dividing the images into regions (i.e., segments). The images are segmented a plurality of times, each at a different scale. A segment at any scale may represent a meaningful object in the scene. With reference to Figure 3, image segmentor 164 segments the prepared images at a plurality of different scales.
In procedure 184, segments of interest are selected in the multi-scale segmented images. These segments are selected from any of the segmentation scales. The segments are selected according to the segment characteristics. The segments characteristics may include color, shape, size and texture. With reference to Figure 3, segments identifier 166 selects the segments of interest from a plurality of segmentation scales, according to segment characteristics.
In procedure 186, an attempt is made to identify, for each selected segment of interest in one image, a corresponding group of pixels which can be used to define a substantially similar segment, at essentially the same location, in the other images. The group of pixels may be a segment in the other image. The attempt to identify a corresponding segment is based on the location of the segments in the images, on the characteristics of the segments, and on the correlation between the pixels of the segments. With reference to Figure 3, segments identifier 166 attempts to identify, for each selected segment of interest in one image, a corresponding selected segment in the other images.
In procedure 188, a selected segment of interest is disregarded in one image, if a corresponding segment is identified in the other images. The segment is disregarded, if the corresponding segment in the other image is identified in essentially the same location with essentially the same segment characteristics as the selected segment. The corresponding segment may be a selected segment of interest. The segment characteristics may include color, size, shape and texture. With reference to Figure 3, inconstancy detector 168 disregards a segment according to the compared segments location and characteristics.
In procedure 190, a selected segment in one image is disregarded, if the pixels of the segment are correlated with a group of pixels used to define a substantially similar segment, at essentially the same location, in the other image. A segment is disregarded, if a group of pixels in the other images, correlated with the pixels of the segment, exists in essentially the same location of the segment. With reference to Figure 3, inconstancy detector 168 disregards a segment according to compared segments location and pixel correlation.
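The pixel-correlation test of procedures 126 and 190 can be sketched with a normalised correlation coefficient; the 0.8 acceptance threshold is an illustrative assumption, not a value given by the disclosed technique:

```python
import numpy as np

def pixels_correlated(seg_pixels, other_pixels, threshold=0.8):
    """Normalised correlation between a segment's pixel values and the
    pixel values at essentially the same location in the other image.

    A high correlation suggests the object is still present, even if the
    other image's segmentation did not produce a matching segment there,
    so the segment should be disregarded rather than declared inconstant.
    """
    a = np.asarray(seg_pixels, dtype=np.float64).ravel()
    b = np.asarray(other_pixels, dtype=np.float64).ravel()
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    if denom == 0:
        return False  # flat regions carry no correlation evidence
    return (a * b).sum() / denom >= threshold
```

Because the coefficient is invariant to offset and gain, this check tolerates the contrast and light-level discrepancies expected between different imaging devices.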
In procedure 192, a selected segment is declared inconstant, if a corresponding selected segment, which agrees in location, color, shape and size, was not identified in the other images. A selected segment is further declared inconstant if a group of pixels, correlated with the pixels of the segment was not identified in essentially the same location in the other images. With reference to Figure 3, inconstancy detector 168 declares a segment inconstant.
In procedure 194, the inconstant objects are represented. The inconstant objects may be represented as a list, marked on an image or a map, alerted for, displayed on a video monitor or saved in a computer memory. With reference to Figure 3, inconstancy detector 168 provides a representation of the inconstant objects.
Reference is now made to Figures 5A, 5B, 5C, 6A, 6B, 6C and 7. Figures 5A and 6A are illustrations of two images, each acquired at a different time, to be analyzed, according to the disclosed technique. Each of the images of Figures 5A and 6A exhibits objects that are candidates for inconstancy detection. Both images exhibit a highway section with surrounding scene, and some vehicle traffic.
Figures 5B and 6B are illustrations of respective segmentations of the images of Figures 5A and 6A, at certain segmentation levels. It is noted that the respective segmentation levels of the images, can be identical or different. Some of the segments represent the vehicles on the highway section.
Figures 5C and 6C provide object images which demonstrate the results of inconstancy analysis and detection, according to the disclosed technique. The detection is presented over the images of Figures 5A and 6A. The objects marked (i.e., by circles) in Figure 5C are objects in Figure 5A which exhibit a change, with respect to the objects of Figure 6A. The objects marked (i.e., by circles) in Figure 6C are objects in Figure 6A which exhibit change, with respect to the objects of Figure 5A.
Figure 7 is the same image as Figure 6C, with the circles from Figure 5C superimposed. Circle 214 represents circle 210 of Figure 5C. Circle 216 represents circle 212 of Figure 6C. Circles 214 and 216 are in close proximity to one another. Therefore, the object marked by circles 210 and 212 may be declared constant in both images. The rest of the objects marked in Figure 5C are declared inconstant with respect to Figure 6C. Similarly, all the objects in Figure 6C, excluding the object marked by circle 212, are declared inconstant with respect to Figure 5C.
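The superimposition logic of Figure 7 can be sketched as a proximity test between the circle centers marked in the two images: pairs of centers closer than a threshold mark the same (constant) object, and the unmatched remainder mark inconstant objects. The 5-pixel threshold is a hypothetical value.

```python
import math

def split_by_proximity(centers_a, centers_b, proximity=5.0):
    """Sketch of the Figure 7 logic: pair up circle centers from the two
    images that lie in close proximity (constant objects) and collect the
    unmatched centers of each image (inconstant objects)."""
    constant_pairs = [(a, b)
                      for a in centers_a for b in centers_b
                      if math.dist(a, b) <= proximity]
    matched_a = {a for a, _ in constant_pairs}
    matched_b = {b for _, b in constant_pairs}
    inconstant_a = [a for a in centers_a if a not in matched_a]
    inconstant_b = [b for b in centers_b if b not in matched_b]
    return constant_pairs, inconstant_a, inconstant_b
```

With the centers of circles 210 and 212 as inputs, the pair would land in `constant_pairs`, mirroring the declaration of a constant object in both images.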
It will be appreciated by persons skilled in the art that the disclosed technique is not limited to what has been particularly shown and described hereinabove. Rather, the scope of the disclosed technique is defined only by the claims which follow.

Claims (45)

1. System for detecting inconstant representations of objects across a plurality of images of substantially the same scene, the system comprising: an image segmentor for segmenting each of said images, into a plurality of segments, thereby producing a respective segmentation representation; an inconstancy detector coupled with said image segmentor, detecting inconstant objects in said scene according to segment inconstancy between the segmentation representation of one of said images and the segmentation representation of at least another of said images, thereby identifying inconstant segments, said inconstancy detector including a comparator and a correlator, said comparator comparing each segment in one image with segments at substantially the same location in the other image, said correlator correlating the pixels of one segment in one image with a group of pixels at substantially the same location in the other image, said segment inconstancy is defined by the existence of a certain segment, at a certain location, in said one image and the inexistence of a group of pixels which can be used to define a substantially similar segment, at essentially the same location, in said at least other image, said inexistence of a group of pixels is determined according to the output of at least one of said correlator and said comparator.
2. The system according to claim 1, wherein said segment inconstancy is further defined by the existence of a certain segment, at a certain location, in said one image and the inexistence of a substantially similar segment, at essentially the same location, in said at least other image.
3. The system according to claim 1, wherein said inconstancy detector is further operative for detecting segment inconstancy between the segmentation representation of said at least other image and said one image, thereby identifying inconstant segments.
4. The system according to claim 1, wherein said image segmentor produces said segmentation representation, at multiple segmentation levels.
5. The system according to claim 1, further comprising a preprocessor, coupled with said image segmentor for preparing said images for inconstancy detection, wherein said preparing includes at least one of smoothing said images and registering said images.
6. The system according to claim 1, further comprising a segment identifier, coupled with said inconstancy detector for identifying segments of interest from said inconstant segments.
7. The system according to claim 6, wherein said segments of interest are identified according to at least one characteristic, selected from the group consisting of: size; color; texture; and shape.
8. The system according to claim 1, wherein said segment inconstancy is further defined according to at least one characteristic, selected from the group consisting of: size; color; texture; shape; and correlation.
9. The system according to claim 1, wherein said at least one of said plurality of images was acquired by a first image acquisition device and at least another of said plurality of images was acquired by a second image acquisition device.
10. The system according to claim 9, wherein said first image acquisition device employs a first type of sensor and said second image acquisition device employs at least a second type of sensor.
11. The system according to claim 1, wherein said at least one of said plurality of images was acquired at a first time instance and at least a second of said plurality of images was acquired at a second time instance.
12. A method for detecting inconstant representations of objects across a plurality of images of substantially the same scene, the method comprising the procedures: acquiring at least two images, each image being associated with a different respective acquisition time; segmenting each of said images, into a plurality of segments, thereby producing a respective segmentation representation; detecting inconstant objects in said scene according to segment inconstancy between the segmentation representation of one of said images and the segmentation representation of at least another of said images, thereby identifying inconstant segments, wherein said segment inconstancy is defined by the existence of a certain segment, at a certain location, in said one image and the inexistence of a group of pixels which can be used to define a substantially similar segment, at essentially the same location, in said at least one other image.
13. The method according to claim 12, wherein said segment inconstancy is further defined by the existence of a certain segment, at a certain location, in said one image and the inexistence of a substantially similar segment, at essentially the same location, in said at least other image.
14. The method according to claim 12, wherein said procedure of detecting segment inconstancy is further performed between the segmentation representations of said at least other image and said one image.
15. The method according to claim 12, wherein said segmenting produces said segmentation representation, at multiple segmentation levels.
16. The method according to claim 12, further comprising the procedure of preprocessing for preparing said images for inconstancy detection, wherein said preparing includes at least one of smoothing said images and registering said images.
17. The method according to claim 12, further comprising the procedure of identifying segments of interest from said inconstant segments.
18. The method according to claim 17, wherein said segments of interest are identified according to at least one characteristic, selected from the group consisting of: size; color; texture; and shape.
19. The method according to claim 12, wherein said segment inconstancy is further defined according to at least one characteristic, selected from the group consisting of: size; color; texture; shape; and correlation.
20. The method according to claim 12, wherein said at least one of said plurality of images was acquired by a first image acquisition device and at least another of said plurality of images was acquired by a second image acquisition device.
21. The method according to claim 20, wherein said first image acquisition device employs a first type of sensor and said second image acquisition device employs at least a second type of sensor.
22. The method according to claim 12, wherein said at least one of said plurality of images was acquired at a first time instance and at least a second of said plurality of images was acquired at a second time instance.
23. System for detecting inconstant representations of objects across a plurality of images of substantially the same scene, each of the images acquired at a different time, the system comprising: an image segmentor for segmenting each of said images, into a plurality of segments, thereby producing a respective segmentation representation; a segments identifier coupled with said image segmentor for identifying, in each of said images, segments of interest, with essentially the same segment characteristics; an inconstancy detector, coupled with said segments identifier for detecting inconstant objects in said scene according to inconstancy between the segmentation representation of one of said images and the segmentation representation of at least another of said images, thereby identifying inconstant segments, said inconstancy detector including a comparator and a correlator, said comparator comparing each segment in one image with segments at substantially the same location in the other image, said correlator correlating the pixels of one segment in one image with a group of pixels at substantially the same location in the other image, said segment inconstancy is defined by the existence of a certain segment of interest, at a certain location, in said one image and the inexistence of a group of pixels which can be used to define a substantially similar segment, at essentially the same location, in said at least other image, said inexistence of a group of pixels is determined according to the output of at least one of said correlator and said comparator.
24. The system according to claim 23, wherein said segment inconstancy is further defined by the existence of a certain segment of interest, at a certain location, in said one image and the inexistence of a substantially similar segment, at essentially the same location, in said at least other image.
25. The system according to claim 23, wherein said substantially similar segment is a segment of interest.
26. The system according to claim 23, wherein said inconstancy detector is further operative for detecting segment inconstancy between the segmentation representations of said at least other image and said one image, thereby identifying inconstant segments.
27. The system according to claim 23, wherein said image segmentor produces said segmentation representation, at multiple segmentation levels.
28. The system according to claim 23, further comprising a preprocessor, coupled with said image segmentor for preparing said images for inconstancy detection, wherein said preparing includes at least one of smoothing said images and registering said images.
29. The system according to claim 23, wherein said segments of interest are identified according to at least one characteristic, selected from the group consisting of: size; color; texture; and shape.
30. The system according to claim 23, wherein said segment inconstancy is further defined according to at least one characteristic, selected from the group consisting of: size; color; texture; shape; and correlation.
31. The system according to claim 23, wherein said at least one of said plurality of images was acquired by a first image acquisition device and at least another of said plurality of images was acquired by a second image acquisition device.
32. The system according to claim 31, wherein said first image acquisition device employs a first type of sensor and said second image acquisition device employs at least a second type of sensor.
33. The system according to claim 23, wherein said at least one of said plurality of images was acquired at a first time instance and at least a second of said plurality of images was acquired at a second time instance.
34. A method for detecting inconstant representations of objects across a plurality of images of substantially the same scene, the method comprising the procedures: acquiring at least two images, each image being associated with a different respective acquisition time; segmenting each of said images into a plurality of segments, thereby producing a respective segmentation representation; identifying segments of interest, with essentially the same segment characteristics, in each of said plurality of images; detecting inconstant objects in said scene according to segment inconstancy of said identified segments between the segmentation representation of one of said images and the segmentation representation of at least another of said images, thereby identifying inconstant segments, wherein said segment inconstancy is defined by the existence of a segment of interest, at a certain location, in said one image and the inexistence of a group of pixels which can be used to define a substantially similar segment, at essentially the same location, in said at least other image.
35. The method according to claim 34, wherein said segment inconstancy is further defined by the existence of a certain segment of interest, at a certain location, in said one image and the inexistence of a substantially similar segment, at essentially the same location, in said at least other image.
36. The method according to claim 35, wherein said substantially similar segment is a segment of interest.
37. The method according to claim 34, wherein said procedure of detecting segment inconstancy is further performed between the segmentation representations of said at least other image, and said one image.
38. The method according to claim 34, wherein said image segmenting produces said segmentation representation, at multiple segmentation levels.
39. The method according to claim 34, wherein said detecting segment inconstancy produces segments, detected as inconstant, in said images.
40. The method according to claim 34, further comprising the procedure of preprocessing for preparing said images for inconstancy detection, wherein said preparing includes at least one of smoothing said images and registering said images.
41. The method according to claim 34, wherein said segments of interest are identified according to at least one characteristic, selected from the group consisting of: size; color; texture; and shape.
42. The method according to claim 34, wherein said segment inconstancy is further defined according to at least one characteristic, selected from the group consisting of: size; color; texture; shape; and correlation.
43. The method according to claim 34, wherein said at least one of said plurality of images was acquired by a first image acquisition device and at least another of said plurality of images was acquired by a second image acquisition device.
44. The method according to claim 43, wherein said first image acquisition device employs a first type of sensor and said second image acquisition device employs at least a second type of sensor.
45. The method according to claim 34, wherein said at least one of said plurality of images was acquired at a first time instance and at least a second of said plurality of images was acquired at a second time instance. Agent for the Applicant: Amnon Yaacobi, Patent Attorney Borochov Korakh & Co.
IL165556A 2004-12-05 2004-12-05 System and method for automatic detection of inconstancies of objects in a sequence of images IL165556A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
IL165556A IL165556A (en) 2004-12-05 2004-12-05 System and method for automatic detection of inconstancies of objects in a sequence of images
PCT/IL2005/001298 WO2006059337A2 (en) 2004-12-05 2005-12-04 System and method for automatic detection of inconstancies of objects in a sequence of images

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
IL165556A IL165556A (en) 2004-12-05 2004-12-05 System and method for automatic detection of inconstancies of objects in a sequence of images

Publications (2)

Publication Number Publication Date
IL165556A0 IL165556A0 (en) 2006-01-15
IL165556A true IL165556A (en) 2013-08-29

Family

ID=36283788

Family Applications (1)

Application Number Title Priority Date Filing Date
IL165556A IL165556A (en) 2004-12-05 2004-12-05 System and method for automatic detection of inconstancies of objects in a sequence of images

Country Status (2)

Country Link
IL (1) IL165556A (en)
WO (1) WO2006059337A2 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8352410B2 (en) * 2009-12-17 2013-01-08 Utility Risk Management Corporation, Llc Method and system for estimating vegetation growth relative to an object of interest

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
KR100378351B1 (en) 2000-11-13 2003-03-29 삼성전자주식회사 Method and apparatus for measuring color-texture distance, and method and apparatus for sectioning image into a plurality of regions using the measured color-texture distance

Also Published As

Publication number Publication date
WO2006059337A2 (en) 2006-06-08
IL165556A0 (en) 2006-01-15
WO2006059337A3 (en) 2006-08-03


Legal Events

Date Code Title Description
FF Patent granted
KB Patent renewed
KB Patent renewed
KB Patent renewed