WO2019187494A1 - Region detection device and region detection method - Google Patents
Region detection device and region detection method
- Publication number
- WO2019187494A1 (application PCT/JP2019/000619, JP2019000619W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- category
- region
- feature
- area
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/174—Segmentation; Edge detection involving the use of two or more images
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/187—Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/13—Satellite images
Definitions
- Non-Patent Document 1 discloses a method of classifying land cover information from a satellite image using machine learning.
- Non-Patent Document 2 discloses a technique for detecting a feature using a discriminator obtained by machine learning.
- Patent Document 1 discloses a method for detecting a region to be detected that has changed from aerial photographs taken at a first time point and a second time point.
- learning data is generated from the aerial photograph at the first time point and the map information at the first time point.
- the land cover photographed in the aerial photograph at the second time point is identified by using the detection target classifier learned from the generated learning data. Using this result, an area not including the detection target is excluded from areas different between the aerial photograph at the first time point and the aerial photograph at the second time point.
- the change area detection device described in Patent Document 1 requires map information corresponding to the first of the two time points to be compared. Since map information is updated far less frequently than satellite images and the like, the images that can be used for the first time point are limited. As a result, a time interval suited to the detection target cannot always be set, and the detection accuracy may be low.
- an object of the present invention is to provide an apparatus for detecting a region where a feature is changing from feature images at two arbitrary time points.
- the aforementioned predetermined condition may indicate that the first category is a predetermined first condition category and the second category is a predetermined second condition category.
- the area extracting unit described above may include a shape determining unit.
- the shape determination unit may calculate a shape feature of a separation region formed by connecting mutually adjacent pixels included in the change region.
- when the shape feature is included in neither a first shape feature of the first condition category nor a second shape feature of the second condition category, the shape determination unit may exclude the separation region having that shape feature from the change region.
- the aforementioned predetermined condition may indicate that either the first category or the second category is a predetermined third condition category.
- the area extracting unit described above may include a shape determining unit.
- the shape determination unit may calculate a shape feature of a separation region formed by connecting mutually adjacent pixels included in the change region.
- when the shape feature is not included in a third shape feature of the third condition category, the shape determination unit may exclude the separation region having that shape feature from the change region.
- the aforementioned predetermined condition may indicate that the first category and the second category are different.
- the area extracting unit described above may include a shape determining unit.
- the shape determination unit may calculate a shape feature of a separation region formed by connecting mutually adjacent pixels included in the change region.
- when the shape feature is not included in the shape features of the categories that the classification unit can estimate, the shape determination unit may exclude the separation region having that shape feature from the change region.
- the area extraction unit described above may include a classification determination unit.
- the classification determining unit may calculate a first integrated region by integrating the regions of the first category based on the areas of first regions formed by connecting mutually adjacent pixels of the same category in the region corresponding to the change region in the first classification image.
- the classification determination unit may calculate a second integrated region by integrating the regions of the second category based on the areas of second regions formed by connecting mutually adjacent pixels of the same category in the region corresponding to the change region in the second classification image.
- the region extraction unit described above may extract, from the change region, a region in which the first category of the first integrated region and the second category of the second integrated region at the same position satisfy a predetermined condition.
- when the area of the first region is smaller than a predetermined threshold, the classification determination unit described above may integrate the first region into the region of the first category that includes pixels adjacent to the first region.
- similarly, when the area of the second region is smaller than a predetermined threshold in the region corresponding to the change region in the second classification image, the classification determination unit may integrate the second region into the region of the second category that includes pixels adjacent to the second region.
- the classification determining unit may integrate the first region into the region of the first category that, among the regions of the first category adjacent to the first region, has the longest boundary line with the first region.
- likewise, the classification determination unit may integrate the second region into the region of the second category that, among the regions of the second category adjacent to the second region, has the longest boundary line with the second region.
- the above-described region extraction unit may include a post-processing unit that connects an adjacent region adjacent to the extraction region to the extraction region based on the first category and the second category of that adjacent region.
- when the first category in the adjacent region is equal to the first category of the extraction region and the second category in the adjacent region is equal to the second category of the extraction region, the post-processing unit described above may connect the adjacent region to the extraction region.
- the aforementioned adjacent area may be an area where pixels of the same category are connected to each other.
- FIG. 1 is a schematic diagram of an area detection apparatus according to the first embodiment.
- FIG. 2 is a diagram showing a functional configuration of the area detection apparatus of FIG.
- FIG. 3 is a diagram illustrating a functional configuration of the candidate extraction unit in FIG. 2.
- FIG. 4 is a diagram illustrating a functional configuration of the classification unit in FIG. 2.
- FIG. 5 is a diagram illustrating a functional configuration of the region extraction unit in FIG. 2.
- FIG. 6 is a flowchart relating to the processing of the shape determination unit of FIG. 5.
- FIG. 7 is a diagram for explaining the shape feature of FIG. 6.
- FIG. 8 is a flowchart related to the processing of the classification determining unit in FIG. 5.
- FIG. 9 is a diagram for explaining the smoothing of FIG. 8.
- FIG. 10 is a diagram for explaining the smoothing of FIG. 8.
- FIG. 11 is a diagram for explaining the integration of the category areas of FIG. 8.
- FIG. 12 is a diagram for explaining the integration of the category areas of FIG. 8.
- FIG. 13 is a flowchart related to the processing of the change determination unit in FIG. 5.
- FIG. 14 is a flowchart relating to the processing of the post-processing unit of FIG. 5.
- FIG. 15 is a diagram for explaining the adjacent region in FIG. 14.
- FIG. 16 is a diagram for explaining the adjacent region in FIG. 14.
- FIG. 17 is a diagram illustrating a functional configuration of an area detection apparatus that can input a plurality of feature images.
- FIG. 18 is a diagram illustrating a functional configuration of the area detection device according to the second embodiment.
- the region detection device 5 includes an input device 900, an arithmetic device 901, a storage device 902, and a communication device 903.
- the input device 900 includes a keyboard, a mouse, a scanner, and the like, and is a device for inputting data to the area detection device 5.
- the arithmetic device 901 includes a central processing unit (CPU), dedicated circuits, and the like, and performs the calculations for processing by the area detection device 5.
- the arithmetic device 901 reads the program 910 stored in the storage device 902 and performs processing based on instructions of the program 910.
- the arithmetic device 901 acquires data input from the input device 900 and uses it to execute the instructions of the program 910.
- the storage device 902 stores various data used by the arithmetic device 901.
- the storage device 902 stores a program 910 indicating the processing content of the arithmetic device 901.
- the communication device 903 communicates with the outside of the area detection device 5 and transmits / receives data necessary for processing of the arithmetic device 901.
- the area detection device 5 includes, for example, a computer.
- the program 910 may be stored in the external storage medium 800, read from the storage medium 800, and stored in the storage device 902.
- the storage medium 800 may be non-transitory.
- the area detection device 5 is connected to the display device 7 and outputs the result calculated by the arithmetic device 901 to the display device 7.
- the display device 7 displays the output calculation result.
- the region detection device 5 includes a candidate extraction unit 10, a classification unit 20, and a region extraction unit 30.
- the region detection device 5 extracts a region where a feature has changed from two images obtained by photographing the ground surface from the sky: the first feature image 1-1 and the second feature image 1-2.
- the region detection device 5 generates an extraction signal including extraction region information indicating the extracted region, and transmits the extraction signal to the display device 7.
- the display device 7 displays an image indicating the extracted area based on the extraction signal.
- the first feature image 1-1 and the second feature image 1-2 are collectively referred to as a feature image 1.
- the feature image 1 includes various images obtained by photographing the ground surface from the sky such as satellite images and aerial photographs.
- the candidate extraction unit 10 extracts a region where the image has changed between the first feature image 1-1 and the second feature image 1-2 as a change region. As illustrated in FIG. 3, the candidate extraction unit 10 includes a feature calculation unit 11, a change calculation unit 12, and a candidate determination unit 13.
- the feature calculation unit 11 calculates the feature amount of each pixel of the feature image 1.
- the feature amount is a value indicating a feature of the small region including the corresponding pixel. For example, for each of the vertical and horizontal directions of the small region, the feature calculation unit 11 calculates the amplitude spectrum in the spatial frequency domain as the feature amount of each pixel included in the small region.
- the calculation method of the feature amount can be selected from various methods according to the desired detection target. For example, the feature calculation unit 11 may calculate one feature amount for each small region.
- the feature quantity may be a vector quantity or a scalar quantity.
- the change calculation unit 12 calculates, as the change amount, the difference between the feature amount of the first feature image 1-1 calculated by the feature calculation unit 11 and the feature amount of the second feature image 1-2 at the same position.
- the first pixel of the first feature image 1-1 is associated with the second pixel of the second feature image 1-2 at the same position.
- the difference between the feature amount of the first pixel and the feature amount of the second pixel is calculated as a change amount. This is performed for all pixels.
- the change amount is a vector indicating a difference between the feature amount of the first pixel and the feature amount of the second pixel.
- the candidate determination unit 13 extracts pixels for which the change amount calculated by the change calculation unit 12 satisfies a predetermined condition.
- when the change amount is a scalar quantity, pixels whose change amount is equal to or greater than a predetermined threshold are extracted.
- when the change amount is a vector quantity, pixels whose change amount has a length equal to or greater than a threshold value are extracted.
- alternatively, each component of the change amount may be compared with the corresponding component of a predetermined threshold vector, and pixels in which specific components satisfy a predetermined condition may be extracted. For example, pixels whose change amount in a specific component is equal to or greater than the corresponding value of the threshold vector may be extracted.
- pixels in which the change amount in one specific component is equal to or greater than the corresponding value of the threshold vector and the change amount in another specific component is equal to or less than that value may also be extracted.
- the candidate determination unit 13 may have a plurality of threshold vectors. In that case, pixels whose change amount satisfies at least one threshold vector may be extracted. One or more threshold vectors may be selected according to the detection target. The candidate determination unit 13 calculates the area occupied by the extracted pixels as the change region (a minimal sketch in Python follows).
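The following is a minimal sketch of this candidate extraction, assuming a block-wise FFT amplitude spectrum as the feature amount and a scalar threshold on the length of the change amount; `block`, `thresh`, and the function names are illustrative assumptions, not details taken from the publication.

```python
# Minimal sketch: feature amount = amplitude spectrum of each small region;
# change amount = feature difference at the same position; pixels (blocks)
# whose change amount is long enough form the change region. Assumes image
# height/width are multiples of `block`.
import numpy as np

def feature_map(image: np.ndarray, block: int = 8) -> np.ndarray:
    """Feature amount per small region: amplitude spectrum of the block."""
    h, w = image.shape
    feats = np.zeros((h // block, w // block, block * block))
    for i in range(h // block):
        for j in range(w // block):
            tile = image[i * block:(i + 1) * block, j * block:(j + 1) * block]
            feats[i, j] = np.abs(np.fft.fft2(tile)).ravel()
    return feats

def change_region(img1: np.ndarray, img2: np.ndarray, thresh: float) -> np.ndarray:
    """Boolean change mask: True where the change amount is long enough."""
    diff = feature_map(img1) - feature_map(img2)    # change amount (a vector)
    return np.linalg.norm(diff, axis=-1) >= thresh  # compare its length
```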
- the classifier 21 learns the feature category using the sample image.
- the classifier 21 estimates the category of the feature being photographed according to the position of the feature image 1 using the learned result.
- the classifier 21 generates a classification image indicating the relationship between the position of the feature image 1 and the estimated category.
- Various methods can be selected for learning by the classifier 21. For example, the classifier 21 may input a plurality of sample images and cluster them; the category of each cluster may then be determined from the clustering result and the sample images assigned to that cluster. Learning by deep learning may also be used. Furthermore, the classifier 21 can also classify moving objects, such as automobiles, that are not included in map information (a hypothetical clustering sketch follows).
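As a hypothetical illustration of such clustering-based learning, the sketch below uses scikit-learn's KMeans over flattened sample patches; assigning a feature category to each cluster from labelled samples is assumed to happen separately, and all names and parameters here are illustrative.

```python
# Hypothetical clustering-based learning for the classifier 21: cluster sample
# patches, then classify each patch of a feature image by its nearest cluster.
import numpy as np
from sklearn.cluster import KMeans

def learn_by_clustering(sample_patches: np.ndarray, n_clusters: int) -> KMeans:
    """sample_patches: (n_samples, h, w) array of small sample images."""
    feats = sample_patches.reshape(len(sample_patches), -1).astype(float)
    return KMeans(n_clusters=n_clusters, n_init=10).fit(feats)

def classify_image(model: KMeans, image: np.ndarray, patch: int = 8) -> np.ndarray:
    """Estimate a cluster (category) per patch, yielding a classification image."""
    h, w = image.shape
    out = np.zeros((h // patch, w // patch), dtype=int)
    for i in range(h // patch):
        for j in range(w // patch):
            tile = image[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch]
            out[i, j] = model.predict(tile.reshape(1, -1).astype(float))[0]
    return out
```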
- the sample construction unit 22 constructs a sample image for the classifier 21 to learn from the plurality of learning images 2.
- the learning image 2 may include a feature image 1 that is input to the region detection device 5.
- the sample construction unit 22 constructs a sample image corresponding to the learning method of the classifier 21.
- specifically, a category to be classified and sample images corresponding to that category are constructed.
- the sample images are not limited to the first feature image 1-1 and the second feature image 1-2; various learning images 2 can be used. Therefore, the classification unit 20 can classify the feature category even if the color, shape, or the like changes due to differences in the shooting direction, shooting time, or the like. The classification unit 20 can also appropriately classify changes such as leaf fall or snow cover on trees.
- the region extraction unit 30 extracts a region in which the feature has changed based on the change region calculated by the candidate extraction unit 10 and the classification image generated by the classification unit 20. As illustrated in FIG. 5, the region extraction unit 30 includes a shape determination unit 31, a classification determination unit 32, a change determination unit 33, and a post-processing unit 34.
- the shape determination unit 31 excludes a region not included in the detection target category from the shape of the change region calculated by the candidate extraction unit 10. Specifically, the process shown in FIG. 6 is performed.
- the shape determination unit 31 acquires change area information representing the change area calculated by the candidate extraction unit 10.
- the change area is an area occupied by pixels in which the difference between the feature quantity of the first feature image 1-1 and the feature quantity of the second feature image 1-2 satisfies a predetermined condition.
- the change region may include a plurality of mutually separated regions, namely the separation regions 200-1, 200-2, ..., each formed by connecting mutually adjacent pixels included in the change region.
- the separation regions 200-1, 200-2, ... are collectively referred to as the separation region 200.
- the shape determining unit 31 calculates the shape characteristics of the separation region 200-1 included in the change region.
- the shape feature is a feature amount obtained from the shape of the separation region 200-1.
- the shape determining unit 31 calculates the minimum rectangle surrounding the separation region 200-1 as the bounding box 201-1.
- the shape features of the separation region 200-1 may include the length of the long side of the bounding box 201-1 surrounding the separation region 200-1, the length of its short side, the ratio of the long side to the short side, the area of the bounding box 201-1, the area of the separation region, the ratio of these two areas, and the like.
- the shape feature may be a scalar quantity, such as the long side length alone, or a vector quantity composed of several of these values.
- the shape determining unit 31 performs the same process for the bounding box 201-2.
- the shape determination unit 31 determines whether the shape feature is included in the shape features of the category to be detected. For example, the shape determination unit 31 stores a plurality of shape features for one detection target category. The shape determining unit 31 calculates the difference between the shape feature obtained from the separation region 200-1 and the shape feature stored in the shape determining unit 31. When the calculated minimum value of the differences is equal to or smaller than a predetermined threshold value, the shape determining unit 31 determines that the shape feature of the separation region 200-1 is included in the shape feature of the category to be detected. When the calculated minimum value of the difference is larger than the threshold value, the shape determining unit 31 determines that the shape feature of the separation region 200-1 is not included in the shape feature of the category to be detected.
- if the shape feature of the separation region 200-1 is not included in the shape features of the category to be detected, the process proceeds to step 103; if it is included, the process proceeds to step 104.
- when there are a plurality of detection target categories, the shape determination unit 31 calculates the difference between the shape feature of each category and the shape feature of the separation region 200-1, and compares the minimum of the calculated differences with the threshold value. There may also be only one shape feature for the category to be detected. The shape feature of a category may be calculated based on the sample images constructed by the sample construction unit 22 of the classification unit 20.
- the detection target categories depend on the task: when detecting a region that has changed from category A to category B, for example a region that has changed from soil to a building, both category A and category B are included in the detection target categories.
- in step 103, the shape determination unit 31 excludes the separation region 200-1 from the change region, since the separation region 200-1 is not included in the detection target. Thereafter, the process proceeds to step 104.
- in step 104, the shape determination unit 31 confirms whether or not the determination of step 102 has been performed for all the separation regions 200.
- if an undetermined separation region 200 remains, the shape determination unit 31 returns to step 101 and calculates the shape feature of that separation region 200.
- otherwise, the shape determination unit 31 ends the process.
- in this way, the shape determination unit 31 excludes, based on the shape of the change region calculated by the candidate extraction unit 10, the separation regions that are not included in the categories to be detected. A hedged sketch of this filtering follows.
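The sketch below assumes scipy connected-component labelling, a three-component bounding-box shape feature (long side, short side, and their ratio), and a nearest-neighbour comparison against stored category shape features; `category_shapes` and `thresh` are illustrative assumptions, not values from the publication.

```python
# Hedged sketch of steps 100-104: label separation regions, compute a
# bounding-box shape feature per region, and keep only regions whose feature
# is close to some stored shape feature of a detection-target category.
import numpy as np
from scipy import ndimage

def filter_by_shape(change_mask: np.ndarray,
                    category_shapes: np.ndarray,  # (n, 3) stored shape features
                    thresh: float) -> np.ndarray:
    labels, n = ndimage.label(change_mask)        # separation regions 200-i
    keep = np.zeros_like(change_mask, dtype=bool)
    for lab in range(1, n + 1):
        region = labels == lab
        ys, xs = np.where(region)
        height = ys.max() - ys.min() + 1          # bounding box 201-i
        width = xs.max() - xs.min() + 1
        long_side, short_side = max(height, width), min(height, width)
        feat = np.array([long_side, short_side, long_side / short_side])
        # step 102: keep the region only if its shape feature is close enough
        # to some stored shape feature of a detection-target category
        if np.linalg.norm(category_shapes - feat, axis=1).min() <= thresh:
            keep |= region
    return keep
```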
- the classification determination unit 32 calculates an integrated region in which small category regions are merged into adjacent category regions, based on the areas occupied by mutually adjacent pixels of the same category in the classification image.
- the integrated area indicates a category and its position.
- the classification determination unit 32 calculates a first integrated region obtained by integrating regions in the first classified image and a second integrated region obtained by integrating regions in the second classified image. Specifically, the classification determination unit 32 performs the process of FIG.
- the classification determination unit 32 acquires the classification image generated by the classification unit 20 and the change area information representing the change area for which the processing of the shape determination unit 31 has been completed.
- the classification image is data indicating, at each pixel position, the category number estimated by the classification unit 20.
- a pixel labeled "1" indicates that the classification unit 20 estimated it as the first category, "2" as the second category, "3" as the third category, and "4" as the fourth category.
- that is, the classification unit 20 estimates that the feature image 1 includes a first category region 300-1 of the first category, a second category region 300-2 of the second category, a third category region 300-3 of the third category, and a fourth category region 300-4 of the fourth category.
- the classification determination unit 32 smoothes the classification image generated by the classification unit 20. Thereby, the classification determination unit 32 removes noise from the classification image.
- for example, when the area of a region formed by connecting adjacent pixels of the same category is smaller than a predetermined threshold, the classification determination unit 32 changes the category of that region to the category of an adjacent region, according to the number of adjacent pixels of each category. In other words, the classification determination unit 32 calculates an area occupied by mutually adjacent pixels of the same category as a category area 300. When the area of a calculated category region 300 is smaller than the threshold, the classification determining unit 32 extracts the pixels adjacent to that region and counts the extracted pixels for each category.
- the number of pixels in the fourth category region 300-4 adjacent to the first category region 300-1 is two.
- the classification determining unit 32 integrates the first category area 300-1 into the third category area 300-3 because the third category area 300-3 has the largest number of pixels.
- the classification determining unit 32 therefore changes the first category region 300-1 to the third category, as shown in FIG. 10.
- the smoothing is not limited to this method, and various methods that can remove noise may be selected.
- the classification determination unit 32 extracts areas corresponding to the changed areas from the classification image as classification areas 310-1, 310-2,.
- the classification areas 310-1, 310-2,... are collectively referred to as a classification area 310.
- for example, a classification area 310-1 corresponding to the change area is extracted. Since the classification area 310 is an area corresponding to the change area, it may include a plurality of mutually separated areas.
- the classification determining unit 32 calculates the category areas 300 by connecting adjacent pixels of the same category in the extracted classification area 310 (step 113). As shown in FIG. 11, the classification determining unit 32 calculates the first category area 300-1 by connecting the pixels of the first category in the classification area 310-1. Similarly, it connects the pixels of the second category to calculate the second category area 300-2, and connects the pixels of the third category to calculate the third category area 300-3.
- in step 114, the classification determining unit 32 calculates the area of each obtained category region 300 and searches for a category region 300 whose area is smaller than a predetermined threshold. If such a category region 300 exists, the process proceeds to step 115. For example, in the classification region 310-1, the areas of the first category region 300-1, the second category region 300-2, and the third category region 300-3 are calculated. Assume that the second category region 300-2 is found as the category region 300 smaller than the threshold.
- in step 115, the classification determining unit 32 integrates the retrieved category area 300 into an adjacent category area 300.
- the classification determination unit 32 integrates the category area 300 into the category area 300 having the longest boundary line with the category area 300 to be integrated among the adjacent category areas 300.
- the classification determining unit 32 searches for the category area 300 adjacent to the second category area 300-2.
- the first category area 300-1 and the third category area 300-3 are extracted.
- the category region 300 having the longest boundary line is selected.
- the first category area 300-1 is selected as the category area 300 having the longest boundary line with the second category area 300-2.
- the classification determination unit 32 integrates the second category region 300-2 having an area smaller than the threshold value into the selected first category region 300-1. As a result, the classification determining unit 32 obtains a classification area 310-1 including a first category area 300-1 and a third category area 300-3 as shown in FIG.
- the process then returns to step 114 to search again for a category region 300 whose area is smaller than the threshold. If there is no such category region 300, the classification determining unit 32 ends the process. For example, as shown in FIG. 12, the first category area 300-1 and the third category area 300-3 remain in the classification area 310-1, so the integrated region includes the first category area 300-1 and the third category area 300-3.
- in this way, the classification determination unit 32 calculates an integrated region by merging the category regions of the classification image based on the areas of the regions formed by connecting mutually adjacent pixels of the same category. A simplified sketch of this merging follows.
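The sketch below combines the smoothing and the small-region integration of steps 113-115 into one merging loop, assuming 4-connected regions and measuring the shared boundary by counting adjacent border pixels of each category; `min_area` is an illustrative threshold, not a value from the publication.

```python
# Simplified merging sketch: repeatedly find category regions smaller than
# min_area and merge each into the neighbouring category with the most
# adjacent border pixels (a proxy for the longest shared boundary line).
import numpy as np
from scipy import ndimage

def merge_small_regions(classified: np.ndarray, min_area: int) -> np.ndarray:
    out = classified.copy()
    while True:
        merged = False
        for cat in np.unique(out):
            labels, n = ndimage.label(out == cat)  # category regions 300
            for lab in range(1, n + 1):
                region = labels == lab
                if region.sum() >= min_area:
                    continue
                # border: 4-neighbour pixels just outside the small region
                border = ndimage.binary_dilation(region) & ~region
                if not border.any():
                    continue
                cats, counts = np.unique(out[border], return_counts=True)
                out[region] = cats[np.argmax(counts)]  # longest shared border
                merged = True
        if not merged:
            return out
```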
- the change determination unit 33 extracts a region where the feature has changed from the change region, based on the change region calculated by the shape determination unit 31 and the integrated regions calculated by the classification determination unit 32. Specifically, the change determination unit 33 performs the process shown in FIG. 13.
- the change determination unit 33 extracts a region in which the first integrated region and the second integrated region are detection target categories.
- consider detecting a region that has changed from category A to category B, for example, a region that has changed from soil to a building.
- in this case, the change determination unit 33 extracts regions belonging to category A in the first integrated region and regions belonging to category B in the second integrated region.
- the change determination unit 33 extracts the change region corresponding to the extracted regions as the extraction region 320. In other words, the change determination unit 33 extracts, among the regions that are the first condition category in the first integrated region, the regions that are the second condition category in the second integrated region.
- alternatively, the change determination unit 33 may extract regions belonging to category A, for example a category representing automobiles, in either the first feature image 1-1 or the second feature image 1-2. In this case, the change determination unit 33 extracts regions belonging to category A in the first integrated region or in the second integrated region, and extracts the corresponding change region as the extraction region 320. In this way, the change determination unit 33 can extract regions where the feature belongs to category A in either of the two images (a short sketch of both modes appears after this list).
- the change determination unit 33 may switch the above-described processing based on an input signal from the input device 900. In other words, the process of the change determination unit 33 may be switched based on the content input by the user to the input device 900.
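A short sketch of the two extraction modes described above, assuming aligned integer category images for the two time points; `cat_a` and `cat_b` stand for the first and second condition categories and are illustrative.

```python
# Change determination sketch: boolean masks over aligned category images.
import numpy as np

def extract_a_to_b(change_mask: np.ndarray, integrated1: np.ndarray,
                   integrated2: np.ndarray, cat_a: int, cat_b: int) -> np.ndarray:
    """Extraction region 320: change pixels that were cat_a and became cat_b."""
    return change_mask & (integrated1 == cat_a) & (integrated2 == cat_b)

def extract_a_in_either(change_mask: np.ndarray, integrated1: np.ndarray,
                        integrated2: np.ndarray, cat_a: int) -> np.ndarray:
    """Variant: change pixels belonging to cat_a at either time point."""
    return change_mask & ((integrated1 == cat_a) | (integrated2 == cat_a))
```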
- the post-processing unit 34 connects the area to the extraction area 320 based on the category of the adjacent area in order to improve the accuracy of the extraction area 320. Due to the influence of a shadow or the like, a part of the area belonging to the category to be detected may not be extracted as a change area. Therefore, the post-processing unit 34 connects an area adjacent to the extraction area 320 and having the same category to the extraction area 320. This improves the accuracy of the area to be detected. Specifically, the post-processing unit 34 performs the process shown in FIG.
- the post-processing unit 34 acquires the category of the first integrated region at the position corresponding to the extraction region 320. Thereby, the post-processing unit 34 acquires the feature category in the extraction region 320. Specifically, as shown in FIG. 15, the post-processing unit 34 acquires the category of the feature included in the extraction area 320-1.
- the post-processing unit 34 then searches for an adjacent area that can be connected to the extraction area 320. To do so, the post-processing unit 34 acquires the categories of the pixels adjacent to the extraction region 320 from the first classified image, and extracts, among those adjacent pixels, the pixels of the same category as the category corresponding to the extraction region 320.
- the post-processing unit 34 calculates a first adjacent region by connecting pixels that are adjacent to the extracted pixel and in the same category as the extracted pixel in the first classified image. For example, the post-processing unit 34 acquires an adjacent region 330-1 indicating the same category as the extraction region 320-1, among the regions adjacent to the extraction region 320-1, as shown in FIG.
- the adjacent region 330-1 is a region that is adjacent to each other and that connects pixels of the same category as the extraction region 320-1.
- the category of the non-adjacent region 335-1 is different from the category of the extraction region 320-1. For this reason, the non-adjacent region 335-1 is not included in the first adjacent region. In this way, the post-processing unit 34 acquires the first adjacent area for the first feature image 1-1.
- the extraction area 320 includes an extraction area 320-1-1 and an extraction area 320-1-2.
- since the category of the adjacent region 330-1 adjacent to the extraction region 320-1-1 is the same as that of the extraction region 320-1-1, it is acquired as part of the first adjacent region.
- the adjacent area 330-2 adjacent to the extraction area 320-1-2 is also in the same category as the extraction area 320-1-2, and thus is acquired as the first adjacent area.
- the first adjacent region includes the adjacent region 330-1 and the adjacent region 330-2.
- the post-processing unit 34 obtains the second adjacent region for the second feature image 1-2 in steps 133 to 135, as in steps 130 to 132. Specifically, in step 133, the post-processing unit 34 acquires the second classification image generated by the classification unit 20 and the second integrated region calculated by the classification determination unit 32.
- in step 134, the post-processing unit 34 acquires the category of the second integrated region at the position corresponding to the extraction region 320. The post-processing unit 34 thereby acquires the feature category in the extraction region 320.
- the post-processing unit 34 acquires pixels of the same category adjacent to the extraction region 320 from the second classified image.
- the post-processing unit 34 extracts pixels in the same category as the category corresponding to the extraction region 320 from adjacent pixels.
- the post-processing unit 34 calculates a second adjacent region by connecting pixels of the same category that are adjacent to the extracted pixel in the second classified image. As described above, the post-processing unit 34 acquires the second adjacent region for the second feature image 1-2.
- in step 136, the post-processing unit 34 extracts the area where the first adjacent area and the second adjacent area overlap.
- in step 137, the post-processing unit 34 connects the extracted area to the extraction area 320.
- that is, the category of a region adjacent to the extraction region 320 may be equal to the category of the extraction region 320 in the first integrated region of the first classified image and also equal to the category of the extraction region 320 in the second integrated region of the second classified image.
- in that case, the post-processing unit 34 connects this area to the extraction area 320.
- in this way, the area detection device 5 can detect a detection target area even when part of the detection target is not included in the change area due to the influence of a shadow or the like. A rough sketch of this post-processing follows.
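The sketch assumes integer category images aligned with the extraction mask and non-negative category labels: each extraction region is grown into connected same-category neighbours in each classified image, and only the overlap of the two grown regions is connected, per steps 130-137. Function names are illustrative.

```python
# Post-processing sketch: grow each extraction region into same-category
# connected components in both classified images, connect the overlap.
import numpy as np
from scipy import ndimage

def grow_into_neighbours(extracted: np.ndarray, classified: np.ndarray) -> np.ndarray:
    labels, n = ndimage.label(extracted)
    grown = extracted.copy()
    for lab in range(1, n + 1):
        region = labels == lab
        cat = np.bincount(classified[region]).argmax()  # region's category
        candidates = (classified == cat) | region       # same-category pixels
        comp, _ = ndimage.label(candidates)
        touched = np.unique(comp[region])               # components we touch
        grown |= np.isin(comp, touched)                 # adjacent regions 330
    return grown

def connect_adjacent(extracted: np.ndarray,
                     classified1: np.ndarray, classified2: np.ndarray) -> np.ndarray:
    g1 = grow_into_neighbours(extracted, classified1)   # first adjacent region
    g2 = grow_into_neighbours(extracted, classified2)   # second adjacent region
    return extracted | (g1 & g2)                        # connect the overlap
```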
- in order to display the extracted extraction area 320 on the display device 7, the area extraction unit 30 generates an extraction signal including information on the extraction area 320.
- the display device 7 displays the area that changes between the first feature image 1-1 and the second feature image 1-2 based on the extraction signal. By viewing the display device 7, the user can confirm the regions that change between the first feature image 1-1 and the second feature image 1-2 and that belong to the target category.
- the time point at which the first feature image 1-1 and the second feature image 1-2 are photographed can be arbitrarily selected.
- the region detection apparatus 5 performs the processing of the candidate extraction unit 10, the classification unit 20, and the region extraction unit 30 on the first feature image 1-1 and the second feature image 1-2. Thereby, it is possible to extract only the area of the category to be detected among the areas changing between the first feature image 1-1 and the second feature image 1-2.
- in the second feature image 1-2, a feature may be estimated as a category different from the detection target due to the influence of a shadow or the like.
- even in such a case, the region detection device 5 determines whether a region is included in the extraction region 320 using the estimation for the first feature image 1-1 as well. In other words, the influence of shadows and the like can be suppressed by estimating the detection target in both the first feature image 1-1 and the second feature image 1-2. The area detection device 5 therefore has higher detection accuracy than the conventional method.
- the post-processing unit 34 may further calculate a likelihood of the change in the extraction region 320 based on the feature amount difference calculated by the change calculation unit 12 and on the first integrated region and the second integrated region calculated by the classification determination unit 32.
- the area detection device 5 may add the calculated likelihood to the extracted signal.
- the display device 7 can then display the extraction area 320 together with its likelihood.
- the user can confirm the change of the feature from the area with high likelihood, and the work efficiency is improved.
- a region with a low likelihood may be excluded from the extraction area 320. Thereby, only regions with a high likelihood are extracted.
- the post-processing unit 34 may also omit the process illustrated in FIG. 14 and execute only the likelihood calculation process.
- the classification unit 20 may acquire the change area calculated by the candidate extraction unit 10 and calculate the first classification image and the second classification image only for the change area. Thereby, the time required for processing in the classification unit 20 can be shortened. In this case, since the feature category is calculated only within the range of the change area, the process of connecting the adjacent areas 330 in the post-processing unit 34, specifically the process shown in FIG. 14, may be omitted.
- the sample construction unit 22 constructs a sample image that the classifier 21 learns. For this reason, when the classifier 21 has already been learned, the classification unit 20 may not include the sample construction unit 22. In this case, the user may input the learned classifier 21 using the input device 900. Further, the learned classifier 21 may be acquired from the outside via the communication device 903.
- the area detection device 5 is not limited to two images; as shown in FIG. 17, a plurality of feature images 1 including the first feature image 1-1 and the second feature image 1-2 may be used, and a plurality of change areas may be calculated from them.
- the candidate extraction unit 10 extracts a change area from two of the plurality of feature images 1 including the first feature image 1-1 and the second feature image 1-2.
- the classification unit 20 generates a classification image for a plurality of feature images 1 including the first feature image 1-1 and the second feature image 1-2.
- the region extraction unit 30 extracts regions belonging to the category to be detected from the change region based on the plurality of change regions calculated by the candidate extraction unit 10 and the plurality of classification images generated by the classification unit 20.
- an area calculated as a change area for the two target time points may not be calculated as a change area for another pair of time points. Since such a region can be judged to have been extracted due to a change such as a shadow, the region extraction unit 30 may exclude it from the change region. Thereby, erroneous detection is suppressed. Furthermore, since the region detection device 5 uses a plurality of feature images 1, it can extract a region where the feature has changed based on the order in which the feature images 1 were taken and the changes between them. A hypothetical sketch of such a cross-check follows.
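The sketch assumes three aligned boolean change masks for the image pairs (1,2), (1,3), and (2,3); the corroboration rule shown is one possible interpretation, not the publication's exact criterion.

```python
# Hypothetical cross-check over multiple feature images: a 1->2 change is
# kept only if some other image pair also shows a change at that position.
import numpy as np

def suppress_uncorroborated(mask_12: np.ndarray, mask_13: np.ndarray,
                            mask_23: np.ndarray) -> np.ndarray:
    """Keep 1->2 changes only where another image pair also shows a change."""
    return mask_12 & (mask_13 | mask_23)
```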
- for example, a large number of feature images 1 may first be used to extract the regions of features, such as roads or parking lots, over which moving bodies travel. Thereafter, from the feature images 1 before and after the time of interest, a region where the feature is changing can be extracted within the region where moving bodies travel.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Multimedia (AREA)
- Artificial Intelligence (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Remote Sensing (AREA)
- Astronomy & Astrophysics (AREA)
- Databases & Information Systems (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Software Systems (AREA)
- Computing Systems (AREA)
- Life Sciences & Earth Sciences (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Biology (AREA)
- General Engineering & Computer Science (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
Description
As shown in FIG. 1, the region detection device 5 according to the first embodiment includes an input device 900, an arithmetic device 901, a storage device 902, and a communication device 903. The input device 900 includes a keyboard, a mouse, a scanner, and the like, and is a device for inputting data to the region detection device 5. The arithmetic device 901 includes a central processing unit (CPU), dedicated circuits, and the like, and performs the computations for the processing of the region detection device 5. The arithmetic device 901 also reads the program 910 stored in the storage device 902 and performs processing based on the instructions of the program 910. Furthermore, the arithmetic device 901 acquires data input from the input device 900 and uses it to execute the instructions of the program 910. The storage device 902 stores various data used by the arithmetic device 901. The storage device 902 also stores the program 910 describing the processing content of the arithmetic device 901. The communication device 903 communicates with the outside of the region detection device 5 and transmits and receives the data necessary for the processing of the arithmetic device 901. The region detection device 5 includes, for example, a computer. In one embodiment, the program 910 may be stored in an external storage medium 800, read from the storage medium 800, and stored in the storage device 902. The storage medium 800 may be non-transitory.
In the second embodiment, an example is described in which the candidate extraction unit 10 limits the range over which the feature amounts are calculated, based on the first classification image and the second classification image calculated by the classification unit 20. In this case, the processing time of the candidate extraction unit 10 can be shortened. A hypothetical sketch of this restriction follows.
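The sketch assumes (H, W, D) feature maps aligned with the classification images; `cat_a` and `cat_b` are illustrative condition categories, and the restriction rule is an assumption about how the range might be limited.

```python
# Hypothetical second-embodiment sketch: compute the change amount only where
# the two classification images already indicate the condition categories.
import numpy as np

def restricted_change(feat1: np.ndarray, feat2: np.ndarray,
                      classified1: np.ndarray, classified2: np.ndarray,
                      cat_a: int, cat_b: int) -> np.ndarray:
    mask = (classified1 == cat_a) & (classified2 == cat_b)  # limited range
    change = np.zeros(mask.shape)
    change[mask] = np.linalg.norm(feat1[mask] - feat2[mask], axis=-1)
    return change
```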
Claims (16)
- 1. A region detection device comprising: a candidate extraction unit that extracts a change region that differs between a first feature image obtained by photographing the ground surface from the sky at a first time point and a second feature image obtained by photographing the ground surface at a second time point different from the first time point; a classification unit that estimates a first category of a first feature photographed according to a position in the first feature image, calculates a first classification image indicating the relationship between the position of the first feature and the first category, estimates a second category of a second feature photographed according to a position in the second feature image, and calculates a second classification image indicating the relationship between the position of the second feature and the second category; and a region extraction unit that extracts, from the change region, an extraction region in which the first category and the second category at the same position as the first category satisfy a predetermined condition, and generates and transmits an extraction signal including extraction region information representing the extraction region.
- 2. The region detection device according to claim 1, wherein the predetermined condition indicates that the first category is a predetermined first condition category and the second category is a predetermined second condition category.
- 3. The region detection device according to claim 2, wherein the region extraction unit comprises a shape determination unit that calculates a shape feature of a separation region formed by connecting mutually adjacent pixels included in the change region, and excludes the separation region having that shape feature from the change region when the shape feature is included in neither a first shape feature of the first condition category nor a second shape feature of the second condition category.
- 4. The region detection device according to claim 1, wherein the predetermined condition indicates that either the first category or the second category is a predetermined third condition category.
- 5. The region detection device according to claim 4, wherein the region extraction unit comprises a shape determination unit that calculates a shape feature of a separation region formed by connecting mutually adjacent pixels included in the change region, and excludes the separation region having that shape feature from the change region when the shape feature is not included in a third shape feature of the third condition category.
- 6. The region detection device according to claim 1, wherein the predetermined condition indicates that the first category and the second category are different.
- 7. The region detection device according to claim 6, wherein the region extraction unit comprises a shape determination unit that calculates a shape feature of a separation region formed by connecting mutually adjacent pixels included in the change region, and excludes the separation region having that shape feature from the change region when the shape feature is not included in the shape features of the categories that the classification unit can estimate.
- 8. The region detection device according to any one of claims 1 to 7, wherein the region extraction unit comprises a classification determination unit that calculates a first integrated region by integrating regions of the first category based on the areas of first regions formed by connecting mutually adjacent pixels of the same category in the region corresponding to the change region in the first classification image, and calculates a second integrated region by integrating regions of the second category based on the areas of second regions formed by connecting mutually adjacent pixels of the same category in the region corresponding to the change region in the second classification image, and wherein the region extraction unit extracts, from the change region, a region in which the first category of the first integrated region and the second category of the second integrated region at the same position as the first category satisfy the predetermined condition.
- 9. The region detection device according to claim 8, wherein the classification determination unit integrates the first region into a region of the first category including pixels adjacent to the first region when, in the region corresponding to the change region in the first classification image, the area of the first region is smaller than a predetermined threshold, and integrates the second region into a region of the second category including pixels adjacent to the second region when, in the region corresponding to the change region in the second classification image, the area of the second region is smaller than a predetermined threshold.
- 10. The region detection device according to claim 9, wherein the classification determination unit integrates the first region into the region of the first category having the longest boundary line with the first region among the regions of the first category adjacent to the first region, and integrates the second region into the region of the second category having the longest boundary line with the second region among the regions of the second category adjacent to the second region.
- 11. The region detection device according to any one of claims 1 to 10, wherein the region extraction unit comprises a post-processing unit that connects an adjacent region adjacent to the extraction region to the extraction region based on the first category and the second category of the adjacent region.
- 12. The region detection device according to claim 11, wherein the post-processing unit connects the adjacent region to the extraction region when the first category in the adjacent region is equal to the first category of the extraction region and the second category in the adjacent region is equal to the second category of the extraction region.
- 13. The region detection device according to claim 12, wherein the adjacent region is a region formed by connecting mutually adjacent pixels of the same category.
- 14. A region detection device comprising: a classification unit that estimates a first category of a first feature photographed according to a position in a first feature image obtained by photographing the ground surface from the sky at a first time point, calculates a first classification image indicating the relationship between the position of the first feature and the first category, estimates a second category of a second feature photographed according to a position in a second feature image obtained by photographing the ground surface at a second time point different from the first time point, and calculates a second classification image indicating the relationship between the position of the second feature and the second category; a change determination unit that extracts a first region in which the first category and the second category at the same position as the first category satisfy a predetermined condition; and a candidate extraction unit that extracts, within the first region, a region that differs between the first feature image and the second feature image.
- 15. A region detection method comprising: a candidate extraction step in which an arithmetic device extracts a change region that differs between a first feature image obtained by photographing the ground surface from the sky at a first time point and a second feature image obtained by photographing the ground surface at a second time point different from the first time point; a classification step in which the arithmetic device estimates a first category of a first feature photographed according to a position in the first feature image, calculates a first classification image indicating the relationship between the position of the first feature and the first category, estimates a second category of a second feature photographed according to a position in the second feature image, and calculates a second classification image indicating the relationship between the position of the second feature and the second category; and a region extraction step in which the arithmetic device extracts, from the change region, an extraction region in which the first category and the second category at the same position as the first category satisfy a predetermined condition.
- 16. A non-transitory storage medium storing a program that causes an arithmetic device to execute: candidate extraction means for extracting a change region that differs between a first feature image obtained by photographing the ground surface from the sky at a first time point and a second feature image obtained by photographing the ground surface at a second time point different from the first time point; classification means for estimating a first category of a first feature photographed according to a position in the first feature image, calculating a first classification image indicating the relationship between the position of the first feature and the first category, estimating a second category of a second feature photographed according to a position in the second feature image, and calculating a second classification image indicating the relationship between the position of the second feature and the second category; and region extraction means for extracting, from the change region, an extraction region in which the first category and the second category at the same position as the first category satisfy a predetermined condition.
Priority Applications (3)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| EP19775373.4A EP3745351B1 (en) | 2018-03-26 | 2019-01-11 | Region extraction apparatus and region extraction method |
| US16/966,605 US11216952B2 (en) | 2018-03-26 | 2019-01-11 | Region extraction apparatus and region extraction method |
| AU2019244533A AU2019244533B2 (en) | 2018-03-26 | 2019-01-11 | Region extraction apparatus and region extraction method |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2018-057894 | 2018-03-26 | ||
| JP2018057894A JP7077093B2 (ja) | 2018-03-26 | 2018-03-26 | 領域検出装置、領域検出方法及びそのプログラム |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2019187494A1 true WO2019187494A1 (ja) | 2019-10-03 |
Family
ID=68058114
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/JP2019/000619 Ceased WO2019187494A1 (ja) | 2019-01-11 | Region detection device and region detection method |
Country Status (5)
| Country | Link |
|---|---|
| US (1) | US11216952B2 (ja) |
| EP (1) | EP3745351B1 (ja) |
| JP (1) | JP7077093B2 (ja) |
| AU (1) | AU2019244533B2 (ja) |
| WO (1) | WO2019187494A1 (ja) |
Families Citing this family (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| KR102850864B1 (ko) * | 2022-01-12 | 2025-08-25 | Agency for Defense Development | Method and apparatus for detecting object changes in wide-area images |
| JP7634498B2 (ja) * | 2022-03-18 | 2025-02-21 | Mitsubishi Electric Corporation | Information processing device, information processing method, and information processing program |
| CN120894702A (zh) * | 2025-10-10 | 2025-11-04 | Jiangxi Agricultural University | Land ecological condition monitoring system and method based on remote sensing data |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2010061852A1 (ja) * | 2008-11-25 | 2010-06-03 | NEC System Technologies, Ltd. | Building change detection device, building change detection method, and recording medium |
| WO2015151553A1 (ja) * | 2014-03-31 | 2015-10-08 | NEC Solution Innovators, Ltd. | Change detection support device, change detection support method, and computer-readable recording medium |
| JP2017033197A (ja) | 2015-07-30 | 2017-02-09 | Nippon Telegraph and Telephone Corporation | Change region detection device, method, and program |
| JP2018057894A (ja) | 2017-11-29 | 2018-04-12 | Sammy Corporation | Gaming machine |
Family Cites Families (16)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US8224078B2 (en) * | 2000-11-06 | 2012-07-17 | Nant Holdings Ip, Llc | Image capture and identification system and process |
| JP4780215B2 (ja) * | 2009-03-27 | 2011-09-28 | Fuji Xerox Co., Ltd. | Image processing device and image processing program |
| JP5762251B2 (ja) * | 2011-11-07 | 2015-08-12 | Pasco Corporation | Building contour extraction device, building contour extraction method, and building contour extraction program |
| JP2013161126A (ja) * | 2012-02-01 | 2013-08-19 | Honda Elesys Co Ltd | Image recognition device, image recognition method, and image recognition program |
| JP6233869B2 (ja) * | 2012-06-07 | 2017-11-22 | NEC Corporation | Image processing device, control method of image processing device, and program |
| JP6045417B2 (ja) * | 2012-12-20 | 2016-12-14 | Olympus Corporation | Image processing device, electronic apparatus, endoscope device, program, and operation method of image processing device |
| KR102001636B1 (ko) * | 2013-05-13 | 2019-10-01 | Samsung Electronics Co., Ltd. | Depth image processing device and method using the relative angle between an image sensor and a target object |
| EP3085298A4 (en) * | 2013-12-19 | 2017-08-16 | Olympus Corporation | Image-processing apparatus, image-processing method, and image-processing program |
| US9465981B2 (en) * | 2014-05-09 | 2016-10-11 | Barron Associates, Inc. | System and method for communication |
| JP6653467B2 (ja) * | 2015-06-15 | 2020-02-26 | Panasonic IP Management Co., Ltd. | Pulse estimation device, pulse estimation system, and pulse estimation method |
| JP6647013B2 (ja) * | 2015-10-30 | 2020-02-14 | Canon Inc. | Image processing device, image processing method, and optical coherence tomography apparatus |
| JP2017191501A (ja) * | 2016-04-14 | 2017-10-19 | Canon Inc. | Information processing device, information processing method, and program |
| EP3319041B1 (en) * | 2016-11-02 | 2022-06-22 | Canon Kabushiki Kaisha | Image processing apparatus, image processing method, and storage medium |
| EP3653106B1 (en) * | 2017-07-14 | 2025-07-09 | FUJIFILM Corporation | Medical image processing device, endoscope system, diagnosis assistance device, and medical operation assistance device |
| JP6984256B2 (ja) * | 2017-09-11 | 2021-12-17 | Sony Group Corporation | Signal processing device, signal processing method, program, and mobile body |
| CN110322556B (zh) * | 2019-04-29 | 2022-06-03 | Wuhan University | High-speed, high-precision vector-raster overlay analysis method based on boundary clipping |
- 2018-03-26: JP JP2018057894A patent/JP7077093B2/ja active Active
- 2019-01-11: EP EP19775373.4A patent/EP3745351B1/en active Active
- 2019-01-11: WO PCT/JP2019/000619 patent/WO2019187494A1/ja not_active Ceased
- 2019-01-11: AU AU2019244533A patent/AU2019244533B2/en active Active
- 2019-01-11: US US16/966,605 patent/US11216952B2/en active Active
Patent Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2010061852A1 (ja) * | 2008-11-25 | 2010-06-03 | NEC System Technologies, Ltd. | Building change detection device, building change detection method, and recording medium |
| WO2015151553A1 (ja) * | 2014-03-31 | 2015-10-08 | NEC Solution Innovators, Ltd. | Change detection support device, change detection support method, and computer-readable recording medium |
| JP2017033197A (ja) | 2015-07-30 | 2017-02-09 | Nippon Telegraph and Telephone Corporation | Change region detection device, method, and program |
| JP2018057894A (ja) | 2017-11-29 | 2018-04-12 | Sammy Corporation | Gaming machine |
Non-Patent Citations (2)
Also Published As
| Publication number | Publication date |
|---|---|
| EP3745351B1 (en) | 2025-08-20 |
| EP3745351A1 (en) | 2020-12-02 |
| JP2019169060A (ja) | 2019-10-03 |
| JP7077093B2 (ja) | 2022-05-30 |
| AU2019244533B2 (en) | 2022-01-27 |
| US20210056707A1 (en) | 2021-02-25 |
| EP3745351A4 (en) | 2021-04-28 |
| AU2019244533A1 (en) | 2020-08-13 |
| US11216952B2 (en) | 2022-01-04 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN110119728B (zh) | Remote sensing image cloud detection method based on a multi-scale fusion semantic segmentation network | |
| US10607089B2 (en) | Re-identifying an object in a test image | |
| EP2858008B1 (en) | Target detecting method and system | |
| US8243987B2 (en) | Object tracking using color histogram and object size | |
| JP6397379B2 (ja) | Change region detection device, method, and program | |
| JP2018072938A (ja) | Target object count estimation device, target object count estimation method, and program | |
| CN103383451B (zh) | Optimized radar weak target detection method based on constant-edge-length gradient-weighted graph cut | |
| WO2019187494A1 (ja) | Region detection device and region detection method | |
| CN119295740A (zh) | Model training method, infrared dim small target detection method, device, and electronic equipment | |
| CN117765363A (zh) | Image anomaly detection method and system based on a lightweight memory bank | |
| CN109785302B (zh) | Spatial-spectral joint feature learning network and multispectral change detection method | |
| Kusetogullari et al. | Unsupervised change detection in landsat images with atmospheric artifacts: a fuzzy multiobjective approach | |
| US10210621B2 (en) | Normalized probability of change algorithm for image processing | |
| CN111553474A (zh) | Ship detection model training method and ship tracking method based on UAV video | |
| CN111815677B (zh) | Target tracking method, device, terminal equipment, and readable storage medium | |
| CN108665489B (zh) | Method and data processing system for detecting changes in geospatial images | |
| CN118799827A (zh) | Intelligent detection and localization method for sea-surface ship targets in dual-polarization SAR images | |
| CN118230111A (zh) | Image and point cloud fusion method, device, electronic equipment, and storage medium | |
| CN117928540A (zh) | Robot localization method, localization device, robot, and storage medium | |
| KR101507998B1 (ko) | Object detection method and apparatus using background diffusion and region expansion, and object tracking method and apparatus using the same | |
| RU2752246C1 (ru) | Hardware-software complex for processing aerial photographs in the visible and far-infrared bands to detect, localize, and classify buildings outside populated areas | |
| Chen et al. | Detection and Classification of Vehicles in Ultra-High Resolution Images Using Neural Networks | |
| CN120953786A (zh) | Image-based farmland protection detection method, system, equipment, and program | |
| CN119310561A (zh) | System and method for camera-to-radar knowledge distillation | |
| CN119741522A (zh) | Target detection method, device, equipment, and storage medium | |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19775373 Country of ref document: EP Kind code of ref document: A1 |
|
| ENP | Entry into the national phase |
Ref document number: 2019244533 Country of ref document: AU Date of ref document: 20190111 Kind code of ref document: A |
|
| ENP | Entry into the national phase |
Ref document number: 2019775373 Country of ref document: EP Effective date: 20200828 |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |
|
| WWG | Wipo information: grant in national office |
Ref document number: 2019775373 Country of ref document: EP |