GB2633374A - Method for determining occlusions of optics of a camera, driving system and vehicle - Google Patents
- Publication number
- GB2633374A (application GB2313692.2A)
- Authority
- GB
- United Kingdom
- Prior art keywords
- image
- occlusion
- area
- camera
- probability
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/98—Detection or correction of errors, e.g. by rescanning the pattern or by human intervention; Evaluation of the quality of the acquired patterns
- G06V10/993—Evaluation of the quality of the acquired pattern
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30248—Vehicle exterior or interior
- G06T2207/30252—Vehicle exterior; Vicinity of vehicle
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Quality & Reliability (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Analysis (AREA)
Abstract
Method for determining image occlusions, comprising: acquiring a first and second image 2 by a camera, the first and second image having the same field of view; determining an optical flow between the first and second image 4; evaluating the optical flow for a plurality of areas within the image 3, 5; and determining whether an area of the plurality of areas corresponds to an occlusion 15. Determining the optical flow comprises identifying features in the first and second images and determining flow vectors and flow strengths between features. Evaluating the optical flow for each area is based on number, length, direction and/or strength of flow vectors. Determining an area corresponds to an occlusion comprises determining a first probability based on the evaluated optical flow 6. Image analysis (edge detection) may be performed on images 7, 8 to derive a second occlusion probability 9 which is combined with the first probability 10. The combined probability for each area may be compared to a threshold 11 and a counter incremented if the combined probability exceeds a threshold 13. The counter exceeding a second threshold indicates an occlusion 14. Areas corresponding to occlusions may be provided to an object detection function of a driver assistance system of an autonomous vehicle.
Description
METHOD FOR DETERMINING OCCLUSIONS OF OPTICS OF A CAMERA,
DRIVING SYSTEM AND VEHICLE
TECHNICAL FIELD
The invention relates to digital imaging, image processing and computer vision. In particular, the invention relates to a method for determining occlusions of optics of a camera, to a driving system with a camera that is adapted to determine occlusions of the optics of the camera and to a vehicle with said driving system.
BACKGROUND
Occlusions of the optics of a camera, e.g., caused by contamination or blockages due to salt, dust, or fungus, impair the images taken with said camera.
In particular, when said images are used for computer vision, more particularly with an advanced driver assistance system or an autonomous driving system, said occlusions may distort the feature extraction mechanism and/or other functionalities. Hence, it is important to identify these occlusions such that appropriate measures may be taken.
Known techniques detect occlusions based on edge detection; however, these algorithms have problems with false edges and with certain textures of occlusions.
Another technique, disclosed in United States patent application US 2020/0090322 A1, detects occlusions using deep neural networks. However, this approach incurs long run times and high computational cost.
SUMMARY
It is therefore an object of the present invention to provide a method for determining occlusions of optics of a camera that overcomes the above-mentioned problems, in particular that improves the detection of occlusions while keeping reasonable computational effort. It is a further object of the present invention to provide a related driving system and a vehicle comprising such a driving system.
The object of the present invention is solved by the subject-matter of the independent claims, wherein further embodiments are incorporated in the dependent claims.
According to an aspect of the invention, a method for determining occlusions of optics of a camera is provided. In this context, the camera may be any digital camera, in particular an RGB camera, a black-and-white camera, or an infrared camera. The optics of the camera are to be understood to include lenses of the camera, a camera sensor, and the space between the lenses of the camera and/or the camera sensor. Occlusions of said optics of the camera are foreign objects that impair the images taken by the camera. In particular, the occlusions may be due to salt, dust, fungus, fog, and/or dirt, in particular on at least one of the lenses, on the sensor, or anywhere in between.
According to the method, at least a first image and a second image are acquired by the camera, wherein the first image and the second image have a same image area. Further, the first image is acquired at a first time and the second image is acquired at a second time, wherein the second time is different from the first time. While it does not make a difference whether the first time or the second time is the later time, we will assume that the second time is later than the first time for the following discussion.
Further, an optical flow between the first image and the second image is determined and evaluated for a plurality of areas within the image area. Said plurality of areas may be predefined, may be chosen based on the evaluated optical flow, or may be chosen based on evaluation results of other functions.
Based on said evaluated optical flow for an area out of the plurality of areas, it is determined whether the area corresponds to an occlusion of the optics of the camera. Since occlusions do not move, the optical flow in an area corresponding to an occlusion will be very small or zero, such that the presence of an optical flow indicates an area without occlusions. This principle is used for determining whether the area corresponds to an occlusion of the optics of the camera.
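This principle can be sketched in a few lines of Python; the (dx, dy) vector format and the magnitude threshold are illustrative assumptions, not values from the patent:

```python
def mean_flow_magnitude(flow_vectors):
    """Mean Euclidean length of (dx, dy) flow vectors; 0.0 if there are none."""
    if not flow_vectors:
        return 0.0
    return sum((dx * dx + dy * dy) ** 0.5 for dx, dy in flow_vectors) / len(flow_vectors)

def likely_occluded(flow_vectors, magnitude_threshold=0.5):
    """Occlusions do not move, so an area with little or no optical flow is a
    candidate occlusion. The threshold value is an illustrative assumption."""
    return mean_flow_magnitude(flow_vectors) < magnitude_threshold

# A moving scene produces flow; a static occlusion produces (almost) none.
print(likely_occluded([(3.0, 4.0), (2.0, 1.0)]))  # False
print(likely_occluded([]))                        # True
```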
Hence, the method provides a good detection of occlusions while needing little computational effort.
According to an embodiment which may be combined with the above-described embodiment or with any below described further embodiment, the optical flow comprises flow vectors in the image area. The determination of the optical flow between the first image and the second image comprises identifying features in the first image and features in the second image. This identification of features may be performed, e.g., by edge detection, by feature recognition, or with artificial intelligence. Based on the identified features, the features in the second image are associated with the features in the first image. In particular, those features in the second image and first image are associated that correspond to the same object. A flow vector is then determined between the features in the first image and the features in the second image associated with the respective features in the first image. Hence, the flow vector may start at a point of the feature in the first image and end at a corresponding point of the feature in the second image. This is an easy-to-implement and efficient way of determining the optical flow.
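As a minimal sketch of this step, assuming features are given as (x, y) positions and associated by nearest-neighbour distance (a simplification; real systems match descriptors or track features, e.g. with Lucas-Kanade):

```python
def match_features(feats_a, feats_b, max_dist=10.0):
    """Associate each feature in the first image with the nearest feature in
    the second image and return the resulting flow vectors.

    feats_a, feats_b: lists of (x, y) feature positions.
    Returns a list of (dx, dy) flow vectors, one per matched feature;
    features with no partner within max_dist stay unmatched.
    """
    flows = []
    for (xa, ya) in feats_a:
        best, best_d = None, max_dist
        for (xb, yb) in feats_b:
            d = ((xb - xa) ** 2 + (yb - ya) ** 2) ** 0.5
            if d < best_d:
                best, best_d = (xb - xa, yb - ya), d
        if best is not None:
            flows.append(best)
    return flows

# A feature that moved 2 px to the right between frames yields flow (2, 0).
print(match_features([(10.0, 10.0)], [(12.0, 10.0)]))  # [(2.0, 0.0)]
```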
According to an embodiment which may be combined with the above-described embodiment or with any below described further embodiment, the optical flow further comprises flow strengths associated with the flow vectors. Said flow strengths may correspond to, e.g., the strength of the detected edges that are used to determine the flow vector. Alternatively, the flow strength may correspond to a size of identified features that are used to determine the flow vector. Yet alternatively, the flow strength may correspond to a confidence level for identified features that are used to determine the flow vector and/or to a confidence level of the association of the features in the second image with the features in the first image. Using the flow strength, the determination of occlusions may be further improved.
According to an embodiment which may be combined with any above-described embodiment or with any below described further embodiment, each of the plurality of areas is a part of a grid pattern of the image area. This is an easy way to predetermine the plurality of areas and makes the evaluation of the optical flow easy. Each area of the plurality of areas may have the same aspect ratio as the image area or may be, at least approximately, a square. As an example, the grid may comprise 5 x 5, i.e., 25, areas.
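Such a grid partition can be sketched as follows; the 5 x 5 default mirrors the example above, while the helper names and the binning of flow vectors by their start point are assumptions for illustration:

```python
def grid_cell(x, y, width, height, rows=5, cols=5):
    """Map an image point to its (row, col) cell in a rows x cols grid."""
    col = min(int(x * cols / width), cols - 1)
    row = min(int(y * rows / height), rows - 1)
    return row, col

def bin_flow_by_area(flow, width, height, rows=5, cols=5):
    """Group flow vectors by the grid cell containing their start point.

    flow: list of ((x, y), (dx, dy)) pairs.
    Returns a dict mapping (row, col) -> list of (dx, dy) vectors.
    """
    cells = {}
    for (x, y), vec in flow:
        cells.setdefault(grid_cell(x, y, width, height, rows, cols), []).append(vec)
    return cells

# A 100 x 100 image split into a 5 x 5 grid of 20 x 20 cells:
print(grid_cell(50, 50, 100, 100))  # (2, 2) -- centre cell
print(grid_cell(99, 0, 100, 100))   # (0, 4) -- top-right cell
```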
According to an embodiment which may be combined with any above-described embodiment or with any below described further embodiment, evaluating the optical flow for an area is based on the number of flow vectors in the area, the length of the flow vectors in the area, the direction of the flow vectors in the area and/or the flow strength associated with the flow vectors in the area. For example, the more flow vectors that are present in the area, the longer the flow vectors in the area, and the higher the flow strength of the flow vectors in the area, the less likely it is that there is an occlusion associated with the area. Also, if the flow vectors point in the same or at least in a similar direction, it is less likely that there is an occlusion associated with the area. Hence, using the above-mentioned properties of the flow vectors helps to further improve the determination of occlusions.
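A possible scoring function combining these four cues (count, length, direction coherence, strength) is sketched below; the particular weights and saturation constants are illustrative assumptions, not values from the patent:

```python
import math

def area_flow_score(vectors, strengths=None):
    """Heuristic evidence of motion in an area, in [0, 1]; low score means
    the area is a candidate occlusion.

    Combines the count of flow vectors, their mean length, their directional
    coherence (agreement of directions), and their mean flow strength.
    """
    if not vectors:
        return 0.0
    n = len(vectors)
    lengths = [math.hypot(dx, dy) for dx, dy in vectors]
    mean_len = sum(lengths) / n
    # Directional coherence: length of the mean unit vector (1.0 = aligned).
    ux = sum(dx / l for (dx, dy), l in zip(vectors, lengths) if l > 0)
    uy = sum(dy / l for (dx, dy), l in zip(vectors, lengths) if l > 0)
    coherence = math.hypot(ux, uy) / n
    strength = sum(strengths) / n if strengths else 1.0
    count_term = min(n / 10.0, 1.0)          # saturates at 10 vectors
    length_term = min(mean_len / 5.0, 1.0)   # saturates at 5 px
    return count_term * length_term * coherence * min(strength, 1.0)

# Many long, aligned vectors -> strong evidence of motion:
print(round(area_flow_score([(4.0, 0.0)] * 10), 2))  # 0.8
# Opposing directions cancel the coherence term:
print(area_flow_score([(4.0, 0.0), (-4.0, 0.0)] * 5))  # 0.0
```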
According to an embodiment which may be combined with any above-described embodiment or with any below described further embodiment, determining whether an area out of the plurality of areas corresponds to an occlusion of the optics of the camera comprises determining a probability for an occlusion based on the evaluated optical flow for the area. In particular, this probability may be a function of the number of flow vectors in the area, the length of the flow vectors in the area, the direction of the flow vectors in the area and/or the flow strength associated with the flow vectors in the area. Said probability may be combined with other probabilities, enhancing the applicability of the determined occlusions.
According to an embodiment which may be combined with the above-described embodiment or with any below described further embodiment, the method further comprises performing an image analysis of the first image and/or the second image. The step of determining whether an area out of the plurality of areas corresponds to an occlusion of the optics of the camera then further comprises determining another probability for an occlusion based on the analyzed first image and/or second image and combining the probability for an occlusion and the other probability for an occlusion to obtain a combined probability for an occlusion. In other words, the other probability for an occlusion is determined based on standard image processing. By combining the probability for an occlusion and the other probability for an occlusion, an even better and more accurate probability for an occlusion will result.
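One simple way to combine the two probabilities is a weighted average; the patent does not fix a combination rule, so both the rule and the weight below are assumptions:

```python
def combine_probabilities(p_flow, p_other, w_flow=0.5):
    """Combine the flow-based occlusion probability with the probability
    from the image analysis. A weighted average is one simple choice;
    the weight w_flow is an illustrative assumption."""
    return w_flow * p_flow + (1.0 - w_flow) * p_other

# Both cues agree on a likely occlusion:
print(round(combine_probabilities(0.9, 0.7), 2))  # 0.8
```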
According to an embodiment which may be combined with the above-described embodiment or with any below described further embodiment, the image analysis comprises an edge detection of the image. Areas with strong edges are unlikely to correspond to an occlusion, whereas areas with weak edges, few edges and/or no edges are likely to correspond to an occlusion. Since edge detection is very fast, this approach yields improved results with little effort.
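A minimal illustration of this idea, using summed absolute intensity differences as a stand-in for a real edge detector (e.g. Canny or Sobel); the saturation constant mapping edge energy to a probability is an assumption:

```python
def edge_energy(patch):
    """Sum of absolute horizontal and vertical intensity differences in a
    2D list of pixel values (a crude gradient-based edge measure)."""
    energy = 0
    for r in range(len(patch)):
        for c in range(len(patch[0])):
            if c + 1 < len(patch[0]):
                energy += abs(patch[r][c + 1] - patch[r][c])
            if r + 1 < len(patch):
                energy += abs(patch[r + 1][c] - patch[r][c])
    return energy

def edge_based_occlusion_probability(patch, saturation=100.0):
    """Weak or absent edges -> high occlusion probability."""
    return max(0.0, 1.0 - edge_energy(patch) / saturation)

flat = [[50, 50], [50, 50]]      # featureless patch: no edges at all
textured = [[0, 90], [90, 0]]    # strong edges
print(edge_based_occlusion_probability(flat))      # 1.0
print(edge_based_occlusion_probability(textured))  # 0.0
```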
According to an embodiment which may be combined with any above-described embodiment or with any below described further embodiment, the step of determining whether an area out of the plurality of areas corresponds to an occlusion of the optics of the camera comprises determining whether the probability for an occlusion or the combined probability for an occlusion exceeds a predetermined threshold and based on determining that the probability for an occlusion or the combined probability for an occlusion exceeds the predetermined threshold, inferring that the area corresponds to an occlusion of the optics of the camera. This is an easy, yet effective, way to determine whether the area out of the plurality of areas corresponds to an occlusion.
Alternatively, the step of determining whether an area out of the plurality of areas corresponds to an occlusion of the optics of the camera comprises, as above, determining whether the probability for an occlusion or the combined probability for an occlusion exceeds a predetermined threshold. If it is determined that the probability for an occlusion or the combined probability for an occlusion exceeds the predetermined threshold, a counter for the area is incremented. Said counter may be an integer counter and may be initialized at zero, e.g., when the vehicle starts. It is then determined whether the counter exceeds a counter threshold, and if the counter exceeds the counter threshold, it is inferred that the area corresponds to an occlusion of the optics of the camera. In this way, it is less likely that an occlusion is falsely identified, since several identifications are needed in order to determine that the area is associated with an occlusion.
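This counter mechanism can be sketched as a small class; both thresholds are illustrative assumptions, and the reset to zero on a non-exceedance follows the embodiment of Fig. 1:

```python
class OcclusionCounter:
    """Per-area debounce: an area is flagged as occluded only after its
    probability exceeds the threshold in several successive evaluations,
    reducing false identifications."""

    def __init__(self, prob_threshold=0.6, counter_threshold=3):
        self.prob_threshold = prob_threshold
        self.counter_threshold = counter_threshold
        self.counters = {}  # area id -> integer counter, initialized at zero

    def update(self, area, probability):
        """Increment the area's counter on an exceedance, reset it to zero
        otherwise; return True once the counter exceeds its threshold."""
        if probability > self.prob_threshold:
            self.counters[area] = self.counters.get(area, 0) + 1
        else:
            self.counters[area] = 0
        return self.counters[area] > self.counter_threshold

oc = OcclusionCounter()
results = [oc.update((2, 2), 0.9) for _ in range(5)]
print(results)  # occlusion confirmed only after the fourth exceedance
```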
According to an embodiment which may be combined with any above-described embodiment or with any below described further embodiment, the method further comprises providing the areas corresponding to occlusions of the optics of the camera to an image analysis function and/or image processing function. Image analysis functions may then, using the information about the occlusion, skip the evaluation of the areas corresponding to the occlusions, leading to fewer false analysis results. Image processing functions may interpolate from the areas neighboring the area associated with the occlusion in order to improve the image.
According to an embodiment which may be combined with the above-described embodiment, the image analysis function comprises an object detection function for a driving system. Hence, having knowledge about the occlusions and therefore having less false analysis results, the object detection function is improved and consequently the safety of the vehicle is improved. Further, the image analysis function may be configured to issue an alert to the driving system when occlusions are detected, to inform the driver of the vehicle of potential problems with the driving system.
According to another aspect of the invention, a driving system is provided. The driving system comprises at least one camera and a computing unit and is configured to be operated according to the above description. The advantages and further embodiments correspond to those given in the above description.
According to an embodiment which may be combined with the above-described embodiment, the driving system is a driver assistance system and/or an autonomous driving system. In either case, the operation of the driving system is improved by the improved determination of occlusions of the optics of the camera.
According to yet another aspect of the invention, a vehicle is provided. The vehicle comprises a driving system according to the above description. Hence, the advantages and further embodiments correspond to those given in the above description.
BRIEF DESCRIPTION OF THE DRAWINGS
These and other aspects of the invention will be apparent from and elucidated further with reference to the embodiments described by way of examples in the following description and with reference to the accompanying drawings, in which Fig. 1 shows a flowchart of an embodiment of a method for determining occlusions of optics of a camera; and Fig. 2 shows a schematic top view of an embodiment of a vehicle with a driving system.
In the figures, elements which correspond to elements already described may have the same reference numerals. Examples, embodiments or optional features, whether indicated as non-limiting or not, are not to be understood as limiting the invention as claimed.
DESCRIPTION OF EMBODIMENTS
Figure 1 shows a flowchart of an embodiment of a method 1 for determining occlusions of optics of a camera. According to the method 1, images 2 are acquired from the camera. For the method, at least two images 2 have to be acquired, the first image taken at a first time and the second image taken at a second time which is different from the first time. Usually, a plurality of images is obtained, as part of a video stream from the camera.
Based on the obtained images 2, a plurality of areas within an image area of the images 2 may be determined 3. Alternatively, the plurality of areas may be predetermined, in which case this step may be skipped.
Then, based on at least the first image and the second image, an optical flow is determined 4. As an example, the optical flow may be determined by identifying features in the first image and features in the second image, associating the features in the second image with the features in the first image, and determining a flow vector between the features in the first image and the features in the second image associated with the respective features in the first image. Further, a flow strength may be associated with each flow vector, e.g., based on the strength of the identified features. The optical flow is then evaluated 5 for each area of the plurality of areas within the image area. Based on said evaluation 5, a probability for an occlusion, based on the evaluated optical flow, is determined 6 for each area.
Further, an image analysis on at least one of the images 2 is performed 7. Said image analysis is then evaluated 8 for each of the areas out of the plurality of areas. As an example, the image analysis may comprise the detection of edges, and in the evaluation 8, the number of edges and their strengths are evaluated. Based on said evaluation 8, another probability for an occlusion, based on the image analysis, is determined 9 for each area.
The probability for an occlusion and the other probability for an occlusion are then combined 10 to obtain a combined probability for an occlusion. For each area out of the plurality of areas, said combined probability is then compared 11 to a predetermined threshold. If the combined probability is less than the predetermined threshold, a counter, which may have been initialized to zero at a start of a vehicle, is set 12 to zero. If, on the other hand, the combined probability is greater than the predetermined threshold, the counter is incremented 13. Subsequently, the areas for which the counter exceeds a counter threshold are selected 14 from the plurality of areas. Here, the counter threshold is another predetermined value. Finally, the selected 14 areas are output 15, e.g., to an image processing function or to an image analysis function, in particular an image analysis function comprising an object detection function for a driving system.
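The per-frame loop described above (comparison 11, setting 12, increment 13, selection 14) can be sketched end to end; the equal weighting of the two probabilities and both threshold values are illustrative assumptions:

```python
def update_occlusion_map(flow_probs, edge_probs, counters,
                         prob_threshold=0.6, counter_threshold=3):
    """One iteration of the per-area loop of Fig. 1.

    flow_probs, edge_probs: dicts mapping area -> probability in [0, 1].
    counters: dict mapping area -> integer counter (mutated in place).
    Returns the set of areas currently classified as occluded.
    """
    occluded = set()
    for area in flow_probs:
        combined = 0.5 * flow_probs[area] + 0.5 * edge_probs[area]  # combine 10
        if combined > prob_threshold:           # compare 11, increment 13
            counters[area] = counters.get(area, 0) + 1
        else:                                   # set 12
            counters[area] = 0
        if counters[area] > counter_threshold:  # selection 14
            occluded.add(area)
    return occluded

counters = {}
for _ in range(5):  # five consecutive frames with the same evidence
    flagged = update_occlusion_map({(0, 0): 0.9, (0, 1): 0.1},
                                   {(0, 0): 0.8, (0, 1): 0.2},
                                   counters)
print(flagged)  # only the persistently suspicious area remains flagged
```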
Figure 2 shows a schematic top view of an embodiment of a vehicle 16 with a driving system 17. Said driving system 17 may be a driver assistance system, in particular an advanced driver assistance system, and/or an autonomous driving system. The driving system 17 comprises at least one camera 18 that is connected to a computing unit 19. While the connection is shown as a wired connection, a wireless connection is also possible. The driving system 17 is configured to perform the method for determining occlusions of optics of the camera 18 described above and hence provides an improved determination of occlusions of the optics of the camera, which leads to improved object detection and hence a safer driving system 17.
Other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed invention, from the study of the drawings, the disclosure, and the appended claims. In the claims, the word "comprising" does not exclude other elements or steps and the indefinite article "a" or "an" does not exclude a plurality. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage. Any reference signs in the claims should not be construed as limiting the scope of the claims.
List of reference signs
1 method
2 image
3 determination
4 determination
5 evaluation
6 determination
7 performance
8 evaluation
9 determination
10 combination
11 comparison
12 setting
13 increment
14 selection
15 output
16 vehicle
17 driving system
18 camera
19 computing unit
Claims (15)
- Patent claims 1. Method (1) for determining occlusions of optics of a camera (18), comprising: acquiring at least a first image (2) and a second image (2) by the camera (18), wherein the first image (2) and the second image (2) have a same image area, the first image (2) is acquired at a first time, the second image (2) is acquired at a second time, and the second time is different from the first time; determining an optical flow between the first image (2) and the second image (2); evaluating the optical flow for a plurality of areas within the image area; and determining whether an area out of the plurality of areas corresponds to an occlusion of the optics of the camera (18) based on the evaluated optical flow for the area.
- 2. Method (1) according to claim 1, wherein the optical flow comprises flow vectors in the image area; and determining the optical flow between the first image (2) and the second image (2) comprises identifying features in the first image (2) and features in the second image (2), associating the features in the second image (2) with the features in the first image (2), and determining a flow vector between the features in the first image (2) and the features in the second image (2) associated with the respective features in the first image (2).
- 3. Method (1) according to claim 2, wherein the optical flow further comprises flow strengths associated with the flow vectors.
- 4. Method (1) according to any one of claims 1 to 3, wherein each of the plurality of areas is a part of a grid pattern of the image area.
- 5. Method (1) according to any one of claims 2 to 4, wherein evaluating the optical flow for an area is based on the number of flow vectors in the area, the length of the flow vectors in the area, the direction of the flow vectors in the area and/or the flow strength associated with the flow vectors in the area.
- 6. Method (1) according to any one of claims 1 to 5, wherein determining whether an area out of the plurality of areas corresponds to an occlusion of the optics of the camera (18) comprises determining a probability for an occlusion based on the evaluated optical flow for the area.
- 7. Method (1) according to claim 6, wherein the method (1) further comprises performing an image analysis of the first image (2) and/or the second image (2); and determining whether an area out of the plurality of areas corresponds to an occlusion of the optics of the camera (18) comprises determining another probability for an occlusion based on the analyzed first image (2) and/or second image (2); and combining the probability for an occlusion and the other probability for an occlusion to obtain a combined probability for an occlusion.
- 8. Method (1) according to claim 7, wherein the image analysis comprises an edge detection of the image (2).
- 9. Method (1) according to any one of claims 6 to 8, wherein determining whether an area out of the plurality of areas corresponds to an occlusion of the optics of the camera (18) comprises determining whether the probability for an occlusion or the combined probability for an occlusion exceeds a predetermined threshold; and based on determining that the probability for an occlusion or the combined probability for an occlusion exceeds the predetermined threshold, inferring that the area corresponds to an occlusion of the optics of the camera (18).
- 10. Method (1) according to any one of claims 6 to 8, wherein determining whether an area out of the plurality of areas corresponds to an occlusion of the optics of the camera (18) comprises determining whether the probability for an occlusion or the combined probability for an occlusion exceeds a predetermined threshold; based on determining that the probability for an occlusion or the combined probability for an occlusion exceeds the predetermined threshold, incrementing a counter for the area; determining whether the counter exceeds a counter threshold; and based on determining that the counter exceeds the counter threshold, inferring that the area corresponds to an occlusion of the optics of the camera (18).
- 11. Method (1) according to any one of claims 1 to 10, further comprising: providing the areas corresponding to occlusions of the optics of the camera (18) to an image analysis function and/or image processing function.
- 12. Method (1) according to claim 11, wherein the image analysis function comprises an object detection function for a driving system (17).
- 13. Driving system (17), comprising at least one camera (18) and a computing unit (19), wherein the driving system (17) is adapted to be operated according to the method (1) according to any one of claims 1 to 12.
- 14. Driving system (17) according to claim 13, wherein the driving system (17) is a driver assistance system and/or an autonomous driving system.
- 15. Vehicle (16), comprising a driving system (17) according to claim 13 or 14.
Priority Applications (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| GB2313692.2A GB2633374A (en) | 2023-09-08 | 2023-09-08 | Method for determining occlusions of optics of a camera, driving system and vehicle |
| PCT/EP2024/074967 WO2025051940A1 (en) | 2023-09-08 | 2024-09-06 | Method for determining occlusions of optics of a camera, driving system and vehicle |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| GB2313692.2A GB2633374A (en) | 2023-09-08 | 2023-09-08 | Method for determining occlusions of optics of a camera, driving system and vehicle |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| GB202313692D0 (en) | 2023-10-25 |
| GB2633374A (en) | 2025-03-12 |
Family
ID=88412799
Family Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| GB2313692.2A (GB2633374A, pending) | 2023-09-08 | 2023-09-08 | Method for determining occlusions of optics of a camera, driving system and vehicle |
Country Status (2)
| Country | Link |
|---|---|
| GB (1) | GB2633374A (en) |
| WO (1) | WO2025051940A1 (en) |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN110913212A (en) * | 2019-12-27 | 2020-03-24 | 上海智驾汽车科技有限公司 | Intelligent vehicle-mounted camera shielding monitoring method and device based on optical flow and auxiliary driving system |
| EP3839888A1 (en) * | 2019-12-18 | 2021-06-23 | Clarion Co., Ltd. | Compute device and method for detection of occlusions on a camera |
Family Cites Families (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2008060874A (en) * | 2006-08-31 | 2008-03-13 | Hitachi Ltd | In-vehicle camera and in-vehicle camera deposit detection device |
| US10515455B2 (en) * | 2016-09-29 | 2019-12-24 | The Regents Of The University Of Michigan | Optical flow measurement |
| WO2020112213A2 (en) | 2018-09-13 | 2020-06-04 | Nvidia Corporation | Deep neural network processing for sensor blindness detection in autonomous machine applications |
| CN114359170A (en) * | 2021-12-15 | 2022-04-15 | 奇酷软件(深圳)有限公司 | Blocking detection method, device, equipment and storage medium based on optical flow method |
- 2023-09-08: Application GB2313692.2A filed in GB (published as GB2633374A, pending)
- 2024-09-06: International application PCT/EP2024/074967 filed (published as WO2025051940A1, pending)
Also Published As
| Publication number | Publication date |
|---|---|
| GB202313692D0 (en) | 2023-10-25 |
| WO2025051940A1 (en) | 2025-03-13 |
Similar Documents
| Publication | Title |
|---|---|
| JP3307335B2 (en) | Vehicle region detection device and vehicle region verification method |
| KR100476019B1 (en) | Monitoring method for detecting an intruding object and monitoring apparatus therefor |
| US6658150B2 (en) | Image recognition system |
| KR101176693B1 (en) | Method and System for Detecting Lane by Using Distance Sensor |
| KR101281260B1 (en) | Method and Apparatus for Recognizing Vehicle |
| CN102314599A (en) | Identification and deviation-detection method for lane |
| JP2008286725A (en) | Person detection apparatus and method |
| JP4263737B2 (en) | Pedestrian detection device |
| US11530993B2 (en) | Deposit detection device and deposit detection method |
| JP7418315B2 (en) | How to re-identify a target |
| JP3823782B2 (en) | Leading vehicle recognition device |
| KR101236223B1 (en) | Method for detecting traffic lane |
| KR101018033B1 (en) | Lane Departure Warning Method and System |
| US20210089818A1 (en) | Deposit detection device and deposit detection method |
| GB2633374A (en) | Method for determining occlusions of optics of a camera, driving system and vehicle |
| US11288882B2 (en) | Deposit detection device and deposit detection method |
| JP2009295112A (en) | Object recognition device |
| JP4765113B2 (en) | Vehicle periphery monitoring device, vehicle, vehicle periphery monitoring program, and vehicle periphery monitoring method |
| KR101437228B1 (en) | Obstacle detection device and method using boundary weighting |
| JP2000125288A (en) | Object tracking method and object tracking device |
| US11308709B2 (en) | Deposit detection device and deposit detection method |
| CN114387500B (en) | Image recognition method and system for self-propelled equipment, self-propelled equipment and readable storage medium |
| CN102906801A (en) | Vehicle surroundings monitoring device |
| JP6698966B2 (en) | False detection determination device and false detection determination method |
| JP2015215235A (en) | Object detection device and object detection method |