WO2017077261A1 - Monocular camera cognitive imaging system for a vehicle - Google Patents
Monocular camera cognitive imaging system for a vehicle
- Publication number
- WO2017077261A1
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- frame
- environment
- sub
- vehicle
- hazard
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
Definitions
- the present invention concerns a system for cognitive imaging in a vehicle using a monocular camera to capture image data.
- the field of cognitive imaging generally requires one image frame, or a sequence of image frames, to be captured and analysed to deduce useful information from the image. For example, in the field of vehicle control, an image of a forward field of view may be captured and analysed to determine whether a hazard appears ahead of the vehicle. The resulting information may be used to steer the vehicle.
- Such systems are commonly stereoscopic in order to exploit parallax for ranging and sizing; they are therefore complex and expensive.
- CN101303732A discloses an imaging system using a vehicle-mounted on-board monocular camera.
- the system of CN101303732A provides a warning alarm if another hazard target vehicle is detected ahead of the camera vehicle.
- the hazard target vehicle is detected using feature point extraction and feature point matching.
- the system of CN101303732A is therefore unable to identify and detect stationary target hazards.
- EP0671706A2 presents an image processing method and system which makes use of background reconstruction, partial region updating and background differencing. It realises moving target detection by gradually extracting moving objects from multiple continuous images. Because the vehicle-mounted camera keeps moving, the background and the target hazard both change simultaneously in each captured frame; the system of EP0671706A2 therefore cannot accurately extract a moving object against a dynamically changing background.
- CN102542571A presents a moving object detection method and equipment. It uses continuous multi-frame imaging and divides the video into several segments.
- the background feature matrix of each video chunk is constructed from the corresponding structure and texture features. Based on the obtained matrix, the feature vector of each video chunk to be detected is represented linearly.
- the detector decides whether a moving target is present, based on the residual and sparseness of a minimum-residual solution.
- the system is poor at training itself to identify particular hazard targets, especially where there are multiple targets within the field of view.
- the system cannot be applied efficiently where the camera is moving as well as the target hazard, because of the poor correlation between the backgrounds of sequential frames. This prior art system cannot recognise different types of object.
- CN102915545A must obtain multiple parameters during hazard target tracking.
- CN102915545A uses a Kalman filter for dynamic position estimation and detection template updating, which performs poorly when tracking target hazards that move non-linearly.
- US2013243322A1 discloses a system which requires the camera to be fixed in a stationary position.
- JP2011154634A is unable to detect a target hazard other than from grey-level image data.
- a cognitive imaging system for a vehicle comprising:
- a monocular camera arranged to focus an image of a field of view onto an image sensor, said image sensor processing the image into image data
- a processor device responsive to machine readable code stored in a memory to be capable of driving the camera to capture a sequence of images to implement an image processing method in order to process the image data and identify a hazard in the field of view;
- the processor is responsive to capture a time sequence of images from the camera; it scans the pixel image data of a first captured image in the sequence and applies a series of filters to identify a sub-frame in which a vehicular hazard may exist and to distinguish the background environment from the foreground vehicle hazard;
- the processor is responsive to calculate a fast environment classification criterion for each sub-frame in the current frame;
- the processor is responsive to add each fast environment classification to a fast environment identification queue of a predetermined length to form an environment analysing vector;
- the processor is responsive to analyse the pixel image data of a subsequent captured image to add fast environment classification data to the fast environment identification queue;
- a slow environment result is updated according to the fast environment analysing vector calculated from the preceding sequence of captured frames.
- the fast environment loop addresses the problem of a rapidly changing environment, that is, where the background changes quickly from frame to frame.
- the slow environment loop seeks to reduce the processing required to track a target first identified in the fast environment loop, by identifying and exploiting situations where the environment, and therefore the background, is changing slowly.
- the "slow environment loop" is a faster process and tends to reduce the burden on hardware assets.
- figure 1 is an isometric view of a user vehicle travelling on a road approaching a tunnel, following a target vehicle;
- figure 2 is a diagram of the field of view of a system camera from a system-equipped user vehicle;
- figure 3 is a diagram of the bottom shadow of each target vehicle in figure 2;
- figure 4a is a side elevation of the scene in figure 1;
- figure 4b is a plan of the scene in figure 1;
- figure 5 is a side elevation of the user vehicle illustrating aspects of the system installation;
- figure 6 is a high-level chart of the system operation.
- Figure 1 illustrates a user vehicle 1 equipped with the system travelling on a road approaching a tunnel.
- a target vehicle T1 is leading the user vehicle 1 into the tunnel, while a second target vehicle T2 is exiting the tunnel towards the user vehicle 1.
- the target vehicles T1 and T2 form the foreground against a background environment which is moving relative to the user vehicle 1.
- the user or camera vehicle is fitted with a system camera 2 mounted to have a field of view (FOV) in the forward direction.
- the camera is mounted to have a lens axis at a height h-camera above the ground.
- To calibrate the system it is necessary to load the system with the height of the vehicle h-car.
- the system may be interfaced with the user vehicle management systems in order to read the vehicle speed, indicator condition and steering angle.
- the camera is installed in a host vehicle, preferably behind the windscreen as shown in figure 5, where it will not obscure the driver's view and has a field of view looking horizontally forwards through the windscreen.
- the camera forms part of a cognitive imaging system (CIS), shown diagrammatically in figure 8, including memory and a processor running machine readable code to implement an image recognition process and to intelligently output alerts to the vehicle driver via a vehicle management system (VMS) of the vehicle. Alerts may be via visual and/or audible means.
- the output means may be integral with the CIS or may interface to output via a VDU or loudspeaker integral with the vehicle.
- the CIS will preferably have an interface for communication with the output of the vehicle management sensors.
- Vehicle management sensors may be hardwired or wireless and may be via ports in the vehicle management system or direct to the sensors.
- Vehicle sensor data captured by the CIS will preferably include: vehicle speed, accelerometer data, steering angle, accelerator angle, brake sensor and indicator actuation.
- the interface may also be installed to be capable of driving the vehicle management system to actuate the vehicle brake or other systems.
- step 0.1 initialises a frame counter to a fixed initial value such as 1.
- the first process step, S-1, is to capture a first real-time image frame (FR1) and subject it to the fast environment process loop (FEPL).
- FEPL fast environment process loop
- the frame may be subject to image pre-processing steps such as
- ROI region of interest
- An ROI is any portion of FR1 which contains image data suspected of indicating a potential hazard.
- An example of a potential hazard is a vehicle moving relative to the host vehicle, and not part of the background which could adopt a vector resulting in a collision.
- ROI are identified by thresholding and boundary detection techniques.
- ROI are identified in the captured image by inspecting the pixels along a search path which starts at the middle of the bottom of the frame and proceeds towards the top, progressing from the middle to each of the two sides, either sequentially or simultaneously, as sketched below.
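As an illustration only, the search order described above can be expressed as a generator. The function name and the exact interleaving are assumptions made for this sketch, not the patent's own implementation:

```python
def search_path(height: int, width: int):
    """Yield (row, col) coordinates starting at the middle of the bottom
    row, proceeding towards the top, and fanning out from the middle
    column to both sides of the frame."""
    mid = width // 2
    for row in range(height - 1, -1, -1):   # bottom of the frame first
        yield row, mid                       # middle column
        for offset in range(1, mid + 1):     # progress towards both sides
            if mid - offset >= 0:
                yield row, mid - offset      # towards the left edge
            if mid + offset < width:
                yield row, mid + offset      # towards the right edge
```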
- the target vehicle height up_car is determined according to the row of the search point at which the vehicle bottom appears in the image, using the following quantities:
- mrow is the frame row in which the stationary point for the ROI lies;
- irow is the row in which the point at infinity lies;
- h-camera is the vertical height from the ground to the camera in the host vehicle;
- h-car is the a priori height of the target vehicle.
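The projection formula itself is not reproduced in this text. Under the standard pinhole ground-plane model implied by these definitions, with f denoting the focal length in pixels and Z the ground distance to the target (both symbols introduced here as assumptions), similar triangles give:

```latex
m_{\mathrm{row}} - i_{\mathrm{row}} = \frac{f\,h_{\mathrm{camera}}}{Z},
\qquad
up\_car = \left(m_{\mathrm{row}} - i_{\mathrm{row}}\right)\frac{h_{\mathrm{car}}}{h_{\mathrm{camera}}}
```

that is, the apparent height of the target in pixels scales the horizon-to-bottom row offset by the ratio of the a priori vehicle height to the camera height.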
- at step S-2.1 the system determines whether the count of ROI exceeds zero, in other words whether ROIn, the number of regions of interest, satisfies ROIn > 0. If so, the process goes to step S3; if not, it returns to step S1, where the next real-time image is captured.
- Each ROI is uniquely identifiable in a frame by a "stationary point".
- the stationary point is a coordinate common to every frame; that is to say, in a sequence of frames the stationary point will not move from one frame to another.
- the stationary point is established in the middle of the bottom row of each ROI from which the examination of the ROI ordinarily commences.
- the apparent depth or height of the bottom shadow is an indication of the distance of the target from the host vehicle.
- the depth of the bottom shadow is compared to a threshold value; if the threshold value is exceeded at step 3.2, the target is close enough to the host vehicle to be a hazard.
- the mean grey level mentioned above is obtained by calculating the weighted average of the grey values of the corresponding region, where the weight is based on the distance of the potential hazard object from the camera.
- Detection of a hazard target is determined at step 3.1 if the bottom shadow threshold is exceeded.
- if the threshold is not exceeded, the process goes to step 3.3, where the ROI is tagged as examined; if any ROI remain unexamined, step 3.4 returns the process to step 3, and if no unexamined ROI remain, the process returns to S1 to capture a new frame.
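A minimal sketch of the bottom-shadow decision follows. The 0.8 shadow factor and the exact form of the distance weighting are assumptions; the text states only that a weighted mean grey level is computed with weights based on distance from the camera:

```python
import numpy as np

def bottom_shadow_hazard(roi_grey: np.ndarray,
                         distance_weights: np.ndarray,
                         shadow_depth_threshold: int) -> bool:
    """roi_grey: 2-D grey-level image of the candidate region;
    distance_weights: per-pixel weights derived from estimated distance.
    Returns True if the bottom shadow is deep enough to flag a hazard."""
    weighted_mean = np.average(roi_grey, weights=distance_weights)
    # Shadows are darker than the road surface: pixels sufficiently
    # below the weighted mean grey level are treated as shadow.
    shadow_mask = roi_grey < 0.8 * weighted_mean   # 0.8 is an assumed factor
    # "Depth" of the shadow: the number of rows containing shadow pixels.
    shadow_depth = int(shadow_mask.any(axis=1).sum())
    return shadow_depth > shadow_depth_threshold   # step 3.2 comparison
```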
- AEDD, the dynamic edge accumulative decision of step S4, comprises the steps of determining the vertical edge accumulative decision area of the left side as follows:
- the system seeks edges in each area using a Sobel operator and conducts a 2-dimensional convolution to obtain the left and right vertical edge image Sobel_2, which is binarised using a dynamic threshold and accumulated downwards.
- the units of accumulated values are obtained after accumulating the left and right vertical edge sub-frames.
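A sketch of this edge accumulation is shown below. The Sobel kernel is standard; the dynamic-threshold rule (a fixed fraction of the maximum response) is an assumption, as the patent does not give its exact form:

```python
import numpy as np
from scipy.signal import convolve2d

# Standard Sobel kernel responding to vertical edges.
SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]])

def vertical_edge_accumulation(sub_frame: np.ndarray,
                               threshold_fraction: float = 0.5) -> np.ndarray:
    """2-D convolution with the Sobel operator, dynamic binarisation and
    downward (column-wise) accumulation of the vertical edge image."""
    edges = np.abs(convolve2d(sub_frame, SOBEL_X, mode="same"))
    dynamic_threshold = threshold_fraction * edges.max()   # assumed rule
    binary = (edges > dynamic_threshold).astype(int)
    return binary.cumsum(axis=0)   # accumulate downwards, column by column
```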
- the search point moves to the next stationary point along the search path, updates the sub-frame of interest, enters the next dynamic bottom shadow decision state and returns to step S3.
- the ROI fails step 4.1 and goes to step 3.3.
- SVM dynamic support vector machine
- the captured and pre-processed sub-frames are regularised to the same size and normalised, preferably to 64 × 64 pixels.
- a gradient histogram is established for the ROI.
- the system adopts block units of size 8 × 8 pixels, cell units of size 4 × 4, a smoothing step size of 8 and a histogram bin number of 9.
- the normalised 64 × 64 ROI containing a potential vehicle object is transformed into a 1 × 1764 feature vector (feature dimension 1764), where each dimension value represents the statistics accumulated along a certain gradient direction in a specific sub-area.
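The quoted cell and block sizes do not, by themselves, yield 1764 dimensions; one standard HOG parameterisation that does reproduce the stated 1 × 1764 vector on a 64 × 64 image (9 bins, 8 × 8-pixel cells, 2 × 2-cell blocks, one-cell stride, giving 7 × 7 blocks × 4 cells × 9 bins = 1764) is sketched below. Mapping the patent's wording onto these parameters is an assumption:

```python
import numpy as np
from skimage.feature import hog

roi = np.random.rand(64, 64)   # stand-in for a regularised 64 x 64 sub-frame

feature_vector = hog(roi,
                     orientations=9,           # histogram bin number
                     pixels_per_cell=(8, 8),
                     cells_per_block=(2, 2),
                     block_norm="L2-Hys",
                     feature_vector=True)

assert feature_vector.shape == (1764,)         # 7 * 7 * 2 * 2 * 9
```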
- once the ROI is regularised and normalised, it is compared with a database of templates of potential target road vehicles, such as cars, vans, lorries and coaches.
- the templates are sampled not only for a range of vehicles but, for each vehicle, before a range of backgrounds and conditions such as weather and/or daylight and/or night. Comparison may be by adding the ROI to the negative template and examining the output result; if the result is zero or near zero, the target vehicle in the ROI is deemed a match with the template.
- the output from the SVM stage is produced by one of three SVM classifiers.
- the steps to generate the classifiers are: obtain the histogram of orientated gradients (HOG) feature vector matrix from the block unit and cell unit gradient histograms after regularisation; construct an augmented feature matrix from the feature vector matrix by combining the class samples; and then find a classification hyperplane that can distinguish positive templates from negative templates through the SVM.
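A hedged sketch of training one scenario-specific classifier is given below. The use of scikit-learn and a linear kernel is an assumption; the text requires only a hyperplane that separates positive from negative templates:

```python
import numpy as np
from sklearn.svm import LinearSVC

def train_scenario_svm(positive_hog: np.ndarray,
                       negative_hog: np.ndarray) -> LinearSVC:
    """positive_hog / negative_hog: (n_samples, 1764) HOG feature matrices
    sampled under one scenario (e.g. day, nightfall or dusk)."""
    X = np.vstack([positive_hog, negative_hog])    # augmented feature matrix
    y = np.hstack([np.ones(len(positive_hog)),     # positive templates
                   np.zeros(len(negative_hog))])   # negative templates
    return LinearSVC(C=1.0).fit(X, y)              # classification hyperplane
```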
- HOG histogram of orientated gradients
- identifying process: based on the three SVM classifiers corresponding to different scenarios, choose the SVM corresponding to the slow environment identification result and match it with the currently identified slow environment.
- the SVM classifies the sub-frame of interest into a vehicle or non-vehicle class.
- if no hazard target vehicle is present in the ROI, the process goes to step 3 to examine a new unexamined ROI, or to step 1 to capture a new frame where step 3.4 finds no unexamined ROI.
- the step of tagging may be achieved in any suitable way, for example by setting a flag against the stationary point of the ROI.
- the process goes to step 5.2 and records the stationary point to track the target vehicle in subsequent cycles.
- the stationary point of the ROI is associated with a recording of the background of the ROI.
- the system may advantageously record only the ROI background
- the system obtains grey-level information for every pixel of the background in sub-regions to the left and right of the vehicle and within the ROI, where the left region may be defined by:
- the system extracts the maximum and minimum values of the row mean vectors which correspond to the left and right background sub-frames of the ROI respectively. Based on the maximum and minimum values, the system constructs a feature matrix.
- the fast environment classification criterion for the current single frame is as follows.
- the system then updates the parameters at step 6.2, as a single-frame dynamic decision system parameter update based on the fast environment detection results.
- at step 7 the system selects one of the recorded stationary points identifying an ROI which has passed each of steps 3 to 5. It is now desirable to determine whether the background exhibits a high level of complexity or a low level of complexity (the background is simple). To determine the complexity of the background, each ROI is scanned backwards at step 7.1 as compared with step 3; thus the search pattern runs from top to bottom and from the edges towards the centre. At step 7.2 the bottom shadow is determined from the backscan results and again compared with the bottom shadow threshold mentioned at step 3 above.
- step 7.3 indicates that there are one or more approximate false-alarm stationary points; the background environment is therefore determined to be complex at step 7.4, so the process can advance directly to step 7.7.
- if the decision at step 7.4 is that the environment is simple (7.5), the process advances to step 7.6. In this case, if the result of the fast environment decision is "1", the current fast environment identification result requires rectification to "2". The system will re-identify the fast environment around the target and rectify the fast environment error at step 7.6. The dynamic decision parameter of the next frame will then be based on the rectified fast environment decision.
- a fast environment identification queue is formed with a length of 100; this is the fast environment identification queue used for slow environment identification (SEI).
- SEI slow environment identification
- the SEI is the outer loop of the fast environment loop. Every output of the environment identification result is pushed into the queue. If the queue already stores 100 statistical results, then the first result pushed into the queue is squeezed out, preserving the "first in, first out" rule.
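The first-in, first-out behaviour can be sketched with a bounded deque; this is an illustrative choice, not the patent's stated implementation:

```python
from collections import deque

# A deque with maxlen=100 silently discards the oldest entry on each push
# once full, giving the "first in, first out" rule described above.
fast_queue: deque = deque(maxlen=100)

def push_fast_result(fast_class: int) -> None:
    """Push one per-frame fast environment classification result."""
    fast_queue.append(fast_class)
```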
- the push strategies are:
- S3 dynamic bottom shadow decision
- S4 dynamic edge accumulative decision
- S5 dynamic support vector machine (SVM) decision
- if the fast environment decision of the current sub-frame is "2", "3" or "4", the fast environment decision result of the current sub-frame will be pushed into the queue. If a sub-frame of interest passes S3, S4 and S5, the rectified fast environment decision results "0", "1", "2", "3" and "4" will be pushed into the queue sequentially.
- at step 7.8 the system then utilises the 100 statistical fast environment decision results to update the slow environment result according to the following strategies:
- the slow environment will be set to 2 (the normal condition).
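The complete set of update strategies is not reproduced in this text. As an illustrative assumption only, a simple majority vote over the queued results, defaulting to the normal condition "2", might look like this:

```python
from collections import Counter, deque

def update_slow_environment(fast_queue: deque) -> int:
    """Derive the slow environment result from the 100 queued fast results.
    Majority voting is an assumption; the patent's own strategies are not
    quoted here."""
    if len(fast_queue) < (fast_queue.maxlen or 0):
        return 2   # normal condition until the queue is full
    label, _count = Counter(fast_queue).most_common(1)[0]
    return label
```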
- at step 8.0 the fast environment identification is reset according to the slow environment result at a certain interval.
- the fast environment identification will be set to "3, nightfall".
- the fast environment related parameters, including the "Dynamic Binarization Sobel Threshold for Global Gradient Image", the "Dynamic Distinguish Threshold for Shadow and Ground" and the "Two Stage Dynamic Bimodal Vertical Direction Cumulative Threshold", will change according to the reset of the fast environment, from 1 to 3, for the next frame.
- if the fast environment identification result is "2" or "3":
- the fast environment identification will be forced to "4, dusk".
- the fast environment related parameters, including the "Dynamic Binarization Sobel Threshold for Global Gradient Image", the "Dynamic Distinguish Threshold for Shadow and Ground" and the "Two Stage Dynamic Bimodal Vertical Direction Cumulative Threshold", will change according to the reset of the fast environment for the next frame.
- if sub-frames (ROI) pass S3, S4 and S5 in several consecutive frames, it can be concluded that a vehicle object exists in the sub-frame (ROI) of the corresponding stationary point.
- each sub-frame of interest will be marked as a sub-frame of object tracking. The detailed process is described below:
- where the environment identification result is complex: if there is still a sub-frame of interest in the current and previous four frames that can pass the three-stage dynamic decision, under the condition that the stationary points of these five frames of interest are close to each other, then it can be regarded that an object exists, and the object is marked as the final object detection result.
- where the object feature is weakened: if there still exists a sub-frame of interest in the current and previous three image frames that can pass the three-stage dynamic decision, under the condition that the stationary points of these sub-frames of interest are close to each other, it can be regarded that an object exists, and the object is marked as the final object detection result.
- referring again to step 5.3, where each ROI has been scanned so that there are no unexamined ROI remaining:
- at step 5.3.1 a frame counter is incremented from N to N+1.
- step 5.3.2 the frame count is compared to a predetermined frame count number "X".
- X may be five but might be set to a higher or lower value. This ensures that a sequence of five captured frames must be inspected with the corresponding ROI exhibiting the presence of a potential hazard target before target tracking is confirmed.
- at step 5.3.3 the frame count is reset to 1. From step 5.3.3 the system advances to step 9.1, where the target data is output for subsequent processing.
- from step 9.1 the method advances to step 1, where a new frame is captured for examination.
- if the frame count number is not exceeded at step 5.3.2, the method advances directly to step 1 to capture a new frame for examination.
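A compact sketch of the confirmation logic of steps 5.3.1 to 5.3.3 follows. Resetting the counter when an ROI fails is an assumption; the text specifies only the increment, the comparison with X and the reset on confirmation:

```python
def confirm_target(frame_count: int, roi_passed: bool, x: int = 5):
    """Returns (new_frame_count, confirmed). Tracking is confirmed once X
    consecutive frames contain an ROI passing steps S3 to S5."""
    if not roi_passed:
        return 1, False        # assumed: a broken sequence starts again
    frame_count += 1           # step 5.3.1
    if frame_count >= x:       # step 5.3.2
        return 1, True         # step 5.3.3: reset counter, confirm target
    return frame_count, False
```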
- the cognitive imaging system, having confirmed the presence of a foreground object which may prove to be a hazard, is able to track the hazard with a high degree of confidence even against fast-changing background environments. The tracking of any hazard target is achieved across several sequentially captured frames, which facilitates the step of calculating a vector for the target and hence a plot of the position of the target vehicle against time, which can be compared to the current vector of the host vehicle.
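As an illustration of this vector calculation, a target's mean motion vector can be estimated from its tracked positions and compared with the host vehicle's vector; the function names and the ground-plane coordinate convention are assumptions for this sketch:

```python
import numpy as np

def target_vector(positions: np.ndarray, dt: float) -> np.ndarray:
    """positions: (n_frames, 2) ground-plane coordinates of the target,
    one row per captured frame; dt is the inter-frame interval in seconds.
    Returns the mean velocity vector across the sequence."""
    return np.diff(positions, axis=0).mean(axis=0) / dt

def closing_vector(target_positions: np.ndarray,
                   host_velocity: np.ndarray, dt: float) -> np.ndarray:
    """Velocity of the target relative to the host vehicle; a persistent
    vector pointing at the host indicates a potential collision course."""
    return target_vector(target_positions, dt) - host_velocity
```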
- an integral warning system may, for example, include a screen, warning lights or an acoustic warning from a loudspeaker. Alternatively, the warning may be transferred over the interface to drive the warning indicators of the vehicle.
- a feature of the system is the ability to correlate the lateral movement of the vehicle with the operation of the vehicle indicators, in order to warn the driver against changing lane without indicating or otherwise driving erratically.
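A hedged sketch of this correlation is given below; the drift threshold and signal names are assumptions, as the text states only that lateral movement is correlated with indicator operation:

```python
LATERAL_DRIFT_THRESHOLD = 0.5   # assumed lateral displacement limit (metres)

def lane_change_warning(lateral_drift: float, indicator_on: bool) -> bool:
    """Warn when the vehicle drifts laterally beyond the threshold while
    neither indicator is active (an un-signalled lane change)."""
    return abs(lateral_drift) > LATERAL_DRIFT_THRESHOLD and not indicator_on
```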
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Traffic Control Systems (AREA)
- Image Analysis (AREA)
Abstract
A cognitive imaging system and a cognitive imaging method are disclosed, comprising a step of classifying the complexity of a background environment in a segment, sub-frame or region of interest (ROI) of a frame captured by a monocular camera. The monocular camera is installed in a vehicle (a ship or an aircraft) so as to have a field of view in the vehicle's normal direction of travel. The classification is added to a queue of predetermined length and updated by adding a new classification each time a sub-frame is scanned in a subsequent newly captured frame during target hazard tracking. The sequence of classifications forms an environment analysing vector. The environment analysing vector is used to determine whether the environment background for the sub-frame can be reused in a subsequent background subtraction to track the target in a later captured frame. The resulting tracking data can be used to determine whether a hazardous situation is imminent and to alert the driver of the vehicle or to control the vehicle to avoid the hazard.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/GB2015/053359 WO2017077261A1 (fr) | 2015-11-05 | 2015-11-05 | Monocular camera cognitive imaging system for a vehicle |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/GB2015/053359 WO2017077261A1 (fr) | 2015-11-05 | 2015-11-05 | Monocular camera cognitive imaging system for a vehicle |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2017077261A1 true WO2017077261A1 (fr) | 2017-05-11 |
Family
ID=54754687
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/GB2015/053359 Ceased WO2017077261A1 (fr) | 2015-11-05 | 2015-11-05 | Système d'imagerie cognitive à caméra monoculaire pour véhicule |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2017077261A1 (fr) |
2015
- 2015-11-05 WO PCT/GB2015/053359 patent/WO2017077261A1/fr not_active Ceased
Patent Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| EP0671706A2 (fr) | 1994-03-09 | 1995-09-13 | Nippon Telegraph And Telephone Corporation | Method and device for extracting a moving object using background subtraction |
| CN101303732A (zh) | 2008-04-11 | 2008-11-12 | Xi'an Jiaotong University | Moving target perception and alarm method based on a vehicle-mounted monocular camera |
| JP2011154634A (ja) | 2010-01-28 | 2011-08-11 | Toshiba Information Systems (Japan) Corp | Image processing apparatus, image processing method and image processing program |
| CN102542571A (zh) | 2010-12-17 | 2012-07-04 | China Mobile Group Guangdong Co., Ltd. | Moving target detection method and device |
| US20130243322A1 (en) | 2012-03-13 | 2013-09-19 | Korea University Research And Business Foundation | Image processing method |
| CN102915545A (zh) | 2012-09-20 | 2013-02-06 | East China Normal University | Video target tracking algorithm based on OpenCV |
Non-Patent Citations (6)
| Title |
|---|
| BING-FEI WU ET AL: "Embedded weather adaptive lane and vehicle detection system", INDUSTRIAL ELECTRONICS, 2008. ISIE 2008. IEEE INTERNATIONAL SYMPOSIUM ON, 1 January 2008 (2008-01-01), Piscataway, NJ, USA, pages 1255 - 1260, XP055286344, ISBN: 978-1-4244-1665-3, DOI: 10.1109/ISIE.2008.4677194 * |
| EN-FONG CHOU ET AL: "Weather-adapted Vehicle Detection for Forward Collision Warning System", LECTURE NOTES IN ENGINEERING AND COMPUTER SCIENCE, 1 July 2011 (2011-07-01), pages 1294 - 1299, XP055282726, Retrieved from the Internet <URL:http://www.iaeng.org/publication/WCE2011/WCE2011_pp1294-1299.pdf> [retrieved on 20160706] * |
| JÉRÉMIE BOSSU ET AL: "Rain or Snow Detection in Image Sequences Through Use of a Histogram of Orientation of Streaks", INTERNATIONAL JOURNAL OF COMPUTER VISION, KLUWER ACADEMIC PUBLISHERS, BO, vol. 93, no. 3, 29 January 2011 (2011-01-29), pages 348 - 367, XP019894573, ISSN: 1573-1405, DOI: 10.1007/S11263-011-0421-7 * |
| TAE HUNG KIM ET AL: "Road Sign Detection with Weather/Illumination Classifications and Adaptive Color Models in Various Road Images", KIPS TRANSACTIONS ON SOFTWARE AND DATA ENGINEERING, vol. 4, no. 11, 1 November 2015 (2015-11-01), pages 521 - 528, XP055286279, ISSN: 2287-5905, DOI: 10.3745/KTSDE.2015.4.11.521 * |
| WENG T L ET AL: "Weather-adaptive flying target detection and tracking from infrared video sequences", EXPERT SYSTEMS WITH APPLICATIONS, OXFORD, GB, vol. 37, no. 2, 1 March 2010 (2010-03-01), pages 1666 - 1675, XP026736226, ISSN: 0957-4174, [retrieved on 20090705], DOI: 10.1016/J.ESWA.2009.06.092 * |
| XUDONG ZHAO ET AL: "A time, space and color-based classification of different weather conditions", VISUAL COMMUNICATIONS AND IMAGE PROCESSING (VCIP), 2011 IEEE, IEEE, 6 November 2011 (2011-11-06), pages 1 - 4, XP032081366, ISBN: 978-1-4577-1321-7, DOI: 10.1109/VCIP.2011.6115972 * |
Cited By (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20190012549A1 (en) * | 2017-07-10 | 2019-01-10 | Nanjing Yuanjue Information and Technology Company | Scene analysis method and visual navigation device |
| US10614323B2 (en) * | 2017-07-10 | 2020-04-07 | Nanjing Yuanjue Information and Technology Company | Scene analysis method and visual navigation device |
| US11024042B2 (en) * | 2018-08-24 | 2021-06-01 | Incorporated National University Iwate University | Moving object detection apparatus and moving object detection method |
| CN110458885B (zh) * | 2019-08-27 | 2024-04-19 | Zongmu Technology (Shanghai) Co., Ltd. | Positioning system and mobile terminal based on journey perception and visual fusion |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| JP5297078B2 (ja) | Method for detecting a moving object in the blind spot of a vehicle, and blind spot detection device | |
| US11170272B2 (en) | Object detection device, object detection method, and computer program for object detection | |
| Wu et al. | Lane-mark extraction for automobiles under complex conditions | |
| US20180150704A1 (en) | Method of detecting pedestrian and vehicle based on convolutional neural network by using stereo camera | |
| US10867403B2 (en) | Vehicle external recognition apparatus | |
| KR101912914B1 (ko) | System and method for recognising speed limit signs using a front camera | |
| US9626599B2 (en) | Reconfigurable clear path detection system | |
| CN107667378B (zh) | Method and device for detecting and evaluating road surface reflections | |
| CN107463890B (zh) | Forward vehicle detection and tracking method based on a monocular forward-looking camera | |
| US8452054B2 (en) | Obstacle detection procedure for motor vehicle | |
| US8879786B2 (en) | Method for detecting and/or tracking objects in motion in a scene under surveillance that has interfering factors; apparatus; and computer program | |
| US20160104047A1 (en) | Image recognition system for a vehicle and corresponding method | |
| US20170032676A1 (en) | System for detecting pedestrians by fusing color and depth information | |
| US20130208945A1 (en) | Method for the detection and tracking of lane markings | |
| Ciberlin et al. | Object detection and object tracking in front of the vehicle using front view camera | |
| WO2011067790A2 (fr) | Cost-effective system and method for detecting, classifying and tracking pedestrians using a near-infrared camera | |
| EP2741234B1 (fr) | Object localisation using vertical symmetry | |
| CN111626170 (zh) | Image recognition method for detecting rockfall intrusion on railway slopes | |
| Romera et al. | A Real-Time Multi-scale Vehicle Detection and Tracking Approach for Smartphones. | |
| WO2016059643A1 (fr) | Pedestrian detection system and method | |
| CN107506739B (zh) | Night-time forward vehicle detection and ranging method | |
| CN108629225B (zh) | Vehicle detection method based on multiple sub-images and image saliency analysis | |
| WO2017077261A1 (fr) | Monocular camera cognitive imaging system for a vehicle | |
| KR20150002040A (ko) | Method for recognising and tracking pedestrians in real time using a Kalman filter and a clustering algorithm based on sequential HOG features | |
| CN107256382A (zh) | Virtual bumper control method and system based on image recognition | |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 15802183 Country of ref document: EP Kind code of ref document: A1 |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |
|
| 32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 03/09/2018) |
|
| 122 | Ep: pct application non-entry in european phase |
Ref document number: 15802183 Country of ref document: EP Kind code of ref document: A1 |