CN102542256B - Advanced Warning System with Forward Collision Warning for Traps and Pedestrians
- Publication number: CN102542256B (application CN201110404574.1A)
- Authority: CN (China)
- Prior art keywords: model, image, image point, road surface, optical flow
- Legal status: Active
Classifications
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; recognition of traffic objects, e.g. traffic signs, traffic lights or roads
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
- G06V20/588—Recognition of the road, e.g. of lane markings; recognition of the vehicle driving pattern in relation to the road
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
- G08G1/165—Anti-collision systems for passive traffic, e.g. including static obstacles, trees
- G08G1/166—Anti-collision systems for active traffic, e.g. moving vehicles, pedestrians, bikes
- G06T2207/10016—Video; image sequence
- G06T2207/30256—Lane; road marking
Description
Background
1. Technical Background
The present invention relates to driver assistance systems that provide forward collision warning.
2. Description of Related Art
In recent years, camera-based driver assistance systems (DAS) have entered the market; these include lane departure warning (LDW), automatic high-beam control (AHC), pedestrian recognition, and forward collision warning (FCW).
Lane departure warning (LDW) systems are designed to give a warning in the case of unintentional lane departure. The warning is given when the vehicle crosses or is about to cross the lane marking. Driver intent is determined based on use of the turn signals, changes in steering-wheel angle, vehicle speed, and brake activation.
In image processing, the Moravec corner detection algorithm is probably one of the earliest corner detection algorithms and defines a corner as a point with low self-similarity. The Moravec algorithm tests each pixel in the image to see whether a corner is present by considering how similar a patch centered on the pixel is to nearby, largely overlapping patches. The similarity is measured by taking the sum of squared differences (SSD) between the two patches; a smaller number indicates greater similarity. An alternative approach to detecting corners in an image is based on a method proposed by Harris and Stephens, which is an improvement of the method by Moravec. Harris and Stephens improved upon Moravec's corner detector by considering the differential of the corner score directly with respect to direction, rather than using shifted patches as Moravec did.
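As a rough illustration of the Harris-Stephens response described above, a minimal sketch in Python (the Sobel derivatives, window size, and k = 0.04 are illustrative choices, not taken from the text):

```python
import numpy as np
from scipy.ndimage import sobel, uniform_filter

def harris_response(img, k=0.04, window=5):
    """Harris-Stephens corner score R = det(M) - k * trace(M)^2 per pixel,
    where M is the gradient structure matrix averaged over a local window."""
    img = img.astype(np.float64)
    ix = sobel(img, axis=1)                  # horizontal derivative
    iy = sobel(img, axis=0)                  # vertical derivative
    ixx = uniform_filter(ix * ix, size=window)
    iyy = uniform_filter(iy * iy, size=window)
    ixy = uniform_filter(ix * iy, size=window)
    det = ixx * iyy - ixy * ixy              # det(M)
    trace = ixx + iyy                        # trace(M)
    return det - k * trace ** 2              # large positive values at corners
```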
In computer vision, a widely used differential method for optical flow estimation was developed by Bruce D. Lucas and Takeo Kanade. The Lucas-Kanade method assumes that the optical flow is essentially constant in a local neighborhood of the pixel under consideration, and solves the basic optical flow equation for all pixels in that neighborhood by the least-squares criterion. By combining information from several nearby pixels, the Lucas-Kanade method can often resolve the inherent ambiguity of the optical flow equation. Compared to point-wise methods, it is also less sensitive to image noise. On the other hand, since it is a purely local method, it cannot provide flow information in the interior of uniform regions of the image.
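A minimal single-window Lucas-Kanade solve might look as follows, assuming two grayscale frames as NumPy arrays; this is only the least-squares core, without the pyramidal and iterative refinements used in practice:

```python
import numpy as np

def lucas_kanade_flow(frame0, frame1, x, y, half=7):
    """Solve Ix*u + Iy*v = -It by least squares over a (2*half+1)^2 window,
    assuming the flow is constant within the window."""
    f0 = frame0.astype(np.float64)
    f1 = frame1.astype(np.float64)
    iy, ix = np.gradient(f0)                 # spatial gradients (rows, cols)
    it = f1 - f0                             # temporal gradient
    win = np.s_[y - half:y + half + 1, x - half:x + half + 1]
    a = np.stack([ix[win].ravel(), iy[win].ravel()], axis=1)
    b = -it[win].ravel()
    # A^T A must be well conditioned, i.e. the window should contain a corner
    (u, v), *_ = np.linalg.lstsq(a, b, rcond=None)
    return u, v                              # flow in pixels between the frames
```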
Summary
According to features of the present invention, various methods are provided for signaling a forward collision warning using a camera mountable in a motor vehicle. Multiple image frames are acquired at known time intervals. An image patch may be selected in at least one of the image frames. Optical flow may be tracked between the image frames for multiple image points of the patch. The image points may be fit to at least one model. Based on the fit of the image points, it may be determined whether or not a collision is expected and, if so, the time to collision (TTC) may be determined. The image points may be fit to a road surface model, with a portion of the image points modeled as imaged from the road surface. It may be determined that no collision is expected based on the fit of the image points to the road surface model. The image points may be fit to a vertical surface model, in which a portion of the image points is modeled as imaged from a vertical object; the time to collision (TTC) may be determined based on the fit of the image points to the vertical surface model. The image points may be fit to a mixed model, in which a first portion of the image points is modeled as imaged from the road surface and a second portion is modeled as imaged from a substantially vertical or upright object rather than from an object lying in the road surface.
In the image frames, a candidate image of a pedestrian may be detected, with the patch selected so as to include the candidate image. When the best-fit model is the vertical surface model, it may be verified that the candidate image is an image of an upright pedestrian and not of an object in the road surface. In the image frames, a vertical line may be detected, with the patch selected so as to include the vertical line. When the best-fit model is the vertical surface model, it may be verified that the vertical line is an image of a vertical object and not of an object in the road surface.
In various methods, a warning may be issued based on the time to collision being less than a threshold. In various methods, the relative scale of the patch may be determined based on the optical flow between the image frames, and the time to collision (TTC) determined responsive to the relative scale and the time interval. Such methods avoid performing object recognition in the patch prior to determining the relative scale.
According to features of the present invention, a system is provided that includes a camera and a processor. The system may be operable to provide a forward collision warning using the camera mountable in a motor vehicle. The system may also be operable to acquire multiple image frames at known time intervals; to select a patch in at least one of the image frames; to track optical flow between the image frames for multiple image points of the patch; and to fit the image points to at least one model and, based on the fit of the image points to the at least one model, determine whether or not a collision is expected and, if so, the time to collision (TTC). The system may further be operable to fit the image points to a road surface model, and it may be determined that no collision is expected based on the fit of the image points to the road surface model.
According to other embodiments of the present invention, a patch may be selected in the image frame corresponding to where the motor vehicle will be after a predetermined time interval. The patch may be monitored; if an object is imaged in the patch, a forward collision warning may be issued. Whether the object is substantially vertical, upright, or not in the road surface may be determined by tracking optical flow between the image frames for multiple image points of the object in the patch. The image points may be fit to at least one model, with a portion of the image points modeled as imaged from the object. Based on the fit of the image points to the at least one model, it is determined whether or not a collision is expected and, if so, the time to collision (TTC). The forward collision warning may be issued when the best-fit model includes a vertical surface model. The image points may be fit to a road surface model, and it may be determined that no collision is expected based on the fit of the image points to the road surface model.
According to features of the present invention, a system is provided for providing a forward collision warning in a motor vehicle. The system includes a camera and a processor mountable in the motor vehicle. The camera may be operable to acquire multiple image frames at known time intervals. The processor may be operable to select a patch in the image frames corresponding to where the motor vehicle will be after a predetermined time interval. If an object is imaged in the patch, a forward collision warning may be issued if the object is found to be upright and/or not in the road surface. The processor may further be operable to track multiple image points of the object in the patch between the image frames and to fit the image points to one or more models. The models may include a vertical object model, a road surface model, and/or a mixed model that includes one or more image points assumed to be from the road surface and one or more image points from an upright object not in the road surface. Based on the fit of the image points to the models, it is determined whether or not a collision is expected and, if so, the time to collision (TTC). The processor may be operable to issue the forward collision warning based on the TTC being less than a threshold.
Brief Description of the Drawings
The invention is herein described, by way of example only, with reference to the accompanying drawings, wherein:
Figures 1a and 1b schematically show, according to a feature of the present invention, two images captured from a forward-looking camera mounted inside a vehicle as the vehicle approaches a metal fence.
Figure 2a shows, according to a feature of the present invention, a method for providing a forward collision warning using a camera mounted in a host vehicle.
Figure 2b shows further details of the step of determining time to collision shown in Figure 2a, according to a feature of the present invention.
Figure 3a shows an image frame of an upright surface (the back of a van), according to a feature of the present invention.
Figure 3b shows, with reference to Figure 3a, the vertical motion δy of points as a function of vertical image position (y), according to a feature of the present invention.
Figure 3c shows a rectangular region that is primarily road surface, according to a feature of the present invention.
Figure 3d shows, with reference to Figure 3c, the vertical motion δy of points as a function of vertical image position (y), according to a feature of the present invention.
Figure 4a shows an image frame including an image of a metal fence with horizontal lines and a rectangular patch, according to a feature of the present invention.
Figures 4b and 4c show further details of the rectangular patch shown in Figure 4a, according to a feature of the present invention.
Figure 4d shows a graph of the vertical motion of points (δy) versus vertical point position (y), according to a feature of the present invention.
Figure 5 shows another example of looming in an image frame, according to a feature of the present invention.
Figure 6 shows a method for providing a forward collision warning trap, according to a feature of the present invention.
Figures 7a and 7b show an example of a forward collision trap warning triggered by a wall, according to an exemplary feature of the present invention.
Figure 7c shows an example of a forward collision trap warning triggered by boxes, according to an exemplary feature of the present invention.
Figure 7d shows an example of a forward collision trap warning triggered by the side of a car, according to an exemplary feature of the present invention.
Figure 8a shows an example of an object (a box) with prominent vertical lines, according to an aspect of the present invention.
Figure 8b shows an example of an object (a lamppost) with a prominent vertical line, according to an aspect of the present invention.
Figures 9 and 10 show a system including a camera or image sensor mounted in a vehicle, according to an aspect of the present invention.
Detailed Description
Reference will now be made in detail to features of the present invention, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to like elements throughout. The features are described below, with reference to the figures, in order to explain the present invention.
Before explaining features of the invention in detail, it is to be understood that the invention is not limited in its application to the details of design and arrangement of the components set forth in the following description or illustrated in the drawings. The invention is capable of other features or of being practiced or carried out in various ways. Also, it is to be understood that the phraseology and terminology employed herein is for the purpose of description and should not be regarded as limiting.
By way of introduction, embodiments of the present invention are directed to forward collision warning (FCW) systems. According to US Patent 7113867, an image of a lead vehicle is recognized. The width of the vehicle may be used to detect a change in scale, or relative scale S, between image frames, and the relative scale is used to determine the time to collision. Specifically, the width of the lead vehicle has, for example, a length (as measured, for instance, in pixels or millimeters) denoted w(t1) and w(t2) in a first and a second image, respectively. Then, optionally, the relative scale is S(t) = w(t2)/w(t1).
According to the teachings of US Patent 7113867, the forward collision warning (FCW) system depends on recognition of an image of an obstacle or object, e.g., a lead vehicle, as recognized in the image frames. In the forward collision warning system disclosed in US Patent 7113867, the change in scale of a dimension, e.g., the width, of the detected object, e.g., a vehicle, is used to compute the time to collision (TTC). However, the object is first detected and segmented from the surrounding scene. The present disclosure describes a system that uses the relative scale change, based on optical flow, to determine the time to collision (TTC) and the likelihood of collision, and to issue an FCW warning if required. Optical flow gives rise to the looming phenomenon: the perceived image appears larger as the imaged object becomes closer. According to various features of the present invention, object detection and/or recognition may be performed, or object detection and/or recognition may be avoided.
The looming phenomenon has been widely studied in biological systems. Looming appears to be a very low-level visual attention mechanism in humans and can trigger instinctive reactions. There have been various attempts in computer vision to detect looming, and even silicon sensors have been designed for detecting looming in the pure translation case.
Looming detection may be performed in a real-world environment with changing lighting conditions, complex scenes including multiple objects, and a host vehicle undergoing both translation and rotation.
The term "relative scale" as used herein refers to the increase (or decrease) in the relative size of an image patch in one image frame and a corresponding image patch in a subsequent image frame.
Reference is now made to Figures 9 and 10, which show a system 16 including a camera or image sensor 12 mounted in a vehicle 18, according to an aspect of the present invention. The image sensor 12, imaging a field of view in the forward direction, delivers images in real time, and the images are captured in a time series of image frames 15. An image processor 14 may be used to process the image frames 15 simultaneously and/or in parallel to serve a number of driver assistance systems. The driver assistance systems may be implemented using specific hardware circuitry with on-board software and/or software control algorithms in a memory 13. The image sensor 12 may be monochrome or black-and-white, i.e., without color separation, or the image sensor 12 may be color sensitive. By way of example in Figure 10, the image frames 15 are used to serve pedestrian warning (PW) 20, lane departure warning (LDW) 21, forward collision warning (FCW) 22 based on object detection and tracking according to the teachings of US Patent 7113867, forward collision warning based on image looming (FCWL) 209, and/or forward collision warning 601 based on an FCW trap (FCWT) 601. The image processor 14 is used to process the image frames 15 to detect looming of an image in the forward field of view of camera 12 for the forward collision warning 209 based on image looming and for FCWT 601. Forward collision warning 209 based on image looming and forward collision warning based on traps (FCWT) 601 may be performed in parallel with the conventional FCW 22 and with the other driver assistance functions: pedestrian detection (PW) 20, lane departure warning (LDW) 21, traffic sign detection, and ego-motion detection. FCWT 601 may be used to validate the conventional signal from FCW 22. The term "FCW signal" as used herein refers to a forward collision warning signal. The terms "FCW signal", "forward collision warning", and "warning" are used herein interchangeably.
Features of the present invention are illustrated in Figures 1a and 1b, which show an example of optical flow or looming. Two images are shown, captured from a forward-looking camera 12 mounted inside vehicle 18 as vehicle 18 approaches a metal fence 30. The image in Figure 1a shows the field of view and the fence 30. The image in Figure 1b shows the same features, with vehicle 18 closer to the metal fence 30. If the small rectangle p 32 in the fence (marked with a dotted line) is examined, it can be seen that in Figure 1b the horizontal lines 34 appear to spread out as vehicle 18 approaches the fence 30.
Reference is now made to Figure 2a, which shows a method 201 for providing a forward collision warning 209 (FCWL 209) using camera 12 mounted in host vehicle 18, according to a feature of the present invention. Method 201 does not depend on object recognition of an object in the forward view of vehicle 18. In step 203, multiple image frames 15 are acquired by camera 12; the time interval between captures of the image frames is Δt. A patch 32 in image frame 15 is selected in step 205, and the relative scale (S) of patch 32 is determined in step 207. In step 209, the time to collision (TTC) is determined based on the relative scale (S) and the time interval (Δt) between frames 15.
Reference is now made to Figure 2b, which shows further details of the step of determining time to collision 209 shown in Figure 2a, according to a feature of the present invention. In step 211, multiple image points in patch 32 may be tracked between image frames 15. In step 213, the image points may be fit to one or more models. A first model may be a vertical surface model, which may include objects such as a pedestrian, a vehicle, a wall, bushes, trees, or a lamppost. A second model may be a road surface model, which considers the characteristics of image points on the road surface. A mixed model may include one or more image points from the road and one or more image points from an upright object. Multiple time-to-collision (TTC) values may be computed for any model that assumes at least a portion of the image points to come from an upright object. In step 215, the best fit of the image points to the road surface model, the vertical surface model, or the mixed model enables selection of a time-to-collision (TTC) value. A warning may be issued based on the time to collision (TTC) being less than a threshold and when the best-fit model is the vertical surface model or the mixed model.
Optionally, step 213 may also include detection of a candidate image in image frame 15. The candidate image may be of a pedestrian, or of a vertical line of a vertical object such as a lamppost. In the case of a pedestrian or a vertical line, patch 32 may be selected so as to include the candidate image. Once patch 32 is selected, it is possible to perform a validation that the candidate image is an image of an upright pedestrian and/or of a vertical line. The validation may confirm that the candidate image is not an object in the road surface when the best-fit model is the vertical surface model.
Referring back to Figures 1a and 1b, sub-pixel alignment of patch 32 from the first image shown in Figure 1a to the second image shown in Figure 1b may give a size increase of 8%, i.e., a relative scale of S = 1.08 (step 207). Assuming a time difference of Δt = 0.5 seconds between the images, the time to collision (TTC) can be computed (step 209) using Equation 1:

TTC = Δt / (S − 1) = 0.5 / 0.08 = 6.25 s    (1)

If the speed v of vehicle 18 is known (v = 4.8 m/s), the distance Z to the target can also be computed, using Equation 2:

Z = v · TTC = 4.8 · 6.25 = 30 m    (2)
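A small sketch of the computation in Equations 1 and 2, using the example values above (function and variable names are illustrative):

```python
def time_to_collision(scale, dt):
    """TTC from the relative scale S = w(t2)/w(t1) of a patch between two
    frames captured dt seconds apart (Equation 1)."""
    return dt / (scale - 1.0)

ttc = time_to_collision(1.08, 0.5)   # 6.25 s for the example above
z = 4.8 * ttc                        # Equation 2: roughly 30 m at v = 4.8 m/s
```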
According to a feature of the present invention, Figures 3b and 3d show the vertical motion δy of points as a function of vertical image position (y). The vertical motion δy is zero at the horizon line and negative below the horizon line. The vertical motion of a point is given by Equation 3 below:

δy = (y − y0) · Δt / TTC    (3)

Equation (3) is a linear model relating y and δy and has, in effect, two unknowns (y0 and TTC), which can be solved for using two points.
For a vertical surface, all the points are at equal distance, as illustrated in Figure 3b; the motion is zero at the horizon (y0) and changes linearly with image position. For the road surface, the lower a point is in the image, the closer it is (the smaller Z is), as given by Equation 4 below:

Z = f·H / (y0 − y)    (4)

where f is the focal length of camera 12 and H is its height above the road. The image motion δy therefore does not increase at a merely linear rate, as shown by Equation 5 below and in the graph of Figure 3d:

δy = (y − y0) · v·Δt / (Z − v·Δt)    (5)

Equation (5) is, in effect, a constrained quadratic equation in two unknowns. Again, two points can be used to solve for the two unknowns.
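To make the two hypotheses concrete, a minimal sketch of the predicted vertical image motion under each model, following Equations 3 to 5 above (the sign convention, with y increasing upward toward the horizon, is an assumption):

```python
def dy_vertical(y, y0, ttc, dt):
    """Upright-surface model (Equation 3): motion is linear in y and
    vanishes at the horizon y0."""
    return (y - y0) * dt / ttc

def dy_road(y, y0, f, H, v, dt):
    """Road-surface model (Equations 4 and 5): points lower in the image
    are closer, so the magnitude of their motion grows roughly
    quadratically with distance below the horizon."""
    z = f * H / (y0 - y)                     # Equation 4
    return (y - y0) * v * dt / (z - v * dt)  # Equation 5
```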
Reference is now made to Figures 3a and 3c, which represent different image frames 15. In Figures 3a and 3c, two rectangular regions are shown by dotted lines. Figure 3a shows an upright surface (the back of a van). The square points are points that are tracked (step 211) whose motion matches the motion model for an upright surface (step 213), shown in Figure 3b as image motion (δy) versus point height (y). The motion of the triangle points in Figure 3a does not match the motion model for an upright surface. Reference is now made to Figure 3c, which shows a rectangular region that is primarily road surface. The square points are points that match the road surface model shown in Figure 3d as image motion (δy) versus point height (y). The motion of the triangle points does not match the motion model for the road surface; these points are outliers. In general, then, the task at hand is to determine which points belong to a model (and to which model) and which points are outliers; this may be performed by a robust fitting method, as explained below.
Reference is now made to Figures 4a, 4b, 4c, and 4d, which show a typical situation in which a mixture of two motion models is found in an image, according to a feature of the present invention. Figure 4a shows an image frame 15 including an image of a metal fence 30 with horizontal lines 34 and a rectangular patch 32a. Further details of patch 32a are shown in Figures 4b and 4c. Figure 4b shows detail of patch 32a in a previous image frame 15, and Figure 4c shows detail of patch 32a in a subsequent image frame 15, when vehicle 18 is closer to fence 30. In Figures 4c and 4d, some image points are shown as squares, triangles, and circles on the upright obstacle 30, and some image points are shown on the road surface leading up to obstacle 30. Tracking points inside the rectangular region 32a shows that some points, in the lower part of region 32a, correspond to the road model, while other points, in the upper part of region 32a, correspond to the upright surface model. Figure 4d shows a graph of the vertical motion of points (δy) versus vertical point position (y). In Figure 4d, the recovered model shown in the graph has two parts: a curved (parabolic) section 38a and a linear section 38b. The transition point between sections 38a and 38b corresponds to the bottom of the upright surface 30; the transition point is also marked by the horizontal dotted line 36 in Figure 4c. In Figures 4b and 4c there are also some points, shown as triangles, that are tracked but do not match the models; tracked points that match a model are shown as squares, and points that are not tracked well are shown as circles.
Reference is now made to Figure 5, which shows another example of looming in an image frame 15. In image frame 15 of Figure 5, there is no upright surface in patch 32b, only clear road ahead, and the transition point between the two models is marked at the horizon by dotted line 50.
Motion Model and Time-to-Collision (TTC) Estimation
Estimation of the motion model and time to collision (TTC) (step 215) assumes that a region 32, e.g., a rectangular region, is provided in image frame 15. Examples of rectangular regions are rectangles 32a and 32b shown in Figures 3 and 5. The rectangles may be selected based on a detected object, such as a pedestrian, or based on the motion of host vehicle 18.
1. Tracking points (step 211):
(a) The rectangular region 32 may be subdivided into a 5×20 grid of sub-rectangles.
(b) For each sub-rectangle, an algorithm may be performed to find an image corner, for example using the Harris and Stephens method, and that point may be tracked. It may be preferable to use 5×5 Harris points, considering the eigenvalues of the matrix

[ ΣIx²   ΣIxIy ]
[ ΣIxIy  ΣIy²  ]    (6)

and looking for two strong eigenvalues.
(c) Tracking may be performed by exhaustive search for the best sum-of-squared-differences (SSD) match over a rectangular search region of width W and height H. The exhaustive search at the start is important, since it means that no prior motion is assumed and the measurements from all the sub-rectangles are statistically more independent. The search is followed by fine-tuning using optical flow estimation, for example with the Lucas-Kanade method, which allows for sub-pixel motion.
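A simplified version of the exhaustive SSD search in (c), assuming grayscale NumPy frames; boundary handling and the exact window and search sizes are left out for brevity:

```python
import numpy as np

def track_patch_ssd(prev, curr, x, y, half=2, search_w=10, search_h=10):
    """Exhaustive search for the best SSD match of the template around
    (x, y) in `prev` within a rectangular search region of `curr`.
    Assumes (x, y) is far enough from the image border."""
    tmpl = prev[y - half:y + half + 1, x - half:x + half + 1].astype(np.float64)
    best, best_dx, best_dy = np.inf, 0, 0
    for dy in range(-search_h, search_h + 1):
        for dx in range(-search_w, search_w + 1):
            cand = curr[y + dy - half:y + dy + half + 1,
                        x + dx - half:x + dx + half + 1].astype(np.float64)
            ssd = np.sum((tmpl - cand) ** 2)
            if ssd < best:
                best, best_dx, best_dy = ssd, dx, dy
    # refine (best_dx, best_dy) with Lucas-Kanade for sub-pixel motion
    return best_dx, best_dy
```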
2. Robust model fitting (step 213):
(a) Pick, at random, two or three points from the 100 tracked points.
(b) The number of pairs picked (N_pairs) depends on the vehicle speed (v) and is given, for example, by:

N_pairs = min(40, max(5, 50 − v))    (7)

where v is in meters per second. The number of triplets (N_triplets) is given by:

N_triplets = 50 − N_pairs    (8)
(c) From two points, two models may be fit (step 213). One model assumes the two points are on an upright object. The second model assumes both points are on the road.
(d) From three points, two models may likewise be fit. One model assumes the upper two points are on an upright object and the third (lowest) point is on the road. The second model assumes the uppermost point is on an upright object and the lower two points are on the road.
The two models can be solved for the three points by solving the first model (Equation 3) using two of the points, and then solving the second model (Equation 5) using the resulting y0 and the third point.
(e) Each model in (d) gives a time-to-collision TTC value (step 215). Each model also gets a score based on how well the 98 other points fit the model. The score is given by the Sum of the Clipped Square of the Distance (SCSD) between the y motion of the points and the predicted model motion. The SCSD value is converted into a probability-like function, where N is the number of points (N = 98).
(f) From the TTC value and the speed of vehicle 18, and assuming the points are on a static object, the distance to the points can be computed as Z = v × TTC. From that distance and the x image coordinate of each image point, the lateral position in world coordinates can be computed by the pinhole relation X = x·Z/f.
(g) The lateral position at time TTC is thus computed. A binary lateral score requires that at least one of the points from the pair or triplet be in the path of vehicle 18.
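A sketch of the pair-sampling and scoring loop of steps (a) through (e), for the upright-object model only; the exponential conversion of the SCSD into a probability-like score is one plausible form, since the exact expression is not reproduced above:

```python
import math
import random

def fit_vertical(p1, p2, dt):
    """Solve dy = (y - y0) * dt / TTC (Equation 3) from two tracked points
    given as (y, dy) tuples; returns the horizon y0 and the TTC."""
    (y1, dy1), (y2, dy2) = p1, p2
    a = (dy1 - dy2) / (y1 - y2)              # slope a = dt / TTC
    return y1 - dy1 / a, dt / a

def model_score(points, y0, ttc, dt, clip=1.0):
    """Probability-like score from the Sum of the Clipped Square of the
    Distance between measured and predicted vertical motion."""
    scsd = sum(min((dy - (y - y0) * dt / ttc) ** 2, clip) for y, dy in points)
    return math.exp(-scsd / len(points))

def best_vertical_model(points, dt, n_pairs=40):
    """RANSAC-style search over random point pairs."""
    best_score, best_ttc = 0.0, None
    for _ in range(n_pairs):
        p1, p2 = random.sample(points, 2)
        try:
            y0, ttc = fit_vertical(p1, p2, dt)
        except ZeroDivisionError:
            continue                          # degenerate pair
        s = model_score(points, y0, ttc, dt)
        if s > best_score:
            best_score, best_ttc = s, ttc
    return best_score, best_ttc
```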
3. Multi-frame scores: New models may be generated at every frame 15, each with its associated TTC and score. The 200 best (highest-scoring) models from the previous four frames 15 may be kept, with the scores weighted as:

Score(n) = αⁿ · Score    (12)

where n = 0…3 is the age of the score and α = 0.95.
4. FCW decision: A true FCW warning is issued if any one of the following three conditions occurs:
(a) the TTC of the model with the highest score is below the TTC threshold and the score is greater than 0.75, and …;
(b) the TTC of the model with the highest score is below the TTC threshold and …;
(c) …
Figures 3 and 4 have shown how an FCW warning can be given robustly for the points inside a given rectangle 32. How the rectangle is defined depends on the application, as shown by the further exemplary features of Figures 7a-7d, 8a, and 8b.
FCW Trap for General Stationary Objects
Reference is now made to Figure 6, which shows a method 601 for providing a forward collision warning trap (FCWT) 601, according to a feature of the present invention. In step 203, multiple image frames 15 are acquired by camera 12. In step 605, a patch 32 is selected in an image frame 15 corresponding to where motor vehicle 18 will be after a predetermined time interval. Patch 32 is then monitored in step 607. In decision step 609, if a general object is imaged and detected in patch 32, a forward collision warning is issued in step 611. Otherwise, capture of image frames continues with step 203.
Figures 7a and 7b show an example of an FCWT 601 warning triggered by a wall 70, according to exemplary features of the present invention; Figure 7d shows an example of a warning triggered by the side of a car 72; and Figure 7c shows an example of a warning triggered by boxes 74a and 74b. Figures 7a-7d are examples of general stationary objects that require no prior class-based detection. The dotted rectangular region is defined as a target W = 1 m wide at the distance at which the host vehicle will be after t = 4 s:

Z = v·t    (16)

with the width and vertical position of the rectangle in the image given by the pinhole relations w = f·W/Z and y0 − y = f·H/Z, where v is the speed of vehicle 18, H is the height of camera 12, f is the focal length, and w and y are, respectively, the width of the rectangle and its vertical position in the image. This rectangular region is an example of an FCW trap: if an object "falls" into the region, the FCW trap can generate a warning when the TTC is below a threshold. Performance is improved by using multiple traps: to increase the detection rate, the FCW trap may be replicated into five regions with 50% overlap, producing a total trap zone 3 m wide.
The dynamic position of the FCW trap may be selected according to the yaw rate: the trap region 32 may be shifted laterally based on the path of vehicle 18 determined from the yaw-rate sensor, the speed of vehicle 18, and a dynamic model of host vehicle 18.
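A sketch of the trap geometry, combining Equation 16 with the pinhole relations above (variable names, and the convention that y increases toward the horizon, are illustrative):

```python
def trap_rectangle(v, f, y0, cam_height, t=4.0, width_m=1.0):
    """Image width and vertical position of a width_m wide target at the
    distance the host vehicle will reach in t seconds."""
    z = v * t                     # Equation 16
    w = f * width_m / z           # rectangle width in pixels
    y = y0 - f * cam_height / z   # vertical image position of its base
    return w, y

def trap_offsets(w, n=5):
    """Horizontal centers of n traps with 50% overlap; five 1 m traps
    cover a total zone 3 m wide."""
    return [(i - (n - 1) / 2) * w / 2.0 for i in range(n)]
```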
FCW Trap for Validating the Forward Collision Warning Signal
Objects of specific classes, such as vehicles and pedestrians, can be detected in images 15 using pattern recognition techniques. According to the teachings of US Patent 7113867, these objects are then tracked over time, and an FCW 22 signal can be generated using the change in scale. However, it is important to validate the FCW 22 signal with an independent technique before sounding a warning. Using an independent technique, for example method 209 (Figure 2b), to validate the FCW 22 signal may be particularly important if system 16 is to activate the brakes. In a radar/vision fusion system, the independent validation can come from the radar. In a vision-only system 16, the independent validation comes from an independent vision algorithm.
Detection of objects, e.g., pedestrians or lead vehicles, is not the problem; very high detection rates can be achieved with a very low false rate. A feature of the present invention is to generate a reliable FCW signal without too many false alarms, which would irritate the driver or, worse, cause the driver to brake unnecessarily. One possible problem with a conventional pedestrian FCW system is avoiding false forward collision warnings, since the number of pedestrians in the scene is large while the number of true forward collision situations is very small. Even a 5% false rate would mean that the driver would likely receive frequent false alarms and might never experience a true warning.
Pedestrian targets are particularly challenging for FCW systems because the targets are non-rigid, which makes tracking difficult (according to the teachings of US Patent 7113867), and the scale change in particular is subject to much noise. Thus, the robust model (method 209) may be used to validate the forward collision warning on pedestrians. The rectangular region 32 may be determined by the pedestrian detection system 20. An FCW signal may be generated only if both the target tracking performed by FCW 22 according to US Patent 7113867 and the robust FCW (method 209) give a TTC smaller than one or more thresholds, which may or may not be predetermined. The forward collision warning FCW 22 may have a threshold different from the threshold used in the robust model (method 209).
One factor that may increase the number of false alarms is that pedestrians typically appear on less structured roads, where the driver's driving pattern can be quite erratic, including sharp turns and lane changes. Issuing a warning may therefore need to include some further constraints:
When a curb or lane marking is detected, the FCW signal is inhibited if the pedestrian is on the far side of the curb and/or lane marking and neither of the following conditions occurs:
1. The pedestrian is crossing the lane marking or curb (or approaching it very fast). For this, detecting the pedestrian's feet may be important.
2. The host vehicle 18 is crossing the lane marking or curb (e.g., as detected by the LDW 21 system).
The driver's intentions are harder to predict. If the driver is driving straight ahead, has not activated a turn signal, and no further lane markings are expected, it is reasonable to assume that the driver will continue straight ahead. Thus, if there is a pedestrian in the path and the TTC is below the threshold, an FCW signal may be given. However, if the driver is in a turn, it is equally likely that he or she will continue the turn or will straighten out and continue ahead. Thus, when a yaw rate is detected, the FCW signal is given only if a pedestrian is in the path assuming vehicle 18 continues turning at the same yaw rate and also if a pedestrian is in the path assuming the vehicle goes straight.
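The gating logic of this paragraph might be sketched as follows; the predicate names are hypothetical:

```python
def issue_fcw(ttc, ttc_threshold, yaw_rate_detected,
              in_path_if_straight, in_path_if_turning):
    """Warn only when the pedestrian is in the predicted path under the
    relevant driving assumption(s)."""
    if ttc >= ttc_threshold:
        return False
    if yaw_rate_detected:
        # in a turn, require the pedestrian to be in the path both if the
        # turn continues and if the vehicle straightens out
        return in_path_if_turning and in_path_if_straight
    return in_path_if_straight
```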
The concept of the FCW trap 601 can be extended to objects consisting mainly of vertical (or horizontal) lines. A possible problem with using point-based techniques on such objects is that good Harris (corner) points are often created by a vertical line on the edge of the object intersecting a horizontal line from the distant background; the vertical motion of such points will resemble that of the distant road surface.
Figures 8a and 8b show examples of objects with prominent vertical lines 82: on a lamppost 80 in Figure 8b and on a box 84 in Figure 8a. Vertical lines 82 are detected in the trap zone 32. The detected lines 82 may be tracked between images. Robust estimation may be performed by pairing up lines 82 from frame to frame and computing a TTC model for each line pair, assuming a vertical object, and then giving a score based on the SCSD of the other lines 82. Since the number of lines may be small, it is often possible to test all combinations of line pairs. Only line pairs with significant overlap are used. In the case of horizontal lines, as when points are used, triplets of lines give two models.
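One way to obtain a TTC from a pair of tracked vertical lines, a sketch assuming an upright object so that the horizontal gap between the lines scales like 1/Z:

```python
def ttc_from_line_pair(x1_prev, x2_prev, x1_curr, x2_curr, dt):
    """The relative scale of the gap between two vertical lines across two
    frames dt seconds apart yields a TTC, as in Equation 1."""
    s = (x1_curr - x2_curr) / (x1_prev - x2_prev)
    return dt / (s - 1.0)
```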
As used herein, the indefinite articles "a" and "an", as in "an image" or "a rectangular region", have the meaning of "one or more", i.e., "one or more images" or "one or more rectangular regions".
While selected features of the present invention have been illustrated and described, it is to be understood that the present invention is not limited to the described features. Rather, it is to be appreciated that changes may be made to these features without departing from the principles and spirit of the invention, the scope of which is defined by the claims and their equivalents.
Claims (17)
- It is 1. a kind of to the method use the video camera that can be installed in a motor vehicle for determining method expected from front shock, Methods described includes:Multiple images frame is obtained by known time interval;Patch is selected at least one of described image frame;The light stream between the picture frame of the multiple images point of the patch is tracked to produce the light stream of tracking;For multiple models, at least one of the tracked light stream of described image point is fitted, to produce to the multiple mould Multiple fittings of type, wherein, the multiple model is selected from the group being made up of following item:(i) road surface model, wherein, described image A part for point is modeled as imaging from road surface, and a part for (ii) vertical surface model, wherein described image point is modeled as The object from substantial orthogonality, and (iii) mixed model are imaged, the Part I of wherein described image point is modeled as imaging Object of the imaging from substantial orthogonality is modeled as from the Part II on road surface, and described image point;Using at least a portion of described image point, to the light stream for being tracked and the fitting of each model score, it is each to produce Individual fraction;AndThere is a model for fraction by selection, it is determined that whether expection collides and determines collision time, the fraction pair Should be in described image point and the best fit of the light stream for being tracked.
- 2. the method for claim 1, also includes:Best fit based on described image point with the light stream for being tracked for corresponding to the road surface model, it is determined that expection does not have Collision.
- 3. the method for claim 1, also includes:It is optimal with the light stream for being tracked corresponding to the vertical surface model or the mixed model based on described image point Fitting, it is determined that contemplating that collision.
- 4. method as claimed in claim 3, also includes:The candidate image of pedestrian is detected in the patch;AndWhen best fit model is the vertical surface model, verify the candidate image be upright pedestrian image without It is the image of the object in road surface.
- 5. method as claimed in claim 3, also includes:Vertical line is detected in described image frame, wherein selecting the patch with including the vertical line;When best fit model is the vertical surface model, verify the vertical line be vertical object image rather than The image of the object in road surface.
- 6. the method for claim 1, also includes:Given a warning less than threshold value based on the collision time.
- 7. method as claimed in claim 4, also includes:When the best fit model is road surface model, verify that the candidate image is the object in road surface and expection will not There is collision.
- 8. a kind of for determining system expected from front shock, it includes:Video camera;AndProcessor in a motor vehicle can be installed;Wherein described system is configured to determine that front shock is expected,Wherein described processor is configured as obtaining multiple images frame by known time interval;Wherein described processor is configured as selecting patch at least one of described image frame;Wherein described processor is configured as multiple model followings between the picture frame of the multiple images point of the patch Light stream, wherein the multiple model is selected from the group that is made up of following item:(i) road surface model, wherein, one of described image point Divide and be modeled as imaging from road surface, (ii) vertical surface model a, part for wherein described image point is modeled as imaging from fact Vertical object in matter, and (iii) mixed model, the Part I of wherein described image point are modeled as imaging from road surface, And the Part II of described image point is modeled as object of the imaging from substantial orthogonality;Wherein, the processor is configured as being fitted at least one of the tracked light stream of described image point, is arrived with producing Multiple fittings of the multiple model;Wherein, the processor be configured with described image point at least a portion and to the light stream for being tracked and each The fitting of model is scored to produce each fraction;AndWherein, the processor is configured with least a portion of described image point, has fraction by selection Model determines whether expection collides and determine collision time, and the fraction corresponds to described image point and the light stream for being tracked Best fit.
- 9. system as claimed in claim 8, wherein, the processor is configured as described image point being fitted to road surface mould Type;Wherein described processor is configured as the fitting with the road surface model based on described image point, it is determined that expection does not have and touches Hit.
- 10. a kind of method expected from determination front shock, the method use video camera and the place that can be installed in a motor vehicle Reason device, methods described includes:Multiple images frame is obtained by known time interval;Patch in selection picture frame, the patch correspondence motor vehicle will the location of after a predetermined interval of time;The light stream between the picture frame of the multiple images point of the patch is tracked to produce the light stream of tracking;For multiple models, at least one of the tracked light stream of described image point is fitted, to produce to the multiple mould Multiple fittings of type, wherein, the multiple model is selected from the group being made up of following item:(i) road surface model, wherein, described image A part for point is modeled as imaging from road surface, and a part for (ii) vertical surface model, wherein described image point is modeled as The object from substantial orthogonality, and (iii) mixed model are imaged, the Part I of wherein described image point is modeled as imaging Object of the imaging from substantial orthogonality is modeled as from the Part II on road surface, and described image point;Using at least a portion of described image point, to the light stream for being tracked and the fitting of each model score, it is each to produce Individual fraction;AndThere is a model for fraction by selection, it is determined that whether expection collides and determines collision time, the fraction pair Should be in described image point and the best fit of the light stream for being tracked.
- 11. methods as claimed in claim 10, also include:It is determined that whether the object being imaged in the patch includes the part of substantial orthogonality.
- 12. methods as claimed in claim 11, also include:Described image point is fitted to road surface model;AndBest fit based on described image point with the road surface model, it is determined that expection does not have collision.
- 13. methods as claimed in claim 11, also include:When best fit model is vertical surface model or mixed model, front shock warning is sent.
- 14. methods as claimed in claim 10, also include:The yaw rate of the motor vehicle is input into or calculated from described image frame;AndThe yaw rate based on motor vehicle dynamic on described image frame laterally translates the patch.
- 15. is a kind of for determining system expected from front shock in a motor vehicle, and the system includes:Video camera, it can be arranged in the motor vehicle, and the video camera can be operated and obtained by known time interval Multiple images frame;Processor, it is configured as selecting the patch in picture frame, and the patch correspondence motor vehicle is by the predetermined time The location of behind interval;Wherein described processor is configured as the multiple images point in the patch for multiple model followings Picture frame between light stream to produce the light stream of tracking, wherein the multiple model is selected from the group being made up of following item:(i) road Surface model, wherein, a part for described image point is modeled as imaging from road surface, (ii) vertical surface model, wherein the figure A part for picture point is modeled as imaging from the object of substantial orthogonality, and (iii) mixed model, wherein described image point Part I is modeled as imaging from road surface, and the Part II of described image point is modeled as imaging from the right of substantial orthogonality As,Wherein, the processor is configured as being fitted at least one of the tracked light stream of described image point, is arrived with producing Multiple fittings of the multiple model;Wherein, the processor be configured with least a portion of described image point, to the light stream for being tracked and each mould The fitting of type is scored to produce each fraction;AndWherein, the processor is configured to selection, and there is the model of fraction to determine whether expection has collision and really Determine collision time, the fraction corresponds to the best fit of described image point and the light stream for being tracked.
- 16. The system of claim 15, wherein the processor is further configured to determine whether an object imaged in the patch includes a substantially vertical portion when the best fit of the image points to the tracked optical flow is the fit of the vertical surface model or the mixed model.
- 17. The system of claim 15, wherein the processor is configured to issue a forward collision warning based on a time to collision (TTC) being less than a threshold.
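
The model-fitting procedure of claims 10 and 15 can be illustrated in code. The following is a minimal sketch, not the patent's implementation: it assumes, for points below the horizon, that the vertical flow of road texture grows roughly quadratically with horizon-relative image height while an upright obstacle produces flow roughly linear in image height (uniform looming), fits each candidate model by least squares, scores the fits by residual error, and recovers time to collision (TTC) from the looming rate of the best vertical fit. The function names, the coordinate convention, and the candidate split rows tried for the mixed model are all illustrative assumptions.

```python
import numpy as np

def fit_model(y, dy, model, split=None):
    """Least-squares fit of the vertical flow dy of the tracked image points
    against image height y (measured downward from the horizon) under one
    simplified motion model; returns (sum of squared residuals, parameters)."""
    if model == "road":
        # Road plane: depth falls off roughly as 1/y, so the flow of road
        # texture grows roughly quadratically with image height.
        A = np.column_stack([y ** 2, y])
    elif model == "vertical":
        # Upright obstacle: uniform looming, flow roughly linear in y.
        A = np.column_stack([y, np.ones_like(y)])
    else:  # "mixed": upright object above the split row, road plane below it
        up, down = y < split, y >= split
        if up.sum() < 2 or down.sum() < 2:
            return float("inf"), np.zeros(2)
        r_road, _ = fit_model(y[down], dy[down], "road")
        r_vert, p_vert = fit_model(y[up], dy[up], "vertical")
        return r_road + r_vert, p_vert
    if len(y) < A.shape[1]:
        return float("inf"), np.zeros(A.shape[1])
    params, *_ = np.linalg.lstsq(A, dy, rcond=None)
    resid = dy - A @ params
    return float(resid @ resid), params

def forward_collision_check(points, flows, dt):
    """Score all three models on the tracked flow, pick the best fit, and
    estimate TTC when the best fit involves an upright obstacle."""
    y, dy = points[:, 1], flows[:, 1]
    fits = {
        "road": fit_model(y, dy, "road"),
        "vertical": fit_model(y, dy, "vertical"),
        "mixed": min((fit_model(y, dy, "mixed", split=s)
                      for s in np.quantile(y, [0.25, 0.5, 0.75])),
                     key=lambda f: f[0]),
    }
    best = min(fits, key=lambda m: fits[m][0])
    if best == "road":
        return best, None  # flow explained by the road plane: no collision expected
    slope = fits[best][1][0]  # looming rate of the vertical part, ~ dt / TTC
    return best, (dt / slope if slope > 1e-6 else float("inf"))
```

With `points` and `flows` as N-by-2 arrays of tracked patch points and their frame-to-frame flow, `forward_collision_check(points, flows, dt=0.05)` returning `("vertical", 1.4)` would indicate an upright obstacle roughly 1.4 s from contact; a `"road"` result corresponds to the no-collision determination of claim 12.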
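Claims 13, 14, and 17 add path prediction and the warning decision. The sketch below is likewise an illustration under stated assumptions rather than the patent's formulation: the patch is shifted laterally using a small-angle approximation of the arc swept at the given yaw rate, projected through an assumed pinhole camera, and the warning fires only when an upright obstacle's TTC drops below a threshold (the 2.0 s default is an assumed value).

```python
def translate_patch(patch_x, yaw_rate, speed, t_pred, focal_px, dist):
    """Shift the patch center (pixels) laterally so the trap follows the
    vehicle's predicted curved path.

    yaw_rate [rad/s] may be input from the vehicle or computed from the
    image frames; speed [m/s]; t_pred [s] is the prediction horizon;
    dist [m] is the assumed distance to the patch location on the road.
    """
    lateral_m = 0.5 * speed * yaw_rate * t_pred ** 2  # small-angle lateral drift
    return patch_x + focal_px * lateral_m / dist      # pinhole projection to pixels

def maybe_warn(model, ttc, ttc_threshold=2.0):
    """Issue a forward collision warning only when the best-fit model is an
    upright obstacle (vertical or mixed) and TTC is below the threshold."""
    return model in ("vertical", "mixed") and ttc is not None and ttc < ttc_threshold
```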
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710344179.6A CN107423675B (en) | 2010-12-07 | 2011-12-07 | Advanced warning system for forward collision warning of traps and pedestrians |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US42040510P | 2010-12-07 | 2010-12-07 | |
US61/420,405 | 2010-12-07 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710344179.6A Division CN107423675B (en) | 2010-12-07 | 2011-12-07 | Advanced warning system for forward collision warning of traps and pedestrians |
Publications (2)
Publication Number | Publication Date |
---|---|
CN102542256A CN102542256A (en) | 2012-07-04 |
CN102542256B true CN102542256B (en) | 2017-05-31 |
Family
ID=46349111
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710344179.6A Active CN107423675B (en) | 2010-12-07 | 2011-12-07 | Advanced warning system for forward collision warning of traps and pedestrians |
CN201110404574.1A Active CN102542256B (en) | 2010-12-07 | 2011-12-07 | Advanced Warning System with Forward Collision Warning for Traps and Pedestrians |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710344179.6A Active CN107423675B (en) | 2010-12-07 | 2011-12-07 | Advanced warning system for forward collision warning of traps and pedestrians |
Country Status (1)
Country | Link |
---|---|
CN (2) | CN107423675B (en) |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5877897A (en) | 1993-02-26 | 1999-03-02 | Donnelly Corporation | Automatic rearview mirror, vehicle lighting control and vehicle interior monitoring system using a photosensor array |
US6822563B2 (en) | 1997-09-22 | 2004-11-23 | Donnelly Corporation | Vehicle imaging system with accessory control |
US7655894B2 (en) | 1996-03-25 | 2010-02-02 | Donnelly Corporation | Vehicular image sensing system |
US7038577B2 (en) | 2002-05-03 | 2006-05-02 | Donnelly Corporation | Object detection system for vehicle |
US7526103B2 (en) | 2004-04-15 | 2009-04-28 | Donnelly Corporation | Imaging system for vehicle |
WO2008024639A2 (en) | 2006-08-11 | 2008-02-28 | Donnelly Corporation | Automatic headlamp control system |
DE102013213812B4 (en) * | 2013-07-15 | 2024-07-18 | Volkswagen Aktiengesellschaft | Device and method for displaying a traffic situation in a vehicle |
WO2015114654A1 (en) * | 2014-01-17 | 2015-08-06 | Kpit Technologies Ltd. | Vehicle detection system and method thereof |
WO2018049643A1 (en) * | 2016-09-18 | 2018-03-22 | SZ DJI Technology Co., Ltd. | Method and system for operating a movable object to avoid obstacles |
EP4230964A1 (en) * | 2017-01-12 | 2023-08-23 | Mobileye Vision Technologies Ltd. | Navigation based on vehicle activity |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040175019A1 (en) * | 2003-03-03 | 2004-09-09 | Lockheed Martin Corporation | Correlation based in frame video tracker |
US7113867B1 (en) * | 2000-11-26 | 2006-09-26 | Mobileye Technologies Limited | System and method for detecting obstacles to vehicle motion and determining time to contact therewith using sequences of images |
US20100191391A1 (en) * | 2009-01-26 | 2010-07-29 | Gm Global Technology Operations, Inc. | multiobject fusion module for collision preparation system |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3515926B2 (en) * | 1999-06-23 | 2004-04-05 | 本田技研工業株式会社 | Vehicle periphery monitoring device |
US7089114B1 (en) * | 2003-07-03 | 2006-08-08 | Baojia Huang | Vehicle collision avoidance system and method |
JP2005226670A (en) * | 2004-02-10 | 2005-08-25 | Toyota Motor Corp | Vehicle deceleration control device |
EP3454315A1 (en) * | 2004-04-08 | 2019-03-13 | Mobileye Vision Technologies Ltd. | Collision warning system |
JP4304517B2 (en) * | 2005-11-09 | 2009-07-29 | トヨタ自動車株式会社 | Object detection device |
EP1837803A3 (en) * | 2006-03-24 | 2008-05-14 | MobilEye Technologies, Ltd. | Headlight, taillight and streetlight detection |
CN101261681B (en) * | 2008-03-31 | 2011-07-20 | 北京中星微电子有限公司 | Road image extraction method and device in intelligent video monitoring |
US8050459B2 (en) * | 2008-07-25 | 2011-11-01 | GM Global Technology Operations LLC | System and method for detecting pedestrians |
2011
- 2011-12-07 CN CN201710344179.6A patent/CN107423675B/en active Active
- 2011-12-07 CN CN201110404574.1A patent/CN102542256B/en active Active
Also Published As
Publication number | Publication date |
---|---|
CN102542256A (en) | 2012-07-04 |
CN107423675A (en) | 2017-12-01 |
CN107423675B (en) | 2021-07-16 |
Similar Documents
Publication | Title
---|---
US10940818B2 (en) | Pedestrian collision warning system
CN102542256B (en) | Advanced Warning System with Forward Collision Warning for Traps and Pedestrians
US9251708B2 (en) | Forward collision warning trap and pedestrian advanced warning system
US12249153B2 (en) | Method and device for recognizing and evaluating roadway conditions and weather-related environmental influences
JP3822515B2 (en) | Obstacle detection device and method
CN107845104B (en) | Method for detecting overtaking vehicle, related processing system, overtaking vehicle detection system and vehicle
CN102779430B (en) | Vision-based night-time rear collision warning system, controller and method of operating the same
US8305431B2 (en) | Device intended to support the driving of a motor vehicle comprising a system capable of capturing stereoscopic images
WO2017138286A1 (en) | Surrounding environment recognition device for moving body
CN113998034A (en) | Rider assistance system and method
JP5145585B2 (en) | Target detection device
JP2007249841A (en) | Image recognition device
JP5482672B2 (en) | Moving object detection device
JP2017215743A (en) | Image processing device, and external world recognition device
Kim et al. | An intelligent and integrated driver assistance system for increased safety and convenience based on all-around sensing
JP3916930B2 (en) | Approach warning device
Wu et al. | A new vehicle detection with distance estimation for lane change warning systems
Ma et al. | A real-time rear view camera based obstacle detection
Wu et al. | A vision-based collision warning system by surrounding vehicles detection
JP3961269B2 (en) | Obstacle alarm device
JP6378594B2 (en) | Outside environment recognition device
Jazayeri et al. | Motion based vehicle identification in car video
Yi et al. | A fast and accurate forward vehicle start alarm by tracking moving edges obtained from dashboard camera
JP4381394B2 (en) | Obstacle detection device and method
Sato et al. | Mobile Alert System Using Lane Detection Based on Vehicle Clustering
Legal Events
Code | Title | Description
---|---|---
C06 | Publication |
PB01 | Publication |
C10 | Entry into substantive examination |
SE01 | Entry into force of request for substantive examination |
ASS | Succession or assignment of patent right | Owner name: MOBILEYE VISION TECHNOLOGIES LTD. Free format text: FORMER OWNER: MOBILEYE TECHNOLOGIES LTD. Effective date: 20141120
C41 | Transfer of patent application or patent right or utility model |
TA01 | Transfer of patent application right | Effective date of registration: 20141120. Address after: Jerusalem, Israel. Applicant after: Mobileye Vision Technologies Ltd. Address before: Nicosia, Cyprus. Applicant before: Mobileye Technologies Ltd.
GR01 | Patent grant |
GR01 | Patent grant |