
CN106407951A - Monocular vision-based nighttime front vehicle detection method - Google Patents

Monocular vision-based nighttime front vehicle detection method

Info

Publication number
CN106407951A
CN106407951A (application CN201610873523.6A); granted as CN106407951B
Authority
CN
China
Prior art keywords: formula, value, threshold, image, sigma
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610873523.6A
Other languages
Chinese (zh)
Other versions
CN106407951B (en)
Inventor
蒋卓韵
戴芳
赵凤群
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian University of Technology
Original Assignee
Xian University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian University of Technology filed Critical Xian University of Technology
Priority to CN201610873523.6A
Publication of CN106407951A
Application granted
Publication of CN106407951B
Status: Expired - Fee Related
Anticipated expiration


Classifications

    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 - Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 - Target detection
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 - Indexing scheme relating to image or video recognition or understanding
    • G06V2201/08 - Detecting or categorising vehicles

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses a monocular vision-based nighttime front vehicle detection method. The method comprises the following steps: 1, images are acquired, and lamp detection is performed on the nighttime front vehicle based on the CenSurE operator to obtain strong corner points; 2, the nighttime front vehicle is segmented based on lamp color information to obtain segmented regions; 3, the region containing the most strong corner points is selected from the segmented regions obtained in step 2 to obtain a detection region; and 4, lamp pairing is performed within the detection region to determine the location of the target vehicle. By computing the CenSurE operator at multiple scales, detecting extrema in its results, and then pairing the lamps, the method solves the problems of large pairing error and inaccurate target detection in existing nighttime front vehicle detection methods, and has high application value.

Description

A monocular vision-based nighttime front vehicle detection method

Technical Field

The invention belongs to the technical field of nighttime target detection methods, and in particular relates to a monocular vision-based nighttime front vehicle detection method.

Background Art

The explosive growth in the number of automobiles has aggravated road traffic accidents, in particular the frequent accidents at night, causing enormous losses of life and property. Vision-based target detection makes target detection in nighttime traffic scenes possible. Because feature-point matching between adjacent frames of a multi-camera (including binocular) vision system is computationally expensive, real-time performance degrades when many vehicles are present, and monocular vision-based vehicle detection has therefore emerged. However, most existing monocular vision-based vehicle detection techniques are suited to daytime traffic environments, and because the driving environment deteriorates at night, nighttime traffic accidents occur more frequently than daytime ones. Studying a monocular vision-based nighttime front vehicle detection method is therefore of great significance for improving the driving environment and reducing traffic accidents.

Illumination at night is poor and the shape features of a vehicle are difficult to detect; the most salient nighttime feature of a vehicle is its high-brightness lamps, so most nighttime vehicle detection methods detect vehicles by detecting their lamps. Junbin Guo et al. analyzed the distribution of taillight brightness in 300 images of different environments, used the maximum between-class variance method (Otsu) to determine the optimal segmentation threshold, removed non-taillight targets according to a red threshold in the HSV color space, and finally paired taillights using prior knowledge such as position and area.

Wei Zhang et al. derived a reflection intensity map and a reflection suppression map of the headlights from a light-scattering attenuation model and used them as input vectors for a Markov random field; however, when the vehicle is very close to the camera, the reflection coefficient is difficult to compute.

Jiann-Der Lee et al. used the LoG operator and a light-scattering model to extract lamp regions and tracked vehicles with optical flow; the LoG operator solved the problem of detecting vehicles at close range.

O'Malley et al. proposed a red threshold segmentation method based on the HSV color space, verified lamp symmetry using the cross-correlation coefficient, and tracked with a Kalman filter; however, pairing lamps by the cross-correlation coefficient alone yields a large error.

Naoya Kosaka et al. used a two-level center-surround filter to approximate the LoG operator, screened out feature points with high response values, classified them with a support vector machine (SVM), and excluded noise points via lane detection and motion trajectories; the lamp detection rate is high, but pairing relies only on the principle of consistent filter response values, resulting in a large pairing error.

Hulin Kuang et al. used EdgeBoxes to find high-scoring regions of interest (ROIs) in a multi-scale Retinex-enhanced image; after extracting five features per ROI, an SVM was trained to weight each feature and revise the final score, and high-scoring ROIs were taken as vehicles. This method requires no lamp pairing, which removes some error, but the enhancement algorithm is not very effective in dim traffic scenes, and the accuracy of the resulting ROIs drops accordingly.

Summary of the Invention

The purpose of the present invention is to provide a monocular vision-based nighttime front vehicle detection method, which solves the problems of large pairing error and inaccurate target detection in existing nighttime vehicle detection methods.

The technical scheme adopted by the monocular vision-based nighttime front vehicle detection method of the present invention comprises the following steps:

Step 1: acquire images and perform lamp detection on the nighttime front vehicle based on the CenSurE operator to obtain strong corner points;

Step 2: segment the nighttime front vehicle based on lamp color information to obtain segmented regions;

Step 3: among the segmented regions of step 2, select the region containing the most strong corner points of step 1 to obtain a detection region;

Step 4: perform lamp pairing within the detection region of step 3 to determine the position of the target vehicle.

The present invention is further characterized in that:

Step 1 specifically comprises:

Step 1.1: acquire images by photographing and compute the corresponding integral image. The value at any point I(x, y) of the integral image is the sum of all pixel values in the region above and to the left of the corresponding position in the original image, as in formula (1):

I(x, y) = Σ_{x'≤x, y'≤y} i(x', y'). (1)

Step 1.2: construct CenSurE filters and sample the integral image on a logarithmic scale.

The scale space represented by the values I(x, y) is divided into three groups. In the first group the CenSurE inner-kernel size increases by 2 per layer, in the second group by 4 per layer, and in the third group by 8 per layer; five scale layers are selected in each group, and the CenSurE outer-kernel sizes are computed in the same manner.

That is, the inner kernel of the CenSurE filter is of size (2n+1)×(2n+1) and the outer kernel of size (4n+1)×(4n+1). To make the DC response of the filter zero and normalize across the scale space, the inner-kernel weight I_n should satisfy formula (2):

I_{n+1} = I_n (2n+1)² / (2(n+1)+1)², n = 1, 2, 3, …. (2)

The outer-kernel weight O_n should satisfy formula (3):

O_n = I_n (2n+1)² / (4n+1)², n = 1, 2, 3, …. (3)

Let out_value be the sum of the pixel values covered by the outer kernel and in_value the sum covered by the inner kernel; the filter response L at a pixel then satisfies formula (4):

L = O_n · out_value − I_n · in_value. (4)

Step 1.3: perform extremum detection on the scale space of step 1.2.

For the image processed in step 1.2, compute the filter response value of every pixel at every scale according to formula (4), then perform non-maximum suppression over the scale space and record the extremum points.

Step 1.4: filter unstable feature points out of the extremum points of step 1.3.

Let L_x and L_y be the partial derivatives of the filter response L in the x and y directions. Gaussian filtering is applied to L_x·L_x, L_y·L_y, and L_x·L_y to obtain the eigenvalues of the Harris matrix; if the smaller eigenvalue is greater than the adaptive threshold t, a strong corner point is obtained.

Step 2 specifically comprises: convert the image acquired in step 1 from RGB space to HSV color space; all regions segmented by the red threshold, together with those regions segmented by the white threshold that lie in the left third of the image, jointly form the result of the HSV color-space segmentation.

H denotes hue: the red threshold is H ≥ 340° or H ≤ 30°, and the white threshold is 0° ≤ H ≤ 360°. S denotes saturation: the red threshold is S ≤ 30 and the white threshold is S ≤ 20. V denotes value (lightness): both the red and white thresholds take 80 ≤ V ≤ 100.

The lamp pairing of step 4 specifically comprises: let L_i and L_j be two candidate lamps with areas A_i and A_j and lamp-center image coordinates (x_i, y_i) and (x_j, y_j). The pairing constraints are as follows:

a. The two candidate lamps are at the same height, i.e. their vertical coordinates satisfy formula (5):

|y_i − y_j| < Δh. (5)

b. The horizontal distance between the two candidate lamps lies within a certain range, satisfying formula (6):

Δw_1 < |x_i − x_j| < Δw_2. (6)

c. The two candidate lamps have similar areas, satisfying formula (7):

|A_i − A_j| < ΔA. (7)

In formula (5) Δh is the height-difference threshold, in formula (6) Δw_1 and Δw_2 are the horizontal-distance thresholds, and in formula (7) ΔA is the area-difference threshold. When the pairing constraints are satisfied and pairing is complete, the bounding rectangle of the lamp pair should further have an aspect ratio within a certain range, satisfying formula (8),

where x_{i,left} and x_{j,right} are the leftmost and rightmost coordinates of the region, y_{i,top} and y_{j,bottom} are its topmost and bottommost coordinates, and Δratio is the aspect-ratio threshold of the box.

The non-maximum suppression of step 1.3 specifically comprises: each point in the scale space is compared with its 26 neighbors, namely the 8 neighbors of the detection point at the same scale and the 9×2 corresponding points at the adjacent scales above and below; the extremum points are then recorded.

The adaptive threshold of step 1.4 is obtained with the multi-level Otsu method, which specifically comprises:

In the gray-level histogram, let f_i be the number of pixels of gray level i and N the total number of pixels; N then satisfies formula (9):

N = f_0 + f_1 + … + f_{l−1}, (9)

where l is the number of histogram gray levels, l = 1, 2, 3, 4, ….

The distribution probability P_i of the f_i pixels of gray level i is then given by formula (10), P_i = f_i / N.

Using k thresholds T = {t_1, …, t_n, …, t_k}, the image is divided into k+1 classes; the between-class variance V_BC(T) is given by formula (11),

where μ_n in formula (11) is the gray-level mean of class n, μ_T is the overall gray-level mean, and the values of w_n and μ_n are given by formula (12).

The within-class variance V_WC(T) is given by formula (13),

where σ_n in formula (13) is the gray-level variance of class n, and the values of w_n and σ_n are given by formula (14).

Combining formulas (9) to (14) yields the total variance v_T of the image and the total mean μ_T of the image, as in formula (15).

The segmentation factor SF of the image is defined by formula (16).

When SF > 0.9, the classification stops, and t_k at that point is taken as the adaptive threshold.

The beneficial effects of the present invention are as follows: by computing the CenSurE operator at multiple scales, detecting extrema in its results, and then performing lamp pairing, the monocular vision-based nighttime front vehicle detection method of the present invention not only solves the problems of large pairing error and inaccurate target detection in existing nighttime vehicle lamp detection methods, but also has good application value.

Brief Description of the Drawings

Fig. 1 is a structural diagram of the inner and outer kernels of the CenSurE filter of the present invention;

Fig. 2 is a schematic diagram of the computation of a regional pixel sum.

Detailed Description

The present invention is described in detail below with reference to the accompanying drawings and specific embodiments.

The monocular vision-based nighttime front vehicle detection method of the present invention comprises the following steps:

Step 1: acquire images and perform lamp detection on the nighttime front vehicle based on the CenSurE operator to obtain strong corner points;

Step 2: segment the nighttime front vehicle based on lamp color information to obtain segmented regions;

Step 3: among the segmented regions of step 2, select the region containing the most strong corner points of step 1 to obtain a detection region;

Step 4: perform lamp pairing within the detection region of step 3 to determine the position of the target vehicle.

Step 1 specifically comprises:

Step 1.1: acquire images by photographing and compute the corresponding integral image. The value at any point I(x, y) of the integral image is the sum of all pixel values in the region above and to the left of the corresponding position in the original image, as in formula (1):

I(x, y) = Σ_{x'≤x, y'≤y} i(x', y'). (1)
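Step 1.1 can be sketched in a few lines of NumPy. This is an illustrative implementation, not code from the patent: the integral image of formula (1) is two cumulative sums, and any rectangular pixel sum (cf. Fig. 2) is then recovered from four lookups.

```python
import numpy as np

def integral_image(img):
    """Integral image of formula (1): I(x, y) is the sum of all pixels at or
    above-left of (x, y), computed with two cumulative sums."""
    return np.cumsum(np.cumsum(np.asarray(img, dtype=np.int64), axis=0), axis=1)

def box_sum(I, r0, c0, r1, c1):
    """Sum of pixels in the inclusive rectangle [r0..r1] x [c0..c1],
    recovered from at most four integral-image lookups (cf. Fig. 2)."""
    total = I[r1, c1]
    if r0 > 0:
        total -= I[r0 - 1, c1]
    if c0 > 0:
        total -= I[r1, c0 - 1]
    if r0 > 0 and c0 > 0:
        total += I[r0 - 1, c0 - 1]
    return total
```

With the integral image precomputed once, every kernel sum in step 1.2 costs constant time regardless of kernel size, which is what makes the multi-scale filtering real-time.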

Step 1.2: construct CenSurE filters and sample the integral image on a logarithmic scale, in order to improve the stability of local extremum points. As shown in Fig. 1, the CenSurE filter uses square kernels, which are the most efficient to compute and satisfy the real-time requirement. Once the integral image has been constructed, the pixel sum of any rectangular region of the image can be obtained with a few simple operations, as shown in Fig. 2.

The scale space represented by the values I(x, y) is divided into three groups, and five scale layers are selected in each group. In the first group the CenSurE inner-kernel size increases by 2 per layer, giving kernels of size 3×3, 5×5, 7×7, 9×9 and 11×11; in the second group it increases by 4 per layer, giving 7×7, 11×11, 15×15, 19×19 and 23×23; in the third group it increases by 8 per layer, giving 15×15, 23×23, 31×31, 39×39 and 47×47. The CenSurE outer-kernel sizes are computed in the same manner.
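The kernel-size schedule above can be generated mechanically; the sketch below (illustrative, not from the patent) also derives the outer-kernel side from the inner one, since an inner side of 2n+1 implies an outer side of 4n+1 = 2·(2n+1) − 1.

```python
def censure_inner_sizes():
    """Inner-kernel side lengths for the three groups described above:
    group 1 steps by 2, group 2 by 4, group 3 by 8; five layers each."""
    groups = []
    for start, step in [(3, 2), (7, 4), (15, 8)]:
        groups.append([start + step * i for i in range(5)])
    return groups

def outer_size(inner):
    """Outer-kernel side for a given inner side: (2n+1) -> (4n+1)."""
    return 2 * inner - 1
```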

That is, the inner kernel of the CenSurE filter is of size (2n+1)×(2n+1) and the outer kernel of size (4n+1)×(4n+1). To make the DC response of the filter zero and normalize across the scale space, the inner-kernel weight I_n should satisfy formula (2):

I_{n+1} = I_n (2n+1)² / (2(n+1)+1)², n = 1, 2, 3, …. (2)

The outer-kernel weight O_n should satisfy formula (3):

O_n = I_n (2n+1)² / (4n+1)², n = 1, 2, 3, …. (3)

Let out_value be the sum of the pixel values covered by the outer kernel and in_value the sum covered by the inner kernel; the filter response L at a pixel then satisfies formula (4):

L = O_n · out_value − I_n · in_value. (4)
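Formulas (2)-(4) can be checked with a short sketch. This is illustrative code, not the patent's implementation; it assumes out_value sums the full (4n+1)×(4n+1) outer square (inner region included) and that the smallest-scale weight is 1, which together make the DC response on a constant image exactly zero with the weights of formulas (2)-(3).

```python
import numpy as np

def censure_weights(n, i1=1.0):
    """Inner/outer weights per formulas (2)-(3). i1 is an assumed weight for
    the smallest scale (the excerpt does not fix it)."""
    i_n = i1
    for m in range(1, n):
        i_n = i_n * (2 * m + 1) ** 2 / (2 * (m + 1) + 1) ** 2   # formula (2)
    o_n = i_n * (2 * n + 1) ** 2 / (4 * n + 1) ** 2             # formula (3)
    return i_n, o_n

def censure_response(I, r, c, n, i_n, o_n):
    """Formula (4): L = O_n * out_value - I_n * in_value, where out_value sums
    the (4n+1)x(4n+1) outer square and in_value the (2n+1)x(2n+1) inner square
    centred at (r, c). I is an integral image (double cumulative sum)."""
    def box(r0, c0, r1, c1):
        t = I[r1, c1]
        if r0 > 0: t -= I[r0 - 1, c1]
        if c0 > 0: t -= I[r1, c0 - 1]
        if r0 > 0 and c0 > 0: t += I[r0 - 1, c0 - 1]
        return t
    out_value = box(r - 2 * n, c - 2 * n, r + 2 * n, c + 2 * n)
    in_value = box(r - n, c - n, r + n, c + n)
    return o_n * out_value - i_n * in_value
```

On a constant image the response is zero at every valid pixel, which is exactly the DC-zero property the weight normalization is designed to guarantee.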

Step 1.3: perform extremum detection on the scale space of step 1.2.

For the image processed in step 1.2, compute the filter response value of every pixel at every scale according to formula (4), then perform non-maximum suppression over the scale space and record the extremum points.

Step 1.4: filter unstable feature points out of the extremum points of step 1.3.

To obtain stable feature points, filtering out weak responses by a threshold alone is not sufficient, because the filter responds strongly to image edges, and feature points that fall on an edge are very unstable. Since feature points on an edge or line have a large principal curvature along the edge and a small principal curvature perpendicular to it, the scale-adaptive Harris method is used to compute the principal-curvature ratio H and remove unstable response points, as shown in the following formula,

where L_x and L_y are the partial derivatives of the filter response L in the x and y directions. Gaussian filtering is applied to obtain the eigenvalues of the Harris matrix; if the smaller eigenvalue is greater than the adaptive threshold t, a strong corner point is obtained.

Since headlights appear bright white and taillights bright red, the red and white regions of the image must be segmented. Step 2 therefore specifically comprises: convert the image acquired in step 1 from RGB space to HSV color space and segment the red and white regions by thresholding. Experience shows that oncoming vehicles generally appear on the left side of the image, where headlights are detected, while vehicles traveling in the same direction are detected by their taillights. All regions segmented by the red threshold, together with those regions segmented by the white threshold that lie in the left third of the image, jointly form the result of the HSV color-space segmentation.

H denotes hue: the red threshold is H ≥ 340° or H ≤ 30°, and the white threshold is 0° ≤ H ≤ 360°. S denotes saturation: the red threshold is S ≤ 30 and the white threshold is S ≤ 20. V denotes value (lightness): both the red and white thresholds take 80 ≤ V ≤ 100.
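The two thresholds can be written as simple per-pixel predicates. These helpers are illustrative (they assume H in degrees and S, V on a 0-100 scale, as the text uses); a real pipeline would apply them to every pixel after RGB-to-HSV conversion.

```python
def passes_red(h, s, v):
    """Red-lamp (taillight) threshold from the text:
    H >= 340 or H <= 30, S <= 30, 80 <= V <= 100."""
    return (h >= 340 or h <= 30) and s <= 30 and 80 <= v <= 100

def passes_white(h, s, v):
    """White-lamp (headlight) threshold from the text:
    any H in [0, 360], S <= 20, 80 <= V <= 100."""
    return 0 <= h <= 360 and s <= 20 and 80 <= v <= 100
```

Note the white predicate is only applied in the left third of the image, per the oncoming-traffic assumption stated above.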

The lamp pairing of step 4 specifically comprises: let L_i and L_j be two candidate lamps with areas A_i and A_j and lamp-center image coordinates (x_i, y_i) and (x_j, y_j). The pairing constraints are as follows:

a. The two candidate lamps are at the same height, i.e. their vertical coordinates satisfy formula (5):

|y_i − y_j| < Δh. (5)

b. The horizontal distance between the two candidate lamps lies within a certain range, satisfying formula (6):

Δw_1 < |x_i − x_j| < Δw_2. (6)

c. The two candidate lamps have similar areas, satisfying formula (7):

|A_i − A_j| < ΔA. (7)

In formula (5) Δh is the height-difference threshold, in formula (6) Δw_1 and Δw_2 are the horizontal-distance thresholds, and in formula (7) ΔA is the area-difference threshold. When the pairing constraints are satisfied and pairing is complete, the bounding rectangle of the lamp pair should further have an aspect ratio within a certain range, satisfying formula (8),

where x_{i,left} and x_{j,right} are the leftmost and rightmost coordinates of the region, y_{i,top} and y_{j,bottom} are its topmost and bottommost coordinates, and Δratio is the aspect-ratio threshold of the box.

Based on everyday driving experience, in formulas (5) to (8) Δh is taken as 10 pixels, Δw_1 as 20 pixels, Δw_2 as 50 pixels, ΔA as 30 pixels, and Δratio as 10 for the pairing computation.
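Constraints (5)-(7) plus the aspect-ratio check can be bundled into one predicate. This sketch is illustrative: the tuple layout of a lamp is an assumption for the example, and since formula (8) itself is not reproduced in the excerpt, the final check assumes it bounds the pair's box width by Δratio times its height.

```python
def lamps_paired(lamp_i, lamp_j, dh=10, dw1=20, dw2=50, dA=30, dratio=10):
    """Pairing constraints (5)-(7) with the example thresholds from the text.
    Each lamp is (x, y, area, left, right, top, bottom) in pixels
    (an assumed field layout for illustration)."""
    xi, yi, Ai, li, ri, ti, bi = lamp_i
    xj, yj, Aj, lj, rj, tj, bj = lamp_j
    if abs(yi - yj) >= dh:                # (5) same height
        return False
    if not (dw1 < abs(xi - xj) < dw2):    # (6) plausible horizontal gap
        return False
    if abs(Ai - Aj) >= dA:                # (7) similar area
        return False
    left, right = min(li, lj), max(ri, rj)
    top, bottom = min(ti, tj), max(bi, bj)
    # Assumed form of (8): bounding-box width below dratio times its height
    return (right - left) < dratio * (bottom - top)
```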

The non-maximum suppression of step 1.3 specifically comprises: each point in the scale space is compared with its 26 neighbors, namely the 8 neighbors of the detection point at the same scale and the 9×2 corresponding points at the adjacent scales above and below; the extremum points are then recorded.
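The 26-neighbor comparison above can be sketched directly over a 3-D response volume (scale, y, x). This is an illustrative, unoptimized version: a point survives only if it is the unique maximum or unique minimum of its 3×3×3 neighborhood.

```python
import numpy as np

def nms_26(responses):
    """responses: 3-D array (scale, y, x). Keep a point if it is a strict
    extremum over its 26 neighbours (8 at the same scale, 9 at each adjacent
    scale), as described above. Returns a boolean mask."""
    s, h, w = responses.shape
    keep = np.zeros(responses.shape, dtype=bool)
    for k in range(1, s - 1):
        for i in range(1, h - 1):
            for j in range(1, w - 1):
                cube = responses[k-1:k+2, i-1:i+2, j-1:j+2]
                v = responses[k, i, j]
                if (v == cube.max() or v == cube.min()) and (cube == v).sum() == 1:
                    keep[k, i, j] = True
    return keep
```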

The adaptive threshold of step 1.4 is obtained with the multi-level Otsu method, which specifically comprises:

In the gray-level histogram, let f_i be the number of pixels of gray level i and N the total number of pixels; N then satisfies formula (9):

N = f_0 + f_1 + … + f_{l−1}, (9)

where l is the number of histogram gray levels, l = 1, 2, 3, 4, ….

The distribution probability P_i of the f_i pixels of gray level i is then given by formula (10), P_i = f_i / N.

Using k thresholds T = {t_1, …, t_n, …, t_k}, the image is divided into k+1 classes; the between-class variance V_BC(T) is given by formula (11),

where μ_n in formula (11) is the gray-level mean of class n, μ_T is the overall gray-level mean, and the values of w_n and μ_n are given by formula (12).

The within-class variance V_WC(T) is given by formula (13),

where σ_n in formula (13) is the gray-level variance of class n, and the values of w_n and σ_n are given by formula (14).

Combining formulas (9) to (14) yields the total variance v_T of the image and the total mean μ_T of the image, as in formula (15).

The segmentation factor SF of the image is defined by formula (16).

When SF > 0.9, the classification stops, and t_k at that point is taken as the adaptive threshold.
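The k = 1 case of the scheme above is classical Otsu thresholding, sketched below for illustration (the patent iterates, adding thresholds until SF exceeds 0.9; this sketch only shows how one threshold is chosen by maximizing the between-class variance of formula (11)).

```python
import numpy as np

def otsu_threshold(hist):
    """Single-threshold Otsu: pick t maximising the between-class variance,
    the k = 1 case of the multi-level scheme described above."""
    hist = np.asarray(hist, dtype=float)
    p = hist / hist.sum()                  # formula (10): P_i = f_i / N
    levels = np.arange(len(hist))
    mu_T = (levels * p).sum()              # overall grey-level mean
    best_t, best_v = 0, -1.0
    for t in range(1, len(hist)):
        w0, w1 = p[:t].sum(), p[t:].sum()  # class weights
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (levels[:t] * p[:t]).sum() / w0
        mu1 = (levels[t:] * p[t:]).sum() / w1
        v_bc = w0 * (mu0 - mu_T) ** 2 + w1 * (mu1 - mu_T) ** 2
        if v_bc > best_v:
            best_t, best_v = t, v_bc
    return best_t
```

On a clearly bimodal histogram the returned threshold falls in the valley between the two modes.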

The beneficial effects of the present invention are as follows: by computing the CenSurE operator at multiple scales, detecting extrema in its results, and then performing lamp pairing, the present invention not only solves the problems of large pairing error and inaccurate target detection in existing nighttime vehicle lamp detection methods, but also has good application value.

Claims (6)

1. A night-time forward-vehicle detection method based on monocular vision, characterized in that it comprises the following steps:

Step 1: collect images and perform lamp detection on the vehicle ahead at night with the CenSurE operator, obtaining strong corner points.

Step 2: segment the vehicle ahead at night on the basis of the lamp color information, obtaining segmented regions.

Step 3: among the segmented regions of Step 2, select the region that contains the most strong corner points of Step 1, obtaining the detection region.

Step 4: pair the lamps within the detection region of Step 3 and determine the position of the target vehicle.

2. The night-time forward-vehicle detection method based on monocular vision according to claim 1, characterized in that Step 1 specifically comprises:

Step 1.1: capture an image and compute the corresponding integral image. The value of any point I(x, y) of the integral image is the sum of all values in the upper-left region of the corresponding position in the original image, as in formula (1):

I(x, y) = Σ_{x′≤x, y′≤y} i(x′, y′).  (1)

Step 1.2: construct CenSurE filters and sample the integral image at logarithmic scale.

The scale space represented by the values I(x, y) is divided into three groups: from level to level, the inner-kernel size of the CenSurE filter increases by 2 in the first group, by 4 in the second group, and by 8 in the third group; five scale levels are selected in each group, and the outer-kernel sizes are computed in the same manner.

That is, the inner kernel of the CenSurE filter has size (2n+1)×(2n+1) and the outer kernel has size (4n+1)×(4n+1). For the filter to have zero DC response, normalizing over the scale space, the inner-kernel weight I_n must satisfy formula (2):

I_{n+1} = I_n (2n+1)² / (2(n+1)+1)²,  n = 1, 2, 3, …,  (2)

and the outer-kernel weight O_n must satisfy formula (3):

O_n = I_n (2n+1)² / (4n+1)²,  n = 1, 2, 3, ….  (3)

With out_value the sum of the pixel values covered by the outer kernel and in_value the sum covered by the inner kernel, the filter response value L of a pixel satisfies formula (4):

L = O_n · out_value − I_n · in_value.  (4)

Step 1.3: perform extremum detection over the scale space of Step 1.2. For the image processed in Step 1.2, compute the filter response of every pixel at every scale according to formula (4), then apply non-maximum suppression over the scale space and record the extremum points.

Step 1.4: filter unstable feature points out of the extremum points of Step 1.3. With L_x and L_y the partial derivatives of the filter response L in the x and y directions, apply Gaussian filtering to L_x², L_y², and L_x·L_y to obtain the eigenvalues of the Harris matrix; if the smaller eigenvalue is greater than the adaptive threshold t, a strong corner point is obtained.

3. The night-time forward-vehicle detection method based on monocular vision according to claim 1, characterized in that Step 2 specifically comprises: converting the image collected in Step 1 from RGB space to the HSV color space, and taking all regions segmented with the red threshold, together with the regions lying in the left third among all regions segmented with the white threshold, jointly as the result of the HSV color-space segmentation.
Here H denotes the hue: the red threshold requires H ≥ 340° or H ≤ 30°, while the white threshold allows 0° ≤ H ≤ 360°. S denotes the saturation: S ≤ 30 for the red threshold and S ≤ 20 for the white threshold. V denotes the value (lightness of the color): both the red and the white threshold take 80 ≤ V ≤ 100.

4. The night-time forward-vehicle detection method based on monocular vision according to claim 1, characterized in that the lamp pairing of Step 4 specifically comprises the following. Let L_i and L_j be two candidate lamps with areas A_i and A_j, and let (x_i, y_i) and (x_j, y_j) be the image coordinates of the lamp centers; the pairing constraints are:

a. the two candidate lamps are of consistent height, i.e. their vertical coordinates satisfy formula (5):

|y_i − y_j| < Δh;  (5)

b. the horizontal distance between the two candidate lamps lies within a certain range, satisfying formula (6):

Δw_1 < |x_i − x_j| < Δw_2;  (6)

c.
the two candidate lamps are of consistent area, satisfying formula (7):

|A_i − A_j| < ΔA.  (7)

In formula (5), Δh is the height-difference threshold; in formula (6), Δw_1 and Δw_2 are the horizontal-difference thresholds; in formula (7), ΔA is the area-difference threshold. Once the pairing constraints are satisfied and the pairing is complete, the bounding rectangle of the lamp pair must have an aspect ratio within a certain range, satisfying formula (8):

(x_{j,right} − x_{i,left}) / (max(y_{i,bottom}, y_{j,bottom}) − min(y_{i,top}, y_{j,top})) ≤ Δratio,  (8)

where x_{i,left} and x_{j,right} are the leftmost and rightmost coordinates of the region, y_{i,top} and y_{j,bottom} are its topmost and bottommost coordinates, and Δratio is the aspect-ratio threshold of the bounding box.

5. The night-time forward-vehicle detection method based on monocular vision according to claim 2, characterized in that the non-maximum suppression of Step 1.3 specifically comprises: comparing each point of the scale space with its 26 neighbors, namely the 8 neighbors of the detection point at the same scale together with the 9×2 corresponding points at the adjacent scales above and below, and then recording the extremum points.

6.
The night-time forward-vehicle detection method based on monocular vision according to claim 2, characterized in that the adaptive threshold of Step 1.4 is obtained with the multi-level Otsu method, specifically as follows.

In the gray-level histogram, let f_i be the number of pixels of gray level i and N the total number of pixels; then N satisfies formula (9):

N = f_0 + f_1 + … + f_{l−1},  (9)

where l is the number of histogram bins, l = 1, 2, 3, 4, ….

The distribution probability P_i of the f_i pixels of gray level i is given by formula (10):

P_i = f_i / N,  Σ_{i=0}^{l−1} P_i = 1.  (10)

Using k thresholds T = {t_1, …, t_n, …, t_k}, the image is divided into k+1 classes, and the between-class variance V_BC(T) is given by formula (11):

V_BC(T) = Σ_{n=0}^{k} w_n (μ_n − μ_T)²,  (k = 1, 2, 3, …; 1 ≤ n ≤ k),  (11)

where μ_n is the gray-level mean of class n, μ_T is the overall gray-level mean, and w_n and μ_n take the values of formula (12):

w_n = Σ_{i=t_n+1}^{t_{n+1}} P_i,  μ_n = (Σ_{i=t_n+1}^{t_{n+1}} i·P_i) / w_n.  (12)

The within-class variance v_WC(T) is given by formula (13):

v_WC(T) = Σ_{n=0}^{k} w_n σ_n²,  (13)

where σ_n² is the gray-level variance of class n, and w_n and σ_n² take the values of formula (14):

w_n = Σ_{i=t_n+1}^{t_{n+1}} P_i,  σ_n² = (Σ_{i=t_n+1}^{t_{n+1}} P_i (i − μ_n)²) / w_n.  (14)

Combining formulas (9) to (14) gives the total variance v_T and the total mean μ_T of the image, formula (15):

v_T = Σ_{i=0}^{l−1} (i − μ_T)² P_i,  μ_T = Σ_{i=0}^{l−1} i·P_i.  (15)

The segmentation factor SF of the image is defined by formula (16):

SF = v_BC(T) / v_T = 1 − v_WC(T) / v_T.  (16)

When SF > 0.9, the classification stops and the t_k at that moment is taken as the adaptive threshold.
CN201610873523.6A 2016-09-30 2016-09-30 A kind of night front vehicles detection method based on monocular vision Expired - Fee Related CN106407951B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610873523.6A CN106407951B (en) 2016-09-30 2016-09-30 A kind of night front vehicles detection method based on monocular vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610873523.6A CN106407951B (en) 2016-09-30 2016-09-30 A kind of night front vehicles detection method based on monocular vision

Publications (2)

Publication Number Publication Date
CN106407951A true CN106407951A (en) 2017-02-15
CN106407951B CN106407951B (en) 2019-08-16

Family

ID=59228048

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610873523.6A Expired - Fee Related CN106407951B (en) 2016-09-30 2016-09-30 A kind of night front vehicles detection method based on monocular vision

Country Status (1)

Country Link
CN (1) CN106407951B (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101382997A (en) * 2008-06-13 2009-03-11 青岛海信电子产业控股股份有限公司 Vehicle detecting and tracking method and device at night
CN101344988A (en) * 2008-06-16 2009-01-14 上海高德威智能交通系统有限公司 Image acquisition and processing device and method, vehicle monitoring and recording system
CN101556739A (en) * 2009-05-14 2009-10-14 浙江大学 Vehicle detecting algorithm based on intrinsic image decomposition
CN102044151A (en) * 2010-10-14 2011-05-04 吉林大学 Night vehicle video detection method based on illumination visibility identification
CN103020948A (en) * 2011-09-28 2013-04-03 中国航天科工集团第二研究院二○七所 Night image characteristic extraction method in intelligent vehicle-mounted anti-collision pre-warning system
CN103366571A (en) * 2013-07-03 2013-10-23 河南中原高速公路股份有限公司 Intelligent method for detecting traffic accident at night
CN104732235A (en) * 2015-03-19 2015-06-24 杭州电子科技大学 Vehicle detection method for eliminating night road reflective interference
CN105303160A (en) * 2015-09-21 2016-02-03 中电海康集团有限公司 Method for detecting and tracking vehicles at night
CN105718893A (en) * 2016-01-22 2016-06-29 江苏大学 Car tail light pair detecting method for night environment

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
WU Haitao et al.: "Night-time video vehicle detection in complex environments", Application Research of Computers *
QI Qiuhong et al.: "Night-time vehicle detection based on tail-light tracking", Communication Technology *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPWO2018225462A1 (en) * 2017-06-05 2020-02-06 日立オートモティブシステムズ株式会社 Image processing device and light distribution control system
CN109308442A (en) * 2017-07-26 2019-02-05 株式会社斯巴鲁 Exterior environment recognition device
CN109308442B (en) * 2017-07-26 2023-09-01 株式会社斯巴鲁 Vehicle exterior environment recognition device
CN109523555A (en) * 2017-09-18 2019-03-26 百度在线网络技术(北京)有限公司 Front truck brake behavioral value method and apparatus for automatic driving vehicle
US10824885B2 (en) 2017-09-18 2020-11-03 Baidu Online Network Technology (Beijing) Co., Ltd. Method and apparatus for detecting braking behavior of front vehicle of autonomous vehicle
CN110020575A (en) * 2018-01-10 2019-07-16 富士通株式会社 Vehicle detection apparatus and method, electronic equipment
CN110020575B (en) * 2018-01-10 2022-10-21 富士通株式会社 Vehicle detection device and method and electronic equipment
CN109800693A (en) * 2019-01-08 2019-05-24 西安交通大学 A kind of vehicle detection at night method based on Color Channel composite character
CN109800693B (en) * 2019-01-08 2021-05-28 西安交通大学 A night-time vehicle detection method based on color channel mixing features
CN110132302A (en) * 2019-05-20 2019-08-16 中国科学院自动化研究所 Binocular visual odometer positioning method and system fusing IMU information
CN116543566A (en) * 2023-04-21 2023-08-04 重庆长安汽车股份有限公司 Data processing method, device, equipment and system

Also Published As

Publication number Publication date
CN106407951B (en) 2019-08-16

Similar Documents

Publication Publication Date Title
CN106407951B (en) A kind of night front vehicles detection method based on monocular vision
CN107798335B (en) A Vehicle Logo Recognition Method Fusion Sliding Window and Faster R-CNN Convolutional Neural Network
CN110942000B (en) Unmanned vehicle target detection method based on deep learning
CN104318258B (en) Time domain fuzzy and kalman filter-based lane detection method
CN104732235B (en) A kind of vehicle checking method for eliminating the reflective interference of road at night time
CN104011737B (en) Method for detecting mist
KR101395094B1 (en) Method and system for detecting object in input image
CN102509098B (en) A fisheye image vehicle recognition method
CN112613392A (en) Lane line detection method, device and system based on semantic segmentation and storage medium
CN103927548B (en) Novel vehicle collision avoiding brake behavior detection method
CN107590492B (en) A method of vehicle logo location and recognition based on convolutional neural network
CN105809138A (en) Road warning mark detection and recognition method based on block recognition
CN110263635B (en) Marker detection and identification method based on structural forest and PCANet
CN108734189A (en) Vehicle License Plate Recognition System based on atmospherical scattering model and deep learning under thick fog weather
CN107066986A (en) A kind of lane line based on monocular vision and preceding object object detecting method
CN103198300B (en) A Parking Event Detection Method Based on Two-layer Background
CN106682586A (en) Method for real-time lane line detection based on vision under complex lighting conditions
CN103198332A (en) Real-time robust far infrared vehicle-mounted pedestrian detection method
TWI401473B (en) Night time pedestrian detection system and method
KR20070027768A (en) Method for traffic sign detection
CN114299002A (en) An intelligent detection system and method for abnormal behavior of pavement throwing
CN109948552A (en) A method of lane line detection in complex traffic environment
CN114022705A (en) An adaptive object detection method based on scene complexity pre-classification
CN110060221B (en) A Bridge Vehicle Detection Method Based on UAV Aerial Images
CN106781513A (en) The recognition methods of vehicle behavior in a kind of urban transportation scene of feature based fusion

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20190816

CF01 Termination of patent right due to non-payment of annual fee