
CN115409873A - Obstacle detection method, system and electronic equipment based on optical flow feature fusion - Google Patents

Obstacle detection method, system and electronic equipment based on optical flow feature fusion

Info

Publication number
CN115409873A
CN115409873A
Authority
CN
China
Prior art keywords
moving object
range
optical flow
grid
aspect ratio
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210829872.3A
Other languages
Chinese (zh)
Other versions
CN115409873B (en)
Inventor
李森林
吴琼
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan Kotei Informatics Co Ltd
Original Assignee
Wuhan Kotei Informatics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan Kotei Informatics Co Ltd filed Critical Wuhan Kotei Informatics Co Ltd
Priority to CN202210829872.3A
Publication of CN115409873A
Application granted
Publication of CN115409873B
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/269 Analysis of motion using gradient-based methods
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/20 Image enhancement or restoration using local operators
    • G06T5/30 Erosion or dilatation, e.g. thinning
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/40 Analysis of texture
    • G06T7/41 Analysis of texture based on statistical description of texture
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30248 Vehicle exterior or interior
    • G06T2207/30252 Vehicle exterior; Vicinity of vehicle
    • G06T2207/30261 Obstacle

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Databases & Information Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides an obstacle detection method, system, and electronic device based on optical flow feature fusion. The method includes: capturing a continuous sequence of images during parking and preprocessing the captured images; dividing each preprocessed image into multiple grid cells, tracking the position change of each cell across adjacent frames with an optical flow tracking algorithm to identify the cells that contain a moving object, and fusing those cells to obtain the moving-object range; and partitioning the moving-object range into regions according to preset aspect-ratio thresholds and performing a texture computation on each region to obtain the region where the moving object is located. The invention removes the dependence of moving-object detection during automatic parking on ultrasonic radar and AI detection algorithms: images recorded by the vehicle's own camera are sufficient, after computation, to determine whether a moving object is present. The approach is low-cost, has modest hardware requirements, involves little computation, runs fast, and detects accurately.

Figure 202210829872

Description

Obstacle detection method, system and electronic equipment based on optical flow feature fusion

Technical Field

The present invention relates to the field of intelligent driving, and more specifically to an obstacle detection method, system, and electronic device based on optical flow feature fusion.

Background Art

With the development of intelligent driving, automatic parking has become a feature offered on more and more vehicle models. Its purpose is to make parking more convenient, to provide the driver with a safe, comfortable, and fast parking service, and to reduce the difficulty of parking. Vision-only moving-object detection during automatic parking mainly relies on changes in the optical flow features of the image to find the approximate range of a moving object, and then on finer feature discrimination to locate the object's position.

At present, moving-object detection during automatic parking mainly relies on ultrasonic radar and AI detection algorithms. Ultrasonic radar, however, is expensive to purchase and install, while AI detection algorithms first find suspicious targets and then screen and match their positions, which demands a huge amount of computation and capable hardware; in either case the cost is difficult to bring down.

Summary of the Invention

In view of the technical problems in the prior art, the present invention provides an obstacle detection method, system, and electronic device based on optical flow feature fusion, which removes the dependence of moving-object detection during automatic parking on costly ultrasonic radar and AI detection algorithms.

According to a first aspect of the present invention, an obstacle detection method based on optical flow feature fusion is provided, including:

capturing a continuous sequence of images during parking and preprocessing the captured images;

dividing each preprocessed image into multiple grid cells, tracking the position change of each cell across adjacent frames with an optical flow tracking algorithm to identify the cells that contain a moving object, and fusing those cells to obtain the moving-object range;

partitioning the moving-object range into regions according to preset aspect-ratio thresholds and performing a texture computation on each region to obtain the region where the moving object is located.

On the basis of the above technical solution, the present invention may further be improved as follows.

Optionally, preprocessing the captured images includes:

thinning the continuous frames captured during parking according to the current vehicle speed to obtain a set of images spaced n frames apart, where n is a natural number greater than 1.

Optionally, dividing each preprocessed image into multiple grid cells and tracking the position change of each cell across adjacent frames with an optical flow tracking algorithm to identify the cells that contain a moving object includes:

dividing each preprocessed frame into multiple grid cells and obtaining the coordinates of the center point of each cell;

tracking each center point from the previous frame to the next frame with the optical flow tracking algorithm to obtain the corresponding tracked point; back-tracking each tracked point from the next frame to the previous frame with the optical flow tracking algorithm to obtain the coordinates of the corresponding back-tracked point;

judging from the coordinate values whether each center point coincides with its corresponding back-tracked point: if they coincide, the cell containing the center point is judged static; if they do not coincide, the cell containing the center point is judged to have moved.

Optionally, when dividing an image into grid cells, the edge region of the image is first excluded according to a pixel count, and the remaining region is then divided into multiple cells.

Optionally, fusing the cells that contain a moving object to obtain the moving-object range includes:

applying dilation and erosion operations to the cells that contain a moving object, screening the cells that contain a moving object again, and fusing the newly screened cells to obtain the moving-object range.

Optionally, partitioning the moving-object range into regions according to preset aspect-ratio thresholds and performing a texture computation on each region to obtain the region where the moving object is located includes:

computing and judging, against a preset first aspect-ratio threshold, whether the moving-object range matches the aspect ratio of a normal moving object;

if the moving-object range matches the aspect ratio of a normal moving object, taking the moving-object range as the region where the moving object is located;

if the moving-object range does not match the aspect ratio of a normal moving object, dividing the moving-object range into k regions according to a preset second aspect-ratio threshold range, where k is a natural number greater than 1; computing horizontal, vertical, and diagonal textures for each divided region; and, from the texture results of each region, selecting the region that best matches the aspect ratio of a normal moving object as the region where the moving object is located.

Optionally, the second aspect-ratio threshold range includes multiple levels of aspect-ratio threshold ranges, each level corresponding to a number of region divisions.

According to a second aspect of the present invention, an obstacle detection system based on optical flow feature fusion is provided, including:

an acquisition and preprocessing module for capturing a continuous sequence of images during parking and preprocessing the captured images;

an optical flow tracking detection module for dividing each preprocessed image into multiple grid cells, tracking the position change of each cell across adjacent frames with an optical flow tracking algorithm to identify the cells that contain a moving object, and fusing those cells to obtain the moving-object range;

a fine feature recognition module for partitioning the moving-object range into regions according to preset aspect-ratio thresholds and performing a texture computation on each region to obtain the region where the moving object is located.

According to a third aspect of the present invention, an electronic device is provided, including a memory and a processor, the processor implementing the steps of the obstacle detection method based on optical flow feature fusion when executing a computer management program stored in the memory.

According to a fourth aspect of the present invention, a computer-readable storage medium is provided, on which a computer management program is stored; when the computer management program is executed by a processor, the steps of the obstacle detection method based on optical flow feature fusion are implemented.

The obstacle detection method, system, electronic device, and storage medium based on optical flow feature fusion provided by the present invention remove the dependence of moving-object detection during automatic parking on ultrasonic radar and AI detection algorithms: images recorded by the vehicle's own camera are sufficient, after computation, to determine whether a moving object is present. The approach is low-cost, has modest hardware requirements, involves little computation, runs fast, and detects accurately.

Brief Description of the Drawings

Fig. 1 is a flowchart of the obstacle detection method based on optical flow feature fusion provided by the present invention;

Fig. 2 is a flowchart of the optical flow tracking and back-tracking of the provided method;

Fig. 3 is a schematic diagram of the grid division of an image in an embodiment;

Fig. 4 is a schematic diagram of the grid cells containing a moving object in an embodiment;

Fig. 5 is a schematic diagram of the fused moving-object region in an embodiment;

Fig. 6 is a schematic diagram of a region matching the aspect ratio of a normal moving object in an embodiment;

Fig. 7 is a schematic diagram of a moving-object range divided into three regions in an embodiment;

Fig. 8 is a schematic diagram of a moving-object range divided into two regions in an embodiment;

Fig. 9 is a first schematic diagram of selecting the region that best matches the moving target in an embodiment;

Fig. 10 is a second schematic diagram of selecting the region that best matches the moving target in an embodiment;

Fig. 11 is a structural diagram of an obstacle detection system based on optical flow feature fusion provided by the present invention;

Fig. 12 is a schematic diagram of the hardware structure of a possible electronic device provided by the present invention;

Fig. 13 is a schematic diagram of the hardware structure of a possible computer-readable storage medium provided by the present invention.

Detailed Description of the Embodiments

Specific implementations of the present invention are described in further detail below with reference to the accompanying drawings and embodiments. The following embodiments illustrate the present invention but do not limit its scope.

Fig. 1 is a flowchart of an obstacle detection method based on optical flow feature fusion provided by the present invention. As shown in Fig. 1, the method includes:

101. Capture a continuous sequence of images during parking and preprocess the captured images.

102. Divide each preprocessed image into multiple grid cells, track the position change of each cell across adjacent frames with an optical flow tracking algorithm to identify the cells that contain a moving object, and fuse those cells to obtain the moving-object range.

103. Partition the moving-object range into regions according to preset aspect-ratio thresholds and perform a texture computation on each region to obtain the region where the moving object is located.

It can be understood that, in view of the defects described in the background, the embodiment of the present invention proposes an obstacle detection method based on optical flow feature fusion. The method removes the dependence of moving-object detection during automatic parking on ultrasonic radar and AI detection algorithms: no radar needs to be installed on the vehicle for moving-object detection, and images recorded by the vehicle's own camera (for example a monocular camera or a monocular fisheye camera) are sufficient, after computation, to determine whether a moving object is present. The approach is low-cost and convenient to use, has modest hardware requirements, involves little computation, runs fast, and detects accurately.

In a possible embodiment, preprocessing the captured images includes:

thinning the continuous frames captured during parking according to the current vehicle speed to obtain a set of images spaced n frames apart, where n is a natural number greater than 1.

It can be understood that the vehicle speed during automatic parking is generally between 5 km/h and 10 km/h, i.e. the vehicle moves 1.39 m to 2.78 m per second. A monocular camera records about 30 frames per second, so the vehicle moves 0.046 m to 0.092 m between two adjacent frames. If two frames are taken 8 frames apart, the vehicle moves 0.368 m to 0.736 m, and images separated by this moving distance are the best suited for processing the optical flow changes in the picture. The image set captured by the camera is therefore thinned: one image is extracted every few frames to form a new image set for subsequent optical flow tracking. Thinning the images reduces the computational load of the optical flow tracking while preserving the accuracy of the subsequent detection, which speeds up image processing.
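The frame-interval arithmetic above can be sketched in a few lines. This is a minimal illustration, not the patent's implementation: the function names and the 0.5 m target spacing between kept frames are assumptions drawn from the 0.368 m to 0.736 m range quoted above.

```python
def pick_frame_interval(speed_kmh: float, fps: int = 30,
                        target_move_m: float = 0.5) -> int:
    """Choose n so the vehicle moves roughly target_move_m between kept frames."""
    speed_ms = speed_kmh / 3.6          # km/h -> m/s
    move_per_frame = speed_ms / fps     # metres travelled per raw frame
    return max(2, round(target_move_m / move_per_frame))

def thin_frames(frames, n):
    """Keep every n-th frame of the raw sequence."""
    return frames[::n]
```

For example, at 7.5 km/h and 30 fps this picks an interval of 7 frames, which lands in the same regime as the 8-frame gap discussed above.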

In a possible embodiment, as shown in the flowchart of Fig. 2, dividing each preprocessed image into multiple grid cells and tracking the position change of each cell across adjacent frames with an optical flow tracking algorithm to identify the cells that contain a moving object includes steps 201, 202, and 203, where:

201. Divide each preprocessed frame into multiple grid cells and obtain the coordinates of the center point of each cell.

It can be understood that, in this step, each preprocessed frame is divided into many small cells and the center point of each cell is taken. For example, as shown in Fig. 3, the image recorded by the camera is 1920*1080 pixels, and optical flow tracking tends to be lost near the image border. In this step, therefore, the edge region of the image is first excluded according to a pixel count, and the remaining region is then divided into cells. For example, the region within 100 pixels of the left and right edges and within 50 pixels of the top and bottom edges is treated as the edge region and excluded from optical flow tracking; the remaining region is divided into 172*98 small cells, and the center coordinates of each cell are recorded.
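As a rough illustration of this step, the cell centers can be enumerated as below. The function name and the flat 10*10-pixel cell size are assumptions inferred from the 1920*1080 image, the 100/50-pixel margins, and the 172*98 cell count quoted above.

```python
def grid_centers(width=1920, height=1080, cell=10,
                 margin_x=100, margin_y=50):
    """Center coordinates of each cell after cropping the edge margins."""
    centers = []
    for y in range(margin_y, height - margin_y, cell):
        for x in range(margin_x, width - margin_x, cell):
            centers.append((x + cell // 2, y + cell // 2))
    return centers
```

With these assumed defaults the enumeration yields exactly 172 columns by 98 rows of centers.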

202. Track each center point from the previous frame to the next frame with the optical flow tracking algorithm to obtain the corresponding tracked point; back-track each tracked point from the next frame to the previous frame with the optical flow tracking algorithm to obtain the coordinates of the corresponding back-tracked point.

It can be understood that, in this step, optical flow tracking followed by back-tracking yields the back-tracked point corresponding to each center point; its coordinates serve as the basis for the subsequent judgment of whether the object within the cell has moved.

203. Judge from the coordinate values whether each center point coincides with its corresponding back-tracked point: if they coincide, the cell containing the center point is judged static; if they do not coincide, the cell containing the center point is judged to have moved.

It can be understood that this step verifies the optical flow tracking result. Steps 201 and 202 yield each grid center point, its corresponding tracked point, and its corresponding back-tracked point. If the grid center point and its back-tracked point essentially coincide, the center point is a static point, i.e. it did not move in reality; conversely, if they cannot coincide, the center point was actually moving. If the center point is static, the points of the whole small cell are estimated to be static as well; if the center point is moving, the points of the whole small cell are estimated to be moving as well. The resulting cells containing a moving object are shown in Fig. 4.
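The coincidence test can be sketched abstractly as follows. Here `track_fwd` and `track_bwd` stand for one forward and one backward run of an optical flow tracker (e.g. pyramidal Lucas-Kanade), and the one-pixel tolerance is an assumed parameter, not a value from the patent.

```python
def is_static(center, track_fwd, track_bwd, tol=1.0):
    """Forward-track the grid center, back-track the result, and compare.
    If the back-tracked point lands on the original center, the cell is static;
    otherwise the cell is judged to contain motion."""
    tracked = track_fwd(center)   # previous frame -> next frame
    back = track_bwd(tracked)     # next frame -> previous frame
    dx, dy = back[0] - center[0], back[1] - center[1]
    return (dx * dx + dy * dy) ** 0.5 <= tol
```

The two tracker arguments make the round-trip logic testable without a real optical flow implementation.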

In a possible embodiment, fusing the cells that contain a moving object to obtain the moving-object range includes:

applying dilation and erosion operations to the cells that contain a moving object, screening the cells that contain a moving object again, and fusing the newly screened cells to obtain the moving-object range.

It can be understood that step 102, or steps 201 to 203, determine which cells of the current image contain a moving object. Because optical flow tracking suffers from losses and false detections, the resulting moving regions are sometimes incomplete; these regions therefore need erosion and dilation operations, followed by screening and fusion. Fig. 4 shows the cell regions containing a moving object extracted after optical flow tracking, and Fig. 5 shows the fused moving-object region.
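A minimal sketch of the dilation/erosion pass on the boolean cell mask, followed by fusing the surviving cells into one bounding range. In practice this would typically use an image-processing library's morphology operators; the 3*3 neighborhood and the helper names here are assumptions for illustration.

```python
def _get(mask, r, c):
    """Cell value with out-of-bounds treated as empty."""
    return 0 <= r < len(mask) and 0 <= c < len(mask[0]) and mask[r][c]

def dilate(mask):
    """A cell becomes set if any 3x3 neighbour is set (fills small holes)."""
    return [[any(_get(mask, r + dr, c + dc)
                 for dr in (-1, 0, 1) for dc in (-1, 0, 1))
             for c in range(len(mask[0]))] for r in range(len(mask))]

def erode(mask):
    """A cell stays set only if the whole 3x3 neighbourhood is set."""
    return [[all(_get(mask, r + dr, c + dc)
                 for dr in (-1, 0, 1) for dc in (-1, 0, 1))
             for c in range(len(mask[0]))] for r in range(len(mask))]

def bounding_box(mask):
    """Fuse the remaining moving cells into one (r0, c0, r1, c1) range."""
    cells = [(r, c) for r, row in enumerate(mask)
             for c, v in enumerate(row) if v]
    rows = [r for r, _ in cells]
    cols = [c for _, c in cells]
    return min(rows), min(cols), max(rows), max(cols)
```

Running `erode(dilate(mask))` is a closing operation: isolated false detections are smoothed away while genuine clusters of moving cells survive and are then fused by `bounding_box`.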

In a possible embodiment, partitioning the moving-object range into regions according to preset aspect-ratio thresholds and performing a texture computation on each region to obtain the region where the moving object is located includes:

computing and judging, against a preset first aspect-ratio threshold, whether the moving-object range matches the aspect ratio of a normal moving object;

if the moving-object range matches the aspect ratio of a normal moving object, taking the moving-object range as the region where the moving object is located;

if the moving-object range does not match the aspect ratio of a normal moving object, dividing the moving-object range into k regions according to a preset second aspect-ratio threshold range, where k is a natural number greater than 1; computing horizontal, vertical, and diagonal textures for each divided region; and, from the texture results of each region, selecting the region that best matches the aspect ratio of a normal moving object as the region where the moving object is located.

It can be understood that, in this embodiment, fine feature recognition handles the case in which the moving-object range is too large. A first aspect-ratio threshold range is set in advance to judge whether the obtained moving-object range is that of a normal moving object. If, after comparing the aspect ratio of the moving-object range obtained in step 102 with the first aspect-ratio threshold range, the range is confirmed to have the aspect ratio of a normal moving object, the moving-object region is obtained directly, as shown in Fig. 6.

Step 102 yields the approximate range of the moving object, but a sudden change in vehicle speed or in lighting can sometimes make the range detected by optical flow tracking too large and inconsistent with the normal aspect ratio of a moving object. In that case the range must be re-divided, and the features of each divided region must be checked against those of a moving object.

As shown in Figs. 7 and 8, a second aspect-ratio threshold range is set according to the aspect ratio of the moving-object range, and the range is divided accordingly into two regions (left and right) or three regions (left, middle, and right). To make the division more precise, the second aspect-ratio threshold range may optionally comprise multiple levels of aspect-ratio threshold ranges, each level corresponding to a number of region divisions.

For example, if the aspect ratio is in the range 1.2 to 2.4, the range is divided into two regions (left and right); if it is in the range 2.4 to 3.6, it is divided into three regions (left, middle, and right). Horizontal, vertical, and diagonal textures are then computed for each divided region to select the region that best matches the moving target. As shown in Figs. 9 and 10, the dashed boxes represent candidate moving-object regions and the solid boxes represent the finally located moving-object region.
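The threshold ladder above (one extra vertical slice per additional 1.2 of aspect ratio) can be sketched as follows. The `base` parameter, the equal-width slice geometry, and the handling of exact boundary values are illustrative assumptions, not specified by the patent.

```python
def num_slices(aspect_ratio: float, base: float = 1.2) -> int:
    """Aspect ratios below `base` count as one normal object; each further
    band of width `base` adds a slice (1.2-2.4 -> 2, 2.4-3.6 -> 3, ...)."""
    if aspect_ratio < base:
        return 1
    return int(aspect_ratio // base) + 1

def split_regions(x, y, w, h, base=1.2):
    """Cut an over-wide moving-object range (x, y, w, h) into equal-width
    candidate regions; texture scoring then picks the best one."""
    k = num_slices(w / h, base)
    slice_w = w / k
    return [(x + i * slice_w, y, slice_w, h) for i in range(k)]
```

For instance, a 300*100 range (aspect ratio 3.0) splits into three 100-pixel-wide candidates, matching the left/middle/right division in Fig. 7.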

FIG. 11 is a structural diagram of an obstacle detection system based on optical flow feature fusion provided by an embodiment of the present invention. As shown in FIG. 11, the system comprises an acquisition and preprocessing module 1101, an optical flow tracking detection module 1102 and a fine feature recognition module 1103, wherein:

the acquisition and preprocessing module 1101 is configured to acquire continuous images during parking and preprocess the acquired images;

the optical flow tracking detection module 1102 is configured to divide the preprocessed images into a plurality of grids, track the position change of each grid across adjacent frames with an optical flow tracking algorithm to obtain the grids containing a moving object, and fuse the grids containing the moving object to obtain a moving object range;

the fine feature recognition module 1103 is configured to divide the moving object range into regions according to a preset aspect ratio threshold and perform texture computation on each region to obtain the region where the moving object is located.

It can be understood that the obstacle detection system based on optical flow feature fusion provided by the present invention corresponds to the obstacle detection method based on optical flow feature fusion provided in the foregoing embodiments; for the relevant technical features of the system, reference may be made to those of the method, which are not repeated here.

Referring to FIG. 12, FIG. 12 is a schematic diagram of an embodiment of an electronic device provided by an embodiment of the present invention. As shown in FIG. 12, the electronic device comprises a memory 1210, a processor 1220, and a computer program 1211 stored in the memory 1210 and executable on the processor 1220; when executing the computer program 1211, the processor 1220 implements the following steps:

acquiring continuous images during parking and preprocessing the acquired images;

dividing the preprocessed images into a plurality of grids, tracking the position change of each grid across adjacent frames with an optical flow tracking algorithm to obtain the grids containing a moving object, and fusing the grids containing the moving object to obtain a moving object range;

dividing the moving object range into regions according to a preset aspect ratio threshold and performing texture computation on each region to obtain the region where the moving object is located.
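Of the steps above, the grid-fusion operation (which, per claim 5, applies dilation and erosion before fusing the moving grids into one range) can be sketched in pure NumPy on a boolean grid mask. This is a minimal sketch under stated assumptions: a 3x3 cross structuring element, one closing iteration, border cells treated as empty, and all function names hypothetical.

```python
import numpy as np

def _dilate(m):
    # union of the cell with its 4 neighbours (3x3 cross element)
    p = np.pad(m, 1, constant_values=False)
    return m | p[:-2, 1:-1] | p[2:, 1:-1] | p[1:-1, :-2] | p[1:-1, 2:]

def _erode(m):
    # intersection of the cell with its 4 neighbours
    p = np.pad(m, 1, constant_values=False)
    return m & p[:-2, 1:-1] & p[2:, 1:-1] & p[1:-1, :-2] & p[1:-1, 2:]

def fuse_moving_grids(mask):
    # morphological closing bridges small gaps between moving cells
    m = _erode(_dilate(mask.astype(bool)))
    if not m.any():
        return None
    rows = np.where(m.any(axis=1))[0]
    cols = np.where(m.any(axis=0))[0]
    # bounding range in grid units: (top, left, bottom, right)
    return int(rows[0]), int(cols[0]), int(rows[-1]), int(cols[-1])
```

After the closing step, the cells could be re-screened (for example by dropping isolated cells) before the bounding range is taken, matching the "screen again, then fuse" wording of claim 5.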

Referring to FIG. 13, FIG. 13 is a schematic diagram of an embodiment of a computer-readable storage medium provided by the present invention. As shown in FIG. 13, this embodiment provides a computer-readable storage medium 1300 on which a computer program 1311 is stored; when executed by a processor, the computer program 1311 implements the following steps:

acquiring continuous images during parking and preprocessing the acquired images;

dividing the preprocessed images into a plurality of grids, tracking the position change of each grid across adjacent frames with an optical flow tracking algorithm to obtain the grids containing a moving object, and fusing the grids containing the moving object to obtain a moving object range;

dividing the moving object range into regions according to a preset aspect ratio threshold and performing texture computation on each region to obtain the region where the moving object is located.

The obstacle detection method, system, electronic device and storage medium based on optical flow feature fusion provided by the embodiments of the present invention remove the dependence of moving-object detection during automatic parking on ultrasonic radar and AI detection algorithms: images recorded by the vehicle's own camera suffice, after computation, to determine whether a moving object is present. The approach offers low cost, low hardware requirements, a small computational load, high speed and high detection accuracy.

It should be noted that the descriptions of the above embodiments each have their own emphasis; for parts not described in detail in one embodiment, reference may be made to the relevant descriptions of the other embodiments.

Those skilled in the art should understand that the embodiments of the present invention may be provided as a method, a system, or a computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM and optical storage) containing computer-usable program code.

The present invention is described with reference to flowcharts and/or block diagrams of the method, device (system) and computer program product according to the embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, special-purpose computer, embedded computer or other programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to operate in a specific manner, such that the instructions stored in the computer-readable memory produce an article of manufacture comprising instruction means which implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

These computer program instructions may also be loaded onto a computer or other programmable data processing device, such that a series of operational steps is performed on the computer or other programmable device to produce a computer-implemented process, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

Although preferred embodiments of the present invention have been described, those skilled in the art may make additional changes and modifications to these embodiments once the basic inventive concept is known. Therefore, the appended claims are intended to be construed as covering the preferred embodiments and all changes and modifications falling within the scope of the present invention.

Obviously, those skilled in the art may make various changes and variations to the present invention without departing from its spirit and scope. If these changes and variations fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to encompass them.

Claims (10)

1. An obstacle detection method based on optical flow feature fusion, characterized by comprising:

acquiring continuous images during parking and preprocessing the acquired images;

dividing the preprocessed images into a plurality of grids, tracking the position change of each grid across adjacent frames with an optical flow tracking algorithm to obtain the grids containing a moving object, and fusing the grids containing the moving object to obtain a moving object range;

dividing the moving object range into regions according to a preset aspect ratio threshold and performing texture computation on each region to obtain the region where the moving object is located.

2. The method according to claim 1, characterized in that preprocessing the acquired images comprises:

thinning the continuous frame images acquired during parking according to the current vehicle speed to obtain a set of continuous images spaced n frames apart, where n is a natural number greater than 1.

3. The method according to claim 1 or 2, characterized in that dividing the preprocessed images into a plurality of grids and tracking the position change of each grid across adjacent frames with an optical flow tracking algorithm to obtain the grids containing a moving object comprises:

dividing each preprocessed frame into a plurality of grids and obtaining the coordinates of the centre point of each grid;

tracking each centre point from the previous frame to the next frame with the optical flow tracking algorithm to obtain a corresponding tracking point, and back-tracking the tracking point from the next frame to the previous frame with the optical flow tracking algorithm to obtain the coordinates of a corresponding back-tracking point;

judging from the coordinate values whether each centre point coincides with its corresponding back-tracking point: if they coincide, determining that the grid containing the centre point has moved; if they do not coincide, determining that the grid containing the centre point is stationary.

4. The method according to claim 3, characterized in that, in dividing the image into a plurality of grids, the edge regions of the image are first excluded according to the number of pixels, and the remaining area is then divided into a plurality of grids.

5. The method according to claim 1, characterized in that fusing the grids containing the moving object to obtain the moving object range comprises:

performing dilation and erosion operations on the grids containing the moving object, screening the grids containing the moving object again, and fusing the newly screened grids containing the moving object to obtain the moving object range.

6. The method according to claim 1, characterized in that dividing the moving object range into regions according to the preset aspect ratio threshold and performing texture computation on each region to obtain the region where the moving object is located comprises:

calculating and judging, according to a preset first aspect ratio threshold, whether the moving object range conforms to the aspect ratio of a normal moving object;

if the moving object range conforms to the aspect ratio of a normal moving object, taking the moving object range as the region where the moving object is located;

if the moving object range does not conform to the aspect ratio of a normal moving object, dividing the moving object range into k regions according to a preset second aspect ratio threshold range, k being a natural number greater than 1; computing horizontal, vertical and oblique textures for each divided region; and screening out, according to the texture computation results of each region, the region that best conforms to the aspect ratio of a normal moving object, and taking this region as the region where the moving object is located.

7. The method according to claim 6, characterized in that the second aspect ratio threshold range comprises multiple levels of aspect ratio threshold ranges, each level corresponding to a number of region divisions.

8. An obstacle detection system based on optical flow feature fusion, characterized by comprising:

an acquisition and preprocessing module, configured to acquire continuous images during parking and preprocess the acquired images;

an optical flow tracking detection module, configured to divide the preprocessed images into a plurality of grids, track the position change of each grid across adjacent frames with an optical flow tracking algorithm to obtain the grids containing a moving object, and fuse the grids containing the moving object to obtain a moving object range;

a fine feature recognition module, configured to divide the moving object range into regions according to a preset aspect ratio threshold and perform texture computation on each region to obtain the region where the moving object is located.

9. An electronic device, characterized by comprising a memory and a processor, the processor implementing the steps of the obstacle detection method based on optical flow feature fusion according to any one of claims 1-7 when executing a computer management program stored in the memory.

10. A computer-readable storage medium, characterized in that a computer management program is stored thereon, and when executed by a processor, the computer management program implements the steps of the obstacle detection method based on optical flow feature fusion according to any one of claims 1-7.
CN202210829872.3A 2022-07-14 2022-07-14 Obstacle detection method and system based on optical flow feature fusion and electronic equipment Active CN115409873B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210829872.3A CN115409873B (en) 2022-07-14 2022-07-14 Obstacle detection method and system based on optical flow feature fusion and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210829872.3A CN115409873B (en) 2022-07-14 2022-07-14 Obstacle detection method and system based on optical flow feature fusion and electronic equipment

Publications (2)

Publication Number Publication Date
CN115409873A true CN115409873A (en) 2022-11-29
CN115409873B CN115409873B (en) 2025-07-08

Family

ID=84158068

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210829872.3A Active CN115409873B (en) 2022-07-14 2022-07-14 Obstacle detection method and system based on optical flow feature fusion and electronic equipment

Country Status (1)

Country Link
CN (1) CN115409873B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005170290A (en) * 2003-12-12 2005-06-30 Nissan Motor Co Ltd Obstacle detection device
JP2009076094A (en) * 2008-11-20 2009-04-09 Panasonic Corp Moving object monitoring device
CN101930609A (en) * 2010-08-24 2010-12-29 东软集团股份有限公司 Approximate target object detecting method and device
US20110142283A1 (en) * 2009-12-10 2011-06-16 Chung-Hsien Huang Apparatus and method for moving object detection
JP2012088861A (en) * 2010-10-18 2012-05-10 Secom Co Ltd Intrusion object detection device
CN105069808A (en) * 2015-08-31 2015-11-18 四川虹微技术有限公司 Video image depth estimation method based on image segmentation
CN108596946A (en) * 2018-03-21 2018-09-28 中国航空工业集团公司洛阳电光设备研究所 A kind of moving target real-time detection method and system
JP2019021990A (en) * 2017-07-12 2019-02-07 キヤノン株式会社 Image processing apparatus, image processing method, and program
CN113156421A (en) * 2021-04-07 2021-07-23 南京邮电大学 Obstacle detection method based on information fusion of millimeter wave radar and camera
JP2022014735A (en) * 2020-07-07 2022-01-20 フジテック株式会社 Image processing apparatus and image processing method


Also Published As

Publication number Publication date
CN115409873B (en) 2025-07-08

Similar Documents

Publication Publication Date Title
Teoh et al. Symmetry-based monocular vehicle detection system
US8050459B2 (en) System and method for detecting pedestrians
EP2958054B1 (en) Hazard detection in a scene with moving shadows
CN104715471B (en) Target locating method and its device
WO2021134441A1 (en) Automated driving-based vehicle speed control method and apparatus, and computer device
CN112947419A (en) Obstacle avoidance method, device and equipment
CN112507862A (en) Vehicle orientation detection method and system based on multitask convolutional neural network
CN103824070A (en) A Fast Pedestrian Detection Method Based on Computer Vision
CN104183127A (en) Traffic surveillance video detection method and device
JP2014170540A (en) Road surface altitude shape estimation method and system
JP6226368B2 (en) Vehicle monitoring apparatus and vehicle monitoring method
CN114137512B (en) A method for tracking multiple vehicles ahead by integrating millimeter-wave radar and deep learning vision
CN110443142B (en) A deep learning vehicle counting method based on road surface extraction and segmentation
JP2008262333A (en) Road surface discrimination device and road surface discrimination method
EP2813973B1 (en) Method and system for processing video image
CN104168444A (en) Target tracking method of tracking ball machine and tracking ball machine
CN113177439A (en) Method for detecting pedestrian crossing road guardrail
CN103679121A (en) Method and system for detecting roadside using visual difference image
CN105023002A (en) Vehicle logo positioning method based on active vision
CN103886609A (en) Vehicle tracking method based on particle filtering and LBP features
JP2013069045A (en) Image recognition device, image recognition method, and image recognition program
CN116091542A (en) Vehicle tracking method and device based on multiple cameras
CN114067290A (en) A visual perception method and system based on rail transit
CN104268889A (en) Vehicle tracking method based on automatic characteristic weight correction
CN115409873A (en) Obstacle detection method, system and electronic equipment based on optical flow feature fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant