
CN120260018A - A roadway obstacle detection method based on three-dimensional point cloud - Google Patents

A roadway obstacle detection method based on three-dimensional point cloud

Info

Publication number
CN120260018A
CN120260018A (application CN202510757182.5A)
Authority
CN
China
Prior art keywords
point cloud
obstacle
points
motion
calculating
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202510757182.5A
Other languages
Chinese (zh)
Other versions
CN120260018B (en)
Inventor
丁林海
何龙龙
闫磊
文彦春
冯元党
马宇
胡增光
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tongchuan Coal Mine Machinery Co ltd
Original Assignee
Tongchuan Coal Mine Machinery Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tongchuan Coal Mine Machinery Co ltd filed Critical Tongchuan Coal Mine Machinery Co ltd
Priority to CN202510757182.5A priority Critical patent/CN120260018B/en
Publication of CN120260018A publication Critical patent/CN120260018A/en
Application granted granted Critical
Publication of CN120260018B publication Critical patent/CN120260018B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/277Analysis of motion involving stochastic approaches, e.g. using Kalman filters
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G06V10/765Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects using rules for classification or partitioning the feature space
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/64Three-dimensional [3D] objects


Abstract

The present invention relates to the field of image processing technology, and in particular to a roadway obstacle detection method based on a three-dimensional point cloud. The method includes collecting a three-dimensional point cloud image and obtaining point cloud sets from it; processing each point cloud set with an obstacle detection model to identify pseudo-obstacle point cloud sets; and removing the points of the pseudo-obstacle point cloud sets from the three-dimensional point cloud image. The obstacle detection model includes a motion feature classification layer that obtains a point cloud set and calculates the speed of its points; identifies the point cloud set as a fast-moving obstacle or a suspected obstacle based on that speed; calculates the movement direction of the points of a suspected obstacle; and identifies the point cloud set as a pseudo-obstacle or a slow-moving obstacle based on whether the movement directions of its points are consistent. By screening pseudo-obstacles through speed thresholds and movement directions, the motion feature classification layer avoids misjudging them as real obstacles and thereby reduces the false detection rate.

Description

Roadway obstacle detection method based on three-dimensional point cloud
Technical Field
The invention relates to the technical field of image processing, in particular to a roadway obstacle detection method based on three-dimensional point cloud.
Background
At present, artificial intelligence is developing rapidly, and unmanned driving, one of its important application scenarios, is gradually moving from the laboratory into real life. Unmanned driving depends on the vehicle's perception of its environment, and obstacle detection is a key link in that perception: it determines whether an unmanned vehicle can run safely and efficiently. Accurately identifying and distinguishing real obstacles from false obstacles is therefore decisive for the stability and safety of an unmanned system.
In practical applications, unmanned vehicles face complex and changeable environmental conditions, and natural phenomena such as rain, snow and blowing dust are easily misidentified as obstacles. For example, in stormy weather the splashing of raindrops interferes with the vehicle's sensors, which may misidentify a dense curtain of rain as an obstacle and trigger braking or automatic evasive maneuvers, reducing driving efficiency and potentially causing traffic disruption or rear-end collisions.
In the prior art, the usual approach is to initialize confidence values for the points in a three-dimensional point cloud image and to divide the image into obstacle and non-obstacle regions according to manually set thresholds on imaging-point spacing and target size. The confidence is then further adjusted according to the size and density of the regional point cloud in order to remove pseudo-obstacles.
However, such methods have significant limitations. Because they rely on fixed, manually preset rules such as size thresholds and density standards, they struggle to adapt to the variety of pseudo-obstacles in complex, changeable scenes. Under blowing dust of varying density or rain and snow of different forms, the point cloud characteristics of pseudo-obstacles differ markedly; conventional methods cannot respond flexibly, misjudgment or missed detection occurs easily, driving safety is reduced, and the reliability of the intelligent detection system is insufficient.
Disclosure of Invention
The invention provides a roadway obstacle detection method based on a three-dimensional point cloud, aiming to solve the technical problem that unmanned driving scenes are prone to misjudgment or missed detection in complex environments. The method comprises: acquiring a three-dimensional point cloud image; obtaining a plurality of point cloud sets from the image; processing any point cloud set with a constructed obstacle detection model to identify the category of the obstacle to which the point cloud set belongs; and, when the point cloud set belongs to a pseudo-obstacle, removing its points from the three-dimensional point cloud image.
The obstacle detection model comprises a motion feature classification layer, which obtains the category of the obstacle represented by a point cloud set and outputs the speed and movement direction of its points. The layer operates as follows: obtain the point cloud set and calculate the speed of its points; when the speed of the points is at or above a speed threshold, judge that the point cloud set belongs to a fast-moving obstacle, otherwise treat it as a suspected obstacle; calculate the movement direction of the points of the suspected obstacle; and when the movement direction of a point of the suspected obstacle is inconsistent with the movement directions of the other points, judge that the point cloud set belongs to a pseudo-obstacle.
In the motion feature classification layer, the obstacle types are first distinguished by the speed threshold, separating, in complex environments, fast-moving vehicles, slow-moving pedestrians, and wind-blown dust, rain or snow. The movement direction is then analyzed to capture the differences between dust, rain, snow and pedestrians, so that pseudo-obstacles are identified rather than misjudged as real obstacles and the false detection rate is reduced. Finally, the pseudo-obstacles are removed from the image, making the final detection result more reliable.
The speed of the points is calculated by obtaining the position information of the points of the point cloud set at the current time, predicting their position at the next time by Kalman filtering, and computing the speed from the two positions.
The Kalman filtering algorithm effectively fuses historical data with current observations and is robust to noise, so it can accurately predict the position and speed of a point even in complex environments, for example when poor lighting or electromagnetic interference in a roadway introduces measurement noise, providing a reliable basis for subsequent detection.
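The patent does not give the filter equations; as a rough illustration of how Kalman filtering can predict a point's next position and yield a velocity estimate, here is a minimal one-dimensional constant-velocity sketch (all parameter values are hypothetical, not taken from the patent):

```python
# Minimal 1-D constant-velocity Kalman filter sketch. A production
# filter would track the full state covariance; here the position
# variance `p` stands in for it to keep the idea visible.

def kalman_predict_update(x, v, p, z, dt=0.1, q=0.01, r=0.1):
    """One predict/update cycle for state (position x, velocity v).

    z  -- noisy position measurement at the current time step
    q  -- process noise, r -- measurement noise (hypothetical values)
    """
    # Predict: propagate position with the current velocity estimate.
    x_pred = x + v * dt
    p_pred = p + q                        # process noise inflates uncertainty
    # Update: blend the prediction with the measurement.
    k = p_pred / (p_pred + r)             # Kalman gain
    innovation = z - x_pred
    x_new = x_pred + k * innovation
    v_new = v + k * innovation / dt       # crude velocity correction
    p_new = (1 - k) * p_pred
    return x_new, v_new, p_new
```

Fed a track moving at a constant speed, the velocity estimate converges toward the true value within a few steps, which is the quantity the motion feature classification layer compares against its speed threshold.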
As a still further improvement of the method, the movement direction of a point of a suspected obstacle is calculated by obtaining a velocity vector from the point's position information, deriving the point's movement direction vector from the velocity vector, and calculating the azimuth angle and pitch angle of the point from the movement direction vector.
Obtaining a velocity vector from the position information, deriving a movement direction vector from it, and finally calculating the azimuth and pitch angles describes the movement direction of a suspected obstacle's points comprehensively and precisely. In a complex roadway or unmanned-driving scene, the movement direction characteristics of real and false obstacles differ subtly, and this refined calculation can capture those differences, providing a key basis for the subsequent pseudo-obstacle identification.
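As a sketch of the angle computation described above (the patent does not fix the angle conventions, so those below are assumptions): the azimuth is measured in the x-y plane from the +x axis, and the pitch is the elevation above that plane.

```python
import math

def direction_angles(velocity):
    """Azimuth and pitch (in degrees) of a motion direction vector."""
    vx, vy, vz = velocity
    azimuth = math.degrees(math.atan2(vy, vx))                 # angle in the x-y plane
    pitch = math.degrees(math.atan2(vz, math.hypot(vx, vy)))   # elevation angle
    return azimuth, pitch
```

For example, a point moving diagonally in the horizontal plane, (1, 1, 0), has azimuth 45 degrees and pitch 0.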
As a further improvement of the method of the invention, the obstacle detection model also comprises a loss calculation, in which the classification loss L_cls, the positioning loss L_loc and the motion vector loss L_mot of the obstacle are calculated separately, and the total loss of the obstacle is calculated as

L_total = λ1·L_cls + λ2·L_loc + λ3·L_mot

where λ1, λ2 and λ3 are preset weighting coefficients.
Calculating the classification loss, the positioning loss and the motion vector loss improves classification accuracy, enhances positioning precision and optimizes motion vector estimation. Through the preset weighting coefficients, the total loss function can flexibly adjust the proportion of each loss within the total, improving the adaptability and performance of the model in various complex scenes.
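Written out, the weighted combination is straightforward; the default weight values below are placeholders, not values from the patent:

```python
def total_loss(l_cls, l_loc, l_motion, w=(1.0, 1.0, 1.0)):
    """Weighted total loss: L_total = w1*L_cls + w2*L_loc + w3*L_mot.

    The weights correspond to the preset coefficients λ1, λ2, λ3 in the
    text; the values here are placeholders.
    """
    w1, w2, w3 = w
    return w1 * l_cls + w2 * l_loc + w3 * l_motion
```

Raising one weight (e.g. w=(2.0, 1.0, 1.0)) shifts the optimization emphasis toward classification accuracy, which is how the proportion of each loss is tuned per scene.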
As a further improvement of the method, the obstacle detection model further comprises an input layer, which obtains the coordinate data of each point in the point cloud set, constructs a coordinate data matrix for each point from its coordinate data, and assembles the coordinate data matrices of all points into a point cloud data set.
The obstacle detection model further comprises a feature extraction layer, which receives the motion features of the point cloud set output by the motion feature classification layer, namely the speed and movement direction, and applies three convolution layers to them to obtain the feature map of the three-layer convolution.
The obstacle detection model further comprises a dense classification layer, which receives the feature map output by the feature extraction layer and calculates, through an activation function, the probability of each obstacle category to which the feature map may belong, the categories comprising fast-moving obstacles, slow-moving obstacles and pseudo-obstacles. A multi-layer perceptron (MLP) extracts the nonlinear relation between the motion features and the obstacle categories and calculates the motion-state probability of the points of a suspected obstacle, and the motion-state probability is weight-fused with the category probabilities to obtain the category information of the obstacle for those points.
The dense classification layer quickly converts the data in the feature map into a probability distribution over the obstacle categories, while the multi-layer perceptron (MLP) mines the nonlinear relation between the motion features and the obstacle categories and calculates the motion-state probability of the suspected obstacle's points. Working together, the two analyze the feature map from several angles, fully exploit the hidden information in the data, strengthen the representation of obstacle features, capture the fine differences between obstacle types more accurately, and lay the foundation for accurate classification.
As a further improvement of the method, acquiring the three-dimensional point cloud image comprises collecting the three-dimensional point cloud image and denoising it.
As a further improvement of the method, the plurality of point cloud sets in the three-dimensional point cloud image is obtained by applying mean shift clustering to the points of the image.
As a further improvement of the method of the present invention, the movement direction of a point of a suspected obstacle is considered inconsistent with the movement directions of the other points when the angular difference between the point and any other point is greater than a preset difference threshold.
The method has the following advantages. The motion feature classification layer first distinguishes obstacle types by the speed threshold, narrowing the scope of subsequent analysis and improving detection efficiency; it then analyzes movement direction to identify pseudo-obstacles, reducing the false detection rate, and eliminates them so that the detection result is more reliable. The Kalman filtering algorithm fuses historical and current data to accurately predict the position and speed of points in complex environments, providing a basis for detection. Calculating the classification, positioning and motion vector losses improves classification accuracy, positioning precision and motion vector estimation, and the total loss function can flexibly adjust the proportion of each loss through preset weighting coefficients, enhancing the model's adaptability and performance in complex scenes.
Drawings
Fig. 1 is a flowchart of a roadway obstacle detection method based on three-dimensional point cloud according to an embodiment of the present invention;
fig. 2 is a flowchart of constructing an obstacle detection model in the present embodiment;
FIG. 3 is a flow chart of motion detection in the present embodiment;
Fig. 4 is a schematic diagram of a three-dimensional point cloud identified by the constructed obstacle detection model in the present embodiment;
fig. 5 is a schematic illustration of a three-dimensional point cloud image obtained after processing by a method according to an embodiment of the invention.
In the figures: 31, building; 32, blowing dust; 33, ground.
Detailed Description
The embodiment provides a roadway obstacle detection method based on three-dimensional point cloud, as shown in fig. 1, the method comprises the steps of S100-S300:
And step S100, acquiring a three-dimensional point cloud image, and acquiring a plurality of point cloud sets in the three-dimensional point cloud image.
In implementation, the three-dimensional point cloud image may be acquired by a three-dimensional laser scanner or a depth camera.
After the three-dimensional point cloud image is obtained, the point cloud must be clustered. Clustering yields a number of clusters, each of which is a point cloud set whose points are similar in spatial position; each cluster can be regarded as the point cloud representation of an obstacle or of a region with similar characteristics. For example, in an autonomous driving scene, one cluster may represent a vehicle, another a pedestrian, and some clusters may represent dust, rain or snow.
The clustering of the point cloud may be achieved by a clustering algorithm, such as a mean shift clustering algorithm, a K-means clustering algorithm, a gaussian mixture model clustering algorithm, and the like.
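As an illustration of the mean-shift idea named above, here is a toy sketch over 2-D points with a flat kernel (a real implementation would operate on the 3-D points with a tuned bandwidth and an efficient neighbor search):

```python
import math

def mean_shift(points, bandwidth=1.0, iters=20):
    """Toy mean shift: each point's mode is repeatedly shifted to the
    mean of the data points within `bandwidth`; points whose modes
    coincide form one cluster. Returns clusters as lists of indices."""
    modes = [list(p) for p in points]
    for _ in range(iters):
        for i, m in enumerate(modes):
            neigh = [p for p in points if math.dist(p, m) <= bandwidth]
            modes[i] = [sum(c) / len(neigh) for c in zip(*neigh)]
    # Group points whose modes converged to the same location.
    clusters = []
    for i, m in enumerate(modes):
        for c in clusters:
            if math.dist(modes[c[0]], m) < 1e-3:
                c.append(i)
                break
        else:
            clusters.append([i])
    return clusters
```

Three points near the origin and two points near (5, 5) converge to two distinct modes, i.e. two point cloud sets.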
In addition, the three-dimensional point cloud image must be denoised; Gaussian filtering can be used to remove noise points from the point cloud and enhance the data.
Step S200, processing the point cloud set with the constructed obstacle detection model and identifying the category of the obstacle to which the point cloud set belongs.
As shown in fig. 2, the obstacle detection model is the model used in the invention to detect fast-moving obstacles and pseudo-obstacles. The model is built on the existing PointNet and comprises four layers in total: an input layer, a motion feature classification layer, a feature extraction layer and a dense classification layer. The motion feature classification layer is the core of the model and is used to identify the category of the obstacle. The construction of these four layers is described in turn.
Step S210, an input layer is constructed.
The input layer serves as the entrance of the model: it receives the three-dimensional point cloud set data and converts it so that the subsequent motion feature classification layer can process it. It receives the point cloud sets, obtains the coordinate data of each point, constructs a coordinate data matrix for each point, and assembles the matrices of all points into a point cloud data set.
Specifically, each point in the three-dimensional point cloud set has coordinates, and since obstacles move, the coordinates of each point differ between times; each point of the point cloud set therefore forms a point cloud coordinate matrix at each time:

P_i^k = [x_i^k, y_i^k, z_i^k]

where P_i^k denotes the point cloud coordinate data matrix of the i-th point at time k, and x_i^k, y_i^k, z_i^k are the coordinates of the i-th point at time k.
After the point cloud coordinate matrices of all points at the different times are obtained, the matrices of all points at the same time are assembled into a point cloud data set:

D^k = {P_1^k, P_2^k, …, P_n^k}

where D^k denotes the point cloud data set at time k and P_i^k denotes the point cloud coordinate data matrix of the i-th point at time k.
For example, if a point cloud set contains two points with coordinates (x_1^k, y_1^k, z_1^k) and (x_2^k, y_2^k, z_2^k) at the k-th time, their point cloud coordinate matrices are

P_1^k = [x_1^k, y_1^k, z_1^k],  P_2^k = [x_2^k, y_2^k, z_2^k]

and the corresponding point cloud data set is D^k = {P_1^k, P_2^k}.
After the input layer completes the construction of the point cloud data set, it outputs the data to the motion feature classification layer.
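The input-layer construction above can be mirrored in a few lines (a sketch; the "matrix" of one point is simply its coordinate list here):

```python
def build_point_matrix(coord):
    """Coordinate matrix P_i^k of one point at time k: a plain [x, y, z] list."""
    return list(coord)

def build_dataset(coords_at_time_k):
    """Point cloud data set D^k = {P_1^k, ..., P_n^k} for one time k."""
    return [build_point_matrix(c) for c in coords_at_time_k]
```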
And S220, constructing a motion feature classification layer.
The motion feature classification layer obtains the category of the obstacle represented by a point cloud set: it classifies the point cloud set as a fast-moving obstacle, a slow-moving obstacle or a pseudo-obstacle according to motion detection, and outputs the motion features of the point cloud set, namely speed and movement direction, to the feature extraction layer.
As shown in fig. 3, the motion detection classification includes steps S221 to S224:
step S221, acquiring a point cloud set, and calculating the speed of points in the point cloud set.
Specifically, the motion feature classification layer first receives the point cloud data set output by the input layer. It predicts each point's position at the next time, i.e. its coordinates, with a Kalman filter and then calculates the speed of the points in the point cloud set. When the speed of the points is at or above a speed threshold, the point cloud set is judged to belong to a fast-moving obstacle; otherwise the movement direction of the points of the suspected obstacle is calculated, and when the movement direction of a point is inconsistent with the movement directions of the other points in the suspected obstacle, the point cloud set is judged to belong to a pseudo-obstacle.
It should be noted that in the point cloud data set output by the input layer each point carries only coordinate information and no speed information, so the point cloud coordinate matrix of each point must be reconstructed to append velocity components; when only a single time is available, the velocity values can be initialized to 0. The reconstructed point cloud velocity coordinate matrix is:

P_i^k = [x_i^k, y_i^k, z_i^k, v_{x,i}^k, v_{y,i}^k, v_{z,i}^k]

where P_i^k denotes the point cloud coordinate data matrix of the i-th point at time k, x_i^k, y_i^k, z_i^k are the coordinates of the i-th point at time k, and v_{x,i}^k, v_{y,i}^k, v_{z,i}^k are the velocity components of the i-th point along each coordinate axis at time k.
After the point cloud speed coordinate matrix of each point is constructed, the movement speed of the points in the point cloud set is calculated.
Specifically, the position information of a point of the point cloud set at time k is obtained, the position of the point at time k+1 is predicted by Kalman filtering, and the velocity and velocity vector of the point's motion are calculated from the position information at times k and k+1.
Step S222, judging that the point cloud set belongs to a fast moving obstacle or a suspected obstacle.
Specifically, when the speed of the points in the point cloud set is at or above a speed threshold of 30 km/h, the point cloud set is judged to belong to a fast-moving obstacle; otherwise it is judged to be a suspected obstacle.
Preliminary identification by the speed threshold distinguishes obstacle types quickly and efficiently. The speed threshold acts as a screening filter that rapidly separates fast-moving objects from slow or static ones. This preliminary screening greatly narrows the scope of further analysis, reduces unnecessary computation and processing time, and improves the efficiency of the whole detection system.
Regarding the choice of the speed threshold: in an unmanned-driving scenario, an obstacle can generally be considered fast-moving when its speed relative to the autonomous vehicle reaches 30 km/h or more. For example, on a highway, a vehicle approaching from behind at a speed significantly higher than the ego vehicle's, or suddenly cutting in from a side lane at high speed, can be regarded as a fast-moving obstacle.
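The first-stage screening in steps S221–S222 can be sketched as follows. Aggregating the per-point speeds by their mean is an assumption: the text only says the set is fast-moving "when the speed of the points is above the threshold" without fixing the aggregation.

```python
FAST_SPEED_KMH = 30.0  # speed threshold from the text

def screen_by_speed(point_speeds_kmh, threshold=FAST_SPEED_KMH):
    """First-stage screening of one point cloud set: 'fast' if the
    points' (mean) speed reaches the threshold, otherwise 'suspected',
    pending the movement-direction check."""
    mean_speed = sum(point_speeds_kmh) / len(point_speeds_kmh)
    return "fast" if mean_speed >= threshold else "suspected"
```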
Step S223, calculating the movement direction of the midpoint of the suspected obstacle.
Specifically, if the point cloud set is judged to be a suspected obstacle, a further identification is required, which is made from the movement direction: a velocity vector is obtained from the position information of a point of the suspected obstacle, the point's movement direction vector is calculated from the velocity vector, and the azimuth and pitch angles of the point are calculated from the movement direction vector as its movement direction. Calculating a movement direction from position information is prior art and is not described here.
Step S224, judging that the suspected obstacle point cloud set belongs to a pseudo obstacle or a slow movement obstacle.
The specific judgment logic is: when the movement direction of a point is inconsistent with the movement directions of the other points in the suspected obstacle, the point cloud set is judged to belong to a pseudo-obstacle; otherwise it is judged to be a slow-moving obstacle. Movement directions are inconsistent when the angular difference between the point and any other point is greater than a preset difference threshold.
Further judging and identifying pseudo-obstacles from the movement direction after the initial screening of suspected obstacles is a key highlight of the method. In a real environment, pseudo-obstacles are often caused by interference such as rain, snow or blowing dust. Their movement directions are markedly irregular and random, quite unlike the movement direction characteristics of real obstacles, and by analyzing the movement direction the system can accurately capture this discrepancy. For example, debris occasionally blown through a roadway has a movement direction governed by factors such as airflow and clearly inconsistent with the movement directions of surrounding real obstacles; the motion feature classification layer can then correctly identify the debris as a pseudo-obstacle rather than misjudging it as a real one, significantly reducing the false detection rate.
In addition, since the arms and legs of a walking pedestrian generally do not move in the same direction, the movement directions of all the corresponding points cannot be completely consistent; misjudgment can be prevented by setting a threshold for deciding whether the movement directions are consistent.
For example, with the threshold set to 10: if 15 points in the point cloud set are inconsistent with a point's movement direction, the point's movement direction is judged inconsistent with that of the other points, and the point cloud set to which the point belongs is a pseudo-obstacle. If only 5 points are inconsistent with the point's movement direction, its movement direction is judged consistent with the others, and the point cloud set is a slow-moving obstacle.
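The counting rule in this example can be sketched as below. The count threshold of 10 comes from the text; the 15-degree angular-difference threshold is a placeholder, since the preset difference threshold is left unspecified.

```python
def is_pseudo_obstacle(angle_diffs_deg, diff_threshold_deg=15.0,
                       count_threshold=10):
    """Direction-consistency check for one point of a suspected obstacle.

    angle_diffs_deg -- angular differences between this point's movement
    direction and every other point's. If more than `count_threshold`
    points differ by more than `diff_threshold_deg`, the directions are
    judged inconsistent and the set is a pseudo-obstacle."""
    inconsistent = sum(1 for d in angle_diffs_deg if d > diff_threshold_deg)
    return inconsistent > count_threshold
```

With 15 strongly diverging points the set is flagged as a pseudo-obstacle; with only 5 it is not, matching the worked example above.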
Finally, so that the feature extraction layer can process the data, the motion feature classification layer outputs the speed and movement direction of the point cloud set.
And step S230, constructing a feature extraction layer.
The feature extraction layer receives the motion features obtained by the motion feature classification layer, namely the speed and movement direction, and applies three convolution layers to them; the specific operations are identical to PointNet. The first convolution layer extracts local features from the motion features, the second performs feature enhancement on the output of the first to extract mesoscale features, and the third performs global feature extraction on the output of the second; the result is the feature map of the three-layer convolution.
In addition, since the PointNet model can identify static obstacles, the static obstacles in the three-dimensional point cloud image can also be obtained in this step.
And step S240, constructing a dense classification layer.
The dense classification layer receives the feature map output by the feature extraction layer and calculates, through an activation function, the probability of each obstacle category, the categories comprising fast-moving obstacles, slow-moving obstacles and pseudo-obstacles. A multi-layer perceptron (MLP) extracts the nonlinear relation between the motion features and the obstacle categories and calculates the motion-state probability of a point, and the motion-state probability is weight-fused with the category probabilities to obtain the category information of the obstacle for that point.
In particular, the dense classification layer typically applies an activation function such as the softmax function. The feature map output by the feature extraction layer is first converted by the softmax function into a probability distribution, in this invention the probability of the obstacle class, i.e. the probabilities of categories such as fast-moving obstacles, slow-moving obstacles, static obstacles and pseudo obstacles.
The multi-layer perceptron (MLP) has strong nonlinear fitting capability: through the combination of multiple layers of neurons and the action of activation functions, it can automatically learn complex nonlinear relations, so that obstacles with different movement characteristics can be classified and understood more accurately. The MLP is therefore used to extract the nonlinear relation between the motion features and the obstacle categories and to calculate the motion state probability of the points;
and finally, the motion state probability and the probability of the obstacle category are weighted and fused to obtain the category information of the obstacle at the point.
The category information of the obstacle refers to a composite score including the obstacle classification result and the movement state result.
For example, the obstacle classification result is one of static obstacle, fast-moving obstacle, slow-moving obstacle and pseudo obstacle; the corresponding movement state results are, respectively: a speed of 0; a speed above 30 km/h; a speed between 5 km/h and 30 km/h with a consistent movement direction; and a movement direction inconsistent with that of the other points.
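The softmax classification and its weighted fusion with the MLP's motion-state probability can be sketched numerically as follows; the logits, the 0.7/0.3 fusion weights and the class ordering are illustrative assumptions only.

```python
import numpy as np

def softmax(z):
    """Convert raw scores into a probability distribution."""
    e = np.exp(z - np.max(z))
    return e / e.sum()

classes = ["fast moving", "slow moving", "static", "pseudo"]

# Assumed logits from the dense classification head over the feature map
p_class = softmax(np.array([2.0, 0.5, 0.1, -1.0]))
# Assumed motion-state probabilities from the MLP over the motion features
p_state = softmax(np.array([1.5, 1.0, 0.2, -0.5]))

# Weighted fusion of the two distributions (the 0.7/0.3 split is assumed)
w = 0.7
p_fused = w * p_class + (1.0 - w) * p_state
category = classes[int(np.argmax(p_fused))]  # composite category information
```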
The above describes the construction of the four layers of the obstacle detection model. On this basis, it is also necessary to perform loss evaluation and optimization on the results of model detection.
The loss function measures the difference between the model's prediction and the ground truth, guiding the model to optimize its parameters for obstacle classification and three-dimensional point cloud reconstruction. The loss function may be calculated using mean squared error loss, mean absolute error loss or cross-entropy loss. The model's loss comprises three parts: classification loss, positioning loss and motion loss.
The classification loss is the difference between the model's predicted probability of the obstacle class and the true class. The positioning loss is the difference between the model's predicted obstacle positions and the actual positions. The motion loss measures the difference between the predicted probability of the obstacle's movement direction and the true movement direction.
Finally, the total loss of the obstacle is calculated:

L_total = α·L_cls + β·L_loc + L_motion;

wherein α and β are the preset weighting coefficients, L_cls represents the classification loss of the obstacle, L_loc represents the positioning loss of the obstacle, and L_motion represents the motion vector loss of the obstacle;
With respect to the values of α and β: if classifying obstacles correctly is critical for unmanned driving decisions, for example distinguishing between different types of obstacles such as pedestrians, vehicles and traffic signs so that the vehicle responds differently to each, then α can be increased appropriately to strengthen the weight of the classification task. If accurate positioning of obstacles is more critical while the vehicle is driving, for example when the position of an obstacle must be determined precisely to avoid a collision, then β may need to be relatively large. For example, on urban roads vehicles need to frequently avoid pedestrians and other vehicles, so the positioning task is more important and α=0.4, β=0.6 may be set. In some specific situations, such as parking lots, accurately identifying parking space marks and obstacle types may be more important, so α=0.6, β=0.4 may be set.
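Under the weightings just discussed, computing the total loss can be sketched as below; giving the motion term unit weight is an assumption, since only α and β are named as coefficients, and the loss values are illustrative.

```python
def total_loss(l_cls, l_loc, l_motion, alpha, beta):
    """Hypothetical weighted sum of the three loss terms; the motion
    loss is taken with unit weight, which is an assumption."""
    return alpha * l_cls + beta * l_loc + l_motion

# Example loss values (illustrative only)
l_cls, l_loc, l_motion = 0.8, 0.5, 0.2

urban   = total_loss(l_cls, l_loc, l_motion, alpha=0.4, beta=0.6)  # positioning emphasized
parking = total_loss(l_cls, l_loc, l_motion, alpha=0.6, beta=0.4)  # classification emphasized
```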
The PointNet model also involves optimization algorithms: stochastic gradient descent (SGD) and its variants, such as Adagrad, Adadelta, RMSProp, Adam and AdamW, are typically used to update the parameters of the model. These algorithms adaptively adjust the learning rate according to the gradient information of the loss function, accelerating the convergence of the model and avoiding local optima. This part is prior art and is not described in detail here.
And step S300, removing the point cloud of the pseudo obstacle from the three-dimensional point cloud image when the point cloud set belongs to the pseudo obstacle.
As shown in fig. 4, through the operations of the above steps, the obstacle category information of pseudo obstacles, fast-moving obstacles, slow-moving obstacles, static obstacles and the like in the three-dimensional point cloud image can be identified. Fig. 4 mainly shows a static obstacle such as the building 31, a pseudo obstacle such as the raised dust 32, and the ground 33. The point cloud set belonging to the pseudo obstacle, i.e. the raised dust 32, needs to be removed from the three-dimensional point cloud image; this prevents the unmanned vehicle from braking or automatically diverting to avoid danger because an obstacle was misidentified. The three-dimensional point cloud image after the raised dust is removed is shown in fig. 5; the ground 33 is not shown there, as the ground 33 is generally removed after the three-dimensional image is acquired so as not to affect the detection of subsequent obstacles.
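Step S300 can be sketched as a simple cluster-level filter; the array layout, the label encoding and the category strings are illustrative assumptions, not details from the invention.

```python
import numpy as np

def remove_pseudo_obstacles(points, labels, categories):
    """Drop every point whose cluster was classified as a pseudo obstacle.

    points     : (N, 3) xyz coordinates of the three-dimensional point cloud
    labels     : (N,) cluster id assigned to each point
    categories : dict mapping cluster id -> category string
    """
    keep = np.array([categories[lbl] != "pseudo" for lbl in labels])
    return points[keep]

pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0],   # cluster 0: e.g. a building
                [5.0, 5.0, 0.0], [5.0, 6.0, 0.0]])  # cluster 1: e.g. raised dust
lab = np.array([0, 0, 1, 1])
cat = {0: "static", 1: "pseudo"}
filtered = remove_pseudo_obstacles(pts, lab, cat)  # only cluster 0 remains
```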
While various embodiments of the present invention have been shown and described herein, it will be obvious to those skilled in the art that such embodiments are provided by way of example only. Many modifications, changes, and substitutions will now occur to those skilled in the art without departing from the spirit and scope of the invention. It should be understood that various alternatives to the embodiments of the invention described herein may be employed in practicing the invention.

Claims (10)

1. The roadway obstacle detection method based on the three-dimensional point cloud is characterized by comprising the following steps of:
Acquiring a three-dimensional point cloud image, and acquiring a plurality of point cloud sets in the three-dimensional point cloud image;
processing any point cloud set by using the constructed obstacle detection model, and identifying category information of obstacles to which the point cloud set belongs;
removing the point cloud of the pseudo obstacle from the three-dimensional point cloud image when the point cloud set belongs to the pseudo obstacle;
The obstacle detection model comprises a motion feature classification layer, which is used for acquiring the types of the obstacles to which points in the point cloud set belong and outputting the speed and the movement direction of the point cloud set, and which comprises: acquiring the point cloud set; calculating the speed of the points in the point cloud set; when the speed of the points in the point cloud set is above a speed threshold, judging that the point cloud set belongs to a fast-moving obstacle; otherwise, calculating the movement direction of the points in the suspected obstacle; and when the movement direction of a point in the suspected obstacle is inconsistent with the movement direction of the other points in the suspected obstacle, judging that the point cloud set belongs to a pseudo obstacle.
2. The roadway obstacle detection method of claim 1, wherein said calculating the speed of the points in the point cloud set comprises:
acquiring the position information of the points in the point cloud set at the current moment;
predicting the position information of the points in the point cloud set at the next moment through Kalman filtering;
and calculating the speed of the points in the point cloud set based on the position information at the current moment and the position information at the next moment.
3. The roadway obstacle detection method based on the three-dimensional point cloud of claim 1, wherein said calculating the movement direction of the points in the suspected obstacle comprises:
acquiring a velocity vector based on the position information of the points in the suspected obstacle;
calculating a movement direction vector of the point according to the velocity vector;
and calculating the azimuth angle and the pitch angle of the points in the suspected obstacle based on the movement direction vector.
4. The roadway obstacle detection method of claim 1, wherein the obstacle detection model further comprises a loss calculation, comprising:
separately calculating the classification loss L_cls of the obstacle, the positioning loss L_loc of the obstacle, and the motion vector loss L_motion of the obstacle;
calculating the total loss of the obstacle L_total = α·L_cls + β·L_loc + L_motion, wherein α and β are preset weighting coefficients.
5. The roadway obstacle detection method of claim 1, wherein the obstacle detection model further comprises an input layer, comprising:
Acquiring coordinate data of any point in the point cloud set;
constructing a coordinate data matrix according to the coordinate data;
and constructing a point cloud data set of all points in the point cloud set according to the coordinate data matrix.
6. The roadway obstacle detection method of claim 1, wherein the obstacle detection model further comprises a feature extraction layer, comprising:
receiving motion characteristics of the point cloud set output by the motion characteristic classification layer, wherein the motion characteristics comprise speed and motion direction;
And carrying out three-layer convolution on the motion characteristics to obtain a three-layer convolution characteristic diagram.
7. The three-dimensional point cloud based roadway obstacle detection method of claim 6, wherein the obstacle detection model further comprises a dense classification layer comprising:
Receiving a feature map output by the feature extraction layer;
calculating the probability of an obstacle category to which the feature map belongs through an activation function, wherein the obstacle category comprises a fast movement obstacle, a slow movement obstacle and a pseudo obstacle;
Extracting a nonlinear relation between the motion features and the obstacle category by using a multi-layer perceptron MLP, and calculating the motion state probability of the points of the suspected obstacle;
and carrying out weighted fusion of the motion state probability and the probability of the obstacle category to obtain the category information of the obstacle at the points of the suspected obstacle.
8. The roadway barrier detection method based on the three-dimensional point cloud of claim 1, wherein the acquiring the three-dimensional point cloud image comprises:
acquiring a three-dimensional point cloud image;
And denoising the three-dimensional point cloud image.
9. The roadway barrier detection method based on three-dimensional point cloud of claim 1, wherein said acquiring a plurality of point cloud sets in the three-dimensional point cloud image comprises:
and adopting mean shift clustering to the point clouds in the three-dimensional point cloud image to obtain a plurality of point cloud sets in the three-dimensional point cloud image.
10. The roadway obstacle detection method based on the three-dimensional point cloud as claimed in claim 1, wherein when the movement direction of the points in the suspected obstacle is inconsistent with the movement direction of other points in the suspected obstacle, the angle difference between the points and any other point is larger than a preset difference threshold.
CN202510757182.5A 2025-06-09 2025-06-09 A roadway obstacle detection method based on three-dimensional point cloud Active CN120260018B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202510757182.5A CN120260018B (en) 2025-06-09 2025-06-09 A roadway obstacle detection method based on three-dimensional point cloud


Publications (2)

Publication Number Publication Date
CN120260018A true CN120260018A (en) 2025-07-04
CN120260018B CN120260018B (en) 2025-10-28

Family

ID=96178175


Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN120808321A (en) * 2025-09-10 2025-10-17 西安重装铜川煤矿机械有限公司 Intelligent detection method for bending processing of side guard plates of hydraulic support

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020170685A1 (en) * 2000-08-24 2002-11-21 Weik Martin Herman Parking barrier with accident event logging and self-diagnostic control system
CN108037768A (en) * 2017-12-13 2018-05-15 常州工学院 Unmanned plane obstruction-avoiding control system, avoidance obstacle method and unmanned plane
CN112683288A (en) * 2020-11-30 2021-04-20 北方工业大学 Intelligent guide robot system and method for assisting blind in crossing street in intersection environment
JPWO2021084583A1 (en) * 2019-10-28 2021-05-06
CN112799426A (en) * 2020-12-25 2021-05-14 陈南方 Unmanned aerial vehicle navigation control system and method based on big data analysis
US20220257218A1 (en) * 2020-03-10 2022-08-18 Yunnan University Ultrasonic doppler blood flow imaging method and system
CN115273035A (en) * 2022-08-18 2022-11-01 广州文远知行科技有限公司 Method, device and equipment for detecting air floating object and storage medium
CN119556303A (en) * 2025-01-21 2025-03-04 北京飞安航空科技有限公司 Road obstacle perception system for unmanned vehicles based on lidar





Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant