Disclosure of Invention
The present invention is directed to a monocular distance measuring system and method for an underground unmanned vehicle to solve or improve at least one of the above technical problems.
In view of the above, a first aspect of the present invention is to provide a monocular distance measuring method for an underground unmanned vehicle.
A second aspect of the present invention is to provide a monocular distance measuring system of an underground unmanned vehicle.
The invention provides a monocular distance measurement method for an underground unmanned vehicle, which comprises the following steps: S1, acquiring an image of the forward vehicle through a sensor along the vehicle's driving route, and automatically identifying detection frame information and type information of the forward vehicle; S2, judging the vehicle pose of the forward vehicle according to the type information and the detection frame information; S3, solving, according to the vehicle pose, the pixel coordinates in the image of the point of the forward vehicle closest to the sensor; S4, substituting the pixel coordinates into a monocular geometric ranging model to obtain the measured distance between the closest point and the sensor, and then predicting and updating the measured distance with a Kalman filtering algorithm to obtain the vehicle distance; wherein the type information includes vehicle body information, vehicle tail information and hub information, each in one-to-one correspondence with its detection frame information.
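The steps S1 to S4 above can be sketched as a single processing loop. This is a minimal illustration only; every component (detector, pose judgment, closest-point solver, geometric ranging model, filter) is a hypothetical stand-in passed in as a parameter, not the invention's actual modules.

```python
# A minimal sketch of the S1-S4 pipeline; all components are hypothetical
# stand-ins (the lambdas below), not the invention's actual modules.

def measure_distance(frame, detect, judge_pose, closest_point_px,
                     geometric_range, kalman_update):
    dets = detect(frame)                          # S1: detection frames + type info
    if not any(d["label"] == "body" for d in dets):
        return None                               # no forward vehicle in this frame
    pose = judge_pose(dets)                       # S2: pose from detected classes
    px = closest_point_px(dets, pose)             # S3: closest-point pixel coordinate
    d = geometric_range(px)                       # S4a: geometric-model distance
    return kalman_update(d)                       # S4b: filtered vehicle distance

# Illustrative stubs standing in for the real modules:
result = measure_distance(
    frame=None,
    detect=lambda f: [{"label": "body"}, {"label": "tail"}],
    judge_pose=lambda dets: "directly ahead",
    closest_point_px=lambda dets, pose: (320, 480),
    geometric_range=lambda px: 12.0,
    kalman_update=lambda d: d,                    # identity filter for the sketch
)
```

In a real implementation the filter would carry state between frames; here it is collapsed to an identity function so the data flow of the four steps stays visible.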
Aiming at the above problems, the invention provides a monocular distance measurement system and method applied to underground unmanned vehicles. In addition, the invention designs a pose estimation module to judge the pose of the forward vehicle and to acquire the pixel coordinate corresponding to its closest point. Finally, a two-stage accurate distance measurement method is designed: the vehicle distance is first solved based on a monocular geometric ranging model, and the distance is then predicted and updated with Kalman filtering, ensuring the robustness and stability of the distance measurement.
The invention senses the vehicle body, vehicle tail and wheel hubs in real time and provides a multi-target information fusion method based on the detection results, providing a precondition for accurate distance measurement; specifically, the real-time sensing of the body, tail and hubs is realized by a convolutional-neural-network target detection algorithm;
the invention obtains the relative pose information of the forward vehicle and calculates the pixel coordinate of the closest point on the image by means of that pose information, improving the distance measurement precision compared with traditional methods; specifically, the pose of the forward vehicle is obtained by a commercially available vehicle pose estimation module;
the invention provides two-stage accurate distance measurement, combining the traditional geometric ranging model with a Kalman filtering algorithm, further improving the accuracy of the final ranging result and improving the stability and robustness of the distance measurement of underground unmanned vehicles.
In addition, the technical solution provided by the embodiment of the present invention may further have the following additional technical features:
In any of the above technical solutions, the forward vehicle image is obtained by the sensor capturing a real-time video stream frame by frame; the sensor is arranged at the windshield of the vehicle head, with its image-acquisition surface facing the same direction as the driving direction of the vehicle; the sensor is an infrared camera.
In this technical solution, the sensor continuously captures the forward vehicle to obtain real-time images for the subsequent algorithm's measurement and calculation. The images are obtained frame by frame, so the most accurate and timely image can be taken from the continuously shot video stream, allowing the measured distance to match the current actual distance as closely as possible;
the sensor is arranged at the windshield of the vehicle head, specifically inside the vehicle, so that the windshield provides a degree of dust protection and interference resistance and the sensor can work continuously and effectively; the image-acquisition surface faces the same direction as the driving direction, so the forward vehicle in the driving direction can be captured most directly;
adopting an infrared camera suits the dim underground working environment and ensures the stability of the whole operation.
In any of the above technical solutions, when vehicle body information is detected ahead, it is determined that there is a forward vehicle in front, and the step of judging the vehicle pose of the forward vehicle specifically includes: when only the vehicle body information is detected and neither the vehicle tail information nor the hub information is detected, judging that the forward vehicle is located directly ahead of the vehicle; when the vehicle body information and the vehicle tail information are detected and no hub information is detected, judging that the forward vehicle is located directly ahead of the vehicle; when the vehicle body information, the vehicle tail information and one piece of hub information are detected at the same time, judging that the forward vehicle is located in the lateral front of the vehicle with a small yaw angle; when the vehicle body information, the vehicle tail information and two pieces of hub information are detected at the same time, judging that the forward vehicle is located in the lateral front of the vehicle.
In this technical solution, when the vehicle detects a forward vehicle while traveling, the pose of the forward vehicle needs to be judged so that the accuracy of the data can be ensured in the subsequent distance measurement and calculation;
when only the vehicle body information is detected, it is judged that the detected vehicle is directly ahead; since the tail and the hubs are not detected, the closest point of the forward vehicle is taken in this case as the pixel coordinate of the bottom-edge midpoint of the vehicle body detection frame;
when the vehicle body and the vehicle tail are detected but no hub is detected, the hubs of the forward vehicle are occluded by its body in the forward-looking direction, so the detected forward vehicle is judged to be directly ahead of the vehicle;
when the vehicle body information, the vehicle tail information and one piece of hub information are detected at the same time, the forward vehicle has a yaw angle relative to the ego vehicle, so that the sensor detects only one hub; the forward vehicle is then in the lateral front of the vehicle, but the yaw angle is small;
when the vehicle body information, the vehicle tail information and two pieces of hub information are detected at the same time, the heading of the forward vehicle differs from that of the ego vehicle, so that two hubs on one side are detected, indicating that the forward vehicle is located in the lateral front of the vehicle.
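The four detection-combination rules above can be encoded as a small lookup. This is an illustration only; the class labels ("body", "tail", "hub") are assumptions about the detector's output, not names taken from the patent.

```python
# Illustrative encoding of the four detection-combination rules; the labels
# ("body", "tail", "hub") are assumed detector outputs, not from the patent.

def judge_pose(labels):
    """Map the list of detected classes to the forward vehicle's pose."""
    if "body" not in labels:
        return None                          # no forward vehicle detected
    hubs = labels.count("hub")
    if hubs == 0:
        return "directly ahead"              # body only, or body + tail (hubs occluded)
    if hubs == 1:
        return "lateral front, small yaw"    # a single hub becomes visible
    return "lateral front"                   # two same-side hubs visible
```

The rule that more visible hubs imply a larger yaw angle is exactly the reasoning of the preceding paragraphs: an occluded hub means the forward vehicle's tail plane faces the sensor squarely.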
In any of the above technical solutions, when it is determined that the forward vehicle is located in the lateral front of the vehicle, the pixel coordinates p_1(x_1, y_1) and p_2(x_2, y_2) of the bottom-edge midpoints of the detection frames of the two wheel hubs are acquired; when (x_1 - x_2)(y_1 - y_2) < 0, it is determined that the forward vehicle is in the left front; when (x_1 - x_2)(y_1 - y_2) > 0, it is determined that the forward vehicle is in the right front.
In this technical solution, when the forward vehicle is located in the lateral front of the vehicle, its left-right deviation needs to be judged. Comparing the bottom-edge midpoint pixel coordinates of the two hub detection frames reveals this deviation: when (x_1 - x_2)(y_1 - y_2) < 0, the forward vehicle is in the left front; when (x_1 - x_2)(y_1 - y_2) > 0, it is in the right front.
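The sign test can be written directly from the two hub midpoints. A small sketch (pixel y grows downward in the image):

```python
# Sign test on the bottom-edge midpoint pixels p1, p2 of the two hub
# detection frames, as described in the text.

def left_or_right(p1, p2):
    (x1, y1), (x2, y2) = p1, p2
    s = (x1 - x2) * (y1 - y2)
    if s < 0:
        return "left front"
    if s > 0:
        return "right front"
    return "indeterminate"   # degenerate alignment, not covered by the text
```

For example, hubs at (100, 400) and (180, 380) give a negative product, so the forward vehicle is judged to be in the left front.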
In any of the above technical solutions, let the pixel coordinate of the closest point be A(x_a, y_a). S3 specifically includes: when the forward vehicle is located directly ahead of the vehicle, acquiring the pixel coordinate of the bottom-edge midpoint of the detection frame of the vehicle body information as the pixel coordinate of the closest point A; when the forward vehicle is located in the lateral front of the vehicle, acquiring the lower-right-corner or lower-left-corner pixel coordinate Q(x_q, y_q) of the detection frame of the vehicle tail information. The straight line through p_1 and p_2 is
y = y_1 + (y_2 - y_1) / (x_2 - x_1) * (x - x_1),
and the coordinates of the closest point A are found as
A(x_a, y_a) = (x_q, y_1 + (y_2 - y_1) / (x_2 - x_1) * (x_q - x_1)).
In this technical solution, when the forward vehicle is directly ahead of the vehicle, the pixel coordinate of the bottom-edge midpoint of the detection frame corresponding to the vehicle body information is directly taken as the closest point; when the forward vehicle is in the left front of the vehicle, the lower-right-corner pixel coordinate of the detection frame corresponding to the vehicle tail information is acquired, and when the forward vehicle is in the right front, the lower-left-corner pixel coordinate is acquired; the closest-point coordinate in the left-front or right-front case is then obtained by calculation according to the above formula;
according to different postures of the forward vehicle relative to the vehicle, different selection means are adopted for the closest point, so that the final point location information is more accurate and adaptive.
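One plausible reading of the lateral-front construction described above is that the tail-frame corner Q is dropped vertically onto the ground line through the two hub bottom midpoints p_1, p_2. A sketch under that assumption:

```python
# Assumed reconstruction of the lateral-front closest point: project the
# tail-frame corner Q vertically onto the line through the two hub
# bottom midpoints p1, p2.

def closest_point_lateral(p1, p2, q):
    (x1, y1), (x2, y2), (xq, _) = p1, p2, q
    # line p1-p2: y = y1 + (y2 - y1) / (x2 - x1) * (x - x1), evaluated at x = xq
    ya = y1 + (y2 - y1) / (x2 - x1) * (xq - x1)
    return (xq, ya)
```

For hub midpoints (100, 400) and (180, 380) with a tail corner at column 150, the closest point lands at (150, 387.5), on the wheel ground line at the tail corner's column.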
In any of the above technical solutions, in S4, substituting the pixel coordinates of the closest point A(x_a, y_a) into the monocular geometric ranging model yields the following formula:
d = h / tan(γ + arctan((y_a - v_0) / f_v))
where h is the height of the sensor above the ground; γ is the depression angle of the sensor; y_a is the pixel ordinate of the closest point of the forward vehicle in the image; v_0 and f_v are camera intrinsic parameters: v_0 is the pixel ordinate of the principal point, and f_v is the focal length along the y-axis expressed in pixels, which can be obtained by calibration with Zhang Zhengyou's calibration method; d is the distance sought.
In this technical solution, applying the monocular geometric ranging model known in the prior art yields an expression for the measured distance d, and the specific value of d is solved from that expression.
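The ground-plane ranging formula above is short enough to evaluate directly. In the sketch below the symbols follow the definitions in the text, while all numeric values (camera height, depression angle, intrinsics) are illustrative stand-ins, not calibrated data.

```python
import math

# Ground-plane monocular ranging model with the symbols defined in the text;
# the parameter values are illustrative, not calibrated.

def geometric_distance(y_a, h, gamma, v0, f_v):
    """Distance d from the sensor to a ground point imaged at pixel row y_a."""
    return h / math.tan(gamma + math.atan((y_a - v0) / f_v))

d = geometric_distance(y_a=600.0, h=1.5, gamma=0.05, v0=540.0, f_v=1000.0)
```

As expected from the geometry, a point imaged lower in the frame (larger y_a) yields a smaller distance.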
In any one of the above technical solutions, the kalman filtering algorithm includes: the system comprises a prediction part model and an updating part model, wherein the output result of the prediction part model is used as the input condition of the updating part model.
In this technical solution, the solved measurement value d is further refined through the prediction part model and the update part model respectively, improving the accuracy of the final output value.
In any of the above technical solutions, the prediction part model includes the following formulas:
x_t^- = F x_{t-1} + B u_{t-1};  (1)
P_t^- = F P_{t-1} F^T + Q;  (2)
where x_t^- is the state variable predicted at time t, u_{t-1} is the control variable at time t-1, F is the state matrix, B is the control matrix, P_t^- is the error covariance predicted at time t, P_{t-1} is the error covariance updated at time t-1, x_{t-1} is the state variable updated at time t-1, and Q is the process noise covariance.
In this technical solution, formula (1) is the prior estimation formula of the state variable: with the updated state variable x_{t-1} at time t-1, the control matrix B and the control variable u_{t-1} at time t-1 known, the prior estimate x_t^- at time t can be found. Formula (2) is the prior error covariance estimation formula: with the error covariance P_{t-1} at time t-1, the state matrix F and the process noise Q known, the prior error covariance P_t^- at time t can be obtained.
In any of the above technical solutions, the update part model includes the following formulas:
K_t = P_t^- H^T (H P_t^- H^T + R)^{-1};  (3)
x_t = x_t^- + K_t (z_t - H x_t^-);  (4)
P_t = (I - K_t H) P_t^-;  (5)
where P_t is the updated error covariance, K_t is the gain coefficient, H is the measurement (gain) matrix, z_t is the observed variable, I is the identity matrix, H^T is the transpose of H, and R is the measurement noise covariance.
In this technical solution, formula (3) is the Kalman gain coefficient calculation formula, formula (4) is the posterior estimation formula of the state variable, and formula (5) is the posterior estimation formula of the error covariance. Once the gain coefficient K_t at time t is determined, the observed distance z_t obtained at time t by the traditional geometric ranging model and the predicted state variable x_t^- give the updated state variable x_t at time t, i.e. the closest-point distance finally output.
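Equations (1) through (5) can be made concrete by collapsing the filter to a single state (the distance alone, so F = H = I = 1 and no control input). This is a sketch of the mechanism only, not the patent's two-state filter; all numeric values are illustrative.

```python
# Equations (1)-(5) collapsed to one state (distance only, F = H = I = 1,
# no control input), so one predict/update cycle is visible. A sketch only.

def kalman_step(x_prev, p_prev, z, q, r):
    x_pred = x_prev                 # (1) prior state estimate
    p_pred = p_prev + q             # (2) prior error covariance
    k = p_pred / (p_pred + r)       # (3) Kalman gain
    x = x_pred + k * (z - x_pred)   # (4) posterior state estimate
    p = (1 - k) * p_pred            # (5) posterior error covariance
    return x, p

x, p = 10.0, 1.0
for z in (10.4, 10.8, 11.1):        # successive geometric-model measurements
    x, p = kalman_step(x, p, z, q=0.01, r=1.0)
```

The filtered distance tracks the rising measurements while the error covariance shrinks with every update, which is the stability claimed for the two-stage method.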
A second aspect of the present invention provides a monocular distance measuring system for an underground unmanned vehicle, comprising a memory and a processor, wherein the memory stores a computer program, and the processor implements the steps of the method of any one of the above technical solutions when executing the program.
In this technical solution, since the included processor can implement the steps of the method of any one of the above technical solutions, the monocular distance measuring system for an underground unmanned vehicle provided by the second aspect of the invention has all of the corresponding technical effects, which are not repeated here.
The existing monocular distance measurement method does not consider errors introduced by vehicle pose relation change, and has poor robustness in an underground unstructured scene. Aiming at the problems, the invention provides a real-time and accurate underground unmanned vehicle monocular distance measurement method, which has the following advantages compared with other methods:
the invention realizes real-time perception of the vehicle body, the vehicle tail and the wheel hub based on the target detection algorithm of the convolutional neural network, provides a multi-target information fusion method based on the detection result, and provides a precondition for accurate distance measurement;
according to the invention, the vehicle pose estimation module is designed to obtain the relative pose information of the front vehicle, and the pixel coordinate of the closest point on the image is calculated through the pose information, so that the distance measurement precision is improved compared with the traditional method;
the invention provides a two-stage accurate distance measurement method, which combines a traditional geometric distance measurement model and a Kalman filtering algorithm to improve the stability and robustness of underground unmanned vehicle distance measurement.
Additional aspects and advantages of embodiments in accordance with the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of embodiments in accordance with the invention.
Detailed Description
So that the manner in which the above recited objects, features and advantages of the present invention can be understood in detail, a more particular description of the invention, briefly summarized above, may be had by reference to the embodiments thereof which are illustrated in the appended drawings. It should be noted that the embodiments and features of the embodiments of the present application may be combined with each other without conflict.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, however, the present invention may be practiced in other ways than those specifically described herein, and therefore the scope of the present invention is not limited by the specific embodiments disclosed below.
Referring to fig. 1 to 8, a first aspect of the present invention provides a monocular distance measuring method for an underground unmanned vehicle, including the following steps: S1, acquiring an image of the forward vehicle through a sensor along the vehicle's driving route, and automatically identifying detection frame information and type information of the forward vehicle; S2, judging the vehicle pose of the forward vehicle according to the type information and the detection frame information; S3, solving, according to the vehicle pose, the pixel coordinates in the image of the point of the forward vehicle closest to the sensor; S4, substituting the pixel coordinates into a monocular geometric ranging model to obtain the measured distance between the closest point and the sensor, and predicting and updating the measured distance with a Kalman filtering algorithm to obtain the vehicle distance; wherein the type information includes vehicle body information, vehicle tail information and hub information, each in one-to-one correspondence with its detection frame information.
The invention provides a monocular distance measurement system and method for an underground unmanned vehicle, which senses the vehicle body, vehicle tail and wheel hubs in real time and provides a multi-target information fusion method based on the detection results, providing a precondition for accurate distance measurement; specifically, the real-time sensing of the body, tail and hubs is realized by a convolutional-neural-network target detection algorithm;
according to this design, the relative pose information of the forward vehicle is obtained, and the pixel coordinate of the closest point on the image is calculated by means of that pose information, improving the ranging precision compared with traditional methods; specifically, the pose of the forward vehicle is obtained by a commercially available vehicle pose estimation module;
the invention provides two-stage accurate distance measurement, combining the traditional geometric ranging model with a Kalman filtering algorithm, further improving the accuracy of the final ranging result and improving the stability and robustness of the distance measurement of underground unmanned vehicles.
Due to the particularity of the underground scene, general monocular distance measurement methods have poor robustness and low precision underground. The invention aims to provide a monocular distance measurement method applied to underground unmanned driving scenarios. In particular, considering the special working conditions of numerous underground unstructured roads and uneven ground, the urgent requirements for safety assurance and cost reduction, and the instability and limitations of the traditional monocular ranging method, the method designs a multi-information fusion module, a pose estimation module and a two-stage accurate ranging module based on a convolutional neural network, solves the problem of accurately perceiving the distance from an underground unmanned vehicle to the forward vehicle during driving, and enhances the safety and reliability of underground unmanned transportation.
In any of the above embodiments, as shown in fig. 1-8, the forward vehicle images are obtained by the sensor capturing a real-time video stream frame by frame; the sensor is arranged at the windshield of the vehicle head, with its image-acquisition surface facing the same direction as the driving direction of the vehicle; the sensor is an infrared camera.
In this embodiment, the sensor continuously captures the forward vehicle to obtain real-time images for the subsequent algorithm's measurement and calculation. The images are obtained frame by frame, so the most accurate and timely image can be taken from the continuously shot video stream, allowing the measured distance to match the current actual distance as closely as possible;
the sensor is arranged at the windshield of the vehicle head, specifically inside the vehicle, so that the windshield provides a degree of dust protection and interference resistance and the sensor can work continuously and effectively; the image-acquisition surface faces the same direction as the driving direction, so the forward vehicle in the driving direction can be captured most directly;
adopting an infrared camera suits the dim underground working environment and ensures the stability of the whole operation.
In any of the embodiments described above, as shown in fig. 1 to 8, when vehicle body information is detected ahead, it is determined that there is a forward vehicle in front, and the step of judging the vehicle pose of the forward vehicle specifically includes: when only the vehicle body information is detected and neither the vehicle tail information nor the hub information is detected, judging that the forward vehicle is located directly ahead of the vehicle; when the vehicle body information and the vehicle tail information are detected and no hub information is detected, judging that the forward vehicle is located directly ahead of the vehicle; when the vehicle body information, the vehicle tail information and one piece of hub information are detected at the same time, judging that the forward vehicle is located in the lateral front of the vehicle with a small yaw angle; when the vehicle body information, the vehicle tail information and two pieces of hub information are detected at the same time, judging that the forward vehicle is located in the lateral front of the vehicle.
In this embodiment, when the vehicle detects a forward vehicle while traveling, the pose of the forward vehicle needs to be judged so that the accuracy of the data can be ensured in the subsequent distance measurement and calculation;
when only the vehicle body information is detected, it is judged that the detected vehicle is directly ahead; since the tail and the hubs are not detected, the closest point of the forward vehicle is taken in this case as the pixel coordinate of the bottom-edge midpoint of the vehicle body detection frame;
when the vehicle body and the vehicle tail are detected but no hub is detected, the hubs of the forward vehicle are occluded by its body in the forward-looking direction, so the detected forward vehicle is judged to be directly ahead of the vehicle;
when the vehicle body information, the vehicle tail information and one piece of hub information are detected at the same time, the forward vehicle has a yaw angle relative to the ego vehicle, so that the sensor detects only one hub; the forward vehicle is then in the lateral front of the vehicle, but the yaw angle is small;
when the vehicle body information, the vehicle tail information and two pieces of hub information are detected at the same time, the heading of the forward vehicle differs from that of the ego vehicle, so that two hubs on one side are detected, indicating that the forward vehicle is located in the lateral front of the vehicle.
In any of the above embodiments, as shown in fig. 1 to 8, when it is judged that the forward vehicle is located in the lateral front of the vehicle, the pixel coordinates p_1(x_1, y_1) and p_2(x_2, y_2) of the bottom-edge midpoints of the detection frames of the two wheel hubs are acquired; when (x_1 - x_2)(y_1 - y_2) < 0, it is determined that the forward vehicle is in the left front; when (x_1 - x_2)(y_1 - y_2) > 0, it is determined that the forward vehicle is in the right front.
In this embodiment, when the forward vehicle is located in the lateral front of the vehicle, its left-right deviation needs to be judged. Comparing the bottom-edge midpoint pixel coordinates of the two hub detection frames reveals this deviation: when (x_1 - x_2)(y_1 - y_2) < 0, the forward vehicle is in the left front; when (x_1 - x_2)(y_1 - y_2) > 0, it is in the right front.
In any of the above embodiments, as shown in fig. 1-8, let the pixel coordinate of the closest point be A(x_a, y_a). S3 specifically comprises: when the forward vehicle is located directly ahead of the vehicle, acquiring the pixel coordinate of the bottom-edge midpoint of the detection frame of the vehicle body information as the pixel coordinate of the closest point A; when the forward vehicle is located in the lateral front of the vehicle, acquiring the lower-right-corner or lower-left-corner pixel coordinate Q(x_q, y_q) of the detection frame of the vehicle tail information. The straight line through p_1 and p_2 is
y = y_1 + (y_2 - y_1) / (x_2 - x_1) * (x - x_1),
and the coordinates of the closest point A are found as
A(x_a, y_a) = (x_q, y_1 + (y_2 - y_1) / (x_2 - x_1) * (x_q - x_1)).
In this embodiment, when the forward vehicle is directly ahead of the vehicle, the pixel coordinate of the bottom-edge midpoint of the detection frame corresponding to the vehicle body information is directly taken as the closest point; when the forward vehicle is in the left front of the vehicle, the lower-right-corner pixel coordinate of the detection frame corresponding to the vehicle tail information is acquired, and when the forward vehicle is in the right front, the lower-left-corner pixel coordinate is acquired; the closest-point coordinate in the left-front or right-front case is then obtained by calculation according to the above formula;
according to different postures of the forward vehicle relative to the vehicle, different selection means are adopted for the closest point, so that the final point location information is more accurate and adaptive.
In any of the above embodiments, as shown in FIGS. 1-8, in S4, substituting the pixel coordinates of the closest point A(x_a, y_a) into the monocular geometric ranging model yields the following formula:
d = h / tan(γ + arctan((y_a - v_0) / f_v))
where h is the height of the sensor above the ground; γ is the depression angle of the sensor; y_a is the pixel ordinate of the closest point of the forward vehicle in the image; v_0 and f_v are camera intrinsic parameters: v_0 is the pixel ordinate of the principal point, and f_v is the focal length along the y-axis expressed in pixels, which can be obtained by calibration with Zhang Zhengyou's calibration method; d is the distance sought.
In this embodiment, applying the monocular geometric ranging model known in the prior art yields an expression for the measured distance d, and the specific value of d is solved from that expression.
In any of the above embodiments, as shown in fig. 1 to 8, the kalman filtering algorithm includes: the system comprises a prediction part model and an updating part model, wherein the output result of the prediction part model is used as the input condition of the updating part model.
In this embodiment, the solved measurement value d is further refined through the prediction part model and the update part model respectively, improving the accuracy of the final output value.
In any of the above embodiments, as shown in fig. 1-8, the prediction part model includes the following formulas:
x_t^- = F x_{t-1} + B u_{t-1};  (1)
P_t^- = F P_{t-1} F^T + Q;  (2)
where x_t^- is the state variable predicted at time t, u_{t-1} is the control variable at time t-1, F is the state matrix, B is the control matrix, P_t^- is the error covariance predicted at time t, P_{t-1} is the error covariance updated at time t-1, x_{t-1} is the state variable updated at time t-1, and Q is the process noise covariance.
In this embodiment, formula (1) is the prior estimation formula of the state variable: with the updated state variable x_{t-1} at time t-1, the control matrix B and the control variable u_{t-1} at time t-1 known, the prior estimate x_t^- at time t can be found. Formula (2) is the prior error covariance estimation formula: with the error covariance P_{t-1} at time t-1, the state matrix F and the process noise Q known, the prior error covariance P_t^- at time t can be obtained.
In any of the above embodiments, as shown in fig. 1-8, the update part model includes the following formulas:
K_t = P_t^- H^T (H P_t^- H^T + R)^{-1};  (3)
x_t = x_t^- + K_t (z_t - H x_t^-);  (4)
P_t = (I - K_t H) P_t^-;  (5)
where P_t is the updated error covariance, K_t is the gain coefficient, H is the measurement (gain) matrix, z_t is the observed variable, I is the identity matrix, H^T is the transpose of H, and R is the measurement noise covariance.
In this embodiment, formula (3) is the Kalman gain coefficient calculation formula, formula (4) is the posterior estimation formula of the state variable, and formula (5) is the posterior estimation formula of the error covariance. Once the gain coefficient K_t at time t is determined, the observed distance z_t obtained at time t by the traditional geometric ranging model and the predicted state variable x_t^- give the updated state variable x_t at time t, i.e. the closest-point distance finally output.
In any of the above embodiments, the prediction part model and the update part model are derived by the following steps:
(1) Parameter selection. When t = 0, the default initial distance is the nearest detection distance of the camera (the dead-zone distance), the sampling period is dt = 0.33, the distance measurement noise is R = 1, and the initial velocity, the prediction covariance matrix and the system process covariance noise Q take their default initial values. After the distance information d is obtained, the state variable of the target is therefore selected as x_t = [distance_t, velocity_t]^T, the observed quantity z_t of the target is the distance d obtained by the first-stage measurement, and the sampling period is the time dt for the camera to acquire one frame of picture.
(2) Predicting the current distance. The state variable prediction equation is
x̂_t^- = F x̂_{t-1} + B u_{t-1}
Since the vehicle-mounted camera itself and the forward obstacle are both moving, with different speeds and accelerations, the distance measurement of the obstacle is actually the relative distance between the two; idealizing the relative motion as uniformly accelerated motion gives
distance_t = distance_{t-1} + velocity_{t-1}·dt + a·dt²/2
velocity_t = velocity_{t-1} + a·dt
where velocity is the relative velocity and a is the relative acceleration. The state vector is therefore x_t = [distance_t, velocity_t]^T, the state matrix is F = [1 dt; 0 1], the control matrix is B = [dt²/2, dt]^T, and the control variable is u_t = [a].
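The prediction step under this constant-acceleration motion model can be sketched as follows (the state, acceleration and numerical values are illustrative only):

```python
import numpy as np

dt = 0.33                                  # sampling period from the parameter selection
F = np.array([[1.0, dt],                   # row 1: distance_t = distance_{t-1} + velocity_{t-1}*dt
              [0.0, 1.0]])                 # row 2: velocity_t = velocity_{t-1}
B = np.array([[dt**2 / 2.0],               # control matrix applied to the relative acceleration a
              [dt]])
x_prev = np.array([[10.0], [2.0]])         # illustrative state: 10 m, 2 m/s
u = np.array([[0.5]])                      # illustrative relative acceleration a
x_pred = F @ x_prev + B @ u                # a priori state estimate x̂_t^-
```

The matrix product reproduces the two motion equations term by term: the first row adds velocity·dt and a·dt²/2 to the distance, the second adds a·dt to the velocity.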
(3) Predicting the covariance matrix. The error covariance equation is
P_t^- = F P_{t-1} F^T + Q
The system process covariance noise Q is estimated from prior knowledge of the system parameters; since the distance noise and the velocity noise are independent, Q is a diagonal matrix whose entries are the variances of the distance noise and the velocity noise, constants that may be determined empirically or by calculation. With the prediction covariance matrix P_{t-1} of the previous time step known, substituting P_{t-1}, F and Q into the prediction covariance formula and rearranging yields P_t^-.
(4) Establishing the measurement equation. The system measurement equation is
z_t = H x_t + V
Since the observed output of the system is the obstacle distance information, H = [1 0].
(5) Calculating the Kalman gain. The Kalman gain coefficient equation is
K_t = P_t^- H^T (H P_t^- H^T + R)^(-1)
Substituting H and P_t^- and rearranging yields K_t, where the distance measurement noise R is a constant.
(6) Calculating the current optimal estimate. Based on the optimal estimation equation
x̂_t = x̂_t^- + K_t (z_t - H x̂_t^-)
substituting the parameters and rearranging yields the updated state x̂_t; its distance component is the closest-point distance finally obtained, namely the distance to the front vehicle.
(7) Updating the covariance matrix. According to the error covariance calculation formula
P_t = (I - K_t H) P_t^-
substituting the parameters yields the updated covariance matrix P_t used in the next iteration.
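Steps (1)-(7) above can be combined into a single filtering loop. The sketch below is illustrative: the process noise variances q_d and q_v, the relative acceleration a, and the zero initial velocity are assumed values, since the text leaves them to default or empirical settings.

```python
import numpy as np

def filter_distances(measurements, dt=0.33, R=1.0, a=0.0, q_d=0.1, q_v=0.1):
    """Run the two-part Kalman model over first-stage distance measurements.

    q_d, q_v: assumed (illustrative) variances of the independent distance
    and velocity noise; a: assumed relative acceleration.
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])
    B = np.array([[dt**2 / 2.0], [dt]])
    H = np.array([[1.0, 0.0]])
    Q = np.diag([q_d, q_v])
    x = np.array([[measurements[0]], [0.0]])  # step (1): initial distance, assumed zero velocity
    P = np.eye(2)
    out = []
    for z in measurements:
        # steps (2)-(3): a priori state and covariance
        x = F @ x + B @ np.array([[a]])
        P = F @ P @ F.T + Q
        # steps (5)-(7): gain, posterior state, posterior covariance
        S = H @ P @ H.T + np.array([[R]])
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([[z]]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        out.append(float(x[0, 0]))
    return out

d = filter_distances([10.0, 10.0, 10.0])
```

With a constant measurement and zero relative acceleration the innovation is zero at every step, so the filtered distance stays at the measured value, as expected for a stationary target.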
A second aspect of the present invention provides a monocular distance measuring system for an underground unmanned vehicle, including a memory and a processor, wherein the memory stores a computer program and the processor implements the steps of the method according to any of the above embodiments when executing the program.
In this embodiment, since the included processor can implement the steps of the method according to any one of the above embodiments, the monocular distance measuring system provided by the second aspect of the present invention has all the technical effects of that method, which are not repeated herein.
Example 1
As shown in figures 1, 6, 7 and 8, an infrared camera is installed at the windshield position of the trackless rubber-tyred vehicle, with its viewing angle facing the driving direction of the vehicle, so that forward environment data can be acquired.
The method comprises the following steps:
Step one: Target detection by a convolutional neural network
Data acquisition is performed with the infrared camera installed as described above; the collected data are a real-time video stream. Target detection is performed on each frame of the video by a convolutional neural network; the detection targets are the body, the tail and the hubs of the front rubber-tyred vehicle (only the vehicle tail, not the head, can be detected, because an underground coal-mine roadway allows only a single vehicle to pass), and the detection results are the type information and detection frame information of each detected target.
Step two: multi-target information fusion
According to the position of the detected forward vehicle, three scenes are distinguished: the vehicle directly ahead, the vehicle in the left front, and the vehicle in the right front. The three scenes are described with reference to figs. 3, 4 and 5. The fusion judgment steps are as follows:
the detection results are input into the multi-information fusion decision process;
(1) Judge whether a vehicle body is detected; if not, it is determined that there is no vehicle ahead; if so, continue judging;
(2) Judge whether a vehicle tail is detected; if not, it is determined that the vehicle is directly ahead; if so, continue judging;
(3) Judge whether wheels are detected; if not, it is determined that the vehicle is directly ahead; if so, count the number of detected wheels;
(4) If one wheel is detected, it is determined that the vehicle is directly ahead; if two wheels are detected, it is determined that the vehicle is laterally in front;
(5) When the vehicle is laterally in front, the relative position of the vehicle is first judged from the two hub points p_1(x_1, y_1) and p_2(x_2, y_2):
when (x_1 - x_2)(y_1 - y_2) < 0, it is determined that the vehicle is in the left front;
when (x_1 - x_2)(y_1 - y_2) > 0, it is determined that the vehicle is in the right front.
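The sign test of step (5) can be sketched as a small function; the function name and the behaviour at exactly zero (which the description does not specify) are illustrative:

```python
def lateral_side(p1, p2):
    """Judge left front vs right front from the two hub points' pixel coordinates."""
    (x1, y1), (x2, y2) = p1, p2
    s = (x1 - x2) * (y1 - y2)
    if s < 0:
        return "left front"
    elif s > 0:
        return "right front"
    return "undetermined"  # s == 0 is not covered by the description

# Illustrative hub points: left hub lower in the image than the right hub
side = lateral_side((100, 400), (300, 350))
```

For the example points, (100 - 300)(400 - 350) is negative, so the vehicle is judged to be in the left front.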
Step three: pose estimation and acquisition closest point
The nearest point pixel coordinate solving method is different when the vehicle is detected to be in the lateral front and the right front. Let the nearest point pixel coordinate be A (x) a ,y a ). When the vehicle is positioned right ahead, the pixel coordinate of the middle point position of the bottom end of the vehicle body detection frame is obtained as the pixel coordinate of the closest point A. When the vehicle is positioned in the lateral front, taking the vehicle in the left front as an example, the pixel coordinate Q (x) of the lower right corner of the vehicle tail detection frame at the moment is obtained q ,y q ). At this time p 1 、p 2 In a straight line of
The coordinates of the closest point A can be obtained as
Step four: monocular geometric distance measurement model calculation distance
In the first stage, the pixel coordinate of the nearest point A is substituted into a monocular geometric distance measurement model, and the distance d between the nearest point and the camera in the real world is obtained
h is the height of the camera from the ground, gamma is the depression angle of the camera, y a Is the pixel coordinate of the closest point in the image, v 0 、f v The internal parameters of the camera can be obtained by calibration, and d is the required distance.
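The text does not reproduce the formula of the geometric model itself. The sketch below assumes the standard flat-ground pinhole relation d = h / tan(γ + arctan((y_a - v_0)/f_v)), which is consistent with the listed parameters but is an assumption here:

```python
import math

def ground_distance(y_a, h, gamma, v0, f_v):
    """Distance along the ground to the point imaged at row y_a.

    Assumed flat-ground pinhole model: the ray through row y_a makes an
    angle of gamma + arctan((y_a - v0) / f_v) below the horizontal.
    """
    angle = gamma + math.atan((y_a - v0) / f_v)
    return h / math.tan(angle)

# With the camera 1 m above ground, zero depression angle, and the closest
# point 100 px below the principal point at f_v = 100 px, the ray is at 45
# degrees, so the ground distance equals the camera height.
d = ground_distance(y_a=600.0, h=1.0, gamma=0.0, v0=500.0, f_v=100.0)
```
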
Step five: monocular geometric distance measurement model calculation distance
And performing Kalman prediction and updating in the second stage. The Kalman filtering model includes two parts, prediction and update. Wherein the prediction part model is
P t - =FP t-1 F T +Q
Update part of the model as
K t =P t - H T (HP t - H T +R) -1
P t =(I-K t H)P t -
Step six: outputting the vehicle distance, i.e. the closest-point distance given by the updated state variable x̂_t.
In the description of the present invention, it is to be understood that the terms "longitudinal", "lateral", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", and the like, indicate orientations or positional relationships based on those shown in the drawings, are merely for convenience of description of the present invention, and do not indicate or imply that the referenced devices or elements must have a particular orientation, be constructed and operated in a particular orientation, and thus, are not to be construed as limiting the present invention.
The above-described embodiments are merely illustrative of the preferred embodiments of the present invention, and do not limit the scope of the present invention, and various modifications and improvements of the technical solutions of the present invention can be made by those skilled in the art without departing from the spirit of the present invention, and the technical solutions of the present invention are within the scope of the present invention defined by the claims.