
CN115326009A - Monocular distance measurement system and method for underground unmanned vehicle - Google Patents


Info

Publication number
CN115326009A
CN115326009A
Authority
CN
China
Prior art keywords
vehicle
information
distance
monocular
sensor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210810678.0A
Other languages
Chinese (zh)
Inventor
黄丽莎
于淼
郭旭东
韩蕾
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Tage Idriver Technology Co Ltd
Original Assignee
Beijing Tage Idriver Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Tage Idriver Technology Co Ltd filed Critical Beijing Tage Idriver Technology Co Ltd
Priority to CN202210810678.0A
Publication of CN115326009A
Legal status: Pending

Classifications

    • G — PHYSICS
    • G01 — MEASURING; TESTING
    • G01C — MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C3/00 — Measuring distances in line of sight; Optical rangefinders
    • G — PHYSICS
    • G01 — MEASURING; TESTING
    • G01C — MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 — Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 — Instruments for performing navigational calculations

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Electromagnetism (AREA)
  • Automation & Control Theory (AREA)
  • Navigation (AREA)

Abstract

The invention discloses a monocular distance measurement system and method for an underground unmanned vehicle, comprising the following steps: along the vehicle's driving route, a forward-vehicle image is collected by a sensor, and detection frame information and the type information of the forward vehicle are automatically identified; the vehicle pose of the forward vehicle is determined from the type information and the detection frame information; the pixel coordinates, in the forward-vehicle image, of the point on the forward vehicle closest to the sensor are obtained according to the vehicle pose; the measured distance between the closest point and the sensor is obtained, and the measured distance is then predicted and updated to obtain the vehicle distance. Through real-time perception of the vehicle body, vehicle tail and wheel hubs, a multi-target information fusion method based on the detection results is proposed, which provides the precondition for accurate distance measurement; two-stage accurate ranging is then adopted, combining the traditional geometric ranging model with the Kalman filtering algorithm, further improving the accuracy of the final result and improving the stability and robustness of distance measurement for underground unmanned vehicles.

Description

Monocular distance measurement system and method for underground unmanned vehicle
Technical Field
The invention relates to the technical field of underground unmanned sensing, in particular to a monocular distance measuring system and method for an underground unmanned vehicle.
Background
In the face of increasing energy demand, high labor cost and severe international competitive pressure caused by shortage of skilled technicians, mining companies and energy groups are actively exploring and developing intelligent construction of mining areas, and as the working environment of the mining areas is severe and all-weather operation is required, unmanned technology for mining area transportation becomes a key point for promoting intelligent construction. At present, the problems of difficult recruitment, low efficiency, high safety risk and the like mainly exist in underground transportation.
Unmanned driving is a mainstream research direction: unmanned transportation in mining areas can fundamentally solve the safety problem of drivers, reduce labor costs, and greatly improve production and transportation efficiency. This is especially true for underground mines, where, compared with open-pit mines, the working environment is harsher, labor costs are higher, and the demand for intelligent, unmanned transportation is stronger. The sensing system is an important component of unmanned driving, and monocular vision distance measurement for underground unmanned vehicles in particular has important research value.
Most existing monocular distance measurement methods target urban roads and expressways. Patent CN113720300A proposes a monocular distance measurement method based on a target-recognition neural network, which obtains the distance by acquiring a near image and a far image of the same target and substituting the detection results into an optical relational expression. Patent CN111046843A proposes a monocular distance measurement method for an intelligent driving environment, which uses a detection frame to segment the object to be measured and extracts numerical information. Patent CN114046769A proposes a monocular distance measurement method based on multidimensional reference information, which measures the distance to a detected object by the vanishing-point and similar-triangle method. However, these monocular distance measurement methods have problems: (1) most underground roadways are irregular roads, and the uneven ground introduces large ranging errors; (2) because the target detection frame is two-dimensional, accurate depth information of the vehicle cannot be obtained from a single detection frame alone.
Disclosure of Invention
The present invention is directed to a monocular distance measuring system and method for an underground unmanned vehicle to solve or improve at least one of the above technical problems.
In view of the above, a first aspect of the present invention is to provide a monocular distance measuring method for an underground unmanned vehicle.
A second aspect of the present invention is to provide a monocular distance measuring system for an underground unmanned vehicle.
The invention provides a monocular distance measurement method for an underground unmanned vehicle, which comprises the following steps: S1, acquiring a forward-vehicle image through a sensor along the vehicle's driving route, and automatically identifying detection frame information and type information of the forward vehicle; S2, judging the vehicle pose of the forward vehicle according to the type information and the detection frame information; S3, solving, according to the vehicle pose, the pixel coordinates in the forward-vehicle image of the point on the forward vehicle closest to the sensor; S4, substituting the pixel coordinates into a monocular geometric distance measurement model to obtain the measured distance between the closest point and the sensor, and then predicting and updating the measured distance by a Kalman filtering algorithm to obtain the vehicle distance; wherein the type information includes vehicle body information, vehicle tail information and hub information, each in one-to-one correspondence with its detection frame information.
The invention provides a monocular distance measurement system and a monocular distance measurement method for an underground unmanned vehicle, and provides a monocular distance measurement system and a monocular distance measurement method applied to the underground unmanned vehicle aiming at the problems. In addition, the invention designs a pose estimation module to judge the pose of the vehicle and acquire the pixel coordinate corresponding to the nearest point of the front vehicle. And finally, designing a two-stage accurate distance measurement method, firstly solving the vehicle distance based on a monocular geometric distance measurement model, and then predicting and updating the distance by adopting Kalman filtering, so that the robustness and the stability of the distance measurement are ensured.
The invention provides a multi-target information fusion method based on detection results by sensing the vehicle body, the vehicle tail and the wheel hub in real time, provides a precondition for accurate distance measurement, and particularly realizes the real-time sensing of the vehicle body, the vehicle tail and the wheel hub by adopting a target detection algorithm of a convolutional neural network;
according to the invention, distance measurement precision is improved over traditional methods by obtaining the relative pose information of the forward vehicle and calculating the pixel coordinate of the closest point on the image by means of that pose information; specifically, the pose of the forward vehicle relative to the host vehicle is obtained by adopting an existing, commercially available vehicle pose estimation module;
the invention provides two-stage accurate distance measurement, combines the traditional geometric distance measurement model and the Kalman filtering algorithm, further improves the accuracy of the final result of the distance measurement of the traditional geometric distance measurement model, and improves the stability and the robustness of the distance measurement of the underground unmanned vehicle.
In addition, the technical solution provided by the embodiment of the present invention may further have the following additional technical features:
in any of the above technical solutions, the forward vehicle image is obtained by the sensor shooting a real-time video stream frame by frame; the sensor is arranged at the windshield of the head of the vehicle, and the camera shooting and collecting surface is arranged in the same direction as the running direction of the vehicle; wherein the sensor is an infrared camera.
In the technical scheme, a sensor can be used for continuously shooting a front vehicle so as to obtain a real-time image for measurement and calculation of a subsequent algorithm, the image is obtained in a frame-by-frame mode, the most accurate and time-efficient image can be obtained in a continuous shooting video stream, and the measured and calculated distance can be matched with the current actual distance as far as possible;
the sensor is arranged at the windshield of the head of the vehicle, and particularly arranged in the vehicle, so that certain dustproof and interference imitation can be provided through the windshield, the sensor can continuously and effectively work, the camera shooting and collecting surface is arranged in the same direction as the driving direction of the vehicle, and a forward vehicle in the driving direction of the vehicle can be shot most directly;
the method can be suitable for underground dim working environment by adopting the infrared camera, and the stability of the whole work is ensured.
In any of the above technical solutions, when vehicle body information is detected ahead, it is determined that there is a forward vehicle, and the step of determining the vehicle pose of the forward vehicle specifically includes: when only vehicle body information is detected, with no vehicle tail or hub information, it is judged that the forward vehicle is directly ahead of the vehicle; when vehicle body information and vehicle tail information are detected but no hub information, it is judged that the forward vehicle is directly ahead of the vehicle; when vehicle body information, vehicle tail information and one piece of hub information are detected at the same time, it is judged that the forward vehicle is in the lateral front of the vehicle with a small deflection angle; when vehicle body information, vehicle tail information and two pieces of hub information are detected at the same time, it is judged that the forward vehicle is in the lateral front of the vehicle.
In the technical scheme, when the vehicle detects that the vehicle is ahead in the process of traveling, the pose of the vehicle in the forward direction needs to be judged, so that the accuracy of data can be ensured in the subsequent distance measurement and calculation;
when only the vehicle body information is detected, judging that the detected vehicle is positioned in front at the moment, but not detecting the tail and the hub of the vehicle, and considering that the closest point of the front vehicle is the pixel coordinate of the position of the middle point of the bottom end of the vehicle body detection frame under the condition;
when the vehicle body and the tail of the vehicle are detected, the hub is not detected, the hub of the forward vehicle is indicated to be shielded by the vehicle body in the forward looking direction of the vehicle, and therefore the detected forward vehicle is judged to be positioned right in front of the vehicle;
when vehicle body information, vehicle tail information and one piece of hub information are detected at the same time, it indicates that the front of the forward vehicle has deflected at an angle relative to the host vehicle, so that the sensor detects only one hub; the forward vehicle is then in the lateral front of the vehicle, but with a small deflection angle;
when vehicle body information, vehicle tail information and two pieces of hub information are detected at the same time, it indicates that the heading of the forward vehicle differs from that of the host vehicle, so that the two hubs on one side are both detected, showing that the forward vehicle is in the lateral front of the vehicle.
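The four detection-combination rules above can be sketched as a small decision function. This is an illustrative sketch only; the function name and string labels are assumptions, not the patent's actual implementation:

```python
def classify_pose(has_body: bool, has_tail: bool, num_hubs: int) -> str:
    """Infer the forward vehicle's pose from which detection boxes are present."""
    if not has_body:
        return "no forward vehicle"
    if num_hubs >= 2:
        # two hubs on one side visible: heading differs from the host vehicle
        return "lateral front"
    if has_tail and num_hubs == 1:
        # only one hub visible: small deflection angle
        return "lateral front (small deflection)"
    # body only, or body + tail with hubs occluded by the body
    return "directly ahead"
```

Each incoming frame's detections would be reduced to these three flags before the pose decision is made.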
In any of the above technical solutions, when it is determined that the forward vehicle is in the lateral front of the vehicle, the pixel coordinates p1(x1, y1) and p2(x2, y2) of the bottom midpoints of the detection frames of the two hubs are acquired; when (x1 − x2)(y1 − y2) < 0, it is determined that the vehicle is in the front-left direction; when (x1 − x2)(y1 − y2) > 0, it is determined that the vehicle is in the front-right direction.
In this technical scheme, when the forward vehicle is in the lateral front of the vehicle, its left-right deviation must be judged; by comparing the pixel coordinates of the bottom midpoints of the two hub detection frames, the deviation of the vehicle body can be known: when (x1 − x2)(y1 − y2) < 0, the vehicle is in the front-left direction; when (x1 − x2)(y1 − y2) > 0, the vehicle is in the front-right direction.
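The sign test above can be written directly. A sketch assuming standard image coordinates (x to the right, y downward); the function name is an assumption:

```python
def left_or_right(p1, p2):
    """Classify a lateral-front vehicle as front-left or front-right from the
    bottom midpoints p1, p2 of the two hub detection frames (pixel coords)."""
    (x1, y1), (x2, y2) = p1, p2
    s = (x1 - x2) * (y1 - y2)
    if s < 0:
        return "front-left"
    if s > 0:
        return "front-right"
    return "indeterminate"  # hubs aligned horizontally or vertically
```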
In any of the above technical solutions, let the pixel coordinate of the closest point be A(x_a, y_a). S3 specifically includes: when the forward vehicle is directly ahead of the vehicle, the pixel coordinate of the bottom midpoint of the detection frame of the vehicle body information is taken as the pixel coordinate of the closest point A; when the forward vehicle is in the lateral front of the vehicle, the pixel coordinate Q(x_q, y_q) of the lower-right or lower-left corner of the detection frame of the vehicle tail information is acquired, and the line through p1 and p2 is:

y = y1 + ((y2 − y1)/(x2 − x1)) · (x − x1);

the coordinates of the closest point A are then obtained as

A = (x_q, y1 + ((y2 − y1)/(x2 − x1)) · (x_q − x1)).
In this technical scheme, when the forward vehicle is directly ahead of the vehicle, the pixel coordinate of the bottom midpoint of the detection frame corresponding to the vehicle body information is directly taken as the closest point. When the forward vehicle is in the front-left of the vehicle, the pixel coordinate of the lower-right corner of the detection frame corresponding to the vehicle tail information is taken; when the forward vehicle is in the front-right, the pixel coordinate of the lower-left corner is taken; the closest-point coordinate for a vehicle in the lateral front is then obtained by calculation according to the above formula;
according to different postures of the forward vehicle relative to the vehicle, different selection means are adopted for the closest point, so that the final point location information is more accurate and adaptive.
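The closest-point selection above can be sketched as two small helpers, one per pose case. The function names and the box convention (x_min, y_min, x_max, y_max) are assumptions for illustration:

```python
def closest_point_ahead(body_box):
    """Directly-ahead case: bottom midpoint of the body detection frame.
    body_box = (x_min, y_min, x_max, y_max) in pixel coordinates."""
    x_min, _, x_max, y_max = body_box
    return ((x_min + x_max) / 2.0, y_max)

def closest_point_lateral(q, p1, p2):
    """Lateral-front case: point A on the line through the hub bottom
    midpoints p1, p2, evaluated at the abscissa x_q of the tail-box corner Q."""
    (xq, _), (x1, y1), (x2, y2) = q, p1, p2
    ya = y1 + (y2 - y1) * (xq - x1) / (x2 - x1)
    return (xq, ya)
```

The lateral case simply interpolates (or extrapolates) the hub base line at x_q, matching the formula above.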
In any of the above technical solutions, in S4, substituting the pixel coordinates of the closest point A(x_a, y_a) into the monocular geometric ranging model yields:

d = h / tan(γ + arctan((y_a − v_0) / f_v));

where h is the height of the sensor above the ground; γ is the depression angle of the sensor; y_a is the pixel ordinate of the closest point of the forward vehicle in the image from the sensor; v_0 and f_v are camera intrinsic parameters, v_0 being the pixel ordinate of the principal point and f_v the focal length along the y-axis expressed in pixels, both obtainable by calibration with Zhang Zhengyou's calibration method; d is the sought distance.
In this technical scheme, using only the monocular geometric ranging model already applied in the prior art, the expression for the measured distance d can be obtained, and the specific value of d is solved from that expression.
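A minimal sketch of this geometric model, computing d = h / tan(γ + arctan((y_a − v_0)/f_v)) from the variable definitions above (function and parameter names are illustrative):

```python
import math

def monocular_distance(y_a, h, gamma, v0, f_v):
    """Flat-ground monocular range to the closest point from its pixel
    ordinate y_a, given camera height h (m), depression angle gamma (rad)
    and intrinsics v0 (principal-point ordinate, px), f_v (focal length, px)."""
    return h / math.tan(gamma + math.atan((y_a - v0) / f_v))
```

For example, a camera 1.5 m above flat ground looking horizontally (gamma = 0) that images a ground point 100 px below the principal point with f_v = 1000 px places that point 15 m ahead.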
In any of the above technical solutions, the Kalman filtering algorithm comprises a prediction-part model and an update-part model, wherein the output of the prediction-part model serves as the input of the update-part model.
In this technical scheme, the solved measured value d is further refined through the prediction-part model and the update-part model in turn, improving the accuracy of the final output.
In any of the above technical solutions, the prediction-part model includes the following formulas:

x̂_t⁻ = F·x̂_{t−1} + B·u_{t−1};  (1)

P_t⁻ = F·P_{t−1}·Fᵀ + Q;  (2)

where x̂_t⁻ is the state variable predicted at time t, u_{t−1} is the control variable at time t−1, F is the state matrix, B is the control matrix, P_t⁻ is the error covariance predicted at time t, P_{t−1} is the error covariance updated at time t−1, x̂_{t−1} is the state variable updated at time t−1, and Q is the process noise covariance.

In this technical scheme, formula (1) is the prior estimation formula of the state variable: with the state variable x̂_{t−1} at time t−1 known, and the control matrix B and the control variable u_{t−1} at time t−1 known, the prior estimate x̂_t⁻ at time t can be found. Formula (2) is the prior error-covariance estimation formula: with the error covariance P_{t−1} at time t−1, the state matrix F and the process noise covariance Q known, the prior error covariance P_t⁻ at time t can be obtained.
In any of the above technical solutions, the update-part model includes the following formulas:

K_t = P_t⁻·Hᵀ·(H·P_t⁻·Hᵀ + R)⁻¹;  (3)

x̂_t = x̂_t⁻ + K_t·(z_t − H·x̂_t⁻);  (4)

P_t = (I − K_t·H)·P_t⁻;  (5)

where P_t is the error covariance updated at time t, K_t is the Kalman gain coefficient, H is the observation matrix, z_t is the observed variable, I is the identity matrix, Hᵀ is the transpose of H, and R is the measurement noise covariance.

In this technical scheme, formula (3) is the Kalman gain calculation formula, formula (4) is the posterior estimation formula of the state variable, and formula (5) is the posterior estimation formula of the error covariance. With the gain coefficient K_t at time t determined, and given the observed distance z_t obtained from the traditional geometric ranging model at time t and the predicted state variable x̂_t⁻, the updated state variable x̂_t at time t can be obtained, i.e. the finally output closest-point distance.
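Because the filter here tracks a single scalar distance, equations (1)–(5) reduce to a one-dimensional filter. A sketch assuming F = H = 1 and no control input (B·u = 0); these simplifications and the class name are assumptions, not the patent's stated configuration:

```python
class ScalarKalman:
    """1-D Kalman filter over the measured distance d (F = H = 1 assumed)."""

    def __init__(self, x0, p0, q, r):
        self.x = x0  # state estimate (distance)
        self.p = p0  # error covariance
        self.q = q   # process noise covariance Q
        self.r = r   # measurement noise covariance R

    def step(self, z):
        # predict: eqs. (1)-(2) with F = 1, B*u = 0
        x_pred = self.x
        p_pred = self.p + self.q
        # update: eqs. (3)-(5) with H = 1
        k = p_pred / (p_pred + self.r)       # Kalman gain, eq. (3)
        self.x = x_pred + k * (z - x_pred)   # posterior state, eq. (4)
        self.p = (1.0 - k) * p_pred          # posterior covariance, eq. (5)
        return self.x
```

Each frame, the distance d from the geometric model is fed in as the observation z_t, and the returned posterior estimate is the output vehicle distance.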
A second aspect of the present invention provides a monocular distance measuring system for an underground unmanned vehicle, comprising a memory and a processor, wherein the memory stores a computer program, and the processor, when executing the program, implements the steps of any one of the above technical solutions.
Since the processor can implement the steps of any of the above methods, the monocular distance measuring system provided by the second aspect of the present invention has all the technical effects of those methods, which are not repeated here.
The existing monocular distance measurement method does not consider errors introduced by vehicle pose relation change, and has poor robustness in an underground unstructured scene. Aiming at the problems, the invention provides a real-time and accurate underground unmanned vehicle monocular distance measurement method, which has the following advantages compared with other methods:
the invention realizes real-time perception of the vehicle body, the vehicle tail and the wheel hub based on the target detection algorithm of the convolutional neural network, provides a multi-target information fusion method based on the detection result, and provides a precondition for accurate distance measurement;
according to the invention, the vehicle pose estimation module is designed to obtain the relative pose information of the front vehicle, and the pixel coordinate of the closest point on the image is calculated through the pose information, so that the distance measurement precision is improved compared with the traditional method;
the invention provides a two-stage accurate distance measurement method, which combines a traditional geometric distance measurement model and a Kalman filtering algorithm to improve the stability and robustness of underground unmanned vehicle distance measurement.
Additional aspects and advantages of embodiments in accordance with the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of embodiments in accordance with the invention.
Drawings
The drawings are only for purposes of illustrating particular embodiments and are not to be construed as limiting the invention.
FIG. 1 is a flow diagram of the overall scheme of the present invention;
FIG. 2 is a diagram of a ranging scenario in accordance with the present invention;
FIG. 3 is a view of the present invention in a scene with a detected vehicle in front;
FIG. 4 is a view of the present invention in a scenario in which a vehicle is detected to be in the front left;
FIG. 5 is a view of the present invention in a scenario in which a vehicle is detected to be in a front right direction;
FIG. 6 is a flow chart of a multi-information fusion decision making process of the present invention;
FIG. 7 is a geometric distance measurement model for monocular vision in accordance with the present invention;
FIG. 8 is a flow chart of the Kalman filtering algorithm of the present invention.
Detailed Description
So that the manner in which the above recited objects, features and advantages of the present invention can be understood in detail, a more particular description of the invention, briefly summarized above, may be had by reference to the embodiments thereof which are illustrated in the appended drawings. It should be noted that the embodiments and features of the embodiments of the present application may be combined with each other without conflict.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, however, the present invention may be practiced in other ways than those specifically described herein, and therefore the scope of the present invention is not limited by the specific embodiments disclosed below.
Referring to fig. 1 to 8, a first aspect of the present invention provides a monocular distance measurement method for an underground unmanned vehicle, comprising the following steps: S1, collecting a forward-vehicle image through a sensor along the vehicle's driving route, and automatically identifying detection frame information and type information of the forward vehicle; S2, judging the vehicle pose of the forward vehicle according to the type information and the detection frame information; S3, solving, according to the vehicle pose, the pixel coordinates in the forward-vehicle image of the point on the forward vehicle closest to the sensor; S4, substituting the pixel coordinates into a monocular geometric distance measurement model to obtain the measured distance between the closest point and the sensor, and predicting and updating the measured distance by a Kalman filtering algorithm to obtain the vehicle distance; wherein the type information includes vehicle body information, vehicle tail information and hub information, each in one-to-one correspondence with its detection frame information.
The invention provides a monocular distance measurement system and method of an underground unmanned vehicle, which provides a precondition for accurate distance measurement by sensing a vehicle body, a vehicle tail and a wheel hub in real time and providing a method of multi-target information fusion based on a detection result, and particularly, the real-time sensing of the vehicle body, the vehicle tail and the wheel hub is realized by adopting a target detection algorithm of a convolutional neural network;
according to the design, distance measurement precision is improved over the traditional method by obtaining the relative pose information of the forward vehicle and calculating the pixel coordinate of the closest point on the image by means of that pose information; specifically, the pose of the forward vehicle relative to the host vehicle is obtained by adopting a commercially available vehicle pose estimation module;
the invention provides two-stage accurate distance measurement, combines the traditional geometric distance measurement model and a Kalman filtering algorithm, further improves the accuracy of the final result of the distance measurement of the traditional geometric distance measurement model, and improves the stability and robustness of the distance measurement of the underground unmanned vehicle.
Due to the particularity of the underground scene, the general monocular distance measuring method has poor robustness and low precision in the underground. The invention aims to provide a monocular distance measurement method applied to an underground unmanned driving scene. Particularly, in consideration of the special working conditions of numerous underground unstructured roads and uneven ground and urgent requirements for safety guarantee and cost reduction, due to the instability and limitation of the traditional monocular distance measurement method, the method designs a multi-information fusion module, a pose estimation module and a two-stage accurate distance measurement module based on a convolutional neural network, solves the problem of accurate perception of distance information of an underground unmanned vehicle to a forward vehicle in the driving process, and enhances the safety and reliability of underground unmanned transportation.
In any of the above embodiments, as shown in fig. 1-8, the forward vehicle images are obtained by capturing a real-time video stream frame by the sensor; the sensor is arranged at the windshield of the head of the vehicle, and the camera shooting acquisition surface is arranged in the same direction as the running direction of the vehicle; wherein the sensor is an infrared camera.
In the embodiment, the sensor can continuously shoot the front vehicle so as to obtain a real-time image for measurement and calculation of a subsequent algorithm, the image is obtained in a frame-by-frame mode, and the most accurate and time-efficient image can be obtained in a continuous shooting video stream so that the measured and calculated distance can be matched with the current actual distance as far as possible;
the sensor is arranged at the windshield of the head of the vehicle, and particularly arranged in the vehicle, so that certain dustproof and interference imitation can be provided through the windshield, the sensor can continuously and effectively work, the camera shooting acquisition surface is arranged in the same direction as the driving direction of the vehicle, and a forward vehicle in the driving direction of the vehicle can be shot most directly;
the method can be suitable for underground dim working environment by adopting the infrared camera, and the stability of the whole work is ensured.
In any of the embodiments described above, as shown in fig. 1 to 8, when vehicle body information is detected ahead, it is determined that there is a forward vehicle, and the step of determining the vehicle pose of the forward vehicle specifically includes: when only vehicle body information is detected, with no vehicle tail or hub information, it is judged that the forward vehicle is directly ahead of the vehicle; when vehicle body information and vehicle tail information are detected but no hub information, it is judged that the forward vehicle is directly ahead of the vehicle; when vehicle body information, vehicle tail information and one piece of hub information are detected at the same time, it is judged that the forward vehicle is in the lateral front of the vehicle with a small deflection angle; when vehicle body information, vehicle tail information and two pieces of hub information are detected at the same time, it is judged that the forward vehicle is in the lateral front of the vehicle.
In this embodiment, when a vehicle is detected ahead during travel, the pose of the forward vehicle must be judged so that the subsequent distance measurement and calculation remain accurate;
when only the vehicle body information is detected, the detected vehicle is judged to be ahead; since neither the tail nor a hub is detected, the closest point of the forward vehicle is taken in this case as the pixel coordinate of the bottom-edge midpoint of the body detection frame;
when the vehicle body and the vehicle tail are detected but no hub, the hubs of the forward vehicle are occluded by its body in the forward view of the vehicle, so the detected forward vehicle is judged to be directly ahead;
when the vehicle body information, the vehicle tail information and one piece of hub information are detected simultaneously, the forward vehicle has a yaw angle relative to the ego vehicle, so that the sensor detects only one hub; the deflection angle is small, so the forward vehicle is still judged to be directly ahead;
when the vehicle body information, the vehicle tail information and two pieces of hub information are detected simultaneously, the heading of the forward vehicle differs from that of the ego vehicle, so that both hubs on one side are detected, indicating that the forward vehicle is laterally ahead of the vehicle.
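The fusion judgment above amounts to a small decision procedure. A minimal sketch follows; the function name and the string labels are illustrative assumptions, not identifiers from the patent:

```python
def judge_pose(body_detected: bool, tail_detected: bool, hub_count: int) -> str:
    """Decide the forward vehicle's pose from which target types the detector reported."""
    if not body_detected:
        return "no vehicle ahead"
    if not tail_detected:
        # body only: tail and hubs occluded or out of view, treated as directly ahead
        return "directly ahead"
    if hub_count <= 1:
        # body + tail, or body + tail + one hub (small yaw angle)
        return "directly ahead"
    # body + tail + two same-side hubs: heading differs, vehicle is laterally ahead
    return "laterally ahead"
```

The pose string then selects how the closest point is computed in the following steps.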
In any of the above embodiments, as shown in fig. 1 to 8, when it is judged that the forward vehicle is laterally ahead of the vehicle, the pixel coordinates p1(x1, y1), p2(x2, y2) of the bottom-edge midpoints of the detection frames of the two wheels are acquired; when (x1 − x2)(y1 − y2) < 0, it is determined that the vehicle is ahead on the left; when (x1 − x2)(y1 − y2) > 0, it is determined that the vehicle is ahead on the right.
In this embodiment, when the forward vehicle is laterally ahead of the vehicle, its left-right deviation needs to be determined; comparing the pixel coordinates of the bottom-edge midpoints of the two wheel detection frames reveals the deviation of the vehicle body: when (x1 − x2)(y1 − y2) < 0, the vehicle is determined to be ahead on the left; when (x1 − x2)(y1 − y2) > 0, the vehicle is determined to be ahead on the right.
In any of the above embodiments, as shown in figs. 1-8, let the pixel coordinate of the closest point be A(xa, ya). S3 specifically comprises: when the forward vehicle is directly ahead of the vehicle, acquiring the pixel coordinate of the bottom-edge midpoint of the detection frame of the vehicle body information as the pixel coordinate of the closest point A; when the forward vehicle is laterally ahead of the vehicle, acquiring the pixel coordinate Q(xq, yq) of the lower right or lower left corner of the detection frame of the vehicle tail information, the line through p1, p2 at this time being:

(y − y1)/(y2 − y1) = (x − x1)/(x2 − x1),

so the coordinates of the closest point A are found as

A(xq, y1 + (y2 − y1)(xq − x1)/(x2 − x1)).
In this embodiment, when the forward vehicle is directly ahead of the vehicle, the pixel coordinate of the bottom-edge midpoint of the detection frame corresponding to the vehicle body information is taken directly as the closest point; when the forward vehicle is ahead on the left, the pixel coordinate of the lower right corner of the detection frame corresponding to the vehicle tail information is acquired, and when the forward vehicle is ahead on the right, the pixel coordinate of the lower left corner is acquired; the closest-point coordinate for a vehicle ahead on the left or right is then obtained by calculation according to the formula;
different selection means are thus adopted for the closest point according to the different poses of the forward vehicle relative to the vehicle, so that the final point location information is more accurate and adaptive.
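As a sketch of this closest-point selection (assuming the line-intersection form used above for the lateral case; the argument layout and coordinate tuples are illustrative):

```python
def closest_point(pose, body_bottom_mid=None, p1=None, p2=None, q=None):
    """Pixel coordinate A(xa, ya) of the closest point on the forward vehicle."""
    if pose == "directly ahead":
        # midpoint of the bottom edge of the body detection frame
        return body_bottom_mid
    # laterally ahead: intersect the hub line p1-p2 with the vertical
    # line x = xq through the tail-box corner Q(xq, yq)
    (x1, y1), (x2, y2), (xq, _yq) = p1, p2, q
    ya = y1 + (y2 - y1) * (xq - x1) / (x2 - x1)
    return (xq, ya)
```

The lateral branch keeps the horizontal position of the tail-box corner but projects it onto the hub line, which is the ground contact line of the near side of the vehicle.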
In any of the above embodiments, as shown in figs. 1-8, in S4 the pixel coordinates of the closest point A(xa, ya) are substituted into the monocular geometric ranging model, giving the following formula:

d = h / tan(γ + arctan((ya − v0) / fv));

where h is the height of the sensor above the ground; γ is the depression angle of the sensor; ya is the vertical pixel coordinate of the closest point of the forward vehicle in the image from the sensor; v0 and fv are camera intrinsics, v0 being the pixel ordinate of the principal point and fv the focal length along the y axis expressed in pixels, both obtainable by calibration with Zhang Zhengyou's method; d is the distance sought.
In this embodiment, only the monocular geometric ranging model already applied in the prior art is needed; an expression for the measured distance d is obtained, and the specific value of d is solved from that expression.
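The ranging formula can be written directly in code; the numeric values in the check below are illustrative, not calibration results from the patent:

```python
import math

def monocular_distance(h: float, gamma: float, ya: float, v0: float, fv: float) -> float:
    """Ground-plane range from one pixel row: d = h / tan(gamma + arctan((ya - v0) / fv))."""
    return h / math.tan(gamma + math.atan((ya - v0) / fv))
```

With a level camera (γ = 0) the formula reduces to d = h·fv / (ya − v0), the familiar pinhole ground-plane relation, which gives a quick sanity check for a calibration.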
In any of the above embodiments, as shown in fig. 1 to 8, the kalman filtering algorithm includes: the system comprises a prediction part model and an updating part model, wherein the output result of the prediction part model is used as the input condition of the updating part model.
In this embodiment, the solved measured value d is further refined by the prediction part model and then the update part model, improving the accuracy of the finally output value.
In any of the above embodiments, as shown in figs. 1-8, the prediction part model comprises the following formulas:

x̂_t^- = F·x̂_{t-1} + B·u_{t-1};  (1)

P_t^- = F·P_{t-1}·F^T + Q;  (2)

wherein x̂_t^- is the state variable predicted at time t, u_{t-1} is the control variable at time t−1, F is the state matrix, B is the control matrix, P_t^- is the error covariance predicted at time t, P_{t-1} is the error covariance updated at time t−1, x̂_{t-1} is the state variable updated at time t−1, and Q is the process noise covariance.
In this embodiment, equation (1) is the a priori estimate of the state variable: with the updated state x̂_{t-1} at time t−1, the control matrix B and the control variable u_{t-1} at time t−1 known, the a priori estimate x̂_t^- at time t can be found. Equation (2) is the a priori error-covariance estimate: with the error covariance P_{t-1} at time t−1, the state matrix F and the process noise covariance Q known, the a priori error covariance P_t^- at time t can be obtained.
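Equations (1) and (2) translate directly into a prediction step; the matrix shapes follow the two-state model derived later in the text, and the numeric values in the check are illustrative:

```python
import numpy as np

def kf_predict(x, P, F, B, u, Q):
    """A priori state (eq. 1) and a priori error covariance (eq. 2)."""
    x_pred = F @ x + B @ u
    P_pred = F @ P @ F.T + Q
    return x_pred, P_pred
```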
In any of the above embodiments, as shown in figs. 1-8, the update part model comprises the following formulas:

K_t = P_t^-·H^T·(H·P_t^-·H^T + R)^{-1};  (3)

x̂_t = x̂_t^- + K_t·(z_t − H·x̂_t^-);  (4)

P_t = (I − K_t·H)·P_t^-;  (5)

wherein P_t is the updated error covariance, K_t is the Kalman gain coefficient, H is the measurement matrix, z_t is the observed variable, I is the identity matrix, H^T is the transpose of H, and R is the measurement noise covariance.

In this embodiment, formula (3) is the Kalman gain calculation formula, formula (4) is the a posteriori estimate of the state variable, and formula (5) is the a posteriori estimate of the error covariance. With the gain coefficient K_t at time t, the observed distance z_t obtained from the conventional geometric ranging model at time t, and the predicted state x̂_t^- determined, the updated value x̂_t of the state variable at time t can be obtained, whose first component is the closest-point distance finally output.
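Equations (3)-(5) form the matching update step (a sketch with the same two-state shapes; the values in the check are illustrative):

```python
import numpy as np

def kf_update(x_pred, P_pred, z, H, R):
    """Kalman gain (eq. 3), posterior state (eq. 4), posterior covariance (eq. 5)."""
    S = H @ P_pred @ H.T + R              # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)
    x = x_pred + K @ (z - H @ x_pred)
    P = (np.eye(P_pred.shape[0]) - K @ H) @ P_pred
    return x, P, K
```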
In any of the above embodiments, the prediction part model and the update part model are derived by the following steps:

In the parameter selection process, when t = 0, the default initial distance in x̂_0 is the nearest detection distance (dead-zone distance) of the camera and the initial speed is 0; dt = 0.33; the prediction covariance matrix P_0, the distance measurement noise R = 1 and the system process covariance noise Q are set.

(1) After the distance information d is obtained, the state variable x̂ of the target is chosen as [distance, velocity]^T. The observed quantity z_t of the target is the distance d obtained by the measurement in the first stage, and the sampling period is the time dt for the camera to acquire one frame of picture.
(2) Predict the current distance. The state variable prediction equation is:

x̂_t^- = F·x̂_{t-1} + B·u_{t-1}.

Since the vehicle-mounted camera itself and the forward obstacle are both moving, with different speeds and accelerations, the ranging information of the obstacle is actually the relative distance between the two. Idealizing the relative motion as uniformly accelerated motion gives

distance_t = distance_{t-1} + velocity_{t-1}·dt + a·dt²/2,
velocity_t = velocity_{t-1} + a·dt,

where velocity is the relative velocity and a is the relative acceleration. Letting u denote the distance and v the velocity, the state vector is [u, v]^T; at this time

F = [[1, dt], [0, 1]], B = [dt²/2, dt]^T, u_t = [a].
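The state and control matrices above can be checked numerically against the uniformly accelerated motion equations (the starting values here are illustrative):

```python
import numpy as np

dt = 0.33
F = np.array([[1.0, dt], [0.0, 1.0]])  # state matrix
B = np.array([[dt**2 / 2], [dt]])      # control matrix acting on the acceleration a

x0 = np.array([[5.0], [2.0]])  # relative distance 5 m, relative speed 2 m/s
a = np.array([[0.5]])          # relative acceleration 0.5 m/s^2
x1 = F @ x0 + B @ a            # one prediction step
```

The first component of x1 equals d + v·dt + a·dt²/2 and the second equals v + a·dt, confirming that F and B encode the motion model.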
(3) Predict the covariance matrix. The error covariance equation is:

P_t^- = F·P_{t-1}·F^T + Q.

Since the distance noise and the velocity noise are independent, the system process covariance noise is diagonal:

Q = [[q_d, 0], [0, q_v]],

where the variances q_d of the distance noise and q_v of the velocity noise are constants that may be determined empirically or calculated.

Writing the last updated covariance matrix as

P_{t-1} = [[p_11, p_12], [p_21, p_22]]

and substituting P_{t-1} into the prediction covariance formula, the current prediction covariance matrix is obtained by expansion as

P_t^- = [[p_11 + dt·(p_12 + p_21) + dt²·p_22 + q_d, p_12 + dt·p_22], [p_21 + dt·p_22, p_22 + q_v]].
(4) Establish the measurement equation. The system measurement equation is

z_t = H·u_t + V.

Since the system output u_t is the obstacle distance information, H = [1 0].

(5) Calculate the Kalman gain. The Kalman gain coefficient equation is

K_t = P_t^-·H^T·(H·P_t^-·H^T + R)^{-1};

with H = [1 0], this can be arranged as

K_t = [p_11^- / (p_11^- + R), p_21^- / (p_11^- + R)]^T,

where the distance measurement noise R is a constant.
(6) Calculate the current optimal estimate. Based on the optimal estimation equation

x̂_t = x̂_t^- + K_t·(z_t − H·x̂_t^-),

substituting the parameters and arranging gives

u_t = u_t^- + K_t[1]·(z_t − u_t^-),
v_t = v_t^- + K_t[2]·(z_t − u_t^-),

at which time u_t is the closest-point distance finally obtained, i.e. the distance of the forward vehicle.

(7) Update the covariance matrix. According to the error covariance calculation formula

P_t = (I − K_t·H)·P_t^-,

substituting the parameters gives

P_t = [[(1 − K_t[1])·p_11^-, (1 − K_t[1])·p_12^-], [p_21^- − K_t[2]·p_11^-, p_22^- − K_t[2]·p_12^-]].
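Steps (1)-(7) assemble into a compact filter. The sketch below uses the values the text names (dt = 0.33, R = 1) but assumes illustrative noise variances q_d, q_v and omits the control term B·a, i.e. it treats the relative acceleration as unmeasured:

```python
import numpy as np

class DistanceKF:
    """Two-state (distance, relative speed) Kalman filter following steps (1)-(7)."""

    def __init__(self, d0, dt=0.33, r=1.0, q_d=0.1, q_v=0.1):
        self.x = np.array([[d0], [0.0]])            # initial state: dead-zone distance, speed 0
        self.P = np.eye(2)                          # initial covariance (assumed)
        self.F = np.array([[1.0, dt], [0.0, 1.0]])  # state matrix
        self.H = np.array([[1.0, 0.0]])             # measure distance only
        self.Q = np.diag([q_d, q_v])                # independent distance/speed noise
        self.R = np.array([[r]])

    def step(self, d_measured):
        # predict (steps 2-3)
        x = self.F @ self.x
        P = self.F @ self.P @ self.F.T + self.Q
        # update (steps 4-7), with the geometric-model distance as observation z_t
        S = self.H @ P @ self.H.T + self.R
        K = P @ self.H.T @ np.linalg.inv(S)
        self.x = x + K @ (np.array([[d_measured]]) - self.H @ x)
        self.P = (np.eye(2) - K @ self.H) @ P
        return float(self.x[0, 0])
```

Feeding it the first-stage distance d each frame yields the smoothed closest-point distance that step (6) outputs.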
A second aspect of the present invention provides a monocular distance measuring system for an underground unmanned vehicle, comprising a memory and a processor, wherein the memory stores a computer program and the processor, when executing the program, implements the steps of the method of any of the above embodiments.
In this embodiment, since the processor can implement the steps of the method of any of the above embodiments, the monocular distance measuring system for an underground unmanned vehicle provided by the second aspect of the present invention has the same technical effects, which are not repeated here.
Example 1
As shown in figures 1, 6, 7 and 8
An infrared camera is installed at the windshield of the trackless rubber-tyred vehicle, with its viewing angle facing the driving direction of the vehicle, so that forward environment data of the vehicle can be acquired.
The method comprises the following steps:
Step one: target detection with a convolutional neural network
Data acquisition is performed with the infrared camera installed above; the collected data form a real-time video stream. Target detection is performed on each frame of the video with a convolutional neural network; the detection contents are the body, the tail and the hubs of the forward rubber-tyred vehicle (only the tail can be detected, never the head, because an underground coal-mine roadway passes only a single vehicle), and the detection results are the type information and the detection frame information of each detected target.
Step two: multi-target information fusion
The position of the detected forward vehicle is divided into three scenes: the vehicle directly ahead, the vehicle ahead on the left, and the vehicle ahead on the right. The three scenes are described with reference to figs. 3, 4 and 5. The fusion judging steps are as follows:
inputting the detection result into a multi-information fusion decision process;
(1) Judge whether a vehicle body is detected; if not, there is no vehicle ahead; if so, continue judging;
(2) Judge whether a vehicle tail is detected; if not, the vehicle is directly ahead; if so, continue judging;
(3) Judge whether wheels are detected; if not, the vehicle is directly ahead; if so, count the detected wheels;
(4) If one wheel is detected, the vehicle is judged to be directly ahead; if two wheels are detected, the vehicle is judged to be laterally ahead;
(5) When the vehicle is laterally ahead, the relative position of the vehicle is first judged from p1 and p2:
when (x1 − x2)(y1 − y2) < 0, it is determined that the vehicle is ahead on the left;
when (x1 − x2)(y1 − y2) > 0, it is determined that the vehicle is ahead on the right.
Step three: pose estimation and acquisition of the closest point
The solving method for the closest-point pixel coordinate differs according to whether the detected vehicle is laterally ahead or directly ahead. Let the closest-point pixel coordinate be A(xa, ya). When the vehicle is directly ahead, the pixel coordinate of the bottom-edge midpoint of the body detection frame is taken as the pixel coordinate of the closest point A. When the vehicle is laterally ahead, taking a vehicle ahead on the left as an example, the pixel coordinate Q(xq, yq) of the lower right corner of the tail detection frame is acquired. The line through p1, p2 at this time is

(y − y1)/(y2 − y1) = (x − x1)/(x2 − x1),

so the coordinates of the closest point A can be obtained as

A(xq, y1 + (y2 − y1)(xq − x1)/(x2 − x1)).
Step four: computing the distance with the monocular geometric ranging model
In the first stage, the pixel coordinate of the closest point A is substituted into the monocular geometric ranging model to obtain the real-world distance d between the closest point and the camera:

d = h / tan(γ + arctan((ya − v0)/fv)),

where h is the height of the camera above the ground, γ is the depression angle of the camera, ya is the vertical pixel coordinate of the closest point in the image, v0 and fv are camera intrinsics obtainable by calibration, and d is the distance sought.
Step five: Kalman filtering prediction and update
In the second stage, Kalman prediction and update are performed. The Kalman filtering model comprises two parts, prediction and update, wherein the prediction part is

x̂_t^- = F·x̂_{t-1} + B·u_{t-1},
P_t^- = F·P_{t-1}·F^T + Q,

and the update part is

K_t = P_t^-·H^T·(H·P_t^-·H^T + R)^{-1},
x̂_t = x̂_t^- + K_t·(z_t − H·x̂_t^-),
P_t = (I − K_t·H)·P_t^-.
Step six: output the vehicle distance, i.e. the first component of the updated state x̂_t, which is the distance to the forward vehicle.
In the description of the present invention, it is to be understood that the terms "longitudinal", "lateral", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", and the like, indicate orientations or positional relationships based on those shown in the drawings, are merely for convenience of description of the present invention, and do not indicate or imply that the referenced devices or elements must have a particular orientation, be constructed and operated in a particular orientation, and thus, are not to be construed as limiting the present invention.
The above-described embodiments are merely illustrative of the preferred embodiments of the present invention, and do not limit the scope of the present invention, and various modifications and improvements of the technical solutions of the present invention can be made by those skilled in the art without departing from the spirit of the present invention, and the technical solutions of the present invention are within the scope of the present invention defined by the claims.

Claims (10)

1. A monocular distance measurement method of an underground unmanned vehicle is characterized by comprising the following steps:
s1, acquiring a forward vehicle image through a sensor in a vehicle driving route, and automatically identifying detection frame information and type information of a forward vehicle;
s2, judging the vehicle pose of the forward vehicle according to the type information and the detection frame information;
s3, solving, in the forward vehicle image, the pixel coordinate of the closest point between the forward vehicle and the sensor according to the vehicle pose;
s4, substituting the pixel coordinates into a monocular geometric distance measurement model to obtain the measurement distance between the nearest point and the sensor, and then predicting and updating the measurement distance by adopting a Kalman filtering algorithm to obtain the vehicle distance;
wherein the type information includes: and the vehicle body information, the vehicle tail information and the hub information are respectively in one-to-one correspondence with the detection frame information.
2. The monocular distance measuring method of an unmanned downhole vehicle of claim 1, wherein the forward vehicle image is obtained by the sensor capturing a real-time video stream frame by frame;
the sensor is arranged at the windshield of the head of the vehicle, and the camera shooting and collecting surface is arranged in the same direction as the running direction of the vehicle;
wherein the sensor is an infrared camera.
3. The monocular distance measuring method of an underground unmanned vehicle according to claim 1, wherein when the vehicle body information ahead is detected, it is determined that there is a forward vehicle ahead, and the step of judging the vehicle pose of the forward vehicle specifically comprises:
when only the vehicle body information is detected and neither the vehicle tail information nor the hub information is detected, judging that the forward vehicle is directly ahead of the vehicle;
when the vehicle body information and the vehicle tail information are detected and no hub information is detected, judging that the forward vehicle is directly ahead of the vehicle;
when the vehicle body information, the vehicle tail information and one piece of hub information are detected simultaneously, judging that the forward vehicle is directly ahead of the vehicle;
when the vehicle body information, the vehicle tail information and two pieces of hub information are detected simultaneously, judging that the forward vehicle is laterally ahead of the vehicle.
4. The monocular distance measuring method of an underground unmanned vehicle according to claim 3, wherein when it is judged that the forward vehicle is located laterally ahead of the vehicle, the pixel coordinates p1(x1, y1), p2(x2, y2) of the bottom-edge midpoints of the detection frame information of the two wheels are acquired;
when (x1 − x2)(y1 − y2) < 0, it is determined that the vehicle is ahead on the left;
when (x1 − x2)(y1 − y2) > 0, it is determined that the vehicle is ahead on the right.
5. The monocular distance measuring method of an underground unmanned vehicle according to claim 4, wherein the pixel coordinate of the closest point is A(xa, ya), and S3 specifically comprises:
when the forward vehicle is directly ahead of the vehicle, acquiring the pixel coordinate of the bottom-edge midpoint of the detection frame information of the vehicle body information as the pixel coordinate of the closest point A;
when the forward vehicle is laterally ahead of the vehicle, acquiring the pixel coordinate Q(xq, yq) of the lower right or lower left corner of the detection frame information of the vehicle tail information, the line through p1, p2 at this time being:

(y − y1)/(y2 − y1) = (x − x1)/(x2 − x1);

finding the coordinates of the closest point A as

A(xq, y1 + (y2 − y1)(xq − x1)/(x2 − x1)).
6. The monocular distance measuring method of an underground unmanned vehicle according to claim 5, wherein in S4 the pixel coordinates of the closest point A(xa, ya) are substituted into the monocular geometric ranging model to obtain the following formula:

d = h / tan(γ + arctan((ya − v0)/fv));

wherein h is the height of the sensor above the ground; γ is the depression angle of the sensor; ya is the vertical pixel coordinate of the closest point of the forward vehicle in the image from the sensor; v0 and fv are camera intrinsics, v0 being the pixel ordinate of the principal point and fv the focal length along the y axis expressed in pixels, both obtainable by calibration with Zhang Zhengyou's method; d is the distance sought.
7. The monocular distance measuring method of a downhole unmanned vehicle of claim 1, wherein the kalman filtering algorithm comprises: the system comprises a prediction part model and an updating part model, wherein the output result of the prediction part model is used as the input condition of the updating part model.
8. The monocular distance measuring method of a downhole unmanned vehicle of claim 7, wherein the prediction part model comprises the following formulas:

x̂_t^- = F·x̂_{t-1} + B·u_{t-1};
P_t^- = F·P_{t-1}·F^T + Q;

wherein x̂_t^- is the state variable predicted at time t, u_{t-1} is the control variable at time t−1, F is the state matrix, B is the control matrix, P_t^- is the error covariance predicted at time t, P_{t-1} is the error covariance updated at time t−1, x̂_{t-1} is the state variable updated at time t−1, and Q is the process noise covariance.
9. The monocular distance measuring method of a downhole unmanned vehicle of claim 7, wherein the update part model comprises the following formulas:

K_t = P_t^-·H^T·(H·P_t^-·H^T + R)^{-1};
x̂_t = x̂_t^- + K_t·(z_t − H·x̂_t^-);
P_t = (I − K_t·H)·P_t^-;

wherein P_t is the updated error covariance, K_t is the Kalman gain coefficient, H is the measurement matrix, z_t is the observed variable, I is the identity matrix, H^T is the transpose of H, and R is the measurement noise covariance.
10. A monocular distance measuring system of a downhole unmanned vehicle, comprising a memory and a processor, the memory storing a computer program, wherein the processor, when executing the program, implements the method of any one of claims 1 to 9.
CN202210810678.0A 2022-07-11 2022-07-11 Monocular distance measurement system and method for underground unmanned vehicle Pending CN115326009A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210810678.0A CN115326009A (en) 2022-07-11 2022-07-11 Monocular distance measurement system and method for underground unmanned vehicle


Publications (1)

Publication Number Publication Date
CN115326009A true CN115326009A (en) 2022-11-11

Family

ID=83918362

Country Status (1)

Country Link
CN (1) CN115326009A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115937312A (en) * 2022-12-04 2023-04-07 天津职业技术师范大学(中国职业培训指导教师进修中心) A Monocular Ranging Method Used in Vehicle Driving
WO2024199192A1 (en) * 2023-03-29 2024-10-03 北京罗克维尔斯科技有限公司 Multi-sensor fusion vehicle detection method and apparatus, device, medium and vehicle

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105488454A (en) * 2015-11-17 2016-04-13 天津工业大学 Monocular vision based front vehicle detection and ranging method
CN109949364A (en) * 2019-04-01 2019-06-28 上海淞泓智能汽车科技有限公司 A kind of vehicle attitude detection accuracy optimization method based on drive test monocular cam
CN110031829A (en) * 2019-04-18 2019-07-19 北京联合大学 A kind of targeting accuracy distance measuring method based on monocular vision
CN111982072A (en) * 2020-07-29 2020-11-24 西北工业大学 A target ranging method based on monocular vision
CN113706612A (en) * 2021-10-28 2021-11-26 天地(常州)自动化股份有限公司 Underground coal mine vehicle positioning method fusing UWB and monocular vision SLAM
CN114705121A (en) * 2022-03-29 2022-07-05 智道网联科技(北京)有限公司 Vehicle pose measuring method and device, electronic equipment and storage medium




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Country or region after: China

Address after: Room 303, Zone D, Main Building of Beihang Hefei Science City Innovation Research Institute, No. 999 Weiwu Road, Xinzhan District, Hefei City, Anhui Province, 230012

Applicant after: Taoke Zhixing Technology Co., Ltd.

Address before: 100191 Quantum Silver Tower, 11th Floor, Zhichun Road, Haidian District, Beijing

Applicant before: BEIJING TAGE IDRIVER TECHNOLOGY CO.,LTD.

Country or region before: China
