WO2018161278A1 - Driverless car system and control method thereof, and car - Google Patents
Driverless car system and control method thereof, and car
- Publication number
- WO2018161278A1 (PCT/CN2017/075983)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- information
- obstacle
- driverless
- visual
- radar
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W30/00—Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W40/00—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
- B60W40/02—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to ambient conditions
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W40/00—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
- B60W40/10—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to vehicle motion
- B60W40/105—Speed
Definitions
- the invention relates to the technical field of automobiles, in particular to an unmanned vehicle system, a control method thereof and an automobile.
- current self-driving car technology generally gives a vehicle the ability to operate and drive itself.
- advanced instruments such as cameras, radar sensors, and laser detectors are installed on the car to sense road speed limits and roadside traffic signs, while a map is used to navigate to the desired destination.
- the driverless system mainly uses on-board sensors to perceive the surrounding environment of the vehicle, and controls the steering and speed of the vehicle according to the road, vehicle position, and obstacle information obtained through this perception, so that the vehicle can travel safely and reliably on the road.
- a driverless car is a kind of smart car that relies mainly on an in-vehicle, computer-based intelligent pilot to drive without a human operator.
- the difficulty for a driverless system lies in distinguishing roadside traffic signs from the surrounding environment; poor discrimination may cause the driverless system to collect inaccurate data.
- a driverless car system, comprising:
- an environment sensing subsystem configured to collect vehicle information and surrounding environment information of the driverless car, the surrounding environment information including image information and three-dimensional coordinate information of the surrounding environment;
- a data fusion subsystem configured to fuse the image information and three-dimensional coordinate information of the surrounding environment and extract obstacle identification information;
- a path planning decision subsystem configured to plan a travel route based on the vehicle information, the obstacle identification information, and travel destination information; and
- a travel control subsystem configured to generate a control command according to the travel route and control the driverless car according to the control command.
- the above driverless car system fuses surrounding environment information, including image information and three-dimensional coordinate information, through a data fusion subsystem, and extracts obstacle information, lane line information, traffic sign information, and tracking information of dynamic obstacles, thereby improving the recognition ability and accuracy for the surrounding environment.
- the path planning decision subsystem plans the travel route according to the information extracted by the data fusion subsystem and the travel destination information, and the travel control subsystem generates a control command according to the travel route and controls the driverless car according to the control command, thereby realizing driverless operation with extremely high safety performance.
- a car including the above-described driverless car system.
- a control method for a driverless car, comprising:
- collecting vehicle information and surrounding environment information of the driverless car, the surrounding environment information including image information and three-dimensional coordinate information of the surrounding environment;
- fusing the image information and three-dimensional coordinate information of the surrounding environment and extracting obstacle identification information;
- planning a travel route based on the vehicle information, the obstacle identification information, and travel destination information; and
- generating a control command according to the travel route, and controlling the driverless car according to the control command.
- FIG. 1 is a structural block diagram of a driverless car system in an embodiment;
- FIG. 2 is a structural diagram of an environment sensing subsystem in an embodiment;
- FIG. 3 is a structural diagram of a data fusion subsystem in an embodiment;
- FIG. 4 is a flow chart of a control method of a driverless car in an embodiment.
- An unmanned vehicle system includes an environment sensing subsystem 10, a data fusion subsystem 20, a path planning decision subsystem 30, and a travel control subsystem 40.
- the environment sensing subsystem 10 is configured to collect vehicle information and surrounding environment information of the driverless vehicle, wherein the surrounding environment information includes image information of the surrounding environment and three-dimensional coordinate information.
- the data fusion subsystem 20 is configured to fuse the image information and the three-dimensional coordinate information of the surrounding environment and extract the obstacle identification information.
- the path planning decision subsystem 30 is configured to plan a travel route based on the vehicle information, the obstacle identification information, and the travel destination information.
- the travel control subsystem 40 is configured to generate a control command according to the travel route, and control the driverless car according to the control command.
- the above-mentioned driverless car system integrates surrounding environment information including image information and three-dimensional coordinate information through the data fusion subsystem 20, and extracts obstacle identification information, thereby improving the recognition ability and accuracy of the surrounding environment information.
- the path planning decision subsystem 30 plans a travel route based on the vehicle information, the obstacle identification information, and the travel destination information, and the travel control subsystem 40 generates a control command according to the travel route and controls the driverless car according to the control command, thereby achieving driverless operation with extremely high safety performance.
- the environment sensing subsystem 10 includes a vision sensor 110 and a radar 120.
- the vision sensor 110 is mainly composed of one or two image sensors, sometimes together with a light projector and other auxiliary equipment.
- the image sensor can be a laser scanner, a linear-array or area-array CCD camera, a TV camera, or a modern digital camera.
- the vision sensor 110 is installed on the driverless car to collect the surrounding environment information of the driverless car, gathering real-time road condition information near the driverless car, including obstacle information, lane line information, traffic sign information, and dynamic tracking information of obstacles.
- the collected surrounding environment information is image information of the surrounding environment, and may also be referred to as video information.
- the radar 120 is used to collect three-dimensional coordinate information of the surrounding environment of the driverless car.
- a plurality of radars 120 are included in the driverless car system.
- the plurality of radars 120 include a lidar and a millimeter wave radar.
- the lidar can be a mechanical multi-beam lidar, which detects the position and velocity of a target mainly by emitting laser beams; the echo intensity information of the lidar can also be used to detect and track obstacles. Lidar has the advantages of a wide detection range and high detection accuracy.
- the wavelength of millimeter-wave radar lies between that of centimeter waves and light waves, so it combines the advantages of microwave guidance and photoelectric guidance; a millimeter-wave seeker has small volume, light weight, and high spatial resolution.
- the millimeter-wave seeker has a strong ability to penetrate fog, smoke, and dust.
- using lidar and millimeter-wave radar simultaneously overcomes lidar's inability to perform in extreme weather, which can greatly improve the detection performance of the driverless car.
- the environment sensing subsystem 10 is also used to collect vehicle information of the driverless car.
- the vehicle information includes the current geographical location and time of the driverless car, the posture of the vehicle, and the current running speed.
- the environment sensing subsystem 10 further includes a GPS positioning navigator 130, an inertial measurement unit (IMU) 140, and a vehicle speed acquisition module 150.
- the GPS positioning navigator 130 collects the current geographic location and time of the driverless car; while the driverless car is driving, the global positioning device installed in the car continuously obtains the car's exact position, further improving safety.
- the inertial measurement unit 140 is used to measure the vehicle attitude of the driverless car.
- the vehicle speed collecting module 150 is configured to acquire the speed at which the driverless car is currently running.
- the data fusion subsystem 20 is configured to fuse the image information and three-dimensional coordinate information and extract the obstacle identification information, wherein the obstacle identification information includes lane line information, obstacle information, traffic sign information, and tracking information of dynamic obstacles.
- the data fusion subsystem 20 includes a lane line fusion module 210, an obstacle recognition fusion module 220, a traffic identification fusion module 230, and an obstacle dynamic tracking fusion module 240.
- the lane line fusion module 210 is configured to superimpose or exclude surrounding environment information collected by the vision sensor 110 and the radar 120, and extract lane line information.
- the obstacle recognition fusion module 220 is configured to fuse the surrounding environment information collected by the vision sensor 110 and the radar 120, and extract obstacle information.
- the traffic identifier fusion module 230 is configured to detect surrounding environment information collected by the vision sensor 110 and the radar 120, and extract traffic identification information.
- the obstacle dynamic tracking fusion module 240 is configured to fuse the surrounding environment information collected by the vision sensor 110 and the radar 120, and extract tracking information of dynamic obstacles.
- the lane line fusion module 210 includes a visual lane line detection unit 211 and a radar lane line detection unit 213.
- the visual lane line detecting unit 211 is configured to process the image information and extract the visual lane line information.
- the visual lane line detecting unit 211 performs preprocessing such as denoising, enhancement, and segmentation on the image information acquired by the visual sensor 110, and extracts visual lane line information.
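- as an illustration only, the preprocessing chain above might be sketched as follows; OpenCV and the specific operators (Gaussian blur for denoising, Canny for edge enhancement, a probabilistic Hough transform for line extraction) are assumptions, since the embodiment names the steps only in general terms, and all function names are hypothetical:

```python
# A sketch of visual lane line extraction, assuming OpenCV; the operators
# and thresholds are illustrative, not taken from the embodiment.
import cv2
import numpy as np

def extract_visual_lane_lines(bgr_image: np.ndarray) -> np.ndarray:
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    denoised = cv2.GaussianBlur(gray, (5, 5), 0)       # denoising
    edges = cv2.Canny(denoised, 50, 150)               # enhancement
    # Segmentation: keep the lower half of the frame, where lanes appear.
    mask = np.zeros_like(edges)
    h = edges.shape[0]
    mask[h // 2 :, :] = 255
    roi_edges = cv2.bitwise_and(edges, mask)
    # Fit line segments to the remaining edge pixels.
    lines = cv2.HoughLinesP(roi_edges, 1, np.pi / 180, 50,
                            minLineLength=40, maxLineGap=20)
    return lines if lines is not None else np.empty((0, 1, 4), dtype=np.int32)
```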
- the radar lane line detecting unit 213 is configured to extract information on the road surface on which the driverless car is driving, and to acquire lane outer contour information according to the road surface information.
- the radar lane line detecting unit 213 calibrates the three-dimensional coordinate information of the driving ground acquired by the lidar and calculates the discrete points in the three-dimensional coordinate information, where a discrete point can be defined as a point whose distance from an adjacent point is greater than a preset range.
- the discrete points are filtered out, and the ground position information is fitted using the random sample consensus (RANSAC) method to obtain the lane outer contour information, that is, the radar lane line information.
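- a minimal sketch of RANSAC ground fitting on lidar points, under the assumption of a simple plane model; the iteration count and inlier threshold are illustrative values not given in the embodiment:

```python
# A minimal RANSAC plane fit for the ground, as suggested by the RANSAC
# step above. Parameter values are illustrative assumptions.
import numpy as np

def ransac_ground_plane(points: np.ndarray, n_iters: int = 200,
                        inlier_dist: float = 0.1):
    """Fit a plane n.x + d = 0 to Nx3 lidar points; return (model, inlier mask)."""
    rng = np.random.default_rng(0)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = None
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                      # skip degenerate (collinear) samples
            continue
        normal /= norm
        d = -normal @ sample[0]
        inliers = np.abs(points @ normal + d) < inlier_dist
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, np.append(normal, d)
    return best_model, best_inliers
```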
- the lane line fusion module 210 fuses (superimposes) or mutually excludes the acquired visual lane line information and lane outer contour information to obtain real-time lane line information. Through the lane line fusion module 210, the accuracy of lane line identification can be improved, and failure to acquire lane line information can be avoided.
- the obstacle recognition fusion module 220 includes a visual obstacle recognition unit 221 and a radar obstacle recognition unit 223.
- the visual obstacle recognition unit 221 is configured to segment the background information and the foreground information according to the image information, and identify the foreground information to obtain the visual obstacle information having the color information.
- the visual obstacle recognition unit 221 processes the image information by means of pattern recognition or machine learning, using a background update algorithm to create a background model and segment the foreground; the segmented foreground is then identified to obtain visual obstacle information having color information.
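- the background update algorithm is not specified in the embodiment; the sketch below uses OpenCV's MOG2 background subtractor as a stand-in, and the contour-based obstacle candidates are likewise an illustrative assumption:

```python
# A sketch of foreground segmentation with a background-update algorithm.
# MOG2 and the contour filtering are assumptions for illustration only.
import cv2

def foreground_obstacles(frames):
    """Yield (frame, bounding_boxes) for each video frame."""
    subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)
    for frame in frames:
        mask = subtractor.apply(frame)               # updates the background model
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                                cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5)))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        boxes = [cv2.boundingRect(c) for c in contours
                 if cv2.contourArea(c) > 500]        # drop small blobs
        yield frame, boxes
```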
- the radar obstacle recognition unit 223 is configured to identify radar obstacle information having three-dimensional coordinate information within a first preset height range.
- the radar obstacle recognition unit 223 preprocesses the surrounding environment information of the driverless car acquired by the lidar, removes the ground information, and filters and recognizes the three-dimensional coordinate information of the surrounding environment within the first preset height range. A region of interest (ROI) is detected based on the constraints of the lane line information, where the region of interest outlines the area to be processed in the form of a box, a circle, an ellipse, an irregular polygon, or the like. The data in the identified region of interest is rasterized and clustered into obstacle blocks, and the original lidar point cloud data corresponding to each obstacle block is subjected to secondary clustering so that under-segmented blocks are further divided.
- the point cloud data of the secondary clusters is used as a training sample set, a classifier model is generated from the training sample set, and the trained model is then used to classify and identify the obstacle blocks after secondary clustering, acquiring radar obstacle information having three-dimensional coordinate information.
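- a sketch of the rasterization, clustering, and classification steps; the grid cell size, the geometric features, and the classifier choice are assumptions, since the embodiment does not specify them:

```python
# A sketch of grid rasterization and obstacle-block clustering. The 0.2 m
# cell size, the features, and the classifier are illustrative assumptions.
import numpy as np
from scipy import ndimage

def cluster_obstacle_blocks(points: np.ndarray, cell: float = 0.2):
    """Rasterize Nx3 non-ground lidar points into an occupancy grid and
    return one point cluster per connected block of occupied cells."""
    ij = np.floor(points[:, :2] / cell).astype(int)
    ij -= ij.min(axis=0)                       # shift grid indices to >= 0
    grid = np.zeros(tuple(ij.max(axis=0) + 1), dtype=bool)
    grid[ij[:, 0], ij[:, 1]] = True
    labels, n = ndimage.label(grid)            # obstacle-block clustering
    point_labels = labels[ij[:, 0], ij[:, 1]]
    return [points[point_labels == k] for k in range(1, n + 1)]

def block_features(block: np.ndarray) -> np.ndarray:
    """Simple geometric features (bounding-box extent, point count)."""
    extent = block.max(axis=0) - block.min(axis=0)
    return np.concatenate([extent, [len(block)]])

# Classifier training on labeled blocks could then follow, e.g.:
#   from sklearn.ensemble import RandomForestClassifier
#   X = np.stack([block_features(b) for b in labeled_blocks])
#   clf = RandomForestClassifier().fit(X, y)
#   pred = clf.predict(block_features(new_block).reshape(1, -1))
```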
- the obstacle recognition fusion module 220 is configured to fuse the visual obstacle information and the radar obstacle information to acquire the obstacle information. Visual obstacle information can fail in strong light or in scenes where the lighting changes rapidly, whereas the radar 120 detects obstacles with its own active light source and is therefore highly stable. When the driverless car is driving in strong light or rapidly changing light, the obstacle recognition fusion module 220 can superimpose the visual obstacle information and the radar obstacle information, so that accurate obstacle information is obtained even in such scenes.
- the obstacle information acquired by the visual obstacle recognition unit 221 contains rich red-green-blue (RGB) information and has high pixel resolution.
- the obstacle information including the color information and the three-dimensional information can be simultaneously acquired.
- through the obstacle recognition fusion module 220, the false recognition rate can be reduced, the recognition accuracy can be improved, and safe driving can be further ensured.
- the traffic sign fusion module 230 includes a visual traffic sign detection unit 231 and a radar traffic sign detection unit 233.
- the visual traffic sign detecting unit 231 detects the image information and extracts the visual traffic sign information.
- the visual traffic sign detecting unit 231 detects the image information, processes the image information by means of pattern recognition or machine learning, and acquires visual traffic sign information, wherein the visual traffic sign information includes red, green and blue RGB color information.
- the radar traffic sign detecting unit 233 is configured to extract ground traffic sign information, and is further configured to detect suspended traffic sign information within a second preset height range.
- the radar traffic sign detecting unit 233 extracts traffic sign line points according to the reflection intensity gradient and then fits a curve to them to obtain the ground traffic sign information (ground traffic sign lines); it can also, according to the obstacle clustering principle, obtain targets within the second preset height range whose shape is a standard rectangle or circle, and define such targets as the suspended traffic sign information.
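- one way the reflection-intensity extraction and curve fitting could look, assuming an intensity-gradient threshold and a quadratic fit (both illustrative):

```python
# A sketch of extracting ground sign lines from lidar reflection intensity.
# The gradient threshold and the quadratic fit are assumptions.
import numpy as np

def ground_sign_line(ground_points: np.ndarray, intensity: np.ndarray,
                     grad_thresh: float = 10.0) -> np.ndarray:
    """ground_points: Nx3 points on the road plane, ordered along a scan ring;
    intensity: N reflection intensities. Returns quadratic fit y = f(x)."""
    grad = np.abs(np.gradient(intensity))        # paint edges reflect differently
    marks = ground_points[grad > grad_thresh]    # candidate sign-line points
    if len(marks) < 3:
        raise ValueError("not enough marking points to fit a curve")
    # Fit a quadratic curve to the marking points (the curve fitting step).
    return np.polyfit(marks[:, 0], marks[:, 1], deg=2)
```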
- the traffic sign fusion module 230 is configured to determine the location of traffic signs according to the ground traffic sign information and the suspended traffic sign information. Within the acquired location area, the category or kind of the traffic sign is identified based on the visual traffic sign information acquired by the visual traffic sign detecting unit 231.
- the traffic sign fusion module 230 can accurately acquire various ground or suspended traffic sign information, ensuring that the driverless car drives safely on the premise of obeying the traffic rules.
- the obstacle dynamic tracking fusion module 240 includes a visual dynamic tracking unit 241 and a radar dynamic tracking unit 243.
- the visual dynamic tracking unit 241 is configured to identify the image information, and locate the dynamic obstacle in the adjacent two consecutive frames, and obtain the color information of the dynamic obstacle.
- the visual dynamic tracking unit 241 processes the image information (video image) sequence by means of pattern recognition or machine learning, identifies and locates the dynamic obstacle in successive frames of the video image, and acquires the color information of the obstacle.
- the radar dynamic tracking unit 243 is used to track three-dimensional coordinate information of the dynamic obstacle.
- the radar dynamic tracking unit 243 combines the nearest-neighbor matching algorithm and the multi-hypothesis tracking algorithm to determine, through target association, that the obstacles in two or more adjacent frames are the same target.
- the three-dimensional position information and speed information of the target are obtained from the lidar measurement data, and the associated target is then tracked.
- a Kalman filter or a particle filter can be used to filter the measured and predicted states of the target to obtain relatively accurate three-dimensional coordinate information of the dynamic obstacle.
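- a minimal constant-velocity Kalman filter for one tracked obstacle, in the spirit of the filtering step above; the motion model and noise magnitudes are assumptions:

```python
# A sketch of constant-velocity Kalman filtering for one tracked obstacle.
# The motion model and noise values are illustrative assumptions.
import numpy as np

class ObstacleKalman:
    """State: [x, y, z, vx, vy, vz]; measurements: 3-D lidar positions."""
    def __init__(self, first_pos: np.ndarray, dt: float = 0.1):
        self.x = np.concatenate([first_pos, np.zeros(3)])
        self.P = np.eye(6)
        self.F = np.eye(6)
        self.F[:3, 3:] = dt * np.eye(3)           # constant-velocity model
        self.H = np.hstack([np.eye(3), np.zeros((3, 3))])
        self.Q = 0.01 * np.eye(6)                 # process noise (assumed)
        self.R = 0.05 * np.eye(3)                 # lidar noise (assumed)

    def step(self, z: np.ndarray) -> np.ndarray:
        # Predict the next state.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Update with the associated lidar measurement z.
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(6) - K @ self.H) @ self.P
        return self.x[:3]                         # filtered 3-D position
```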
- the obstacle dynamic tracking fusion module 240 is configured to combine the color information of the dynamic obstacle and the three-dimensional coordinate information of the obstacle to obtain the tracking information of the dynamic obstacle. Visual dynamic obstacle information is easily disturbed by strong light or illumination changes and carries no precise three-dimensional coordinates of the dynamic obstacle, but it contains rich RGB color information. The dynamic obstacle information acquired by the lidar, by contrast, has no RGB color information, so when objects become occluded and then separate during movement, it cannot identify which specific dynamic object is which; the lidar-acquired dynamic obstacle information is, however, stable.
- the obstacle dynamic tracking fusion module 240 can fuse the color information of the dynamic obstacle acquired from the image information with the three-dimensional coordinate information of the dynamic obstacle acquired by the lidar, obtaining dynamic obstacles that carry both color information and three-dimensional coordinate information and thus enabling accurate tracking of dynamic obstacles.
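- the embodiment does not say how a lidar cluster is matched to a camera detection; one common approach, shown purely as an assumption, is to project the cluster centroid into the image and match it to a bounding box:

```python
# A sketch of fusing camera color detections with lidar clusters by
# projecting each cluster centroid into the image. The pinhole projection
# and the fixed intrinsics/extrinsics are illustrative assumptions.
import numpy as np

def associate_color(cluster_centroid: np.ndarray, boxes, K: np.ndarray,
                    T_cam_from_lidar: np.ndarray):
    """boxes: list of (x, y, w, h) camera detections; K: 3x3 intrinsics;
    T_cam_from_lidar: 4x4 extrinsic transform. Returns the matching box
    or None if the centroid projects outside every box."""
    p = T_cam_from_lidar @ np.append(cluster_centroid, 1.0)  # to camera frame
    if p[2] <= 0:                     # behind the camera
        return None
    uv = (K @ p[:3])[:2] / p[2]       # pinhole projection to pixels
    for (x, y, w, h) in boxes:
        if x <= uv[0] <= x + w and y <= uv[1] <= y + h:
            return (x, y, w, h)
    return None
```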
- the path planning decision subsystem 30 is configured to plan the travel path based on the vehicle information, the obstacle identification information extracted by the data fusion subsystem 20, and the travel destination information.
- the path planning decision subsystem 30 can plan the travel route based on the vehicle information acquired by the environment sensing subsystem 10 (the current geographic location and time of the driverless car, the vehicle attitude, and the current running speed), the surrounding environment information extracted by the data fusion subsystem 20 (the obstacle information, lane line information, traffic sign information, and dynamic tracking information of obstacles), and the travel destination information of the driverless car.
- the path planning decision subsystem 30 uses the planned travel path to determine the position of the driverless car at the next moment, and calculates the control data of the driverless car, including angular velocity, linear velocity, and travel direction.
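- a sketch of turning the next planned position into linear and angular velocity commands; the pure-pursuit-style geometry and the gains are assumptions, not taken from the embodiment:

```python
# A sketch of deriving the control data above from the next waypoint.
# The proportional gain and speed law are illustrative assumptions.
import math

def control_data(pose_xy_theta, waypoint_xy, target_speed=5.0):
    """pose_xy_theta: (x, y, heading) of the car; waypoint_xy: next position.
    Returns (linear_velocity, angular_velocity)."""
    x, y, theta = pose_xy_theta
    dx, dy = waypoint_xy[0] - x, waypoint_xy[1] - y
    heading_to_wp = math.atan2(dy, dx)
    # Heading error wrapped to [-pi, pi].
    err = (heading_to_wp - theta + math.pi) % (2 * math.pi) - math.pi
    angular_velocity = 1.5 * err                               # steering gain
    linear_velocity = target_speed * max(0.0, math.cos(err))   # slow in turns
    return linear_velocity, angular_velocity
```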
- the travel control subsystem 40 is configured to generate a control command based on the travel path and control the driverless vehicle in accordance with the control command.
- the travel control subsystem 40 generates control commands based on the control data calculated by the path planning decision subsystem 30, including control of the vehicle's travel speed, travel direction (forward, backward, left, and right), throttle, and gear position, thereby ensuring that the driverless car can drive safely and smoothly and realizing the driverless function.
- the driverless vehicle system further includes a communication subsystem 50 for transmitting the travel path planned by the path planning decision subsystem 30 to the external monitoring center in real time.
- the driving status of the driverless car is monitored by an external monitoring center.
- the above-mentioned driverless car system fuses surrounding environment information, including image information and three-dimensional coordinate information, through the data fusion subsystem 20, and extracts obstacle information, lane line information, traffic sign information, and tracking information of dynamic obstacles, thereby improving the recognition ability and accuracy for the surrounding environment information.
- the path planning decision subsystem 30 plans the travel route according to the information extracted by the data fusion subsystem 20 and the travel destination information, and the travel control subsystem 40 generates a control command according to the travel route and controls the driverless car according to the control command, thereby achieving driverless operation with extremely high safety performance.
- an embodiment of the present invention also provides an automobile including the driverless automobile system in each of the above embodiments.
- in the car, the data fusion subsystem 20 of the driverless car system can fuse the surrounding environment information, including the image information and the three-dimensional coordinate information, and extract the obstacle information, lane line information, traffic sign information, and tracking information of dynamic obstacles, improving the recognition ability and accuracy for information about the surrounding environment.
- the path planning decision subsystem 30 plans the travel route according to the information extracted by the data fusion subsystem 20 and the travel destination information, and the travel control subsystem 40 generates a control command according to the travel route and controls the driverless car according to the control command, thereby achieving driverless operation with extremely high safety performance.
- a control method for a driverless car is also provided.
- FIG. 4 is a flow chart of the control method of a driverless car. The control method is based on the driverless car system described above, and includes:
- Step S410: collect vehicle information and surrounding environment information of the driverless car.
- the environment sensing subsystem 10 in the driverless car system includes a vision sensor 110 and a radar 120.
- the vision sensor 110 is mainly composed of one or two image sensors, sometimes together with a light projector and other auxiliary equipment.
- the image sensor can be a laser scanner, a linear-array or area-array CCD camera, a TV camera, or a modern digital camera.
- the surrounding environment information of the driverless car is collected by the vision sensor 110 installed on the driverless car, gathering real-time road condition information near the driverless car, including obstacle information, lane line information, traffic sign information, and dynamic tracking information of obstacles.
- the collected surrounding environment information is image information of the surrounding environment, and may also be referred to as video information.
- the three-dimensional coordinate information of the surrounding environment of the driverless car is collected by the radar 120.
- a plurality of radars 120 are included in the driverless car system.
- the plurality of radars 120 include a lidar and a millimeter wave radar.
- the lidar can be a mechanical multi-beam lidar, which detects the position and velocity of a target mainly by emitting laser beams; the echo intensity information of the lidar can also be used to detect and track obstacles. Lidar has the advantages of a wide detection range and high detection accuracy.
- the wavelength of millimeter-wave radar lies between that of centimeter waves and light waves, so it combines the advantages of microwave guidance and photoelectric guidance; a millimeter-wave seeker has small volume, light weight, and high spatial resolution.
- the millimeter-wave seeker has a strong ability to penetrate fog, smoke, and dust.
- using lidar and millimeter-wave radar simultaneously overcomes lidar's inability to perform in extreme weather, which can greatly improve the detection performance of the driverless car.
- Step S420: fuse the image information and three-dimensional coordinate information of the surrounding environment and extract obstacle identification information.
- the data fusion subsystem 20 in the driverless car system can fuse the image information and the three-dimensional coordinate information and extract the obstacle identification information, wherein the obstacle identification information includes lane line information, obstacle information, traffic sign information, and tracking information of dynamic obstacles.
- the image information is subjected to pre-processing such as denoising, enhancement, and segmentation, and the visual lane line information is extracted.
- the acquired three-dimensional coordinate information is processed to obtain the road surface information of the driverless car, and the lane outer contour information is obtained according to the road surface information, that is, the radar lane line information is acquired; the acquired visual lane line information and the lane outer contour information are then fused (superimposed) or mutually excluded to obtain real-time lane line information.
- background information and foreground information are segmented according to the image information, and the foreground information is identified to obtain visual obstacle information having color information; radar obstacle information having three-dimensional coordinate information is identified within the first preset height range; and the visual obstacle information and the radar obstacle information are fused to obtain the obstacle information, which yields accurate obstacle information even in scenes with strong light or rapidly changing light.
- the image information is detected and the visual traffic identification information is extracted.
- the traffic sign line points are extracted according to the reflection intensity gradient, a curve is fitted to them to obtain the ground traffic sign information (ground traffic sign lines), and targets whose shape is a standard rectangle or circle within the second preset height range can be obtained according to the obstacle clustering principle and defined as the suspended traffic sign information.
- the location of the traffic sign information is determined according to the ground traffic sign information and the suspended traffic sign information.
- the category or kind of the traffic sign is identified based on the visual traffic sign information acquired by the visual traffic sign detecting unit 231.
- the traffic sign fusion module 230 can accurately acquire various ground or suspended traffic sign information, ensuring that the driverless car drives safely on the premise of obeying the traffic rules.
- the image information is identified, the dynamic obstacle is located across adjacent consecutive frames, and the color information of the dynamic obstacle is obtained.
- the nearest-neighbor matching algorithm and the multi-hypothesis tracking algorithm are combined to determine that the obstacles in two or more adjacent frames are the same target.
- the three-dimensional position information and speed information of the target are obtained from the lidar measurement data, and the associated target is then tracked.
- a Kalman filter or a particle filter can be used to filter the measured and predicted states of the target to obtain relatively accurate three-dimensional coordinate information of the dynamic obstacle.
- the color information of the dynamic obstacle and the three-dimensional coordinate information of the obstacle are combined to obtain the tracking information of the dynamic obstacle.
- fusing the color information of the dynamic obstacle obtained from the image information with the three-dimensional coordinate information of the dynamic obstacle acquired by the lidar yields a dynamic obstacle having both color information and three-dimensional coordinate information, enabling precise tracking of the dynamic obstacle.
- Step S430: plan the travel route based on the vehicle information, the obstacle identification information, and the travel destination information.
- the travel route is planned based on the vehicle information acquired by the environment sensing subsystem 10 (the current geographic location and time of the driverless car, the vehicle attitude, and the current running speed), the surrounding environment information extracted by the data fusion subsystem 20 (obstacle information, lane line information, traffic sign information, and dynamic tracking information of obstacles), and the travel destination information of the driverless car.
- Step S440: generate a control command according to the travel route, and control the driverless car according to the control command.
- the control command is generated according to the control data calculated by the path planning decision subsystem 30, and includes control of the vehicle's travel speed, travel direction (forward, backward, left, and right), throttle, and gear position, thereby ensuring that the driverless car can drive safely and smoothly and realizing the driverless function.
- control method of the driverless vehicle further includes: collecting the current geographic location and time of the driverless car; measuring the vehicle attitude of the driverless car; and obtaining the current running speed of the driverless car.
- the control method of the driverless vehicle further includes the step of collecting vehicle information of the driverless vehicle.
- the vehicle information includes the current geographical location and time of the driverless car, the posture of the vehicle, and the current running speed.
- the current location and time of the driverless car can be collected by the GPS positioning navigator 130.
- while the driverless car is driving, the global positioning device installed in the car continuously obtains the car's exact position, further improving safety.
- the vehicle attitude of the driverless car is measured by the inertial measurement unit 140.
- the speed of the current running of the driverless car is obtained by the vehicle speed collecting module 150.
- the control method of the driverless car further includes: transmitting the travel path planned by the path planning decision subsystem to an external monitoring center in real time.
- the driving status of the driverless car is monitored by an external monitoring center.
- the above control method of the driverless car can improve the recognition ability and accuracy for the surrounding environment information by fusing the collected surrounding environment information and extracting the obstacle identification information.
- the travel route is planned according to the vehicle information, the obstacle identification information, and the travel destination information; a control command is generated according to the travel route, and the driverless car is controlled according to the control command, thereby achieving driverless operation with extremely high safety performance.
Landscapes
- Engineering & Computer Science (AREA)
- Automation & Control Theory (AREA)
- Transportation (AREA)
- Mechanical Engineering (AREA)
- Physics & Mathematics (AREA)
- Mathematical Physics (AREA)
- Traffic Control Systems (AREA)
Abstract
A driverless car system comprises: an environment sensing subsystem (10) for collecting vehicle information and surrounding environment information, the surrounding environment information comprising image information and three-dimensional coordinate information of the surrounding environment; a data fusion subsystem (20) for fusing the image information and three-dimensional coordinate information of the surrounding environment and extracting obstacle identification information; a path planning decision subsystem (30) for planning a travel route according to the vehicle information, the obstacle identification information, and travel destination information; and a travel control subsystem (40) for generating a control command according to the travel route and controlling the driverless car according to the control command. The system improves the safety performance of the driverless car. A control method for the driverless car and a car are also provided.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/CN2017/075983 WO2018161278A1 (fr) | 2017-03-08 | 2017-03-08 | Driverless car system and control method thereof, and car |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/CN2017/075983 WO2018161278A1 (fr) | 2017-03-08 | 2017-03-08 | Driverless car system and control method thereof, and car |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2018161278A1 (fr) | 2018-09-13 |
Family
ID=63447212
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CN2017/075983 WO2018161278A1 (fr), Ceased | Driverless car system and control method thereof, and car | 2017-03-08 | 2017-03-08 |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2018161278A1 (fr) |
- 2017-03-08: WO application PCT/CN2017/075983 filed, published as WO2018161278A1 (not active, Ceased)
Patent Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN104648382A (zh) * | 2013-11-25 | 2015-05-27 | 王健 | Automatic vehicle driving system based on binocular vision |
| JP2016192150A (ja) * | 2015-03-31 | 2016-11-10 | トヨタ自動車株式会社 | Vehicle travel control device |
| US20160363647A1 (en) * | 2015-06-15 | 2016-12-15 | GM Global Technology Operations LLC | Vehicle positioning in intersection using visual cues, stationary objects, and gps |
| CN105151043A (zh) * | 2015-08-19 | 2015-12-16 | 内蒙古麦酷智能车技术有限公司 | System and method for emergency avoidance of a driverless car |
| CN105946620A (zh) * | 2016-06-07 | 2016-09-21 | 北京新能源汽车股份有限公司 | Electric vehicle and active speed-limit control method and system therefor |
Cited By (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN109934164A (zh) * | 2019-03-12 | 2019-06-25 | 杭州飞步科技有限公司 | Data processing method and device based on trajectory safety degree |
| CN112543877A (zh) * | 2019-04-03 | 2021-03-23 | 华为技术有限公司 | Positioning method and positioning apparatus |
| CN112543877B (zh) * | 2019-04-03 | 2022-01-11 | 华为技术有限公司 | Positioning method and positioning apparatus |
| US12001517B2 (en) | 2019-04-03 | 2024-06-04 | Huawei Technologies Co., Ltd. | Positioning method and apparatus |
| CN112947495A (zh) * | 2021-04-25 | 2021-06-11 | 北京三快在线科技有限公司 | Model training method, and control method and apparatus for an unmanned device |
| WO2023116344A1 (fr) * | 2021-12-23 | 2023-06-29 | 清华大学 | Driverless driving test method, driverless driving test system, and computer device |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN107161141B (zh) | Driverless car system and car | |
| CN206691107U (zh) | Driverless car system and car | |
| US12140446B2 (en) | Automatic annotation of environmental features in a map during navigation of a vehicle | |
| KR102785870B1 (ko) | Automated object annotation using fused camera/LiDAR data points | |
| CN111102986B (zh) | Automatic generation of reduced-size maps for vehicle navigation and spatiotemporal localization | |
| CN104943684B (zh) | Driverless car control system and car having same | |
| CN113791621B (zh) | Method and system for docking an automatic-driving tractor with an aircraft | |
| Tsugawa et al. | An architecture for cooperative driving of automated vehicles | |
| EP3647733A1 (fr) | Automatic annotation of environmental features in a map during navigation of a vehicle | |
| US20200346654A1 (en) | Vehicle Information Storage Method, Vehicle Travel Control Method, and Vehicle Information Storage Device | |
| KR102717432B1 (ko) | Merging LiDAR information and camera information | |
| AU2019419781A1 (en) | Vehicle using spatial information acquired using sensor, sensing device using spatial information acquired using sensor, and server | |
| KR101510745B1 (ko) | Unmanned autonomous driving system for a vehicle | |
| CN111273673A (zh) | Automatic-driving following method and system for an unmanned vehicle, and unmanned vehicle | |
| WO2021006441A1 (fr) | Method for collecting road sign information using a mobile mapping system | |
| KR102604298B1 (ko) | Apparatus and method for landmark position estimation, and computer-readable recording medium storing a computer program programmed to perform the method | |
| WO2020141694A1 (fr) | Vehicle using spatial information acquired using a sensor, sensing device using spatial information acquired using a sensor, and server | |
| CN114091513B (zh) | Situational awareness method and system for assisted remote-control driving of ground unmanned platforms | |
| WO2018161278A1 (fr) | Driverless car system and control method thereof, and car | |
| CN114998436B (zh) | Object annotation method and apparatus, electronic device, and storage medium | |
| CN112061138A (zh) | Vehicle eccentricity mapping | |
| CN116935281A (zh) | Online monitoring method and device for abnormal behavior in motor vehicle lanes based on radar and video | |
| JP3227247B2 (ja) | Travel path detection device | |
| CN114274978A (zh) | Obstacle avoidance method for a driverless logistics vehicle | |
| Franke et al. | Towards holistic autonomous obstacle detection in railways by complementing of on-board vision with UAV-based object localization |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 17899898; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 23/01/2020) |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 17899898; Country of ref document: EP; Kind code of ref document: A1 |