Disclosure of Invention
The application provides an automatic driving operation processing method, device, equipment and storage medium, which are used for solving the problem of abnormal driving behaviors such as sudden braking and swerving caused by inaccurate positioning information and relative position information.
The technical scheme is as follows:
In a first aspect, there is provided an automatic driving operation processing method, including:
acquiring coordinate information of a target in an automatic driving scene at a current time stamp, wherein the target comprises one or more of an automatic driving vehicle, an obstacle near the automatic driving vehicle and a map element;
Determining at least two driving operations to be executed aiming at the target and a reference coordinate system corresponding to each driving operation, wherein the driving operations are divided into a first driving operation and a second driving operation according to types, the at least two driving operations at least comprise the second driving operation, the reference coordinate system corresponding to the first driving operation is a map coordinate system, the reference coordinate system corresponding to the second driving operation is an odometer coordinate system or a projection coordinate system related to the odometer coordinate system, the projection coordinate system meets the condition that the horizontal plane of the reference coordinate system is parallel to the horizontal plane of the map coordinate system, and the parameter of each coordinate axis continuously changes along with the odometer coordinate system;
Based on the association relation between driving operations, calculating a conversion matrix between different reference coordinate systems;
and carrying out coordinate system conversion on the coordinate information based on the conversion matrix, and executing corresponding automatic driving operation processing based on a conversion result.
In one possible implementation manner, the method further includes:
when the automatic driving vehicle enters a second type of scene from a first type of scene, the map information referred to when acquiring coordinate information and/or executing a driving operation is switched from global map information to local map information;
when the automatic driving vehicle enters the first type of scene from the second type of scene, the map information referred to when acquiring coordinate information and/or executing a driving operation is switched from local map information to global map information;
the first type of scene is a scene that can be covered by a high-precision map, and the second type of scene is a local scene in which position information is prone to distortion.
In one possible implementation, the first driving operation at least includes a routing operation;
The second driving operation at least comprises a sensing operation, a positioning operation, a prediction operation, a decision operation, a planning operation and a control operation;
the reference coordinate systems corresponding to the sensing operation and the positioning operation are the odometer coordinate systems, and the reference coordinate systems corresponding to the prediction operation, the decision operation, the planning operation and the control operation are the projection coordinate systems.
In one possible implementation, calculating the transformation matrix between different reference coordinate systems based on the association between driving operations includes:
Determining any two driving operations with a direct transmission relationship based on information transmission directions among the plurality of driving operations;
and when the reference coordinate systems corresponding to the two driving operations are determined to be different, determining a conversion matrix between the two different reference coordinate systems under the current timestamp according to the positioning output result.
In one possible implementation,
the projection coordinate system is an odometer horizontal projection coordinate system, and:
the conversion matrix from the odometer horizontal projection coordinate system to the map coordinate system is the projection matrix from the vehicle body coordinate system to the map coordinate system multiplied by the inverse of the projection matrix from the vehicle body coordinate system to the odometer coordinate system;
the conversion matrix from the map coordinate system to the odometer horizontal projection coordinate system is an inverse matrix of the conversion matrix from the odometer horizontal projection coordinate system to the map coordinate system;
The conversion matrix from the odometer coordinate system to the odometer horizontal projection coordinate system is the conversion matrix from the map coordinate system to the odometer horizontal projection coordinate system multiplied by the conversion matrix from the odometer coordinate system to the map coordinate system;
The conversion matrix from the odometer horizontal projection coordinate system to the odometer coordinate system is an inverse matrix of the conversion matrix from the odometer coordinate system to the odometer horizontal projection coordinate system;
The conversion matrix from the vehicle body coordinate system to the odometer horizontal projection coordinate system is the conversion matrix from the map coordinate system to the odometer horizontal projection coordinate system multiplied by the conversion matrix from the vehicle body coordinate system to the map coordinate system;
The transformation matrix from the odometer horizontal projection coordinate system to the vehicle body coordinate system is an inverse matrix of the transformation matrix from the vehicle body coordinate system to the odometer horizontal projection coordinate system;
The projection matrix is obtained from the corresponding conversion matrix by retaining the coordinates of the two axes lying in the horizontal plane and the heading angle about the axis perpendicular to that plane, and setting to zero the coordinate of the axis perpendicular to the horizontal plane, the pitch angle and the roll angle.
In a second aspect, there is provided an automatic driving operation processing apparatus including:
an acquisition module, configured to acquire coordinate information of a target in an automatic driving scene at a current time stamp, wherein the target comprises one or more of an automatic driving vehicle, an obstacle near the automatic driving vehicle and a map element;
a determining module, configured to determine at least two driving operations to be executed for the target and a reference coordinate system corresponding to each driving operation, wherein the driving operations are divided into a first driving operation and a second driving operation according to types, the at least two driving operations at least comprise the second driving operation, the reference coordinate system corresponding to the first driving operation is a map coordinate system, the reference coordinate system corresponding to the second driving operation is an odometer coordinate system or a projection coordinate system related to the odometer coordinate system, the projection coordinate system meets the condition that the horizontal plane of the reference coordinate system is parallel to the horizontal plane of the map coordinate system, and the parameters of the coordinate axes change continuously with the odometer coordinate system;
a calculation module, configured to calculate a conversion matrix between different reference coordinate systems based on the association relation between driving operations;
and a conversion module, configured to perform coordinate system conversion on the coordinate information based on the conversion matrix and execute corresponding automatic driving operation processing based on the conversion result.
In a third aspect, there is provided a computer readable storage medium having stored therein at least one instruction that is loaded and executed by a processor to implement the aspects and methods of any one of the possible implementations as described above.
In a fourth aspect, there is provided an electronic device comprising:
At least one processor, and
A memory communicatively coupled to the at least one processor, wherein,
The memory stores instructions executable by the at least one processor to enable the at least one processor to perform the aspects and methods of any one of the possible implementations described above.
In a fifth aspect, there is provided an autonomous vehicle comprising an electronic device as described above.
The technical scheme provided by the application has the beneficial effects that at least:
According to the technical scheme, in order to adapt to the driving scene change, the embodiment of the application can select proper map information to determine the relative position, and the odometer coordinate system and the projection coordinate system are used for replacing the map coordinate system so as to assist the automatic driving vehicle to accurately execute corresponding driving operation. Specifically, the coordinate information of the target at the current timestamp may be acquired first, then two or more continuous driving operations to be performed on the target and a reference coordinate system corresponding to each driving operation may be approximately determined based on the current coordinate information, where the reference coordinate system is a map coordinate system, an odometer coordinate system or a projection coordinate system, then a transformation matrix between different reference coordinate systems is calculated according to an association relationship between driving operations, finally, coordinate system transformation may be performed on the acquired coordinate information based on the calculated transformation matrix, and corresponding driving operation processing may be performed based on the transformation result. 
In the whole processing process, the coordinate information is based on an odometer coordinate system or a projection coordinate system, so that when a scene switch is encountered, the reference can be switched rapidly and flexibly between global map information and local map information, ensuring that the acquired positioning information is true and accurate. This avoids the positioning distortion caused when the map coordinate system cannot switch map types flexibly and accurately, reduces the occurrence of abnormal driving behaviors such as emergency braking, swerving and re-planning, improves the driving safety of the automatic driving vehicle, broadens the applicability to automatic driving scenes, and reduces the manual takeover rate.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the application or to delineate the scope of the application. Other features of the present application will become apparent from the description that follows.
Detailed Description
Exemplary embodiments of the present application will now be described with reference to the accompanying drawings, in which various details of the embodiments of the present application are included to facilitate understanding, and are to be considered merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the application. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
It will be apparent that the described embodiments are some, but not all, embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
It should be noted that the terminal device according to the embodiment of the present application may include, but is not limited to, smart devices such as a mobile phone, a personal digital assistant (PDA), a wireless handheld device, and a tablet computer, and the display device may include, but is not limited to, devices with display functions such as a personal computer and a television.
In addition, the term "and/or" merely describes an association relation between associated objects and indicates that three kinds of relations may exist; for example, "A and/or B" covers three cases: A exists alone, A and B both exist, and B exists alone. In addition, the character "/" herein generally indicates that the associated objects before and after it are in an "or" relationship.
It should be noted that the following description of the present application will refer to terms related to automatic driving and driving operations, and for convenience of understanding, the related terms will be explained as follows.
Manual takeover rate: the number of manual takeover interventions per fixed mileage;
Host vehicle: the autonomous vehicle itself; herein, a vehicle may also be defined as the host vehicle relative to other autonomous vehicles;
Odom coordinate system, also known as the odometer coordinate system: a non-fixed, movable coordinate system. It represents the pose of the robot relative to its starting position, calculated from encoder information (e.g. wheel odometry data) or other motion sensors (e.g. visual odometry, an inertial measurement unit, etc.);
Re-planning: during automatic driving, when the vehicle encounters an environmental change, an obstacle, a new traffic signal or another unpredictable condition, the system needs to recalculate or adjust its driving trajectory;
Three-dimensional coordinate transformation: in three-dimensional space, coordinate transformation typically involves rotation, translation and scaling. These transforms may be represented by a 4x4 homogeneous coordinate transformation matrix;
Three-dimensional rotation: in three-dimensional space, a rotational transformation can be composed of rotations about the X-axis, Y-axis and Z-axis. These rotations may be represented by Euler angles (yaw-pitch-roll) or quaternions.
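As an illustrative sketch (not part of the claimed scheme), a 4x4 homogeneous transform combining a yaw rotation with a translation can be built as follows; `make_transform` is a hypothetical helper name:

```python
import numpy as np

def make_transform(yaw: float, tx: float, ty: float, tz: float) -> np.ndarray:
    """Build a 4x4 homogeneous transform from a yaw rotation plus a translation."""
    c, s = np.cos(yaw), np.sin(yaw)
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0.0],   # rotation about the Z-axis (heading)
                 [s,  c, 0.0],
                 [0.0, 0.0, 1.0]]
    T[:3, 3] = [tx, ty, tz]      # translation column
    return T

# Transforming the source-frame origin yields the translation component:
p = np.array([0.0, 0.0, 0.0, 1.0])
print(make_transform(np.pi / 2, 1.0, 2.0, 0.0) @ p)  # → [1. 2. 0. 1.]
```

Pitch and roll rotations would add factors about the Y- and X-axes in the same way; they are omitted here for brevity.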
In view of the background that, during a scene change, information such as the relative position determined under the map coordinate system can avoid distortion only by taking appropriate map information as a reference, the application proposes an autopilot operation processing scheme. The concept of the application is to select appropriate map information to determine the relative position when the driving scene changes, and to use an odometer coordinate system and a projection coordinate system in place of the map coordinate system, so as to assist the autopilot vehicle in accurately executing the corresponding driving operation. In specific implementation, the coordinate information of the target at the current timestamp can be acquired first; then two or more continuous driving operations to be executed for the target, and a reference coordinate system corresponding to each driving operation, can be approximately determined based on the current coordinate information, where the reference coordinate system corresponding to the second driving operation is mainly an odometer coordinate system or a projection coordinate system; then a conversion matrix between different reference coordinate systems is calculated according to the association relation between the driving operations; finally, coordinate system conversion can be performed on the acquired coordinate information based on the calculated conversion matrix, and corresponding driving operation processing is performed based on the conversion result.
In the whole processing process, the coordinate information is based on an odometer coordinate system or a projection coordinate system, so that when a scene switch is encountered, the reference can be switched rapidly and flexibly between global map information and local map information, ensuring that the acquired positioning information is true and accurate. This avoids the positioning distortion caused when map types cannot be switched flexibly and accurately, reduces the occurrence of abnormal driving behaviors such as emergency braking, swerving and re-planning, improves the driving safety of the automatic driving vehicle, broadens the applicability to automatic driving scenes, and reduces the manual takeover rate.
The driving operation processing scheme according to the present application will be described in detail below by way of specific examples.
Referring to fig. 2, a schematic diagram of steps of a driving operation processing method according to an embodiment of the present application is shown. The main execution body of the driving operation processing method may be a driving operation processing device, wherein the processing device may be a software module having calculation and data processing capabilities, or may be an electronic device integrated with a similar software module. Alternatively, the execution subject of the driving operation processing method in the present application may be an autonomous vehicle or a related processing module integrated in an autonomous vehicle.
The automatic driving operation processing method shown in fig. 2 may specifically include the steps of:
Step 202, acquiring coordinate information of a target in an automatic driving scene at a current time stamp, wherein the target comprises one or more of an automatic driving vehicle, an obstacle near the automatic driving vehicle and a map element.
The coordinate information may be the position information of the target at the current time stamp, and the acquisition mode may be obtained through sensing sensor detection or other sensor calculation. The object involved in the automatic driving scene of the application can be an automatic driving vehicle, or an obstacle near the automatic driving vehicle, such as other automatic driving vehicles, pedestrians, bicycles or electric vehicles, and the like, and can also be map elements in an electronic map, such as buildings, crossroads, traffic lights, street lamps, road posts, signs, and the like. For map elements, the coordinate information of the map elements can be directly inquired and obtained from the electronic map.
It will be appreciated that the target may also be some other element that can affect the travel of an autonomous vehicle on a road, such as a hurricane or a cloud caused by weather conditions. Where the technology permits and legal regulations allow, the application can track the positions of such dynamic elements, and accurate position information can be obtained through prediction interfaces.
Step 204, determining at least two driving operations to be executed aiming at the target and a reference coordinate system corresponding to each driving operation, wherein the driving operations are divided into a first driving operation and a second driving operation according to types, the at least two driving operations at least comprise the second driving operation, the reference coordinate system corresponding to the first driving operation is a map coordinate system, the reference coordinate system corresponding to the second driving operation is an odometer coordinate system or a projection coordinate system related to the odometer coordinate system, the projection coordinate system meets the condition that the horizontal plane of the reference coordinate system is parallel to the horizontal plane of the map coordinate system, and the parameters of each coordinate axis continuously change along with the odometer coordinate system.
Optionally, in the scheme of the application, the first driving operation at least comprises a routing operation, and the second driving operation at least comprises a sensing operation, a positioning operation, a prediction operation, a decision operation, a planning operation and a control operation, wherein reference coordinate systems corresponding to the sensing operation and the positioning operation are both odometer coordinate systems, and reference coordinate systems corresponding to the prediction operation, the decision operation, the planning operation and the control operation are both projection coordinate systems.
In a specific implementation, the sensing operation is performed under odom coordinates, and the outputted obstacle (such as a vehicle, a non-motor vehicle, a pedestrian, etc.), the map element (such as a lane line, a map boundary, a crosswalk, etc.) and the like are odom coordinates. The positioning operation is performed under odom coordinates, and the position of the vehicle under odom coordinates is output. The routing operation is performed under the map coordinate system, and the planned route of the own vehicle under the map coordinate system is output. The prediction operation is performed under odom footprint coordinates. The decision operation is performed in odom footprint's coordinate system. The planning operation is performed under odom footprint coordinates. The control operation is performed under odom footprint coordinates.
In fact, the driving operation of the application also comprises a reference line operation, namely, inputting a planned route under the map coordinate system and outputting a reference line of the vehicle under the odom footprint coordinate system.
It should be understood that in the solution of the present application, the reference coordinate system corresponding to at least one of the driving operations involved is the odometer coordinate system or the projection coordinate system related to the odometer coordinate system. Since the present application mainly relates to coordinate conversion between driving operations, even for a driving operation such as a routing operation with a map coordinate system as a reference coordinate system, there is a corresponding driving operation requiring conversion of the coordinate system, for example, a reference line operation associated with the routing.
Step 206, calculating a conversion matrix between different reference coordinate systems based on the association relation between driving operations.
The association relationship may be an execution sequence relationship between driving operations, a signal transmission relationship between driving operations, or the like. The application is not limited in this regard.
Optionally, when calculating the conversion matrix between different reference coordinate systems based on the association relation between driving operations, specifically, any two driving operations with a direct transmission relation can be determined based on the information transmission directions between a plurality of driving operations, and when determining that the reference coordinate systems corresponding to the two driving operations are different, the conversion matrix between the two different reference coordinate systems under the current timestamp is determined according to the positioning output result.
For example, suppose the driving operations to be performed for the target are determined to be a sensing operation, a positioning operation and a prediction operation. There is no signal transmission relation between the sensing operation and the positioning operation, and the two execute in parallel, so no conversion matrix needs to be determined between these two driving operations. The sensing operation and the prediction operation do have a signal transmission relation: the signal transmission direction is from sensing to prediction, with no other driving operation participating in between, so the reference coordinate systems corresponding to the two driving operations are further compared. If the reference coordinate systems are the same, no processing is needed; if they are different, for example, the sensing operation corresponds to the odom coordinate system and the prediction operation corresponds to the odom footprint coordinate system, a conversion matrix between the two reference coordinate systems needs to be calculated.
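The pairing logic above can be sketched as follows. This is a hypothetical illustration: the `FRAME` mapping and the operation names are illustrative assumptions, and the check applies only to operation pairs that have a direct transmission relation.

```python
# Reference coordinate system assumed for each driving operation (illustrative).
FRAME = {
    "sensing": "odom", "positioning": "odom",
    "prediction": "odom_footprint", "decision": "odom_footprint",
    "planning": "odom_footprint", "control": "odom_footprint",
    "routing": "map",
}

def needs_conversion(src_op: str, dst_op: str) -> bool:
    """True when a direct src->dst transmission crosses reference frames,
    i.e. a conversion matrix must be computed for this pair."""
    return FRAME[src_op] != FRAME[dst_op]

needs_conversion("sensing", "prediction")  # → True (odom vs odom_footprint)
needs_conversion("prediction", "planning")  # → False (both odom_footprint)
```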
In the scheme of the application, the projection coordinate system is an odometer horizontal projection coordinate system, namely odom footprint coordinate system, and then the following conversion matrix can exist:
① odom footprint→map: the conversion matrix from the odometer horizontal projection coordinate system to the map coordinate system is the projection matrix from the vehicle body coordinate system to the map coordinate system multiplied by the inverse of the projection matrix from the vehicle body coordinate system to the odometer coordinate system;
② map→odom footprint: the conversion matrix from the map coordinate system to the odometer horizontal projection coordinate system is the inverse of the conversion matrix from the odometer horizontal projection coordinate system to the map coordinate system;
③ odom→odom footprint: the conversion matrix from the odometer coordinate system to the odometer horizontal projection coordinate system is the conversion matrix from the map coordinate system to the odometer horizontal projection coordinate system multiplied by the conversion matrix from the odometer coordinate system to the map coordinate system;
④ odom footprint→odom: the conversion matrix from the odometer horizontal projection coordinate system to the odometer coordinate system is the inverse of the conversion matrix from the odometer coordinate system to the odometer horizontal projection coordinate system;
⑤ baselink→odom footprint: the conversion matrix from the vehicle body coordinate system to the odometer horizontal projection coordinate system is the conversion matrix from the map coordinate system to the odometer horizontal projection coordinate system multiplied by the conversion matrix from the vehicle body coordinate system to the map coordinate system;
⑥ odom footprint→baselink: the conversion matrix from the odometer horizontal projection coordinate system to the vehicle body coordinate system is the inverse of the conversion matrix from the vehicle body coordinate system to the odometer horizontal projection coordinate system;
The projection matrix is obtained from the corresponding conversion matrix by retaining the coordinates of the two axes lying in the horizontal plane and the heading angle about the axis perpendicular to that plane, and setting to zero the coordinate of the axis perpendicular to the horizontal plane, the pitch angle and the roll angle.
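A minimal numerical sketch of the projection matrix and of transformation ① follows. It assumes a yaw-pitch-roll (ZYX) rotation convention for extracting the heading angle; the function names are illustrative, not part of the claimed scheme.

```python
import numpy as np

def horizontal_projection(T: np.ndarray) -> np.ndarray:
    """Project a 4x4 conversion matrix onto the horizontal plane: keep the
    x/y translation and the heading (yaw), zero out the z translation,
    the pitch and the roll."""
    yaw = np.arctan2(T[1, 0], T[0, 0])  # heading, assuming ZYX Euler order
    c, s = np.cos(yaw), np.sin(yaw)
    P = np.eye(4)
    P[:2, :2] = [[c, -s], [s, c]]       # pure-yaw rotation block
    P[:2, 3] = T[:2, 3]                 # keep x/y, drop z translation
    return P

def odom_footprint_to_map(T_base_map: np.ndarray, T_base_odom: np.ndarray) -> np.ndarray:
    """Transformation ①: proj(baselink->map) multiplied by the inverse of
    proj(baselink->odom)."""
    return horizontal_projection(T_base_map) @ np.linalg.inv(horizontal_projection(T_base_odom))
```

As a sanity check, when the two input poses coincide the result reduces to the identity, as expected when the odom and map frames are aligned.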
It should be understood that in the solution of the present application, for driving operations with different reference coordinate systems, the corresponding conversion matrix also differs from frame to frame.
Step 208, performing coordinate system conversion on the coordinate information based on the conversion matrix, and executing corresponding driving operation processing based on the conversion result.
After determining the conversion matrix between the involved driving operations, the coordinate system conversion may be performed step by step on the coordinate information and the result of the driving operation processing of the coordinate information according to the corresponding conversion matrix, and the corresponding driving operation processing may be performed at the corresponding driving operation node according to the result of each conversion.
For example, after determining the conversion matrix corresponding to the sensing-prediction, the coordinate information may be input to the sensing module to perform the sensing operation, and the result of the sensing operation may be subjected to the coordinate conversion process based on the conversion matrix, and then the corresponding prediction operation may be performed using the conversion result as the input of the prediction module.
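This sensing-to-prediction handover can be sketched as follows; `predict` is a placeholder for the real prediction module, and the function names are illustrative assumptions.

```python
import numpy as np

def convert_points(T: np.ndarray, points: np.ndarray) -> np.ndarray:
    """Apply a 4x4 conversion matrix to an (N, 3) array of coordinates."""
    homo = np.hstack([points, np.ones((len(points), 1))])  # homogeneous coords
    return (homo @ T.T)[:, :3]

def sense_to_predict(obstacles_odom, T_odom_to_footprint, predict):
    """Convert sensing output from the odom frame to the odom footprint
    frame, then hand it to the prediction module."""
    return predict(convert_points(T_odom_to_footprint, obstacles_odom))
```

With an identity conversion matrix the points pass through unchanged, which makes the step easy to unit-test in isolation.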
Optionally, in the scheme of the application, when the automatic driving vehicle drives into a second type of scene from a first type of scene, the map information referred to when acquiring the coordinate information and/or executing the driving operation is switched from global map information to local map information; when the automatic driving vehicle drives into the first type of scene from the second type of scene, the map information referred to when acquiring the coordinate information and/or executing the driving operation is switched from local map information to global map information, wherein the first type of scene is a scene that can be covered by a high-precision map, and the second type of scene is a local scene in which position information is prone to distortion.
For example, the first type of scene may be outside a tunnel and the second type of scene may be inside the tunnel. As the automatic driving vehicle drives from outside the tunnel into the tunnel and back out again, the coordinate conversion scheme in steps 202-208 supports flexible switching between the global map and the local map, ensuring stable, continuous and safe driving of the automatic driving vehicle and avoiding positioning distortion.
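The scene-boundary switch described above can be sketched as a simple selection rule; the names `MapSource` and `select_map` are hypothetical, and the real trigger (high-precision map coverage) would come from the positioning stack.

```python
from enum import Enum

class MapSource(Enum):
    GLOBAL = "global"  # high-precision map available, e.g. outside the tunnel
    LOCAL = "local"    # scene prone to positioning distortion, e.g. inside the tunnel

def select_map(in_hd_coverage: bool) -> MapSource:
    """Pick the map to reference as the vehicle crosses a scene boundary."""
    return MapSource.GLOBAL if in_hd_coverage else MapSource.LOCAL
```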
It should be noted that, part or all of the execution body in steps 202 to 208 may be an application located in the local terminal, or may be a functional unit such as a plug-in unit or a software development kit (Software Development Kit, SDK) disposed in the application located in the local terminal, or may be a processing engine located in a server on the network side, or may be a distributed system located on the network side, for example, a processing engine or a distributed system in an autopilot platform on the network side, which is not limited in this embodiment.
It will be appreciated that the application may be a native program (native app) installed on the local terminal, or may be a web page program (web app) running in a browser on the local terminal, which is not limited in this embodiment.
It should be noted that the specific implementation process provided in the present implementation manner may be combined with the various specific implementation processes provided in the foregoing implementation manners to implement the automatic driving operation processing method of the present embodiment. For details, reference may be made to the relevant content in the foregoing implementations, which is not repeated here.
For a better understanding of the method according to the embodiment of the present application, the following describes the method according to the embodiment of the present application with reference to the accompanying drawings and specific application scenarios.
Fig. 3 is a schematic flow chart of a driving operation processing scheme provided by the application. The driving operation determined by the processing thread comprises a second driving operation such as a sensing operation, a positioning operation, a predicting operation, a decision operation, a planning operation, a control operation and the like, and a first driving operation such as a routing operation, a reference line operation and the like. As shown in fig. 3, the reference coordinate system corresponding to the sensing operation and the positioning operation is odom coordinate system, the reference coordinate system corresponding to the prediction operation, the decision operation, the planning operation and the control operation is odom footprint coordinate system, the reference coordinate system corresponding to the routing operation is map coordinate system, and the reference coordinate system corresponding to the reference line operation is map coordinate system (input) and odom footprint coordinate system (output).
According to the processing manner shown in fig. 2, a corresponding reference coordinate system may be determined for each driving operation, and a corresponding transformation matrix may then be calculated according to the relationship between the driving operations. As shown in fig. 3, the sensing module serves as a starting point, and its input may be the requested map information or coordinate information acquired by other sensors. After the sensing operation is processed, the output result can be transmitted to the prediction module or the planning module. Since the reference coordinate systems corresponding to the prediction module and the planning module are the same, namely the odom footprint coordinate system, only the transformation matrix odom→odom footprint between the odom coordinate system corresponding to the sensing operation and the odom footprint coordinate system needs to be calculated. After the coordinate system conversion is performed on the output result of the sensing module using this transformation matrix, the result is transmitted to the prediction module and the planning module respectively for the corresponding driving operations. It should be understood that a driving operation here may be understood as an internal, driving-related operation performed by the corresponding module, not a physical driving maneuver. Similarly, the modules corresponding to the other driving operations may also sequentially perform the corresponding coordinate conversion and driving operations according to the flow sequence in fig. 3.
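The mapping from driving operations to reference coordinate systems, and the decision of whether a conversion is needed on a given dataflow edge, can be sketched as follows (a minimal sketch; all names are hypothetical, with the frame assignments following the description of fig. 3 above):

```python
# Hypothetical sketch: each driving operation is assigned its reference frame,
# and a conversion is scheduled only when producer and consumer frames differ.

OPERATION_FRAMES = {
    "perception": "odom",
    "localization": "odom",
    "prediction": "odom_footprint",
    "decision": "odom_footprint",
    "planning": "odom_footprint",
    "control": "odom_footprint",
    "routing": "map",
    # The reference-line operation consumes map input and produces
    # odom_footprint output, so it is modeled as two endpoints here.
    "reference_line_in": "map",
    "reference_line_out": "odom_footprint",
}

def needed_conversion(producer: str, consumer: str):
    """Return (src_frame, dst_frame) if a frame conversion is required on the
    edge producer -> consumer, or None when both share a reference frame."""
    src = OPERATION_FRAMES[producer]
    dst = OPERATION_FRAMES[consumer]
    return None if src == dst else (src, dst)

# Perception (odom) feeding prediction (odom_footprint) needs one conversion:
print(needed_conversion("perception", "prediction"))  # ('odom', 'odom_footprint')
# Prediction feeding planning shares a frame, so no conversion is needed:
print(needed_conversion("prediction", "planning"))    # None
```

Only edges whose endpoints disagree trigger a matrix computation, which matches the text above: a single odom→odom footprint matrix serves both the prediction and planning consumers.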
When map information is requested in the odom or odom footprint coordinate system, the request for related map information may be converted from the odom or odom footprint coordinate system to the map coordinate system, so that it can be accurately looked up in the map coordinate system. The requested map result is then converted from the map coordinate system back to the odom or odom footprint coordinate system, so that it can be returned to the corresponding module for processing the corresponding driving operation.
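This round trip can be sketched as follows (a minimal sketch; the function names, the toy transform, and the lookup callback are illustrative assumptions, not from the source):

```python
import numpy as np

def to_homogeneous(p):
    """Lift a 3D point to homogeneous coordinates."""
    return np.append(np.asarray(p, dtype=float), 1.0)

def transform_point(T, p):
    """Apply a 4x4 homogeneous transform T to a 3D point p."""
    return (T @ to_homogeneous(p))[:3]

def query_map_elements(T_map_from_odom, query_point_odom, map_lookup):
    # 1. odom -> map, so the lookup happens in map coordinates
    q_map = transform_point(T_map_from_odom, query_point_odom)
    results_map = map_lookup(q_map)
    # 2. map -> odom, so the calling module keeps working in its own frame
    T_odom_from_map = np.linalg.inv(T_map_from_odom)
    return [transform_point(T_odom_from_map, r) for r in results_map]

# Toy example: the map frame is the odom frame shifted by +10 m in x,
# and the lookup returns one element 1 m ahead of the query point.
T = np.eye(4); T[0, 3] = 10.0
lookup = lambda q: [q + np.array([1.0, 0.0, 0.0])]
out = query_map_elements(T, [0.0, 0.0, 0.0], lookup)
print(out[0])  # the element ends up 1 m ahead of the query point, in odom
```

The caller never leaves its own frame: the map frame is entered only for the lookup itself, which is the behavior described above.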
As shown in fig. 4, the process of the autonomous vehicle passing through a tunnel is described in terms of the stages before, during, and after the vehicle enters the tunnel. Before entering the tunnel, sensing, positioning and other operations are performed based on the global map; after entering the tunnel, the method directly switches from the global map to the local map. Since the reference coordinate system of the prediction, decision, planning and control modules throughout the whole process is the odom/odom footprint coordinate system, the vehicle coordinates remain stable and continuous during the switch, and no extra switching or redundant logic is needed. Moreover, the problems of emergency braking, swerving, re-planning and the like during positioning information distortion can be reduced, the safety is improved, and the manual takeover rate is reduced.
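The map switch described above can be sketched as follows (a minimal sketch with hypothetical names): the switch changes only which map source answers queries, while the poses consumed by downstream modules stay in the odom frame and remain continuous:

```python
# Hypothetical sketch: entering a degraded scene (e.g. a tunnel) swaps the
# map source, but the pose stream handed to the planner is untouched.

class MapSource:
    GLOBAL, LOCAL = "global", "local"

class DrivingContext:
    def __init__(self):
        self.map_source = MapSource.GLOBAL

    def on_scene_change(self, in_degraded_scene: bool):
        # e.g. a tunnel, where global positioning is prone to distortion
        self.map_source = (MapSource.LOCAL if in_degraded_scene
                           else MapSource.GLOBAL)

    def pose_for_planning(self, odom_pose):
        # The planner consumes odom/odom-footprint poses directly, so the
        # map switch introduces no jump and triggers no re-planning.
        return odom_pose

ctx = DrivingContext()
p_before = ctx.pose_for_planning((12.0, 3.0, 0.1))
ctx.on_scene_change(in_degraded_scene=True)   # vehicle enters the tunnel
p_after = ctx.pose_for_planning((12.5, 3.0, 0.1))
print(ctx.map_source, p_before, p_after)
```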
It should be noted that, for simplicity of description, the foregoing method embodiments are all described as a series of acts, but it should be understood by those skilled in the art that the present application is not limited by the order of acts described, as some steps may be performed in other orders or concurrently in accordance with the present application. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily required for the present application.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to related descriptions of other embodiments.
Fig. 5 shows a block diagram of an automatic driving operation processing device according to an embodiment of the present application. The automatic driving operation processing apparatus 500 of the present embodiment may include an acquisition module 501, a determination module 502, a calculation module 503, and a conversion module 504. The acquisition module 501 is used for acquiring coordinate information of a target in an automatic driving scene at a current timestamp, where the target includes one or more of an autonomous vehicle, an obstacle near the autonomous vehicle, and a map element. The determination module 502 is configured to determine at least two driving operations to be performed for the target, and a reference coordinate system corresponding to each driving operation, where the driving operations are classified into a first driving operation and a second driving operation according to type, the at least two driving operations at least include the second driving operation, the reference coordinate system corresponding to the first driving operation is a map coordinate system, the reference coordinate system corresponding to the second driving operation is an odometer coordinate system or a projection coordinate system related to the odometer coordinate system, the projection coordinate system satisfies that its horizontal plane is parallel to the horizontal plane of the map coordinate system, and the parameters of each coordinate axis continuously change along with the odometer coordinate system. The calculation module 503 is configured to calculate a transformation matrix between different reference coordinate systems based on the association relationship between driving operations.
And a conversion module 504, configured to perform coordinate system conversion on the coordinate information based on the conversion matrix, and perform corresponding autopilot operation processing based on a conversion result.
It should be noted that, part or all of the autopilot processing apparatus in this embodiment may be an application located at a local terminal, or may be a functional unit such as a plug-in unit or a software development kit (Software Development Kit, SDK) disposed in the application located at the local terminal, or may be a processing engine located in a server on a network side, or may be a distributed system located on the network side, for example, a processing engine or a distributed system in an autopilot platform on the network side, which is not limited in this embodiment.
It will be appreciated that the application may be a native program (native app) installed on the local terminal, or may be a web program (web app) running in a browser on the local terminal, which is not limited in this embodiment.
Optionally, in one possible implementation manner of this embodiment, when the autonomous vehicle drives from a first type of scene into a second type of scene, the conversion module 504 further switches the map information referred to when acquiring the coordinate information and/or performing the driving operation from global map information to local map information; when the autonomous vehicle drives from the second type of scene into the first type of scene, the conversion module 504 further switches that map information from local map information back to global map information. The first type of scene is a scene that can be covered by a high-precision map, and the second type of scene is a local scene that is prone to causing distortion of the positioning information.
Optionally, in one possible implementation manner of the embodiment, the first driving operation at least includes a routing operation, and the second driving operation at least includes a sensing operation, a positioning operation, a prediction operation, a decision operation, a planning operation and a control operation, wherein reference coordinate systems corresponding to the sensing operation and the positioning operation are both odometer coordinate systems, and reference coordinate systems corresponding to the prediction operation, the decision operation, the planning operation and the control operation are both projection coordinate systems.
Optionally, in one possible implementation manner of this embodiment, when calculating the conversion matrix between different reference coordinate systems based on the association relationship between the driving operations, the calculation module 503 is specifically configured to determine any two driving operations having a direct transmission relationship based on the information transmission directions between the plurality of driving operations, and, when determining that the reference coordinate systems corresponding to the two driving operations differ, to determine the conversion matrix between the two different reference coordinate systems at the current timestamp according to the positioning output result.
Optionally, in a possible implementation manner of the present embodiment, the projection coordinate system is an odometer horizontal projection coordinate system, and then:
The transformation matrix from the odometer horizontal projection coordinate system to the map coordinate system is the projection matrix from the vehicle body coordinate system to the map coordinate system multiplied by the inverse of the projection matrix from the vehicle body coordinate system to the odometer coordinate system;
the conversion matrix from the map coordinate system to the odometer horizontal projection coordinate system is an inverse matrix of the conversion matrix from the odometer horizontal projection coordinate system to the map coordinate system;
The transformation matrix from the odometer coordinate system to the odometer horizontal projection coordinate system is the transformation matrix from the map coordinate system to the odometer horizontal projection coordinate system multiplied by the transformation matrix from the odometer coordinate system to the map coordinate system;
The conversion matrix from the odometer horizontal projection coordinate system to the odometer coordinate system is an inverse matrix of the conversion matrix from the odometer coordinate system to the odometer horizontal projection coordinate system;
The transformation matrix from the vehicle body coordinate system to the odometer horizontal projection coordinate system is the transformation matrix from the map coordinate system to the odometer horizontal projection coordinate system multiplied by the transformation matrix from the vehicle body coordinate system to the map coordinate system;
The transformation matrix from the odometer horizontal projection coordinate system to the vehicle body coordinate system is an inverse matrix of the transformation matrix from the vehicle body coordinate system to the odometer horizontal projection coordinate system;
The projection matrix is obtained from the corresponding transformation matrix by retaining the coordinates of the two axes lying in the horizontal plane and the component corresponding to the heading angle, and setting to zero the coordinate of the axis perpendicular to the horizontal plane, the pitch angle, and the roll angle.
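Using 4x4 homogeneous transforms, the relations above can be sketched as follows (a minimal numpy sketch; the pose values, the ZYX Euler convention, and the function names are illustrative assumptions, not from the source):

```python
import numpy as np

def pose_to_T(x, y, z, roll, pitch, yaw):
    """Build a 4x4 homogeneous transform from a translation and ZYX Euler angles."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    R = np.array([[cy*cp, cy*sp*sr - sy*cr, cy*sp*cr + sy*sr],
                  [sy*cp, sy*sp*sr + cy*cr, sy*sp*cr - cy*sr],
                  [-sp,   cp*sr,            cp*cr]])
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = [x, y, z]
    return T

def project(T):
    """Horizontal projection as described above: keep x, y and the heading
    angle (yaw); zero the vertical axis, pitch, and roll."""
    x, y = T[0, 3], T[1, 3]
    yaw = np.arctan2(T[1, 0], T[0, 0])
    return pose_to_T(x, y, 0.0, 0.0, 0.0, yaw)

# Two localization outputs at the current timestamp (illustrative values):
T_map_body  = pose_to_T(100.0, 50.0, 2.0, 0.01, 0.02, 0.5)  # body -> map
T_odom_body = pose_to_T(10.0, 5.0, 0.3, 0.01, 0.02, 0.1)    # body -> odom

# odom_footprint -> map  =  P(body -> map) * P(body -> odom)^-1
T_map_odomfp = project(T_map_body) @ np.linalg.inv(project(T_odom_body))
# map -> odom_footprint is its inverse
T_odomfp_map = np.linalg.inv(T_map_odomfp)
# odom -> odom_footprint  =  (map -> odom_footprint) * (odom -> map)
T_map_odom = T_map_body @ np.linalg.inv(T_odom_body)
T_odomfp_odom = T_odomfp_map @ T_map_odom
# body -> odom_footprint  =  (map -> odom_footprint) * (body -> map)
T_odomfp_body = T_odomfp_map @ T_map_body
```

Because both factors of `T_map_odomfp` are horizontal projections, the resulting transform is itself planar: it has no vertical translation and rotates only about the vertical axis, which is what keeps the odometer horizontal projection frame parallel to the map's horizontal plane.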
In this embodiment, coordinate information of the target at the current timestamp may be acquired; then two or more continuous driving operations to be performed on the target, and the reference coordinate system corresponding to each driving operation, may be determined based on the current coordinate information, where the reference coordinate system is a map coordinate system, an odometer coordinate system or a projection coordinate system; then a transformation matrix between different reference coordinate systems is calculated according to the association relationship between the driving operations; finally, coordinate system transformation may be performed on the acquired coordinate information based on the calculated transformation matrix, and corresponding driving operation processing may be performed based on the transformation result. Throughout the processing, the coordinate information is based on an odometer coordinate system or a projection coordinate system, so the scheme can switch rapidly and flexibly between global map information and local map information when a scene switch is encountered, ensuring that the acquired positioning information is true and accurate. This avoids the positioning distortion caused by a map coordinate system that cannot flexibly and accurately switch map types, reduces the occurrence of abnormal driving behaviors such as emergency braking, swerving and re-planning, improves the driving safety of the autonomous vehicle, improves the applicability across automatic driving scenes, and reduces the manual takeover rate.
One embodiment of the present application provides a computer-readable storage medium having stored therein at least one instruction that is loaded and executed by a processor to implement the autopilot operation processing method as described above.
One embodiment of the present application provides an electronic device including a processor and a memory having at least one instruction stored therein, the instruction being loaded and executed by the processor to implement an autopilot operation processing method as described above.
One embodiment of the present application provides an autonomous vehicle comprising an electronic device as described above. Specifically, the autonomous vehicle may be a vehicle of the L2 class and above.
In the technical scheme of the application, the related processes of collecting, storing, using, processing, transmitting, providing, disclosing and the like of the personal information of the user accord with the regulations of related laws and regulations, and the public order is not violated.
Fig. 6 shows a schematic block diagram of an example electronic device 600 that may be used to implement an embodiment of the application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the applications described and/or claimed herein.
As shown in fig. 6, the electronic device 600 includes a computing unit 601 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 602 or a computer program loaded from a storage unit 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data required for the operation of the electronic device 600 can also be stored. The computing unit 601, ROM 602, and RAM 603 are connected to each other by a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
Various components in the electronic device 600 are connected to the I/O interface 605, including an input unit 606, such as a keyboard, mouse, etc., an output unit 607, such as various types of displays, speakers, etc., a storage unit 608, such as a magnetic disk, optical disk, etc., and a communication unit 609, such as a network card, modem, wireless communication transceiver, etc. The communication unit 609 allows the electronic device 600 to exchange information/data with other devices through a computer network, such as the internet, and/or various telecommunication networks.
The computing unit 601 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of the computing unit 601 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 601 performs the respective methods and processes described above, for example, the automatic driving operation processing method. For example, in some embodiments, the automatic driving operation processing method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 608. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 600 via the ROM 602 and/or the communication unit 609. When the computer program is loaded into the RAM 603 and executed by the computing unit 601, one or more steps of the automatic driving operation processing method described above may be performed. Alternatively, in other embodiments, the computing unit 601 may be configured to perform the automatic driving operation processing method in any other suitable manner (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuit systems, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include being implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special or general purpose programmable processor, operable to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present application may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of the present application, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user and a keyboard and a pointing device (e.g., a mouse or a trackball) by which the user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user, for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback), and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a Local Area Network (LAN), a Wide Area Network (WAN), and the Internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server incorporating a blockchain.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps described in the present disclosure may be performed in parallel, sequentially, or in a different order, provided that the desired results of the technical solutions of the present disclosure are achieved, and are not limited herein.
The above embodiments do not limit the scope of the present application. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present application should be included in the scope of the present application.