
CN114332818B - Obstacle detection method and device and electronic equipment - Google Patents


Info

Publication number
CN114332818B
CN114332818B (application CN202111633794A)
Authority
CN
China
Prior art keywords
obstacle
data
vehicle
matching
module
Prior art date
Legal status
Active
Application number
CN202111633794.1A
Other languages
Chinese (zh)
Other versions
CN114332818A (en)
Inventor
杨健 (Yang Jian)
张甲甲 (Zhang Jiajia)
Current Assignee
Apollo Intelligent Connectivity Beijing Technology Co Ltd
Original Assignee
Apollo Intelligent Connectivity Beijing Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Apollo Intelligent Connectivity Beijing Technology Co Ltd filed Critical Apollo Intelligent Connectivity Beijing Technology Co Ltd
Priority to CN202111633794.1A priority Critical patent/CN114332818B/en
Publication of CN114332818A publication Critical patent/CN114332818A/en
Application granted granted Critical
Publication of CN114332818B publication Critical patent/CN114332818B/en


Landscapes

  • Traffic Control Systems (AREA)

Abstract

The disclosure provides an obstacle detection method and apparatus and electronic equipment, relating to artificial-intelligence fields such as intelligent traffic, environment sensing and automatic driving. The specific implementation scheme is as follows: to determine whether an obstacle perceived by the road side is a false detection, first obstacle data in a detected road perceived by the road side equipment, second obstacle data in the detected road perceived by the vehicle, and vehicle data are acquired, all with the same acquisition time; the first obstacle corresponding to the first obstacle data is matched against the second obstacle corresponding to the second obstacle data; and the detection result of the first obstacle is then determined from the matching result together with the vehicle data. This automates the detection of whether an obstacle perceived by the road side equipment is a false detection, thereby effectively improving obstacle detection efficiency.

Description

Obstacle detection method and device and electronic equipment
Technical Field
The disclosure relates to the technical field of image processing, in particular to artificial-intelligence fields such as intelligent traffic, environment sensing and automatic driving, and more particularly to an obstacle detection method and apparatus and electronic equipment.
Background
Obstacle data perceived by road side equipment has important applications in many fields. Taking driving assistance as an example, the road side equipment typically sends the perceived obstacle data to an autonomous vehicle, which combines the obstacle data to realize assisted driving.
Therefore, the accuracy of the obstacle data perceived by the road side equipment is of paramount importance. If an obstacle identified from that data is a false detection, the accuracy of the road-side perceived obstacle data is compromised. How to detect whether an obstacle identified from road-side perceived obstacle data is a false detection is therefore a problem to be solved by those skilled in the art.
Disclosure of Invention
The disclosure provides an obstacle detection method, apparatus and electronic equipment that can automatically detect whether an obstacle perceived by the road side is a false detection, thereby effectively improving obstacle detection efficiency.
According to a first aspect of the present disclosure, there is provided a detection method of an obstacle, which may include:
acquiring first obstacle data in a detected road perceived by road side equipment, second obstacle data in the detected road perceived by a vehicle, and vehicle data; the first obstacle data, the second obstacle data and the vehicle data have the same acquisition time.
And matching the first obstacle corresponding to the first obstacle data with the second obstacle corresponding to the second obstacle data to obtain a matching result.
Determining a detection result of the first obstacle according to the matching result and the vehicle data; wherein the detection result comprises false detection or non-false detection.
According to a second aspect of the present disclosure, there is provided a detection apparatus of an obstacle, which may include:
an acquisition unit configured to acquire first obstacle data in a detected road perceived by a roadside apparatus, second obstacle data in the detected road perceived by a vehicle, and vehicle data; the first obstacle data, the second obstacle data and the vehicle data have the same acquisition time.
And the matching unit is used for matching the first obstacle corresponding to the first obstacle data with the second obstacle corresponding to the second obstacle data to obtain a matching result.
A processing unit configured to determine a detection result of the first obstacle according to the matching result and the vehicle data; wherein the detection result comprises false detection or non-false detection.
According to a third aspect of the present disclosure, there is provided an electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of detecting an obstacle as described in the first aspect above.
According to a fourth aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing the computer to execute the obstacle detection method of the first aspect described above.
According to a fifth aspect of the present disclosure, there is provided a computer program product comprising: a computer program stored in a readable storage medium, from which at least one processor of an electronic device can read, the at least one processor executing the computer program causing the electronic device to perform the method of detecting an obstacle according to the first aspect.
According to the technical scheme of the disclosure, whether the obstacle perceived by the road side equipment is a false detection obstacle or not can be automatically detected, so that the detection efficiency of the obstacle is effectively improved.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The drawings are for a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a schematic illustration of a road-side perceived obstacle and a vehicle perceived obstacle provided by an embodiment of the present disclosure;
fig. 2 is a flow chart of a method for detecting an obstacle according to a first embodiment of the present disclosure;
FIG. 3 is a schematic illustration of interactions between a roadside device and a vehicle provided by an embodiment of the disclosure;
fig. 4 is a schematic path diagram of obstacle data in a vehicle-aware full-path coverage scenario provided by an embodiment of the present disclosure;
FIG. 5 is a schematic path diagram of obstacle data in another vehicle-aware full path coverage scenario provided by an embodiment of the present disclosure;
FIG. 6 is a schematic path diagram of obstacle data in yet another vehicle-aware full path coverage scenario provided by an embodiment of the present disclosure;
FIG. 7 is a schematic illustration of a road-side perceived obstacle and a vehicle perceived obstacle provided by an embodiment of the disclosure;
FIG. 8 is a schematic illustration of another road-side perceived obstacle and vehicle perceived obstacle provided by an embodiment of the disclosure;
fig. 9 is a flowchart of a method for matching a first obstacle corresponding to first obstacle data with a second obstacle corresponding to second obstacle data according to a second embodiment of the disclosure;
FIG. 10 is a schematic illustration of a vehicle being matched to a first obstacle provided by an embodiment of the present disclosure;
FIG. 11 is a schematic illustration of an intersection division provided by an embodiment of the present disclosure;
FIG. 12 is a schematic illustration of a third obstacle to second obstacle matching provided by an embodiment of the present disclosure;
fig. 13 is a flowchart of a method for determining a detection result of a first obstacle according to a matching combination and vehicle data according to a third embodiment of the present disclosure;
fig. 14 is a schematic view of a positional relationship of an obstacle and a vehicle provided by an embodiment of the present disclosure;
fig. 15 is a schematic structural view of an obstacle detecting device provided according to a third embodiment of the present disclosure;
fig. 16 is a schematic block diagram of an electronic device provided by an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
In embodiments of the present disclosure, "at least one" means one or more, and "a plurality" means two or more. "And/or" describes an association between associated objects and indicates that three relationships may exist; for example, "A and/or B" may represent: A alone, both A and B, or B alone, where A and B may be singular or plural. In the text of the present disclosure, the character "/" generally indicates that the objects before and after it are in an "or" relationship. Furthermore, in the embodiments of the present disclosure, "first", "second", "third", "fourth", "fifth" and "sixth" merely distinguish different objects and have no other special meaning.
The technical scheme provided by the embodiments of the disclosure can be applied to fields such as intelligent transportation, environment sensing and automatic driving. Taking driving assistance as an example, accurate perception of obstacle data by the road side equipment is essential so that assisted driving can rely on accurate obstacle data.
Whether the road side equipment perceives obstacle data accurately is closely related to its perception capability. In the following description, "perception by a road side device" is abbreviated as "road side perception". When evaluating road side perception capability, one important factor is whether an obstacle identified from road-side perceived obstacle data is a false detection: if false detections exist, the accuracy of the perceived obstacle data suffers, and with it the road side perception capability. In the following description, "an obstacle identified from road-side perceived obstacle data" is abbreviated as "a road-side perceived obstacle". How to detect whether a road-side perceived obstacle is a false detection is thus a problem to be solved by those skilled in the art.
At present, to detect whether a road-side perceived obstacle is a false detection, the road side equipment first collects a scene image of the road and performs obstacle recognition on it to extract the obstacle data in the scene image, which corresponds to the road-side perceived obstacles; in addition, staff manually label the collected scene images, marking the obstacles in the road, and these manually labeled obstacles serve as the ground truth; the road-side perceived obstacles are then compared with the manually labeled obstacles, and whether a road-side perceived obstacle is a false detection is determined from the comparison result. However, manual labeling is labor-intensive, which limits the efficiency of obstacle detection.
A false detection obstacle is understood to be an obstacle perceived by the road side that is not present among the manually labeled obstacles.
To automatically detect whether a road-side perceived obstacle is a false detection, it is observed that a vehicle has high perception capability within its perceivable area. The obstacles corresponding to the obstacle data perceived by the vehicle within that area can therefore be taken as ground truth: the road-side perceived obstacles are matched against the vehicle-perceived obstacles, and whether a road-side perceived obstacle is a false detection is determined jointly from the matching result and the vehicle data. This automates the detection of whether an obstacle perceived by the road side equipment is a false detection, effectively improving obstacle detection efficiency.
As an example, referring to fig. 1, fig. 1 is a schematic diagram of road-side perceived obstacles and vehicle perceived obstacles provided by an embodiment of the present disclosure. The left diagram in fig. 1 shows the obstacle data perceived by the vehicle from the vehicle-end view; the corresponding obstacles are obstacle 1 and obstacle 2 (note that the obstacle data perceived by the vehicle does not include the vehicle itself). The right diagram in fig. 1 shows the obstacle data perceived by the road side device from the road side view; the corresponding obstacles are obstacle 1, obstacle 2, obstacle 3, obstacle 4 and obstacle 5. Among these 5 road-side perceived obstacles, obstacle 1 and obstacle 2 are accurately perceived obstacles, obstacle 5 is a false detection obstacle, and obstacle 3 and obstacle 4 are non-false detection obstacles.
Based on the above technical conception, the embodiments of the present disclosure provide a method for detecting an obstacle, and the method for detecting an obstacle provided by the present disclosure will be described in detail by way of specific embodiments. It is to be understood that the following embodiments may be combined with each other and that some embodiments may not be repeated for the same or similar concepts or processes.
Example 1
Fig. 2 is a flowchart of an obstacle detection method according to a first embodiment of the present disclosure. The method may be performed by software and/or a hardware device, for example an on-board terminal of a vehicle or a server. Taking the on-board terminal as an example, and referring to fig. 2, the obstacle detection method may include:
s201, acquiring first obstacle data in a detection road perceived by a road side device, second obstacle data in a detection road perceived by a vehicle and vehicle data; the first obstacle data, the second obstacle data and the vehicle data are identical in acquisition time.
For example, the obstacle data may include the type, position and size of the obstacle, set according to actual needs, and may also include other information such as the speed of the obstacle. The embodiments of the disclosure are described taking type, position and size as an example, but are not limited thereto.
For example, the vehicle data may likewise include the type, position and size of the vehicle, set according to actual needs, and may also include other information such as the speed of the vehicle. Again, the embodiments of the disclosure are described taking type, position and size as an example, but are not limited thereto.
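The fields listed above can be sketched as simple record types (a minimal illustration; the field names, coordinate convention and units are assumptions, not prescribed by the patent):

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class ObstacleData:
    obstacle_type: str             # e.g. "car", "pedestrian" (illustrative values)
    position: Tuple[float, float]  # (x, y) in a shared map frame, metres
    size: Tuple[float, float]      # (length, width), metres
    speed: Optional[float] = None  # m/s; optional, per the description

@dataclass
class VehicleData:
    vehicle_type: str
    position: Tuple[float, float]
    size: Tuple[float, float]
    speed: Optional[float] = None
```

Both record types carry the same core fields, which is what later makes it possible to treat the ego vehicle itself as a candidate when resolving unmatched road-side detections.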
For example, the first obstacle data may be received from another electronic device (such as the first obstacle data collected by the roadside device), looked up from local storage, or obtained by other means set according to actual needs; the embodiments of the present disclosure do not specifically limit how the first obstacle data is obtained.
Similarly, the second obstacle data may be received from another electronic device, looked up from local storage, or obtained by other means set according to actual needs; the embodiments of the present disclosure do not specifically limit how the second obstacle data is obtained.
Taking reception of first obstacle data sent by a road side device as an example, fig. 3 is an interaction schematic diagram between the road side device and a vehicle. The road side device senses obstacle information in the road through its road side perception system; this obstacle information includes the first obstacle data and the perception time. The road side device encodes the perceived obstacle information and sends it to the vehicle through a Road Side Unit (RSU) over a wireless link. It can be understood that the road side device senses obstacle information in real time and sends it to the vehicle in real time. To distinguish the two sources, the obstacle information perceived by the road side device is recorded as first obstacle information, and the obstacle information perceived by the vehicle as second obstacle information.
Correspondingly, the vehicle receives the encoded first obstacle information through its On Board Unit (OBU) and forwards it to the on-board computing unit, which decodes it to recover the first obstacle information perceived by the road side equipment. In addition, the vehicle's on-board perception system senses obstacle information in the road in real time — recorded as second obstacle information and comprising the second obstacle data and its perception time — and sends it to the on-board computing unit. The positioning system in the vehicle also locates the vehicle's position in real time and sends it to the on-board computing unit. Based on the first obstacle information perceived by the road side equipment, the second obstacle information perceived by the on-board perception system, and the vehicle information, the on-board computing unit determines whether a road-side perceived obstacle is a false detection. The vehicle information comprises multiple sets of vehicle data collected in real time.
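The encode/send/decode exchange described above can be mocked with a thin serialisation layer (JSON is purely illustrative here; the patent does not specify a wire format, and the field names are assumptions):

```python
import json

def encode_obstacle_info(obstacles, perception_time):
    """Road side: pack the perceived obstacle data with its perception time."""
    payload = {
        "perception_time": perception_time,
        "obstacles": [{"id": o["id"], "x": o["x"], "y": o["y"]} for o in obstacles],
    }
    return json.dumps(payload).encode("utf-8")

def decode_obstacle_info(message):
    """On-board computing unit: recover obstacle data and perception time."""
    payload = json.loads(message.decode("utf-8"))
    return payload["obstacles"], payload["perception_time"]
```

Carrying the perception time in every message is what later enables the time-alignment step, since road-side and on-board streams arrive asynchronously.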
The road side equipment has the capability of converting objects in a collected scene image of the detected road into obstacle data with a type, coordinate position and size; the vehicle has the same capability.
As can be seen from the above description, the obstacle information acquired by the on-board terminal includes the first obstacle information perceived by the road side device and the second obstacle information perceived by the vehicle's on-board perception system. For receiving the first obstacle information, the distance between the vehicle and the center point of the detected road intersection is computed in real time from the vehicle's high-precision positioning information; once the vehicle is detected to have driven within 150 m of the center point, the first obstacle information perceived by the road side equipment is received by the vehicle and stored, together with the acquired vehicle information comprising the multiple sets of vehicle data.
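The 150 m trigger described above amounts to a simple distance gate on the vehicle's high-precision position (a minimal sketch; planar Euclidean distance in a shared map frame is an assumption):

```python
import math

def should_store_roadside_info(vehicle_xy, intersection_center_xy,
                               trigger_radius_m=150.0):
    """Start storing road-side obstacle information once the vehicle is
    within trigger_radius_m of the detected intersection's center point
    (150 m in the description)."""
    dx = vehicle_xy[0] - intersection_center_xy[0]
    dy = vehicle_xy[1] - intersection_center_xy[1]
    return math.hypot(dx, dy) <= trigger_radius_m
```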
After acquiring the first obstacle information sent in real time by the road side equipment, the second obstacle information perceived in real time by the vehicle, and the vehicle information, the on-board terminal performs time alignment so that false detections can be judged accurately: according to a common acquisition time, first initial obstacle data, second initial obstacle data and initial vehicle data sharing that acquisition time are screened out of the accumulated first obstacle information, second obstacle information and vehicle information; whether a road-side perceived obstacle is a false detection is then determined jointly from these three. The first initial obstacle data is screened from the first obstacle information, the second initial obstacle data from the second obstacle information, and the initial vehicle data from the vehicle information. For example, the initial vehicle data includes the vehicle position and the vehicle size.
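The time-alignment step — selecting, from the accumulated streams, the records that share one acquisition time — can be sketched as follows (exact-timestamp matching is an assumption; a real system would likely align to the nearest timestamp within a tolerance):

```python
def time_align(first_info, second_info, vehicle_info):
    """Each argument is a dict mapping acquisition timestamp -> data.
    Return (timestamp, first, second, vehicle) tuples for the timestamps
    present in all three streams, in chronological order."""
    common = sorted(set(first_info) & set(second_info) & set(vehicle_info))
    return [(t, first_info[t], second_info[t], vehicle_info[t]) for t in common]
```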
It should be noted that, in the embodiment of the present disclosure, when the first initial obstacle data perceived by the road side device covers a full-path scene, the corresponding second initial obstacle data perceived by the vehicle should also cover a full-path scene, so that the coverage areas of the initial obstacle data are the same and obstacle matching can be performed well. Figs. 4, 5 and 6 are path diagrams of obstacle data in three vehicle-perceived full-path coverage scenarios provided by embodiments of the disclosure; by combining these three different path modes, the vehicle can perceive obstacle data over the full path.
For example, referring to fig. 7, fig. 7 is a schematic diagram of road-side perceived obstacles and vehicle perceived obstacles according to an embodiment of the present disclosure, in which the first initial obstacle data perceived by the road side device from the road side view has the same acquisition time as the second initial obstacle data perceived by the vehicle from the vehicle-end view. The left diagram in fig. 7 shows the obstacles corresponding to the second initial obstacle data perceived by the vehicle: obstacle 1, obstacle 2, and 4 obstacles not labeled with numbers. The right diagram in fig. 7 shows the obstacles corresponding to the first initial obstacle data perceived by the road side device: obstacle 1, obstacle 2, obstacle 3, obstacle 4, obstacle 5, the vehicle, and 2 obstacles not labeled with numbers.
When detecting whether the 7 obstacles corresponding to the first initial obstacle data perceived by the road side are false detections, the obstacles corresponding to the second initial obstacle data perceived by the vehicle are taken as the ground truth. Since the vehicle-perceived obstacles serve as the truth for judging whether a road-side perceived obstacle is a false detection, the accuracy of the obstacle data perceived by the vehicle must be ensured.
If all the second initial obstacle data perceived by the vehicle lie within the detection range corresponding to the vehicle's perception capability, the first initial obstacle data may be directly taken as the first obstacle data and the second initial obstacle data as the second obstacle data. If, however, as with the obstacle data shown in fig. 7 perceived by the vehicle at the position shown there, the two unnumbered obstacles in the left diagram lie outside that detection range, the second initial obstacle data must be screened further. In general, the vehicle's perception capability corresponds to a detection range centered on the vehicle with a radius of about 60 meters. Obstacle data within this range is screened out of the second initial obstacle data and taken as the second obstacle data, whose corresponding obstacles serve as the ground truth; correspondingly, obstacle data within the range is screened out of the first initial obstacle data and taken as the first obstacle data; and vehicle data within the range is likewise screened out of the initial vehicle data. In this way the first obstacle data in the detected road perceived by the roadside apparatus, the second obstacle data perceived by the vehicle, and the vehicle data are acquired.
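The screening step above is a circular range filter centered on the vehicle (a minimal sketch; the dict-based record shape is an assumption, and the 60 m radius is the approximate value given in the description):

```python
def within_perception_range(items, vehicle_xy, radius_m=60.0):
    """Keep only the records (obstacle or vehicle data with "x"/"y" keys)
    inside the circular detection range centered on the vehicle."""
    vx, vy = vehicle_xy
    return [it for it in items
            if (it["x"] - vx) ** 2 + (it["y"] - vy) ** 2 <= radius_m ** 2]
```

The same filter is applied to the first initial obstacle data, the second initial obstacle data and the initial vehicle data, so that all three cover the identical area before matching.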
In combination with the schematic diagram of road-side perceived obstacles and vehicle perceived obstacles shown in fig. 7, once the detection range is defined by the vehicle's perception capability, only the obstacles corresponding to the second obstacle data within that range are used as ground truth, and only whether the obstacles corresponding to the first obstacle data within that range are false detections is determined. For example, referring to fig. 8, a schematic diagram of another road-side perceived obstacle and vehicle perceived obstacle according to an embodiment of the present disclosure: in the left diagram of fig. 8, the obstacles corresponding to the second initial obstacle data perceived by the vehicle from the vehicle-end view are obstacle 1 and obstacle 2; in the right diagram, the obstacles corresponding to the first initial obstacle data perceived by the road side device from the road side view are obstacle 1, obstacle 2, obstacle 3, obstacle 4, obstacle 5 and the vehicle.
For example, embodiments of the present disclosure may further account for the limits of the vehicle's perception line of sight: within the intersection of the detected road the vehicle may have blind areas — line-of-sight blind zones — in which it cannot perceive obstacle data, so additional area restrictions may be applied when screening the obstacle data within the detection range corresponding to the vehicle's perception capability. Moreover, the vehicle can perceive obstacle data both within the lane lines of the intersection and at the periphery of the road, whereas the road side equipment can only perceive obstacle data within the road, and over a limited range; false detection therefore concerns only obstacles within the road, and obstacles at the periphery of the road are not considered.
After the first obstacle data in the detected road perceived by the roadside device and the second obstacle data in the detected road perceived by the vehicle are acquired, the first obstacle corresponding to the first obstacle data may be matched against the second obstacle corresponding to the second obstacle data, that is, the following S202 is executed:
S202, matching a first obstacle corresponding to the first obstacle data with a second obstacle corresponding to the second obstacle data to obtain a matching result.
Wherein the matching result includes a match or a mismatch.
The number of the first obstacles may be one or more, and may be specifically set according to actual needs; likewise, the number of the second obstacles may be one or more, and may be specifically set according to actual needs.
When the first obstacle corresponding to the first obstacle data is matched with the second obstacle corresponding to the second obstacle data, for a given first obstacle, if there exists a second obstacle that matches the first obstacle, it is indicated that the first obstacle is an obstacle accurately perceived by the road side; conversely, if no second obstacle matches the first obstacle, the first obstacle may be an obstacle erroneously detected by the road side device. Therefore, in order to accurately determine whether the first obstacle is an obstacle erroneously detected by the road side device, vehicle data may be further combined, and whether the first obstacle is an erroneously detected obstacle may be determined jointly according to the matching result and the vehicle data, that is, the following S203 is executed:
S203, determining a detection result of the first obstacle according to the matching result and the vehicle data; wherein the detection result comprises false detection or non-false detection.
It can be seen that, in the embodiment of the present disclosure, when determining whether the obstacle perceived by the roadside apparatus is a false detection obstacle, first obstacle data in the detection road perceived by the roadside apparatus, second obstacle data in the detection road perceived by the vehicle, and vehicle data may be acquired respectively; the first obstacle data, the second obstacle data and the vehicle data are the same in acquisition time; matching a first obstacle corresponding to the first obstacle data with a second obstacle corresponding to the second obstacle data; and then, according to the matching result and the vehicle data, determining the detection result of the first obstacle, and realizing the automatic detection of whether the obstacle perceived by the road side equipment is a false detection obstacle, thereby effectively improving the detection efficiency of the obstacle.
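As a purely illustrative sketch (not part of the claimed embodiments), the overall flow of S201 to S203 can be expressed in code. Here each obstacle and the vehicle are assumed to be given as a (cx, cy, w, h) tuple of center position and size; the function names and the tuple representation are assumptions of this sketch, and the blind-area check of S1301/S1302 is deliberately left out and only noted in the comments:

```python
def rect(cx, cy, w, h):
    """Axis-aligned rectangle (x1, y1, x2, y2) from a center point and a size."""
    return (cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2)

def overlap_area(a, b):
    """Overlapping area of two rectangles; 0 if they do not intersect."""
    ox = min(a[2], b[2]) - max(a[0], b[0])
    oy = min(a[3], b[3]) - max(a[1], b[1])
    return max(ox, 0.0) * max(oy, 0.0)

def false_detections(first, second, ego, threshold):
    """Sketch of S201-S203: first = roadside obstacles, second = vehicle
    obstacles, ego = the vehicle itself, all at the same acquisition time.

    1. Remove the ego vehicle from the roadside data (largest overlap, S901).
    2. A roadside obstacle matches when its largest overlap with any
       vehicle-perceived obstacle exceeds the preset threshold (S902).
    3. Unmatched obstacles are returned as candidate false detections; the
       full method would still exclude those lying in the vehicle's
       perception blind area (S1301/S1302).
    """
    ego_r = rect(*ego)
    ego_idx = max(range(len(first)),
                  key=lambda i: overlap_area(rect(*first[i]), ego_r))
    third = [o for i, o in enumerate(first) if i != ego_idx]
    return [o for o in third
            if max((overlap_area(rect(*o), rect(*s)) for s in second),
                   default=0.0) <= threshold]
```

The later embodiments refine each of these three steps in turn.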
Based on the embodiment shown in fig. 2, in S202, when the first obstacle corresponding to the first obstacle data is matched with the second obstacle corresponding to the second obstacle data, it is considered that the second obstacle data in the detected road perceived by the vehicle does not include the vehicle itself. For accurate matching, the first obstacle data perceived by the road side should likewise not include obstacle data of the vehicle taken as an obstacle. Therefore, before matching, it may be determined whether the first obstacle data perceived by the road side includes obstacle data of the vehicle as an obstacle; if it does not, the first obstacle corresponding to the first obstacle data may be directly matched with the second obstacle corresponding to the second obstacle data, and the detection result of the first obstacle may be determined directly according to the matching result and the vehicle data; the specific implementation can be seen in the technical scheme of the third embodiment.
If the first obstacle data perceived by the road side includes the obstacle data of the vehicle as an obstacle, then in order to accurately detect whether a road-side perceived obstacle is a false detection obstacle, and thereby accurately evaluate the perception capability of the road side device, the obstacle data of the vehicle as an obstacle may be removed from the road-side perceived first obstacle data before matching. Next, how to match the first obstacle corresponding to the first obstacle data with the second obstacle corresponding to the second obstacle data in S202 described above will be described in detail through the second embodiment shown in fig. 9 below.
Example two
Fig. 9 is a flowchart of a method for matching a first obstacle corresponding to first obstacle data with a second obstacle corresponding to second obstacle data, which may also be performed by a software and/or hardware device, according to a second embodiment of the disclosure. For example, referring to fig. 9, the method may include:
S901, removing obstacle data of the vehicle serving as an obstacle from the first obstacle data to obtain third obstacle data.
When removing the obstacle data of the vehicle serving as an obstacle from the first obstacle data, the vehicle may be matched with the first obstacles corresponding to the first obstacle data, and the first obstacle matched with the vehicle is determined from among them; the determined first obstacle matched with the vehicle is the vehicle as perceived by the road side. The obstacle data of the first obstacle matched with the vehicle is then removed from the first obstacle data, so that third obstacle data not including the vehicle can be obtained.
When the vehicle is respectively matched with each first obstacle, a first rectangular area corresponding to the vehicle may be determined according to the vehicle data; a second rectangular area corresponding to the first obstacle may be determined; the overlapping area of the first rectangular area and the second rectangular area, which may be recorded as a first overlapping area, is then calculated; and the first obstacle matched with the vehicle is determined from the first obstacles according to the first overlapping area. The first obstacle matched with the vehicle is the vehicle itself, and the obstacle data of that first obstacle is then removed from the first obstacle data, that is, the obstacle data of the vehicle serving as an obstacle is removed from the first obstacle data, thereby obtaining the third obstacle data.
For example, when determining the first rectangular area corresponding to the vehicle according to the vehicle data, the vehicle position included in the vehicle data may be determined as the center point of the rectangular area, and the size of the rectangular area may be determined according to the vehicle size included in the vehicle data, so as to determine the first rectangular area corresponding to the vehicle.
For example, when determining the second rectangular area corresponding to the first obstacle, the position of the obstacle included in the first obstacle data may be determined as a center point of the rectangular area, and the size of the rectangular area may be determined according to the size of the obstacle included in the first obstacle data, so as to determine the second rectangular area corresponding to the first obstacle.
For example, when determining a first obstacle matched with the vehicle from the first obstacles according to the first overlapping area, and removing the obstacle data of the first obstacle matched with the vehicle from the first obstacle data, the first obstacle corresponding to the maximum first overlapping area may be determined according to the first overlapping area; determining a first obstacle corresponding to the largest first overlapping area as a first obstacle matched with the vehicle; and removing the obstacle data of the first obstacle corresponding to the maximum first overlapping area from the first obstacle data, thereby obtaining third obstacle data.
For example, referring to the schematic diagrams of the road-side perceived obstacle and the vehicle perceived obstacle shown in fig. 8, it is assumed that the obstacles corresponding to the road-side perceived first obstacle data include the obstacle 1, the obstacle 2, the obstacle 3, the obstacle 4, the obstacle 5, and the vehicle. When determining the first obstacle matched with the vehicle from these 6 obstacles, referring to fig. 10 (fig. 10 is a schematic diagram of matching the vehicle with the first obstacle according to an embodiment of the disclosure), the vehicle may be respectively matched with each of the 6 obstacles; during matching, the first overlapping area between the first rectangular area corresponding to the vehicle and the second rectangular area corresponding to each of the 6 obstacles may be respectively calculated. The first obstacle corresponding to the maximum first overlapping area is determined as the first obstacle matched with the vehicle, and its obstacle data is removed from the first obstacle data perceived by the road side, that is, the obstacle data of the vehicle serving as an obstacle is removed from the road-side perceived first obstacle data, thereby obtaining third obstacle data; the obstacles corresponding to the third obstacle data include the obstacle 1, the obstacle 2, the obstacle 3, the obstacle 4 and the obstacle 5.
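The removal of the ego vehicle by the maximum first overlapping area described above can be sketched as follows. This is illustrative only: the (cx, cy, w, h) tuple form and the helper names are assumptions, and the rectangle helpers are repeated so the snippet is self-contained:

```python
def rect(cx, cy, w, h):
    """Axis-aligned rectangle (x1, y1, x2, y2) from a center point and a size."""
    return (cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2)

def overlap_area(a, b):
    """Overlapping area of two rectangles; 0 if they do not intersect."""
    ox = min(a[2], b[2]) - max(a[0], b[0])
    oy = min(a[3], b[3]) - max(a[1], b[1])
    return max(ox, 0.0) * max(oy, 0.0)

def remove_ego(first_obstacles, ego):
    """S901: the first obstacle whose rectangle has the largest first
    overlapping area with the ego vehicle's rectangle is taken to be the
    vehicle itself and is removed, yielding the third obstacle data."""
    ego_rect = rect(*ego)
    overlaps = [overlap_area(rect(*o), ego_rect) for o in first_obstacles]
    ego_idx = max(range(len(first_obstacles)), key=lambda i: overlaps[i])
    return [o for i, o in enumerate(first_obstacles) if i != ego_idx]
```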
After removing the obstacle data of the vehicle as an obstacle from the first obstacle data, a third obstacle corresponding to the third obstacle data and a second obstacle corresponding to the second obstacle data may be matched, that is, the following S902 is performed:
S902, matching a third obstacle corresponding to the third obstacle data with a second obstacle corresponding to the second obstacle data to obtain a matching result.
For example, when the third obstacle corresponding to the third obstacle data is matched with the second obstacle corresponding to the second obstacle data, the rectangular area corresponding to the third obstacle may first be determined, and the second overlapping area between this rectangular area and the rectangular area corresponding to each second obstacle may be determined; a fifth obstacle corresponding to the maximum second overlapping area is then determined from the second obstacles; the magnitude relation between the maximum second overlapping area and a preset threshold value is further judged: if the maximum second overlapping area is smaller than or equal to the preset threshold value, it is determined that the third obstacle does not match the fifth obstacle; if the maximum second overlapping area is larger than the preset threshold value, it is determined that the third obstacle matches the fifth obstacle. The value of the preset threshold can be set according to actual needs.
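The threshold comparison above can be sketched in code (illustrative only; the tuple form, function names and return convention are assumptions, and the rectangle helpers are repeated so the snippet is self-contained):

```python
def rect(cx, cy, w, h):
    """Axis-aligned rectangle (x1, y1, x2, y2) from a center point and a size."""
    return (cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2)

def overlap_area(a, b):
    """Overlapping area of two rectangles; 0 if they do not intersect."""
    ox = min(a[2], b[2]) - max(a[0], b[0])
    oy = min(a[3], b[3]) - max(a[1], b[1])
    return max(ox, 0.0) * max(oy, 0.0)

def match_obstacles(third_obstacles, second_obstacles, threshold):
    """S902: for each third obstacle, the second obstacle with the largest
    second overlapping area is the candidate ("fifth obstacle"); the pair
    counts as a match only when that largest overlap exceeds the preset
    threshold. Returns (matched, unmatched) lists of third obstacles."""
    matched, unmatched = [], []
    for obs in third_obstacles:
        r = rect(*obs)
        best = max((overlap_area(r, rect(*s)) for s in second_obstacles),
                   default=0.0)
        (matched if best > threshold else unmatched).append(obs)
    return matched, unmatched
```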
For example, when the third obstacle corresponding to the third obstacle data is matched with the second obstacle corresponding to the second obstacle data, in combination with the description in S201, the limitation of the perception line of sight of the vehicle may be considered: the vehicle cannot perceive the obstacle data in its perception blind areas, so some area limitations may be further applied when screening the obstacle data in the detection range corresponding to the perception capability of the vehicle. For example, as shown in fig. 11 (fig. 11 is a schematic diagram of intersection division provided in an embodiment of the present disclosure), the intersection of the detected road may be divided into 9 areas, which are numbered in sequence: region 1, region 2, region 3, region 4, region 5, region 6, region 7, region 8, and region 9. Because factors such as a central green belt, a median, and the intersection orientation may exist in the detected road, the obstacle data in the regions along the lane line direction of the vehicle may be screened, and the third obstacle corresponding to the third obstacle data in those regions may be matched with the second obstacle corresponding to the second obstacle data, so that the accuracy of obstacle matching can be improved.
In combination with the schematic diagrams of the road-side perceived obstacle and the vehicle perceived obstacle shown in fig. 8, when the third obstacles corresponding to the road-side perceived third obstacle data include the obstacle 1, the obstacle 2, the obstacle 3, the obstacle 4 and the obstacle 5 in the right graph, and the obstacles corresponding to the second obstacle data include the obstacle 1 and the obstacle 2 in the left graph, then, as shown in fig. 12 (fig. 12 is a schematic diagram of matching the third obstacle with the second obstacle provided in the embodiment of the present disclosure), the obstacle 1 in the left graph may first be matched with each of the obstacle 1, the obstacle 2, the obstacle 3, the obstacle 4 and the obstacle 5 in the right graph; the second overlapping areas between the obstacle 1 in the left graph and each obstacle in the right graph are respectively determined, and by calculation the maximum second overlapping area is the one with the obstacle 1 in the right graph; since this maximum second overlapping area is larger than the preset threshold value, it is determined that the obstacle 1 in the left graph matches the obstacle 1 in the right graph.
Then, the obstacle 2 in the left graph is respectively matched with the obstacle 1, the obstacle 2, the obstacle 3, the obstacle 4 and the obstacle 5 in the right graph, so that the second overlapping areas of the obstacle 2 in the left graph and each of the obstacle 1, the obstacle 2, the obstacle 3, the obstacle 4 and the obstacle 5 in the right graph can be respectively determined, the second overlapping area of the obstacle 2 in the left graph and the obstacle 2 in the right graph can be obtained by calculation, the maximum second overlapping area is larger than a preset threshold value, and the obstacle 2 in the left graph and the obstacle 2 in the right graph are determined to be matched.
After the matching, it is determined that, among the obstacle 1, the obstacle 2, the obstacle 3, the obstacle 4 and the obstacle 5 corresponding to the third obstacle data perceived by the road side, the obstacle 1 and the obstacle 2 are the obstacles successfully matched with the second obstacle, and the obstacle 3, the obstacle 4 and the obstacle 5 are the obstacles that fail to match with the second obstacle, thereby obtaining the matching result.
Based on the obtained matching result, the obstacle 1 and the obstacle 2 perceived by the road side are successfully matched, which indicates that the obstacle 1 and the obstacle 2 are accurately perceived by the road side device; the obstacle 3, the obstacle 4 and the obstacle 5 fail to match, which indicates that they may be obstacles not accurately perceived by the road side device. As to whether an inaccurately perceived obstacle is a false detection obstacle, further determination is needed in combination with the vehicle data, that is, the detection result of the first obstacle is determined according to the matching result and the vehicle data.
As can be seen, in the embodiment of the present disclosure, when matching the first obstacle corresponding to the first obstacle data with the second obstacle corresponding to the second obstacle data, it is considered that the second obstacle data in the detected road perceived by the vehicle does not include the vehicle itself, whereas the first obstacle data perceived by the road side may include the obstacle data of the vehicle as an obstacle; therefore, the obstacle data of the vehicle as an obstacle may first be removed from the first obstacle data, and the third obstacle corresponding to the obtained third obstacle data is then matched with the second obstacle corresponding to the second obstacle data. In this way, whether a road-side perceived obstacle is a false detection obstacle can be accurately detected based on the matching result, and the accuracy of the false detection result is improved.
Based on the above embodiment, after the matching result is obtained by matching the third obstacle corresponding to the third obstacle data with the second obstacle corresponding to the second obstacle data, the detection result of the first obstacle may be determined according to the matching result and the vehicle data. Next, a detailed description will be given by way of a third embodiment shown in fig. 13 described below.
Example III
Fig. 13 is a flowchart of a method for determining a detection result of a first obstacle according to a matching result and vehicle data, which may be performed by a software and/or hardware device, according to a third embodiment of the present disclosure. For example, referring to fig. 13, the method may include:
S1301, if the matching result indicates that there is a fourth obstacle that fails to match with the second obstacle in the third obstacle, determining a positional relationship between the fourth obstacle and the vehicle according to the vehicle data.
By combining the schematic diagrams of the road-side perceived obstacle and the vehicle perceived obstacle shown in fig. 8, after matching the third obstacle corresponding to the third obstacle data with the second obstacle corresponding to the second obstacle data, it can be determined that, among the third obstacles corresponding to the third obstacle data, the obstacle 3, the obstacle 4, and the obstacle 5 are fourth obstacles that fail to match with the second obstacle.
The obstacle 3, the obstacle 4, and the obstacle 5 that fail to match the second obstacle are not necessarily all falsely detected obstacles caused by road-side misperception; it is also possible that the perception line of sight of the vehicle is blocked by some obstacle, so that the vehicle does not perceive them. Therefore, for each of the fourth obstacles, namely the obstacle 3, the obstacle 4 and the obstacle 5, it is further necessary to determine the positional relationship between the fourth obstacle and the vehicle.
For example, in determining the positional relationship between the fourth obstacle and the vehicle, it is assumed that the obstacles corresponding to the third obstacle data perceived by the road side include the obstacle 6, the obstacle 7, the obstacle 8, and the vehicle. When determining the positional relationship between the vehicle and each of the obstacle 6, the obstacle 7, and the obstacle 8, as shown in fig. 14 (fig. 14 is a schematic diagram of the positional relationship between the obstacle and the vehicle provided in the embodiment of the present disclosure), the coordinates corresponding to the obstacle 6 may be calculated according to the position and the size of the obstacle 6 included in its obstacle data, and denoted as (x1, y1), (x2, y2); the coordinates corresponding to the vehicle may be calculated based on the position and the size of the vehicle included in the vehicle data, and denoted as (x3, y3), (x4, y4); the coordinates (x1, y1), (x2, y2), (x3, y3), and (x4, y4) enclose a rectangular area. As can be seen from fig. 14, the obstacle 6 is hidden by the obstacle 7, and its positional relationship with the vehicle is: the obstacle 6 is in a perception blind area within the perception range of the vehicle; the positional relationship between the obstacle 7 and the vehicle is: the obstacle 7 is not in a perception blind area within the perception range of the vehicle; the positional relationship between the obstacle 8 and the vehicle is: the obstacle 8 is not in a perception blind area within the perception range of the vehicle. In this way, the positional relationships between the vehicle and the obstacle 6, the obstacle 7, and the obstacle 8 are determined.
Based on the positional relationship determination method of fig. 14, the positional relationship between the obstacle 3 and the vehicle shown in fig. 8 can be obtained as follows: the obstacle 3 is in a perception blind area within the perception range of the vehicle; the positional relationship between the obstacle 4 and the vehicle is: the obstacle 4 is in a blind sensing area within the sensing range of the vehicle; the positional relationship between the obstacle 5 and the vehicle is: the obstacle 5 is not in a blind perception region within the perception range of the vehicle.
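The positional-relationship judgment can be approximated in code by a simple line-of-sight test: sample points along the segment from the vehicle center to the obstacle center and check whether any of them falls inside another obstacle's rectangle. This is only a rough sketch of one possible realization, not the method actually claimed in the embodiments; the caller is assumed to pass the remaining obstacles (excluding the target itself) as occluders:

```python
def rect(cx, cy, w, h):
    """Axis-aligned rectangle (x1, y1, x2, y2) from a center point and a size."""
    return (cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2)

def point_in_rect(p, r):
    return r[0] <= p[0] <= r[2] and r[1] <= p[1] <= r[3]

def in_blind_area(target, occluders, ego, samples=100):
    """The target counts as being in the vehicle's perception blind area when
    the segment from the vehicle center to the target center passes through
    another obstacle's rectangle (cf. obstacle 6 hidden by obstacle 7)."""
    ex, ey = ego[0], ego[1]
    tx, ty = target[0], target[1]
    for occ in occluders:
        r = rect(*occ)
        # Sample interior points of the sight segment, excluding the endpoints.
        if any(point_in_rect((ex + (tx - ex) * i / samples,
                              ey + (ty - ey) * i / samples), r)
               for i in range(1, samples)):
            return True
    return False
```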
After determining the positional relationship between the fourth obstacle and the vehicle from the vehicle data, the detection result of the fourth obstacle may be determined from the positional relationship, that is, the following S1302 is performed:
S1302, determining a detection result of the fourth obstacle according to the positional relationship.
For example, when determining the detection result of the fourth obstacle according to the positional relationship, whether the fourth obstacle is in the blind area of the vehicle within the sensing range may be determined according to the positional relationship; if the fourth obstacle is not in the perception blind area, determining that the detection result is false detection, namely the fourth obstacle is false detection obstacle; if the fourth obstacle is in the perception blind area, determining that the detection result is non-false detection, namely the fourth obstacle is the non-false detection obstacle.
In combination with the description in S1301 above, the obstacle 3 and the obstacle 4 are in perception blind areas within the perception range of the vehicle, so the second obstacle data perceived by the vehicle does not include their obstacle data; therefore, the obstacle 3 and the obstacle 4 are non-falsely-detected obstacles perceived by the road side. The obstacle 5 is not in a perception blind area within the perception range of the vehicle, so the second obstacle data perceived by the vehicle should have included the obstacle data of the obstacle 5; since it does not, the obstacle 5 is a falsely detected obstacle perceived by the road side.
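The decision rule of S1302, applied to the worked example above, can be stated compactly in code. The dictionary form and the function name are illustrative assumptions only; the input maps each fourth obstacle to whether it lies in a perception blind area:

```python
def classify_fourth_obstacles(visibility):
    """S1302: a fourth obstacle that failed to match is a false detection
    only when the vehicle could have seen it (not in a blind area);
    one hidden in a blind area is a non-false detection."""
    return {name: ("non-false detection" if in_blind else "false detection")
            for name, in_blind in visibility.items()}
```

For the example of fig. 8, passing obstacles 3 and 4 as hidden and obstacle 5 as visible reproduces the conclusion above.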
As can be seen, in the embodiment of the disclosure, when determining the detection result of the first obstacle according to the matching result and the vehicle data, if the matching result indicates that there is a fourth obstacle that fails to match with the second obstacle in the third obstacle, the positional relationship between the fourth obstacle and the vehicle is determined according to the vehicle data, and the detection result of the fourth obstacle is determined according to the positional relationship; in this way, whether the obstacle perceived by the road side is a false detection obstacle can be accurately detected based on the positional relationship, and the accuracy of the detection result of the false detection obstacle is improved.
Example IV
Fig. 15 is a schematic structural diagram of an obstacle detection device 150 according to a fourth embodiment of the disclosure, and as shown in fig. 15, the obstacle detection device 150 may include:
an acquisition unit 1501 for acquiring first obstacle data in a detected road perceived by a roadside apparatus, second obstacle data in a detected road perceived by a vehicle, and vehicle data; the first obstacle data, the second obstacle data and the vehicle data are identical in acquisition time.
The matching unit 1502 is configured to match a first obstacle corresponding to the first obstacle data with a second obstacle corresponding to the second obstacle data, so as to obtain a matching result.
A processing unit 1503 for determining a detection result of the first obstacle according to the matching result and the vehicle data; wherein the detection result comprises false detection or non-false detection.
Optionally, the matching unit 1502 includes a first matching module and a second matching module.
And the first matching module is used for removing the obstacle data of the vehicle serving as the obstacle from the first obstacle data to obtain third obstacle data.
And the second matching module is used for matching the third obstacle corresponding to the third obstacle data with the second obstacle corresponding to the second obstacle data to obtain a matching result.
Optionally, the first matching module includes a first matching sub-module and a second matching sub-module.
The first matching sub-module is used for determining a first rectangular area corresponding to the vehicle according to the vehicle data; and determining a second rectangular area corresponding to the first obstacle.
And the second matching submodule is used for removing the obstacle data of the vehicle serving as an obstacle from the first obstacle data according to the first overlapping area of the first rectangular area and the second rectangular area.
Optionally, the second matching submodule is specifically configured to reject, from the first obstacle data, the obstacle data of the first obstacle corresponding to the largest first overlapping area.
Optionally, the processing unit 1503 includes a first processing module and a second processing module.
The first processing module is used for determining the position relation between the fourth obstacle and the vehicle according to the vehicle data if the matching result indicates that the fourth obstacle which fails to match the second obstacle exists in the third obstacle.
And the second processing module is used for determining the detection result of the fourth obstacle according to the position relation.
Optionally, the second processing module includes a first processing sub-module, a second processing sub-module, and a third processing sub-module.
And the first processing sub-module is used for determining whether the fourth obstacle is in a perception blind area in the perception range of the vehicle according to the position relation.
And the second processing sub-module is used for determining that the detection result is false detection if the fourth obstacle is not in the perception blind area.
And the third processing sub-module is used for determining that the detection result is non-false detection if the fourth obstacle is in the perception blind area.
Optionally, the second matching module includes a third matching sub-module, a fourth matching sub-module, and a fifth matching sub-module.
A third matching sub-module, configured to determine a rectangular area corresponding to a third obstacle, and a second overlapping area of the rectangular area corresponding to the second obstacle; and determining a fifth obstacle corresponding to the maximum second overlapping area from the second obstacles.
And the fourth matching sub-module is used for determining that the third obstacle is not matched with the fifth obstacle if the maximum second overlapping area is smaller than or equal to a preset threshold value.
And the fifth matching submodule is used for determining that the third obstacle is matched with the fifth obstacle if the maximum second overlapping area is larger than a preset threshold value.
The obstacle detection device 150 according to the embodiment of the present disclosure may execute the technical scheme of the obstacle detection method shown in any of the above embodiments; its implementation principle and beneficial effects are similar to those of the obstacle detection method, to which reference may be made, and are not described herein again.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium and a computer program product.
According to an embodiment of the present disclosure, the present disclosure also provides a computer program product comprising: a computer program stored in a readable storage medium, from which at least one processor of an electronic device can read, the at least one processor executing the computer program causing the electronic device to perform the solution provided by any one of the embodiments described above.
Fig. 16 is a schematic block diagram of an electronic device 160 provided by an embodiment of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 16, the device 160 includes a computing unit 1601 that may perform various suitable actions and processes according to a computer program stored in a Read Only Memory (ROM) 1602 or a computer program loaded from a storage unit 1608 into a Random Access Memory (RAM) 1603. In the RAM 1603, various programs and data required for the operation of the device 160 may also be stored. The computing unit 1601, the ROM 1602, and the RAM 1603 are connected to each other by a bus 1604. An input/output (I/O) interface 1605 is also connected to the bus 1604.
Various components in device 160 are connected to I/O interface 1605, including: an input unit 1606 such as a keyboard, a mouse, and the like; an output unit 1607 such as various types of displays, speakers, and the like; a storage unit 1608, such as a magnetic disk, an optical disk, or the like; and a communication unit 1609, such as a network card, modem, wireless communication transceiver, or the like. Communication unit 1609 allows device 160 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunications networks.
The computing unit 1601 may be a variety of general purpose and/or special purpose processing components with processing and computing capabilities. Some examples of the computing unit 1601 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 1601 performs the respective methods and processes described above, for example, the obstacle detection method. For example, in some embodiments, the obstacle detection method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 1608. In some embodiments, some or all of the computer program may be loaded and/or installed onto the device 160 via the ROM 1602 and/or the communication unit 1609. When the computer program is loaded into the RAM 1603 and executed by the computing unit 1601, one or more steps of the obstacle detection method described above may be performed. Alternatively, in other embodiments, the computing unit 1601 may be configured to perform the obstacle detection method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuit systems, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special purpose or general-purpose programmable processor, that may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out the methods of the present disclosure may be written in any combination of one or more programming languages. This program code may be provided to a processor or controller of a general-purpose computer, a special-purpose computer, or other programmable data processing apparatus, such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowcharts and/or block diagrams to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine, or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM or flash memory), an optical fiber, a portable Compact Disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, also called a cloud computing server or a cloud host, which is a host product in a cloud computing service system that overcomes the defects of difficult management and weak service scalability present in traditional physical hosts and Virtual Private Server (VPS) services. The server may also be a server of a distributed system or a server that incorporates a blockchain.
It should be appreciated that the various forms of flows shown above may be used, with steps reordered, added, or deleted. For example, the steps recited in the present disclosure may be performed in parallel, sequentially, or in a different order, as long as the desired results of the technical solutions of the present disclosure can be achieved; no limitation is imposed herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.

Claims (10)

1. A method of detecting an obstacle, comprising:
acquiring first obstacle data in a detection road perceived by road side equipment, second obstacle data in the detection road perceived by a vehicle, and vehicle data; wherein the first obstacle data, the second obstacle data and the vehicle data have the same acquisition time; and the first obstacle data and the second obstacle data have the same coverage area;
removing obstacle data of the vehicle serving as an obstacle from the first obstacle data to obtain third obstacle data;
matching a third obstacle corresponding to the third obstacle data with a second obstacle corresponding to the second obstacle data to obtain a matching result;
if the matching result indicates that a fourth obstacle which fails to match the second obstacle exists in the third obstacle, determining a positional relationship between the fourth obstacle and the vehicle according to the vehicle data;
determining, according to the positional relationship, whether the fourth obstacle is in a perception blind area within the perception range of the vehicle;
if the fourth obstacle is not in the perception blind area, determining that the detection result of the fourth obstacle is false detection;
and if the fourth obstacle is in the perception blind area, determining that the detection result of the fourth obstacle is non-false detection.
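Read together, the steps of claim 1 amount to a remove-match-blind-zone pipeline. The following Python sketch is a hypothetical illustration, not the patented implementation: obstacles, the ego vehicle, and blind zones are all simplified to axis-aligned rectangles `(xmin, ymin, xmax, ymax)`, and `threshold` stands in for the preset matching threshold of claim 4.

```python
def overlap_area(a, b):
    """Overlap area of two axis-aligned rectangles (xmin, ymin, xmax, ymax)."""
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return max(w, 0.0) * max(h, 0.0)

def classify_roadside_obstacles(first, second, ego_rect, blind_zones,
                                threshold=0.0):
    """Flag roadside-perceived obstacles as false or non-false detections.

    first       -- obstacle rectangles perceived by the roadside equipment
    second      -- obstacle rectangles perceived by the vehicle (same coverage)
    ego_rect    -- rectangle of the vehicle itself
    blind_zones -- rectangles the vehicle's sensors cannot observe
    """
    # Step 1 (cf. claims 2-3): remove the ego vehicle from the roadside data
    # by dropping the obstacle whose rectangle overlaps ego_rect the most.
    third = list(first)
    if third:
        overlaps = [overlap_area(o, ego_rect) for o in third]
        if max(overlaps) > 0.0:
            third.pop(overlaps.index(max(overlaps)))

    # Steps 2-3: match each remaining obstacle against the vehicle's data;
    # an unmatched obstacle is a false detection unless it lies in a blind zone.
    result = {"false_detection": [], "non_false_detection": []}
    for obs in third:
        best = max((overlap_area(obs, s) for s in second), default=0.0)
        if best > threshold:
            continue  # matched: the vehicle also perceives this obstacle
        hidden = any(overlap_area(obs, z) > 0.0 for z in blind_zones)
        key = "non_false_detection" if hidden else "false_detection"
        result[key].append(obs)
    return result
```

Note the asymmetry in claim 1: a failed match is declared a false detection only when the obstacle lies outside every blind zone, since the vehicle cannot be expected to confirm what its sensors cannot see.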
2. The method of claim 1, wherein the removing the obstacle data of the vehicle serving as an obstacle from the first obstacle data comprises:
determining a first rectangular area corresponding to the vehicle according to the vehicle data; determining a second rectangular area corresponding to the first obstacle;
and removing obstacle data of the vehicle serving as an obstacle from the first obstacle data according to a first overlapping area of the first rectangular area and the second rectangular area.
3. The method of claim 2, wherein the removing the obstacle data of the vehicle serving as an obstacle from the first obstacle data according to the first overlapping area of the first rectangular area and the second rectangular area comprises:
and removing the obstacle data of the first obstacle corresponding to the largest first overlapping area from the first obstacle data.
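The removal step of claims 2-3 can be sketched as follows (hypothetical helper names; rectangles are axis-aligned `(xmin, ymin, xmax, ymax)` tuples, a simplification of the rectangular areas the claims describe):

```python
def overlap_area(a, b):
    # Overlap area of two axis-aligned rectangles (xmin, ymin, xmax, ymax).
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return max(w, 0.0) * max(h, 0.0)

def remove_ego_vehicle(first_obstacles, ego_rect):
    """Drop the single roadside obstacle whose rectangle has the largest
    overlap with the ego vehicle's rectangle (cf. claim 3)."""
    if not first_obstacles:
        return []
    overlaps = [overlap_area(o, ego_rect) for o in first_obstacles]
    if max(overlaps) == 0.0:
        # Nothing overlaps the vehicle at all: leave the data unchanged.
        return list(first_obstacles)
    k = overlaps.index(max(overlaps))
    return [o for i, o in enumerate(first_obstacles) if i != k]
```

Selecting only the maximum-overlap obstacle (rather than every overlapping one) matches claim 3's wording and avoids deleting a genuine obstacle that merely touches the vehicle's rectangle.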
4. The method according to any one of claims 1-3, wherein the matching the third obstacle corresponding to the third obstacle data with the second obstacle corresponding to the second obstacle data to obtain the matching result comprises:
determining a second overlapping area between a rectangular area corresponding to the third obstacle and a rectangular area corresponding to the second obstacle; determining, from the second obstacles, a fifth obstacle corresponding to the maximum second overlapping area;
if the maximum second overlapping area is smaller than or equal to a preset threshold value, determining that the third obstacle is not matched with the fifth obstacle;
and if the maximum second overlapping area is larger than the preset threshold value, determining that the third obstacle is matched with the fifth obstacle.
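The threshold test of claim 4 might look like this in Python (an illustrative sketch; `area_threshold` stands in for the preset threshold value, and rectangles are axis-aligned `(xmin, ymin, xmax, ymax)` tuples):

```python
def overlap_area(a, b):
    # Overlap area of two axis-aligned rectangles (xmin, ymin, xmax, ymax).
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return max(w, 0.0) * max(h, 0.0)

def match_third_obstacle(third_obstacle, second_obstacles, area_threshold):
    """Find the second obstacle with the maximum overlap area (the "fifth
    obstacle" of claim 4); the match succeeds only if that area is strictly
    greater than the preset threshold."""
    best, best_area = None, 0.0
    for candidate in second_obstacles:
        a = overlap_area(third_obstacle, candidate)
        if a > best_area:
            best, best_area = candidate, a
    if best_area > area_threshold:
        return best   # matched with the fifth obstacle
    return None       # area <= threshold: match failure
```

Per the claim, equality with the threshold counts as a failed match, which is why the comparison is strict.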
5. A detection device for an obstacle, comprising:
an acquisition unit configured to acquire first obstacle data in a detected road perceived by a roadside apparatus, second obstacle data in the detected road perceived by a vehicle, and vehicle data; wherein the first obstacle data, the second obstacle data and the vehicle data have the same acquisition time; and the first obstacle data and the second obstacle data have the same coverage area;
a matching unit comprising a first matching module and a second matching module;
the first matching module is used for removing the obstacle data of the vehicle serving as an obstacle from the first obstacle data to obtain third obstacle data;
the second matching module is used for matching a third obstacle corresponding to the third obstacle data with a second obstacle corresponding to the second obstacle data to obtain a matching result;
a processing unit comprising a first processing module and a second processing module;
the first processing module is configured to, if the matching result indicates that a fourth obstacle which fails to match the second obstacle exists in the third obstacle, determine a positional relationship between the fourth obstacle and the vehicle according to the vehicle data;
the second processing module comprises a first processing sub-module, a second processing sub-module and a third processing sub-module;
the first processing sub-module is configured to determine, according to the positional relationship, whether the fourth obstacle is in a perception blind area within the perception range of the vehicle;
the second processing sub-module is configured to determine that a detection result of the fourth obstacle is false detection if the fourth obstacle is not in the perception blind area;
and the third processing sub-module is configured to determine that the detection result of the fourth obstacle is non-false detection if the fourth obstacle is in the perception blind area.
6. The apparatus of claim 5, wherein the first matching module comprises a first matching sub-module and a second matching sub-module;
the first matching sub-module is configured to determine a first rectangular area corresponding to the vehicle according to the vehicle data, and determine a second rectangular area corresponding to the first obstacle;
the second matching sub-module is configured to remove obstacle data of the vehicle serving as an obstacle from the first obstacle data according to a first overlapping area of the first rectangular area and the second rectangular area.
7. The apparatus according to claim 6, wherein
the second matching sub-module is specifically configured to remove, from the first obstacle data, the obstacle data of the first obstacle corresponding to the largest first overlapping area.
8. The apparatus of any of claims 5-7, wherein the second matching module comprises a third matching sub-module, a fourth matching sub-module, and a fifth matching sub-module;
the third matching sub-module is configured to determine a second overlapping area between a rectangular area corresponding to the third obstacle and a rectangular area corresponding to the second obstacle, and determine, from the second obstacles, a fifth obstacle corresponding to the maximum second overlapping area;
the fourth matching sub-module is configured to determine that the third obstacle is not matched with the fifth obstacle if the maximum second overlapping area is less than or equal to a preset threshold;
the fifth matching submodule is configured to determine that the third obstacle is matched with the fifth obstacle if the maximum second overlapping area is greater than the preset threshold.
9. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of detecting an obstacle as claimed in any one of claims 1 to 4.
10. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method of detecting an obstacle according to any one of claims 1-4.
CN202111633794.1A 2021-12-28 2021-12-28 Obstacle detection method and device and electronic equipment Active CN114332818B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111633794.1A CN114332818B (en) 2021-12-28 2021-12-28 Obstacle detection method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN114332818A (en) 2022-04-12
CN114332818B (en) 2024-04-09

Family

ID=81016759

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111633794.1A Active CN114332818B (en) 2021-12-28 2021-12-28 Obstacle detection method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN114332818B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115107793A (en) * 2022-07-07 2022-09-27 阿波罗智能技术(北京)有限公司 Travel planning method, device, equipment and storage medium for automatic driving vehicle
CN119768849A (en) * 2022-09-02 2025-04-04 深圳引望智能技术有限公司 Sensing method, device and system

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002269694A (en) * 2001-03-08 2002-09-20 Natl Inst For Land & Infrastructure Management Mlit Roadside processing device that corrects obstacle detection data
CN106463059A (en) * 2014-06-06 2017-02-22 日立汽车系统株式会社 Obstacle-information-managing device
CN109801508A (en) * 2019-02-26 2019-05-24 百度在线网络技术(北京)有限公司 The motion profile prediction technique and device of barrier at crossing
CN110386065A (en) * 2018-04-20 2019-10-29 比亚迪股份有限公司 Monitoring method, device, computer equipment and the storage medium of vehicle blind zone
CN110687549A (en) * 2019-10-25 2020-01-14 北京百度网讯科技有限公司 Obstacle detection method and device
CN111208839A (en) * 2020-04-24 2020-05-29 清华大学 A fusion method and system of real-time perception information and autonomous driving map
CN111551938A (en) * 2020-04-26 2020-08-18 北京踏歌智行科技有限公司 A perception fusion method of unmanned driving technology based on mining environment
CN111768642A (en) * 2019-04-02 2020-10-13 上海图森未来人工智能科技有限公司 Vehicle's road environment perception and vehicle control method, system, device and vehicle
CN112199991A (en) * 2020-08-27 2021-01-08 广州中国科学院软件应用技术研究所 A simulation point cloud filtering method and system for vehicle-road cooperative roadside perception
CN112712719A (en) * 2020-12-25 2021-04-27 北京百度网讯科技有限公司 Vehicle control method, vehicle-road coordination system, road side equipment and automatic driving vehicle
CN112764013A (en) * 2020-12-25 2021-05-07 北京百度网讯科技有限公司 Method, device and equipment for testing automatic driving vehicle perception system and storage medium
CN113378947A (en) * 2021-06-21 2021-09-10 北京踏歌智行科技有限公司 Vehicle road cloud fusion sensing system and method for unmanned transportation in open-pit mining area
CN113468941A (en) * 2021-03-11 2021-10-01 长沙智能驾驶研究院有限公司 Obstacle detection method, device, equipment and computer storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9766336B2 (en) * 2015-03-16 2017-09-19 Here Global B.V. Vehicle obstruction detection
CN109979238A (en) * 2017-12-28 2019-07-05 北京百度网讯科技有限公司 Barrier based reminding method, device and equipment in lane

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Markus Hiesmair et al. "Empowering Road Vehicles to Learn Parking Situations Based on Optical Sensor Measurements." IOT'17: Proceedings of the Seventh International Conference on the Internet of Things. 2017, pp. 199-200. *
"An Environment Perception System for Autonomous Vehicles"; Jiang Hao; Electronic Production (15); pp. 72-75 *


Similar Documents

Publication Publication Date Title
CN113762272B (en) Road information determining method and device and electronic equipment
US11887473B2 (en) Road congestion detection method and device, and electronic device
CN112580571A (en) Vehicle running control method and device and electronic equipment
CN113743344B (en) Road information determining method and device and electronic equipment
CN111932611B (en) Object position acquisition method and device
CN112764013A (en) Method, device and equipment for testing automatic driving vehicle perception system and storage medium
CN113887418A (en) Method and device for detecting illegal driving of vehicle, electronic equipment and storage medium
CN113392794B (en) Vehicle cross-line identification method, device, electronic device and storage medium
CN114332818B (en) Obstacle detection method and device and electronic equipment
CN115891868B (en) Fault detection method and device for automatic driving vehicle, electronic equipment and medium
EP4080479A2 (en) Method for identifying traffic light, device, cloud control platform and vehicle-road coordination system
CN113205041A (en) Structured information extraction method, device, equipment and storage medium
CN111640301A (en) Method, system and device for detecting fault vehicle, electronic equipment and storage medium
CN109284801A (en) State identification method, device, electronic equipment and the storage medium of traffic light
CN113887391A (en) Method and device for recognizing road sign and automatic driving vehicle
CN114512024A (en) Parking space identification method, device, equipment and storage medium
CN112863187B (en) Detection method of perception model, electronic equipment, road side equipment and cloud control platform
CN116343152A (en) Lane line detection method, device and electronic equipment
CN112507951B (en) Indicating lamp identification method, indicating lamp identification device, indicating lamp identification equipment, road side equipment and cloud control platform
CN115953759A (en) Detection method, device, electronic equipment and storage medium for parking space limiter
CN111612851B (en) Method, apparatus, device and storage medium for calibrating camera
CN118692052A (en) Method, device, electronic device and storage medium for identifying vehicle-to-lane cones
US20230049992A1 (en) Fusion and association of traffic objects in driving environment
CN114519117B (en) Method, device, electronic device and storage medium for determining event authenticity
CN117746137A (en) Traffic accident-based image processing method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant