WO2023179030A1 - Road boundary detection method and apparatus, electronic device, storage medium and computer program product - Google Patents
Road boundary detection method and apparatus, electronic device, storage medium and computer program product
- Publication number
- WO2023179030A1 (PCT/CN2022/129043)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- vehicle
- road
- lane
- image
- boundaries
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
- G06V20/582—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of traffic signs
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W60/00—Drive control systems specially adapted for autonomous road vehicles
- B60W60/001—Planning or execution of driving tasks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/588—Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W2420/00—Indexing codes relating to the type of sensors based on the principle of their operation
- B60W2420/40—Photo, light or radio wave sensitive means, e.g. infrared sensors
- B60W2420/403—Image sensing, e.g. optical camera
Definitions
- the present disclosure relates to, but is not limited to, the technical field of computer vision, and relates to a road boundary detection method, device, electronic equipment, storage medium and computer program product.
- embodiments of the present disclosure provide a road boundary detection method, device, electronic device, storage medium and computer program product.
- Embodiments of the present disclosure provide a road boundary detection method, which includes: identifying a road image collected by an image collection device provided on a vehicle, and determining a plurality of road boundaries in the road image; and selecting, from the plurality of road boundaries, a road boundary into which the vehicle can drive.
- selecting a road boundary that the vehicle can drive into from the plurality of road boundaries includes: determining, based on the road image, the own vehicle lane in which the vehicle is located; and determining, from the plurality of road boundaries and based on the own vehicle lane in which the vehicle is located, the road boundary that the vehicle can drive into.
- determining the own vehicle lane in which the vehicle is located based on the road image includes: identifying a traffic sign in the road image; and determining the own vehicle lane in which the vehicle is located based on the traffic sign.
- determining the own vehicle lane in which the vehicle is located based on the road image includes: identifying the traveling direction of another vehicle in the road image; and determining the own vehicle lane in which the vehicle is located based on the traveling direction of the other vehicle.
- determining the own vehicle lane in which the vehicle is located based on the traffic sign includes: in response to the traffic sign indicating that the lane in which the vehicle is located is not a one-way lane and the traffic sign including a designated road marking, determining the own vehicle lane in which the vehicle is located based on the designated road marking.
- determining the own vehicle lane in which the vehicle is located based on the traveling direction of the other vehicle includes: in response to the traveling direction of the other vehicle being opposite to the traveling direction of the vehicle, determining the own vehicle lane in which the vehicle is located based on the lane in which the other vehicle is located.
- determining the road boundary that the vehicle can drive into from the plurality of road boundaries based on the own vehicle lane in which the vehicle is located includes: determining, based on the traffic sign and the own vehicle lane in which the vehicle is located, the road boundary into which the vehicle can drive from the plurality of road boundaries.
- determining a road boundary that the vehicle can drive into from the plurality of road boundaries based on the own vehicle lane in which the vehicle is located includes: obtaining location information of the vehicle, determining map sub-data related to the location information from pre-obtained map data, and determining the road boundary that the vehicle can drive into from the plurality of road boundaries based on the map sub-data; the map data at least includes road data, road marking data and traffic sign data.
- determining multiple road boundaries in the road image includes: detecting multiple lanes in the road image, and determining the multiple road boundaries by connecting ends of each lane.
- determining a plurality of road boundaries in the road image includes: detecting a drivable area in the road image, and determining a plurality of road boundaries in the image based on a contour of the drivable area.
- the method further includes: determining a driving path of the vehicle based on a road boundary that the vehicle can drive into, and controlling the driving of the vehicle according to the driving path.
- the method further includes: setting a first area of interest based on a road boundary that the vehicle can drive into, and obtaining an image corresponding to the first area of interest at a first resolution; wherein the road image is obtained at a second resolution that is lower than the first resolution.
- the method further includes: setting a second area of interest based on a road boundary that the vehicle can drive into, and obtaining an image corresponding to the second area of interest at a first frame rate; wherein the road image is obtained at a second frame rate that is lower than the first frame rate.
- Embodiments of the present disclosure also provide a road boundary detection device, which device includes: a detection part and a selection part; wherein,
- the detection part is configured to identify road images collected by an image collection device provided on the vehicle and determine multiple road boundaries in the road images;
- the selecting part is configured to select a road boundary into which the vehicle can drive from the plurality of road boundaries.
- An embodiment of the present disclosure also provides a computer-readable storage medium on which a computer program is stored, and when the program is executed by a processor, the steps of the method described in the embodiment of the present disclosure are implemented.
- Embodiments of the present disclosure also provide an electronic device, including a memory, a processor, and a computer program stored on the memory and executable on the processor; when the processor executes the program, the steps of the method described in the embodiments of the present disclosure are implemented.
- Embodiments of the present disclosure also provide a computer program product.
- the computer program product includes a computer program or instructions.
- when the computer program or instructions are run on an electronic device, the electronic device is caused to execute the steps of the method described in the embodiments of the present disclosure.
- the road boundary detection method provided by the embodiments of the present disclosure can determine, based on the identified road boundaries, the road boundary that the vehicle can drive into, especially in scenarios where some road boundaries are invisible, thereby providing a sufficient basis for the vehicle's turning decisions at intersections.
- Figure 1a is a schematic diagram of a road boundary in a road boundary detection method according to an embodiment of the present disclosure
- Figure 1b is a schematic diagram of an enterable road boundary in the road boundary detection method according to an embodiment of the present disclosure
- Figure 2a is a first schematic diagram of an application scenario of an embodiment of the present disclosure
- Figure 2b is a second schematic diagram of an application scenario of an embodiment of the present disclosure
- Figure 3 is a first schematic flowchart of a road boundary detection method according to an embodiment of the present disclosure
- Figure 4a is a first schematic diagram of an own vehicle lane scene in the road boundary detection method according to an embodiment of the present disclosure
- Figure 4b is a second schematic diagram of an own vehicle lane scene in the road boundary detection method according to an embodiment of the present disclosure
- Figure 5 is a first schematic structural diagram of a road boundary detection device according to an embodiment of the present disclosure
- Figure 6 is a second schematic structural diagram of a road boundary detection device according to an embodiment of the present disclosure
- Figure 7 is a third schematic structural diagram of a road boundary detection device according to an embodiment of the present disclosure
- FIG. 8 is a schematic diagram of the hardware composition of an electronic device according to an embodiment of the present disclosure.
- Figures 1a and 1b are respectively schematic diagrams of road boundaries and of a road boundary that can be driven into in the road boundary detection method of the embodiments of the present disclosure. In addition to the boundaries 110 on both sides of the lane in which the vehicle 130 is located, the road boundaries also include the boundaries 120 perpendicular to the two sides of the lane, as shown in Figure 1a. In the following embodiments, these boundaries are collectively referred to as road boundaries. In the intersection scene shown, eight road boundaries are visible.
- the road boundary corresponding to the vehicle's traveling direction is the road boundary that the vehicle can drive into; as shown in Figure 1b, the left road boundary is the road boundary 140 that the vehicle can drive into.
- Figure 2a is a first schematic diagram of an application scenario of an embodiment of the present disclosure. As shown in Figure 2a, it is assumed that, in the intersection scenario shown in Figures 1a and 1b, there is an obstruction 210 in the southwest corner of the intersection (the top, bottom, left and right of the image correspond to north, south, west and east, respectively). Normally, the obstruction 210 blocks the view of vehicles traveling from south to north.
- Figure 2b is a second schematic diagram of an application scenario of the embodiment of the present disclosure. As shown in Figure 2b, the driver of or the sensors on the vehicle 230 cannot obtain information about the part of the area blocked by the obstruction 210; this part of the area can be called the unknown area 220.
- the road boundary that the driver or sensor in the vehicle 230 cannot sense is called the invisible road boundary 250 (the thick solid line in Figure 2a and Figure 2b).
- the road boundary that the driver or sensors of the vehicle 230 can perceive is the visible road boundary 240 (the thick dashed line in Figure 2a).
- in the embodiments of the present disclosure, multiple road boundaries in the road image are determined by identifying the road image collected by the image acquisition device installed on the vehicle, and the road boundary that the vehicle can drive into is selected from the multiple road boundaries; in this way, road boundaries (especially invisible road boundaries) can be identified, and the road boundary that the vehicle can drive into can be determined.
- the terms “comprising”, “comprises” or any other variations thereof are intended to cover non-exclusive inclusion, so that a method or device including a series of elements not only includes the explicitly stated elements, but also other elements not expressly listed, or elements inherent to the implementation of the method or apparatus.
- an element defined by the statement "comprises a ..." does not exclude the presence of other related elements (such as steps in the method or parts in the device) in the method or device that includes the element.
- a part of the device may be, for example, part of a circuit, part of a processor, part of a program or software, and so on.
- the road boundary detection method provided by the embodiment of the present disclosure includes a series of steps, but the road boundary detection method provided by the embodiment of the present disclosure is not limited to the recorded steps.
- the road boundary detection device provided by the embodiments of the present disclosure includes a series of modules, but the device provided by the embodiments of the present disclosure is not limited to the explicitly recorded modules, and may also include modules that need to be set up to obtain relevant information or to perform processing based on the information.
- "A and/or B" can mean three situations: A exists alone, A and B exist simultaneously, or B exists alone.
- "at least one" herein means any one of a plurality, or any combination of at least two of a plurality; for example, "including at least one of A, B and C" can mean including any one or more elements selected from the set consisting of A, B and C.
- FIG. 3 is a schematic flowchart 1 of a road boundary detection method according to an embodiment of the present disclosure; as shown in Figure 3, the method includes:
- Step S301: Identify the road image collected by the image collection device installed on the vehicle, and determine multiple road boundaries in the road image;
- Step S302: Select a road boundary into which the vehicle can drive from the plurality of road boundaries.
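- As an illustrative aid only (not the disclosed implementation), the two steps above can be outlined in Python as follows; the helper callables and the polyline representation of a boundary are assumptions made for the sketch:

```python
# Illustrative outline of steps S301/S302; all names and the data layout
# (a boundary as an Nx2 array of image points) are assumptions for this sketch.
from typing import Callable, List

import numpy as np

Boundary = np.ndarray  # Nx2 polyline of image points describing one road boundary


def road_boundary_pipeline(
    road_image: np.ndarray,
    detect_boundaries: Callable[[np.ndarray], List[Boundary]],
    is_enterable: Callable[[Boundary, np.ndarray], bool],
) -> List[Boundary]:
    # Step S301: identify the road image and determine multiple road boundaries.
    boundaries = detect_boundaries(road_image)
    # Step S302: keep only the road boundaries that the vehicle can drive into.
    return [b for b in boundaries if is_enterable(b, road_image)]
```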
- the road boundary detection method in the embodiment of the present disclosure is applied to electronic devices, which may be vehicle-mounted devices, cloud platforms, or other computer devices.
- the vehicle-mounted device may be a thin client, a thick client, a microprocessor-based system, a small computer system or the like installed on the vehicle;
- the cloud platform may be a distributed cloud computing technology environment including small computer systems or large computer systems, and so on.
- the vehicle-mounted equipment can be connected through communication with the vehicle's sensors, positioning devices, etc., and the vehicle-mounted equipment can obtain the data collected by the vehicle's sensors and the geographical location information reported by the positioning device through the communication connection.
- the vehicle's sensor may be at least one of the following: millimeter wave radar, lidar, camera and other equipment;
- the positioning device may be a device that provides positioning services based on at least one of the following positioning systems: the Global Positioning System (GPS), the BeiDou Navigation Satellite System or the Galileo satellite navigation system.
- the vehicle-mounted device may be an Advanced Driving Assistant System (ADAS).
- ADAS is installed on the vehicle.
- the ADAS may obtain the vehicle's real-time location information from the vehicle's positioning device, and/or the ADAS may obtain, from the vehicle's sensors, image data, radar data and the like that represent information about the vehicle's surrounding environment.
- ADAS can send vehicle driving data including the vehicle's real-time location information to the cloud platform.
- the cloud platform can receive the vehicle's real-time location information and/or image data, radar data, etc. representing the vehicle's surrounding environment information.
- the road image is obtained through an image acquisition device (ie, the above-mentioned sensor, such as a camera) installed on the vehicle.
- the image acquisition device collects road images or environment images around the vehicle in real time as the vehicle moves. Further, by detecting and recognizing the road image, multiple road boundaries related to the vehicle in the road image are determined, and then a road boundary that the vehicle can enter is selected from the multiple road boundaries.
- the electronic device can determine, based on the identified road boundaries, the road boundary that the vehicle can drive into, especially in scenarios where some road boundaries are invisible, thereby providing a sufficient basis for the vehicle's turning decisions at intersections.
- determining multiple road boundaries in the road image includes detecting multiple lanes in the road image, and determining the multiple road boundaries by connecting ends of each lane.
- multiple lanes in the road image can be detected through the first network, that is, multiple lane lines in the road image can be detected.
- the road image is processed through the first network to obtain the lane lines in the road image; and then multiple road boundaries related to the vehicle are obtained by connecting the end edges of the lane lines.
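- As a minimal sketch of this idea (not the disclosed implementation), connecting the far end points of detected lane lines into a boundary polyline could look as follows; the Nx2 array representation and the left-to-right ordering are assumptions:

```python
# Hypothetical sketch: form a far road boundary by connecting the end points of
# detected lane lines. Each lane line is assumed to be an Nx2 array of image
# points ordered from near (bottom of the image) to far (top of the image).
from typing import List

import numpy as np


def boundary_from_lane_ends(lane_lines: List[np.ndarray]) -> np.ndarray:
    """Connect the far end point of each lane line into one boundary polyline."""
    ends = np.array([line[-1] for line in lane_lines], dtype=float)
    # Sort the end points from left to right so the polyline is well ordered.
    return ends[np.argsort(ends[:, 0])]


# Toy usage: three lane lines whose far ends form the boundary ahead of the vehicle.
lanes = [
    np.array([[100, 700], [120, 400]]),
    np.array([[400, 700], [400, 400]]),
    np.array([[700, 700], [680, 400]]),
]
print(boundary_from_lane_ends(lanes))  # far end points sorted by x
```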
- other image detection schemes may also be used to detect multiple lanes in road images.
- for example, the road image is first converted to grayscale, and the lane edges in the grayscale road image are detected, for example by applying an edge detection operator; the processed image is then binarized, thereby obtaining the lane lines in the road image.
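- A minimal OpenCV sketch of this classical alternative (grayscale conversion, an edge-detection operator, then binarization) is given below; the choice of the Sobel operator, the kernel size and the threshold are illustrative assumptions rather than values taken from the disclosure:

```python
# Classical lane-line extraction sketch: grayscale -> edge operator -> binarization.
# All parameter values are illustrative assumptions.
import cv2
import numpy as np


def lane_line_mask(road_image_bgr: np.ndarray) -> np.ndarray:
    gray = cv2.cvtColor(road_image_bgr, cv2.COLOR_BGR2GRAY)
    # Sobel gradients as the edge-detection operator.
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    edges = cv2.convertScaleAbs(cv2.magnitude(gx, gy))
    # Binarize so that only strong lane-edge responses remain.
    _, binary = cv2.threshold(edges, 80, 255, cv2.THRESH_BINARY)
    return binary
```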
- determining multiple road boundaries in the road image includes: detecting a drivable area in the road image, and determining multiple road boundaries in the image based on a contour of the drivable area.
- the drivable area in the road image can be detected through the second network;
- the free space (Freespace), which can also be called the passable area, represents the area in which the vehicle can travel.
- in addition to the current vehicle, a road image usually also includes other vehicles, pedestrians, trees, road edges and so on.
- the areas where other vehicles, pedestrians, trees, and road edges are located are all areas where the current vehicle cannot travel. Therefore, the road image is processed through the second network, and areas such as other vehicles, pedestrians, trees, and road edges in the road image are removed to obtain the drivable area of the vehicle.
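- For illustration only, turning the contour of a binary drivable-area mask (for example, an output that a segmentation network such as the above-mentioned second network might produce) into boundary polylines could be sketched as follows; the mask format and the simplification tolerance are assumptions:

```python
# Hypothetical sketch: derive road-boundary polylines from the contour of a
# drivable-area (freespace) mask.
from typing import List

import cv2
import numpy as np


def boundaries_from_freespace(freespace_mask: np.ndarray) -> List[np.ndarray]:
    """freespace_mask: uint8 image, 255 where the vehicle can travel, 0 elsewhere."""
    contours, _ = cv2.findContours(freespace_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    boundaries = []
    for contour in contours:
        # Simplify the contour so that each boundary is a compact polyline
        # (3.0 px tolerance is an arbitrary illustrative value).
        approx = cv2.approxPolyDP(contour, 3.0, True)
        boundaries.append(approx.reshape(-1, 2))
    return boundaries
```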
- determining multiple road boundaries in the road image includes: using a third network to detect the road image and determining multiple road boundaries related to the vehicle.
- a pre-trained third network can be used to process road images to obtain multiple road boundaries related to the vehicle.
- the above-mentioned first network, second network and third network can all be deep neural networks (DNN, Deep Neural Networks).
- selecting a road boundary that the vehicle can drive into from the multiple road boundaries includes: determining the own vehicle lane in which the vehicle is located based on the road image; and determining, based on the own vehicle lane in which the vehicle is located, the road boundary that the vehicle can drive into from the multiple road boundaries.
- that is, the own vehicle lane in which the vehicle is located can be determined first, and the road boundary that the vehicle can drive into can then be determined from the multiple road boundaries based on the own vehicle lane in which the vehicle is located.
- the road boundary corresponding to the driving direction of the vehicle is the road boundary that the vehicle can drive into.
- for example, depending on the own vehicle lane in which the vehicle is located, in some scenarios the road boundary on the side of the left lane is a road boundary that the vehicle can drive into while the road boundary on the side of the right lane is a road boundary that the vehicle cannot drive into; in other scenarios, the road boundary on the side of the right lane is the road boundary that the vehicle can drive into while the road boundary on the side of the left lane is the road boundary that the vehicle cannot drive into.
- the above-mentioned "left" and "right" are relative: when a person faces the road boundary shown in Figure 1a along the driving direction of the vehicle, of the two lanes divided by the solid lane line, the lane on the left is called the left lane and the lane on the right is called the right lane.
- determining the own lane in which the vehicle is located based on the road image includes: identifying traffic signs in the road image; and determining the own lane in which the vehicle is located based on the traffic signs.
- determining the own vehicle lane in which the vehicle is located based on the road image includes: identifying the driving direction of other vehicles in the road image; and determining the own vehicle lane in which the vehicle is located based on the driving direction of the other vehicles.
- the electronic device may determine the own lane in which the vehicle is located based on the recognized traffic sign and/or the driving direction of other vehicles.
- the traffic signs include at least one of the following: signs indicated by traffic sign boards, road markings, and the like.
- the traffic sign boards are graphic symbols used to indicate traffic regulations and road information; they are usually installed at intersections or at the road edge to manage traffic and indicate driving directions, so as to ensure smooth roads and safe driving.
- the road markings include lane markings on the road (such as white solid lines, white dashed lines, yellow solid lines, double yellow solid lines, etc.) and road-attribute markings painted on the road surface (such as straight-ahead arrows, turn arrows, speed limit markings, bus lane markings, etc., i.e. markings manually drawn on the road).
- the electronic device can determine the own lane in which the vehicle is located through the driving direction of other vehicles detected in the road image; the electronic device can also determine the vehicle's own lane through the traffic signs detected in the road image.
- determining the own vehicle lane in which the vehicle is located based on the traffic sign includes: in response to the traffic sign indicating that the lane in which the vehicle is located is not a one-way lane and the traffic sign including a designated road marking, determining the own vehicle lane in which the vehicle is located based on the designated road marking.
- the designated road markings are used to indicate traffic flows traveling in the same direction or to separate traffic flows traveling in opposite directions.
- the designated road markings include solid lines (such as yellow solid lines, double yellow solid lines, etc.) and dotted lines (such as white dotted lines).
- determining the own vehicle lane in which the vehicle is located based on the traveling direction of the other vehicle includes: in response to the traveling direction of the other vehicle being opposite to the traveling direction of the vehicle, determining the own vehicle lane in which the vehicle is located based on the lane in which the other vehicle is located.
- Figure 4a is a schematic diagram of a vehicle lane scene in the road boundary detection method according to an embodiment of the present disclosure.
- if a solid line (for example, the thick solid line 410 in Figure 4a) is recognized in the road image and the vehicle 400 is driving on the left, it can be determined that the lane to the left of the solid line is the own vehicle lane.
- the thick solid line 410 is the designated road marking line.
- Figure 4b is a second schematic diagram of an own vehicle lane scene in the road boundary detection method according to the embodiment of the present disclosure.
- if a dashed line (for example, the thick dashed line 420 in Figure 4b) is recognized in the road image and the vehicle 400 is driving on the left, it can be determined that the lane on the right side of the dashed line may be the own vehicle lane; further, the own vehicle lane can be determined based on other traffic signs or on the recognition results of the road image; a sketch of these qualitative rules is given below.
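- The following is only a hedged sketch of those qualitative rules; the return structure and the "confirmed" flag are assumptions introduced for the example:

```python
# Rule-of-thumb sketch of the own-lane inference illustrated by Figures 4a and 4b.
# The disclosure only states the qualitative rules; everything else is assumed.
from dataclasses import dataclass


@dataclass
class OwnLaneGuess:
    side_of_marking: str   # "left" or "right" of the designated road marking
    confirmed: bool        # a dashed marking only gives a tentative answer


def infer_own_lane(marking_type: str, drives_on_left: bool) -> OwnLaneGuess:
    if marking_type == "solid" and drives_on_left:
        # Figure 4a: solid line + driving on the left -> lane to the left of the line.
        return OwnLaneGuess("left", confirmed=True)
    if marking_type == "dashed" and drives_on_left:
        # Figure 4b: dashed line + driving on the left -> lane to the right of the
        # line, to be confirmed with other traffic signs or recognition results.
        return OwnLaneGuess("right", confirmed=False)
    raise ValueError("case not covered by the illustrated examples")
```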
- when another vehicle travels in the direction opposite to the vehicle, the lane in which that other vehicle is located is not the vehicle's own lane.
- therefore, the own vehicle lane can be obtained by removing, from the detected lanes, the lanes in which other vehicles (vehicles traveling in the direction opposite to the current vehicle) are located.
- the road boundaries corresponding to the lanes in which the other vehicles are located may be further determined, and the road boundaries corresponding to the lanes in which other vehicles (vehicles traveling in the direction opposite to the current vehicle) are located may be removed from the multiple road boundaries determined in step S301, thereby obtaining the road boundary that the vehicle can drive into.
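- A hypothetical sketch of this filtering step is shown below; the data classes and the heading tolerance are assumptions, since the disclosure only states that the boundaries of lanes carrying oncoming traffic are removed:

```python
# Drop the road boundaries that belong to lanes in which detected vehicles travel
# in the direction opposite to the current vehicle. Data classes are assumptions.
from dataclasses import dataclass
from typing import List


@dataclass
class DetectedVehicle:
    lane_id: int
    heading_deg: float       # heading of the detected vehicle


@dataclass
class LaneBoundary:
    lane_id: int
    polyline: list           # boundary points of the lane


def remove_oncoming_boundaries(boundaries: List[LaneBoundary],
                               vehicles: List[DetectedVehicle],
                               ego_heading_deg: float,
                               tol_deg: float = 30.0) -> List[LaneBoundary]:
    def is_oncoming(v: DetectedVehicle) -> bool:
        diff = abs((v.heading_deg - ego_heading_deg + 180.0) % 360.0 - 180.0)
        return diff > 180.0 - tol_deg   # roughly opposite direction
    oncoming_lanes = {v.lane_id for v in vehicles if is_oncoming(v)}
    return [b for b in boundaries if b.lane_id not in oncoming_lanes]
```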
- determining a road boundary that the vehicle can drive into from the multiple road boundaries based on the own vehicle lane in which the vehicle is located includes: determining, based on the traffic sign and the own vehicle lane in which the vehicle is located, the road boundary into which the vehicle can drive from the plurality of road boundaries.
- the electronic device can identify the traffic signs in the road image in real time, and determine the road boundary that the vehicle can drive into from the multiple road boundaries in combination with the own lane where the vehicle is located.
- the traffic signs may include at least one of the following signs: one-way driving signs, right-turn traffic signs at roundabouts, signs prohibiting driving outside the designated direction, no-entry signs, traffic closure signs, no-vehicle-crossing signs, no-turning signs, pedestrian-only signs, bicycle-only signs, bicycle-and-pedestrian-only signs, stop lines, lane lines, and more.
- after the electronic device determines multiple road boundaries related to the vehicle and determines the own vehicle lane in which the vehicle is located, it can determine the road boundary that the vehicle can drive into based on the various traffic signs set around the vehicle.
- determining the road boundary that the vehicle can drive into from the multiple road boundaries based on the own vehicle lane in which the vehicle is located includes: obtaining the location information of the vehicle, determining map sub-data related to the location information from pre-obtained map data, and determining the road boundary that the vehicle can drive into from the plurality of road boundaries based on the map sub-data; the map data at least includes road data, road marking data and traffic sign data.
- the electronic device can obtain map data in advance.
- the map data can be, for example, data containing prior information such as road information and traffic sign information; the electronic device can determine the driving direction of the vehicle based on the location information of the vehicle, then determine the routes that the vehicle can drive along based on the location information and driving direction of the vehicle, and determine the road boundaries that the vehicle can drive into, or the road boundaries that the vehicle cannot drive into, based on the routes that the vehicle can drive along.
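- As an assumed example of how map sub-data related to the vehicle's position might be selected (the disclosure does not prescribe any particular map format), a simple radius query over road records could look like this:

```python
# Hedged sketch of selecting map sub-data near the vehicle's position.
# The road-record layout and the 50 m radius are assumptions for illustration.
import math
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class RoadRecord:
    road_id: str
    centerline: List[Tuple[float, float]]   # [(x, y), ...] in map coordinates
    drivable_headings_deg: List[float]      # allowed driving directions


def map_subdata_near(roads: List[RoadRecord],
                     position: Tuple[float, float],
                     radius_m: float = 50.0) -> List[RoadRecord]:
    """Return the road records with at least one centerline point within radius_m."""
    px, py = position
    def near(record: RoadRecord) -> bool:
        return any(math.hypot(x - px, y - py) <= radius_m for x, y in record.centerline)
    return [r for r in roads if near(r)]
```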
- the method further includes: determining a driving path of the vehicle based on a road boundary that the vehicle can drive into, and controlling the driving of the vehicle according to the driving path.
- the electronic device can determine the driving path of the vehicle based on the road boundary that the vehicle can drive into, and the electronic device can control the vehicle to drive according to the driving path.
- the method further includes: setting a first area of interest based on a road boundary that the vehicle can drive into, and obtaining an image corresponding to the first area of interest at a first resolution; wherein the road image is obtained at a second resolution, and the second resolution is lower than the first resolution.
- the method further includes: setting a second area of interest based on a road boundary that the vehicle can drive into, and obtaining an image corresponding to the second area of interest at a first frame rate; wherein the road image is obtained at a second frame rate, and the second frame rate is lower than the first frame rate.
- the electronic device sets a region of interest (ROI, Region of Interest) based on the boundary of the road that the vehicle can drive into, that is, the aforementioned first region of interest and the second region of interest.
- for the road image, the electronic device can use the second resolution (which may also be called a low resolution); for the first area of interest, the first resolution, which is higher than the second resolution (and may also be called a high resolution), can be used, so that a higher-quality image is collected for the first area of interest to facilitate subsequent object recognition on the image corresponding to the first area of interest.
- similarly, for the road image, the electronic device may use the second frame rate (which may also be called a low frame rate); for the second area of interest, the first frame rate, which is higher than the second frame rate (and may also be called a high frame rate), is used, so as to facilitate subsequent object recognition on the image corresponding to the second area of interest.
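- The resolution and frame-rate scheme described above can be sketched as follows; the concrete resolutions, frame rates and ROI representation are illustrative assumptions only:

```python
# The full road image is obtained at the (lower) second resolution / second frame
# rate, while the region of interest derived from the enterable road boundary is
# requested at the (higher) first resolution / first frame rate.
from dataclasses import dataclass
from typing import List, Optional, Tuple


@dataclass
class CaptureRequest:
    roi: Optional[Tuple[int, int, int, int]]  # (x, y, width, height); None = full frame
    resolution: Tuple[int, int]               # (width, height) in pixels
    fps: int


def plan_captures(enterable_boundary_bbox: Tuple[int, int, int, int]) -> List[CaptureRequest]:
    full_frame = CaptureRequest(roi=None, resolution=(1280, 720), fps=10)    # second resolution / frame rate
    roi_frame = CaptureRequest(roi=enterable_boundary_bbox,
                               resolution=(2560, 1440), fps=30)              # first resolution / frame rate
    return [full_frame, roi_frame]
```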
- FIG. 5 is a schematic structural diagram of a road boundary detection device according to an embodiment of the present disclosure; as shown in Figure 5, the device includes: a detection part 51 and a selection part 52; wherein,
- the detection part 51 is configured to identify road images collected by an image collection device provided on the vehicle and determine multiple road boundaries in the road images;
- the selection part 52 is configured to select a road boundary into which the vehicle can drive from the plurality of road boundaries.
- the selection part 52 is configured to determine the own vehicle lane in which the vehicle is located based on the road image, and to determine, from the plurality of road boundaries and based on the own vehicle lane in which the vehicle is located, the road boundary into which the vehicle can drive.
- the selection part 52 is configured to identify traffic signs in the road image; and determine the own vehicle lane in which the vehicle is located based on the traffic signs.
- the selection part 52 is configured to identify the traveling directions of other vehicles in the road image; and determine the own lane in which the vehicle is located based on the traveling directions of the other vehicles.
- the selection part 52 is configured to, in response to the traffic sign indicating that the lane in which the vehicle is located is not a one-way lane and the traffic sign including a designated road marking, determine the own vehicle lane in which the vehicle is located based on the designated road marking.
- the selection part 52 is configured to, in response to the traveling direction of the other vehicle being opposite to the traveling direction of the vehicle, determine the own vehicle lane in which the vehicle is located based on the lane in which the other vehicle is located.
- the selection part 52 is configured to determine a road boundary that the vehicle can drive into from the plurality of road boundaries based on the traffic sign and the own vehicle lane in which the vehicle is located.
- the selection part 52 is configured to obtain the location information of the vehicle, determine the map sub-data related to the location information from the map data obtained in advance, and determine, based on the map sub-data, a road boundary that the vehicle can drive into from the plurality of road boundaries; the map data at least includes road data, road marking data and traffic sign data.
- the detection part 51 is configured to detect multiple lanes in the road image and determine multiple road boundaries related to the vehicle by connecting the ends of each lane.
- the detection part 51 is configured to detect a drivable area in the road image, and determine a plurality of road boundaries related to the vehicle based on the outline of the drivable area.
- the device further includes a first control part 53, configured to determine the driving path of the vehicle based on the road boundary that the vehicle can drive into, and to control the driving of the vehicle according to the driving path.
- the device further includes a second control part 54, configured to set a first area of interest based on the road boundary that the vehicle can drive into, and to obtain an image corresponding to the first area of interest at the first resolution; wherein the road image is obtained at a second resolution, and the second resolution is lower than the first resolution.
- the device further includes a second control part 54, configured to set a second area of interest based on the road boundary that the vehicle can drive into, and to obtain an image corresponding to the second area of interest at the first frame rate; wherein the road image is obtained at a second frame rate, and the second frame rate is lower than the first frame rate.
- the device is used in electronic equipment.
- in practical applications, the detection part 51, the selection part 52, the first control part 53 and the second control part 54 in the device can all be implemented by a central processing unit (CPU), a digital signal processor (DSP), a microcontroller unit (MCU) or a field-programmable gate array (FPGA).
- FIG. 8 is a schematic diagram of the hardware composition of the electronic device according to the embodiment of the present disclosure.
- the electronic device includes a memory 82, a processor 81, and a computer program stored on the memory 82 and executable on the processor 81.
- when the processor 81 executes the program, the steps of the road boundary detection method described in the embodiments of the present disclosure are implemented.
- the electronic device may also include a user interface 83 and a network interface 84.
- the user interface 83 may include a display, keyboard, mouse, trackball, click wheel, keys, buttons, touch pad or touch screen, etc.
- the various components in the electronic device are coupled together through a bus system 85.
- bus system 85 is used to implement connection communication between these components.
- the bus system 85 also includes a power bus, a control bus and a status signal bus.
- the various buses are labeled as the bus system 85 in FIG. 8.
- the memory 82 may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memories.
- non-volatile memory can be read-only memory (ROM, Read Only Memory), programmable read-only memory (PROM, Programmable Read-Only Memory), erasable programmable read-only memory (EPROM, Erasable Programmable Read-Only Memory).
- Volatile memory can be random access memory (RAM, Random Access Memory), which is used as an external cache.
- many forms of RAM are available, for example static random access memory (SRAM), synchronous static random access memory (SSRAM), dynamic random access memory (DRAM), synchronous dynamic random access memory (SDRAM), double data rate synchronous dynamic random access memory (DDRSDRAM), enhanced synchronous dynamic random access memory (ESDRAM), synchronous link dynamic random access memory (SLDRAM) and direct Rambus random access memory (DRRAM).
- the memory 82 described in embodiments of the present disclosure is intended to include, but is not limited to, these and any other suitable types of memory.
- the methods disclosed in the above embodiments of the present disclosure can be applied to the processor 81 or implemented by the processor 81 .
- the processor 81 may be an integrated circuit chip with signal processing capabilities. During the implementation process, each step of the above method can be completed by instructions in the form of hardware integrated logic circuits or software in the processor 81 .
- the above-mentioned processor 81 may be a general processor, a DSP, or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, etc.
- the processor 81 can implement or execute the disclosed methods, steps and logical block diagrams in the embodiments of the present disclosure.
- a general-purpose processor may be a microprocessor or any conventional processor, etc.
- the steps of the method disclosed in conjunction with the embodiments of the present disclosure can be directly implemented by a hardware decoding processor, or executed by a combination of hardware and software modules in the decoding processor.
- the software module may be located in a storage medium, which is located in the memory 82.
- the processor 81 reads the information in the memory 82 and completes the steps of the foregoing method in combination with its hardware.
- the electronic device may be implemented by one or more application-specific integrated circuits (ASICs), DSPs, programmable logic devices (PLDs), complex programmable logic devices (CPLDs), FPGAs, general-purpose processors, controllers, MCUs, microprocessors, or other electronic components, and is configured to execute the aforementioned method.
- the present disclosure also provides a computer-readable storage medium, such as a memory 82 including a computer program.
- the computer program can be executed by the processor 81 of the electronic device to complete the steps of the foregoing method.
- the computer-readable storage medium can be FRAM, ROM, PROM, EPROM, EEPROM, Flash Memory, magnetic surface memory, optical disk, or CD-ROM; it can also be various devices including one or any combination of the above memories.
- the computer-readable storage medium provided by the embodiment of the present disclosure has a computer program stored thereon, and when the program is executed by the processor, the steps of the road boundary detection method described in the embodiment of the present disclosure are implemented.
- Embodiments of the present disclosure also provide a computer program product.
- the computer program product includes a computer program or instructions.
- when the computer program or instructions are run on an electronic device, the electronic device is caused to execute the steps of the road boundary detection method described in the embodiments of the present disclosure.
- the disclosed devices and methods can be implemented in other ways.
- the device embodiments described above are merely illustrative.
- for example, the division of the parts is merely a logical function division.
- the coupling, direct coupling or communication connection between the components shown or discussed may be implemented through some interfaces, and the indirect coupling or communication connection between devices or parts may be electrical, mechanical or in other forms.
- the parts described above as separate components may or may not be physically separated.
- the components shown as parts may or may not be physical parts, that is, they may be located in one place or distributed over multiple network parts; some or all of them may be selected according to actual needs to achieve the purpose of the embodiments of the present disclosure.
- each functional part in each embodiment of the present disclosure can be integrated into one processing part, or each part can exist separately, or two or more parts can be integrated into one part; the above-mentioned integrated part can be implemented in the form of hardware, or in the form of hardware plus software functional parts.
- the aforementioned program can be stored in a computer-readable storage medium.
- when the program is executed, the steps of the above method embodiments are performed; the aforementioned storage medium includes various media that can store program code, such as removable storage devices, ROM, RAM, magnetic disks or optical disks.
- when the above-mentioned integrated parts of the present disclosure are implemented in the form of software function modules and sold or used as independent products, they may also be stored in a computer-readable storage medium.
- the computer software product is stored in a storage medium and includes a number of instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or part of the methods described in the various embodiments of the present disclosure.
- the aforementioned storage media include: mobile storage devices, ROM, RAM, magnetic disks or optical disks and other media that can store program codes.
- Embodiments of the present disclosure provide a road boundary detection method, device, electronic device and storage medium.
- the method includes: identifying a road image collected by an image collection device installed on a vehicle, determining a plurality of road boundaries in the road image; and selecting a road boundary into which the vehicle can drive from the plurality of road boundaries.
- it is possible to determine, based on the identified road boundaries, the road boundary that the vehicle can drive into, especially in scenarios where some road boundaries are invisible, thereby providing a sufficient basis for the vehicle's turning decisions at intersections.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Transportation (AREA)
- Mechanical Engineering (AREA)
- Human Computer Interaction (AREA)
- Automation & Control Theory (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- General Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Computational Linguistics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Computational Biology (AREA)
- Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Evolutionary Biology (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Traffic Control Systems (AREA)
- Navigation (AREA)
- Image Analysis (AREA)
Abstract
Description
Claims (17)
- 1. A road boundary detection method, the method being executed by an electronic device and comprising: identifying a road image collected by an image collection device provided on a vehicle, and determining a plurality of road boundaries in the road image; and selecting, from the plurality of road boundaries, a road boundary into which the vehicle can drive.
- 2. The method according to claim 1, wherein selecting, from the plurality of road boundaries, the road boundary into which the vehicle can drive comprises: determining, based on the road image, an own vehicle lane in which the vehicle is located; and determining, from the plurality of road boundaries and based on the own vehicle lane in which the vehicle is located, the road boundary into which the vehicle can drive.
- 3. The method according to claim 2, wherein determining, based on the road image, the own vehicle lane in which the vehicle is located comprises: identifying a traffic sign in the road image; and determining, based on the traffic sign, the own vehicle lane in which the vehicle is located.
- 4. The method according to claim 2, wherein determining, based on the road image, the own vehicle lane in which the vehicle is located comprises: identifying a traveling direction of another vehicle in the road image; and determining, based on the traveling direction of the other vehicle, the own vehicle lane in which the vehicle is located.
- 5. The method according to claim 3, wherein determining, based on the traffic sign, the own vehicle lane in which the vehicle is located comprises: in response to the traffic sign indicating that the lane in which the vehicle is located is not a one-way lane and the traffic sign including a designated road marking, determining, based on the designated road marking, the own vehicle lane in which the vehicle is located.
- 6. The method according to claim 4, wherein determining, based on the traveling direction of the other vehicle, the own vehicle lane in which the vehicle is located comprises: in response to the traveling direction of the other vehicle being opposite to the traveling direction of the vehicle, determining, based on the lane in which the other vehicle is located, the own vehicle lane in which the vehicle is located.
- 7. The method according to claim 3 or 5, wherein determining, from the plurality of road boundaries and based on the own vehicle lane in which the vehicle is located, the road boundary into which the vehicle can drive comprises: determining, from the plurality of road boundaries and based on the traffic sign and the own vehicle lane in which the vehicle is located, the road boundary into which the vehicle can drive.
- 8. The method according to any one of claims 2 to 6, wherein determining, from the plurality of road boundaries and based on the own vehicle lane in which the vehicle is located, the road boundary into which the vehicle can drive comprises: obtaining location information of the vehicle, determining map sub-data related to the location information from pre-obtained map data, and determining, from the plurality of road boundaries and based on the map sub-data, the road boundary into which the vehicle can drive; wherein the map data comprises at least road data, road marking data and traffic sign data.
- 9. The method according to any one of claims 1 to 8, wherein determining the plurality of road boundaries in the road image comprises: detecting a plurality of lanes in the road image, and determining the plurality of road boundaries by connecting ends of the lanes.
- 10. The method according to any one of claims 1 to 8, wherein determining the plurality of road boundaries in the road image comprises: detecting a drivable area in the road image, and determining the plurality of road boundaries in the image based on a contour of the drivable area.
- 11. The method according to any one of claims 1 to 10, further comprising: determining a driving path of the vehicle based on the road boundary into which the vehicle can drive, and controlling the driving of the vehicle according to the driving path.
- 12. The method according to any one of claims 1 to 11, further comprising: setting a first region of interest based on the road boundary into which the vehicle can drive, and obtaining an image corresponding to the first region of interest at a first resolution; wherein the road image is obtained at a second resolution, and the second resolution is lower than the first resolution.
- 13. The method according to any one of claims 1 to 11, further comprising: setting a second region of interest based on the road boundary into which the vehicle can drive, and obtaining an image corresponding to the second region of interest at a first frame rate; wherein the road image is obtained at a second frame rate, and the second frame rate is lower than the first frame rate.
- 14. A road boundary detection apparatus, comprising: a detection part and a selection part; wherein the detection part is configured to identify a road image collected by an image collection device provided on a vehicle and determine a plurality of road boundaries in the road image; and the selection part is configured to select, from the plurality of road boundaries, a road boundary into which the vehicle can drive.
- 15. A computer-readable storage medium, on which a computer program is stored, wherein when the program is executed by a processor, the steps of the method according to any one of claims 1 to 13 are implemented.
- 16. An electronic device, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein when the processor executes the program, the steps of the method according to any one of claims 1 to 13 are implemented.
- 17. A computer program product, comprising a computer program or instructions, wherein when the computer program or instructions are run on an electronic device, the electronic device is caused to execute the steps of the method according to any one of claims 1 to 13.
Priority Applications (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2024552773A JP2025509225A (ja) | 2022-03-24 | 2022-11-01 | 道路境界検出方法及びその装置、電子機器、記憶媒体、並びにコンピュータプログラム製品 |
| US18/892,714 US20250014359A1 (en) | 2022-03-24 | 2024-09-23 | Road boundary detection method and apparatus, and electronic device, storage medium and computer program product |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202210303727.1A CN114694116A (zh) | 2022-03-24 | 2022-03-24 | 一种道路边界检测方法、装置、电子设备和存储介质 |
| CN202210303727.1 | 2022-03-24 |
Related Child Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US18/892,714 Continuation US20250014359A1 (en) | 2022-03-24 | 2024-09-23 | Road boundary detection method and apparatus, and electronic device, storage medium and computer program product |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2023179030A1 true WO2023179030A1 (zh) | 2023-09-28 |
Family
ID=82140064
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CN2022/129043 Ceased WO2023179030A1 (zh) | 2022-03-24 | 2022-11-01 | 一种道路边界检测方法、装置、电子设备、存储介质和计算机程序产品 |
Country Status (4)
| Country | Link |
|---|---|
| US (1) | US20250014359A1 (zh) |
| JP (1) | JP2025509225A (zh) |
| CN (1) | CN114694116A (zh) |
| WO (1) | WO2023179030A1 (zh) |
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN118230201A (zh) * | 2024-04-15 | 2024-06-21 | 山东省交通工程监理咨询有限公司 | 一种基于无人机的高速公路智能图像处理方法 |
| CN119181070A (zh) * | 2024-11-22 | 2024-12-24 | 浙江吉利控股集团有限公司 | 数据处理方法、装置、设备、存储介质以及产品 |
Families Citing this family (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN114694116A (zh) * | 2022-03-24 | 2022-07-01 | 商汤集团有限公司 | 一种道路边界检测方法、装置、电子设备和存储介质 |
| CN115909265A (zh) * | 2022-10-31 | 2023-04-04 | 毫末智行科技有限公司 | 车道级定位检测方法、装置、设备及存储介质 |
| CN116012804A (zh) * | 2023-01-20 | 2023-04-25 | 安徽蔚来智驾科技有限公司 | 路沿线的检测分类方法、检测分类模型训练方法 |
Citations (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN108216229A (zh) * | 2017-09-08 | 2018-06-29 | 北京市商汤科技开发有限公司 | 交通工具、道路线检测和驾驶控制方法及装置 |
| CN111874006A (zh) * | 2020-08-05 | 2020-11-03 | 腾讯科技(深圳)有限公司 | 路线规划处理方法和装置 |
| US20210016780A1 (en) * | 2018-08-02 | 2021-01-21 | GM Global Technology Operations LLC | Controlling an autonomous vehicle based upon computed lane boundaries |
| CN112309233A (zh) * | 2020-10-26 | 2021-02-02 | 北京三快在线科技有限公司 | 一种道路边界的确定、道路切分方法及装置 |
| CN112363192A (zh) * | 2020-09-29 | 2021-02-12 | 蘑菇车联信息科技有限公司 | 车道定位方法、装置、车辆、电子设备及存储介质 |
| CN113297878A (zh) * | 2020-02-21 | 2021-08-24 | 百度在线网络技术(北京)有限公司 | 道路交叉口识别方法、装置、计算机设备和存储介质 |
| CN114170826A (zh) * | 2021-12-03 | 2022-03-11 | 地平线(上海)人工智能技术有限公司 | 自动驾驶控制方法和装置、电子设备和存储介质 |
| CN114694116A (zh) * | 2022-03-24 | 2022-07-01 | 商汤集团有限公司 | 一种道路边界检测方法、装置、电子设备和存储介质 |
Family Cites Families (9)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP4861851B2 (ja) * | 2007-02-13 | 2012-01-25 | アイシン・エィ・ダブリュ株式会社 | レーン判定装置及びレーン判定方法、並びにそれを用いたナビゲーション装置 |
| JP6451332B2 (ja) * | 2015-01-14 | 2019-01-16 | 株式会社ニコン | 撮像装置および自動車 |
| JP6362442B2 (ja) * | 2014-06-13 | 2018-07-25 | 富士通株式会社 | 車線境界線抽出装置、車線境界線抽出方法、及びプログラム |
| JP2018026023A (ja) * | 2016-08-11 | 2018-02-15 | 株式会社デンソー | 認識装置、及び、認識方法 |
| CN111201170B (zh) * | 2017-10-10 | 2023-06-16 | 本田技研工业株式会社 | 车辆控制装置及车辆控制方法 |
| US10585434B2 (en) * | 2018-01-10 | 2020-03-10 | GM Global Technology Operations LLC | Relaxable turn boundaries for autonomous vehicles |
| CN111351503A (zh) * | 2018-12-20 | 2020-06-30 | 阿里巴巴集团控股有限公司 | 辅助驾驶方法、辅助驾驶系统、计算设备及存储介质 |
| CN113963325B (zh) * | 2020-07-02 | 2025-07-22 | 深圳引望智能技术有限公司 | 推理车道的方法、训练车道推理模型的方法及装置 |
| CN114179805B (zh) * | 2021-12-10 | 2023-12-19 | 北京百度网讯科技有限公司 | 一种行驶方向确定方法、装置、设备以及存储介质 |
-
2022
- 2022-03-24 CN CN202210303727.1A patent/CN114694116A/zh active Pending
- 2022-11-01 WO PCT/CN2022/129043 patent/WO2023179030A1/zh not_active Ceased
- 2022-11-01 JP JP2024552773A patent/JP2025509225A/ja active Pending
-
2024
- 2024-09-23 US US18/892,714 patent/US20250014359A1/en active Pending
Patent Citations (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN108216229A (zh) * | 2017-09-08 | 2018-06-29 | 北京市商汤科技开发有限公司 | 交通工具、道路线检测和驾驶控制方法及装置 |
| US20210016780A1 (en) * | 2018-08-02 | 2021-01-21 | GM Global Technology Operations LLC | Controlling an autonomous vehicle based upon computed lane boundaries |
| CN113297878A (zh) * | 2020-02-21 | 2021-08-24 | 百度在线网络技术(北京)有限公司 | 道路交叉口识别方法、装置、计算机设备和存储介质 |
| CN111874006A (zh) * | 2020-08-05 | 2020-11-03 | 腾讯科技(深圳)有限公司 | 路线规划处理方法和装置 |
| CN112363192A (zh) * | 2020-09-29 | 2021-02-12 | 蘑菇车联信息科技有限公司 | 车道定位方法、装置、车辆、电子设备及存储介质 |
| CN112309233A (zh) * | 2020-10-26 | 2021-02-02 | 北京三快在线科技有限公司 | 一种道路边界的确定、道路切分方法及装置 |
| CN114170826A (zh) * | 2021-12-03 | 2022-03-11 | 地平线(上海)人工智能技术有限公司 | 自动驾驶控制方法和装置、电子设备和存储介质 |
| CN114694116A (zh) * | 2022-03-24 | 2022-07-01 | 商汤集团有限公司 | 一种道路边界检测方法、装置、电子设备和存储介质 |
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN118230201A (zh) * | 2024-04-15 | 2024-06-21 | 山东省交通工程监理咨询有限公司 | 一种基于无人机的高速公路智能图像处理方法 |
| CN119181070A (zh) * | 2024-11-22 | 2024-12-24 | 浙江吉利控股集团有限公司 | 数据处理方法、装置、设备、存储介质以及产品 |
Also Published As
| Publication number | Publication date |
|---|---|
| CN114694116A (zh) | 2022-07-01 |
| JP2025509225A (ja) | 2025-04-11 |
| US20250014359A1 (en) | 2025-01-09 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| WO2023179030A1 (zh) | 一种道路边界检测方法、装置、电子设备、存储介质和计算机程序产品 | |
| JP7349792B2 (ja) | 車両走行のための情報を提供する方法 | |
| JP7645341B2 (ja) | 緊急車両の検出 | |
| CN110210280B (zh) | 一种超视距感知方法、系统、终端和存储介质 | |
| CN111874006B (zh) | 路线规划处理方法和装置 | |
| US10118614B2 (en) | Detailed map format for autonomous driving | |
| CN112325896B (zh) | 导航方法、装置、智能行驶设备及存储介质 | |
| CN111311902B (zh) | 一种数据处理方法、装置、设备和机器可读介质 | |
| CN109426256A (zh) | 自动驾驶车辆的基于驾驶员意图的车道辅助系统 | |
| CN107923757B (zh) | 非固态对象监测 | |
| CN111091037A (zh) | 用于确定驾驶信息的方法和设备 | |
| CN111144211A (zh) | 点云显示方法和装置 | |
| WO2021227520A1 (zh) | 可视化界面的显示方法、装置、电子设备和存储介质 | |
| WO2023179028A1 (zh) | 一种图像处理方法、装置、设备及存储介质 | |
| CN112444258B (zh) | 可行驶区域判定的方法、智能驾驶系统和智能汽车 | |
| WO2023179027A1 (zh) | 一种道路障碍物检测方法、装置、设备及存储介质 | |
| US11465620B1 (en) | Lane generation | |
| CN109774720A (zh) | 高精度地图可视化方法、装置及存储介质 | |
| CN108332761B (zh) | 一种使用及创建路网地图信息的方法与设备 | |
| CN117470258A (zh) | 一种地图构建方法、装置、设备及介质 | |
| US12147232B2 (en) | Method, system and computer program product for the automated locating of a vehicle | |
| JP2019164602A (ja) | 逆走警告システム、逆走警告方法、及び逆走警告プログラム | |
| CN114913503B (zh) | 提示点的确定方法、装置、服务器、车辆和存储介质 | |
| CN115690738A (zh) | 车位状态标记方法、装置、设备及介质 | |
| US20250299579A1 (en) | Parking Spot Detection |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 22933081 Country of ref document: EP Kind code of ref document: A1 |
|
| WWE | Wipo information: entry into national phase |
Ref document number: 2024552773 Country of ref document: JP |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |
|
| 32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 08/01/2025) |
|
| 122 | Ep: pct application non-entry in european phase |
Ref document number: 22933081 Country of ref document: EP Kind code of ref document: A1 |