TWI842321B - Lane detection method and lane detection device
- Publication number: TWI842321B
- Application number: TW111151024A
- Authority: TW (Taiwan)
Description
The present invention relates to the field of computer technology, and in particular to a lane detection method and a lane detection device.
Lane detection is an important perception task in autonomous driving systems, and it supports many higher-level applications. For example, it can help systems such as Lane Departure Warning (LDW) and Lane Keep Assist (LKA), and it can assist Forward Collision Warning (FCW) in identifying the nearest vehicle on the path, so as to support vehicle path planning. However, lane markings on the road surface are easily worn away or even disappear because of wind and sand erosion, rain wash, road construction, or road ageing. It therefore becomes quite difficult to detect lane markings directly with image detection technology alone.
To address this problem, this application provides a lane detection method and a lane detection device that derive lane information from collected vehicle information, thereby assisting lane detection.
A first aspect of this application provides a lane detection method applied to a vehicle, comprising: acquiring a first image, the first image including an image of at least one vehicle; obtaining the contour and heading of the at least one vehicle from the first image, the heading being the direction of the front of the at least one vehicle; obtaining the trajectory of the at least one vehicle from its contour and heading, the trajectory being the direction of movement of the at least one vehicle; and obtaining lane information from the trajectory of the at least one vehicle.
In some embodiments, when the first image is a single frame and includes images of at least two vehicles, obtaining the trajectory of the at least one vehicle from its contour and heading comprises: obtaining the center points of the at least two vehicles from their contours; establishing at least one trajectory group from the center points and headings of the at least two vehicles; and obtaining the trajectories of the at least two vehicles from the at least one trajectory group.
In some embodiments, establishing at least one trajectory group from the center points and headings of the at least two vehicles comprises: selecting one of the at least two vehicles as the reference vehicle, the reference vehicle being the vehicle with the smallest y-axis coordinate in the first image that has not yet been assigned to a trajectory group; taking the center of the reference vehicle as the origin and its heading as the direction vector, obtaining the extended trajectory line of the reference vehicle; determining whether the extended trajectory line of the reference vehicle intersects the rear image of another vehicle; if it does, assigning the reference vehicle and the intersected vehicle to one trajectory group, taking the intersected vehicle as the new reference vehicle, and again obtaining the extended trajectory line with the center of the reference vehicle as the origin and its heading as the direction vector; if it does not, returning to selecting one of the at least two vehicles as the reference vehicle; and repeating the above steps until all vehicles are assigned to at least one trajectory group.
In some embodiments, the method further comprises: determining whether a trajectory group is an isolated group according to the number of vehicles in the group, wherein an isolated group contains one vehicle; if the trajectory group is an isolated group, determining whether the trajectory of the vehicle in the isolated group intersects the trajectory of another vehicle, or whether the distance between the trajectory of the vehicle in the isolated group and the trajectory of another vehicle is below a preset distance value; and if so, deleting the isolated group.
In some embodiments, when the first image includes at least two frames and includes images of at least two vehicles, the method further comprises: selecting a preset number of frames from the first image; selecting a reference frame, the reference frame being the earliest frame in time order among the preset number of frames; obtaining the contours of the at least two vehicles from the reference frame and thereby their center points, establishing at least one trajectory group from the center points and headings of the at least two vehicles, and obtaining the trajectories of the at least two vehicles from the at least one trajectory group; after the trajectories of all vehicles in the reference frame have been obtained, returning to selecting a reference frame; repeating the above steps until the trajectories of all vehicles in the preset number of frames have been obtained; and obtaining lane information from the trajectories of all vehicles in the preset number of frames.
In some embodiments, when the first image is a single frame and includes one vehicle, obtaining the trajectory of the at least one vehicle from its contour and heading comprises: obtaining the center point of the vehicle from its contour; and obtaining the trajectory of the vehicle from its center point and heading.
In some embodiments, when the first image includes at least two frames and includes one vehicle, obtaining the trajectory of the at least one vehicle from its contour and heading comprises: obtaining the timing information of the at least two frames; obtaining the center point of the vehicle from its contour; and obtaining the trajectory of the vehicle from the timing information, the center point of the vehicle, and the heading of the vehicle.
In some embodiments, obtaining the trajectory of the vehicle from the timing information, the center point of the vehicle, and the heading of the vehicle comprises: obtaining the heading of the vehicle in the at least two frames according to the timing information; determining whether the angle between the headings of the vehicle in adjacent frames is greater than a preset angle value, adjacent frames being adjacent in time; if the angle is greater than the preset angle value, obtaining the trajectory of the vehicle by a curve regression algorithm together with the center point and heading of the vehicle; and if the angle is less than or equal to the preset angle value, obtaining the trajectory of the vehicle by a linear regression algorithm together with the center point and heading of the vehicle.
In some embodiments, the method further comprises: acquiring a second image, the second image including lane markings; obtaining first lane information from the second image; and obtaining second lane information from the first lane information and the lane information.
A second aspect of this application provides a lane detection device applied to a vehicle, comprising: an image acquisition module configured to acquire a first image, the first image including an image of at least one vehicle; a contour acquisition module connected to the image acquisition module and configured to obtain the contour and heading of at least one vehicle from the first image, the heading being the direction of the front of the at least one vehicle; a trajectory acquisition module connected to the contour acquisition module and configured to obtain the trajectory of the at least one vehicle from its contour and heading, the trajectory being the direction of movement of the at least one vehicle; and a lane acquisition module connected to the trajectory acquisition module and configured to obtain lane information from the trajectory of the at least one vehicle.
By obtaining the contour and heading of at least one vehicle in the first image, this application determines the trajectory of the at least one vehicle and then derives preliminary lane information from that trajectory, thereby assisting lane detection.
S1-S4, S401-S403, S501-S506, S801-S803, S101-S106, S121-S122, S141-S143: steps
101: vehicle
102: terminal device
103: image acquisition device
104: radar device
100: lane detection device
10: image acquisition module
20: contour acquisition module
30: trajectory acquisition module
40: lane acquisition module
200: electronic device
201: processor
202: memory
203: computer program
Fig. 1 is a system architecture diagram of the lane detection method provided by this application.
Fig. 2 is a flowchart of the lane detection method provided by this application.
Fig. 3 is a schematic diagram of the overall detection box provided in an embodiment of this application.
Fig. 4 is a flowchart of the sub-steps of step S2 of Fig. 2 in the first scenario.
Fig. 5 is a flowchart of the sub-steps of step S402 of Fig. 4.
Figs. 6A-6C are schematic diagrams of the trajectory-grouping process in an embodiment of this application.
Fig. 7 is a schematic diagram after trajectory grouping in an embodiment of this application.
Fig. 8 is a flowchart of determining isolated groups in an embodiment of this application.
Figs. 9A-9C are schematic diagrams of three isolated groups in an embodiment of this application.
Fig. 10 is a flowchart of the sub-steps of step S2 of Fig. 2 in the second scenario.
Fig. 11 is a schematic diagram of overlaid image frames of different time order in an embodiment of this application.
Fig. 12 is a flowchart of the sub-steps of step S2 of Fig. 2 in the third scenario.
Fig. 13 is a schematic diagram of a vehicle trajectory in an embodiment of this application.
Fig. 14 is a flowchart of the sub-steps of step S2 of Fig. 2 in the fourth scenario.
Fig. 15A is a schematic diagram of merging first images of different time order in an embodiment of this application.
Fig. 15B is a schematic diagram of merging first images of different time order in another embodiment of this application.
Fig. 16 is a module diagram of the lane detection device provided in an embodiment of this application.
Fig. 17 is a structural diagram of an electronic device that implements the lane detection method of the present invention.
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the scope of protection of the present invention.
Unless otherwise defined, all technical and scientific terms used herein have the same meanings as commonly understood by a person of ordinary skill in the technical field of the present invention. The terms used in the specification of the present invention are only for describing specific embodiments and are not intended to limit the present invention. The term "and/or" used herein includes any and all combinations of one or more of the associated listed items.
Some embodiments of the present invention are described in detail below with reference to the accompanying drawings. The following embodiments and the features in the embodiments can be combined with each other where no conflict arises.
Lane detection is an important perception task in autonomous driving systems, and it supports many higher-level applications. For example, it can help systems such as Lane Departure Warning (LDW) and Lane Keep Assist (LKA), and it can assist Forward Collision Warning (FCW) in identifying the nearest vehicle on the path, so as to support vehicle path planning. However, lane markings on the road surface are easily worn away or even disappear because of wind and sand erosion, rain wash, road construction, or road ageing. It therefore becomes quite difficult to detect the lanes on the road directly with image detection technology alone.
To this end, this application provides a lane detection method for obtaining lane information, thereby assisting lane detection.
First, the system architecture involved in the embodiments of this application is introduced.
Fig. 1 is a system architecture diagram of the lane detection method provided in an embodiment of this application. The system includes a vehicle 101, a terminal device 102, an image acquisition device 103, and a radar device 104. The terminal device 102 is communicatively connected to the image acquisition device 103 and to the radar device 104, so that the image acquisition device 103 and the radar device 104 can send the data they collect to the terminal device 102. The vehicle 101 and the terminal device 102 are also communicatively connected, so that the terminal device 102 can send control instructions to the vehicle 101.
Specifically, the terminal device 102 may be installed inside the vehicle 101, or it may be a terminal device currently carried by the driver. The terminal device 102 includes a processor for data and information processing; through this processor, the terminal device 102 can process the data received from the image acquisition device 103 or the radar device 104 and thereby determine the lane information corresponding to the vehicle 101. In addition, the terminal device 102 can provide a human-machine interaction interface, through which information such as the current road conditions and the planned route can be displayed to the user.
The image acquisition devices 103 are installed on the outside of the body of the vehicle 101. Specifically, multiple image acquisition devices 103 may be installed around the body of the vehicle 101. For example, four image acquisition devices 103 may be installed around the body, serving respectively as a front-view, a rear-view, a left-view, and a right-view image acquisition device. The front-view device is installed at the center of the vehicle front, the rear-view device at the center of the vehicle rear, the left-view device at the lengthwise midpoint of the left side of the vehicle, and the right-view device at the lengthwise midpoint of the right side of the vehicle. Note that the four image acquisition devices are only an example; in practical applications, more or fewer image acquisition devices 103 may be installed around the body of the vehicle 101. The image acquisition devices 103 capture images of the road conditions around the vehicle 101 while it is driving. By processing and analyzing the captured road-condition images, obstacles, lane markings, traffic lights, and traffic-light stop lines around the vehicle 101 can be detected; that is, preliminary obstacle detection results, lane-marking detection results, traffic-light detection results, and stop-line detection results can be obtained.
The radar devices 104 are installed on the outside of the body of the vehicle 101. Specifically, at least one radar device 104 may be installed at the center of the roof of the vehicle 101. Alternatively, multiple radar devices 104 may be installed at different positions on the roof; for example, four radar devices 104 may be installed at the four corners of the roof. The radar devices 104 can detect obstacles, and in particular can accurately detect the position and motion parameters of moving obstacles. It can be understood that obstacles may be other vehicles or mobile devices traveling around the vehicle 101.
It should be noted that the terminal device 102 may be a vehicle-mounted terminal device or another mobile terminal device currently inside the vehicle 101, for example an industrial computer, a portable computer, a smartphone, or a tablet computer. The image acquisition device 103 may be a camera capable of image acquisition, for example a fisheye surround-view camera. The radar device 104 may be a lidar and/or a millimeter-wave radar.
Optionally, the terminal device 102 in the above system architecture may be replaced by a vehicle control unit (VCU); in that case, the method steps provided in the following embodiments are applied to the vehicle 101 and executed by the vehicle control unit of the vehicle 101.
Please refer to Fig. 2, which is a flowchart of the lane detection method provided in an embodiment of this application. The following embodiments take as an example the lane detection method applied to the vehicle 101 and executed by the vehicle control unit of the vehicle 101.
In this application, the lane detection method includes the following steps:
Step S1: Acquire a first image, the first image including an image of at least one vehicle.
In the embodiments of this application, a plurality of initial images are acquired by the image acquisition device 103 and/or the radar device 104, and the initial images are processed to screen out the first image. An initial image is an image of the surroundings of the vehicle 101, or of the road section where it is located, collected by the image acquisition device 103 and/or the radar device 104. An initial image may be a picture captured by the image acquisition device 103 or video data including multiple frames, or it may be a picture generated from point-cloud data collected by the radar device 104.
In some embodiments, based on image detection technology or visual recognition technology, an initial image including at least one vehicle can be screened out of the plurality of initial images captured by the image acquisition device 103 and used as the first image; this application does not limit the specific image detection technology.
In other embodiments, based on techniques such as object detection, an image including at least one vehicle can be screened out of the point-cloud data collected by the radar device 104, i.e., the initial images, and used as the first image.
Specifically, the vehicles mentioned in step S1 are vehicles located around the vehicle 101, for example other vehicles on the same road as the vehicle 101.
Step S2: Obtain the contour and heading of at least one vehicle from the first image, the heading being the direction of the front of the at least one vehicle.
Referring to Fig. 3, this application uses a detection model, trained and built with known deep-learning network techniques, to process each vehicle shown in the first image, so as to generate, on each first image, an overall detection box and a front detection box (or rear detection box) for the corresponding vehicle. It can be understood that the side of the overall detection box on which the front detection box lies, i.e., the direction of the vehicle front, is the heading of the corresponding vehicle, and the direction opposite to the side on which the rear detection box lies is likewise the heading of the corresponding vehicle.
In this application, the overall detection box and the front detection box (or rear detection box) are both three-dimensional (3D) detection boxes, for example approximately cuboid boxes. The position of a 3D detection box can therefore be represented by the coordinates of its 8 corner points (C1-C8 in Fig. 3), or by 4 corner points, the height of the vehicle underbody above the ground, and the height between the underbody and the roof (not shown).
It can be understood that, compared with a two-dimensional detection box, a 3D detection box focuses on locating and identifying the target object in the 3D coordinate system of the real world. The geometric information of a 3D detection box can be used to measure the distance between the vehicle 101 and other key targets (for example, other vehicles in the same lane). By using a vehicle's 3D detection box as the vehicle contour, this application therefore helps relate the data obtained from images to data about the real environment.
Step S3: Obtain the trajectory of the at least one vehicle from its contour and heading, the trajectory being the direction of movement of the at least one vehicle.
Specifically, in this step, the center-point coordinates (X0, Y0) of each vehicle are first computed from the obtained contour of the vehicle, i.e., the overall detection box. The trajectory of the vehicle is then formed from the center point and heading of the vehicle, where the trajectory represents the direction of movement of the at least one vehicle in the first image.
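As a minimal sketch of this step (the array shape and helper name are assumptions for illustration, not taken from the patent), the center point of a cuboid detection box can be computed as the mean of its eight corner points:

```python
import numpy as np

def box_center(corners: np.ndarray) -> np.ndarray:
    """Center point of a 3D detection box given its 8 corner points, shape (8, 3).

    The (X0, Y0) used for trajectory building are the first two components."""
    return corners.mean(axis=0)
```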
In this application, the vertex at the lower-left corner of the first image is taken as the origin; the axis extending rightward from the origin is the X-axis of the first image, and the axis extending upward from the origin is the Y-axis.
In some embodiments, the trajectory of at least one vehicle may be represented by the curve formed by connecting the center-point coordinates of a plurality of vehicles.
Further, to improve the accuracy of lane detection, in some embodiments a preset number of first images within a continuous period of time can be acquired, so as to form a longer movement trajectory of at least one vehicle over that period; in other embodiments, the movement trajectories of multiple vehicles in the same first image can be acquired, so as to form a longer movement trajectory composed of multiple vehicles at the same point in time.
Step S4: Obtain lane information from the trajectory of the at least one vehicle.
Specifically, the corresponding curve equation can be obtained from the center-point coordinates of the vehicles forming the trajectory in step S3.
It can be understood that a lane normally constrains the driving trajectory and driving direction of vehicles; that is, vehicles in the same lane normally have roughly the same trajectory and direction. Curve analysis can therefore be performed on the acquired trajectory of at least one vehicle, and the curve equation of that trajectory can be taken as lane information such as the lane trajectory.
Further, the lane direction can be confirmed from the direction of the vehicle trajectory, and the number of lanes can be determined from the number of vehicle trajectories detected in the first image.
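One plausible way to turn the center points of one trajectory group into a lane curve is an ordinary polynomial fit. The routine below is an illustrative sketch using NumPy, with assumed array shapes; the patent does not prescribe a particular fitting method.

```python
import numpy as np

def fit_lane_curve(centers: np.ndarray, degree: int = 2) -> np.poly1d:
    """Fit y = f(x) through the vehicle center points of one trajectory group.

    centers: (N, 2) array of (x, y) center coordinates in the image coordinate system
    (origin at the lower-left corner, X to the right, Y upward)."""
    deg = min(degree, len(centers) - 1)   # avoid over-fitting very small groups
    coeffs = np.polyfit(centers[:, 0], centers[:, 1], deg=deg)
    return np.poly1d(coeffs)

# One curve per trajectory group; the number of groups gives a preliminary lane count.
```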
It can be understood that the lane detection method provided by this application obtains the contour and heading of at least one vehicle in the first image, determines the trajectory of the at least one vehicle, and then derives preliminary lane information from that trajectory, thereby assisting lane detection.
Further, in some embodiments, the lane detection method also includes:
Step S5: Acquire a second image, the second image including lane markings; obtain first lane information from the second image; obtain second lane information from the first lane information and the lane information.
In step S5, the second image may be an image that includes lane markings and is screened out of the plurality of initial images. The first information may include partial lane-marking information and the like. The second lane information may be more complete lane information output after merging the lane information with the first lane information, for example including at least one of lane markings, arrows, and the number of lanes.
Thus, the lane detection method provided by this application can first derive preliminary lane information from the acquired vehicle trajectories and then, based on the acquired first lane information, merge and output the second lane information, enriching the lane information to a greater extent.
Further, the following content of this application explains the detailed processing of the above step S2 for the different first images that may be acquired.
Referring to Fig. 4, in some embodiments, when the first image is a single frame and includes images of at least two vehicles, as in Fig. 6A, step S2 further includes:
Step S401: Obtain the center points of the at least two vehicles from their contours.
Step S402: Establish at least one trajectory group from the center points and headings of the at least two vehicles. Referring to Fig. 5, step S402 further includes:
Step S501: Select one of the at least two vehicles as the reference vehicle, for example vehicle a (see Fig. 6B). The reference vehicle is the vehicle among the at least two vehicles with the smallest y-axis coordinate in the first image that has not yet been assigned to a trajectory group.
Step S502: Taking the center of the reference vehicle as the origin and its heading as the direction vector, obtain the extended trajectory line of the reference vehicle.
In some embodiments, the center-point coordinates of the front detection box can first be obtained, and the line connecting the center point of the overall detection box to the center point of the front detection box is used as the direction vector, with the center point of the overall detection box as the origin of the vector; the direction of the vector is the same as the side on which the front detection box lies. In other embodiments, the center-point coordinates of the rear detection box can first be obtained, and the line connecting the center point of the overall detection box to the center point of the rear detection box is used as the direction vector, with the center point of the overall detection box as the origin of the vector; the direction of the vector is opposite to the side on which the rear detection box lies.
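A minimal sketch of this construction (the helper name and inputs are assumed for illustration): the heading is the normalized vector from the overall-box center toward the front-box center.

```python
import numpy as np

def heading_vector(overall_center: np.ndarray, front_center: np.ndarray) -> np.ndarray:
    """Direction vector with the overall detection box center as origin,
    pointing toward the center of the front detection box."""
    v = front_center - overall_center
    return v / np.linalg.norm(v)
```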
Step S503: Determine whether the extended trajectory line of the reference vehicle intersects the rear image of another vehicle.
Step S504: If the extended trajectory line of the reference vehicle (for example, vehicle a in Fig. 6B) intersects the rear image of another vehicle (for example, vehicle b in Fig. 6B), assign the reference vehicle and the vehicle whose rear it intersects to one trajectory group, take the intersected vehicle as the new reference vehicle, and again obtain the extended trajectory line with the center of the reference vehicle as the origin and its heading as the direction vector. For example, referring to Fig. 6C, when the extended trajectory line of vehicle b intersects the rear of vehicle c, vehicles a, b, and c can form one trajectory group.
Step S505: If the extended trajectory line of the reference vehicle does not intersect the rear image of any other vehicle, return to selecting one of the at least two vehicles as the reference vehicle.
Step S506: Repeat the above steps until all vehicles are assigned to at least one trajectory group.
For example, referring to Fig. 7, following the above steps S501-S506, the vehicles in the first image shown in Fig. 6A can be assigned to three trajectory groups.
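The grouping loop of steps S501-S506 could look roughly like the sketch below. The `Vehicle` data structure, the rectangular rear-box footprint, and the ray-rectangle test are simplifying assumptions made for illustration; the patent itself only requires that the extended trajectory line be tested against the rear image of other vehicles.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple
import numpy as np

@dataclass
class Vehicle:
    center: np.ndarray                            # (x, y) center of the overall detection box
    heading: np.ndarray                           # unit vector pointing toward the vehicle front
    rear_box: Tuple[float, float, float, float]   # rear-box footprint as (xmin, ymin, xmax, ymax)
    group: Optional[int] = None

def ray_hits_rect(origin: np.ndarray, direction: np.ndarray,
                  rect: Tuple[float, float, float, float]) -> bool:
    """2D slab test: does the ray from `origin` along `direction` cross the rectangle?"""
    t_min, t_max = 0.0, float("inf")
    for axis in range(2):
        lo, hi = rect[axis], rect[axis + 2]
        if abs(direction[axis]) < 1e-9:
            if not (lo <= origin[axis] <= hi):
                return False
        else:
            t1 = (lo - origin[axis]) / direction[axis]
            t2 = (hi - origin[axis]) / direction[axis]
            t_min, t_max = max(t_min, min(t1, t2)), min(t_max, max(t1, t2))
    return t_min <= t_max

def group_vehicles(vehicles: List[Vehicle]) -> int:
    """Assign every vehicle to a trajectory group (steps S501-S506); returns the group count."""
    group_id = 0
    while any(v.group is None for v in vehicles):
        # S501: the ungrouped vehicle with the smallest y coordinate becomes the reference vehicle.
        ref = min((v for v in vehicles if v.group is None), key=lambda v: v.center[1])
        ref.group = group_id
        while True:
            # S502/S503: extend the reference trajectory and look for a rear box it crosses.
            hit = next((v for v in vehicles if v.group is None
                        and ray_hits_rect(ref.center, ref.heading, v.rear_box)), None)
            if hit is None:        # S505: no intersection, so pick a new reference vehicle next.
                break
            hit.group = group_id   # S504: same group; the hit vehicle becomes the new reference.
            ref = hit
        group_id += 1
    return group_id
```

A vehicle whose extended trajectory never crosses another rear box simply forms a group of its own, which is exactly the single-vehicle ("isolated") case handled next.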
Step S403: Obtain the trajectories of the at least two vehicles from the at least one trajectory group.
In some embodiments, step S403 further includes:
Step S801: Determine whether a trajectory group is an isolated group according to the number of vehicles in the group, wherein an isolated group contains one vehicle.
For example, referring to Figs. 9A to 9C, the trajectory groups containing vehicle d, vehicle e, and vehicle f each form an isolated group.
Step S802: If the trajectory group is an isolated group, determine whether the trajectory of the vehicle in the isolated group intersects the trajectory of another vehicle, or whether the distance between the trajectory of the vehicle in the isolated group and the trajectory of another vehicle is below a preset distance value.
Step S803: If the trajectory of the vehicle in the isolated group intersects the trajectory of another vehicle, or the distance between the trajectory of the vehicle in the isolated group and the trajectory of another vehicle is below a preset distance value, delete the isolated group.
It can be understood that when the trajectory of the vehicle in an isolated group intersects the trajectory of a vehicle in another group, or the distance between the two trajectories is below a preset distance value, the vehicle in the isolated group is probably moving from one lane to another. The vehicle in the isolated group is therefore not in a normal in-lane driving state, and the isolated group can be deleted.
When the trajectory of the vehicle in an isolated group does not satisfy the above conditions, the vehicle in that group is most likely driving in a normal lane.
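Steps S801-S803 can be approximated as below, assuming each group's trajectory has already been reduced to a densely sampled polyline; with dense sampling, an actual crossing shows up as a near-zero minimum distance, so a single distance check covers both conditions. Names and shapes are assumptions for illustration.

```python
from typing import Dict
import numpy as np

def min_polyline_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Minimum point-to-point distance between two densely sampled polylines of shape (N, 2)."""
    diffs = a[:, None, :] - b[None, :, :]
    return float(np.sqrt((diffs ** 2).sum(axis=-1)).min())

def prune_isolated_groups(group_sizes: Dict[int, int],
                          trajectories: Dict[int, np.ndarray],
                          dist_threshold: float) -> Dict[int, np.ndarray]:
    """Steps S801-S803: drop single-vehicle groups whose trajectory crosses, or comes closer
    than `dist_threshold` to, another group's trajectory."""
    kept = {}
    for gid, traj in trajectories.items():
        if group_sizes[gid] == 1:   # isolated group: exactly one vehicle
            too_close = any(min_polyline_distance(traj, other) < dist_threshold
                            for other_gid, other in trajectories.items() if other_gid != gid)
            if too_close:
                continue            # likely a lane-changing vehicle; delete the group
        kept[gid] = traj
    return kept
```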
Thus, through the above steps, a single-frame first image including at least two vehicles can be processed to obtain trajectory groups similar to those shown in Fig. 7 or Fig. 9C. By fitting a curve equation to the center-point coordinates of the vehicles in each trajectory group, the lane curves, the number of lanes, and the lane directions in the first image can be preliminarily obtained.
Taking Fig. 9C as an example, when the three curves obtained for the three trajectory groups in Fig. 9C are k1, k2, and k3, the current road probably contains three lanes whose directions are roughly the same, and the curves of the three lanes may be k1, k2, and k3. It can be understood that, in some embodiments, the curves k1, k2, and k3 obtained above may also be transformed accordingly in order to unify the coordinate systems; this application does not limit this.
Further, referring to Fig. 10, when the first image includes at least two frames and includes images of at least two vehicles, the method further includes:
Step S101: Select a preset number of frames from the first image.
Step S102: Select a reference frame, the reference frame being the earliest frame in time order among the preset number of frames.
For example, referring to Fig. 11, in step S101 the first frame and the second frame of the first image are selected.
It can be understood that when the image acquisition device 103 and the radar device 104 capture the initial images, the initial images are marked with corresponding timestamps. The timing information can therefore be obtained by reading the timestamps of the initial images corresponding to the first frame and the second frame.
Step S103: Obtain the contours of the at least two vehicles from the reference frame and thereby their center points, establish at least one trajectory group from the center points and headings of the at least two vehicles, and obtain the trajectories of the at least two vehicles from the at least one trajectory group.
It can be understood that the process of establishing trajectory groups in step S103 is the same as in step S402, and is not repeated here.
Step S104: After the trajectories of all vehicles in the reference frame have been obtained, return and select another frame as the reference frame.
Step S105: Repeat the above steps until the trajectories of all vehicles in the preset number of frames have been obtained.
Step S106: Obtain lane information from the trajectories of all vehicles in the preset number of frames.
In step S106, two trajectories whose distance is below a preset value are merged. By merging the trajectories of multiple image frames in this way, the distance of the farthest lane in the multi-frame images can be computed comprehensively.
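The merging in step S106 is described only as combining trajectories whose distance falls below a preset value; one plausible reading, sketched here with assumed names and array shapes, is to pool the sample points of nearby trajectories across frames:

```python
from typing import List
import numpy as np

def merge_frame_trajectories(per_frame: List[List[np.ndarray]],
                             merge_threshold: float) -> List[np.ndarray]:
    """Sketch of step S106: trajectories gathered from several frames whose minimum point
    distance is below `merge_threshold` are treated as the same lane and their sample
    points are pooled, yielding longer trajectories that reach farther down the road."""
    merged: List[np.ndarray] = []
    for frame_trajectories in per_frame:
        for traj in frame_trajectories:
            for i, existing in enumerate(merged):
                d = np.sqrt(((existing[:, None, :] - traj[None, :, :]) ** 2).sum(-1)).min()
                if d < merge_threshold:
                    merged[i] = np.vstack([existing, traj])
                    break
            else:
                merged.append(traj)
    return merged
```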
Further, referring to Fig. 12, when the first image is a single frame and includes one vehicle (see Fig. 13), obtaining the trajectory of the at least one vehicle from its contour and heading includes:
Step S121: Obtain the center point of the vehicle from its contour.
Step S122: Obtain the trajectory of the vehicle from the center point of the vehicle and the heading of the vehicle.
It can be understood that the curve equation of the vehicle trajectory obtained in step S122 is a straight-line equation (Y = mX + n). In this case, step S5 can be combined to further improve the accuracy of lane detection.
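For completeness, a single center point plus a heading defines this straight line directly; a tiny sketch (assuming a non-vertical heading in image coordinates; the helper name is hypothetical):

```python
def line_through(center_xy, heading_xy):
    """Slope m and intercept n of the trajectory line Y = m*X + n through the vehicle
    center along its heading. Assumes heading_xy[0] != 0 (non-vertical line)."""
    m = heading_xy[1] / heading_xy[0]
    n = center_xy[1] - m * center_xy[0]
    return m, n
```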
Further, referring to Fig. 14, when the first image includes at least two frames and includes one vehicle, obtaining the trajectory of the at least one vehicle from its contour and heading includes:
Step S141: Obtain the timing information of the at least two frames.
Step S142: Obtain the center point of the vehicle from its contour.
Step S143: Obtain the trajectory of the vehicle from the timing information, the center point of the vehicle, and the heading of the vehicle.
Specifically, step S143 further includes the following steps: obtaining the heading of the vehicle in the at least two frames according to the timing information; determining whether the angle between the headings of the vehicle in adjacent frames is greater than a preset angle value, adjacent frames being adjacent in time; if the angle is greater than the preset angle value, obtaining the trajectory of the vehicle by a curve regression algorithm together with the center point and heading of the vehicle; and if the angle is less than or equal to the preset angle value, obtaining the trajectory of the vehicle by a linear regression algorithm together with the center point and heading of the vehicle.
For example, referring to Fig. 15A, vehicle g, vehicle h, and vehicle i in Fig. 15A are images of the same vehicle at three successive adjacent points in time.
In Fig. 15A, the angle between the heading of vehicle g and the heading of vehicle h is greater than the preset angle value, and the angle between vehicle h and vehicle i is also greater than the preset angle value. In Fig. 15A the lane has probably curved and the vehicle heading has changed considerably, producing a larger angle between the headings in images of different time order, so the trajectory of the vehicle is obtained by the curve regression algorithm together with the center point and heading of the vehicle.
In Fig. 15B, the angle between the headings of vehicle j and vehicle k is less than or equal to the preset angle value. In Fig. 15B the lane is probably approximately straight and the vehicle heading has not changed much, producing a small angle between the headings in images of different time order, so the trajectory of the vehicle is obtained by the linear regression algorithm together with the center point and heading of the vehicle.
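A compact sketch of this choice follows. The threshold value, the array shapes, and the use of `np.polyfit` as the curve/linear regressor are assumptions for illustration; at least degree + 1 frames are needed for the fit.

```python
import numpy as np

def fit_single_vehicle_trajectory(centers: np.ndarray, headings: np.ndarray,
                                  angle_threshold_deg: float = 10.0) -> np.poly1d:
    """Sketch of step S143: if the heading change between any two adjacent frames exceeds the
    preset angle, fit a curve (degree 2); otherwise fit a straight line (degree 1).

    centers: (T, 2) vehicle center points in time order; headings: (T, 2) unit heading vectors."""
    cos_angles = np.clip((headings[:-1] * headings[1:]).sum(axis=1), -1.0, 1.0)
    angles = np.degrees(np.arccos(cos_angles))
    degree = 2 if np.any(angles > angle_threshold_deg) else 1
    coeffs = np.polyfit(centers[:, 0], centers[:, 1], deg=degree)
    return np.poly1d(coeffs)
```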
Further referring to Fig. 16, a second embodiment of this application also provides a lane detection device 100 that can be applied to a moving vehicle. The lane detection device 100 includes an image acquisition module 10, a contour acquisition module 20, a trajectory acquisition module 30, and a lane acquisition module 40.
The image acquisition module 10 is configured to acquire a first image. The first image includes an image of at least one vehicle.
The contour acquisition module 20 is connected to the image acquisition module 10 and is configured to obtain the contour and heading of at least one vehicle from the first image, the heading being the direction of the front of the at least one vehicle.
The trajectory acquisition module 30 is connected to the contour acquisition module 20 and is configured to obtain the trajectory of at least one vehicle from its contour and heading, the trajectory being the direction of movement of the at least one vehicle.
The lane acquisition module 40 is connected to the trajectory acquisition module 30 and is configured to obtain lane information from the trajectory of the at least one vehicle.
It can be understood that the image acquisition module 10, the contour acquisition module 20, the trajectory acquisition module 30, and the lane acquisition module 40 are used to perform the steps in the embodiments corresponding to Figs. 1 to 16; for details, please refer to the description of the previous embodiments, which is not repeated here.
Referring to Fig. 17, a third embodiment of this application also provides an electronic device 200. The electronic device 200 includes a processor 201 and a memory 202. The memory 202 stores a computer program 203, and the computer program 203 includes at least one instruction. The processor 201 is configured to execute the at least one instruction stored in the memory 202, so as to implement steps S1 to S5 of the above lane detection method embodiments.
Further, the computer program 203 may be divided into one or more units, and the one or more units are stored in the memory 202 and executed by the processor 201 to implement the present invention. The one or more units may be a series of computer program instruction segments capable of completing specific functions, and the instruction segments describe the execution of the computer program 203 in the electronic device. For example, the computer program 203 may be divided into an image acquisition unit, a contour acquisition unit, a trajectory acquisition unit, and a lane acquisition unit, whose specific functions are as follows: the image acquisition unit is configured to acquire a first image, the first image including an image of at least one vehicle.
The contour acquisition unit is connected to the image acquisition unit and is configured to obtain the contour and heading of at least one vehicle from the first image, the heading being the direction of the front of the at least one vehicle.
The trajectory acquisition unit is connected to the contour acquisition unit and is configured to obtain the trajectory of at least one vehicle from its contour and heading, the trajectory being the direction of movement of the at least one vehicle.
The lane acquisition unit is connected to the trajectory acquisition unit and is configured to obtain lane information from the trajectory of the at least one vehicle.
It can be understood that the electronic device 200 may include, but is not limited to, the processor 201 and the memory 202. Those skilled in the art can understand that Fig. 17 is only an example of the electronic device 200 and does not constitute a limitation on the electronic device 200; it may include more or fewer components than shown, combine certain components, or use different components. For example, the electronic device 200 may also include input and output devices, network access devices, buses, and the like.
It can be understood that the processor 201 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor 201 may be any conventional processor. The processor 201 is the control center of the electronic device and connects the various parts of the whole electronic device through various interfaces and lines.
The memory 202 may be used to store computer programs and/or modules/units. The processor 201 implements the various functions of the electronic device by running or executing the computer programs and/or modules/units stored in the memory 202 and by calling data stored in the memory 202. The memory 202 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system and application programs required by at least one function (such as a sound playback function or an image playback function), and the data storage area may store data created during the use of the electronic device (such as video data, audio data, or a phone book). In addition, the memory 202 may include high-speed random access memory, and may also include non-volatile memory such as a hard disk drive, a memory, a plug-in hard disk drive, a Smart Media Card (SMC), a Secure Digital (SD) card, a flash memory card, at least one magnetic disk storage device, a flash memory device, or another volatile solid-state storage device.
Another embodiment of the present application further provides a computer-readable storage medium that stores a computer program comprising at least one instruction, the at least one instruction being executed by a processor in an electronic device to implement the lane detection method described above.
示例性電腦程式可被分割成一個或多個模組/單元,一個或者多個模組/單元被存儲於記憶體202中,並由處理器201執行,以完成本發明。一個或多個模組/單元可係能夠完成特定功能之一系列電腦程式指令段,指令段用於描述電腦程式於電子設備中之執行過程。 The exemplary computer program may be divided into one or more modules/units, one or more modules/units are stored in the memory 202 and executed by the processor 201 to complete the present invention. One or more modules/units may be a series of computer program instruction segments capable of completing specific functions, and the instruction segments are used to describe the execution process of the computer program in the electronic device.
本發明實現上述實施例方法中之全部或部分流程,亦可藉由電腦程式來指令相關之硬體來完成,之電腦程式可存儲於一電腦可讀存儲 介質中,電腦程式於被處理器執行時,可實現上述各個方法實施例之步驟。其中,電腦程式包括電腦程式代碼,電腦程式代碼可為原始程式碼形式、可執行檔或某些中間形式等。電腦可讀介質可包括:能夠攜帶電腦程式代碼之任何實體或裝置、記錄介質、U盤、移動硬碟機、磁碟、光碟、電腦記憶體、唯讀記憶體(ROM,Read-Only Memory)、隨機存取記憶體(RAM,Random Access Memory)、電訊號以及軟體分發介質等。需要說明,電腦可讀介質包含之內容可根據司法管轄區內立法與專利實踐之要求進行適當之增減,例如於某些司法管轄區,根據立法與專利實踐,電腦可讀介質不包括電載波訊號與電信訊號。 The present invention can implement all or part of the processes in the above-mentioned embodiments by instructing the relevant hardware through a computer program. The computer program can be stored in a computer-readable storage medium. When the computer program is executed by a processor, the steps of the above-mentioned method embodiments can be implemented. The computer program includes computer program code, which can be in the form of source code, executable file or some intermediate form. The computer-readable medium can include: any entity or device capable of carrying computer program code, recording medium, USB flash drive, mobile hard drive, magnetic disk, optical disk, computer memory, read-only memory (ROM), random access memory (RAM), electrical signal and software distribution medium, etc. It should be noted that the content of computer-readable media may be appropriately increased or decreased according to the requirements of legislation and patent practice in the jurisdiction. For example, in some jurisdictions, based on legislation and patent practice, computer-readable media does not include electrical carrier signals and telecommunication signals.
於本發明所提供之幾個實施例中,應該理解到,所揭露之電子設備與方法,可藉由其它之方式實現。例如,以上所描述之電子設備實施例僅僅係示意性例如,模組之劃分,僅僅為一種邏輯功能劃分,實際實現時可有另外之劃分方式。 In the several embodiments provided by the present invention, it should be understood that the disclosed electronic devices and methods can be implemented in other ways. For example, the electronic device embodiments described above are only schematic. For example, the division of modules is only a logical function division, and there may be other division methods in actual implementation.
另外,於本發明各個實施例中之各功能模組可集成於相同處理模組中,亦可係各個模組單獨物理存於,亦可兩個或兩個以上模組集成於相同模組中。上述集成之模組既可採用硬體之形式實現,亦可採用硬體加軟體功能模組之形式實現。 In addition, each functional module in each embodiment of the present invention can be integrated into the same processing module, each module can be physically stored separately, or two or more modules can be integrated into the same module. The above-mentioned integrated module can be implemented in the form of hardware or in the form of hardware plus software functional modules.
對於本領域技術人員而言,顯然本發明不限於上述示範性實施例之細節,且於不背離本發明之精神或基本特徵之情況下,能夠以其他之具體形式實現本發明。因此,無論從哪一點來看,均應將實施例看作係示範性且係非限制性。此外,顯然“包括”一詞不排除其他模組或步驟,單數不排除複數。電子設備中陳述之多個模組或電子設備亦可由同一個模 組或電子設備藉由軟體或者硬體來實現。第一,第二等詞語用以表示名稱,而並不表示任何特定之順序。 It is obvious to those skilled in the art that the present invention is not limited to the details of the above exemplary embodiments and can be implemented in other specific forms without departing from the spirit or basic features of the present invention. Therefore, no matter from which point of view, the embodiments should be regarded as exemplary and non-restrictive. In addition, it is obvious that the word "including" does not exclude other modules or steps, and the singular does not exclude the plural. Multiple modules or electronic devices described in the electronic device can also be implemented by the same module or electronic device through software or hardware. The words first, second, etc. are used to indicate names and do not indicate any specific order.
以上實施方式僅用以說明本發明之技術方案而非限制,儘管參照以上較佳實施方式對本發明進行了詳細說明,本領域具有通常技藝者應當理解,可對本發明之技術方案進行修改或等同替換均不應脫離本發明技術方案之精神與範圍。本領域具有通常技藝者還可於本發明精神內做其它變化等用於本發明之設計,僅要其不偏離本發明之技術效果均可。該等依據本發明精神所做之變化,均應包含於本發明所要求保護之範圍之內。 The above implementations are only used to illustrate the technical solution of the present invention and are not intended to limit it. Although the present invention is described in detail with reference to the above preferred implementations, those with ordinary skills in this field should understand that the technical solution of the present invention can be modified or replaced equivalently without departing from the spirit and scope of the technical solution of the present invention. Those with ordinary skills in this field can also make other changes within the spirit of the present invention for the design of the present invention, as long as they do not deviate from the technical effects of the present invention. Such changes made in accordance with the spirit of the present invention should be included in the scope of protection required by the present invention.
S1-S4:步驟 S1-S4: Steps
Claims (9)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| TW111151024A TWI842321B (en) | 2022-12-30 | 2022-12-30 | Lane detection method and lane detection device |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| TWI842321B true TWI842321B (en) | 2024-05-11 |
| TW202427389A TW202427389A (en) | 2024-07-01 |
Family
ID=92076793
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| TW111151024A TWI842321B (en) | 2022-12-30 | 2022-12-30 | Lane detection method and lane detection device |
Country Status (1)
| Country | Link |
|---|---|
| TW (1) | TWI842321B (en) |
Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN111626277A (en) * | 2020-08-03 | 2020-09-04 | 杭州智诚惠通科技有限公司 | Vehicle tracking method and device based on over-station inter-modulation index analysis |
| US20220001872A1 (en) * | 2019-05-28 | 2022-01-06 | Mobileye Vision Technologies Ltd. | Semantic lane description |
| CN115214708A (en) * | 2021-04-19 | 2022-10-21 | 华为技术有限公司 | Vehicle intention prediction method and related device thereof |
- 2022-12-30 TW TW111151024A patent/TWI842321B/en active
Also Published As
| Publication number | Publication date |
|---|---|
| TW202427389A (en) | 2024-07-01 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| Liu et al. | A survey on autonomous driving datasets: Statistics, annotation quality, and a future outlook | |
| Xing et al. | Advances in vision-based lane detection: Algorithms, integration, assessment, and perspectives on ACP-based parallel vision | |
| US11670087B2 (en) | Training data generating method for image processing, image processing method, and devices thereof | |
| CN109389026B (en) | Lane detection methods and equipment | |
| CN114419098B (en) | Moving target trajectory prediction method and device based on visual transformation | |
| CN112258519B (en) | Automatic extraction method and device for way-giving line of road in high-precision map making | |
| WO2023123837A1 (en) | Map generation method and apparatus, electronic device, and storage medium | |
| CN112507862A (en) | Vehicle orientation detection method and system based on multitask convolutional neural network | |
| CN112654997B (en) | A kind of lane line detection method and device | |
| CN116734828A (en) | Determination of road topology information, electronic map data processing methods, electronic equipment | |
| WO2022082571A1 (en) | Lane line detection method and apparatus | |
| CN114091521A (en) | Method, device and equipment for detecting vehicle course angle and storage medium | |
| CN114694108A (en) | Image processing method, device, equipment and storage medium | |
| CN112528807A (en) | Method and device for predicting driving track, electronic equipment and storage medium | |
| CN115292435A (en) | High-precision map updating method and device, electronic equipment and storage medium | |
| CN113902047B (en) | Image element matching method, device, equipment and storage medium | |
| CN117392423A (en) | Lidar-based target true value data prediction method, device and equipment | |
| CN118038409A (en) | Vehicle drivable area detection method, device, electronic equipment and storage medium | |
| JP2018073275A (en) | Image recognition device | |
| CN115507873B (en) | Route planning method, device, equipment and medium based on bus tail traffic light | |
| TWI842321B (en) | Lane detection method and lane detection device | |
| CN112558036A (en) | Method and apparatus for outputting information | |
| CN115123291A (en) | A method and device for behavior prediction based on obstacle recognition | |
| CN116166761B (en) | A method and device for updating all elements of a high-precision map based on newly added road scenes | |
| CN118279847A (en) | Lane detection method and lane detection device |