
TWI751735B - Automatic guided vehicle tracking system and automatic guided vehicle tracking method - Google Patents


Info

Publication number
TWI751735B
TWI751735B (application TW109135087A)
Authority
TW
Taiwan
Prior art keywords
spherical
coordinates
coordinate
target
spherical coordinate
Prior art date
Application number
TW109135087A
Other languages
Chinese (zh)
Other versions
TW202215184A (en)
Inventor
粘博閎
高薇雅
黃穎竹
田永平
Original Assignee
財團法人工業技術研究院
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 財團法人工業技術研究院 (Industrial Technology Research Institute)
Priority to TW109135087A (TWI751735B)
Priority to CN202011258643.8A (CN114326695B)
Application granted
Publication of TWI751735B
Publication of TW202215184A

Landscapes

  • Image Analysis (AREA)
  • Radar Systems Or Details Thereof (AREA)
  • Traffic Control Systems (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

The present invention provides an automatic guided vehicle tracking system and an automatic guided vehicle tracking method. The automatic guided vehicle tracking system includes: a data collection unit that obtains a color image and two-dimensional information; a data computing unit that obtains a three-dimensional coordinate of an object in the color image and a two-dimensional coordinate of the object in the two-dimensional information, transforms the three-dimensional coordinate into a first spherical coordinate of a spherical coordinate system and the two-dimensional coordinate into a second spherical coordinate of the spherical coordinate system, and generates a third spherical coordinate of the object according to the first spherical coordinate obtained by a color image depth camera and the second spherical coordinate obtained by a lidar; and a vehicle control unit including a vehicle controller that controls a vehicle to follow the object according to the third spherical coordinate.

Description

Self-propelled vehicle following system and self-propelled vehicle following method

The present invention relates to a self-propelled vehicle following system and a self-propelled vehicle following method, and in particular to a following system and a following method that allow a self-propelled vehicle to follow a target in a field with heavy interference or complicated routes.

In recent years, smart retail and logistics e-commerce have flourished, but the labor force is aging and the cost of manual operations such as handling and picking has risen significantly. To address the shortage of human resources in the logistics industry, human-machine collaborative following self-propelled vehicles can improve personnel efficiency and broaden the employable age range.

Service self-propelled vehicles for warehousing can be roughly divided into navigation-type vehicles and following-type vehicles. Existing following-type vehicles are mainly used in relatively simple environments, such as logistics warehouses with fixed shelf positions. In fields with heavy interference or complicated routes, such as busy logistics warehouses or retail stores where many people work at the same time, people inevitably cross paths; once the target is occluded, the vehicle can no longer follow it. Therefore, how to make a self-propelled vehicle follow a person correctly is a goal that those skilled in the art should strive for.

In view of this, the present invention provides a self-propelled vehicle following system and a self-propelled vehicle following method that allow a self-propelled vehicle to follow a target in a field with heavy interference or complicated routes.

The present invention provides a self-propelled vehicle following system, including: a data collection unit, including a color image depth camera that obtains a color image, and an optical radar (lidar) that obtains two-dimensional information; a data computing unit, including an image target tracking module that obtains the three-dimensional coordinates of a target in the color image, a lidar target tracking module that obtains the two-dimensional coordinates of the target in the two-dimensional information, a coordinate fusion module that converts the three-dimensional coordinates into a first spherical coordinate of a spherical coordinate system and converts the two-dimensional coordinates into a second spherical coordinate of the spherical coordinate system, and a control command output module that generates a third spherical coordinate of the target according to the first spherical coordinate obtained from the color image depth camera and the second spherical coordinate obtained from the lidar; and a vehicle body control unit, including a vehicle body controller that controls the vehicle body to follow the target according to the third spherical coordinate.

The present invention provides a self-propelled vehicle following method, including: obtaining a color image by a color image depth camera and obtaining two-dimensional information by a lidar; obtaining the three-dimensional coordinates of a target in the color image and the two-dimensional coordinates of the target in the two-dimensional information; converting the three-dimensional coordinates into a first spherical coordinate of a spherical coordinate system and converting the two-dimensional coordinates into a second spherical coordinate of the spherical coordinate system; generating a third spherical coordinate of the target according to the first spherical coordinate obtained from the color image depth camera and the second spherical coordinate obtained from the lidar; and controlling, by a vehicle body controller, the vehicle body to follow the target according to the third spherical coordinate.

Based on the above, the self-propelled vehicle following system and method of the present invention use a color image depth camera mounted on the vehicle body to obtain a color image and a lidar mounted on the vehicle body to obtain two-dimensional information, obtain the three-dimensional coordinates of the target in the color image and the two-dimensional coordinates of the target in the two-dimensional information, convert the three-dimensional and two-dimensional coordinates into a first and a second spherical coordinate respectively, and then generate a third spherical coordinate of the target from the first and second spherical coordinates. Finally, the vehicle body controller controls the vehicle body to follow the target according to the third spherical coordinate. The system and method use the color image depth camera to add identifiable features of the target, compensating for the limited features of the lidar point cloud, so that the target can be detected more accurately in a complex environment.

The present invention uses two following methods, lidar and visual imaging, at the same time so that each compensates for the other's weaknesses. Specifically, when the target is occluded, the lidar cannot distinguish the target from an obstacle and can easily follow the wrong object. The visual image stores the target's color, shape, texture, and other features for the system to match, so the following vehicle does not misjudge the target and, once the obstacle is removed, can continue following the correct one. The point cloud generated by the lidar gives the distance between the target and the vehicle body and the target's approximate shape and size, which is easily confused with obstacles of similar shape during tracking. The visual image stores the target's color distribution, shape, texture, and other features, giving the tracking process more information for judgment. However, the field of view of the visual image is narrow; when the target moves outside the line of sight of the color image depth (RGB-D) camera, it is easily lost. The lidar has a wide field of view and can keep tracking a target beyond the camera's view, so the following vehicle can continue to track and follow.

FIG. 1 is a block diagram of a self-propelled vehicle following system according to an embodiment of the present invention.

Referring to FIG. 1, a self-propelled vehicle following system according to an embodiment of the present invention includes a data collection unit 110, a data computing unit 120, and a vehicle body control unit 130. The data collection unit 110 includes a color image depth camera 111 and a lidar 112 disposed on the vehicle body. The color image depth camera 111 obtains color images. The lidar 112 obtains two-dimensional information such as a point cloud. The data computing unit 120 processes the color image and the two-dimensional information to output target coordinates, so that the vehicle body control robot operating system (ROS) 131 of the vehicle body control unit 130 can control the vehicle body to follow the target according to the target coordinates. The data computing unit 120 includes a person detection module 121, an image target locking module 122, an image target tracking module 123, a lidar target locking module 124, a lidar target tracking module 125, a coordinate fusion module 126, and a control command output module 127. In one embodiment, the data computing unit 120 may include a processor that executes software/firmware code corresponding to each of these modules. In another embodiment, each module of the data computing unit 120 may be implemented as a hardware circuit. In yet another embodiment, each module may be implemented as a combination of hardware circuits and/or software/firmware code. The present invention does not limit how the modules of the data computing unit 120 are implemented; each module is described in detail below.
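
For illustration only, the data flow among the three units can be sketched as follows; all class and method names here are hypothetical and are not prescribed by the patent:

```python
# Hypothetical skeleton of the collection -> computation -> control data flow
# of FIG. 1. Only the module structure follows the description above.
from typing import Optional, Tuple

SphericalCoord = Tuple[float, float]  # (rho, theta): distance and bearing

class DataComputingUnit:
    """Runs both trackers each cycle and fuses their outputs (modules 121-127)."""

    def step(self, color_frame, lidar_scan) -> Optional[SphericalCoord]:
        coord_a = self.track_in_image(color_frame)   # first spherical coordinate
        coord_b = self.track_in_lidar(lidar_scan)    # second spherical coordinate
        return self.fuse(coord_a, coord_b)           # third spherical coordinate

    def track_in_image(self, frame) -> Optional[SphericalCoord]:
        raise NotImplementedError  # person detection, locking, tracking (121-123)

    def track_in_lidar(self, scan) -> Optional[SphericalCoord]:
        raise NotImplementedError  # lidar locking and tracking (124-125)

    def fuse(self, a, b) -> Optional[SphericalCoord]:
        raise NotImplementedError  # coordinate fusion, command output (126-127)
```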

In one embodiment, the data computing unit 120 executes a target-person locking procedure or a person identification and tracking procedure to lock onto and track the target in the color depth image. Both procedures can use a deep learning method to obtain the image of the target person, or the images of all persons in the frame, and then use a machine learning method (for example, the MobileNet-SSD v2 lite object recognition model) to segment the image and extract features, for example by dividing the obtained person image into several equal parts and taking the first principal component of each part as a feature of the image. When executing the target-person locking procedure, the data computing unit 120 stores these features. When executing the person identification and tracking procedure, the data computing unit 120 takes the features obtained while locking the target person as the target-feature input, performs feature comparison to obtain the target position, and locks onto and follows the target according to the identification result. The feature extraction method is, for example, an RGB histogram.
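
A minimal sketch of this per-strip RGB-histogram idea follows; the strip count, bin count, and similarity measure are illustrative assumptions, not values from the patent:

```python
import cv2
import numpy as np

def person_features(person_bgr, n_strips=4, bins=8):
    """Concatenated per-strip RGB histogram of a detected person crop."""
    h = person_bgr.shape[0]
    feats = []
    for i in range(n_strips):
        # Split the person image into horizontal strips, one histogram each.
        strip = person_bgr[i * h // n_strips:(i + 1) * h // n_strips]
        hist = cv2.calcHist([strip], [0, 1, 2], None, [bins] * 3,
                            [0, 256, 0, 256, 0, 256])
        cv2.normalize(hist, hist)          # make strips comparable in scale
        feats.append(hist.flatten())
    return np.concatenate(feats)

def match_score(f1, f2):
    """Histogram similarity (correlation); higher means a better match."""
    return float(cv2.compareHist(f1.astype(np.float32),
                                 f2.astype(np.float32),
                                 cv2.HISTCMP_CORREL))
```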

On the other hand, the data computing unit 120 can also execute a lidar algorithm to lock onto and track the target in the lidar's two-dimensional information. The lidar algorithm obtains environmental information (for example, a point cloud) through the optical sensor of a two-dimensional lidar. This information is a collection of distances and angles, centered on the lidar, to objects in the surrounding environment. To avoid losing the target when the size of an object in the frame changes, the present invention can use the CSRT tracking algorithm for object tracking. The CSRT tracking algorithm uses a Discriminative Correlation Filter with Channel and Spatial Reliability (DCF-CSR) to adjust the filter, ensuring that an object can still be tracked after it is scaled. The CSRT tracking algorithm computes Histogram of Oriented Gradient (HOG) features and Colornames features of the selected region and compares them with the previous frame to determine the object's current position.
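
A minimal usage sketch of the CSRT tracker as exposed by OpenCV (assuming an opencv-contrib build; the video source below is a placeholder):

```python
import cv2

cap = cv2.VideoCapture("follow.mp4")          # assumed input source
ok, frame = cap.read()
bbox = cv2.selectROI("lock target", frame)    # lock the target once; (x, y, w, h)

tracker = cv2.TrackerCSRT_create()            # DCF-CSR with HOG + Colornames
tracker.init(frame, bbox)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    found, bbox = tracker.update(frame)       # per-frame target position
    if found:
        x, y, w, h = map(int, bbox)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(1) == 27:                  # Esc to stop
        break
```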

FIG. 2 is a block diagram of a person detection module according to an embodiment of the present invention.

Referring to FIG. 2, the person detection module 121 receives a color image 210 as input and analyzes it with a deep learning object detection model 230 to output full-body person bounding box coordinates and in-box images 220. The deep learning object detection model 230 is, for example, a Single Shot Multibox Detector (SSD). The deep learning object detection model 230 receives the color image 210 through a classifier 231, performs feature extraction through a plurality of convolutional layers 232, executes a person detection function 233, and finally outputs the full-body person bounding box coordinates and in-box images 220.
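
A hedged sketch of such a detector through OpenCV's dnn module; the model files named below are the commonly distributed 20-class VOC MobileNet-SSD weights, an assumption rather than the exact model used in the patent:

```python
import cv2
import numpy as np

net = cv2.dnn.readNetFromCaffe("MobileNetSSD_deploy.prototxt",
                               "MobileNetSSD_deploy.caffemodel")
PERSON_CLASS_ID = 15  # "person" in the common 20-class VOC MobileNet-SSD

def detect_persons(frame, conf_thresh=0.5):
    """Return a list of (x, y, w, h, crop) full-body person boxes."""
    H, W = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)),
                                 0.007843, (300, 300), 127.5)
    net.setInput(blob)
    detections = net.forward()                # shape: (1, 1, N, 7)
    boxes = []
    for i in range(detections.shape[2]):
        conf = detections[0, 0, i, 2]
        cls = int(detections[0, 0, i, 1])
        if cls == PERSON_CLASS_ID and conf > conf_thresh:
            box = detections[0, 0, i, 3:7] * np.array([W, H, W, H])
            x1, y1, x2, y2 = box.astype(int)
            x1, y1 = max(x1, 0), max(y1, 0)   # clamp to the image
            boxes.append((x1, y1, x2 - x1, y2 - y1, frame[y1:y2, x1:x2]))
    return boxes
```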

FIG. 3 is a block diagram of an image target locking module according to an embodiment of the present invention.

Referring to FIG. 3, the image target locking module 122 receives the person bounding box coordinates and in-box images 220 from the person detection module 121, locks the bounding box with the largest area within the field of view of the color image depth camera 111 as the target (S301), and ignores the other, non-target persons detected by the person detection module 121. Next, the image target locking module 122 extracts image features from the bounding box with the largest area and stores them as target image features (S302) to output the target image features 310, and stores the bounding box with the largest area as the target image bounding box (S303) to output the target bounding box 320 (also called the first target bounding box).
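
Steps S301 to S303 can be sketched as below, reusing the hypothetical person_features helper and detect_persons output format from the earlier sketches:

```python
def lock_target(person_boxes, feature_fn):
    """person_boxes: list of (x, y, w, h, crop). Lock the largest box (S301),
    extract and store its features (S302), and keep its box (S303)."""
    if not person_boxes:
        return None
    x, y, w, h, crop = max(person_boxes, key=lambda b: b[2] * b[3])
    target_bbox = (x, y, w, h)            # first target bounding box (320)
    target_feats = feature_fn(crop)       # target image features (310)
    return target_bbox, target_feats
```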

FIG. 4 is a block diagram of an image target tracking module according to an embodiment of the present invention.

Referring to FIG. 4, the image target tracking module 123 executes an image target tracking procedure that receives the target image features 310, the target bounding box 320, and the person bounding box coordinates and in-box images 220 to calculate the coverage rate between each person bounding box and the target bounding box (S401). The image target tracking module 123 also performs full-body person image feature matching between the in-box images and the target image features 310 (S402). The image target tracking module 123 then receives the pixel depth information 410 of the color image 210 to locate the three-dimensional center position of the target (S403) and output the three-dimensional coordinates 420. For example, when the target bounding box is represented by $(x_1, y_1, w_1, h_1)$ and a person bounding box is represented by $(x_2, y_2, w_2, h_2)$, the coverage rate can be computed as the overlap of the two boxes normalized by the target box area, for example

$$\text{Coverage Rate} = \frac{\max\big(0,\ \min(x_1{+}w_1,\ x_2{+}w_2) - \max(x_1, x_2)\big) \cdot \max\big(0,\ \min(y_1{+}h_1,\ y_2{+}h_2) - \max(y_1, y_2)\big)}{w_1 h_1},$$

where $x$ and $y$ are the reference coordinates of a bounding box (for example, center or corner coordinates; corner coordinates are used in the expression above), and $w$ and $h$ are the width and height of the bounding box, respectively.
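
The same coverage computation, written out directly (boxes as (x, y, w, h) tuples with corner coordinates, matching the expression above):

```python
def coverage_rate(target, person):
    """Overlap area of the two boxes, normalized by the target box area."""
    x1, y1, w1, h1 = target
    x2, y2, w2, h2 = person
    ox = max(0.0, min(x1 + w1, x2 + w2) - max(x1, x2))  # overlap width
    oy = max(0.0, min(y1 + h1, y2 + h2) - max(y1, y2))  # overlap height
    return (ox * oy) / (w1 * h1)
```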

FIG. 5 is a block diagram of a lidar target locking module according to an embodiment of the present invention.

Referring to FIG. 5, the lidar target locking module 124 receives the two-dimensional information 510 (for example, a two-dimensional point cloud) of the lidar 112 to lock a bounding box at predetermined coordinates as the target bounding box (also called the second target bounding box) (S501), initializes a feature tracker (S502), and outputs the feature tracker 520. The feature tracker 520 is, for example, a leg feature tracker.

FIG. 6 is a block diagram of a lidar target tracking module according to an embodiment of the present invention.

Referring to FIG. 6, the lidar target tracking module 125 executes a point cloud target tracking procedure that receives the two-dimensional information 510 and the feature tracker 520, tracks the target according to them (S601), and locates the two-dimensional center position of the target (S602) to output the two-dimensional coordinates 610 of the target.

Referring again to FIG. 1, the coordinate fusion module 126 discards the coordinate value corresponding to height in the three-dimensional coordinates 420 of the camera coordinate system to convert the three-dimensional coordinates 420 into a first coordinate in the vehicle body coordinate system, and then converts the first coordinate into the first spherical coordinate; it also converts the two-dimensional coordinates 610 of the lidar coordinate system into a second coordinate in the vehicle body coordinate system and then converts the second coordinate into the second spherical coordinate. The conversion from the camera coordinate system and the lidar coordinate system into the vehicle body coordinate system is described below with reference to FIG. 7.

FIG. 7 is a schematic diagram of the coordinate relationship among the camera coordinate system, the lidar coordinate system, and the vehicle body coordinate system according to an embodiment of the present invention.

Referring to FIG. 7, the vehicle body judges the position of a person from the perspective of the vehicle body coordinate system 710, so the coordinate fusion module 126 converts the data of the camera coordinate system 720 and the lidar coordinate system 730 into the vehicle body coordinate system 710.

For example, suppose a data point in the camera coordinate system 720 has coordinates $(x_C, y_C)$. If the vehicle body coordinate system 710 and the camera coordinate system 720 are related by a rotation $\mathbf{R}_C$ and a translation $\mathbf{t}_C$, then the first coordinate of the data point in the vehicle body coordinate system 710 is

$$\begin{bmatrix} x_1 \\ y_1 \end{bmatrix} = \mathbf{R}_C \begin{bmatrix} x_C \\ y_C \end{bmatrix} + \mathbf{t}_C, \quad \text{where} \quad \mathbf{R}_C = \begin{bmatrix} \cos\theta_C & -\sin\theta_C \\ \sin\theta_C & \cos\theta_C \end{bmatrix}, \quad \mathbf{t}_C = \begin{bmatrix} t_{x,C} \\ t_{y,C} \end{bmatrix}.$$

Similarly, suppose a data point in the lidar coordinate system 730 has coordinates $(x_L, y_L)$. If the vehicle body coordinate system 710 and the lidar coordinate system 730 are related by a rotation $\mathbf{R}_L$ and a translation $\mathbf{t}_L$, then the second coordinate of the data point in the vehicle body coordinate system 710 is

$$\begin{bmatrix} x_2 \\ y_2 \end{bmatrix} = \mathbf{R}_L \begin{bmatrix} x_L \\ y_L \end{bmatrix} + \mathbf{t}_L, \quad \text{where} \quad \mathbf{R}_L = \begin{bmatrix} \cos\theta_L & -\sin\theta_L \\ \sin\theta_L & \cos\theta_L \end{bmatrix}, \quad \mathbf{t}_L = \begin{bmatrix} t_{x,L} \\ t_{y,L} \end{bmatrix}.$$

Next, the coordinate fusion module 126 converts the first coordinate and the second coordinate into the first spherical coordinate and the second spherical coordinate, respectively, using

$$\rho = \sqrt{x^2 + y^2}, \qquad \theta = \tan^{-1}\!\frac{y}{x},$$

that is, the Cartesian coordinates $(x, y)$ of the first and second coordinates are converted into the polar coordinates $(\rho, \theta)$ of the first and second spherical coordinates.

FIG. 8 is a flowchart of the control command output module outputting the third spherical coordinate of the target according to an embodiment of the present invention.

Referring to FIG. 8, "A: the first spherical coordinate $(\rho_A, \theta_A)$ determined from the color image" 801 and "B: the second spherical coordinate $(\rho_B, \theta_B)$ determined from the lidar" 802 are input to the control command output module 127 for target position judgment (S810). If both A and B have values (S811) and $\theta_A$ and $\theta_B$ differ by more than a threshold (S812), A is taken as the third spherical coordinate (S813), because the color image depth camera 111 is more accurate than the lidar 112 as long as it has not lost the target.

If both A and B have values and $\theta_A$ and $\theta_B$ differ by no more than the threshold (S814), the third spherical coordinate is obtained from the arithmetic mean of $\theta_A$ and $\theta_B$ and the smaller of $\rho_A$ and $\rho_B$ (S815), because the output coordinate with the smaller radius is closer to the vehicle body and is therefore more likely to be the target being tracked.

If one of A and B is null (S816), that is, the target was not tracked by one of the color image depth camera 111 and the lidar 112, whichever of A and B has a value is taken as the third spherical coordinate (S817).

If both A and B are null (S818), no third spherical coordinate is output (S819) and the target tracking procedure is executed again (S820).
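
The decision logic of FIG. 8, written as a sketch; A and B are (rho, theta) tuples or None when the corresponding sensor lost the target, and the threshold value is an assumption:

```python
def fuse(A, B, angle_threshold=0.2):
    """Return the third spherical coordinate, or None to re-run target locking."""
    if A is None and B is None:
        return None                      # S818/S819: re-execute tracking (S820)
    if A is None or B is None:
        return A if B is None else B     # S816/S817: take the surviving sensor
    (rho_a, th_a), (rho_b, th_b) = A, B
    # S812/S813: trust the camera when the bearings disagree strongly
    # (angle wrap-around at +/- pi is ignored here for brevity).
    if abs(th_a - th_b) > angle_threshold:
        return A
    # S814/S815: mean bearing, and the smaller radius -- the closer return
    # is more likely to be the followed person.
    return (min(rho_a, rho_b), (th_a + th_b) / 2.0)
```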

To sum up, the self-propelled vehicle following system and method of the present invention use a color image depth camera mounted on the vehicle body to obtain a color image and a lidar mounted on the vehicle body to obtain two-dimensional information, obtain the three-dimensional coordinates of the target in the color image and the two-dimensional coordinates of the target in the two-dimensional information, convert them into a first and a second spherical coordinate respectively, and then generate a third spherical coordinate of the target from the first and second spherical coordinates. Finally, the vehicle body controller controls the vehicle body to follow the target according to the third spherical coordinate. The system and method use the color image depth camera to add identifiable features of the target, compensating for the limited features of the lidar point cloud, so that the target can be detected more accurately in a complex environment. In addition, the system memorizes the target's features, so it can continue following even after the target has been occluded.

Although the present invention has been disclosed above by way of embodiments, they are not intended to limit the present invention. Anyone with ordinary knowledge in the technical field may make slight changes and modifications without departing from the spirit and scope of the present invention; therefore, the protection scope of the present invention shall be determined by the appended claims.

110: data collection unit
111: color image depth camera
112: lidar
120: data computing unit
121: person detection module
122: image target locking module
123: image target tracking module
124: lidar target locking module
125: lidar target tracking module
126: coordinate fusion module
127: control command output module
130: vehicle body control unit
131: vehicle body control robot operating system (ROS)
210: color image
220: full-body person bounding box coordinates and in-box images
230: deep learning object detection model
231: classifier
232: convolutional layers
233: person detection function
310: target image features
320: target bounding box
S301~S303: steps
410: pixel depth information
420: three-dimensional coordinates
S401~S403: steps
510: two-dimensional information
520: feature tracker
S501~S502: steps
610: two-dimensional coordinates
S601~S602: steps
710: vehicle body coordinate system
720: camera coordinate system
730: lidar coordinate system
801: "A: the first spherical coordinate (ρ_A, θ_A) determined from the color image"
802: "B: the second spherical coordinate (ρ_B, θ_B) determined from the lidar"
S810~S820: steps

FIG. 1 is a block diagram of a self-propelled vehicle following system according to an embodiment of the present invention.
FIG. 2 is a block diagram of a person detection module according to an embodiment of the present invention.
FIG. 3 is a block diagram of an image target locking module according to an embodiment of the present invention.
FIG. 4 is a block diagram of an image target tracking module according to an embodiment of the present invention.
FIG. 5 is a block diagram of a lidar target locking module according to an embodiment of the present invention.
FIG. 6 is a block diagram of a lidar target tracking module according to an embodiment of the present invention.
FIG. 7 is a schematic diagram of the coordinate relationship among the camera coordinate system, the lidar coordinate system, and the vehicle body coordinate system according to an embodiment of the present invention.
FIG. 8 is a flowchart of the control command output module outputting the third spherical coordinate of the target according to an embodiment of the present invention.


Claims (20)

1. A self-propelled vehicle following system, comprising: a data collection unit, comprising: a color image depth camera obtaining a color image; and a lidar obtaining two-dimensional information; a data computing unit, comprising: a person detection module that, after receiving the color image from the color image depth camera, analyzes the color image with a deep learning object detection model and outputs coordinates of at least one person bounding box and at least one in-box image; an image target locking module that receives the coordinates of the at least one person bounding box and the at least one in-box image, locks the at least one bounding box with the largest area as a first target bounding box, extracts image features from the first target bounding box and stores them as target image features, and outputs the first target bounding box and the target image features; an image target tracking module that obtains three-dimensional coordinates of a target in the color image from the image target locking module; a lidar target tracking module that obtains two-dimensional coordinates of the target in the two-dimensional information; a coordinate fusion module that converts the three-dimensional coordinates into a first spherical coordinate of a spherical coordinate system and converts the two-dimensional coordinates into a second spherical coordinate of the spherical coordinate system; and a control command output module that generates a third spherical coordinate of the target according to the first spherical coordinate obtained from the color image depth camera and the second spherical coordinate obtained from the lidar; and a vehicle body control unit, comprising a vehicle body controller that controls the vehicle body to follow the target according to the third spherical coordinate.

2. The self-propelled vehicle following system according to claim 1, further comprising: a lidar target locking module that receives the two-dimensional information from the lidar, locks a bounding box at predetermined coordinates as a second target bounding box, initializes a feature tracker, and outputs the feature tracker.

3. The self-propelled vehicle following system according to claim 1, wherein the target tracking module executes an image target tracking procedure and a point cloud target tracking procedure; the image target tracking procedure calculates a coverage rate between the at least one person bounding box and the target bounding box according to the coordinates of the at least one person bounding box, the at least one in-box image, the first target bounding box, and the target image features, performs full-body person image feature matching between the at least one in-box image and the target image features, and outputs the three-dimensional coordinates according to pixel depth information of the color image; and the point cloud target tracking procedure tracks the target according to the two-dimensional information and the feature tracker and locates the two-dimensional coordinates of the target.

4. The self-propelled vehicle following system according to claim 1, wherein the coordinate fusion module discards the coordinate value corresponding to height in the three-dimensional coordinates to convert the three-dimensional coordinates into a first coordinate of a vehicle body coordinate system, converts the first coordinate into the first spherical coordinate, converts the two-dimensional coordinates into a second coordinate of the vehicle body coordinate system, and converts the second coordinate into the second spherical coordinate.

5. The self-propelled vehicle following system according to claim 1, wherein when the first spherical coordinate and the second spherical coordinate both have values and the difference between their angles is greater than a threshold, the first spherical coordinate is taken as the third spherical coordinate; and when the first spherical coordinate and the second spherical coordinate both have values and the difference between their angles is not greater than the threshold, the third spherical coordinate is obtained from the arithmetic mean of the angles of the first and second spherical coordinates and the smaller radius value of the first and second spherical coordinates.

6. The self-propelled vehicle following system according to claim 5, wherein when one of the first spherical coordinate and the second spherical coordinate is null, whichever of the first and second spherical coordinates has a value is taken as the third spherical coordinate.

7. The self-propelled vehicle following system according to claim 6, wherein when the first spherical coordinate and the second spherical coordinate are both null, a target tracking procedure is executed again.

8. The self-propelled vehicle following system according to claim 1, wherein the two-dimensional information comprises a point cloud.

9. A self-propelled vehicle following method, comprising: obtaining a color image by a color image depth camera and obtaining two-dimensional information by a lidar; after receiving the color image from the color image depth camera, analyzing the color image with a deep learning object detection model and outputting coordinates of at least one person bounding box and at least one in-box image; receiving the coordinates of the at least one person bounding box and the at least one in-box image, locking the at least one bounding box with the largest area as a first target bounding box, extracting image features from the first target bounding box and storing them as target image features, and outputting the first target bounding box and the target image features; obtaining three-dimensional coordinates of a target in the color image from the first target bounding box and the target image features, and obtaining two-dimensional coordinates of the target in the two-dimensional information from the two-dimensional information; converting the three-dimensional coordinates into a first spherical coordinate of a spherical coordinate system and converting the two-dimensional coordinates into a second spherical coordinate of the spherical coordinate system; generating a third spherical coordinate of the target according to the first spherical coordinate obtained from the color image depth camera and the second spherical coordinate obtained from the lidar; and controlling, by a vehicle body controller, the vehicle body to follow the target according to the third spherical coordinate.

10. The self-propelled vehicle following method according to claim 9, further comprising: receiving the two-dimensional information from the lidar, locking a bounding box at predetermined coordinates as a second target bounding box, initializing a feature tracker, and outputting the feature tracker.

11. The self-propelled vehicle following method according to claim 9, further comprising: executing an image target tracking procedure and a point cloud target tracking procedure, wherein the image target tracking procedure calculates a coverage rate between the at least one person bounding box and the target bounding box according to the coordinates of the at least one person bounding box, the at least one in-box image, the first target bounding box, and the target image features, performs full-body person image feature matching between the at least one in-box image and the target image features, and outputs the three-dimensional coordinates according to pixel depth information of the color image; and the point cloud target tracking procedure tracks the target according to the two-dimensional information and the feature tracker and locates the two-dimensional coordinates of the target.

12. The self-propelled vehicle following method according to claim 9, further comprising: discarding the coordinate value corresponding to height in the three-dimensional coordinates to convert the three-dimensional coordinates into a first coordinate of a vehicle body coordinate system, converting the first coordinate into the first spherical coordinate, converting the two-dimensional coordinates into a second coordinate of the vehicle body coordinate system, and converting the second coordinate into the second spherical coordinate.

13. The self-propelled vehicle following method according to claim 9, wherein when the first spherical coordinate and the second spherical coordinate both have values and the difference between their angles is greater than a threshold, the first spherical coordinate is taken as the third spherical coordinate; and when the first spherical coordinate and the second spherical coordinate both have values and the difference between their angles is not greater than the threshold, the third spherical coordinate is obtained from the arithmetic mean of the angles of the first and second spherical coordinates and the smaller radius value of the first and second spherical coordinates.

14. The self-propelled vehicle following method according to claim 13, wherein when one of the first spherical coordinate and the second spherical coordinate is null, whichever of the first and second spherical coordinates has a value is taken as the third spherical coordinate.

15. The self-propelled vehicle following method according to claim 14, wherein when the first spherical coordinate and the second spherical coordinate are both null, a target tracking procedure is executed again.

16. The self-propelled vehicle following method according to claim 9, wherein the two-dimensional information comprises a point cloud.

17. A self-propelled vehicle following system, comprising: a data collection unit, comprising: a color image depth camera obtaining a color image; and a lidar obtaining two-dimensional information; a data computing unit, comprising: an image target tracking module obtaining three-dimensional coordinates of a target in the color image; a lidar target tracking module obtaining two-dimensional coordinates of the target in the two-dimensional information; a coordinate fusion module converting the three-dimensional coordinates into a first spherical coordinate of a spherical coordinate system and converting the two-dimensional coordinates into a second spherical coordinate of the spherical coordinate system, wherein the coordinate fusion module discards the coordinate value corresponding to height in the three-dimensional coordinates to convert the three-dimensional coordinates into a first coordinate of a vehicle body coordinate system, converts the first coordinate into the first spherical coordinate, converts the two-dimensional coordinates into a second coordinate of the vehicle body coordinate system, and converts the second coordinate into the second spherical coordinate; and a control command output module generating a third spherical coordinate of the target according to the first spherical coordinate obtained from the color image depth camera and the second spherical coordinate obtained from the lidar; and a vehicle body control unit, comprising a vehicle body controller controlling the vehicle body to follow the target according to the third spherical coordinate.

18. A self-propelled vehicle following system, comprising: a data collection unit, comprising: a color image depth camera obtaining a color image; and a lidar obtaining two-dimensional information; a data computing unit, comprising: an image target tracking module obtaining three-dimensional coordinates of a target in the color image; a lidar target tracking module obtaining two-dimensional coordinates of the target in the two-dimensional information; a coordinate fusion module converting the three-dimensional coordinates into a first spherical coordinate of a spherical coordinate system and converting the two-dimensional coordinates into a second spherical coordinate of the spherical coordinate system; and a control command output module generating a third spherical coordinate of the target according to the first spherical coordinate obtained from the color image depth camera and the second spherical coordinate obtained from the lidar, wherein when the first spherical coordinate and the second spherical coordinate both have values and the difference between their angles is greater than a threshold, the first spherical coordinate is taken as the third spherical coordinate, and when the first spherical coordinate and the second spherical coordinate both have values and the difference between their angles is not greater than the threshold, the third spherical coordinate is obtained from the arithmetic mean of the angles of the first and second spherical coordinates and the smaller radius value of the first and second spherical coordinates; and a vehicle body control unit, comprising a vehicle body controller controlling the vehicle body to follow the target according to the third spherical coordinate.

19. A self-propelled vehicle following method, comprising: obtaining a color image by a color image depth camera and obtaining two-dimensional information by a lidar; obtaining three-dimensional coordinates of a target in the color image and obtaining two-dimensional coordinates of the target in the two-dimensional information; converting the three-dimensional coordinates into a first spherical coordinate of a spherical coordinate system and converting the two-dimensional coordinates into a second spherical coordinate of the spherical coordinate system, wherein the coordinate value corresponding to height in the three-dimensional coordinates is discarded to convert the three-dimensional coordinates into a first coordinate of a vehicle body coordinate system, the first coordinate is converted into the first spherical coordinate, the two-dimensional coordinates are converted into a second coordinate of the vehicle body coordinate system, and the second coordinate is converted into the second spherical coordinate; generating a third spherical coordinate of the target according to the first spherical coordinate obtained from the color image depth camera and the second spherical coordinate obtained from the lidar; and controlling, by a vehicle body controller, the vehicle body to follow the target according to the third spherical coordinate.

20. A self-propelled vehicle following method, comprising: obtaining a color image by a color image depth camera and obtaining two-dimensional information by a lidar; obtaining three-dimensional coordinates of a target in the color image and obtaining two-dimensional coordinates of the target in the two-dimensional information; converting the three-dimensional coordinates into a first spherical coordinate of a spherical coordinate system and converting the two-dimensional coordinates into a second spherical coordinate of the spherical coordinate system; generating a third spherical coordinate of the target according to the first spherical coordinate obtained from the color image depth camera and the second spherical coordinate obtained from the lidar, wherein when the first spherical coordinate and the second spherical coordinate both have values and the difference between their angles is greater than a threshold, the first spherical coordinate is taken as the third spherical coordinate, and when the first spherical coordinate and the second spherical coordinate both have values and the difference between their angles is not greater than the threshold, the third spherical coordinate is obtained from the arithmetic mean of the angles of the first and second spherical coordinates and the smaller radius value of the first and second spherical coordinates; and controlling, by a vehicle body controller, the vehicle body to follow the target according to the third spherical coordinate.
TW109135087A 2020-10-12 2020-10-12 Automatic guided vehicle tracking system and automatic guided vehicle tracking method TWI751735B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
TW109135087A TWI751735B (en) 2020-10-12 2020-10-12 Automatic guided vehicle tracking system and automatic guided vehicle tracking method
CN202011258643.8A CN114326695B (en) 2020-10-12 2020-11-12 Self-propelled vehicle following system and self-propelled vehicle following method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW109135087A TWI751735B (en) 2020-10-12 2020-10-12 Automatic guided vehicle tracking system and automatic guided vehicle tracking method

Publications (2)

Publication Number Publication Date
TWI751735B 2022-01-01
TW202215184A TW202215184A (en) 2022-04-16

Family

ID=80809114

Family Applications (1)

Application Number Title Priority Date Filing Date
TW109135087A TWI751735B (en) 2020-10-12 2020-10-12 Automatic guided vehicle tracking system and automatic guided vehicle tracking method

Country Status (2)

Country Link
CN (1) CN114326695B (en)
TW (1) TWI751735B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116661505B (en) * 2023-05-31 2024-09-24 深圳市普渡科技有限公司 Robot, robot following method, device and storage medium
TWI873060B (en) * 2024-07-12 2025-02-11 中華學校財團法人中華科技大學 Smart health delivery system with deep learning algorithm

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWM361902U (en) * 2009-01-22 2009-08-01 Univ Lunghwa Sci & Technology Framework with human body posture identification function
CN104898652A (en) * 2011-01-28 2015-09-09 英塔茨科技公司 Interfacing with a mobile telepresence robot
WO2016126297A2 (en) * 2014-12-24 2016-08-11 Irobot Corporation Mobile security robot

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150294496A1 (en) * 2014-04-14 2015-10-15 GM Global Technology Operations LLC Probabilistic person-tracking using multi-view fusion
CN104933392A (en) * 2014-03-19 2015-09-23 通用汽车环球科技运作有限责任公司 Probabilistic people tracking using multi-view integration
CN104751119A (en) * 2015-02-11 2015-07-01 中国科学院大学 Rapid detecting and tracking method for pedestrians based on information fusion
CN105975923B (en) * 2016-05-03 2020-02-21 湖南拓视觉信息技术有限公司 Method and system for tracking human objects
CN107194962B (en) * 2017-04-01 2020-06-05 深圳市速腾聚创科技有限公司 Point cloud and plane image fusion method and device
CN108932736B (en) * 2018-05-30 2022-10-11 南昌大学 Two-dimensional laser radar point cloud data processing method and dynamic robot pose calibration method
US11340610B2 (en) * 2018-07-24 2022-05-24 Huili Yu Autonomous target following method and device


Also Published As

Publication number Publication date
CN114326695B (en) 2024-02-06
CN114326695A (en) 2022-04-12
TW202215184A (en) 2022-04-16
