TWI751735B - Automatic guided vehicle tracking system and automatic guided vehicle tracking method
- Publication number: TWI751735B
- Application number: TW109135087A
- Authority: TW (Taiwan)
Landscapes
- Image Analysis (AREA)
- Radar Systems Or Details Thereof (AREA)
- Traffic Control Systems (AREA)
- Optical Radar Systems And Details Thereof (AREA)
Description
The present invention relates to a self-propelled vehicle following system and a self-propelled vehicle following method, and in particular to a self-propelled vehicle following system and method that allow a self-propelled vehicle to follow a target in an environment with heavy interference or complicated routes.
In recent years, smart retail and logistics e-commerce have flourished, but the labor force is aging and the cost of manual operations such as handling and picking has risen significantly. To address the shortage of human resources in the logistics industry, human-machine collaborative following self-propelled vehicles can improve operator efficiency and broaden the employable age range.
Service-type self-propelled vehicles for warehousing can be roughly divided into navigation-type vehicles and following-type vehicles. Existing following-type self-propelled vehicles are mainly used in relatively simple environments, such as logistics warehouses with fixed shelf positions. In environments with heavy interference or complicated routes, such as busy logistics warehouses or retail stores, when many people work at the same time it is inevitable that people cross paths; once the target is occluded, the vehicle can no longer follow it. Therefore, how to make a self-propelled vehicle follow a person correctly is a goal that those skilled in the art should strive for.
In view of this, the present invention provides a self-propelled vehicle following system and a self-propelled vehicle following method, so that the self-propelled vehicle can follow a target in an environment with heavy interference or complicated routes.
The present invention provides a self-propelled vehicle following system, including: a data collection unit, including a color image depth camera that obtains a color image and an optical radar that obtains two-dimensional information; a data computing unit, including an image target tracking module that obtains three-dimensional coordinates of a target in the color image, an optical radar target tracking module that obtains two-dimensional coordinates of the target in the two-dimensional information, a coordinate fusion module that converts the three-dimensional coordinates into a first spherical coordinate of a spherical coordinate system and converts the two-dimensional coordinates into a second spherical coordinate of the spherical coordinate system, and a control command output module that generates a third spherical coordinate of the target according to the first spherical coordinate obtained based on the color image depth camera and the second spherical coordinate obtained based on the optical radar; and a vehicle body control unit, including a vehicle body controller that controls the vehicle body to follow the target according to the third spherical coordinate.
The present invention also provides a self-propelled vehicle following method, including: obtaining a color image by a color image depth camera and obtaining two-dimensional information by an optical radar; obtaining three-dimensional coordinates of a target in the color image and two-dimensional coordinates of the target in the two-dimensional information; converting the three-dimensional coordinates into a first spherical coordinate of a spherical coordinate system and converting the two-dimensional coordinates into a second spherical coordinate of the spherical coordinate system; generating a third spherical coordinate of the target according to the first spherical coordinate obtained based on the color image depth camera and the second spherical coordinate obtained based on the optical radar; and controlling, by a vehicle body controller, the vehicle body to follow the target according to the third spherical coordinate.
Based on the above, the self-propelled vehicle following system and method of the present invention use a color image depth camera mounted on the vehicle body to obtain a color image and an optical radar mounted on the vehicle body to obtain two-dimensional information, obtain the three-dimensional coordinates of the target in the color image and the two-dimensional coordinates of the target in the two-dimensional information, convert the three-dimensional and two-dimensional coordinates into a first and a second spherical coordinate respectively, and generate a third spherical coordinate of the target from the first and second spherical coordinates. Finally, the vehicle body controller controls the vehicle body to follow the target according to the third spherical coordinate. The color image depth camera adds identifiable features of the target to compensate for the limited features of the optical radar point cloud, so the target can be detected more accurately in complex environments.
The present invention uses lidar-based and visual-image-based following methods simultaneously so that they compensate for each other's weaknesses. Specifically, when the target is occluded, the lidar cannot distinguish the target from an obstacle and may follow the wrong object. The visual image stores the color, shape, texture, and other features of the target for the system to match, so the following vehicle does not misjudge the target and can resume following the correct target once the obstacle is removed. The point cloud generated by the lidar gives the distance between the target and the vehicle body and the approximate shape and size of the target, which is easily confused with obstacles of similar shape during tracking. The visual image stores the color distribution, shape, texture, and other features of the target, providing more information for interpretation during tracking. However, the field of view of the visual image is narrow; when the target leaves the line of sight of the color image depth (RGB-D) camera, the target is easily lost. The lidar has a wide field of view and can continue to track a target that is outside the camera's field of view, so the following vehicle can keep tracking.
FIG. 1 is a block diagram of a self-propelled vehicle following system according to an embodiment of the present invention.
Referring to FIG. 1, a self-propelled vehicle following system according to an embodiment of the present invention includes a data collection unit 110, a data computing unit 120, and a vehicle body control unit 130. The data collection unit 110 includes a color image depth camera 111 and an optical radar 112 mounted on the vehicle body. The color image depth camera 111 obtains color images. The optical radar 112 obtains two-dimensional information such as a point cloud. The data computing unit 120 processes the color images and the two-dimensional information to output target coordinates, so that the vehicle body control Robot Operating System (ROS) 131 of the vehicle body control unit 130 controls the vehicle body to follow the target according to the target coordinates. The data computing unit 120 includes a person detection module 121, an image target locking module 122, an image target tracking module 123, an optical radar target locking module 124, an optical radar target tracking module 125, a coordinate fusion module 126, and a control command output module 127. In one embodiment, the data computing unit 120 may include a processor that executes software/firmware code corresponding to each module (i.e., the person detection module 121, the image target locking module 122, the image target tracking module 123, the optical radar target locking module 124, the optical radar target tracking module 125, the coordinate fusion module 126, and the control command output module 127). In another embodiment, each module of the data computing unit 120 may be implemented by hardware circuits. In yet another embodiment, each module may be implemented by a combination of hardware circuits and/or software/firmware code. The present invention does not limit how the modules of the data computing unit 120 are implemented. Each module of the data computing unit 120 is described in detail below.
In one embodiment, the data computing unit 120 may execute a target person locking procedure or a person identification and tracking procedure to lock onto and track the target in the color-depth image. Both procedures may use a deep learning method to obtain the image of the target person or the images of all persons in the frame, and then use a machine learning method (for example, the MobileNet-SSD v2 Lite object detection model) to segment the image and extract features, for example by dividing the obtained person image into several equal parts and taking the first principal component of each part as the feature of the image. When executing the target person locking procedure, the data computing unit 120 stores these features. When executing the person identification and tracking procedure, the data computing unit 120 takes the features obtained when locking the target person as the target feature input, performs feature matching to obtain the target position, and locks onto and follows the target according to the identification result. The feature extraction method is, for example, an RGB histogram.
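A minimal sketch of this kind of segmented feature extraction, assuming OpenCV and NumPy; the segment count, histogram bin count, and the way the first principal component is taken are illustrative assumptions, not values specified by the patent:

```python
import cv2
import numpy as np

def person_features(person_crop, n_segments=4, bins=16):
    """Split a person crop into horizontal segments and describe each one.

    For every segment we compute an RGB histogram and keep the first
    principal component over the segment histograms as a compact feature,
    mirroring the segmented feature extraction described above.
    """
    h = person_crop.shape[0]
    features = []
    for i in range(n_segments):
        seg = person_crop[i * h // n_segments:(i + 1) * h // n_segments]
        # Per-channel histogram, concatenated into one normalized vector.
        hist = np.concatenate([
            cv2.calcHist([seg], [c], None, [bins], [0, 256]).ravel()
            for c in range(3)
        ])
        hist /= hist.sum() + 1e-9
        features.append(hist)
    features = np.stack(features)                 # (n_segments, 3 * bins)
    centered = features - features.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    first_pc = features @ vt[0]                   # one value per segment
    return features, first_pc
```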
On the other hand, the data computing unit 120 may also execute an optical radar algorithm to lock onto and track the target in the two-dimensional optical radar information. The optical radar algorithm obtains environment information (for example, a point cloud) through the optical sensor of a two-dimensional lidar. This information is a set of distances and angles, centered on the lidar, to objects in the surrounding environment. To avoid losing the target when the size of an object in the frame changes, the present invention may use the CSRT tracking algorithm for object tracking. The CSRT tracking algorithm uses the Discriminative Correlation Filter with Channel and Spatial Reliability (DCF-CSR) to adjust the filter, ensuring that an object can still be tracked when it is scaled. The CSRT tracking algorithm computes Histogram of Oriented Gradients (HOG) features and Colornames features of the selected region and compares them with the previous frame to determine the current position of the object.
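A short usage sketch of the CSRT tracker as exposed by OpenCV's tracking module; depending on the OpenCV build the constructor may live under cv2.legacy.TrackerCSRT_create, and the video source and initial bounding box here are placeholders rather than the patent's actual pipeline:

```python
import cv2

# Open a video source; index 0 stands in for the RGB-D camera's color stream.
cap = cv2.VideoCapture(0)
ok, frame = cap.read()

# Initial (x, y, w, h) box of the locked target; selected by hand in this sketch.
init_box = cv2.selectROI("select target", frame, showCrosshair=True)

tracker = cv2.TrackerCSRT_create()      # DCF-CSR tracker (HOG + Colornames features)
tracker.init(frame, init_box)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    found, box = tracker.update(frame)  # compare against the previous frame
    if found:
        x, y, w, h = map(int, box)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(1) == 27:            # Esc to quit
        break
```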
FIG. 2 is a block diagram of a person detection module according to an embodiment of the present invention.
Referring to FIG. 2, the person detection module 121 receives a color image 210 as input and analyzes it with a deep learning object detection model 230 to output the full-body bounding box coordinates of the persons and the framed images 220. The deep learning object detection model 230 is, for example, a Single Shot Multibox Detector (SSD). The deep learning object detection model 230 receives the color image 210 through a classifier 231, performs feature extraction through a plurality of convolutional layers 232, executes the person detection function 233, and finally outputs the full-body bounding box coordinates of the persons and the framed images 220.
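A hedged sketch of SSD-style person detection with OpenCV's DNN module; the model file names, input size, scaling constants, and the "person" class index are assumptions tied to the publicly released MobileNet-SSD Caffe model, not to the exact model used in the patent:

```python
import cv2

# Hypothetical paths to a MobileNet-SSD Caffe model.
net = cv2.dnn.readNetFromCaffe("MobileNetSSD_deploy.prototxt",
                               "MobileNetSSD_deploy.caffemodel")
PERSON_CLASS_ID = 15  # "person" in the public MobileNet-SSD label set

def detect_persons(frame, conf_threshold=0.5):
    """Return (x, y, w, h) full-body boxes and cropped images for detected persons."""
    h, w = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(frame, 0.007843, (300, 300), 127.5)
    net.setInput(blob)
    detections = net.forward()              # shape (1, 1, N, 7)
    boxes, crops = [], []
    for i in range(detections.shape[2]):
        conf = detections[0, 0, i, 2]
        cls = int(detections[0, 0, i, 1])
        if cls == PERSON_CLASS_ID and conf > conf_threshold:
            x1, y1, x2, y2 = (detections[0, 0, i, 3:7] * [w, h, w, h]).astype(int)
            boxes.append((x1, y1, x2 - x1, y2 - y1))
            crops.append(frame[y1:y2, x1:x2])
    return boxes, crops
```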
FIG. 3 is a block diagram of an image target locking module according to an embodiment of the present invention.
Referring to FIG. 3, the image target locking module 122 receives the person bounding box coordinates and framed images 220 from the person detection module 121, locks the bounding box with the largest area within the field of view of the color image depth camera 111 as the target (S301), and ignores the other non-target persons detected by the person detection module 121. Next, the image target locking module 122 extracts image features from the largest bounding box and stores them as the target image features (S302) to output target image features 310, and stores the largest bounding box as the target image bounding box (S303) to output a target bounding box 320 (also called the first target bounding box).
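A minimal sketch of this locking step: take the detection with the largest box area and store its features, reusing the hypothetical person_features helper from the earlier sketch (all names are illustrative):

```python
def lock_target(boxes, crops):
    """Lock the largest-area person box as the target and store its features."""
    if not boxes:
        return None
    areas = [w * h for (_, _, w, h) in boxes]
    idx = max(range(len(boxes)), key=areas.__getitem__)
    target_box = boxes[idx]                        # first target bounding box (320)
    target_feat, _ = person_features(crops[idx])   # target image features (310)
    return target_box, target_feat
```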
FIG. 4 is a block diagram of an image target tracking module according to an embodiment of the present invention.
Referring to FIG. 4, the image target tracking module 123 executes an image target tracking procedure: it receives the target image features 310, the target bounding box 320, and the person bounding box coordinates and framed images 220, and computes the coverage rate between each person bounding box and the target bounding box (S401). The image target tracking module 123 also matches the full-body image features of the framed images against the target image features 310 (S402). The image target tracking module 123 then receives the pixel depth information 410 of the color image 210 to locate the three-dimensional center position of the target (S403) and output three-dimensional coordinates 420. For example, when the target bounding box is denoted (x1, y1, w1, h1) and a person bounding box is denoted (x2, y2, w2, h2), the coverage rate is computed from these two boxes, where x and y are the reference coordinates of a bounding box (for example, its center or corner coordinates) and w and h are the width and height of the bounding box.
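The exact coverage-rate formula is not preserved in this text, so the sketch below uses one common choice, the overlap area divided by the person-box area, purely as an assumption to make the step concrete:

```python
def coverage_rate(target_box, person_box):
    """Overlap area of the two boxes divided by the person-box area.

    Boxes are (x, y, w, h) with (x, y) taken here as the top-left corner.
    This ratio is only one plausible definition; the patent's exact formula
    is not recoverable from this text.
    """
    x1, y1, w1, h1 = target_box
    x2, y2, w2, h2 = person_box
    ix = max(0, min(x1 + w1, x2 + w2) - max(x1, x2))
    iy = max(0, min(y1 + h1, y2 + h2) - max(y1, y2))
    return (ix * iy) / (w2 * h2) if w2 * h2 > 0 else 0.0
```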
FIG. 5 is a block diagram of an optical radar target locking module according to an embodiment of the present invention.
Referring to FIG. 5, the optical radar target locking module 124 receives the two-dimensional information 510 of the optical radar 112 (for example, a two-dimensional point cloud), locks the bounding box at a predetermined coordinate as the target bounding box (also called the second target bounding box) (S501), initializes a feature tracker (S502), and outputs the feature tracker 520. The feature tracker 520 is, for example, a leg feature tracker.
FIG. 6 is a block diagram of an optical radar target tracking module according to an embodiment of the present invention.
Referring to FIG. 6, the optical radar target tracking module 125 executes a point cloud target tracking procedure: it receives the two-dimensional information 510 and the feature tracker 520, tracks the target according to the two-dimensional information 510 and the feature tracker 520 (S601), and locates the two-dimensional center position of the target (S602) to output the two-dimensional coordinates 610 of the target.
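A minimal sketch of the centering step, assuming the leg feature tracker has already associated a subset of lidar scan points (range, bearing pairs) with the followed target; converting them to Cartesian and averaging gives the two-dimensional center:

```python
import numpy as np

def target_center_2d(ranges, bearings):
    """2D center of the lidar points attributed to the tracked target.

    ranges are in meters and bearings in radians, both restricted to the
    points the tracker associates with the target's legs.
    """
    r = np.asarray(ranges)
    b = np.asarray(bearings)
    x = r * np.cos(b)
    y = r * np.sin(b)
    return float(x.mean()), float(y.mean())
```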
Referring again to FIG. 1, the coordinate fusion module 126 discards the coordinate value corresponding to height in the three-dimensional coordinates 420 of the camera coordinate system to convert the three-dimensional coordinates 420 into a first coordinate in the vehicle body coordinate system, and then converts the first coordinate into the first spherical coordinate; it also converts the two-dimensional coordinates 610 of the optical radar coordinate system into a second coordinate in the vehicle body coordinate system and then converts the second coordinate into the second spherical coordinate. The conversion from the camera coordinate system and the optical radar coordinate system to the vehicle body coordinate system is described below with reference to FIG. 7.
FIG. 7 is a schematic diagram of the coordinate relationship among the camera coordinate system, the optical radar coordinate system, and the vehicle body coordinate system according to an embodiment of the present invention.
Referring to FIG. 7, the vehicle body judges the position of the person from the viewpoint of the vehicle body coordinate system 710, so the coordinate fusion module 126 converts the data of the camera coordinate system 720 and of the optical radar coordinate system 730 into the vehicle body coordinate system 710.
For example, suppose a data point in the camera coordinate system 720 has coordinates p_c. If there is a rotation relationship R_c and a translation relationship T_c between the vehicle body coordinate system 710 and the camera coordinate system 720, then the first coordinate of that data point in the vehicle body coordinate system 710 is p_1 = R_c · p_c + T_c, where R_c is the rotation matrix and T_c is the translation vector obtained from the extrinsic calibration between the two coordinate systems.
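A minimal sketch of this extrinsic transform into the vehicle body frame, applied in the plane since the height value has already been discarded; the rotation angle and translation offsets are placeholder calibration values, not values from the patent:

```python
import numpy as np

# Placeholder extrinsics: camera mounted with a yaw offset and a small forward offset.
yaw = np.deg2rad(0.0)
R_c = np.array([[np.cos(yaw), -np.sin(yaw)],
                [np.sin(yaw),  np.cos(yaw)]])
T_c = np.array([0.20, 0.00])   # camera position in the body frame (meters)

def camera_to_body(p_cam_xy):
    """Map a planar camera-frame point into the vehicle body frame."""
    return R_c @ np.asarray(p_cam_xy) + T_c
```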
Similarly, suppose a data point in the optical radar coordinate system 730 has coordinates p_l. If there is a rotation relationship R_l and a translation relationship T_l between the vehicle body coordinate system 710 and the optical radar coordinate system 730, then the second coordinate of that data point in the vehicle body coordinate system 710 is p_2 = R_l · p_l + T_l, where R_l and T_l are likewise obtained from the extrinsic calibration between the two coordinate systems.
Next, the coordinate fusion module 126 converts the first coordinate and the second coordinate into the first spherical coordinate and the second spherical coordinate respectively; that is, it converts the Cartesian coordinates (x, y) of the first and second coordinates into the polar coordinates (r, θ) of the first and second spherical coordinates, with r = sqrt(x² + y²) and θ = atan2(y, x).
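The same step in code, assuming NumPy; only the planar radius and bearing are kept:

```python
import numpy as np

def to_polar(p_xy):
    """Convert a planar body-frame point (x, y) into (r, theta)."""
    x, y = p_xy
    return float(np.hypot(x, y)), float(np.arctan2(y, x))
```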
FIG. 8 is a flowchart of the control command output module outputting the third spherical coordinate of the target according to an embodiment of the present invention.
Referring to FIG. 8, "A: the first spherical coordinate (r_A, θ_A) determined from the color image" 801 and "B: the second spherical coordinate (r_B, θ_B) determined from the optical radar" 802 are input to the control command output module 127 for target position determination (S810). If both A and B have values (S811) and the two differ by more than a threshold value (S812), A is taken as the third spherical coordinate (S813), because the color image depth camera 111 is more accurate than the optical radar 112 as long as it has not lost the target.
If both A and B have values and they differ by no more than the threshold value (S814), the third spherical coordinate is obtained from the arithmetic mean of the two angles and the smaller of the two radii (S815), because the output coordinate with the smaller radius value is closer to the vehicle body and is more likely to be the target to be tracked.
If one of A and B is null (S816), that is, when the target is not tracked by one of the color image depth camera 111 and the optical radar 112, the one of A and B that has a value is taken as the third spherical coordinate (S817).
If both A and B are null (S818), the third spherical coordinate is not output (S819), and the target tracking procedure is executed again (S820).
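A hedged sketch of the decision flow of FIG. 8; the threshold value, the component compared against it, and the naive angle averaging are illustrative assumptions, since those details are not fully preserved in this text:

```python
def fuse(a, b, threshold=0.5):
    """Produce the third spherical coordinate from camera (a) and lidar (b) estimates.

    a and b are (r, theta) tuples, or None when the corresponding sensor has
    lost the target. The threshold and the use of the radius for the
    comparison are assumptions.
    """
    if a is None and b is None:
        return None                      # S818/S819: no output, restart tracking
    if a is None or b is None:
        return a if b is None else b     # S816/S817: take whichever has a value
    (r_a, th_a), (r_b, th_b) = a, b
    if abs(r_a - r_b) > threshold:
        return a                         # S812/S813: trust the camera estimate
    # S814/S815: keep the smaller (closer) radius, average the angles
    # (simple mean; wrap-around handling at +/- pi is omitted here).
    return min(r_a, r_b), (th_a + th_b) / 2.0
```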
To sum up, the self-propelled vehicle following system and method of the present invention use a color image depth camera mounted on the vehicle body to obtain color images and an optical radar mounted on the vehicle body to obtain two-dimensional information, obtain the three-dimensional coordinates of the target in the color image and the two-dimensional coordinates of the target in the two-dimensional information, convert the three-dimensional and two-dimensional coordinates into a first and a second spherical coordinate respectively, and generate a third spherical coordinate of the target from the first and second spherical coordinates. Finally, the vehicle body controller controls the vehicle body to follow the target according to the third spherical coordinate. The color image depth camera adds identifiable features of the target to compensate for the limited features of the optical radar point cloud, so the target can be detected more accurately in complex environments. In addition, the system memorizes the target features, so it can continue following even if the target is temporarily occluded.
Although the present invention has been disclosed above by way of embodiments, they are not intended to limit the present invention. Anyone with ordinary knowledge in the technical field may make some changes and modifications without departing from the spirit and scope of the present invention; therefore, the protection scope of the present invention shall be determined by the appended claims.
110: data collection unit
111: color image depth camera
112: optical radar
120: data computing unit
121: person detection module
122: image target locking module
123: image target tracking module
124: optical radar target locking module
125: optical radar target tracking module
126: coordinate fusion module
127: control command output module
130: vehicle body control unit
131: vehicle body control robot operating system
210: color image
220: person full-body bounding box coordinates and framed images
230: deep learning object detection model
231: classifier
232: convolutional layers
233: person detection function
310: target image features
320: target bounding box
S301~S303: steps
410: pixel depth information
420: three-dimensional coordinates
S401~S403: steps
510: two-dimensional information
520: feature tracker
S501~S502: steps
610: two-dimensional coordinates
S601~S602: steps
710: vehicle body coordinate system
720: camera coordinate system
730: optical radar coordinate system
801: "A: first spherical coordinate (r_A, θ_A) determined from the color image"
802: "B: second spherical coordinate (r_B, θ_B) determined from the optical radar"
S810~S820: steps
FIG. 1 is a block diagram of a self-propelled vehicle following system according to an embodiment of the present invention.
FIG. 2 is a block diagram of a person detection module according to an embodiment of the present invention.
FIG. 3 is a block diagram of an image target locking module according to an embodiment of the present invention.
FIG. 4 is a block diagram of an image target tracking module according to an embodiment of the present invention.
FIG. 5 is a block diagram of an optical radar target locking module according to an embodiment of the present invention.
FIG. 6 is a block diagram of an optical radar target tracking module according to an embodiment of the present invention.
FIG. 7 is a schematic diagram of the coordinate relationship among the camera coordinate system, the optical radar coordinate system, and the vehicle body coordinate system according to an embodiment of the present invention.
FIG. 8 is a flowchart of the control command output module outputting the third spherical coordinate of the target according to an embodiment of the present invention.
Claims (20)
Priority Applications (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| TW109135087A TWI751735B (en) | 2020-10-12 | 2020-10-12 | Automatic guided vehicle tracking system and automatic guided vehicle tracking method |
| CN202011258643.8A CN114326695B (en) | 2020-10-12 | 2020-11-12 | Self-propelled vehicle following system and self-propelled vehicle following method |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| TWI751735B true TWI751735B (en) | 2022-01-01 |
| TW202215184A TW202215184A (en) | 2022-04-16 |
Family
ID=80809114
Country Status (2)
| Country | Link |
|---|---|
| CN (1) | CN114326695B (en) |
| TW (1) | TWI751735B (en) |
Also Published As
| Publication number | Publication date |
|---|---|
| CN114326695B (en) | 2024-02-06 |
| CN114326695A (en) | 2022-04-12 |
| TW202215184A (en) | 2022-04-16 |