TWI788758B - Target image tracking system and method - Google Patents
- Publication number: TWI788758B
- Application number: TW110101728A
- Authority: TW (Taiwan)
- Prior art keywords: target, image, target object, semantic segmentation, image tracking
- Prior art date: 2021-01-15
Landscapes
- Closed-Circuit Television Systems (AREA)
- Radar Systems Or Details Thereof (AREA)
- Image Analysis (AREA)
- Aiming, Guidance, Guns With A Light Source, Armor, Camouflage, And Targets (AREA)
Abstract
Description
The present invention relates to a tracking system and method, and in particular to a target object image tracking system and method.
In recent years, improvements in computer performance and the spread of parallel computing have made image processing increasingly widely used. Visual tracking of target objects is one such application, used mainly in pedestrian surveillance and tracking, vehicle detection on roads, military operations, and drone-assisted rescue.
Traditional image tracking is applied in many fields; however, it is prone to tracking failure.
The present invention proposes a target object image tracking system and method that address the problems of the prior art.
In an embodiment of the present invention, the proposed target object image tracking system includes a camera device, a storage device, and a processor electrically connected to the camera device and the storage device. The camera device captures an image of the target object, and the storage device stores at least one instruction. The processor accesses and executes the at least one instruction to: apply a multiple importance resampling particle filter to the image of the target object, splitting the particle-similarity computation into multiple stages according to the computational complexity of the various target features so that the target can be tracked in real time; and perform semantic segmentation on the image of the target object, predicting where the target will appear next based on the segmentation analysis.
In an embodiment of the present invention, the processor accesses and executes the at least one instruction to: apply sparse representation to estimate the occlusion rate of the target object, and use the occlusion rate to decide whether to update the template of the target object.
In an embodiment of the present invention, the processor accesses and executes the at least one instruction to: use the sparse optical flow method, based on the optical flow information in the image, to estimate the offset of the camera device and correct the image displacement accordingly.
In an embodiment of the present invention, the processor accesses and executes the at least one instruction to: extract the region of the target object from the image through semantic segmentation, and apply the dense optical flow method to determine the movement trend of that region, thereby predicting the motion of the target object.
In an embodiment of the present invention, the processor accesses and executes the at least one instruction to: determine the occlusion state of the region of the target object; and, when the region of the target object is occluded by other regions, suppress the template update of the target object in the multiple importance resampling particle filter.
In an embodiment of the present invention, the proposed target object image tracking method includes the following steps: (a) capturing an image of the target object with a camera device; (b) applying a multiple importance resampling particle filter to the image of the target object, splitting the particle-similarity computation into multiple stages according to the computational complexity of the various target features so that the target can be tracked in real time; and (c) performing semantic segmentation on the image of the target object and predicting where the target will appear next based on the segmentation analysis.
In an embodiment of the present invention, step (b) includes: applying sparse representation to estimate the occlusion rate of the target object, and using the occlusion rate to decide whether to update the template of the target object.
In an embodiment of the present invention, step (b) includes: using the sparse optical flow method, based on the optical flow information in the image, to estimate the offset of the camera device and correct the image displacement accordingly.
In an embodiment of the present invention, step (c) includes: extracting the region of the target object from the image through semantic segmentation, and applying the dense optical flow method to determine the movement trend of that region, thereby predicting the motion of the target object.
In an embodiment of the present invention, step (c) includes: determining the occlusion state of the region of the target object; and, when the region of the target object is occluded by other regions, suppressing the template update of the target object in the multiple importance resampling particle filter.
In summary, the technical solution of the present invention has clear advantages and beneficial effects over the prior art. The target object image tracking system and target object image tracking method of the present invention first implement a multiple importance resampling technique to address the low efficiency and lack of real-time processing that arise when multiple target features are used for image tracking. When similarity is computed, the various target features are evaluated in multiple stages ordered by computational complexity, saving computation time and achieving real-time, robust tracking. In addition, a novel and accurate semantic segmentation method is added on top of the particle filter tracking of the target, yielding better tracking results.
The above description is elaborated in the embodiments below, which provide further explanation of the technical solution of the present invention.
To make the description of the present invention more detailed and complete, reference may be made to the accompanying drawings and the embodiments described below, in which the same reference numbers denote the same or similar elements. Well-known elements and steps are not described in the embodiments, to avoid unnecessarily limiting the present invention.
Referring to FIG. 1, one technical aspect of the present invention is a target object image tracking system 100, which can be applied to surveillance and detection systems or used broadly in a variety of technical settings. It is worth noting that the target object image tracking system 100 of this aspect represents a considerable technical advance and has wide industrial applicability. A specific implementation of the target object image tracking system 100 is described below with reference to FIG. 1.
It should be understood that various implementations of the target object image tracking system 100 are described with reference to FIG. 1. In the following description, many specific details are set forth for ease of explanation, to provide a thorough account of one or more implementations. The technique can, however, be practiced without these specific details. In other examples, known structures and devices are shown in block-diagram form in order to describe these implementations effectively. As used herein, "for example" means "serving as an example, instance, or illustration". Any embodiment described herein as an example is not to be construed as preferred or advantageous over other embodiments.
FIG. 1 is a block diagram of a target object image tracking system 100 according to an embodiment of the present invention. As shown in FIG. 1, the target object image tracking system 100 may include a storage device 110, a processor 120, and a camera device 130. For example, the storage device 110 may be a hard disk, memory, register, or other storage medium; the processor 120 may be a central processing unit; and the camera device 130 may be a color camera, a thermal imager, and/or another image sensor. In an embodiment of the present invention, the camera device 130 is a single camera, which reduces the space and cost occupied by multiple cameras and eliminates the computation needed to match images between lenses.
Structurally, the processor 120 is electrically connected to the storage device 110 and the camera device 130. It should be understood that, in the embodiments and the claims, "electrically connected" may refer broadly to one element being indirectly electrically coupled to another element through other elements, or to one element being directly electrically connected to another element without intervening elements. For example, the storage device 110 may be an internal storage device directly connected to the processor 120, or an external storage device indirectly coupled to the processor 120 through external electronic circuitry.
In use, the camera device 130 captures an image of the target object (e.g., a specific person), the storage device 110 stores at least one instruction, and the processor 120 accesses and executes the at least one instruction to: apply a multiple importance resampling particle filter to the image of the target object, splitting the particle-similarity computation into multiple stages according to the computational complexity of the various target features so that the target can be tracked in real time; and perform semantic segmentation on the image of the target object, predicting where the target will appear next based on the segmentation analysis. In this way, the target object image tracking system 100 uses semantic segmentation to reinforce particle filter tracking and to predict the motion trend of the target more accurately.
Regarding the multiple importance resampling particle filter, for example, the color image captured by the camera device 130 is first sampled with Gaussian random numbers near the target object to obtain multiple sample points (treated as particles). Weight scores are first obtained from similarity analysis, sparse representation is used for occlusion judgment and template updating, and the offset of the camera device 130 obtained from sparse optical flow is used to correct the image displacement. All particles are sorted by weight, roughly the top 10% are retained, and Gaussian random samples are then drawn around these particles to replace the remaining approximately 90%. Repeating this cycle yields particle filter tracking. It should be understood that "about", "approximately", or "roughly" as used herein modifies any quantity that may vary slightly without changing its essence. Unless otherwise specified in the embodiments, the error range of a value modified by "about", "approximately", or "roughly" is generally within twenty percent, preferably within ten percent, and more preferably within five percent.
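By way of illustration only, the retain-and-resample cycle described above can be sketched as follows. This is not the claimed implementation; the (x, y) particle layout, the default 10% retention ratio, and the `compute_weight` placeholder in the usage comment are assumptions introduced for this example.

```python
import numpy as np

def resample_particles(particles, weights, keep_ratio=0.10, sigma=5.0, rng=None):
    """One retain-and-resample cycle: keep the top-weighted ~10% of particles,
    then draw Gaussian perturbations around them to replace the other ~90%."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(particles)
    order = np.argsort(weights)[::-1]             # sort particles by weight, descending
    n_keep = max(1, int(n * keep_ratio))
    kept = particles[order[:n_keep]]              # survivors near the likely target position
    # Replace the discarded particles with Gaussian samples around the survivors.
    parents = kept[rng.integers(0, n_keep, size=n - n_keep)]
    reborn = parents + rng.normal(0.0, sigma, size=parents.shape)
    return np.vstack([kept, reborn])

# Usage (illustrative): particles is an (N, 2) array of (x, y) image positions.
# weights = np.array([compute_weight(frame, p) for p in particles])  # similarity scores
# particles = resample_particles(particles, weights)
```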
In the multiple importance resampling computation, for example, the processor 120 first performs a partial similarity computation for all predicted-state particles, using very fast feature comparisons as a preliminary evaluation. Based on the prediction from the previous instant, two faster similarities, color and thermal-imager temperature, are evaluated first, and the particles scoring higher on these two similarities are passed to a second stage for the more computationally demanding edge-contour similarity verification. A small number of the most similar particles from the second-stage comparison are then passed to the next stage, and so on for subsequent stages. In this process, the faster methods are placed in the first stage in order to quickly screen out the regions where the target is more likely to be and eliminate unnecessary computation in the next stage. More complex, slower verification methods can be used in later stages, and the most complex, most time-consuming similarity verification is placed in the final stage. Because the earlier stages have already screened the candidates, the amount of computation in regions with low target probability is reduced, while the later-stage comparisons locate the target accurately, maintaining the real-time performance of the target object image tracking system 100.
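A minimal sketch of such a staged (cascaded) similarity computation is given below, assuming two explicit stages as described above; the three scoring functions, the number of stage-1 survivors, and the residual weight given to pruned particles are illustrative assumptions rather than values taken from this disclosure.

```python
import numpy as np

def cascaded_similarity(particles, frame, thermal, template,
                        color_sim, thermal_sim, contour_sim, keep_stage1=100):
    """Two-stage cascade: cheap color + thermal scores on every particle,
    then the expensive edge-contour check only on the best stage-1 candidates.
    Later, even costlier stages would extend the same pattern.
    The three *_sim arguments are caller-supplied scoring functions."""
    # Stage 1: fast features, evaluated for all particles.
    s1 = np.array([color_sim(frame, p, template) + thermal_sim(thermal, p, template)
                   for p in particles])
    top = np.argsort(s1)[::-1][:keep_stage1]

    # Stage 2: edge-contour verification, only for stage-1 survivors.
    s2 = np.full(len(particles), -np.inf)
    for i in top:
        s2[i] = s1[i] + contour_sim(frame, particles[i], template)

    # Particles pruned in stage 1 keep only a negligible residual weight.
    return np.where(np.isfinite(s2), s2, s1 * 1e-3)
```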
In practice, when the target object is occluded by other objects, the particle filter may update incorrectly, misjudge the target's position at the next instant, and fail to track, after which it is even harder to recover the original target. Therefore, in an embodiment of the present invention, the processor 120 accesses and executes the at least one instruction to: apply sparse representation to estimate the occlusion rate of the target object, and use the occlusion rate to decide whether to update the template of the target object.
The storage device 110 may store a sparse appearance model that includes a template set of the target object and trivial (occlusion) templates. In sparse representation, similarity is computed from a linear combination of multiple templates (the target's template set) and the trivial templates, which makes it possible to track the target stably and to locate it correctly even when it is occluded, while also estimating the degree of occlusion. To increase tracking robustness under occlusion, sparse representation reconstructs the occluded portion of the image with the trivial templates. The coefficients of the trivial templates serve as the basis for occlusion judgment, from which the degree of occlusion of the target is estimated and the decision of whether to update the target's template is made.
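The occlusion estimate from sparse representation can be sketched as an L1-regularized reconstruction over the target templates plus trivial templates, for example as below; the use of scikit-learn's Lasso solver, the regularization strength, and the coefficient threshold are assumptions of this sketch, not part of the disclosed system.

```python
import numpy as np
from sklearn.linear_model import Lasso

def occlusion_rate(patch, template_set, alpha=0.01, coef_thresh=1e-3):
    """Reconstruct the candidate patch as a sparse combination of target templates
    and trivial (single-pixel) templates; the fraction of active trivial templates
    approximates how much of the target is occluded."""
    y = patch.ravel().astype(float)                        # candidate patch, flattened
    T = template_set.reshape(template_set.shape[0], -1).T  # columns = target templates
    I = np.eye(y.size)                                     # trivial templates, one per pixel
    D = np.hstack([T, I])                                  # full dictionary

    model = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
    model.fit(D, y)
    trivial_coef = model.coef_[T.shape[1]:]                # coefficients of trivial templates
    occluded = np.abs(trivial_coef) > coef_thresh          # pixels explained by occlusion
    return occluded.mean()                                 # occlusion rate in [0, 1]
```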
As the target gradually changes, it slowly begins to differ from its original features; to keep tracking effectively, the target's template can be updated. For example, a moving target will present different color and texture characteristics as the viewing angle changes, and changes in sunlight or illumination will also alter the apparent color of the target. Timely and effective template updating builds a complete template set, strengthens the entire target object image tracking system 100, and reduces the chance of tracking failure. Updating the template at the wrong moment (e.g., while the target is occluded), however, has the opposite effect: it easily causes tracking to fail, corrupts the record in the template set, and leaves the tracker stuck on the background or on other objects in the scene.
Besides the previously computed occlusion rate, the color-image similarity between the target and the background can also be used to evaluate the target's distinctiveness in the surrounding environment. If the target is too similar to the background, updating the template is inappropriate: it would reduce the target's distinctiveness and easily lead to errors in the tracking process. In an embodiment of the present invention, the processor 120 updates the template and stores it in the storage device 110 when the similarity between the target and the background is below a preset ratio and the occlusion rate is within a preset range.
Regarding the preset ratio and preset range, for example, the target-background similarity is below 40% and the occlusion rate is between 10% and 30%. When the occlusion rate exceeds 30%, the target is considered occluded, and the template should not be updated. When the occlusion rate is below 10%, the template is likewise not updated, because the target may be stationary; updating the template in that situation tends to discard earlier appearance changes and fills the template set with highly correlated features, which is detrimental to subsequent tracking.
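Under the example thresholds above (background similarity below 40%, occlusion rate between 10% and 30%), the update decision can be written as a small predicate; the function and parameter names below are placeholders for illustration.

```python
def should_update_template(background_similarity, occlusion_rate,
                           max_bg_similarity=0.40,
                           occlusion_range=(0.10, 0.30)):
    """Update the target template only when the target is still distinctive
    against the background and only partially (10%-30%) occluded."""
    distinctive = background_similarity < max_bg_similarity
    lo, hi = occlusion_range
    partially_occluded = lo <= occlusion_rate <= hi
    return distinctive and partially_occluded

# Examples of the rule above:
# should_update_template(0.25, 0.20) -> True   (distinct target, mild occlusion)
# should_update_template(0.25, 0.05) -> False  (target nearly static; skip the update)
# should_update_template(0.55, 0.20) -> False  (too similar to the background)
```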
In computer vision, optical flow represents the motion velocity of objects in an image: every image point in each frame carries an instantaneous motion direction toward its position in the next frame, so optical flow is expressed in vector form. Applying optical flow analysis to the entire image captured by the camera device 130 yields the motion field of every pixel, which is dense optical flow. If only a small number of corner points in the image are used, typically one every few pixels, and their motion is computed, the result is sparse optical flow. In an embodiment of the present invention, the processor 120 accesses and executes the at least one instruction to: use the sparse optical flow method, based on the optical flow information in the image, to estimate the offset of the camera device 130 and correct the image displacement accordingly. The particle filter tracking stage thus uses sparse optical flow to compensate for motion of the camera device 130, while the subsequent semantic segmentation analysis uses dense optical flow to reinforce the judgment of the target's motion trend.
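A sketch of the camera-offset estimation with sparse (pyramidal Lucas-Kanade) optical flow is shown below, using OpenCV; treating the median corner displacement as the global camera offset, and the corner-detection parameters, are assumptions of this example.

```python
import cv2
import numpy as np

def estimate_camera_offset(prev_gray, curr_gray):
    """Track Shi-Tomasi corners with sparse (Lucas-Kanade) optical flow and take
    the median displacement as the global shift introduced by camera motion."""
    corners = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                      qualityLevel=0.01, minDistance=10)
    if corners is None:
        return np.zeros(2)
    moved, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, corners, None)
    ok = status.ravel() == 1
    if not ok.any():
        return np.zeros(2)
    flow = (moved[ok] - corners[ok]).reshape(-1, 2)
    return np.median(flow, axis=0)          # (dx, dy) camera offset

# The offset can then be subtracted from every particle position so that the
# filter tracks object motion rather than camera shake:
# particles -= estimate_camera_offset(prev_gray, curr_gray)
```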
For example, the source video may be shot hand-held, mounted on a moving object (e.g., a vehicle in motion), or captured by a camera fixed at a street corner that shakes in strong wind. In such situations, to track the target more accurately and exclude external disturbances, the present invention uses sparse optical flow to feed the motion of the camera device 130 back into the target object image tracking system 100 for correction, which helps stabilize target tracking.
Semantic segmentation of images is an important part of the development of artificial intelligence; it is a visual recognition technique under deep learning that classifies every pixel of an entire visual image and converts the individual classifications into a segmented image. In an embodiment of the present invention, the processor 120 accesses and executes the at least one instruction to: extract the region of the target object from the image through semantic segmentation, and apply the dense optical flow method to determine the movement trend of that region, thereby predicting the motion of the target object. To strengthen tracking, in addition to relying on the target separated out by semantic segmentation, the present invention can also use dense optical flow to reinforce the prediction, using the target's motion trend to assist the tracking performance of the particle filter.
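One possible way to combine the segmentation result with dense optical flow is sketched below: Farneback dense flow is computed over the whole frame and averaged inside the segmented target region to estimate the region's displacement. The segmentation network itself is assumed to exist elsewhere and is represented only by the boolean mask it produces; the Farneback parameters are illustrative defaults.

```python
import cv2
import numpy as np

def predict_region_motion(prev_gray, curr_gray, target_mask):
    """Average dense (Farneback) optical flow over the semantically segmented
    target region to estimate where the region moves next.
    `target_mask` is a boolean (H, W) array produced by the segmentation network."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        pyr_scale=0.5, levels=3, winsize=15,
                                        iterations=3, poly_n=5, poly_sigma=1.2,
                                        flags=0)
    region_flow = flow[target_mask]                 # (K, 2) flow vectors inside the mask
    if region_flow.size == 0:
        return np.zeros(2)
    return region_flow.mean(axis=0)                 # mean (dx, dy) of the target region

# Example use: shift the region centroid by the predicted motion.
# ys, xs = np.nonzero(target_mask)
# predicted_centroid = np.array([xs.mean(), ys.mean()]) + predict_region_motion(
#     prev_gray, curr_gray, target_mask)
```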
In an embodiment of the present invention, the processor 120 accesses and executes the at least one instruction to: determine the occlusion state of the region of the target object; and, when that region is occluded by other regions, suppress the template update of the target object in the multiple importance resampling particle filter. Specifically, in the occluded state the particle filter's template update is stopped, preventing the multiple importance resampling particle filter from updating to a wrong template, which would cause tracking to fail or drift to the wrong target. In the unoccluded state, the multiple importance resampling particle filter is notified to update the template normally, so that tracking continues after the target changes appearance. Semantic segmentation thus reinforces the multiple importance resampling particle filter and assists its tracking performance.
Taken together, the multiple importance resampling particle filter and semantic segmentation can process the images captured by the camera device 130 in parallel. After semantic segmentation, more accurate features of the target and its neighboring objects are obtained. The dense optical flow method is then used to predict the dynamics of the target and its neighbors, estimating their positions at the next instant, and the result is fed back to the particle filter to filter out particles in wrong positions. At the same time, this dynamic information is combined for occlusion judgment, and the determined occlusion state is also fed back to the multiple importance resampling particle filter as the basis for deciding whether to update the template.
To further explain the method performed by the target object image tracking system 100, refer to FIGS. 1 and 2. FIG. 2 is a flowchart of a target object image tracking method 200 according to an embodiment of the present invention. As shown in FIG. 2, the target object image tracking method 200 includes steps S201 to S209. (It should be understood that, unless their order is specifically stated, the steps mentioned in this embodiment may be reordered as needed, and may even be executed simultaneously or partially simultaneously.)
The target object image tracking method 200 may take the form of a computer program product on a non-transitory computer-readable recording medium containing a plurality of computer-readable instructions. Suitable recording media include any of the following: non-volatile memory, such as read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), and electrically erasable programmable read-only memory (EEPROM); volatile memory, such as static random-access memory (SRAM), dynamic random-access memory (DRAM), and double data rate random-access memory (DDR-RAM); optical storage devices, such as compact disc read-only memory (CD-ROM) and digital versatile disc read-only memory (DVD-ROM); and magnetic storage devices, such as hard disk drives and floppy disk drives.
In an embodiment of the present invention, in the target object image tracking method 200, an image of the target object is captured by the camera device 130; a multiple importance resampling particle filter is applied to the image of the target object, with the particle-similarity computation split into multiple stages according to the computational complexity of the various target features so that the target can be tracked in real time; and semantic segmentation is performed on the image of the target object, with the target's next position predicted from the segmentation analysis.
In the target object image tracking method 200, steps S201 to S205 relate mainly to the multiple importance resampling particle filter, and steps S206 to S209 relate mainly to semantic segmentation.
Specifically, in step S201, an image of the target object is captured by the camera device 130. In step S202, similarity analysis is performed: the multiple importance resampling particle filter is applied to the image of the target object, and the particle-similarity computation is split into multiple stages according to the computational complexity of the various target features so that the target can be tracked in real time. In step S203, sparse-representation occlusion processing is performed: sparse representation is applied to estimate the occlusion rate of the target, and the occlusion rate is used to decide whether to update the target's template. In step S204, optical flow correction and motion compensation are performed: based on the optical flow information in the image, the sparse optical flow method is used to estimate the offset of the camera device and correct the image displacement accordingly.
Meanwhile, in step S206, the image is segmented by semantic segmentation. In step S207, the classified region of the target is extracted: the region of the target object is extracted from the image through semantic segmentation. In step S208, a dense optical flow dynamic comparison is performed: the dense optical flow method is applied to determine the movement trend of the target's region; based on this, the target is estimated in step S209, so that in step S205 the target is predicted and its motion forecast.
Furthermore, in step S209, occlusion judgment is performed to determine the occlusion state of the target's region. Particle updating is then performed: after the semantic segmentation processing, the position at the next instant is predicted and fed back to the multiple importance resampling particle filter to filter out particles in wrong positions. In addition, when step S209 determines that the target's region is occluded by other regions, the template update of the target in the multiple importance resampling particle filter is suppressed in step S205.
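Purely as an illustration, the per-frame flow of steps S201 to S209 can be strung together from the earlier sketches as shown below; the caller-supplied scoring, cropping, and segmentation functions, and the way results are fed back into the filter, are assumptions of this example rather than the claimed implementation.

```python
import cv2
import numpy as np

def track_frame(particles, template, templates, bg_similarity,
                prev_gray, frame, thermal, segment_fn, score_fns, crop_fn):
    """One iteration of the loop: particle-filter branch (S201-S205) alongside
    the semantic-segmentation branch (S206-S209). `segment_fn`, `score_fns`,
    and `crop_fn` are caller-supplied; `bg_similarity` is computed elsewhere."""
    curr_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # S204: correct particle positions for camera motion (sparse optical flow).
    particles = particles - estimate_camera_offset(prev_gray, curr_gray)

    # S202: staged similarity, then retain-and-resample (multiple importance resampling).
    color_sim, thermal_sim, contour_sim = score_fns
    weights = cascaded_similarity(particles, frame, thermal, template,
                                  color_sim, thermal_sim, contour_sim)
    best = particles[np.argmax(weights)]
    particles = resample_particles(particles, weights)

    # S206-S208: segmentation mask plus dense-flow trend of the target region.
    mask = segment_fn(frame)
    predicted = best + predict_region_motion(prev_gray, curr_gray, mask)

    # S203 and S209 -> S205: occlusion gates the template update.
    occ = occlusion_rate(crop_fn(frame, predicted), templates)
    if should_update_template(bg_similarity, occ):
        template = crop_fn(frame, predicted)
    return particles, template, predicted
```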
In summary, the technical solution of the present invention has clear advantages and beneficial effects over the prior art. The target object image tracking system 100 and target object image tracking method 200 of the present invention first implement a multiple importance resampling technique to address the low efficiency and lack of real-time processing that arise when multiple target features are used for image tracking. When similarity is computed, the various target features are evaluated in multiple stages ordered by computational complexity, saving computation time and achieving real-time, robust tracking. In addition, a novel and accurate semantic segmentation method is added on top of the particle filter tracking of the target, yielding better tracking results.
Although the present invention has been disclosed above by way of embodiments, they are not intended to limit the present invention. Anyone skilled in the art may make various changes and modifications without departing from the spirit and scope of the present invention; the scope of protection of the present invention is therefore defined by the appended claims.
To make the above and other objects, features, advantages, and embodiments of the present invention more comprehensible, the reference numerals are described as follows:
100: target object image tracking system
110: storage device
120: processor
130: camera device
200: target object image tracking method
S201~S209: steps
To make the above and other objects, features, advantages, and embodiments of the present invention more comprehensible, the accompanying drawings are described as follows:
FIG. 1 is a block diagram of a target object image tracking system according to an embodiment of the present invention; and
FIG. 2 is a flowchart of a target object image tracking method according to an embodiment of the present invention.
Claims (10)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| TW110101728A TWI788758B (en) | 2021-01-15 | 2021-01-15 | Target image tracking system and method |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| TW202230201A TW202230201A (en) | 2022-08-01 |
| TWI788758B true TWI788758B (en) | 2023-01-01 |
Family
ID=83782376
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| TW110101728A TWI788758B (en) | 2021-01-15 | 2021-01-15 | Target image tracking system and method |
Country Status (1)
| Country | Link |
|---|---|
| TW (1) | TWI788758B (en) |
Families Citing this family (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| TWI769915B (en) | 2021-08-26 | 2022-07-01 | 財團法人工業技術研究院 | Projection system and projection calibration method using the same |
| TWI843251B (en) * | 2022-10-25 | 2024-05-21 | 財團法人工業技術研究院 | Target tracking system and target tracking method using the same |
Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN107465911A (en) * | 2016-06-01 | 2017-12-12 | 东南大学 | A kind of extraction of depth information method and device |
| TW201816723A (en) * | 2016-09-28 | 2018-05-01 | 香港商港大科橋有限公司 | Recovery of pixel resolution in scanning imaging |
| CN111325843A (en) * | 2020-03-09 | 2020-06-23 | 北京航空航天大学 | A real-time semantic map construction method based on semantic inverse depth filtering |
- 2021-01-15: Application TW110101728A filed in Taiwan; granted as TWI788758B (status: active)
Also Published As
| Publication number | Publication date |
|---|---|
| TW202230201A (en) | 2022-08-01 |