TWI778872B - Sensor fusion method for detecting a person's condition - Google Patents
- Publication number
- TWI778872B (application TW110143567A)
- Authority
- TW
- Taiwan
- Prior art keywords
- human skeleton
- sensor fusion
- dimensional human
- moving person
- time series
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S13/00—Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
- G01S13/86—Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
- G01S13/867—Combination of radar systems with cameras
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G06F17/10—Complex mathematical operations
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Data Mining & Analysis (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Artificial Intelligence (AREA)
- Mathematical Physics (AREA)
- Evolutionary Computation (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Life Sciences & Earth Sciences (AREA)
- Software Systems (AREA)
- Computational Linguistics (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Biophysics (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Evolutionary Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Biomedical Technology (AREA)
- Bioinformatics & Computational Biology (AREA)
- Mathematical Analysis (AREA)
- Mathematical Optimization (AREA)
- Pure & Applied Mathematics (AREA)
- Databases & Information Systems (AREA)
- Computational Mathematics (AREA)
- Algebra (AREA)
- Computer Networks & Wireless Communication (AREA)
- Radar Systems Or Details Thereof (AREA)
- Image Analysis (AREA)
- Lining Or Joining Of Plastics Or The Like (AREA)
- Branch Pipes, Bends, And The Like (AREA)
Abstract
Description
The present invention relates to a method for detecting a person's condition, and more particularly, to a sensor fusion method for detecting a person's condition.
In the current technology there are many non-contact fall detection products and solutions, but they commonly face the following problems. In radar-based solutions, the fall recognition rate may drop when there are too many people in the scene. In solutions using image-based devices (e.g., RGB cameras, IR cameras), the field of view (FOV) coverage is too small, and depth cameras are subject to a range limit (5 m). Furthermore, image-based solutions tend to keep the system under high computational load for long periods, which shortens product life and wastes energy.
In summary, how to provide a method that detects a person's condition effectively and accurately is an important issue for the industry to consider.
Accordingly, embodiments of the present invention mainly provide a sensor fusion method for detecting a person's condition, which can effectively extend product life and avoid wasting energy. In addition, by combining data from different sources, the resulting information carries less uncertainty than using any of those sources alone; that is, the person's condition can be determined more accurately.
In view of the above, one aspect of the present disclosure provides a sensor fusion method for detecting a person's condition, comprising: locating the position of at least one moving person within a detection area with a millimeter-wave radar; capturing an RGB image or an IR image of the at least one moving person with a depth-sensing camera, and generating two-dimensional human skeleton point information corresponding to the at least one moving person; executing a sensor fusion procedure on an artificial intelligence (AI) computing platform to synthesize a three-dimensional human skeleton time series from data derived from the two-dimensional human skeleton point information, wherein the AI computing platform is coupled to the millimeter-wave radar and the depth-sensing camera; and, when the number of synthesized three-dimensional human skeleton time series frames is greater than a threshold N, determining with a motion recognition module in the AI computing platform whether the at least one moving person has fallen, so as to decide whether to issue a notification.
According to one or more embodiments of the present disclosure, before the sensor fusion procedure is executed, the method further comprises: a mapping step that first converts the two-dimensional human skeleton point information into three-dimensional human skeleton point information; and a signal processing step that obtains a point cloud cluster average velocity from the signal of the millimeter-wave radar using a clustering algorithm; wherein the three-dimensional human skeleton point information and the point cloud cluster average velocity are combined in the sensor fusion procedure to produce the three-dimensional human skeleton time series.
According to one or more embodiments of the present disclosure, the depth-sensing camera further uses Time-of-Flight (ToF) technology, structured light technology, or active stereo vision technology to obtain depth information of the at least one moving person, which is used in the mapping step to obtain the three-dimensional human skeleton point information.
According to one or more embodiments of the present disclosure, the method further comprises: an ID number generation step that, when the RGB image or the IR image shows that the at least one moving person is a plurality of persons, assigns each moving person a corresponding ID number before the mapping step is performed; an ID number comparison step that compares each ID number with the ID numbers stored in a memory, wherein the memory is coupled to the AI computing platform; a data concatenation step that, when the ID number comparison step finds a match, concatenates the detected three-dimensional human skeleton time series with the time series of the same ID number; and, when the number of three-dimensional human skeleton time series frames is greater than the threshold N, determining with the motion recognition module in the AI computing platform whether the at least one moving person has fallen, so as to decide whether to issue the notification.
According to one or more embodiments of the present disclosure, between the ID number generation step and the ID number comparison step, the method further comprises a coordinate system conversion step that moves the origin of the coordinate system from the center of the depth-sensing camera to a human skeleton origin, wherein the human skeleton origin is the intersection of the lines connecting the shoulders and the head.
According to one or more embodiments of the present disclosure, when the ID number comparison step finds no match, a new ID number is created together with storage space for the corresponding three-dimensional human skeleton time series, the detected three-dimensional human skeleton time series is stored in the memory, and the method returns to the ID number generation step.
According to one or more embodiments of the present disclosure, within the detection area of the millimeter-wave radar, the depth-sensing camera captures the RGB image or the IR image of the at least one moving person at different positions one by one according to a preset priority order.
According to one or more embodiments of the present disclosure, the two-dimensional human skeleton point information is obtained by a pose estimation and tracking module in the AI computing platform according to a pose estimation and tracking model, wherein the backbone network of the pose estimation and tracking model uses a convolutional neural network architecture.
According to one or more embodiments of the present disclosure, the motion recognition module determines whether the at least one moving person has fallen by means of a deep learning model or a machine learning classifier.
According to one or more embodiments of the present disclosure, the millimeter-wave radar repeats its detection operation within the detection area until the at least one moving person appears and the position is confirmed, after which a motor coupled to the AI computing platform adjusts the shooting direction and angle of the depth-sensing camera.
To help the examiners gain a further understanding of the purpose, shape, structure, device features, and effects of the present invention, embodiments are described in detail below in conjunction with the drawings.
The following disclosure provides different embodiments or examples for implementing different features of the provided subject matter. The specific examples of components and arrangements described below are intended to simplify the present disclosure, not to limit it; the sizes and shapes of elements are likewise not limited by the disclosed ranges or values, but may depend on the process conditions or desired characteristics of the elements. For example, technical features of the invention are described using cross-sectional views that are schematic illustrations of idealized embodiments. Variations in the illustrated shapes due to manufacturing processes and/or tolerances are therefore to be expected and should not be construed as limiting.
Furthermore, spatially relative terms such as "below," "beneath," "lower," "above," and "upper" are used for ease of describing the relationship between the elements or features depicted in the drawings; in addition to the orientations depicted in the drawings, these terms encompass different orientations of the elements in use or operation.
First, it should be noted that the embodiments of the present invention use sensor fusion technology to combine data obtained from different sensors (e.g., a millimeter-wave radar and a depth-sensing camera) and thereby produce information that no single sensor could provide on its own.
In the embodiments of the present invention, the millimeter-wave radar first detects whether anyone is present within a large area of the environment and, if so, locates the position of the human body. The depth-sensing camera is then rotated to lock onto the body. Next, an artificial intelligence (AI) computing platform uses AI deep learning techniques to extract the three-dimensional (3D) human skeleton and track the target. Finally, the movement speed of the body's center point detected by the millimeter-wave radar is combined to determine whether a fall has occurred.
In the following, the sensor fusion method for detecting a person's condition and the system to which it is applied are described with reference to the drawings.
First, please refer to FIG. 1, which is a schematic diagram of the appearance of a hardware system according to an embodiment of the present invention. As shown in FIG. 1, the hardware system 100 includes an artificial intelligence computing platform 10, a millimeter-wave radar 20, a depth-sensing camera 30, a motor 40, and a memory 50. The AI computing platform 10 is coupled to the millimeter-wave radar 20, the depth-sensing camera 30, the motor 40, and the memory 50, respectively.
Next, please refer to FIG. 2A, which is a schematic diagram of the operation of the hardware system in an embodiment of the present invention. FIG. 2A is a top view illustrating that an embodiment of the present invention first uses the millimeter-wave radar 20 to search for the position of a human body in the field, and then determines the rotation direction and angle of the depth-sensing camera 30. In an embodiment of the present invention, the millimeter-wave radar 20 repeats its detection operation within the detection area 110 until at least one moving person 120 appears and the position is confirmed, after which a motor 40 coupled to the AI computing platform 10 adjusts the shooting direction and angle of the depth-sensing camera 30.
As shown in FIG. 2A, the millimeter-wave radar 20 of the hardware system 100 divides its detection area 110 into four quadrants. When the millimeter-wave radar 20 detects a moving person 120 within the detection area 110, it locates the position (x, y) of the moving person 120. For example, in this embodiment the moving person 120 is located in the fourth quadrant. A processor (not shown) in the AI computing platform 10 then substitutes the position (x, y) of the moving person 120 in the fourth quadrant into Formula 4 below to calculate the rotation angle α of the motor 40. If the position (x, y) of the moving person 120 lies in the first, second, or third quadrant, the corresponding Formula 1, 2, or 3 is used instead.
Quadrant 1: (Formula 1)
Quadrant 2: (Formula 2)
Quadrant 3: (Formula 3)
Quadrant 4: (Formula 4)
Here, the rotation angle α is defined as the included angle between the coordinate axis X and the line connecting the moving person 120 and the center of the hardware system 100.
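Formulas 1 to 4 appear as images in the original publication, so only their labels survive here. As an illustration only, a quadrant-aware angle computation consistent with the definition of α above might look like the following sketch (the atan2 reconstruction and the degree convention are assumptions, not the patent's exact formulas):

```python
import math

def rotation_angle(x: float, y: float) -> float:
    """Angle in degrees (0-360) between the +X axis and the line from the
    system center to the person at (x, y). A sketch consistent with the
    definition of the rotation angle; not the patent's exact Formulas 1-4."""
    alpha = math.degrees(math.atan2(y, x))  # quadrant-aware arctangent
    return alpha % 360.0  # map quadrants 3 and 4 into the 180-360 range
```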
In an embodiment of the present invention, the motor 40 is used to adjust the direction and angle of the depth-sensing camera 30; the motor 40 therefore rotates the depth-sensing camera 30 by the rotation angle α toward the position of the moving person 120. The depth-sensing camera 30 then performs human skeleton detection and obtains depth information within its own field of view (FOV).
In addition, please refer to FIG. 2B, which is a schematic diagram of generating a four-dimensional point cloud with the millimeter-wave radar and applying a clustering algorithm in an embodiment of the present invention. As shown in FIG. 2B, in step 200, data collection is performed. Then, in step 210, single-frame processing is executed: the millimeter-wave radar 20 generates a four-dimensional (4D) point cloud (x, y, z, v), and a clustering algorithm then finds the center point 220 and average velocity of each cluster, where (x, y, z) is the position of each point and v is its velocity. In this embodiment, the 4D point cloud (x, y, z, v) is generated by a frequency-modulated continuous-wave (FMCW) radar that emits millimeter waves and records the reflections from the scene, after which a sparse point cloud is computed and the points corresponding to static objects are filtered out. The clustering algorithm used is the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm, which groups the point cloud and finds the center point 220 and average velocity of each cluster.
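As an illustration, a minimal sketch of this single-frame clustering step, assuming scikit-learn's DBSCAN and illustrative eps/min_samples values (the patent does not specify its parameters):

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_point_cloud(points: np.ndarray, eps: float = 0.5, min_samples: int = 5):
    """points: (N, 4) array of (x, y, z, v) radar returns, with points from
    static objects already filtered out. Returns one (center point,
    average velocity) pair per cluster."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points[:, :3])
    clusters = []
    for label in set(labels) - {-1}:          # -1 marks DBSCAN noise points
        members = points[labels == label]
        center = members[:, :3].mean(axis=0)  # cluster center point 220
        mean_v = members[:, 3].mean()         # point cloud cluster average velocity
        clusters.append((center, mean_v))
    return clusters
```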
In addition, please refer to FIG. 2C, which is a schematic diagram of the operation of the hardware system in an embodiment of the present invention. FIG. 2C is a top view illustrating that, in a multi-person scene, the field of view 130 of the depth-sensing camera 30 of the hardware system 100 cannot cover the detection area 110 of the millimeter-wave radar 20 all at once, so the priority for locking onto targets is resolved according to preset conditions. For example, the moving person 230 in the first quadrant shows a drastic change of velocity v along the Z axis of the point cloud and, according to the preset conditions, has first priority, so the depth-sensing camera 30 first rotates toward the moving person 230 and performs the relevant procedures. After the moving person 230 has been processed, the moving person 240 in the fourth quadrant, who shows an obvious displacement along the X or Y axis of the point cloud, has second priority according to the preset conditions, so the depth-sensing camera 30 then rotates toward the moving person 240 and performs the relevant procedures. After the moving person 240 has been processed, the third quadrant, which contains the largest number of detected people among the regions, has third priority according to the preset conditions, so the depth-sensing camera 30 then rotates toward the moving crowd 250 and performs the relevant procedures. FIG. 2C merely illustrates one embodiment of the multi-person priority judgment of the millimeter-wave radar 20; in other embodiments, the preset conditions may of course define a different priority order according to different design requirements.
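As an illustration, one way to encode such a preset priority order is sketched below; the thresholds and the Target fields are assumptions made for illustration, not values from the patent:

```python
from dataclasses import dataclass

@dataclass
class Target:
    velocity_z: float       # point cloud velocity along the Z axis
    displacement_xy: float  # frame-to-frame displacement in the X-Y plane
    people_count: int       # number of people detected in the target's region

def priority_key(t: Target):
    """Smaller tuple sorts first. First priority: drastic Z-velocity change;
    second: obvious X/Y displacement; third: most-populated region.
    Thresholds are illustrative assumptions."""
    falling = abs(t.velocity_z) > 1.0
    moving = t.displacement_xy > 0.3
    return (0 if falling else (1 if moving else 2), -t.people_count)

# the depth-sensing camera visits targets in this order:
# for t in sorted(targets, key=priority_key): ...
```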
Next, please refer to FIG. 3A, which is a flowchart of the sensor fusion method for detecting a person's condition in an embodiment of the present invention. As shown in FIG. 3A, the sensor fusion method of one embodiment includes steps S10 to S130, each described below with reference to FIG. 1 and FIGS. 2A to 2C.
In step S10, a millimeter-wave radar 20 detects a moving human body and returns the information to the processor (not shown) in the AI computing platform 10. In different embodiments, as in a real environment, the moving human body is at least one moving person, for example a single moving person or a moving crowd.
In step S20, the processor (not shown) in the AI computing platform 10 determines whether anyone is present within the detection area 110 of the millimeter-wave radar 20. If not, the method returns to step S10 and the millimeter-wave radar 20 continues detecting moving bodies. If someone is present, for example the moving person 120 of FIG. 2A or the moving persons 230, 240 or the moving crowd 250 of FIG. 2C, the method proceeds to step S30.
In step S30, the processor (not shown) in the AI computing platform 10 locates the positions of the detected moving bodies and rotates the depth-sensing camera 30 by means of the motor 40 to aim at the detected moving bodies one by one.
In step S40, the depth-sensing camera 30 captures an RGB image or an IR image of each detected moving body in turn.
In step S50, two-dimensional human skeleton estimation and tracking are performed. The processor (not shown) in the AI computing platform 10 generates two-dimensional human skeleton point information corresponding to the detected moving body from the RGB image or IR image of step S40.
In step S60, the processor (not shown) in the AI computing platform 10 determines whether anyone is within the field of view of the depth-sensing camera 30. If not, the method returns to step S10 and the millimeter-wave radar 20 continues detecting moving bodies. If someone is present, the method proceeds to step S70.
Next, in steps S70 and S80, the AI computing platform 10 executes a sensor fusion procedure, as described below.
In step S70, according to the two-dimensional human skeleton point information obtained in step S50, a mapping module (not shown) in the AI computing platform 10 executes a mapping step S701, converting the two-dimensional human skeleton point information into three-dimensional human skeleton point information expressed as (x_m, y_m, z_m), where m is a natural number, as shown in FIG. 3B. In addition, the processor (not shown) in the AI computing platform 10 executes a signal processing step S702, using the clustering algorithm on the signal of the millimeter-wave radar 20 to obtain the point cloud cluster average velocity expressed as v_1 (the so-called millimeter-wave radar point cloud velocity extraction), as shown in FIG. 3B and further described there. In an embodiment of the present invention, the depth-sensing camera 30 further uses Time-of-Flight (ToF) technology, structured light technology, or active stereo vision technology to obtain depth information of the at least one moving person, which is used in the mapping step to obtain the three-dimensional human skeleton point information.
In step S80, the three-dimensional human skeleton point information obtained in step S70 and the point cloud cluster average velocity are combined into a three-dimensional human skeleton time series, as shown in FIG. 3B and further described there.
In step S90, the processor (not shown) in the AI computing platform 10 determines whether the number of three-dimensional human skeleton time series frames synthesized in step S80 is greater than a threshold N. If the number does not exceed the threshold N (i.e., N frames), the method returns to step S50; if it does, the next step S100 is executed.
In step S100, the AI computing platform 10 calls a motion recognition module (not shown), which is used to determine whether the at least one moving person has fallen.
In step S110, if the motion recognition module determines that no one has fallen, the method returns to step S50. If it determines that someone has fallen, the next step S120 is executed.
In step S120, when the motion recognition module determines that falls have occurred consecutively and the count is greater than or equal to K, a notification is issued in step S130 to report that someone has fallen. If the determined falls do not meet the condition of occurring consecutively with a count greater than or equal to K, the method returns to step S50.
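As an illustration, the overall S10-S130 control flow can be sketched as follows; every helper name here is a hypothetical stand-in for the corresponding module, not an API from the patent:

```python
def detection_loop(radar, camera, motor, platform, N: int, K: int) -> None:
    """Sketch of steps S10-S130 under the assumption that radar, camera,
    motor, and platform expose the hypothetical helpers used below."""
    consecutive_falls = 0
    series = []
    while True:
        targets = radar.detect()                        # S10-S20
        if not targets:
            continue
        motor.aim_at(targets[0])                        # S30
        frame = camera.capture()                        # S40 (RGB or IR image)
        skeleton_2d = platform.estimate_2d(frame)       # S50
        if skeleton_2d is None:                         # S60: no one in FOV
            series.clear()
            continue
        skeleton_3d = platform.map_to_3d(skeleton_2d)   # S70 (mapping, S701)
        v1 = platform.cluster_velocity(radar.signal())  # S70 (signal processing, S702)
        series.append(platform.fuse(skeleton_3d, v1))   # S80 (sensor fusion)
        if len(series) > N:                             # S90: threshold check
            fell = platform.recognize_fall(series)      # S100-S110
            consecutive_falls = consecutive_falls + 1 if fell else 0
            if consecutive_falls >= K:                  # S120
                platform.notify()                       # S130
```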
Next, please refer to FIG. 3B, which illustrates the sensor fusion procedure of FIG. 3A. As shown in FIG. 3B, in an embodiment of the present invention the sensor fusion procedure fuses the data of the depth-sensing camera 30 and the millimeter-wave radar 20. As described above, the two-dimensional human skeleton point information obtained from the RGB image or IR image of the depth-sensing camera 30 is mapped in step S701 into the three-dimensional human skeleton point information 300, comprising m data points (x_1, y_1, z_1), (x_2, y_2, z_2), (x_3, y_3, z_3), ..., (x_{m-2}, y_{m-2}, z_{m-2}), (x_{m-1}, y_{m-1}, z_{m-1}), and (x_m, y_m, z_m). In addition, the signal obtained from the millimeter-wave radar 20 undergoes signal processing in step S702 to obtain the point cloud cluster average velocity 310, expressed as v_1. The resulting three-dimensional human skeleton point information and point cloud cluster average velocity are then synthesized into a three-dimensional human skeleton time series 320 by the sensor fusion technique.
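As an illustration, a minimal sketch of this fusion step, assuming each frame's fused feature is simply the m skeleton points flattened and concatenated with the cluster velocity (the concrete feature layout is an assumption):

```python
import numpy as np

def fuse_frame(skeleton_3d: np.ndarray, v1: float) -> np.ndarray:
    """skeleton_3d: (m, 3) array of (x, y, z) skeleton points 300;
    v1: point cloud cluster average velocity 310.
    Returns one fused feature vector of length 3*m + 1."""
    return np.append(skeleton_3d.reshape(-1), v1)

# appending fused frames over time builds the 3D human skeleton time series 320:
# time_series.append(fuse_frame(skeleton_3d, v1))
```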
Next, please refer to FIG. 4, which is a flowchart of the fall recognition algorithm for multi-person scenes in an embodiment of the present invention. As shown in FIG. 2C, FIG. 3A, and FIG. 4, in a multi-person scene as described above, the depth-sensing camera 30 captures the RGB image or IR image of the at least one moving person at different positions one by one within the detection area 110 of the millimeter-wave radar 20, according to the preset priority order. The multi-person fall recognition algorithm of steps S400 to S480 is then executed.
In step S400, when the RGB image or the IR image shows that the at least one moving person is a plurality of persons, an ID number generation step is performed: each moving person is assigned a corresponding ID number, and multi-person two-dimensional human skeleton estimation and tracking are performed to obtain the two-dimensional human skeleton point information.
In step S410, the mapping step is performed to map the two-dimensional human skeleton point information into three-dimensional human skeleton point information.
In step S420, coordinate system conversion is performed: the origin of the coordinate system is moved from the center of the depth-sensing camera 30 to the human skeleton origin, which is the intersection of the lines connecting the shoulders and the head.
In step S430, ID number comparison is performed, that is, each ID number is compared with the ID numbers stored in the memory 50. If the comparison finds no match, then in step S440 a new ID number is created together with storage space for the corresponding three-dimensional human skeleton time series; in step S450 the detected three-dimensional human skeleton time series is stored in the memory 50, and the method returns to step S400. If the comparison in step S430 finds a match, the method proceeds to step S460 to perform data concatenation, concatenating the detected three-dimensional human skeleton time series with the time series of the same ID number, and then proceeds to step S470.
In step S470, if the number of three-dimensional human skeleton time series frames is not greater than the threshold N, the method returns to step S400. If it is greater than the threshold N, the method proceeds to step S480, in which the motion recognition module (not shown) in the AI computing platform 10 determines whether the at least one moving person has fallen and decides whether to issue a notification reporting the fall.
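As an illustration, the per-ID bookkeeping of steps S430 to S470 can be sketched with a plain dictionary standing in for the memory 50 (the storage layout is an assumption):

```python
from collections import defaultdict

# stand-in for memory 50: one time series (list of fused frames) per person ID
skeleton_series = defaultdict(list)

def update_track(person_id: int, fused_frame, N: int) -> bool:
    """S430-S470: if the ID is new, defaultdict creates its storage space
    (S440/S450); otherwise the new frame is concatenated onto the existing
    series (S460). Returns whether the series is long enough for the
    motion recognition module (S470)."""
    skeleton_series[person_id].append(fused_frame)
    return len(skeleton_series[person_id]) > N
```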
In addition, please refer to FIG. 5, which is a functional schematic diagram of a pose estimation and tracking module according to an embodiment of the present invention. As shown in FIG. 5, the pose estimation and tracking module (not shown) in the AI computing platform 10 feeds the RGB image 502 or IR image 504 into the human skeleton estimation and tracking model 508 to obtain the two-dimensional human skeleton point information 510, which is then combined with the depth information 506 to map out the three-dimensional human skeleton point information 512.
In addition, please refer to FIG. 6, which is a functional schematic diagram of a motion recognition module according to an embodiment of the present invention. As shown in FIG. 6, the motion recognition module (not shown) in the AI computing platform 10 feeds the three-dimensional human skeleton point information corresponding to time steps t-2, t-1, and t into the motion recognition model 600, which recognizes the motion of the at least one moving person according to the estimated categories 610 and produces an estimation result 620. The time steps t-2, t-1, t here are merely illustrative and do not limit the invention; in practice, the number of time steps fed into the motion recognition model 600 depends on the actual training requirements. In an embodiment of the present invention, the motion recognition model 600 is, for example, a fall recognition model. Moreover, the motion recognition model 600 may be a deep learning architecture (RNN, LSTM, or GCN) or a machine learning classifier (SVM).
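As an illustration, such a classifier can be sketched as an LSTM head over the fused per-frame features, assuming PyTorch; the layer sizes and the two-class output are illustrative assumptions, not the patent's architecture:

```python
import torch
import torch.nn as nn

class FallRecognizer(nn.Module):
    """Sketch of a motion recognition model 600: a sequence of fused frames
    in, class scores (e.g., fall / no fall) out."""
    def __init__(self, feature_dim: int, hidden: int = 64, classes: int = 2):
        super().__init__()
        self.lstm = nn.LSTM(feature_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, feature_dim) three-dimensional skeleton time series
        _, (h, _) = self.lstm(x)
        return self.head(h[-1])  # estimation result 620 over the categories 610
```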
In addition, please refer to FIG. 7, which is a functional schematic diagram of a pose estimation and tracking module according to another embodiment of the present invention. As shown in FIG. 7, the pose estimation and tracking module (not shown) in the AI computing platform 10 feeds the IR images 720 and 710 corresponding to time steps t-1 and t, together with the center-point heatmap 730 of time t-1, into the human skeleton estimation and tracking model 740 to obtain the images 750, 760, and 770. In image 750, bounding boxes estimate the number and positions of the human bodies detected in the frame. In image 760, two-dimensional skeleton estimation locates the body joints and the keypoints of important body parts. In image 770, offset estimation can be used to estimate or predict coordinate displacements between consecutive frames in order to track the human body ID numbers. In an embodiment of the present invention, the backbone network of the pose estimation and tracking model 740 may be a convolutional neural network (CNN) of various forms. Moreover, the different tasks share the backbone model, reducing the computational burden on the system.
Next, please refer to FIG. 8, which is a schematic diagram of a three-dimensional human skeleton time series according to an embodiment of the present invention. As shown in FIG. 8, frame n-4, frame n-3, frame n-2, frame n-1, and frame n are consecutive frames illustrating the three-dimensional human skeleton time series, where n is a natural number. In addition, the reference symbol P is the human skeleton origin; as described in step S420 above, converting the origin of the coordinate system from the center of the depth-sensing camera 30 to the human skeleton origin P eliminates the influence of the viewing angle of the depth-sensing camera 30 on motion recognition models such as the fall recognition model.
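As an illustration, the step S420 origin shift can be sketched as follows, assuming the skeleton origin P is approximated by the midpoint of the shoulder line beneath the head (the joint indices and the midpoint approximation are assumptions):

```python
import numpy as np

def to_skeleton_frame(skeleton: np.ndarray, l_sh: int = 1, r_sh: int = 2) -> np.ndarray:
    """skeleton: (m, 3) joint positions in depth-sensing camera coordinates.
    Shifts the origin from the camera center to the skeleton origin P,
    approximated here as the shoulder midpoint, so that the camera's
    viewing angle no longer biases the fall recognition model."""
    p = (skeleton[l_sh] + skeleton[r_sh]) / 2.0  # skeleton origin P
    return skeleton - p
```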
The above embodiments are merely intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the preferred embodiments, those of ordinary skill in the art should understand that the technical solutions of the present invention may be modified or equivalently replaced without departing from the spirit and scope of the technical solutions of the present invention.
10: artificial intelligence computing platform
20: millimeter-wave radar
30: depth-sensing camera
40: motor
50: memory
100: hardware system
110: detection area
120: moving person
130: field of view
200, 210: steps
220: center point
230, 240: moving persons
250: moving crowd
300: three-dimensional human skeleton point information
310: point cloud cluster average velocity
320: three-dimensional human skeleton time series
502: RGB image
504: IR image
506: depth information
508: human skeleton estimation and tracking model
510: two-dimensional human skeleton point information
512: three-dimensional human skeleton point information
600: motion recognition model
610: estimated categories
620: estimation result
710, 720: IR images
730: center-point heatmap
740: human skeleton estimation and tracking model
750, 760, 770: images
P: human skeleton origin
S10~S130: steps
S400~S480: steps
S701, S702: steps
t: time
v: velocity
v_1: point cloud cluster average velocity
(x, y): position
(x, y, z, v): four-dimensional point cloud
X, Y, Z: coordinate axes
α: rotation angle
To make the above and other objects, features, advantages, and embodiments of the present invention more comprehensible, the accompanying drawings are described as follows:
FIG. 1 is a schematic diagram of the appearance of a hardware system according to an embodiment of the present invention.
FIG. 2A is a schematic diagram of the operation of the hardware system in an embodiment of the present invention.
FIG. 2B is a schematic diagram of generating a four-dimensional point cloud with a millimeter-wave radar and applying a clustering algorithm in an embodiment of the present invention.
FIG. 2C is a schematic diagram of the operation of the hardware system in an embodiment of the present invention.
FIG. 3A is a flowchart of the sensor fusion method for detecting a person's condition in an embodiment of the present invention.
FIG. 3B illustrates the sensor fusion procedure of FIG. 3A.
FIG. 4 is a flowchart of the fall recognition algorithm for multi-person scenes in an embodiment of the present invention.
FIG. 5 is a functional schematic diagram of a pose estimation and tracking module according to an embodiment of the present invention.
FIG. 6 is a functional schematic diagram of a motion recognition module according to an embodiment of the present invention.
FIG. 7 is a functional schematic diagram of a pose estimation and tracking module according to another embodiment of the present invention.
FIG. 8 is a schematic diagram of a three-dimensional human skeleton time series according to an embodiment of the present invention.
In accordance with common practice, the various features and elements in the drawings are not drawn to scale but are drawn to best present the specific features and elements relevant to the present invention. In addition, the same or similar reference numerals are used among the different drawings to refer to similar elements and parts.
S10~S130: steps
Claims (9)
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202111369715.0 | 2021-11-18 | ||
| CN202111369715.0A CN114091601B (en) | 2021-11-18 | 2021-11-18 | Sensor fusion method for detecting personnel condition |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| TWI778872B true TWI778872B (en) | 2022-09-21 |
| TW202321987A TW202321987A (en) | 2023-06-01 |
Family
ID=80301718
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| TW110143567A TWI778872B (en) | 2021-11-18 | 2021-11-23 | Sensor fusion method for detecting a person's condition |
Country Status (2)
| Country | Link |
|---|---|
| CN (1) | CN114091601B (en) |
| TW (1) | TWI778872B (en) |
Cited By (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| TWI815680B (en) * | 2022-09-28 | 2023-09-11 | 財團法人車輛研究測試中心 | In-cabin detection method and system |
| CN117017276A (en) * | 2023-10-08 | 2023-11-10 | 中国科学技术大学 | A real-time human body close boundary detection method based on millimeter wave radar |
| TWI832689B (en) * | 2023-02-01 | 2024-02-11 | 新加坡商光寶科技新加坡私人有限公司 | Training system and training method for human presence detection model |
| CN118068318A (en) * | 2024-04-17 | 2024-05-24 | 德心智能科技(常州)有限公司 | Multimodal perception method and system based on millimeter wave radar and environmental sensor |
| TWI858709B (en) * | 2023-05-16 | 2024-10-11 | 大鵬科技股份有限公司 | Intruder detection system |
| US12482228B2 (en) | 2023-02-01 | 2025-11-25 | Lite-On Singapore Pte. Ltd. | Training system and training method for human-presence detection model |
Families Citing this family (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| TWI798042B (en) * | 2022-03-31 | 2023-04-01 | 崑山科技大學 | Environment sensing device and method for freezer |
| CN116125464A (en) * | 2023-01-19 | 2023-05-16 | 河海大学 | Disinfection robot body sensing method and system based on multi-sensor fusion |
| CN116386139A (en) * | 2023-03-27 | 2023-07-04 | 业成科技(成都)有限公司 | Fall monitoring method, device, computer equipment and storage medium |
| TWI866398B (en) * | 2023-08-17 | 2024-12-11 | 宏達國際電子股份有限公司 | Detection device and detection metho for application in radar field |
| TWI876701B (en) * | 2023-11-24 | 2025-03-11 | 國眾電腦股份有限公司 | Industrial safety warning system and method |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN105184280A (en) * | 2015-10-10 | 2015-12-23 | 东方网力科技股份有限公司 | Human body identity identification method and apparatus |
| US20180323992A1 (en) * | 2015-08-21 | 2018-11-08 | Samsung Electronics Company, Ltd. | User-Configurable Interactive Region Monitoring |
| CN111695402A (en) * | 2019-03-12 | 2020-09-22 | 沃尔沃汽车公司 | Tool and method for labeling human body posture in 3D point cloud data |
| TW202109468A (en) * | 2019-08-28 | 2021-03-01 | 技嘉科技股份有限公司 | Human condition detection device |
Family Cites Families (11)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| TWI493510B (en) * | 2013-02-06 | 2015-07-21 | 由田新技股份有限公司 | Falling down detection method |
| CN107590433A (en) * | 2017-08-04 | 2018-01-16 | 湖南星云智能科技有限公司 | A kind of pedestrian detection method based on millimetre-wave radar and vehicle-mounted camera |
| CN110208793B (en) * | 2019-04-26 | 2022-03-11 | 纵目科技(上海)股份有限公司 | Auxiliary driving system, method, terminal and medium based on millimeter wave radar |
| CN110555412B (en) * | 2019-09-05 | 2023-05-16 | 深圳龙岗智能视听研究院 | End-to-end human body gesture recognition method based on combination of RGB and point cloud |
| CN111626199B (en) * | 2020-05-27 | 2023-08-08 | 多伦科技股份有限公司 | Abnormal behavior analysis method for large-scale multi-person carriage scene |
| CN111967379B (en) * | 2020-08-14 | 2022-04-08 | 西北工业大学 | Human behavior recognition method based on RGB video and skeleton sequence |
| CN112346055B (en) * | 2020-10-23 | 2024-04-16 | 无锡威孚高科技集团股份有限公司 | Fall detection method and device based on millimeter wave radar and millimeter wave radar equipment |
| CN112712129B (en) * | 2021-01-11 | 2024-04-19 | 深圳力维智联技术有限公司 | Multi-sensor fusion method, device, equipment and storage medium |
| CN112800905A (en) * | 2021-01-19 | 2021-05-14 | 浙江光珀智能科技有限公司 | Pull-up counting method based on RGBD camera attitude estimation |
| CN112782664B (en) * | 2021-02-22 | 2023-12-12 | 四川八维九章科技有限公司 | A bathroom fall detection method based on millimeter wave radar |
| CN113646736B (en) * | 2021-07-17 | 2024-10-15 | 华为技术有限公司 | Gesture recognition method, device, system and vehicle |
2021
- 2021-11-18: CN application CN202111369715.0A, granted as CN114091601B (active)
- 2021-11-23: TW application TW110143567A, granted as TWI778872B (active)
Patent Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20180323992A1 (en) * | 2015-08-21 | 2018-11-08 | Samsung Electronics Company, Ltd. | User-Configurable Interactive Region Monitoring |
| CN105184280A (en) * | 2015-10-10 | 2015-12-23 | 东方网力科技股份有限公司 | Human body identity identification method and apparatus |
| CN111695402A (en) * | 2019-03-12 | 2020-09-22 | 沃尔沃汽车公司 | Tool and method for labeling human body posture in 3D point cloud data |
| TW202109468A (en) * | 2019-08-28 | 2021-03-01 | 技嘉科技股份有限公司 | Human condition detection device |
Cited By (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| TWI815680B (en) * | 2022-09-28 | 2023-09-11 | 財團法人車輛研究測試中心 | In-cabin detection method and system |
| TWI832689B (en) * | 2023-02-01 | 2024-02-11 | 新加坡商光寶科技新加坡私人有限公司 | Training system and training method for human presence detection model |
| US12482228B2 (en) | 2023-02-01 | 2025-11-25 | Lite-On Singapore Pte. Ltd. | Training system and training method for human-presence detection model |
| TWI858709B (en) * | 2023-05-16 | 2024-10-11 | 大鵬科技股份有限公司 | Intruder detection system |
| CN117017276A (en) * | 2023-10-08 | 2023-11-10 | 中国科学技术大学 | A real-time human body close boundary detection method based on millimeter wave radar |
| CN117017276B (en) * | 2023-10-08 | 2024-01-12 | 中国科学技术大学 | A real-time human body close boundary detection method based on millimeter wave radar |
| CN118068318A (en) * | 2024-04-17 | 2024-05-24 | 德心智能科技(常州)有限公司 | Multimodal perception method and system based on millimeter wave radar and environmental sensor |
Also Published As
| Publication number | Publication date |
|---|---|
| CN114091601B (en) | 2023-05-05 |
| TW202321987A (en) | 2023-06-01 |
| CN114091601A (en) | 2022-02-25 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| TWI778872B (en) | Sensor fusion method for detecting a person's condition | |
| US11302026B2 (en) | Attitude recognition method and device, and movable platform | |
| US10354129B2 (en) | Hand gesture recognition for virtual reality and augmented reality devices | |
| US9317741B2 (en) | Three-dimensional object modeling fitting and tracking | |
| JP2014127208A (en) | Method and apparatus for detecting object | |
| CA2884383A1 (en) | Methods, devices and systems for detecting objects in a video | |
| JP2005530278A (en) | System and method for estimating pose angle | |
| JP2016099982A (en) | Behavior recognition device, behaviour learning device, method, and program | |
| CN107563295B (en) | Multi-Kinect-based all-dimensional human body tracking method and processing equipment | |
| CN110533720A (en) | Semantic SLAM system and method based on joint constraint | |
| CN108537214B (en) | An automatic construction method of indoor semantic map | |
| US9104944B2 (en) | Object recognition method, descriptor generating method for object recognition, and descriptor for object recognition | |
| JP2016085602A (en) | Sensor information integration method and apparatus | |
| CN106022266A (en) | A target tracking method and device | |
| Nickalls et al. | A real-time and high performance posture estimation system based on millimeter-wave radar | |
| CN112862865A (en) | Detection and identification method and device for underwater robot and computer storage medium | |
| CN115597582A (en) | Instant positioning and map construction method, device and system and storage medium | |
| Weinrich et al. | Appearance-based 3D upper-body pose estimation and person re-identification on mobile robots | |
| CN117292153B (en) | A SLAM method integrating deep neural network to remove purely dynamic feature points | |
| JP2018195965A (en) | Flying object position detection apparatus, flying object position detection system, flying object position detection method, and program | |
| Amamra et al. | Real-time multiview data fusion for object tracking with RGBD sensors | |
| CN117409393A (en) | Method and system for detecting laser point cloud and visual fusion obstacle of coke oven locomotive | |
| CN117152832A (en) | Fall detection method, device, electronic equipment and computer-readable storage medium | |
| Yu et al. | Research on human body detection and trajectory tracking algorithm based on multi-sensor fusion | |
| CN113160295A (en) | Method and device for correcting joint point position |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| GD4A | Issue of patent certificate for granted invention patent |