TWI866828B - Hand tracking system and method - Google Patents
- Publication number
- TWI866828B (application TW113112258A)
- Authority
- TW
- Taiwan
- Prior art keywords
- hand
- tracking
- contact portion
- tracker
- processor
Landscapes
- User Interface Of Digital Computer (AREA)
- Image Analysis (AREA)
Abstract
Description
The present disclosure relates to a hand tracking system, and more particularly, to a hand tracking system and method.
To provide users with an immersive experience, technologies related to extended reality (XR), such as augmented reality (AR), virtual reality (VR), and mixed reality (MR), are continually being developed. AR technology allows users to bring virtual elements into the real world. VR technology allows users to enter an entirely new virtual world and experience a different life. MR technology merges the real world with the virtual world. In addition, to provide users with a fully immersive experience, visual content, audio content, or other sensory content may be provided through one or more devices.
The present disclosure provides a hand tracking system and a hand tracking method to improve hand tracking when the hand is holding an object.
The hand tracking system of the present disclosure includes a camera, a tracker, a memory, and a processor. The camera is configured to obtain a hand image of a user's hand. The tracker is adapted to be attached to an object and is configured to obtain tracker data. The memory is configured to store program code. The processor is configured to access the program code to: determine, based on the hand image or the tracker data, that the hand is touching the object; determine, based on the object, a contact portion of the object, wherein the contact portion is adapted to be in contact with the user's hand; determine, based on the hand image, a hand gesture of the user; and determine, based on the hand gesture and the contact portion, a hand pose of the hand and an object pose of the object.
The hand tracking method of the present disclosure includes: obtaining a hand image of a user's hand through a camera; obtaining tracker data through a tracker, wherein the tracker is adapted to be attached to an object; determining, through a processor, that the hand is touching the object based on the hand image or the tracker data; determining a contact portion of the object based on the object, wherein the contact portion is adapted to be in contact with the user's hand; determining a hand gesture of the user based on the hand image; and determining a hand pose of the hand and an object pose of the object based on the hand gesture and the contact portion.
Based on the above, according to the hand tracking system and the hand tracking method, the hand and the object can be correctly tracked while the hand is holding the object.
Reference will now be made in detail to exemplary embodiments of the present disclosure, examples of which are illustrated in the accompanying drawings. Wherever possible, the same reference numerals are used in the drawings and the description to refer to the same or like components.
Throughout this specification and the appended claims of the present disclosure, certain terms are used to refer to particular components. Those skilled in the art will understand that electronic device manufacturers may refer to the same component by different names. This document does not intend to distinguish between components that have the same function but different names. In the following description and claims, words such as "including" and "comprising" are open-ended terms and should be interpreted as "including but not limited to...".
To improve the immersive experience, physical objects in the real world may be used to represent virtual objects in the virtual world. For example, when users can physically touch and interact with virtual objects, the virtual experience can feel more real and more immersive. This is especially true for tasks that require precision or dexterity, such as painting, assembling a piece of furniture, or playing a musical instrument. In addition, using physical objects as stand-ins for virtual objects can reduce the user's cognitive load: users do not have to learn new and complex controls, and can instead rely on their existing knowledge of how to interact with physical objects in the real world.
However, when a user is holding a physical object, part of the user's hand may be occluded by the physical object, which reduces the accuracy of hand tracking. In addition, when the position of the physical object or the hand is not clearly identified, an offset may appear between the virtual object and the virtual hand in the virtual world. Because of the offset, the virtual object may appear detached from the virtual hand; that is, the virtual hand or its virtual gesture may look unnatural. Such unrealistic artifacts can negatively affect the user experience and thus degrade the immersive experience. Therefore, those skilled in the art seek a more faithful way to translate the relationship between a physical object and a hand into the relationship between a virtual object and a virtual hand.
FIG. 1 is a schematic diagram of a hand tracking system according to an embodiment of the present disclosure. Referring to FIG. 1, the hand tracking system 100 may include a camera 110, a tracker 120, a memory 130, and a processor 140. In one embodiment, the camera 110, the tracker 120, the memory 130, and/or the processor 140 may be integrated in a single electronic device or separated from one another, but the disclosure is not limited thereto. In one embodiment, the camera 110, the memory 130, and the processor 140 may be included in a head-mounted display (HMD) device, and the tracker 120 may be included in the object. In another embodiment, the camera 110 may be placed somewhere other than on the user's head to capture images of the user. However, the disclosure is not limited thereto.
In one embodiment, the camera 110 may include, for example, a complementary metal oxide semiconductor (CMOS) camera or a charge coupled device (CCD) camera. In one embodiment, the camera 110 may be disposed on an HMD device, wearable glasses (e.g., AR/VR goggles), another electronic device, other similar devices, or a combination of these devices. However, the disclosure is not limited thereto.
In one embodiment, the tracker 120 may include, for example, a gyroscope, an accelerometer, an inertial measurement unit (IMU) sensor, other similar devices, or a combination of these devices. However, the disclosure is not limited thereto.
In one embodiment, the memory 130 may include, for example, any type of fixed or removable random access memory (RAM), read-only memory (ROM), flash memory, a hard disk, other similar devices, or a combination of these devices, and may be used to store a plurality of program codes or modules. However, the disclosure is not limited thereto.
In one embodiment, each component in the hand tracking system 100 may include a communication circuit, and the communication circuit may include, for example, a wired network module, a wireless network module, a Bluetooth module, an infrared module, a radio frequency identification (RFID) module, a Zigbee network module, or a near field communication (NFC) network module, but the disclosure is not limited thereto. That is, the components of the hand tracking system 100 may communicate with one another through wired or wireless communication.
In one embodiment, the processor 140 may be coupled to the camera 110, the tracker 120, and the memory 130. In an embodiment, the processor 140 may be a general-purpose processor, a special-purpose processor, a conventional processor, a digital signal processor, a plurality of microprocessors, one or more microprocessors combined with a digital signal processor core, a controller, a microcontroller, an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), any other kind of integrated circuit, a state machine, an advanced RISC machine (ARM)-based processor, or the like.
In one embodiment, the processor 140 may transmit data to and receive data from the camera 110, the tracker 120, and the memory 130. In addition, the memory 130 may be configured to store program code, and the processor 140 may access the program code from the memory 130 to perform the hand tracking method of the present disclosure, the details of which are described below.
FIG. 2 is a schematic flowchart of a hand tracking method according to an embodiment of the present disclosure. Referring to FIG. 1 and FIG. 2, the hand tracking method 200 may be performed by the hand tracking system 100 in FIG. 1, and the details of each step in FIG. 2 are described below with reference to the components illustrated in FIG. 1.
First, in step S210, the camera 110 may be configured to obtain a hand image of the user's hand. Then, in step S220, the tracker 120, which may be adapted to be attached to an object, may be configured to obtain tracker data.
Then, in step S230, the processor 140 may be configured to determine, based on the hand image or the tracker data, that the hand is touching (e.g., in contact with) the object. Next, in step S240, the processor 140 may be configured to determine a contact portion of the object based on the object. In one embodiment, the contact portion may be adapted to be in contact with the user's hand; for example, the contact portion may be a handle or grip of the object, but the disclosure is not limited thereto. Further, in step S250, the processor 140 may be configured to determine a hand gesture of the user based on the hand image. In addition, in step S260, the processor 140 may be configured to determine, based on the hand gesture and the contact portion, a hand pose (e.g., a position and an orientation) of the hand and an object pose of the object. Moreover, the processor 140 may be configured to display a virtual hand and a virtual object based on the hand pose and the object pose, respectively.
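As a non-limiting illustration, the flow of steps S230 to S260 can be sketched as a minimal pipeline. This is a hypothetical sketch only: the function names, the injected callables, and the result structure are assumptions for illustration and are not part of the disclosure.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class TrackingResult:
    touching: bool
    contact_portion: Optional[str]
    gesture: Optional[str]
    hand_pose: Optional[Tuple]
    object_pose: Optional[Tuple]

def track(hand_image, tracker_data, detect_touch, find_contact_portion,
          classify_gesture, solve_poses):
    """Sketch of steps S230-S260; each stage is an injected callable."""
    if not detect_touch(hand_image, tracker_data):            # S230: touching?
        return TrackingResult(False, None, None, None, None)
    contact = find_contact_portion(tracker_data)              # S240: contact portion
    gesture = classify_gesture(hand_image)                    # S250: hand gesture
    hand_pose, object_pose = solve_poses(gesture, contact)    # S260: both poses
    return TrackingResult(True, contact, gesture, hand_pose, object_pose)
```

The later steps run only when a touch is detected, mirroring the order of the flowchart.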
In this way, the hand and the object can be correctly tracked. Therefore, the virtual hand and the virtual object can be displayed in the virtual world in a realistic manner, thereby providing the user with a fully immersive experience. For implementation details of the hand tracking method 200, reference may be made to the following description for sufficient teachings, suggestions, and implementation embodiments.
FIG. 3A is a schematic diagram of an interaction scenario between a hand and an object according to an embodiment of the present disclosure. FIG. 3B is a schematic diagram of an interaction scenario between a hand and an object according to an embodiment of the present disclosure. Referring to FIG. 3A and FIG. 3B, the interaction scenario 300A and the interaction scenario 300B depict exemplary embodiments of how a hand H may grasp objects in various manners.
Referring first to FIG. 3A, the interaction scenario 300A may include a hand H, an object O1, an object O2, and an object O3. In one embodiment, the object O1 may be a gun, the object O2 may be a racket, and the object O3 may be a bat. However, the disclosure is not limited thereto.
In one embodiment, the user may intend to hold one of the objects O1, O2, and O3 in the hand H. However, since the shapes and functions of the objects O1, O2, and O3 may differ, the best way to grasp each of them may also differ. That is, the optimal grasping position of the object O1, O2, or O3 may depend on its intended use. In one embodiment, the optimal grasping position may also be referred to as a contact portion; in other words, the contact portion may be adapted to be in contact with the user's hand H.
For example, regarding the object O1, its contact portion may be a grip close to the trigger of the gun, so that the user can easily pull the trigger to shoot a target. Regarding the object O2, its contact portion may be a grip away from the frame of the racket, so that the user can easily hit a target with the racket. Regarding the object O3, its contact portion may be a handle away from the barrel of the bat, so that the user can easily hit a target with the bat.
Referring now to FIG. 3B, the interaction scenario 300B may include a holding scenario 301, a holding scenario 302, and a holding scenario 303. In the holding scenario 301, the hand H is in contact with (e.g., holding) the grip of the gun. In the holding scenario 302, the hand H is in contact with the grip of the racket. In the holding scenario 303, the hand H is in contact with the handle of the bat.
It is worth noting that the holding scenarios 301, 302, and 303 depict ideal cases of how the user's hand H contacts the object O1, O2, or O3. In one embodiment, to perform hand tracking of a hand, all 21 (joint) nodes of the hand H need to be captured by the camera 110. However, since some nodes of the hand H may be occluded by the object O1, O2, or O3, hand tracking of the hand H may not be performed properly. For example, the virtual object (corresponding to the object O1, O2, or O3) may appear detached from the virtual hand (corresponding to the hand H); that is, the virtual hand or its virtual gesture may look unnatural. Such unrealistic artifacts can negatively affect the user experience and thus degrade the immersive experience.
FIG. 4A is a schematic diagram of a single-hand tracking scenario according to an embodiment of the present disclosure. FIG. 4B is a schematic diagram of a single-hand tracking scenario according to an embodiment of the present disclosure. Referring to FIG. 4A and FIG. 4B, the hand tracking scenario 400A and the hand tracking scenario 400B depict exemplary embodiments of the tracking process for a hand H.
Referring first to FIG. 4A, the hand tracking scenario 400A may include a hand H, an object OBJ, and a tracker TRK. As illustrated in FIG. 4A, the object type of the object OBJ may be a table tennis paddle. In one embodiment, the tracker TRK may be adapted to be attached to the object OBJ or embedded in the object OBJ, but is not limited thereto. That is, the tracker TRK may be associated with the object OBJ to provide information about the object OBJ. For example, the tracker TRK may be configured to provide three linear acceleration values and/or three angular velocities about three axes. However, the disclosure is not limited thereto. Further, for different objects OBJ, the tracker TRK may be mounted at a suitable position; in other words, the (relative) position of the tracker TRK with respect to the object OBJ may be known. However, the disclosure is not limited thereto.
First, in the tracking process 401A, as in step S230 of FIG. 2, in response to a specific condition being satisfied or triggered, the processor 140 may be configured to determine that the hand H is in contact with the object OBJ. In one embodiment, based on the tracker data of the tracker TRK and/or the hand image, the processor 140 may be configured to determine whether the hand H is in contact with the object OBJ. In another embodiment, a distance sensor or a contact sensor may be included in the object OBJ, and the processor 140 may be configured to determine whether the hand H is in contact with the object OBJ based on the sensor data of the distance sensor or the contact sensor. However, the disclosure is not limited thereto.
Then, in the tracking process 402A, the processor 140 may be configured to determine the object type of the object OBJ. The object type may also be referred to as the role of the object OBJ. In one embodiment, the object type may include sports equipment, shooting equipment, or a weapon. However, the disclosure is not limited thereto. In one embodiment, object information of the object OBJ or a tracker number of the tracker TRK may be stored in the tracker TRK, and the object type of the object OBJ may be determined based on the object information or the tracker number. That is, the object information may include the object type, and the object type or the tracker number may indicate the object OBJ associated with the tracker TRK. In another embodiment, an object database may be stored in the memory 130 and may be configured to store a plurality of object types and object information for identifying the plurality of object types. For example, the object database may be a neural network model pre-trained with the respective object types. That is, the object type of the object OBJ may be determined based on an image obtained by the camera 110 (e.g., the hand image) using the object database and an object recognition algorithm. However, the disclosure is not limited thereto.
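One way the tracker-number lookup described above might be realized is a simple registry keyed by tracker ID. The IDs, type labels, and offsets below are invented for illustration; they are not values from the disclosure.

```python
# Hypothetical registry: the description says object information or a tracker
# number stored on the tracker can identify the object type, so a lookup
# table keyed by tracker ID is one minimal realization of that idea.
OBJECT_REGISTRY = {
    "TRK-001": {"type": "table tennis paddle", "grip_offset_m": 0.12},
    "TRK-002": {"type": "baseball bat", "grip_offset_m": 0.55},
    "TRK-003": {"type": "pistol", "grip_offset_m": 0.08},
}

def lookup_object(tracker_id):
    """Return (object_type, tracker-to-contact-portion offset in meters),
    or None when the tracker carries no stored object information."""
    entry = OBJECT_REGISTRY.get(tracker_id)
    if entry is None:
        return None  # fall back to image-based recognition in this case
    return entry["type"], entry["grip_offset_m"]
```

A `None` result corresponds to the fallback path in which the object type is instead recognized from the camera image.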
In addition, as in step S240 of FIG. 2, in response to the determined object type, the processor 140 may be configured to determine the contact portion of the object OBJ based on the object type. For example, the contact portion may be the optimal grasping position of the object OBJ; that is, the position of the contact portion of the object OBJ may differ for different object types. Then, after the contact portion is determined, since the object type has been determined and the mounting position of the tracker TRK may be known, the distance between the tracker TRK and the contact portion, which may also be referred to as an offset distance, may also be determined (based on the object type).
On the other hand, in addition to being determined based on the object type, the contact portion may also be determined based on the hand image (e.g., the portion where the user's hand H contacts the object OBJ in the hand image). Furthermore, for different object types, the distance between (the mounting position of) the tracker TRK and the contact portion may be pre-stored in the memory 130. That is, for each object type, the tracker TRK may be mounted at a specific position on the object OBJ. Therefore, besides using the object information, the processor 140 may also be configured to determine the object type of the object OBJ based on the distance between the tracker TRK and the contact portion.
In addition, based on the tracker data, the coordinates of the tracker TRK in space may be determined. Therefore, based on the coordinates of the tracker TRK and the offset distance, the coordinates or position of the contact portion may be determined. Since the contact portion is configured to be in contact with the user's hand H, a tracking box TB may be determined based on the coordinates or position of the contact portion. The tracking box TB may be a region for performing hand tracking of the hand H. That is, after the object type and the contact portion are determined, the tracking box TB for hand tracking may be determined.
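The geometry of this paragraph, contact position as tracker position plus a known mounting offset and a tracking box around it, can be sketched as follows. This is an illustrative sketch under the assumption of an axis-aligned cubic box; the box shape and helper names are not specified in the disclosure.

```python
def contact_position(tracker_pos, offset_vec):
    """Contact-portion position = tracker position + known mounting offset."""
    return tuple(t + o for t, o in zip(tracker_pos, offset_vec))

def tracking_box(center, half_extent):
    """Axis-aligned tracking box TB centered on the contact portion."""
    lo = tuple(c - half_extent for c in center)
    hi = tuple(c + half_extent for c in center)
    return lo, hi

def inside(point, box):
    """True if a point (e.g., a hand joint) lies inside the tracking box."""
    lo, hi = box
    return all(l <= p <= h for p, l, h in zip(point, lo, hi))
```

The box would then delimit the image region in which the hand tracker searches for joints.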
Then, in the tracking process 403A, hand tracking of the hand H may be performed preliminarily based on the tracking box TB. Notably, as mentioned above, since some nodes of the hand H may be occluded by the object OBJ at this time, hand tracking of the hand H may not be performed properly. Therefore, based on the tracking box TB, a plurality of tracking nodes ND among all the nodes of the hand H may be determined preliminarily. These tracking nodes ND may be key nodes or feature nodes of the hand H and may be used to determine a preliminary hand pose of the hand H. That is, based on these tracking nodes ND, hand tracking of the hand H may be performed preliminarily. Similarly, since the coordinates of the tracker TRK on the object OBJ are known, object tracking of the object OBJ may be performed preliminarily to obtain a preliminary object pose of the object OBJ.
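Selecting the tracking nodes ND from the full set of 21 joint nodes could be sketched as a simple filter: keep joints that fall inside the tracking box and are not occluded. The node names and the occlusion set are hypothetical; how occlusion is detected is left abstract here.

```python
def select_tracking_nodes(nodes, box, occluded):
    """Pick the subset of joint nodes usable for preliminary tracking:
    nodes inside the tracking box that are not reported as occluded.
    `nodes` maps joint name -> 3D position; `box` is (lo, hi) corners."""
    lo, hi = box
    visible = []
    for name, pos in nodes.items():
        if name in occluded:
            continue  # hidden behind the held object
        if all(l <= p <= h for p, l, h in zip(pos, lo, hi)):
            visible.append(name)
    return visible
```

The preliminary hand pose would then be estimated from this visible subset only.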
Next, in the tracking process 404A, as in step S250 of FIG. 2, the hand gesture of the hand H may be determined based on the tracking result of the preliminary hand tracking. For example, for different object types, the user may adopt different hand gestures to grasp the object OBJ; that is, when the object type is known, the hand gesture the user uses to grasp the object OBJ is predictable. Moreover, a gesture model MD may be further pre-trained to perform gesture recognition based on the hand image and/or the object type. However, the disclosure is not limited thereto. That is, the hand gesture of the hand H may be determined based on the hand image and/or the object type using the gesture model MD. After the hand gesture is determined, as in step S260 of FIG. 2, an optimized hand pose of the hand H and an optimized object pose of the object OBJ may be determined to ensure that the hand H is placed on the contact portion. That is, the optimized hand pose and the optimized object pose may be determined to ensure that, in the virtual world, the virtual object appears attached to the virtual hand; in other words, the surface of the virtual hand may be fitted to the surface (e.g., the contact portion) of the virtual object. Therefore, the virtual hand and the virtual object can be displayed in a realistic manner, thereby providing the user with a fully immersive experience.
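One very reduced way to picture the pose optimization above is snapping the tracked palm position onto the contact portion when the residual offset is small, so the virtual object appears attached to the virtual hand. The snap threshold and the palm-only simplification are assumptions for illustration; the disclosure describes fitting the whole hand surface, not a single point.

```python
def refine_hand_pose(raw_palm_pos, contact_pos, max_snap_dist=0.1):
    """Snap the preliminarily tracked palm position onto the contact
    portion when the offset is small, removing the visible gap between
    virtual hand and virtual object (threshold is an invented parameter)."""
    d = sum((a - b) ** 2 for a, b in zip(raw_palm_pos, contact_pos)) ** 0.5
    if d <= max_snap_dist:
        return contact_pos   # fit the hand onto the contact portion
    return raw_palm_pos      # too far off: keep the raw tracking result
```

Keeping the raw result beyond the threshold avoids snapping the hand when the user clearly is not gripping the contact portion.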
It is worth mentioning that, in the tracking scenario 400A, the hand H is assumed to hold the object OBJ properly; that is, the hand H may be placed on the contact portion of the object OBJ. In some cases, however, the hand H may hold the object OBJ incorrectly: the hand H may be placed somewhere other than on the contact portion. Here, the region of the object OBJ on which the hand H is placed may be referred to as a touch region; in this case, the touch region differs from the contact portion. Alternatively, the hand H may be placed on the contact portion, but only part of the hand H may contact the contact portion, or the hand H may be placed on the contact portion in a wrong orientation; in this case, the touch region may partially differ from the contact portion. Therefore, the processor 140 may be configured to determine, based on the (optimized) hand pose, whether the hand H is correctly placed on the contact portion. In addition, in response to the hand H not being correctly placed on the contact portion, the processor 140 may be configured to generate an instruction message to instruct the user to place the hand H correctly on the contact portion. For example, the instruction message may be a text message or an image displayed in the virtual world, so that the user can follow the instruction message to place the hand H correctly on the contact portion. However, the disclosure is not limited thereto.
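The placement check and instruction message above can be sketched as a distance test between the touch region and the contact portion. The tolerance value and the message text are illustrative assumptions, not content from the disclosure.

```python
def placement_feedback(touch_center, contact_center, tolerance=0.05):
    """Return None when the hand sits on the contact portion, otherwise an
    instruction message to guide the user (text and tolerance are invented)."""
    d = sum((a - b) ** 2 for a, b in zip(touch_center, contact_center)) ** 0.5
    if d <= tolerance:
        return None  # correctly placed, no instruction needed
    return "Please move your hand onto the highlighted grip area."
```

In a real system the returned string would be rendered as text or an image in the virtual world.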
Referring now to FIG. 4B, similar to the hand tracking scenario 400A, the hand tracking scenario 400B may include a hand H, an object OBJ, and a tracker TRK. As illustrated in FIG. 4B, the object type of the object OBJ may be a baseball. In addition, the hand tracking scenario 400B may include a tracking process 401B, a tracking process 402B, a tracking process 403B, and a tracking process 404B.
Notably, since the difference between the hand tracking scenario 400A and the hand tracking scenario 400B is the object type of the object OBJ, reference may be made to the hand tracking scenario 400A for the implementation details of the hand tracking scenario 400B to obtain sufficient teachings, suggestions, and implementation embodiments, and the details are not repeated here. That is, reference may be made to the tracking processes 401A, 402A, 403A, and 404A for the tracking processes 401B, 402B, 403B, and 404B. However, the disclosure is not limited thereto.
FIG. 5A is a schematic diagram of a two-hand tracking scenario according to an embodiment of the present disclosure. FIG. 5B is a schematic diagram of a two-hand tracking scenario according to an embodiment of the present disclosure. Referring to FIG. 5A and FIG. 5B, the hand tracking scenario 500A and the hand tracking scenario 500B depict exemplary embodiments of the tracking process for two hands.
Referring first to FIG. 5A, the hand tracking scenario 500A may include a hand H1 (also referred to as the hand H), a hand H2 (also referred to as an additional hand), an object OBJ, and a tracker TRK. As illustrated in FIG. 5A, the object type of the object OBJ may be a bowling ball. Moreover, since a bowling ball is held with both hands, two portions of the object OBJ may be in contact with the hand H1 and the hand H2, respectively. For example, an additional contact portion of the object OBJ may be a grip region including a thumb hole and two finger holes, and the grip region may be adapted to be in contact with the hand H2. In addition, the contact portion of the object OBJ may be a holding region for supporting the weight of the object OBJ, and the holding region may be adapted to be in contact with the hand H1. However, the disclosure is not limited thereto.
First, in the tracking process 501A, as in step S230 of FIG. 2, in response to a specific condition being satisfied or triggered, the processor 140 may be configured to determine that the hand H1 and the hand H2 are touching (e.g., in contact with) the object OBJ. In one embodiment, based on the tracker data of the tracker TRK and/or the hand image of the hand H1 and an additional hand image of the hand H2 obtained by the camera 110, the processor 140 may be configured to determine whether the hand H1 and the hand H2 are in contact with the object OBJ. In another embodiment, a distance sensor or a contact sensor may be included in the object OBJ, and the processor 140 may be configured to determine whether the hand H1 and the hand H2 are in contact with the object OBJ based on the sensor data of the distance sensor or the contact sensor. However, the disclosure is not limited thereto.
Then, in the tracking process 502A, the processor 140 may be configured to determine the object type of the object OBJ. In addition, as in step S240 of FIG. 2, the processor 140 may be configured to determine the contact portion and the additional contact portion of the object OBJ based on the object type and/or the hand image and the additional hand image. For the above process, reference may be made to the description of the tracking process 402A. That is, in one embodiment, the object type, the contact portion, and the additional contact portion of the object OBJ may be determined based on the object information of the object OBJ or the tracker number of the tracker TRK stored in the tracker TRK; the positions of the contact portion and the additional contact portion may differ for different object types. In another embodiment, the object type, the contact portion, and the additional contact portion of the object OBJ may be determined based on the object database in the memory 130 and/or the hand image and the additional hand image. However, the disclosure is not limited thereto.
For example, the contact portion together with the additional contact portion may constitute the optimal grasping position of the object OBJ. After the contact portion and the additional contact portion are determined, since the object type has been determined, the offset distance between the tracker TRK and the contact portion and an additional offset distance between the tracker TRK and the additional contact portion may also be determined. In addition, based on the tracker data, the coordinates of the tracker TRK in space may be determined. Therefore, based on the coordinates of the tracker TRK, the offset distance, and the additional offset distance, the coordinates or positions of the hand H1 and the hand H2 may be determined, so as to determine a tracking box TB1 and a tracking box TB2. That is, after the object type and the contact portion and/or the additional contact portion are determined, the tracking box TB1 and/or the tracking box TB2 for hand tracking may be determined.
Then, in the tracking process 503A, based on the tracking box TB1 and the tracking box TB2, a plurality of tracking nodes ND1 and a plurality of tracking nodes ND2 may be determined to preliminarily perform hand tracking of the hand H1 and hand tracking of the hand H2. Moreover, the tracking results of the hand tracking of the hand H1 and/or the hand H2 may be used to determine a preliminary hand pose of the hand H1, a preliminary hand pose of the hand H2 (also referred to as an additional hand pose), or a preliminary object pose of the object OBJ.
Next, although not depicted in FIG. 5A, similar to the tracking process 404A, as in step S250 of FIG. 2, the hand gesture of the hand H1 and an additional hand gesture of the hand H2 may be determined based on the tracking results of the hand tracking of the hand H1 and the hand H2. For example, for different object types, the user may adopt different hand gestures and additional hand gestures to grasp the object OBJ; that is, when the object type is known, the hand gestures the user uses to grasp the object OBJ are predictable. After the hand gesture and the additional hand gesture are determined, as in step S260 of FIG. 2, an optimized hand pose of the hand H1, an optimized additional hand pose of the hand H2, and an optimized object pose of the object OBJ may be determined to ensure that the hand H1 and the hand H2 are placed on the contact portion and the additional contact portion, respectively. That is, the optimized poses may be determined to ensure that, in the virtual world, the virtual object appears attached to the virtual hand and the virtual additional hand; in other words, the surfaces of the virtual hand and the virtual additional hand may be fitted to the surface of the virtual object. Therefore, the virtual hand, the virtual additional hand, and the virtual object can be displayed in a realistic manner, thereby providing the user with a fully immersive experience.
It is worth mentioning that, in the tracking scenario 500A, it is assumed that the object type of the object OBJ can be determined based on the object information, the tracker number, or the object database. In some cases, however, the tracker TRK has just been attached to the object, and the object information has not yet been stored in the tracker TRK. In one embodiment, the processor 140 may be configured to determine the object type of the object OBJ based on the contact portion and the additional contact portion. More specifically, for different object types, the user may hold the object OBJ in different manners. Therefore, the shape, size, orientation, and the like of the object OBJ may be determined based on how the user is holding the object OBJ. For example, the two positions of the hand H1 and the hand H2 may form a boundary of the object OBJ, and the object type of the object OBJ may be determined based on the boundary of the object OBJ and/or the hand gesture of the hand H1 and the additional hand gesture of the hand H2. The information for identifying object types may be pre-stored in an object recognition database, but is not limited thereto. In other words, although the object type is initially unknown, it may be determined based on the contact portion and the additional contact portion, and once the object type is known, hand tracking of the hand H1 and the hand H2 can be performed more accurately.
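The idea of inferring an object type from the boundary formed by the two contact positions could, in its crudest form, use the distance between the two hands (the later two-hand figures contrast hands-apart and hands-together grips). The thresholds and category labels below are invented for illustration; a real system would be expected to also use the hand gestures, as the description states.

```python
def classify_two_hand_object(hand1_pos, hand2_pos,
                             near_thresh=0.15, far_thresh=0.45):
    """Rough illustration: infer an object category from the boundary the
    two contact positions form (thresholds and labels are invented)."""
    d = sum((a - b) ** 2 for a, b in zip(hand1_pos, hand2_pos)) ** 0.5
    if d <= near_thresh:
        return "pistol-like"   # hands close together
    if d >= far_thresh:
        return "rifle-like"    # hands far apart
    return "unknown"
```

The "unknown" middle band leaves room for the gesture-based cues to disambiguate.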
Referring now to FIG. 5B, similar to the hand tracking scenario 500A, the hand tracking scenario 500B may include a hand H1, a hand H2, an object OBJ, and a tracker TRK. As illustrated in FIG. 5B, the object type of the object OBJ may be a baseball bat. In addition, the hand tracking scenario 500B may include a tracking process 501B and a tracking process 502B. For clarity and better understanding, FIG. 5B has been simplified; for example, the tracker TRK and the nodes ND are not depicted in FIG. 5B.
Notably, since the difference between the hand tracking scenario 500A and the hand tracking scenario 500B is the object type of the object OBJ, reference may be made to the hand tracking scenario 500A for the implementation details of the hand tracking scenario 500B to obtain sufficient teachings, suggestions, and implementation embodiments, and the details are not repeated here. That is, reference may be made to the tracking processes 501A and 502A for the tracking processes 501B and 502B. Moreover, although not depicted in FIG. 5B, processes similar to the tracking process 503A and the tracking process 404A may be included in the hand tracking scenario 500B. However, the disclosure is not limited thereto.
FIG. 5C is a schematic diagram of a two-hand tracking scenario according to an embodiment of the present disclosure. FIG. 5D is a schematic diagram of a two-hand tracking scenario according to an embodiment of the present disclosure. Referring to FIG. 5C and FIG. 5D, the hand tracking scenario 500C and the hand tracking scenario 500D depict exemplary embodiments of the tracking process for two hands. Notably, part of the hand H1 or part of the hand H2 may be occluded. In addition, for clarity and better understanding, FIG. 5C and FIG. 5D have been simplified; for example, the tracker TRK and the nodes ND are not depicted in FIG. 5C and FIG. 5D.
Referring first to FIG. 5C, the hand tracking scenario 500C may include a hand H1, a hand H2, and an object OBJ. As illustrated in FIG. 5C, the object type of the object OBJ may be a rifle. In addition, the hand tracking scenario 500C may include a side view 501C and a rear view 502C. Notably, since the difference between the hand tracking scenario 500A and the hand tracking scenario 500C is the object type of the object OBJ, reference may be made to the hand tracking scenario 500A for the implementation details of the hand tracking scenario 500C to obtain sufficient teachings, suggestions, and implementation embodiments, and the details are not repeated here.
As illustrated in the side view 501C, the object OBJ may be held by the hand H1 and the hand H2, and the tracking box TB1 and the tracking box TB2 may already have been determined. However, as illustrated in the rear view 502C, part of the hand H1 is occluded by the hand H2. That is, since the hand H1 may not be fully captured by the camera 110, the tracking box TB1 cannot be determined properly. Furthermore, even if the tracking box TB1 is determined properly, some of the tracking nodes ND1 of the hand H1 may be occluded by the hand H2, which may negatively affect the hand tracking of the hand H1.
It is worth mentioning that, by determining the object type of the object OBJ and the hand gesture of the hand H1, the placement of the remaining tracking nodes ND1 on the object OBJ may be determined. That is, even if part of the hand H1 is occluded by the hand H2, the hand tracking of the hand H1 can still be performed correctly, thereby improving the user experience, and vice versa. On the other hand, even when the object type of the object OBJ is unknown, it may be determined based on the tracking box TB1 and the tracking box TB2; more specifically, based on the contact portion of the hand H1 and the additional contact portion of the hand H2. For example, as illustrated in FIG. 5C, according to the side view 501C, the hand H1 is separated from the hand H2, while according to the rear view 502C, the hand H1 is close to the hand H2. Therefore, the object OBJ may be determined to be a rifle. However, the disclosure is not limited thereto.
Referring now to FIG. 5D, the hand tracking scenario 500D may include a hand H1, a hand H2, and an object OBJ. As illustrated in FIG. 5D, the object type of the object OBJ may be a pistol. In addition, the hand tracking scenario 500D may include a side view 501D. Notably, since the difference between the hand tracking scenario 500A and the hand tracking scenario 500D is the object type of the object OBJ, reference may be made to the hand tracking scenario 500A for the implementation details of the hand tracking scenario 500D to obtain sufficient teachings, suggestions, and implementation embodiments, and the details are not repeated here.
As illustrated in the side view 501D, the object OBJ may be held by the hand H1 and the hand H2. Similar to the rear view 502C, part of the hand H1 may be occluded by the hand H2. Therefore, by determining the object type of the object OBJ and the hand gesture of the hand H1, the hand tracking of the hand H1 can still be performed correctly, thereby improving the user experience. On the other hand, even when the object type of the object OBJ is unknown, it may be determined based on the tracking box TB1 and the tracking box TB2, more specifically, based on the contact portion of the hand H1 and the additional contact portion of the hand H2. For example, as illustrated in FIG. 5D, according to the side view 501D, the hand H1 is close to the hand H2. Therefore, the object OBJ may be determined to be a pistol. However, the disclosure is not limited thereto.
In summary, according to the hand tracking system 100 and the hand tracking method 200, the hand H (the hand H1 and/or the hand H2) and the object OBJ can be correctly tracked. Therefore, the virtual hand and the virtual object can be displayed in the virtual world in a realistic manner, thereby providing the user with a fully immersive experience.
It will be apparent to those skilled in the art that various modifications and variations can be made to the disclosed embodiments without departing from the scope or spirit of the present disclosure. In view of the foregoing, it is intended that the present disclosure covers modifications and variations provided that they fall within the scope of the appended claims and their equivalents.
100: hand tracking system
110: camera
120, TRK: tracker
130: memory
140: processor
200: hand tracking method
300A, 300B: interaction scenario
301, 302, 303: holding scenario
400A, 400B, 500A, 500B, 500C, 500D: hand tracking scenario
401A, 401B, 402A, 402B, 403A, 403B, 404A, 404B, 501A, 501B, 502A, 502B, 503A: tracking process
501C, 501D: side view
502C: rear view
H, H1, H2: hand
MD: gesture model
ND, ND1, ND2: tracking node
OBJ: object
S210, S220, S230, S240, S250, S260: step
TB, TB1, TB2: tracking box
FIG. 1 is a schematic diagram of a hand tracking system according to an embodiment of the present disclosure.
FIG. 2 is a schematic flowchart of a hand tracking method according to an embodiment of the present disclosure.
FIG. 3A is a schematic diagram of an interaction scenario between a hand and an object according to an embodiment of the present disclosure.
FIG. 3B is a schematic diagram of an interaction scenario between a hand and an object according to an embodiment of the present disclosure.
FIG. 4A is a schematic diagram of a single-hand tracking scenario according to an embodiment of the present disclosure.
FIG. 4B is a schematic diagram of a single-hand tracking scenario according to an embodiment of the present disclosure.
FIG. 5A is a schematic diagram of a two-hand tracking scenario according to an embodiment of the present disclosure.
FIG. 5B is a schematic diagram of a two-hand tracking scenario according to an embodiment of the present disclosure.
FIG. 5C is a schematic diagram of a two-hand tracking scenario according to an embodiment of the present disclosure.
FIG. 5D is a schematic diagram of a two-hand tracking scenario according to an embodiment of the present disclosure.
200: hand tracking method
S210, S220, S230, S240, S250, S260: step
Claims (11)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| TW113112258A TWI866828B (en) | 2024-04-01 | 2024-04-01 | Hand tracking system and method |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| TWI866828B true TWI866828B (en) | 2024-12-11 |
| TW202540815A TW202540815A (en) | 2025-10-16 |
Family
ID=94769464
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| TW113112258A TWI866828B (en) | 2024-04-01 | 2024-04-01 | Hand tracking system and method |
Country Status (1)
| Country | Link |
|---|---|
| TW (1) | TWI866828B (en) |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20160086349A1 (en) * | 2014-09-23 | 2016-03-24 | Microsoft Corporation | Tracking hand pose using forearm-hand model |
| US20230221830A1 (en) * | 2021-12-02 | 2023-07-13 | Apple Inc. | User interface modes for three-dimensional display |
| US20230333650A1 (en) * | 2020-08-28 | 2023-10-19 | Apple Inc. | Gesture Tutorial for a Finger-Wearable Device |
| US20240094819A1 (en) * | 2022-09-21 | 2024-03-21 | Apple Inc. | Devices, methods, and user interfaces for gesture-based interactions |
- 2024-04-01: TW application TW113112258A filed; published as patent TWI866828B (active)
Also Published As
| Publication number | Publication date |
|---|---|
| TW202540815A (en) | 2025-10-16 |