
TWI866828B - Hand tracking system and method - Google Patents


Info

Publication number
TWI866828B
TWI866828B
Authority
TW
Taiwan
Prior art keywords
hand
tracking
contact portion
tracker
processor
Prior art date
Application number
TW113112258A
Other languages
Chinese (zh)
Other versions
TW202540815A (en)
Inventor
魏敏家
林婷葦
Original Assignee
宏達國際電子股份有限公司
Application filed by 宏達國際電子股份有限公司
Priority to TW113112258A
Application granted
Publication of TWI866828B
Publication of TW202540815A

Landscapes

  • User Interface Of Digital Computer (AREA)
  • Image Analysis (AREA)

Abstract

This disclosure provides a hand tracking system. A camera is configured to obtain a hand image of a hand of a user. A tracker is adapted to be attached to an object and configured to obtain tracker data. A processor is configured to determine that the hand is touching the object based on the hand image or the tracker data. The processor is configured to determine a contact portion of the object based on the object, where the contact portion is adapted to be in contact with the hand of the user. The processor is configured to determine a hand gesture of the user based on the hand image, and to determine a hand pose of the hand and an object pose of the object based on the hand gesture and the contact portion.

Description

Hand tracking system and method

The present invention relates to a hand tracking system, and more particularly to a hand tracking system and method.

To provide users with an immersive experience, technologies related to extended reality (XR), such as augmented reality (AR), virtual reality (VR), and mixed reality (MR), are constantly being developed. AR technology allows users to bring virtual elements into the real world. VR technology allows users to enter an entirely new virtual world and experience a different life. MR technology merges the real world with the virtual world. In addition, to provide users with a fully immersive experience, visual content, audio content, or other sensory content can be delivered through one or more devices.

The present disclosure provides a hand tracking system and a hand tracking method to improve hand tracking while the hand is holding an object.

The hand tracking system of the present disclosure includes a camera, a tracker, a memory, and a processor. The camera is configured to obtain a hand image of a user's hand. The tracker is adapted to be attached to an object and is configured to obtain tracker data. The memory is configured to store program code. The processor is configured to access the program code to: determine, based on the hand image or the tracker data, that the hand is touching the object; determine, based on the object, a contact portion of the object, where the contact portion is adapted to be in contact with the user's hand; determine, based on the hand image, a hand gesture of the user; and determine, based on the hand gesture and the contact portion, a hand pose of the hand and an object pose of the object.

The hand tracking method of the present disclosure includes: obtaining a hand image of a user's hand through a camera; obtaining tracker data through a tracker, where the tracker is adapted to be attached to an object; determining, through a processor and based on the hand image or the tracker data, that the hand is touching the object; determining, based on the object, a contact portion of the object, where the contact portion is adapted to be in contact with the user's hand; determining, based on the hand image, a hand gesture of the user; and determining, based on the hand gesture and the contact portion, a hand pose of the hand and an object pose of the object.

Based on the above, the hand tracking system and the hand tracking method can correctly track the hand and the object while the hand is holding the object.

Reference will now be made in detail to exemplary embodiments of the present disclosure, examples of which are illustrated in the accompanying drawings. Wherever possible, the same reference numerals are used in the drawings and the description to refer to the same or similar components.

Throughout this specification and the appended claims of the present disclosure, specific terms are used to refer to specific components. Those skilled in the art should understand that electronic device manufacturers may refer to the same component by different names. This document does not intend to distinguish between components that have the same function but different names. In the following description and claims, words such as "include" and "comprise" are open-ended terms and should be interpreted as "including but not limited to".

To improve the immersive experience, physical objects in the real world can be used to represent virtual objects in the virtual world. For example, when a user can physically touch and interact with a virtual object, the virtual experience can feel more real and more immersive. This is especially true for tasks that require precision or dexterity, such as painting, assembling a piece of furniture, or playing a musical instrument. In addition, grasping physical objects as stand-ins for virtual objects can reduce the user's cognitive load: users do not have to learn new and complex controls, and can instead rely on their existing knowledge of how to interact with physical objects in the real world.

However, when a user is holding a physical object, part of the user's hand may be occluded by the physical object, reducing the accuracy of hand tracking. In addition, when the position of the physical object or the hand is not clearly identified, an offset may appear between the virtual object and the virtual hand in the virtual world. Because of this offset, the virtual object may appear not to be attached to the virtual hand; that is, the virtual hand or its virtual gesture may look strange. Such unrealistic artifacts can negatively affect the user experience and diminish the sense of immersion. Therefore, those skilled in the art seek a more faithful way to translate the relationship between a physical object and a hand into the relationship between a virtual object and a virtual hand.

FIG. 1 is a schematic diagram of a hand tracking system according to an embodiment of the present disclosure. Referring to FIG. 1, the hand tracking system 100 may include a camera 110, a tracker 120, a memory 130, and a processor 140. In one embodiment, the camera 110, the tracker 120, the memory 130, and/or the processor 140 may be integrated into a single electronic device or separated from each other, but the disclosure is not limited thereto. In one embodiment, the camera 110, the memory 130, and the processor 140 may be included in a head-mounted display (HMD) device, and the tracker 120 may be included in an object. In another embodiment, the camera 110 may be placed somewhere other than on the user's head to capture images of the user. However, the present disclosure is not limited thereto.

In one embodiment, the camera 110 may include, for example, a complementary metal-oxide-semiconductor (CMOS) camera or a charge-coupled device (CCD) camera. In one embodiment, the camera 110 may be disposed on an HMD device, wearable glasses (e.g., AR/VR goggles), an electronic device, other similar devices, or a combination of these devices. However, the present disclosure is not limited thereto.

In one embodiment, the tracker 120 may include, for example, a gyroscope, an accelerometer, an inertial measurement unit (IMU) sensor, other similar devices, or a combination of these devices. However, the present disclosure is not limited thereto.

In one embodiment, the memory 130 may include, for example, any type of fixed or removable random access memory (RAM), read-only memory (ROM), flash memory, hard disk, other similar devices, or a combination of these devices, and may be used to record a plurality of program codes or modules. However, the present disclosure is not limited thereto.

In one embodiment, each component in the hand tracking system 100 may include a communication circuit, and the communication circuit may include, for example, a wired network module, a wireless network module, a Bluetooth module, an infrared module, a radio frequency identification (RFID) module, a Zigbee network module, or a near field communication (NFC) network module, but the present disclosure is not limited thereto. That is, the components in the hand tracking system 100 may communicate with each other through wired or wireless communication.

In one embodiment, the processor 140 may be coupled to the camera 110, the tracker 120, and the memory 130. In embodiments, the processor 140 may be a general-purpose processor, a special-purpose processor, a conventional processor, a digital signal processor, a plurality of microprocessors, one or more microprocessors combined with a digital signal processor core, a controller, a microcontroller, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), any other type of integrated circuit, a state machine, an advanced RISC machine (ARM)-based processor, or the like.

In one embodiment, the processor 140 may transmit data to and receive data from the camera 110, the tracker 120, and the memory 130. In addition, the memory 130 may be configured to store program code, and the processor 140 may access the program code from the memory 130 to execute the hand tracking method of the present disclosure, the details of which are described below.

FIG. 2 is a schematic flowchart of a hand tracking method according to an embodiment of the present disclosure. Referring to FIG. 1 and FIG. 2, the hand tracking method 200 may be executed by the hand tracking system 100 of FIG. 1, and the details of each step in FIG. 2 are described below with the components shown in FIG. 1.

First, in step S210, the camera 110 may be configured to obtain a hand image of a user's hand. Then, in step S220, the tracker 120 may be adapted to be attached to an object and configured to obtain tracker data.

Then, in step S230, the processor 140 may be configured to determine, based on the hand image or the tracker data, that the hand is touching (e.g., in contact with) the object. Next, in step S240, the processor 140 may be configured to determine a contact portion of the object based on the object. In one embodiment, the contact portion may be adapted to be in contact with the user's hand. For example, the contact portion may be a handle or a grip of the object, but the present disclosure is not limited thereto. Further, in step S250, the processor 140 may be configured to determine the user's hand gesture based on the hand image. Then, in step S260, the processor 140 may be configured to determine, based on the hand gesture and the contact portion, a hand pose (e.g., position and orientation) of the hand and an object pose of the object. In addition, the processor 140 may be configured to display a virtual hand and a virtual object based on the hand pose and the object pose, respectively.
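The S230–S260 flow above can be sketched as a small pipeline. This is a minimal illustration only: the function and class names are hypothetical, and the pose-solving and gesture-estimation steps are passed in as callables rather than implemented, since the disclosure does not specify their internals.

```python
from dataclasses import dataclass
from typing import Callable, Optional, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class TrackingResult:
    hand_pose: Vec3      # simplified: position only, no orientation
    object_pose: Vec3

def track_hand_and_object(hand_image,
                          tracker_data,
                          is_touching: Callable,
                          contact_portion: Callable,
                          estimate_gesture: Callable,
                          solve_poses: Callable) -> Optional[TrackingResult]:
    # S230: determine that the hand is touching the object
    if not is_touching(hand_image, tracker_data):
        return None
    # S240: determine the contact portion of the object
    portion = contact_portion(tracker_data)
    # S250: determine the hand gesture from the hand image
    gesture = estimate_gesture(hand_image)
    # S260: determine hand pose and object pose from gesture + contact portion
    hand_pose, object_pose = solve_poses(gesture, portion)
    return TrackingResult(hand_pose, object_pose)
```

When the touch test fails, the pipeline returns `None`, mirroring the fact that steps S240–S260 are only reached once contact is established.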

In this way, the hand and the object can be tracked correctly, so the virtual hand and the virtual object can be displayed in the virtual world in a realistic manner, providing the user with a fully immersive experience. Further implementation details of the hand tracking method 200 can be found in the following description, which provides sufficient teachings, suggestions, and embodiments.

FIG. 3A and FIG. 3B are schematic diagrams of interaction scenarios between a hand and an object according to embodiments of the present disclosure. Referring to FIG. 3A and FIG. 3B, interaction scenarios 300A and 300B depict exemplary embodiments of how a hand H grasps an object in various ways.

Referring first to FIG. 3A, the interaction scenario 300A may include a hand H and objects O1, O2, and O3. In one embodiment, the object O1 may be a gun, the object O2 may be a racket, and the object O3 may be a bat. However, the present disclosure is not limited thereto.

In one embodiment, the user may intend to hold one of the objects O1, O2, and O3 in the hand H. However, since the shapes and functions of the objects O1, O2, and O3 may differ, the best way to grasp each of them may also differ. That is, the best grasping position of the object O1, O2, or O3 may depend on its intended use. In one embodiment, the best grasping position may also be referred to as the contact portion. In other words, the contact portion may be adapted to be in contact with the user's hand H.

For example, for the object O1, the contact portion may be the grip near the trigger of the gun, so that the user can easily pull the trigger to shoot a target. For the object O2, the contact portion may be the grip away from the frame of the racket, so that the user can easily hit a target with the racket. For the object O3, the contact portion may be the handle away from the barrel of the bat, so that the user can easily hit a target with the bat.

Referring now to FIG. 3B, the interaction scenario 300B may include holding scenarios 301, 302, and 303. In holding scenario 301, the hand H is in contact with (e.g., holds) the grip of the gun. In holding scenario 302, the hand H is in contact with the grip of the racket. In holding scenario 303, the hand H is in contact with the handle of the bat.

It is worth noting that holding scenarios 301, 302, and 303 depict the ideal cases of how the user's hand H contacts the object O1, O2, or O3. In one embodiment, to perform hand tracking, all 21 (joint) nodes of the hand H need to be captured by the camera 110. However, since some nodes of the hand H may be occluded by the object O1, O2, or O3, hand tracking of the hand H may not be performed properly. For example, the virtual object (corresponding to the object O1, O2, or O3) may appear not to be attached to the virtual hand (corresponding to the hand H); that is, the virtual hand or its virtual gesture may look strange. Such unrealistic artifacts can negatively affect the user experience and diminish the sense of immersion.
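The occlusion problem above can be made concrete with a toy visibility check over the 21 hand joint nodes. The minimum-visibility threshold here is an illustrative assumption, not a value from the disclosure.

```python
# Number of hand joint nodes the camera needs for full hand tracking,
# as stated in the description above.
NUM_HAND_NODES = 21

def tracking_reliable(visible_nodes: set, min_visible: int = 15) -> bool:
    """Return True if enough joint nodes are un-occluded to trust the
    camera-only hand tracking result. The threshold is an assumption."""
    if not visible_nodes <= set(range(NUM_HAND_NODES)):
        raise ValueError("node indices must be in 0..20")
    return len(visible_nodes) >= min_visible
```

When this check fails (e.g., because the object hides several fingers), the system falls back on the contact-portion and gesture information described in the tracking processes below.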

FIG. 4A and FIG. 4B are schematic diagrams of single-hand tracking scenarios according to embodiments of the present disclosure. Referring to FIG. 4A and FIG. 4B, hand tracking scenarios 400A and 400B depict exemplary embodiments of the tracking process for a hand H.

Referring first to FIG. 4A, the hand tracking scenario 400A may include a hand H, an object OBJ, and a tracker TRK. As shown in FIG. 4A, the object type of the object OBJ may be a table tennis racket. In one embodiment, the tracker TRK may be adapted to be attached to or embedded in the object OBJ, but is not limited thereto. That is, the tracker TRK may be associated with the object OBJ to provide information about the object OBJ. For example, the tracker TRK may be configured to provide three linear acceleration values and/or three angular velocities about three axes. However, the present disclosure is not limited thereto. Moreover, for different objects OBJ, the tracker TRK may be mounted at a suitable position. In other words, the (relative) position of the tracker TRK on the object OBJ may be known. However, the present disclosure is not limited thereto.
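The tracker data described above (three linear accelerations and three angular velocities about the x/y/z axes) can be modeled with a minimal container. The field names and units are assumptions for illustration; the disclosure does not specify a data format.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class TrackerData:
    """One IMU-style sample from the tracker TRK (assumed layout)."""
    accel: Tuple[float, float, float]  # (ax, ay, az), e.g. in m/s^2
    gyro: Tuple[float, float, float]   # (wx, wy, wz), e.g. in rad/s

# A stationary tracker would report roughly gravity on one axis and
# near-zero angular velocity:
sample = TrackerData(accel=(0.0, 0.0, 9.81), gyro=(0.0, 0.0, 0.0))
```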

First, in tracking process 401A, as in step S230 of FIG. 2, in response to a specific condition being satisfied or triggered, the processor 140 may be configured to determine that the hand H is in contact with the object OBJ. In one embodiment, the processor 140 may be configured to determine whether the hand H is in contact with the object OBJ based on the tracker data of the tracker TRK and/or the hand image. In another embodiment, a distance sensor or a contact sensor may be included in the object OBJ, and the processor 140 may be configured to determine whether the hand H is in contact with the object OBJ based on the sensor data of the distance sensor or the contact sensor. However, the present disclosure is not limited thereto.

Then, in tracking process 402A, the processor 140 may be configured to determine the object type of the object OBJ. The object type may also be referred to as the role of the object OBJ. In one embodiment, the object type may include sports equipment, shooting equipment, or a weapon. However, the present disclosure is not limited thereto. In one embodiment, object information of the object OBJ or a tracker number of the tracker TRK may be stored in the tracker TRK, and the object type of the object OBJ may be determined based on the object information or the tracker number. That is, the object information may include the object type, and the object type or the tracker number may indicate the object OBJ associated with the tracker TRK. In another embodiment, an object database may be stored in the memory 130 and may be configured to store a plurality of object types and the object information used to identify them. For example, the object database may be a neural network model pre-trained on each object type. That is, the object type of the object OBJ may be determined from an image obtained by the camera 110 (e.g., the hand image), using the object database and an object recognition algorithm. However, the present disclosure is not limited thereto.
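The first lookup path above (object type from a tracker number stored on the tracker) reduces to a table lookup; the image-recognition path would be the fallback. The table contents and function name below are illustrative assumptions.

```python
# Hypothetical mapping from tracker numbers to object types; in the
# disclosed system this association is stored on the tracker itself or
# in an object database in memory 130.
TRACKER_ID_TO_TYPE = {
    1: "table tennis racket",
    2: "bat",
    3: "gun",
}

def object_type_from_tracker(tracker_id: int) -> str:
    """Resolve the object type from the tracker number, raising if the
    tracker is unknown (the caller would then fall back to image-based
    recognition against the object database)."""
    try:
        return TRACKER_ID_TO_TYPE[tracker_id]
    except KeyError:
        raise KeyError(
            f"unknown tracker id {tracker_id}; fall back to image recognition")
```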

In addition, as in step S240 of FIG. 2, in response to the determined object type, the processor 140 may be configured to determine the contact portion of the object OBJ based on the object type. For example, the contact portion may be the best grasping position of the object OBJ. That is, the position of the contact portion of the object OBJ may differ for different object types. Then, after the contact portion is determined, since the object type is known and the mounting position of the tracker TRK may be known, the distance between the tracker TRK and the contact portion, also referred to as the offset distance, may also be determined (based on the object type).

On the other hand, besides being determined from the object type, the contact portion may also be determined based on the hand image (e.g., from the portion where the user's hand H contacts the object OBJ in the hand image). In addition, for different object types, the distance between (the mounting position of) the tracker TRK and the contact portion may be pre-stored in the memory 130. That is, for each object type, the tracker TRK may be mounted at a specific position on the object OBJ. Therefore, in addition to using the object information, the processor 140 may also be configured to determine the object type of the object OBJ based on the distance between the tracker TRK and the contact portion.

In addition, based on the tracker data, the coordinates of the tracker TRK in space may be determined. Therefore, based on the coordinates of the tracker TRK and the offset distance, the coordinates or position of the contact portion may be determined. Since the contact portion is adapted to be in contact with the user's hand H, the tracking frame TB may be determined based on the coordinates or position of the contact portion. The tracking frame TB may be the region in which hand tracking of the hand H is performed. That is, after the object type and the contact portion are determined, the tracking frame TB for hand tracking may be determined.
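The geometry just described can be sketched in a few lines: the contact-portion position is the tracker position plus a per-object-type offset, and the tracking frame TB is a region centered on that position. The offset vector and frame size below are illustrative assumptions, not values from the disclosure.

```python
# Hypothetical per-object-type offset (in meters) from the tracker's
# mounting position to the contact portion; in the disclosed system this
# would be pre-stored in memory 130 for each object type.
OFFSET_BY_TYPE = {"table tennis racket": (0.0, -0.12, 0.0)}

def contact_portion_position(tracker_pos, object_type):
    """Contact-portion coordinates = tracker coordinates + offset distance."""
    off = OFFSET_BY_TYPE[object_type]
    return tuple(p + o for p, o in zip(tracker_pos, off))

def tracking_frame(center, half_size=0.15):
    """Axis-aligned tracking frame TB centered on the contact portion,
    returned as (min_corner, max_corner). The size is an assumption."""
    lo = tuple(c - half_size for c in center)
    hi = tuple(c + half_size for c in center)
    return lo, hi
```

Hand tracking would then be run only inside this frame, rather than over the whole camera image.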

Then, in tracking process 403A, hand tracking of the hand H may be performed preliminarily based on the tracking frame TB. It is worth noting that, as mentioned above, since some nodes of the hand H may be occluded by the object OBJ at this time, hand tracking of the hand H may not be performed properly. Therefore, based on the tracking frame TB, a plurality of tracking nodes ND among all the nodes of the hand H may be preliminarily determined. These tracking nodes ND may be key nodes or feature nodes of the hand H and may be used to determine a preliminary hand pose of the hand H. That is, hand tracking of the hand H may be performed preliminarily based on these tracking nodes ND. Similarly, since the coordinates of the tracker TRK on the object OBJ are known, object tracking of the object OBJ may be performed preliminarily to obtain a preliminary object pose of the object OBJ.

Next, in tracking process 404A, as in step S250 of FIG. 2, the hand gesture of the hand H may be determined based on the result of the preliminary hand tracking. For example, for different object types, the user may adopt different gestures to grasp the object OBJ. That is, when the object type is known, the gesture the user adopts to grasp the object OBJ is predictable. Moreover, a gesture model MD may be pre-trained to perform gesture recognition based on the hand image and/or the object type. However, the present disclosure is not limited thereto. That is, the gesture model MD may be used to determine the gesture of the hand H based on the hand image and/or the object type. After the gesture is determined, as in step S260 of FIG. 2, an optimized hand pose of the hand H and an optimized object pose of the object OBJ may be determined to ensure that the hand H is placed on the contact portion. That is, the optimized hand pose and object pose may be determined so that, in the virtual world, the virtual object appears attached to the virtual hand. In other words, the surface of the virtual hand may be fitted to the surface (e.g., the contact portion) of the virtual object. Therefore, the virtual hand and the virtual object can be displayed in a realistic manner, providing the user with a fully immersive experience.
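A toy version of this pose-refinement step: once the gesture and the contact portion are known, the preliminary hand position is blended toward the contact-portion position so the virtual hand appears attached to the virtual object. Linear blending and the weight parameter are assumptions; the disclosure only says the poses are optimized so the hand lies on the contact portion.

```python
def refine_hand_pose(preliminary_pos, contact_pos, weight=1.0):
    """Blend the tracked hand position toward the contact portion.

    weight=1.0 snaps the hand exactly onto the contact portion;
    smaller weights keep more of the raw tracking result.
    """
    return tuple(p + weight * (c - p)
                 for p, c in zip(preliminary_pos, contact_pos))
```

A real system would refine full 6-DoF poses (position and orientation) and fit the hand surface to the object surface, but the attachment idea is the same.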

It is worth mentioning that the tracking scenario 400A assumes the hand H holds the object OBJ properly; that is, the hand H is placed on the contact portion of the object OBJ. However, in some cases, the hand H may hold the object OBJ incorrectly. The hand H may be placed somewhere other than the contact portion; the region where the hand H is placed on the object OBJ may be referred to as the touch region, and in this case the touch region differs from the contact portion. Alternatively, the hand H may be placed on the contact portion, but only part of the hand H contacts the contact portion, or the hand H may be placed on the contact portion in the wrong orientation; in this case, the touch region partially differs from the contact portion. Therefore, the processor 140 may be configured to determine, based on the (optimized) hand pose, whether the hand H is correctly placed on the contact portion. In addition, in response to the hand H not being correctly placed on the contact portion, the processor 140 may be configured to generate an instruction message to instruct the user to place the hand H correctly on the contact portion. For example, the instruction message may be a text message or an image displayed in the virtual world, so that the user can follow it to place the hand H correctly on the contact portion. However, the present disclosure is not limited thereto.
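The correctness check above can be sketched as an overlap test between the touch region and the expected contact portion, producing an instruction message when the grip is wrong. Representing regions as sets of surface-patch IDs and the overlap threshold are both illustrative assumptions.

```python
def placement_instruction(touch_region: set, contact_portion: set,
                          min_overlap: float = 0.8):
    """Return None if the grip is correct, else a user-facing instruction.

    Regions are modeled as sets of surface-patch IDs (an assumption);
    the grip counts as correct when enough of the contact portion is
    covered by the touch region.
    """
    if not contact_portion:
        raise ValueError("contact portion must be non-empty")
    overlap = len(touch_region & contact_portion) / len(contact_portion)
    if overlap >= min_overlap:
        return None
    return "Please move your hand onto the highlighted grip area."
```

The returned string stands in for the text or image the system would render in the virtual world.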

Referring now to FIG. 4B, similar to the hand tracking scenario 400A, the hand tracking scenario 400B may include a hand H, an object OBJ, and a tracker TRK. As shown in FIG. 4B, the object type of the object OBJ may be a baseball. In addition, the hand tracking scenario 400B may include tracking processes 401B, 402B, 403B, and 404B.

值得注意的是,由於手部追蹤情境400A與手部追蹤情境400B之間的不同處為物件OBJ的物件類型,因此手部追蹤情境400B的實施細節可參考手部追蹤情境400A,以獲得足夠的教示、建議以及實作的實施例,而本文中不冗餘地描述細節。也就是說,追蹤過程401B、追蹤過程402B、追蹤過程403B以及追蹤過程404B可參考追蹤過程401A、追蹤過程402A、追蹤過程403A以及追蹤過程404A。然而,本揭露不受限制。It is worth noting that, since the difference between the hand tracking context 400A and the hand tracking context 400B is the object type of the object OBJ, the implementation details of the hand tracking context 400B can refer to the hand tracking context 400A to obtain sufficient teachings, suggestions and implementation examples, and the details are not redundantly described herein. That is, the tracking process 401B, the tracking process 402B, the tracking process 403B and the tracking process 404B can refer to the tracking process 401A, the tracking process 402A, the tracking process 403A and the tracking process 404A. However, the present disclosure is not limited thereto.

圖5A為根據本揭露的實施例的雙手的手部追蹤情境的示意圖。圖5B為根據本揭露的實施例的雙手的手部追蹤情境的示意圖。參考圖5A和圖5B，手部追蹤情境500A和手部追蹤情境500B描繪雙手的追蹤過程的示範性實施例。FIG. 5A is a schematic diagram of a hand tracking scenario for two hands according to an embodiment of the present disclosure. FIG. 5B is a schematic diagram of a hand tracking scenario for two hands according to an embodiment of the present disclosure. Referring to FIG. 5A and FIG. 5B, hand tracking scenario 500A and hand tracking scenario 500B depict exemplary embodiments of a tracking process for two hands.

首先參考圖5A。手部追蹤情境500A可包含手部H1(亦可稱為手部H)、手部H2(亦可稱為額外手部)、物件OBJ以及追蹤器TRK。如圖5A中所繪示,物件OBJ的物件類型可為保齡球。此外,由於保齡球由雙手握持,因此物件OBJ的兩個部分可分別與手部H1和手部H2接觸。舉例來說,物件OBJ的額外接觸部分可為包含拇指孔和兩個手指孔的抓握區域,且抓握區域可適用於與手部H2接觸。此外,物件OBJ的接觸部分可為用於握持物件OBJ的重量的握持區域,且握持區域可適用於與手部H1接觸。然而,本揭露不限於此。First, refer to Figure 5A. The hand tracking scenario 500A may include a hand H1 (also referred to as hand H), a hand H2 (also referred to as an additional hand), an object OBJ, and a tracker TRK. As shown in Figure 5A, the object type of object OBJ may be a bowling ball. In addition, since the bowling ball is held by both hands, two parts of object OBJ may be in contact with hand H1 and hand H2, respectively. For example, the additional contact portion of object OBJ may be a gripping area including a thumb hole and two finger holes, and the gripping area may be suitable for contact with hand H2. In addition, the contact portion of object OBJ may be a gripping area for holding the weight of object OBJ, and the gripping area may be suitable for contact with hand H1. However, the present disclosure is not limited thereto.

首先，在追蹤過程501A中，如圖2的步驟S230，響應於滿足或觸發的特定條件，處理器140可配置成判定手部H1和手部H2正在觸摸（例如，接觸）物件OBJ。在一個實施例中，基於追蹤器TRK的追蹤器資料和/或由相機110獲得的手部H1的手部影像和手部H2的額外手部影像，處理器140可配置成判斷手部H1和手部H2是否與物件OBJ接觸。在另一實施例中，距離感測器或接觸感測器可包含於物件OBJ中。基於距離感測器或接觸感測器的感測器資料，處理器140可配置成判斷手部H1和手部H2是否與物件OBJ接觸。然而，本揭露不限於此。First, in the tracking process 501A, as in step S230 of FIG. 2, in response to a specific condition being met or triggered, the processor 140 may be configured to determine that the hand H1 and the hand H2 are touching (e.g., in contact with) the object OBJ. In one embodiment, based on the tracker data of the tracker TRK and/or the hand image of the hand H1 and the additional hand image of the hand H2 obtained by the camera 110, the processor 140 may be configured to determine whether the hand H1 and the hand H2 are in contact with the object OBJ. In another embodiment, a distance sensor or a contact sensor may be included in the object OBJ. Based on the sensor data of the distance sensor or the contact sensor, the processor 140 may be configured to determine whether the hand H1 and the hand H2 are in contact with the object OBJ. However, the present disclosure is not limited thereto.

隨後,在追蹤過程502A中,處理器140可配置成判定物件OBJ的物件類型。此外,如圖2的步驟S240,處理器140可配置成基於物件類型和/或手部影像和額外手部影像,而判定物件OBJ的接觸部分和額外接觸部分。上述過程可參考追蹤過程402A的描述。也就是說,在一個實施例中,可基於儲存在追蹤器TRK中的物件OBJ的物件資訊或追蹤器TRK的追蹤器編號,而判定物件OBJ的物件類型、接觸部分和額外接觸部分。也就是說,對於不同的物件類型,物件OBJ的接觸部分和額外接觸部分的位置可能有所不同。在另一實施例中,可基於記憶體130的物件資料庫和/或手部影像和額外手部影像,而判定物件OBJ的物件類型、接觸部分和額外接觸部分。然而,本揭露不限於此。Subsequently, in the tracking process 502A, the processor 140 may be configured to determine the object type of the object OBJ. In addition, as shown in step S240 of FIG. 2 , the processor 140 may be configured to determine the contact portion and the additional contact portion of the object OBJ based on the object type and/or the hand image and the additional hand image. The above process may refer to the description of the tracking process 402A. That is, in one embodiment, the object type, the contact portion, and the additional contact portion of the object OBJ may be determined based on the object information of the object OBJ stored in the tracker TRK or the tracker number of the tracker TRK. That is, for different object types, the positions of the contact portion and the additional contact portion of the object OBJ may be different. In another embodiment, the object type, the contact portion, and the additional contact portion of the object OBJ may be determined based on the object database and/or the hand image and the additional hand image in the memory 130. However, the present disclosure is not limited thereto.

舉例來說,接觸部分加額外接觸部分可為物件OBJ的最佳抓握位置。在判定接觸部分和額外接觸部分之後,由於判定了物件類型,因此還可判定追蹤器TRK與接觸部分之間的偏移距離和追蹤器TRK與額外接觸部分之間的額外偏移距離。另外,基於追蹤器資料,可判定空間中的追蹤器TRK的坐標。因此,基於追蹤器TRK的坐標、偏移距離以及額外偏移距離,可判定手部H1的坐標或位置和手部H2的坐標或位置,以判定追蹤框TB1和追蹤框TB2。也就是說,在判定物件類型和接觸部分和/或額外接觸部分之後,可判定用於手部追蹤的追蹤框TB1和/或追蹤框TB2。For example, the contact portion plus the additional contact portion may be the optimal gripping position for the object OBJ. After determining the contact portion and the additional contact portion, since the object type is determined, the offset distance between the tracker TRK and the contact portion and the additional offset distance between the tracker TRK and the additional contact portion may also be determined. In addition, based on the tracker data, the coordinates of the tracker TRK in space may be determined. Therefore, based on the coordinates, the offset distance, and the additional offset distance of the tracker TRK, the coordinates or position of the hand H1 and the coordinates or position of the hand H2 may be determined to determine the tracking frame TB1 and the tracking frame TB2. That is, after determining the object type and the contact portion and/or the additional contact portion, the tracking frame TB1 and/or the tracking frame TB2 for hand tracking may be determined.
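The derivation of the tracking frames TB1 and TB2 from the tracker coordinates and the per-hand offset distances can be illustrated with a small vector computation. The offset vectors and box size below are placeholder values for the sketch, not values taken from the disclosure.

```python
def hand_tracking_boxes(tracker_pos, offsets, box_size=0.2):
    """Derive one axis-aligned tracking box per expected hand position.

    tracker_pos: (x, y, z) of tracker TRK in world space.
    offsets: list of per-hand offset vectors from the tracker to each
             contact portion (known once the object type is known).
    Returns a list of (min_corner, max_corner) boxes centered on the
    predicted hand positions.
    """
    boxes = []
    half = box_size / 2.0
    for off in offsets:
        center = tuple(t + o for t, o in zip(tracker_pos, off))
        boxes.append((tuple(c - half for c in center),
                      tuple(c + half for c in center)))
    return boxes

# Bowling-ball-like example: one assumed offset for each hand.
tb1, tb2 = hand_tracking_boxes((0.0, 1.0, 0.0),
                               [(0.0, -0.1, 0.0), (0.1, 0.0, 0.0)])
```

Hand tracking would then be run only inside these boxes, which is what allows the tracking frames to be found even before the hands are visually located.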

然後,在追蹤過程503A中,基於追蹤框TB1和追蹤框TB2,可判定多個追蹤節點ND1和多個追蹤節點ND2,以初步地執行手部H1的手部追蹤和手部H2的手部追蹤。並且,手部H1和/或手部H2的手部追蹤的追蹤結果可用於判定初步的手部H1的手部姿態、初步的手部H2的手部姿態(也稱為額外手部姿態)或初步的物件OBJ的物件姿態。Then, in the tracking process 503A, based on the tracking frame TB1 and the tracking frame TB2, multiple tracking nodes ND1 and multiple tracking nodes ND2 can be determined to preliminarily perform hand tracking of the hand H1 and hand tracking of the hand H2. And, the tracking results of the hand tracking of the hand H1 and/or the hand H2 can be used to determine the preliminary hand pose of the hand H1, the preliminary hand pose of the hand H2 (also called the additional hand pose), or the preliminary object pose of the object OBJ.

接著,雖然在圖5A中未描繪,但類似於追蹤過程404A,如圖2的步驟S250,基於手部H1和手部H2的手部追蹤的追蹤結果,可判定手部H1的手勢和手部H2的額外手勢。舉例來說,對於不同的物件類型,使用者可能會採用不同的手勢以及額外手勢,來握取物件OBJ。也就是說,在物件種類已知的情況下,使用者用來握取物件OBJ的手勢以及額外手勢是可以預測的。在判定手勢和額外手勢之後,如圖2的步驟S260,可判定優化的手部H1的手部姿態、優化的手部H2的額外手部姿態以及優化的物件OBJ的物件姿態,以確保手部H1和手部H2分別放置在接觸部分和額外接觸部分上。也就是說,可判定優化的手部H1的手部姿態、優化的手部H2的手部姿態以及優化的物件OBJ的物件姿態,以確保虛擬世界中虛擬物件可以看起來黏附到虛擬手部和虛擬額外手部。換句話說,虛擬手部的表面和虛擬額外手部的表面可擬合到虛擬物件的表面。因此,可以真實的方式顯示虛擬手部、虛擬額外手部以及虛擬物件,進而給使用者帶來完全沉浸式體驗。Next, although not depicted in FIG. 5A , similar to the tracking process 404A, such as step S250 of FIG. 2 , based on the tracking results of the hand tracking of the hand H1 and the hand H2, the gesture of the hand H1 and the additional gesture of the hand H2 can be determined. For example, for different object types, the user may use different gestures and additional gestures to grasp the object OBJ. In other words, when the object type is known, the gesture and additional gesture used by the user to grasp the object OBJ can be predicted. After determining the hand gesture and the additional hand gesture, as shown in step S260 of FIG. 2 , the hand posture of the optimized hand H1, the additional hand posture of the optimized hand H2, and the object posture of the optimized object OBJ may be determined to ensure that the hand H1 and the hand H2 are placed on the contact portion and the additional contact portion, respectively. That is, the hand posture of the optimized hand H1, the hand posture of the optimized hand H2, and the object posture of the optimized object OBJ may be determined to ensure that the virtual object in the virtual world may appear to be attached to the virtual hand and the virtual additional hand. In other words, the surface of the virtual hand and the surface of the virtual additional hand may fit to the surface of the virtual object. Therefore, the virtual hand, virtual additional hand, and virtual objects can be displayed in a realistic manner, thus bringing a fully immersive experience to the user.
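The "optimization" step above, which makes the virtual hand appear attached to the virtual object, can be sketched as blending the tracked hand position toward an anchor point on the contact portion. This is an illustrative simplification with an assumed blend factor; the disclosure does not prescribe a particular fitting method.

```python
def optimize_hand_pose(tracked_pos, contact_anchor, stickiness=1.0):
    """Blend a noisily-tracked hand position toward the contact anchor
    on the object surface.  With stickiness=1.0 the position snaps fully
    onto the anchor, so the virtual hand surface appears attached to the
    virtual object.  (Illustrative only; the actual pose optimization in
    the disclosure operates on full hand and object poses.)"""
    return tuple(t + stickiness * (a - t)
                 for t, a in zip(tracked_pos, contact_anchor))
```

A smaller `stickiness` value would instead soften the correction, trading attachment fidelity for responsiveness.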

值得一提的是,在追蹤情境500A中,假定可基於物件資訊、追蹤器編號或物件資料庫,而判定物件OBJ的物件類型。然而,在某些情況下,追蹤器TRK剛被附接到物件,而物件資訊尚未儲存在追蹤器TRK中。在一個實施例中,處理器140可配置成基於接觸部分和額外接觸部分,而判定物件OBJ的物件類型。更確切地說,對於不同的物件類型,使用者可能會採用不同的方式,來握取物件OBJ。因此,基於使用者正如何握持物件OBJ,可判定物件OBJ的形狀、大小、方向等。舉例而言,手部H1及手部H2的兩個位置可形成物件OBJ的邊界。基於物件OBJ的邊界及/或手部H1的手勢與手部H2的額外手勢,可判定物件OBJ的物件類型。用於辨識物件類型的資訊可預儲存在物件辨識資料庫中,但不限於此。換句話說,雖然最初物件類型是未知的,但可基於接觸部分和額外接觸部分,而判定物件類型。在已知物件類型之後,可更準確地執行手部H1和手部H2的手部追蹤。It is worth mentioning that in the tracking scenario 500A, it is assumed that the object type of the object OBJ can be determined based on the object information, the tracker number or the object database. However, in some cases, the tracker TRK has just been attached to the object, and the object information has not yet been stored in the tracker TRK. In one embodiment, the processor 140 may be configured to determine the object type of the object OBJ based on the contact portion and the additional contact portion. More specifically, for different object types, the user may adopt different ways to hold the object OBJ. Therefore, based on how the user is holding the object OBJ, the shape, size, direction, etc. of the object OBJ can be determined. For example, the two positions of the hand H1 and the hand H2 can form the boundary of the object OBJ. Based on the boundary of object OBJ and/or the gesture of hand H1 and the additional gesture of hand H2, the object type of object OBJ can be determined. Information for identifying the object type can be pre-stored in an object recognition database, but is not limited thereto. In other words, although the object type is initially unknown, the object type can be determined based on the contact portion and the additional contact portion. After the object type is known, hand tracking of hand H1 and hand H2 can be performed more accurately.
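The idea that the two hand positions form a boundary of the object, so that the object type can be inferred even before any object information is stored in the tracker, can be sketched as a lookup on the hand separation. The thresholds and type labels here are hypothetical; a real system would consult the pre-stored object recognition database mentioned above and would also use the gestures themselves.

```python
import math

def infer_object_type(pos_h1, pos_h2, two_handed=True):
    """Guess the object type from how the two hands bound the object.

    pos_h1 / pos_h2: (x, y, z) hand positions; their separation gives a
    rough object extent, matched against a recognition table (inlined
    here for brevity, with assumed thresholds in meters).
    """
    if not two_handed:
        return "unknown"
    extent = math.dist(pos_h1, pos_h2)
    if extent < 0.15:
        return "pistol-like"    # hands stacked closely together
    if extent < 0.35:
        return "ball-like"      # hands bracketing a compact object
    return "bat-or-rifle-like"  # hands far apart along one axis
```

Once a type is guessed this way, the contact portions and offset distances for that type can be applied, and the two-hand tracking proceeds as in scenario 500A.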

現在參考圖5B。類似於手部追蹤情境500A，手部追蹤情境500B可包含手部H1、手部H2、物件OBJ以及追蹤器TRK。如圖5B中所繪示，物件OBJ的物件類型可為棒球球棒。此外，手部追蹤情境500B可包含追蹤過程501B和追蹤過程502B。為了清楚起見並便於理解，已簡化圖5B。舉例來說，圖5B中未描繪追蹤器TRK和節點ND。Reference is now made to FIG. 5B. Similar to hand tracking scenario 500A, hand tracking scenario 500B may include hand H1, hand H2, object OBJ, and tracker TRK. As shown in FIG. 5B, the object type of object OBJ may be a baseball bat. In addition, hand tracking scenario 500B may include tracking process 501B and tracking process 502B. For the sake of clarity, FIG. 5B has been simplified for better understanding. For example, tracker TRK and node ND are not depicted in FIG. 5B.

值得注意的是,由於手部追蹤情境500A與手部追蹤情境500B之間的不同處為物件OBJ的物件類型,因此手部追蹤情境500B的實施細節可參考手部追蹤情境500A以獲得足夠的教示、建議以及實作的實施例,而本文中不冗餘地描述細節。也就是說,追蹤過程501B和追蹤過程502B可參考追蹤過程501A和追蹤過程502A。此外,儘管其未在圖5B中描繪,但與追蹤過程503A和追蹤過程404A類似的過程可包含於手部追蹤情境500B中。然而,本揭露不受限制。It is worth noting that, since the difference between the hand tracking scenario 500A and the hand tracking scenario 500B is the object type of the object OBJ, the implementation details of the hand tracking scenario 500B can refer to the hand tracking scenario 500A to obtain sufficient teachings, suggestions, and implementation examples, and the details are not redundantly described herein. That is, the tracking process 501B and the tracking process 502B can refer to the tracking process 501A and the tracking process 502A. In addition, although it is not depicted in FIG. 5B, processes similar to the tracking process 503A and the tracking process 404A can be included in the hand tracking scenario 500B. However, the present disclosure is not limited thereto.

圖5C為根據本揭露的實施例的雙手的手部追蹤情境的示意圖。圖5D為根據本揭露的實施例的雙手的手部追蹤情境的示意圖。參考圖5C和圖5D，手部追蹤情境500C和手部追蹤情境500D描繪雙手的追蹤過程的示範性實施例。值得注意的是，手部H1的部分或手部H2的部分可能被遮擋。此外，為清楚起見並便於理解，已簡化圖5C和圖5D。舉例來說，圖5C和圖5D中未描繪追蹤器TRK和節點ND。FIG. 5C is a schematic diagram of a hand tracking scenario for two hands according to an embodiment of the present disclosure. FIG. 5D is a schematic diagram of a hand tracking scenario for two hands according to an embodiment of the present disclosure. Referring to FIG. 5C and FIG. 5D, hand tracking scenario 500C and hand tracking scenario 500D depict exemplary embodiments of a tracking process for two hands. It is noteworthy that portions of hand H1 or portions of hand H2 may be obscured. In addition, for the sake of clarity, FIG. 5C and FIG. 5D have been simplified for better understanding. For example, tracker TRK and node ND are not depicted in FIG. 5C and FIG. 5D.

首先參考圖5C。手部追蹤情境500C可包含手部H1、手部H2以及物件OBJ。如圖5C中所繪示,物件OBJ的物件類型可為步槍。此外,手部追蹤情境500C可包含側視圖501C及後視圖502C。值得注意的是,由於手部追蹤情境500A與手部追蹤情境500C之間的不同處為物件OBJ的物件類型,因此手部追蹤情境500C的實施細節可參考手部追蹤情境500A以獲得足夠的教示、建議以及實作的實施例,而本文中不冗餘地描述細節。First, refer to FIG. 5C. Hand tracking scenario 500C may include hand H1, hand H2, and object OBJ. As shown in FIG. 5C, the object type of object OBJ may be a rifle. In addition, hand tracking scenario 500C may include a side view 501C and a rear view 502C. It is worth noting that since the difference between hand tracking scenario 500A and hand tracking scenario 500C is the object type of object OBJ, the implementation details of hand tracking scenario 500C may refer to hand tracking scenario 500A for sufficient teachings, suggestions, and implementation examples, and the details are not redundantly described herein.

如側視圖501C中所繪示,物件OBJ可由手部H1和手部H2握持。此外,追蹤框TB1和追蹤框TB2可能已經被判定。然而,如後視圖502C中所繪示,手部H1的部分被手部H2遮擋。也就是說,由於手部H1可能未被相機110完全地捕獲,因此無法恰當地判定追蹤框TB1。此外,即使恰當地判定追蹤框TB1,手部H1的追蹤節點ND1中的一些可能被手部H2遮擋,進而可對手部H1的手部追蹤造成負面影響。As shown in the side view 501C, the object OBJ may be held by the hand H1 and the hand H2. In addition, the tracking frame TB1 and the tracking frame TB2 may have been determined. However, as shown in the rear view 502C, part of the hand H1 is blocked by the hand H2. That is, since the hand H1 may not be completely captured by the camera 110, the tracking frame TB1 cannot be properly determined. In addition, even if the tracking frame TB1 is properly determined, some of the tracking nodes ND1 of the hand H1 may be blocked by the hand H2, which may negatively affect the hand tracking of the hand H1.

值得一提的是,通過判定物件OBJ的物件類型和手部H1的手勢,可判定追蹤節點ND1放置在物件OBJ上的其餘部分。也就是說,即使手部H1的部分被手部H2遮擋,仍可正確地執行手部H1的手部追蹤,進而增加使用者體驗。反之亦然。另一方面,雖然物件OBJ的物件類型是未知的,但可基於追蹤框TB1和追蹤框TB2,而判定物件OBJ的物件類型。更確切地說,基於手部H1的接觸部分和手部H2的額外接觸部分,可判定OBJ的物件類型。舉例來說,如圖5C中所繪示,根據側視圖501C,手部H1與手部H2分離。另外,根據後視圖502C,手部H1接近手部H2。因此,可將物件OBJ判定為步槍。然而,本揭露不限於此。It is worth mentioning that by determining the object type of object OBJ and the gesture of hand H1, the remaining portion of tracking node ND1 placed on object OBJ can be determined. That is, even if part of hand H1 is blocked by hand H2, hand tracking of hand H1 can still be performed correctly, thereby increasing the user experience. And vice versa. On the other hand, although the object type of object OBJ is unknown, the object type of object OBJ can be determined based on tracking frame TB1 and tracking frame TB2. More specifically, the object type of OBJ can be determined based on the contact portion of hand H1 and the additional contact portion of hand H2. For example, as shown in FIG5C, according to side view 501C, hand H1 is separated from hand H2. In addition, according to the rear view 502C, the hand H1 is close to the hand H2. Therefore, the object OBJ can be determined to be a rifle. However, the present disclosure is not limited thereto.
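The rifle determination described here, where the hands are separated in the side view yet close behind each other in the rear view, can be expressed as a simple two-view rule. The `close` threshold is an assumed value for illustration; it is not specified in the disclosure.

```python
def classify_firearm(side_separation, rear_separation, close=0.12):
    """Distinguish rifle-like from pistol-like two-hand grips.

    side_separation: hand-to-hand distance projected onto the side view
                     (roughly, along the barrel axis).
    rear_separation: hand-to-hand distance projected onto the rear view.
    Rifle: hands separated along the barrel but aligned behind each
    other; pistol: hands close together in both views.
    """
    if side_separation > close and rear_separation <= close:
        return "rifle"
    if side_separation <= close and rear_separation <= close:
        return "pistol"
    return "unknown"
```

The same two projections also indicate which tracking nodes of hand H1 are likely occluded by hand H2, so tracking can fall back to the predicted gesture for those nodes.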

現在參考圖5D。手部追蹤情境500D可包含手部H1、手部H2以及物件OBJ。如圖5D中所繪示,物件OBJ的物件類型可為手槍。此外,手部追蹤情境500D可包含側視圖501D。值得注意的是,由於手部追蹤情境500A與手部追蹤情境500D之間的不同處為物件OBJ的物件類型,因此手部追蹤情境500D的實施細節可參考手部追蹤情境500A以獲得足夠的教示、建議以及實作的實施例,而本文中不冗餘地描述細節。Now refer to Figure 5D. Hand tracking context 500D may include hand H1, hand H2, and object OBJ. As shown in Figure 5D, the object type of object OBJ may be a pistol. In addition, hand tracking context 500D may include side view 501D. It is worth noting that since the difference between hand tracking context 500A and hand tracking context 500D is the object type of object OBJ, the implementation details of hand tracking context 500D can refer to hand tracking context 500A for sufficient teachings, suggestions, and implementation examples, and the details are not redundantly described herein.

如側視圖501D中所繪示，物件OBJ可由手部H1和手部H2握持。類似於後視圖502C，手部H1的部分可能會被手部H2遮擋。因此，通過判定物件OBJ的物件類型和手部H1的手勢，仍可正確地執行手部H1的手部追蹤，進而增加使用者體驗。另一方面，雖然物件OBJ的物件類型是未知的，但可基於追蹤框TB1和追蹤框TB2（更確切地說，基於手部H1的接觸部分和手部H2的額外接觸部分），而判定物件OBJ的物件類型。舉例來說，如圖5D中所繪示，根據側視圖501D，手部H1接近手部H2。因此，可將物件OBJ判定為手槍。然而，本揭露不限於此。As shown in the side view 501D, the object OBJ may be held by the hand H1 and the hand H2. Similar to the rear view 502C, part of the hand H1 may be blocked by the hand H2. Therefore, by determining the object type of the object OBJ and the gesture of the hand H1, the hand tracking of the hand H1 can still be performed correctly, thereby improving the user experience. On the other hand, although the object type of the object OBJ is unknown, the object type of the object OBJ can be determined based on the tracking frame TB1 and the tracking frame TB2, or more precisely, based on the contact portion of the hand H1 and the additional contact portion of the hand H2. For example, as shown in FIG. 5D, according to the side view 501D, the hand H1 is close to the hand H2. Therefore, the object OBJ can be determined to be a pistol. However, the present disclosure is not limited thereto.

綜上所述,根據手部追蹤系統100和手部追蹤方法200,可正確地追蹤手部H(手部H1和/或手部H2)和物件OBJ。因此,可以真實的方式顯示虛擬世界中的虛擬手部和虛擬物件,進而給使用者帶來完全沉浸式體驗。In summary, according to the hand tracking system 100 and the hand tracking method 200, the hand H (hand H1 and/or hand H2) and the object OBJ can be correctly tracked. Therefore, the virtual hand and the virtual object in the virtual world can be displayed in a real way, thereby bringing a fully immersive experience to the user.

本領域的技術人員應明白,在不脫離本揭露的範圍或精神的情況下,可對所揭露的實施例進行各種修改和變化。鑒於前述內容,希望本揭露涵蓋屬於所附申請專利範圍和其等效物的範圍內的本揭露的修改和變化。It should be understood by those skilled in the art that various modifications and variations may be made to the disclosed embodiments without departing from the scope or spirit of the present disclosure. In view of the foregoing, it is intended that the present disclosure covers modifications and variations of the present disclosure that fall within the scope of the appended patent applications and their equivalents.

100:手部追蹤系統 110:相機 120、TRK:追蹤器 130:記憶體 140:處理器 200:手部追蹤方法 300A、300B:互動情境 301、302、303:握持情境 400A、400B、500A、500B、500C、500D:手部追蹤情境 401A、401B、402A、402B、403A、403B、404A、404B、501A、501B、502A、502B、503A:追蹤過程 501C、501D:側視圖 502C:後視圖 H、H1、H2:手部 MD:手勢模型 ND、ND1、ND2:追蹤節點 OBJ:物件 S210、S220、S230、S240、S250、S260:步驟 TB、TB1、TB2:追蹤框100: Hand tracking system 110: Camera 120, TRK: Tracker 130: Memory 140: Processor 200: Hand tracking method 300A, 300B: Interaction scenario 301, 302, 303: Holding scenario 400A, 400B, 500A, 500B, 500C, 500D: Hand tracking scenario 401A, 401B, 402A, 402B, 403A, 403B, 404A, 404B, 501A, 501B, 502A, 502B, 503A: Tracking process 501C, 501D: Side view 502C: Rear view H, H1, H2: Hand MD: Gesture model ND, ND1, ND2: Tracking node OBJ: Object S210, S220, S230, S240, S250, S260: Step TB, TB1, TB2: Tracking box

圖1為根據本揭露的實施例的手部追蹤系統的示意圖。 圖2為根據本揭露的實施例的手部追蹤方法的示意性流程圖。 圖3A為根據本揭露的實施例的手部與物件之間的互動情境的示意圖。 圖3B為根據本揭露的實施例的手部與物件之間的互動情境的示意圖。 圖4A為根據本揭露的實施例的單手的手部追蹤情境的示意圖。 圖4B為根據本揭露的實施例的單手的手部追蹤情境的示意圖。 圖5A為根據本揭露的實施例的雙手的手部追蹤情境的示意圖。 圖5B為根據本揭露的實施例的雙手的手部追蹤情境的示意圖。 圖5C為根據本揭露的實施例的雙手的手部追蹤情境的示意圖。 圖5D為根據本揭露的實施例的雙手的手部追蹤情境的示意圖。 FIG. 1 is a schematic diagram of a hand tracking system according to an embodiment of the present disclosure. FIG. 2 is a schematic flow chart of a hand tracking method according to an embodiment of the present disclosure. FIG. 3A is a schematic diagram of an interaction scenario between a hand and an object according to an embodiment of the present disclosure. FIG. 3B is a schematic diagram of an interaction scenario between a hand and an object according to an embodiment of the present disclosure. FIG. 4A is a schematic diagram of a hand tracking scenario of a single hand according to an embodiment of the present disclosure. FIG. 4B is a schematic diagram of a hand tracking scenario of a single hand according to an embodiment of the present disclosure. FIG. 5A is a schematic diagram of a hand tracking scenario of both hands according to an embodiment of the present disclosure. FIG. 5B is a schematic diagram of a hand tracking scenario of both hands according to an embodiment of the present disclosure. FIG. 5C is a schematic diagram of a hand tracking scenario of both hands according to an embodiment of the present disclosure. FIG. 5D is a schematic diagram of a hand tracking scenario of both hands according to an embodiment of the present disclosure.

200:手部追蹤方法 200:Hand tracking method

S210、S220、S230、S240、S250、S260:步驟 S210, S220, S230, S240, S250, S260: Steps

Claims (11)

一種手部追蹤系統,包括: 相機,配置成獲得使用者的手部的手部影像; 追蹤器,適用於附接到物件且配置成獲得追蹤器資料; 記憶體,配置成儲存程式碼;以及 處理器,配置成存取所述程式碼以執行: 基於所述手部影像或所述追蹤器資料,而判定所述手部正觸摸所述物件; 基於所述物件,而判定所述物件的接觸部分,其中所述接觸部分適用於與所述使用者的所述手部接觸; 基於所述手部影像,而判定所述使用者的手勢;以及 基於所述手勢和所述接觸部分,而判定所述手部的手部姿態和所述物件的物件姿態。 A hand tracking system includes: a camera configured to obtain a hand image of a user's hand; a tracker adapted to be attached to an object and configured to obtain tracker data; a memory configured to store program code; and a processor configured to access the program code to execute: based on the hand image or the tracker data, determining that the hand is touching the object; based on the object, determining a contact portion of the object, wherein the contact portion is adapted to contact the user's hand; based on the hand image, determining a gesture of the user; and based on the gesture and the contact portion, determining a hand posture of the hand and an object posture of the object. 如請求項1所述的手部追蹤系統,其中所述處理器進一步配置成存取所述程式碼以執行: 基於儲存在所述追蹤器或儲存在所述記憶體中的物件資料庫的物件資訊,而判定所述物件的物件類型。 The hand tracking system of claim 1, wherein the processor is further configured to access the program code to execute: Determine the object type of the object based on object information stored in the tracker or in the object database stored in the memory. 如請求項1所述的手部追蹤系統,其中所述處理器進一步配置成存取所述程式碼以執行: 基於所述接觸部分與所述追蹤器之間的距離,而判定所述物件的物件類型。 The hand tracking system of claim 1, wherein the processor is further configured to access the program code to execute: Determine the object type of the object based on the distance between the contact portion and the tracker. 如請求項1所述的手部追蹤系統,其中所述處理器進一步配置成存取所述程式碼以執行: 基於所述物件的物件類型,而判定所述物件的所述接觸部分。 A hand tracking system as described in claim 1, wherein the processor is further configured to access the program code to execute: Determine the contact portion of the object based on the object type of the object. 
如請求項1所述的手部追蹤系統,其中所述處理器進一步配置成存取所述程式碼以執行: 基於所述物件的物件類型,而判定所述追蹤器與所述接觸部分之間的偏移距離; 基於所述偏移距離,而判定所述手部的追蹤框; 基於所述追蹤框,而執行所述手部的手部追蹤;以及 基於所述手部追蹤的追蹤結果,而判定所述手部姿態和所述物件姿態。 The hand tracking system of claim 1, wherein the processor is further configured to access the program code to execute: Based on the object type of the object, determine the offset distance between the tracker and the contact portion; Based on the offset distance, determine the tracking frame of the hand; Based on the tracking frame, perform hand tracking of the hand; and Based on the tracking result of the hand tracking, determine the hand posture and the object posture. 如請求項5所述的手部追蹤系統,其中所述處理器進一步配置成存取所述程式碼以執行: 基於所述追蹤框,而判定多個追蹤節點;以及 基於所述多個追蹤節點,而執行所述手部追蹤。 A hand tracking system as described in claim 5, wherein the processor is further configured to access the program code to execute: Based on the tracking frame, determine multiple tracking nodes; and Based on the multiple tracking nodes, perform the hand tracking. 如請求項1所述的手部追蹤系統,其中所述處理器進一步配置成存取所述程式碼以執行: 基於所述手部姿態,而判斷所述手部是否正確地放置在所述接觸部分上;以及 響應於所述手部未正確地放置在所述接觸部分上,產生操作說明訊息,以指示所述使用者將所述手部正確地放置在所述接觸部分上。 A hand tracking system as described in claim 1, wherein the processor is further configured to access the program code to execute: Based on the hand posture, determine whether the hand is correctly placed on the contact portion; and In response to the hand not being correctly placed on the contact portion, generate an operation instruction message to instruct the user to correctly place the hand on the contact portion. 
如請求項1所述的手部追蹤系統,其中所述處理器進一步配置成存取所述程式碼以執行: 基於額外手部影像或所述追蹤器資料,而判定額外手部正觸摸所述物件;以及 基於所述額外手部影像,而判定所述物件的額外接觸部分,其中所述額外接觸部分適用於與所述額外手部接觸。 The hand tracking system of claim 1, wherein the processor is further configured to access the program code to execute: based on the additional hand image or the tracker data, determining that an additional hand is touching the object; and based on the additional hand image, determining an additional contact portion of the object, wherein the additional contact portion is applicable to contact with the additional hand. 如請求項8所述的手部追蹤系統,其中所述處理器進一步配置成存取所述程式碼以執行: 基於所述額外手部影像,而判定所述額外手部的額外手勢;以及 基於所述額外手勢和所述額外接觸部分,而判定所述額外手部的額外手部姿態。 A hand tracking system as described in claim 8, wherein the processor is further configured to access the program code to execute: Based on the additional hand image, determine an additional gesture of the additional hand; and Based on the additional gesture and the additional contact portion, determine an additional hand posture of the additional hand. 如請求項8所述的手部追蹤系統,其中所述處理器進一步配置成存取所述程式碼以執行: 基於所述接觸部分和所述額外接觸部分,而判定所述物件的物件類型。 A hand tracking system as described in claim 8, wherein the processor is further configured to access the program code to execute: Determine the object type of the object based on the contact portion and the additional contact portion. 
一種手部追蹤方法,包括: 通過相機,獲得使用者的手部的手部影像; 通過追蹤器,獲得追蹤器資料,其中所述追蹤器適用於附接到物件; 通過處理器,基於所述手部影像或所述追蹤器資料,而判定所述手部正觸摸所述物件; 基於所述物件,而判定所述物件的接觸部分,其中所述接觸部分適用於與所述使用者的所述手部接觸; 基於所述手部影像,而判定所述使用者的手勢;以及 基於所述手勢和所述接觸部分,而判定所述手部的手部姿態和所述物件的物件姿態。 A hand tracking method, comprising: Obtaining a hand image of a user's hand through a camera; Obtaining tracker data through a tracker, wherein the tracker is suitable for being attached to an object; Determining that the hand is touching the object through a processor based on the hand image or the tracker data; Determining a contact portion of the object based on the object, wherein the contact portion is suitable for contacting the user's hand; Determining a gesture of the user based on the hand image; and Determining a hand posture of the hand and an object posture of the object based on the gesture and the contact portion.
TW113112258A 2024-04-01 2024-04-01 Hand tracking system and method TWI866828B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW113112258A TWI866828B (en) 2024-04-01 2024-04-01 Hand tracking system and method

Publications (2)

Publication Number Publication Date
TWI866828B true TWI866828B (en) 2024-12-11
TW202540815A TW202540815A (en) 2025-10-16

Family

ID=94769464

Family Applications (1)

Application Number Title Priority Date Filing Date
TW113112258A TWI866828B (en) 2024-04-01 2024-04-01 Hand tracking system and method

Country Status (1)

Country Link
TW (1) TWI866828B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160086349A1 (en) * 2014-09-23 2016-03-24 Microsoft Corporation Tracking hand pose using forearm-hand model
US20230221830A1 (en) * 2021-12-02 2023-07-13 Apple Inc. User interface modes for three-dimensional display
US20230333650A1 (en) * 2020-08-28 2023-10-19 Apple Inc. Gesture Tutorial for a Finger-Wearable Device
US20240094819A1 (en) * 2022-09-21 2024-03-21 Apple Inc. Devices, methods, and user interfaces for gesture-based interactions

Also Published As

Publication number Publication date
TW202540815A (en) 2025-10-16
