TW201910897A - Object focusing device and method thereof - Google Patents
- Publication number: TW201910897A
- Application number: TW106127304A
- Authority
- TW
- Taiwan
- Prior art keywords
- gesture
- user
- virtual
- sight
- line
- Prior art date
Landscapes
- User Interface Of Digital Computer (AREA)
Abstract
Description
The present invention relates to an object focusing device and method, and more particularly to an object focusing device and method that use gestures to focus on an object.
When a head-mounted immersive device (e.g., Microsoft's HoloLens) runs augmented-reality or virtual-reality applications, the user can typically focus on an object the device displays and then execute the function of the focused object. How to let the user focus accurately and execute an object's function, however, is one of the problems that those skilled in the art seek to solve.
The present invention provides an object focusing method and an object focusing device that incorporate gesture recognition. The user produces a virtual crosshair with a gesture to focus on an object and, once focusing is complete, executes the function of the focused object with another gesture. The user can thereby focus accurately on the objects the device displays.
The present invention proposes an object focusing device that includes a sensing unit, a display unit, and a processing unit. The sensing unit detects the user's gesture. The display unit displays an object and the user's gesture. The processing unit determines whether the user's gesture is a first gesture. When it is, the processing unit generates a virtual crosshair and has the display unit show it at the position of the first gesture. When the position of the first gesture is adjacent to the position of the object and the processing unit determines that the user's gesture has changed from the first gesture to another gesture, the processing unit binds the virtual crosshair to the object to focus on it.
In an embodiment of the invention, while displaying the virtual crosshair at the position of the first gesture, the display unit shows the virtual crosshair at a first position of the first gesture. When the first gesture moves from the first position to a second position, the display unit shows the virtual crosshair at the second position.
In an embodiment of the invention, after binding the virtual crosshair to the object, the processing unit determines whether the user's gesture is a second gesture. When it is, the processing unit executes the function corresponding to the object.
In an embodiment of the invention, the processing unit determines whether the user's gesture is a third gesture. When it is, the processing unit generates a virtual crosshair and has the display unit show it at a gaze focus position on the user's virtual line of sight. When the gaze focus position is adjacent to the position of the object and the processing unit determines that the user's gesture is the second gesture, the processing unit executes the function corresponding to the object.
In an embodiment of the invention, the processing unit defines a virtual line of sight that starts at the midpoint of a horizontal line connecting the user's two eyes and points in the same direction as the user's face. The virtual line of sight is perpendicular to that horizontal line and intersects it at its midpoint, and the gaze focus position is where the virtual line of sight intersects an object.
The present invention also proposes an object focusing method for an object focusing device. The method includes: detecting the user's gesture; displaying an object and the user's gesture; determining whether the user's gesture is a first gesture; when it is, generating a virtual crosshair and displaying it at the position of the first gesture; and, when the position of the first gesture is adjacent to the position of the object and the user's gesture changes from the first gesture to another gesture, binding the virtual crosshair to the object to focus on it.
In an embodiment of the invention, displaying the virtual crosshair at the position of the first gesture includes: displaying the virtual crosshair at a first position of the first gesture; and, when the first gesture moves from the first position to a second position, displaying the virtual crosshair at the second position.
In an embodiment of the invention, after the step of binding the virtual crosshair to the object, the method further includes determining whether the user's gesture is a second gesture and, when it is, executing the function corresponding to the object.
In an embodiment of the invention, the method further includes: determining whether the user's gesture is a third gesture; when it is, generating a virtual crosshair and displaying it, via the display unit, at a gaze focus position on the user's virtual line of sight; and, when the gaze focus position is adjacent to the position of the object and the user's gesture is determined to be the second gesture, executing the function corresponding to the object.
In an embodiment of the invention, the method further includes defining a virtual line of sight that starts at the midpoint of a horizontal line connecting the user's two eyes and points in the same direction as the user's face, where the virtual line of sight is perpendicular to that horizontal line and intersects it at its midpoint, and the gaze focus position is where the virtual line of sight intersects an object.
Based on the above, the object focusing method and object focusing device proposed by the invention let the user produce a virtual crosshair with a gesture to focus on an object and, once focusing is complete, execute the function of the focused object with another gesture. The user can thereby focus accurately on the objects the device displays. The object focusing method also lets the user choose a preferred focusing mode by gesture, for example focusing along the user's virtual line of sight. This diversity of focusing methods improves the user experience.
To make the above features and advantages of the invention more comprehensible, embodiments are described in detail below with reference to the accompanying drawings.
FIG. 1 is a schematic diagram of an object focusing device according to an embodiment of the invention. Referring to FIG. 1, the object focusing device 100 may include a display unit 112, a sensing unit 114, and a processing unit 116.
The display unit 112 is, for example, a display device that provides a display function, such as a liquid crystal display (LCD), a light-emitting-diode (LED) display, or a field-emission display (FED).
The sensing unit 114 is, for example, a camera with a charge-coupled-device (CCD) or complementary-metal-oxide-semiconductor (CMOS) sensor, a depth camera (e.g., a time-of-flight camera), or a stereo camera.
The processing unit 116 may be a central processing unit (CPU) or another programmable general-purpose or special-purpose microprocessor, a digital signal processor (DSP), a programmable controller, an application-specific integrated circuit (ASIC), another similar element, or a combination of these elements.
In this exemplary embodiment, the display unit 112 and the sensing unit 114 may each be connected to the processing unit 116 by wire or wirelessly. The object focusing device 100 may be a head-mounted device (or immersive device) capable of augmented-reality or virtual-reality functions similar to those of Microsoft's HoloLens.
In this exemplary embodiment, the object focusing device 100 may further include a storage unit (not shown). The storage unit may be any type of fixed or removable random-access memory (RAM), read-only memory (ROM), flash memory, a similar element, or a combination of these elements. The storage unit stores a plurality of code segments that, once installed, are executed by the processing unit 116. For example, the storage unit may include a plurality of modules, each composed of one or more code segments, that respectively perform the operations of the object focusing device 100. The invention is not limited to this, however; the operations of the object focusing device 100 may also be implemented in other hardware forms.
FIG. 2 is a flowchart of a method of using the object focusing device according to an embodiment of the invention, described here with reference to the object focusing device 100 of FIG. 1.
Referring to FIG. 2, the sensing unit 114 first detects (or senses) a gesture of the user wearing the object focusing device 100 (step S201). In this exemplary embodiment, the sensing unit 114 may be an image capturing unit that captures the user's gesture. Meanwhile, the display unit 112 may display the captured gesture, and the processing unit 116 determines whether the gesture is a first gesture, a second gesture, or a third gesture (step S203).
In this exemplary embodiment, the first, second, and third gestures may be mutually distinct static or dynamic gestures. For example, the first gesture may be a fist with only the index finger extended, the second gesture a fist with only the index and middle fingers extended, and the third gesture a fist with only the index, middle, and ring fingers extended. Note that these gestures are merely examples; the invention does not limit what the first, second, and third gestures are.
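As a concrete illustration, the three example gestures above can be told apart by which fingers are extended. The sketch below is a hypothetical Python classifier, not the patent's implementation; the finger names and the input format are assumptions for illustration.

```python
def classify_gesture(extended_fingers):
    """Map the set of extended fingers to one of the three example gestures.

    Returns "first", "second", or "third", or None for any other hand shape.
    """
    mapping = {
        frozenset({"index"}): "first",                    # fist, index finger out
        frozenset({"index", "middle"}): "second",         # fist, index + middle out
        frozenset({"index", "middle", "ring"}): "third",  # fist, index + middle + ring out
    }
    return mapping.get(frozenset(extended_fingers))
```

Using a `frozenset` makes the check order-independent, so the classifier does not care in which order a hand-tracking module reports the fingers.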
When the processing unit 116 determines that the user's gesture is the first gesture, it concludes that the user will use the first gesture to focus on an object displayed by the object focusing device 100 (step S205).
In particular, FIG. 3 is a flowchart of a method of focusing an object using the first gesture according to an embodiment of the invention, and FIG. 4A to FIG. 4C are schematic diagrams of focusing an object using the first gesture according to an embodiment of the invention. The details of step S205 are described with reference to FIG. 3 and FIG. 4A to FIG. 4C.
Referring to FIG. 3 and FIG. 4A together, suppose the display unit 112 displays a screen 400 that includes the user's gesture 40, detected by the sensing unit 114, and an object 42. When the processing unit 116 determines that the gesture 40 is the first gesture, it generates a virtual crosshair 44 and has the display unit 112 show it at the position of the first gesture (step S301). The processing unit 116 then determines whether the gesture 40 remains the first gesture and moves (step S303). While the gesture 40 remains the first gesture and moves, the processing unit 116 continuously displays the virtual crosshair 44 at the position of the gesture 40, so the crosshair moves along with the first gesture. That is, if the display unit 112 initially shows the virtual crosshair 44 at a first position where the first gesture is located, then when the first gesture moves from the first position to a second position, the display unit 112 shows the virtual crosshair 44 at the second position.
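The follow-the-hand behavior of steps S301 and S303 can be sketched as a per-frame update: while the first gesture is held, the crosshair is redrawn wherever the hand is. This is a minimal sketch under assumed 2-D screen coordinates; the `(gesture, position)` frame format is hypothetical.

```python
def track_crosshair(frames):
    """Return the crosshair position after each frame.

    `frames` is a sequence of (gesture, position) samples. While the gesture
    is "first", the crosshair follows the hand; otherwise it stays where it
    last was (None until the first gesture first appears).
    """
    crosshair = None
    trace = []
    for gesture, position in frames:
        if gesture == "first":
            crosshair = position  # redraw the crosshair at the hand's new position
        trace.append(crosshair)
    return trace
```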
Next, referring to FIG. 3 and FIG. 4B to FIG. 4C together: in FIG. 4B, suppose the user maintains the first gesture and moves the hand to a position adjacent to the object 42. When the first gesture is adjacent to the object 42 and the user's gesture changes from the first gesture to another gesture, the processing unit 116 detects this change (step S305) and binds the virtual crosshair 44 to the object 42 to focus on it (step S307), as shown in FIG. 4C. Once the virtual crosshair 44 is bound to the object 42, later gestures by the user, or the user's hand leaving the sensing range of the sensing unit 114, no longer affect the position of the virtual crosshair 44.
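The binding condition of steps S305 and S307 reduces to one check: the hand held the first gesture near the object and then changed to any other gesture. The sketch below is a hypothetical illustration; the distance threshold and 2-D coordinates are assumptions, not values from the patent.

```python
import math

def try_bind(crosshair_pos, object_pos, prev_gesture, cur_gesture, threshold=0.05):
    """Return the position to bind the crosshair to (the object's position)
    when binding should occur, otherwise None.

    Binding occurs when the previous gesture was the first gesture, the
    current gesture is anything else, and the crosshair is within
    `threshold` of the object.
    """
    close_enough = math.dist(crosshair_pos, object_pos) <= threshold
    gesture_changed = prev_gesture == "first" and cur_gesture != "first"
    if close_enough and gesture_changed:
        return object_pos  # from now on the crosshair ignores the hand
    return None
```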
Referring again to FIG. 2, after the virtual crosshair 44 is bound to the object 42, the processing unit 116 may again determine whether a gesture captured (or detected) by the sensing unit 114 is the first, second, or third gesture. When the processing unit 116 determines that the user's gesture is the second gesture, and because the virtual crosshair 44 is already bound to the object 42, the processing unit 116 executes the function corresponding to the object 42 (step S207).
In particular, because the object focusing device of the invention may be a head-mounted device worn on the user's head, the object focusing method can also simulate focusing along the user's line of sight. Specifically, when the processing unit 116 determines that a captured (or detected) gesture is the third gesture, it concludes that the user will focus on a displayed object by line of sight (step S209). In detail, after detecting the third gesture, the processing unit 116 generates a virtual crosshair and has the display unit 112 show it at a gaze focus position on the user's line of sight (also called the virtual line of sight). The processing unit 116 defines the virtual line of sight as starting at the midpoint of a horizontal line connecting the user's two eyes and pointing in the same direction as the user's face; the virtual line of sight is perpendicular to that horizontal line and intersects it at its midpoint. The gaze focus position is where the virtual line of sight intersects an object in the field of view.
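The geometry just described — a ray from the midpoint between the eyes, cast along the face direction, intersected with an object — can be sketched as a ray–plane intersection, treating the object's surface as a plane. This is a hypothetical sketch in assumed 3-D world coordinates, not the patent's implementation.

```python
def gaze_ray(left_eye, right_eye, face_direction):
    """Origin = midpoint of the horizontal line between the eyes;
    direction = the face direction (which the text requires to be
    perpendicular to the eye-to-eye line)."""
    origin = tuple((a + b) / 2.0 for a, b in zip(left_eye, right_eye))
    return origin, face_direction

def gaze_focus_position(origin, direction, plane_point, plane_normal):
    """Where the virtual line of sight meets an object, modelled here as a
    plane. Returns None if the ray is parallel to the plane or the object
    is behind the user."""
    denom = sum(d * n for d, n in zip(direction, plane_normal))
    if abs(denom) < 1e-9:
        return None  # line of sight parallel to the object's surface
    t = sum((p - o) * n for p, o, n in zip(plane_point, origin, plane_normal)) / denom
    if t < 0:
        return None  # object is behind the user
    return tuple(o + t * d for o, d in zip(origin, direction))
```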
Suppose the gaze focus position is adjacent to the position of the object 42 of FIG. 4A to FIG. 4C, for example as shown in FIG. 4C. If the processing unit 116 then determines that the user's gesture is the second gesture described above, it executes the function corresponding to the object 42.
FIG. 5 is a flowchart of an object focusing method according to an embodiment of the invention.
Referring to FIG. 5: in step S501, the sensing unit 114 detects the user's gesture. In step S503, the display unit 112 displays an object and the user's gesture. In step S505, the processing unit 116 determines whether the user's gesture is the first gesture. When it is, the processing unit 116 generates a virtual crosshair and displays it at the position of the first gesture (step S507). When the position of the first gesture is adjacent to the position of the object and the user's gesture changes from the first gesture to another gesture, the processing unit 116 binds the virtual crosshair to the object to focus on the displayed object (step S509).
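Steps S501 through S509 can be combined into a small per-frame state machine. The sketch below is a minimal illustration under an assumed 2-D coordinate system and a hypothetical `(gesture, position)` event format; the distance threshold is also an assumption.

```python
import math

def focus_pipeline(events, object_pos, threshold=0.05):
    """Process (gesture, position) events and return (crosshair_pos, bound).

    While the first gesture is held, the crosshair follows the hand (S507);
    when the hand is near the object and the gesture changes to anything
    else, the crosshair is bound to the object (S509) and later events no
    longer move it.
    """
    crosshair = None
    bound = False
    prev_gesture = None
    for gesture, position in events:
        if bound:
            continue  # a bound crosshair is unaffected by further gestures
        if gesture == "first":
            crosshair = position
        elif (prev_gesture == "first" and crosshair is not None
              and math.dist(crosshair, object_pos) <= threshold):
            crosshair = object_pos
            bound = True
        prev_gesture = gesture
    return crosshair, bound
```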
In summary, the object focusing method and object focusing device proposed by the invention let the user produce a virtual crosshair with a gesture to focus on an object and, once focusing is complete, execute the function of the focused object with another gesture. The user can thereby focus accurately on the objects the device displays. The object focusing method also lets the user choose a preferred focusing mode by gesture, for example focusing along the user's virtual line of sight. This diversity of focusing methods improves the user experience.
Although the invention has been disclosed above by way of embodiments, they are not intended to limit the invention. Anyone with ordinary knowledge in the art may make some changes and refinements without departing from the spirit and scope of the invention, so the protection scope of the invention is defined by the appended claims.
100‧‧‧object focusing device
112‧‧‧display unit
114‧‧‧sensing unit
116‧‧‧processing unit
S201‧‧‧step of detecting the user's gesture
S203‧‧‧step of determining whether the user's gesture is the first, second, or third gesture
S205‧‧‧step of focusing using the first gesture
S207‧‧‧step of executing the function of the focused object
S209‧‧‧step of focusing using the virtual line of sight
S301‧‧‧step of generating a virtual crosshair and displaying it at the position of the first gesture
S303‧‧‧step of determining whether the first gesture moves
S305‧‧‧step of determining, when the first gesture is near the object, whether the user's gesture changes from the first gesture to another gesture
S307‧‧‧step of binding the virtual crosshair to the object
400‧‧‧screen
40‧‧‧gesture
42‧‧‧object
44‧‧‧virtual crosshair
S501‧‧‧step of detecting the user's gesture
S503‧‧‧step of displaying the object and the user's gesture
S505‧‧‧step of determining whether the user's gesture is the first gesture
S507‧‧‧step of generating and displaying a virtual crosshair at the position of the first gesture when the user's gesture is the first gesture
S509‧‧‧step of binding the virtual crosshair to the object to focus on it when the first gesture is near the object and the user's gesture changes to another gesture
FIG. 1 is a schematic diagram of an object focusing device according to an embodiment of the invention.
FIG. 2 is a flowchart of a method of using the object focusing device according to an embodiment of the invention.
FIG. 3 is a flowchart of a method of focusing an object using the first gesture according to an embodiment of the invention.
FIG. 4A to FIG. 4C are schematic diagrams of focusing an object using the first gesture according to an embodiment of the invention.
FIG. 5 is a flowchart of an object focusing method according to an embodiment of the invention.
Claims (10)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| TW106127304A TW201910897A (en) | 2017-08-11 | 2017-08-11 | Object focusing device and method thereof |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| TW201910897A true TW201910897A (en) | 2019-03-16 |
Family
ID=66590127