TWI533234B - Control method based on eye's motion and apparatus using the same - Google Patents
- Publication number
- TWI533234B (application TW104124060A)
- Authority
- TW
- Taiwan
Landscapes
- User Interface Of Digital Computer (AREA)
- Eye Examination Apparatus (AREA)
- Position Input By Displaying (AREA)
Description
The present invention relates to a control method and an apparatus using the same, and more particularly to a control method based on eye movement and an apparatus using the same.
In current technology, eye-tracking techniques can be broadly divided into invasive and non-invasive approaches. Invasive eye tracking typically places a search coil in the eye or uses an electrooculogram to sense where the eye is looking. Non-invasive eye tracking is primarily vision-based and can be further divided into free-head and head-mounted implementations, among others.
With the development of technology, eye tracking has been widely applied in fields such as neuroscience and computer science, and is commonly found in security systems (for example, eye-movement locks) and eye-controlled computers. Eye-tracking technology can follow the movement of the eyeball to determine the gaze point, and can accordingly control a security system or computer device to realize eye-control functions, or use gaze to trigger control of devices such as security systems and eye-controlled computers.
The present invention provides a control method based on eye movement and an apparatus using the same, which allow a user to gaze at, and move a gaze point over, a preset view pattern. By detecting the user's gaze trajectory, the method determines whether the apparatus should perform a corresponding operation. In this way, the apparatus can be operated through the user's gaze trajectory.
The present invention proposes a control method based on eye movement, comprising the following steps: capturing an image sequence through an image capturing unit; analyzing the image sequence to obtain, in each image of the sequence, eye image information of the user's eye region; detecting, based on the eye image information, the gaze trajectory of the user watching a view pattern; determining whether the gaze trajectory matches a preset trajectory defined with respect to the view pattern, to produce a determination result; and performing a corresponding operation on a device according to the determination result.
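The claimed steps can be sketched as a simple processing loop. The sketch below is illustrative only; all callable names (`capture_frame`, `extract_eye_info`, `to_view_coords`, `perform_operation`) are hypothetical stand-ins for the modules the patent describes, and the naive point-by-point matcher is an assumption, not the patent's comparison method.

```python
def eye_control_loop(capture_frame, extract_eye_info, to_view_coords,
                     preset_trajectory, perform_operation, n_frames=30):
    """Illustrative pipeline: capture an image sequence, extract eye
    information per image, map it to a gaze trajectory in the view
    pattern's coordinates, compare against a preset trajectory, and act."""
    trajectory = []
    for _ in range(n_frames):                 # capture the image sequence
        frame = capture_frame()
        eye_info = extract_eye_info(frame)    # eye-region image information
        if eye_info is not None:
            # "first position" (in the image) mapped to "second position"
            # (in the view pattern's coordinate system)
            trajectory.append(to_view_coords(eye_info))
    matched = trajectory_matches(trajectory, preset_trajectory)
    perform_operation(matched)                # e.g. unlock when matched
    return matched

def trajectory_matches(trajectory, preset, tol=0.5):
    """A naive match: same length, and each gaze point lies within
    `tol` of the corresponding preset point."""
    if len(trajectory) != len(preset):
        return False
    return all(abs(x - px) <= tol and abs(y - py) <= tol
               for (x, y), (px, py) in zip(trajectory, preset))
```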
In an embodiment of the invention, the view pattern includes a plurality of identification patterns, and the preset trajectory is a geometric connection formed by sequentially linking at least one of the identification patterns.
In an embodiment of the invention, detecting the gaze trajectory of the user watching the view pattern based on the eye image information includes: analyzing the eye image information to determine whether the user is gazing at a preset position in the view pattern, or has gazed at the preset position for longer than a preset time; and, when the user gazes at the preset position in the view pattern, or gazes at it for longer than the preset time, detecting the gaze trajectory of the user watching the view pattern.
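The dwell-time trigger described above can be sketched as follows. The radius and time thresholds, and the sample format, are illustrative assumptions rather than values taken from the patent.

```python
import math

def dwell_triggered(gaze_samples, target, radius=0.5, dwell_time=1.0):
    """Return True once the gaze has stayed within `radius` of `target`
    for at least `dwell_time` seconds. `gaze_samples` is an iterable of
    (timestamp, (x, y)) pairs in view-pattern coordinates."""
    dwell_start = None
    for t, (x, y) in gaze_samples:
        if math.hypot(x - target[0], y - target[1]) <= radius:
            if dwell_start is None:
                dwell_start = t            # gaze entered the target region
            elif t - dwell_start >= dwell_time:
                return True                # held long enough: start tracking
        else:
            dwell_start = None             # gaze left the region: reset timer
    return False
```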
In an embodiment of the invention, detecting the gaze trajectory of the user watching the view pattern based on the eye image information includes: detecting an eye object within the eye region based on the eye image information; performing an eye-tracking operation on the eye object to sequentially obtain a plurality of first positions of the eye object in the image sequence; and mapping the first positions to a plurality of second positions according to the coordinate system of the view pattern, so as to obtain the gaze trajectory of the user watching the view pattern.
In an embodiment of the invention, the eye object includes a pupil and a first reflective point, and performing the eye-tracking operation on the eye object to sequentially obtain the first positions of the eye object in the image sequence includes: locating the pupil and the first reflective point in the image sequence to perform the eye-tracking operation, and sequentially obtaining the first positions of the eye object in the image sequence.
In an embodiment of the invention, mapping the first positions to the second positions according to the coordinate system of the view pattern includes: calculating a distance parameter from the pupil and the first reflective point; calculating an angle parameter from the vector of the first reflective point relative to the pupil; analyzing the distribution of the distance and angle parameters to convert the first positions into the second positions in the coordinate system of the view pattern; and taking the second positions as the gaze trajectory of the user watching the view pattern.
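The distance and angle parameters of the single-reflective-point (glint) embodiment can be computed from the pupil center and the glint position as below. The patent describes the conversion only as "analyzing the distribution" of these parameters; the linear calibration used in `map_to_view` is therefore a hypothetical placeholder, not the patent's mapping.

```python
import math

def pupil_glint_features(pupil, glint):
    """Distance parameter and angle parameter of the glint-to-pupil
    vector, as in the single-reflective-point embodiment."""
    dx, dy = glint[0] - pupil[0], glint[1] - pupil[1]  # vector of glint relative to pupil
    distance = math.hypot(dx, dy)
    angle = math.atan2(dy, dx)                          # radians
    return distance, angle

def map_to_view(distance, angle, calib):
    """Hypothetical calibration: map the observed (distance, angle)
    ranges linearly onto a view pattern of size w x h."""
    d_min, d_max, a_min, a_max, w, h = calib
    x = (angle - a_min) / (a_max - a_min) * w           # horizontal from angle
    y = (distance - d_min) / (d_max - d_min) * h        # vertical from distance
    return x, y
```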
In an embodiment of the invention, the eye object further includes a second reflective point, and mapping the first positions to the second positions according to the coordinate system of the view pattern includes: calculating an area parameter from the pupil, the first reflective point, and the second reflective point; calculating an angle parameter from at least one of a first line connecting the first reflective point and the pupil and a second line connecting the second reflective point and the pupil; analyzing the distribution of the area and angle parameters to convert the first positions into the second positions in the coordinate system of the view pattern; and taking the second positions as the gaze trajectory of the user watching the view pattern.
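With a second reflective point, one natural reading of the area parameter is the area of the triangle spanned by the pupil and the two reflective points, computed via the cross product, with angle parameters taken from the two pupil-to-glint connecting lines. Treating these as the features is an illustrative interpretation of the claim, not a statement of the patented computation.

```python
import math

def two_glint_features(pupil, glint1, glint2):
    """Area of the pupil/glint1/glint2 triangle, plus the angles of
    the two connecting lines from the pupil to each reflective point."""
    v1 = (glint1[0] - pupil[0], glint1[1] - pupil[1])  # first connecting line
    v2 = (glint2[0] - pupil[0], glint2[1] - pupil[1])  # second connecting line
    cross = v1[0] * v2[1] - v1[1] * v2[0]
    area = abs(cross) / 2.0                            # triangle area parameter
    angle1 = math.atan2(v1[1], v1[0])
    angle2 = math.atan2(v2[1], v2[0])
    return area, angle1, angle2
```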
In an embodiment of the invention, performing the eye-tracking operation on the eye object to sequentially obtain the first positions further includes: after obtaining one of the first positions, adjusting the eye region to detect the eye object again, and continuing the eye-tracking operation accordingly.
In an embodiment of the invention, performing the corresponding operation on the device according to the determination result includes: releasing the locked state of a lock when the gaze trajectory matches the preset trajectory, and maintaining the locked state of the lock when the gaze trajectory does not match the preset trajectory.
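The unlock behavior reduces to a small state machine, sketched below under the assumption that a matching trajectory is the only unlock condition; the class and method names are invented for illustration.

```python
class EyeLock:
    """Minimal lock model: a submitted gaze trajectory either releases
    the lock (on match) or leaves it in its locked state."""

    def __init__(self, preset_trajectory):
        self.preset = preset_trajectory
        self.locked = True

    def submit(self, trajectory):
        if trajectory == self.preset:
            self.locked = False   # matching trajectory releases the lock
        # a non-matching trajectory maintains the locked state
        return self.locked
```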
The present invention further proposes an apparatus controlled by eye movement, which includes an image capturing unit, a storage unit, and a processing unit. The image capturing unit captures an image sequence. The storage unit records a plurality of program modules and further includes a database. The processing unit is coupled to the image capturing unit and the storage unit to access and execute the modules recorded in the storage unit. The program modules include an image analysis module, a gaze detection module, a determination module, and a control module. The image analysis module analyzes the image sequence to obtain, in each image of the sequence, eye image information of the user's eye region. The gaze detection module detects, based on the eye image information, the gaze trajectory of the user watching a view pattern. The determination module determines whether the gaze trajectory matches a preset trajectory defined with respect to the view pattern, to produce a determination result. The control module causes the apparatus to perform a corresponding operation according to the determination result.
In an embodiment of the invention, the view pattern includes a plurality of identification patterns, and the preset trajectory is a geometric connection formed by sequentially linking at least one of the identification patterns.
In an embodiment of the invention, the gaze detection module analyzes the eye image information to determine whether the user is gazing at a preset position in the view pattern, or has gazed at the preset position for longer than a preset time; when the user gazes at the preset position in the view pattern, or gazes at it for longer than the preset time, the gaze detection module detects the gaze trajectory of the user watching the view pattern.
In an embodiment of the invention, the gaze detection module includes an eye detection module, an eye tracking module, and a mapping conversion module. The eye detection module detects an eye object within the eye region based on the eye image information. The eye tracking module performs an eye-tracking operation on the eye object to sequentially obtain a plurality of first positions of the eye object in the image sequence. The mapping conversion module maps the first positions to a plurality of second positions according to the coordinate system of the view pattern, so as to obtain the gaze trajectory of the user watching the view pattern.
In an embodiment of the invention, the eye object includes a pupil and a first reflective point, and the eye tracking module locates the pupil and the first reflective point in the image sequence to perform the eye-tracking operation and sequentially obtain the first positions of the eye object in the image sequence.
In an embodiment of the invention, the mapping conversion module calculates a distance parameter from the pupil and the first reflective point, calculates an angle parameter from the vector of the first reflective point relative to the pupil, and analyzes the distribution of the distance and angle parameters to convert the first positions into the second positions in the coordinate system of the view pattern, taking the second positions as the gaze trajectory of the user watching the view pattern.
In an embodiment of the invention, the eye object further includes a second reflective point, and the mapping conversion module calculates an area parameter from the pupil, the first reflective point, and the second reflective point, calculates an angle parameter from at least one of a first line connecting the first reflective point and the pupil and a second line connecting the second reflective point and the pupil, and analyzes the distribution of the area and angle parameters to convert the first positions into the second positions in the coordinate system of the view pattern, taking the second positions as the gaze trajectory of the user watching the view pattern.
In an embodiment of the invention, after obtaining one of the first positions, the eye tracking module further adjusts the eye region so that the eye detection module detects the eye object again, and the eye tracking module continues the eye-tracking operation accordingly.
In an embodiment of the invention, the control module controls the apparatus to release the locked state of a lock when the gaze trajectory matches the preset trajectory, and to maintain the locked state of the lock when the gaze trajectory does not match the preset trajectory.
In summary, the control method based on eye movement and the apparatus using the same proposed in the embodiments of the invention can detect the gaze trajectory of a user watching a view pattern, and have the apparatus perform a corresponding operation when the gaze trajectory is determined to match a preset trajectory defined with respect to the view pattern. The gaze trajectory thus serves as a signal trigger, and the embodiments can be widely applied in fields such as security systems and eye-controlled computers.
To make the above features and advantages of the present disclosure more comprehensible, embodiments are described in detail below with reference to the accompanying drawings.
100‧‧‧device
110, 710, 810, 910‧‧‧image capturing unit
120, 720, 920‧‧‧processing unit
130‧‧‧storage unit
131‧‧‧image analysis module
132‧‧‧gaze detection module
1321‧‧‧eye detection module
1322‧‧‧eye tracking module
1323‧‧‧mapping conversion module
133‧‧‧determination module
134‧‧‧control module
135‧‧‧database
200‧‧‧view pattern
210‧‧‧identification pattern
220‧‧‧geometric connection
230‧‧‧preset trajectory
510‧‧‧eye object
512‧‧‧pupil center point
514, 614, 616‧‧‧reflective point
602‧‧‧vector
612‧‧‧pupil
630‧‧‧coordinate point
640‧‧‧sector
650‧‧‧rectangle
660‧‧‧coordinate system
700‧‧‧access control system
730‧‧‧lock
740‧‧‧prompt unit
750‧‧‧door body
760‧‧‧cover
800‧‧‧handheld eye-controlled eyepiece device
820‧‧‧display unit
830‧‧‧housing
840‧‧‧mirror
850‧‧‧light source
860‧‧‧safe
900‧‧‧eye control device
H‧‧‧horizontal line
IS‧‧‧image sequence
L‧‧‧distance
θ‧‧‧angle
1~18‧‧‧region
S310~S350, S410~S480, S510~S540‧‧‧steps
FIG. 1 is a functional block diagram of an eye-movement-based control apparatus according to an embodiment of the invention.
FIG. 2A and FIG. 2B are schematic diagrams of an eye-movement-based control method according to an embodiment of the invention.
FIG. 3 is a flowchart of the steps of an eye-movement-based control method according to an embodiment of the invention.
FIG. 4A is a flowchart of the steps of an eye-movement-based control method according to an embodiment of the invention.
FIG. 4B is a functional block diagram of a gaze detection module according to an embodiment of the invention.
FIG. 4C is a schematic diagram of an eye-movement-based control method according to an embodiment of the invention.
FIG. 5A is a flowchart of the steps of eye tracking according to an embodiment of the invention.
FIG. 5B is a schematic diagram of eye tracking according to an embodiment of the invention.
FIG. 6A to FIG. 6G are schematic diagrams of coordinate system mapping conversion according to an embodiment of the invention.
FIG. 7 is a schematic diagram of an access control system controlled by eye movement according to an embodiment of the invention.
FIG. 8A and FIG. 8B are schematic diagrams of a handheld eye-controlled eyepiece device controlled by eye movement according to an embodiment of the invention.
FIG. 9 is a schematic diagram of an eye-movement-based control method according to an embodiment of the invention.
Current eye-control technology mostly detects the user's gaze point, and can control a security system or eye-controlled computer only according to the position gazed at on a screen. The control method based on eye movement and the apparatus using the same in the embodiments of the invention instead use the gaze trajectory as a signal trigger, realizing eye control by way of a trajectory. The embodiments can thus provide a more secure way of entering a password, and also enable more convenient and intuitive eye-movement control. The control method based on eye movement and the apparatus using the same proposed in the embodiments of the invention are described in detail below.
FIG. 1 is a functional block diagram of an apparatus controlled by eye movement according to an embodiment of the invention. The apparatus 100 includes an image capturing unit 110, a processing unit 120, and a storage unit 130. The apparatus 100 may be, for example, a safe, an access control system, or another type of security system/apparatus that verifies a user's qualification to decide whether to grant the user a specific permission, or an electronic device such as a computer with eye-control functions; the invention does not limit the specific type of the apparatus 100.
In this embodiment, the image capturing unit 110 captures an image sequence IS along a specific direction (depending on the position and angle at which the image capturing unit 110 is arranged); that is, the image capturing unit 110 continuously captures a plurality of images and provides them to the processing unit 120.
The processing unit 120 is coupled to the image capturing unit 110 and is, for example, a central processing unit (CPU), a graphics processing unit (GPU), or another programmable microprocessor.
The storage unit 130 is, for example, any type of fixed or removable random access memory (RAM), read-only memory (ROM), flash memory, a similar element, or a combination of the above. The storage unit 130 includes a database 135 and records a plurality of program modules. The processing unit 120 is coupled to the storage unit 130 to access and execute the program modules, which include an image analysis module 131, a gaze detection module 132, a determination module 133, and a control module 134. The image analysis module 131 performs image processing and analysis on the image sequence IS captured by the image capturing unit 110, so as to obtain, in each image of the sequence, eye image information of the user's eye region. Based on the obtained eye image information, the gaze detection module 132 detects the user's eye movement, and the determination module 133 then determines whether the gaze trajectory of the user watching the view pattern matches a preset trajectory. The terms "view pattern" and "preset trajectory", and the correspondence between them, are explained here with the examples of FIG. 2A and FIG. 2B. As shown in FIG. 2A, the view pattern 200 includes a plurality of identification patterns 210, which may be arranged as an M×N matrix; in this embodiment, M and N are both 4, but the invention is not limited thereto. The preset trajectory may be a geometric connection formed by sequentially linking at least one of the identification patterns 210. For example, the geometric connection 220 shown in FIG. 2A is composed of a plurality of identification patterns 210, while FIG. 2B shows the preset trajectory 230 stored in the database 135. As can be seen, the geometric connection 220 corresponds to the preset trajectory 230.
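On a 4×4 grid like that of FIG. 2A, a preset trajectory can be represented simply as an ordered list of identification-pattern indices, and a detected gaze trajectory matched against it. The sketch below (cell quantization, collapsing consecutive samples into one visit, exact sequence matching) is an illustrative reduction of the figure, not the patent's stored format.

```python
GRID_M, GRID_N = 4, 4  # identification patterns arranged as an M x N matrix

def gaze_to_cells(points, width, height):
    """Quantize gaze points (in view-pattern coordinates) into grid cell
    indices 0..15, collapsing consecutive repeats into a single visit."""
    cells = []
    for x, y in points:
        col = min(int(x / width * GRID_N), GRID_N - 1)
        row = min(int(y / height * GRID_M), GRID_M - 1)
        cell = row * GRID_N + col
        if not cells or cells[-1] != cell:  # dedupe consecutive samples
            cells.append(cell)
    return cells

def matches_preset(cells, preset):
    """The geometric connection matches when the visited cells follow
    the preset sequence exactly."""
    return cells == preset

# e.g. a connection over cells 0 -> 1 -> 5 -> 10 would be stored as
# preset = [0, 1, 5, 10]
```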
Accordingly, the control module 134 decides whether to perform a corresponding operation on the apparatus 100 based on whether the gaze trajectory matches the preset trajectory. For example, when the apparatus 100 is a security apparatus and the preset trajectory is set to unlock a lock in the security apparatus, the control module 134 switches the lock to the locked or unlocked state depending on whether the gaze trajectory matches the preset trajectory.
The control method based on eye movement proposed in the embodiments of the invention is described below. FIG. 3 is a flowchart of the steps of a control method based on eye movement according to an embodiment of the invention. The method of this embodiment is applicable to the electronic apparatus 100 described above; the concept of control based on eye movement in the embodiments of the invention is explained with reference to the components of the apparatus 100 in FIG. 1 and the flowchart of FIG. 3.
In step S310, the apparatus 100 captures the image sequence IS through the image capturing unit 110. In step S320, the image analysis module 131 analyzes the image sequence IS to obtain, in each image of the sequence, eye image information of the user's eye region. In step S330, the gaze detection module 132 detects, based on the eye image information, the gaze trajectory of the user watching the view pattern.
It should be noted that, in some embodiments, the gaze detection module 132 may first analyze the eye image information to determine whether the user is gazing at a preset position in the view pattern, or whether the user has gazed at the preset position for longer than a preset time. When the user gazes at the preset position in the view pattern, or has gazed at a specific preset position for longer than the preset time, the gaze detection module 132 correspondingly starts detecting the gaze trajectory of the user watching the view pattern. This way of deciding when to start detecting the user's gaze trajectory is only exemplary; those applying this embodiment may also allow the user to trigger detection externally, for example by pressing a physical button or by clicking on an operation page to enter a gaze-trajectory detection mode, thereby triggering the detection performed by the gaze detection module 132. The invention is not limited in this regard.
Continuing with the flow of FIG. 3, in step S340 the determination module 133 determines whether the gaze trajectory matches the preset trajectory defined with respect to the view pattern, to produce a determination result, and in step S350 the control module 134 performs a corresponding operation on the apparatus 100 according to the determination result. In detail, referring again to the examples of FIG. 2A and FIG. 2B: when the gaze detection module 132 detects the gaze trajectory of the user watching the view pattern, and the trajectory follows the geometric connection 220 shown in FIG. 2A, the determination module 133 compares it against the preset trajectory recorded in the database 135 to determine whether the gaze trajectory matches. The control module 134 can then perform different operations on the apparatus 100 depending on whether the gaze trajectory matches the preset trajectory 230 of FIG. 2B.
In addition, the device 100 of this embodiment may optionally be provided with a prompting unit. The prompting unit may be used to indicate whether the device 100 has found the user's gaze trajectory to match the preset track and has accordingly performed the corresponding operation. The prompting unit may use a text display, an indicator light, a voice prompt, or any other feasible prompting manner, so that the user can tell the current state of the device 100; the present invention is not limited in this respect.
Next, the detailed flow of the above eye-motion-based control is further described with the embodiment of FIG. 4A to FIG. 4C, in which FIG. 4A is a flowchart of the steps of an eye-motion-based control method according to an embodiment of the invention, FIG. 4B is a detailed block diagram of a line-of-sight detection module performing eye-motion-based control according to an embodiment of the invention, and FIG. 4C is a schematic diagram of an eye region according to an embodiment of the invention. The following description refers to the step flow of FIG. 4A together with the elements of the device 100 of FIG. 1 and of the line-of-sight detection module 132 of FIG. 4B. In this embodiment, the device 100 is a security device, and the eye-motion-based control method may be used to control a lock of the device 100. It should be noted that the eye-motion-based control method may also be applied to any type of device, and the invention is not limited thereto.
In step S410, the device 100 captures an image sequence IS through the image capturing unit 110, and in step S420 the image analysis module 131 analyzes the image sequence IS so as to obtain, in each image of the image sequence IS, eye image information of the user's eye region. The details here are similar to the embodiment of FIG. 3, so please refer to the foregoing.
As shown in FIG. 4B, the line-of-sight detection module 132 of this embodiment includes an eye detection module 1321, an eye tracking module 1322, and a mapping conversion module 1323. Accordingly, in step S430 the eye detection module 1321 detects an eye object within the eye region based on the eye image information. In step S440 the eye tracking module 1322 performs an eye tracking action on the eye object so as to sequentially obtain a plurality of first positions of the eye object in the image sequence. In step S450 the mapping conversion module 1323 maps the plurality of first positions to a plurality of second positions according to the coordinate system of the view pattern, so as to obtain the trajectory of the user's line of sight over the view pattern. In other words, this embodiment analyzes, through the eye detection module 1321, the eye tracking module 1322, and the mapping conversion module 1323, the continuous motion of the user's eye object in the image sequence, and uses the conversion between the coordinate systems of the image sequence and of the view pattern to obtain the user's gaze trajectory over the view pattern.
After obtaining the user's gaze trajectory, the determination module 133 compares it with the preset track of the view pattern, and the control module 134 causes the device 100 to perform a corresponding operation according to the determination result, where the preset track may be set as the preset unlock password of the lock of the device 100. Therefore, in step S460 the determination module 133 determines whether the gaze trajectory matches the preset track defined relative to the view pattern. With the lock of the device 100 in the locked state, if the determination in step S460 is affirmative, the control module 134 releases the locked state of the lock (i.e., the lock is switched to the unlocked state); otherwise, the control module 134 maintains the locked state of the lock, and may further issue a warning sound or text through the prompting unit to alert the user. From another viewpoint, if the lock is already in the unlocked state, it remains unlocked regardless of whether the determination result of step S460 is affirmative.
It is worth mentioning that, in an embodiment, while the lock is in the unlocked state, the user may also define the preset unlock password by performing a set of continuous eye motions, using the gaze trajectory corresponding to those continuous motions. Of course, the preset unlock password may also be predefined by the designer and stored in the database 135; the invention is not limited in this respect.
It is also worth mentioning that, in an embodiment, the device 100 may further include an identification module for recognizing biometric information of the user's eye object after the eye object is detected (step S430), so as to confirm the user's identity. The biometric information is, for example, iris or retina information. Only if the user's identity is correct does the identification module allow the device 100 to further perform the subsequent steps S440 to S480; otherwise, the identification module stops or refuses the device 100 from performing the subsequent eye-motion recognition steps.
The above embodiment is described concretely below with the schematic diagram of FIG. 4C. In this embodiment, after the eye detection module 1321 finds the eye object 410 within the eye region ER, the eye detection module 1321 may further perform image processing on the eye image information to obtain image information that clearly indicates the boundary of the eye object 410. The eye tracking module 1322 can then track the eye object 410 according to this image information, so as to obtain the user's gaze trajectory. For example, by adjusting a gain and an offset, the eye detection module 1321 can adjust the contrast of the eye image information and obtain an enhanced image. The eye detection module 1321 may then sequentially perform de-noising, edge sharpening, binarization, and a further edge sharpening on the enhanced image, so as to obtain an image of the eye object 410 as shown in FIG. 4C.
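Two of the processing steps named above (the gain/offset contrast adjustment and the binarization) can be sketched on plain 8-bit pixel lists. The gain, offset, and threshold values are illustrative assumptions — the patent does not specify them — and the de-noising and edge-sharpening steps are elided here.

```python
def adjust_contrast(img, gain=1.5, offset=-40):
    """Contrast stretch: out = clamp(gain * pixel + offset, 0, 255)."""
    return [[max(0, min(255, int(gain * p + offset))) for p in row]
            for row in img]

def binarize(img, threshold=64):
    """Dark pixels (pupil candidates) map to 1, everything else to 0."""
    return [[1 if p < threshold else 0 for p in row] for row in img]
```

A real pipeline would operate on camera frames (e.g. via an image library) rather than nested lists, but the arithmetic is the same.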
After obtaining the eye object 410, the eye tracking module 1322 performs an eye tracking action on the eye object 410 so as to obtain the precise position of the user's eye object 410 in each image of the sequence. In an embodiment, the eye tracking module 1322 may use the relative positional relationship between the pupil and a reflective spot in the eye object 410 to obtain a precise localization of the eye object 410. The reflective spot is, for example, a Purkinje image, which is formed when a beam from a fixed light source is reflected after illuminating the human eye. Depending on the position of the fixed light source, the reflective spot appears at a corresponding specific position of the eye in the image, and can therefore serve as a reference feature for locating the eye object 410. Moreover, using the reflective spot as the localization reference feature also prevents the eye tracking module 1322, while tracking the eye object, from suffering spurious noise interference caused by insufficient light at the reference point. It should be noted that, in the following embodiments, the pupil position is represented by the position of the pupil's center point, and eye tracking can be implemented in different forms depending on the number of reflective spots used as localization reference features (for example, in the example of FIG. 5B the eye object 510 includes one reflective spot 514, while in the example of FIG. 6E the eye object 610 includes two reflective spots 614 and 616).
The step flow of the eye tracking illustrated in FIG. 5A is described first. In step S510, the eye tracking module 1322 computes the black-and-white contrast variation within the eye region so as to obtain the pupil position in the image sequence. In step S520, the eye tracking module 1322 obtains the reference position of the reflective spot. Then, in step S530, the eye tracking module 1322 adjusts and obtains the first position of the eye object in the image sequence according to the relative relationship between the pupil position and the reference position of the reflective spot.
In detail, FIG. 5B illustrates an example in which the eye object 510 includes a pupil (corresponding to the pupil center point 512) and one reflective spot 514, the eye object 510 corresponding to one image of the image sequence. The eye detection module 1321 may use statistical information about the pupil to obtain the eye object 510; such statistical information is, for example, at least one of, or a combination of, statistics related to the area, length, width, aspect ratio, shape, and black-and-white contrast of the eyeball and/or the pupil. After the eye detection module 1321 obtains the eye object 510, the eye tracking module 1322 computes the black-and-white contrast variation within the eye region. Using the property that the edge between the pupil and the eyeball (here, the iris of the human eye, i.e., the non-white region of the eye that is black, blue, brown, etc.) corresponds to an extremum of the black-and-white contrast variation (in general, the pupil is the darker image relative to the neighboring eyeball), the eye tracking module 1322 can obtain the approximate position of the pupil center point 512. For the reflective spot 514, the eye tracking module 1322 may obtain its reference position in a manner similar to the above, and may additionally use the condition that it is the white contrast region closest to the pupil center as a basis for judging whether a region is the reflective spot 514. Afterwards, the eye tracking module 1322 performs edge detection on the pupil, using the reference position of the reflective spot 514 as the base point of the edge detection, so as to obtain a precise pupil edge from which the position of the pupil center point 512 is estimated, thereby correcting the pupil center point 512 for precise pupil localization. The edge detection may locate edges at the sharp pixel transitions from white (the eyeball) to black (the pupil) or from black to white. In addition, the eye tracking module 1322 may finely adjust the reference position of the reflective spot 514, so as to provide a more accurate reference position as the base point for pupil localization; here the eye tracking module 1322 may take the center of the reflective spot as the base point for its own edge detection and adjust the reference position of the reflective spot through a flow similar to the pupil edge detection. By repeatedly and continuously performing the above flow, the eye tracking module 1322 can sequentially obtain the positions of the eye object in the image sequence, and these positions correspond to the trajectory of the user's line of sight over the view pattern.
It is worth mentioning that the flowchart of FIG. 5A further includes step S540. In step S540, after the eye tracking module 1322 obtains the first position of the eye object 510, the extent of the eye region is further adjusted so that the eye tracking module 1322 can perform the next eye tracking action accurately and efficiently. For example, in an embodiment, the eye tracking module 1322 may adjust the extent of the eye region according to the maximum range over which the human eye can move. In addition, in some embodiments, the eye tracking module 1322 may also adaptively filter noise out of the eye object before performing the eye tracking action, so as to improve the accuracy of locating and tracking the eye object 510. The manners of adjusting the eye region and of filtering noise may be designed appropriately as required; the invention is not limited in this respect.
A concrete implementation of obtaining the user's gaze trajectory over the view pattern in step S450 is described below with the schematic diagrams of FIG. 5B and FIG. 6A to FIG. 6D. In this embodiment, the eye tracking module 1322 may use the implementations of steps S510 and S520 of the foregoing embodiment to obtain the positions of the pupil center point 512 and the reflective spot 514. The mapping conversion module 1323 may then use the distance L between the pupil center point 512 and the reflective spot 514, together with the angle θ defined by the vector 602 from the pupil center point 512 to the reflective spot 514 relative to a reference line through the pupil center point 512 (i.e., the horizontal axis in FIG. 6A), so that, through the relationship between the distance L and the angle θ, the pupil displacement position of the user in the image sequence (i.e., the first position described above) is transferred to the coordinate system of the view pattern, yielding the corresponding gaze trajectory. As implementations of coordinate conversion using the parameters L and θ, a grouping correspondence method and an interpolation correspondence method are described below.
The grouping correspondence method is described first with the embodiment of FIG. 6B and FIG. 6C. In this embodiment, the grouping correspondence method may include a training phase and an application phase: the mapping conversion module 1323 obtains the coefficients required for coordinate conversion during the training phase, and uses them during the application phase to convert the current eye object into the coordinate system of the view pattern. In detail, during the training phase, the mapping conversion module 1323 displays, one at a time, one of the regions 1 to 16 of FIG. 6B for the user to look at (FIG. 6B depicts the case in which region 1 is displayed to the user), so that the mapping conversion module 1323 can locate and obtain the distance and angle between the user's pupil center and the reflective spot for each region, and plot the distance-angle distribution of the regions of FIG. 6B in the distance-angle diagram of FIG. 6C. Then, in the application phase of the grouping correspondence method, the mapping conversion module 1323 analyzes, from the eye object currently obtained by the image capturing unit 110, the distance L and angle θ corresponding to the pupil and reflective spot of this eye object, and obtains a coordinate point in the distance-angle diagram, for example the coordinate point 630 of FIG. 6C. The mapping conversion module 1323 can then use a minimum objective function to find the region closest to the coordinate point 630, so as to localize the current eye object in the coordinate system of the view pattern and complete the coordinate conversion. Formula (1) of the minimum objective function is as follows:

Min Obj. = W1 |Dis_t - Dis_i| + W2 |Angle_t - Angle_i|, (i = 1~16) ......(1)

where Dis_i and Angle_i are the target distance and angle values of region i, and Dis_t and Angle_t are the distance and angle values corresponding to the current eye object. These distance and angle values can be computed from the x-axis and y-axis coordinates of the coordinate points, and W1 and W2 are allocation weights. It can be seen that the number of regions in FIG. 6B determines how precisely the mapping conversion module 1323 converts coordinates with the grouping correspondence method; accordingly, the more regions the grouping correspondence method of this embodiment includes, the more precise the converted coordinates obtained by the mapping conversion module 1323.
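Objective (1) can be transcribed directly: the current (distance, angle) measurement is assigned to the training region whose prototype minimizes the weighted L1 distance. The weights W1, W2 and the region prototypes are assumptions here; the patent does not give their values.

```python
def classify_region(dis_t, angle_t, prototypes, w1=1.0, w2=1.0):
    """Minimize W1|Dis_t - Dis_i| + W2|Angle_t - Angle_i| over regions i.

    prototypes: {region_id: (Dis_i, Angle_i)} learned in the training phase.
    Returns the region id with the smallest objective value.
    """
    return min(prototypes,
               key=lambda i: w1 * abs(dis_t - prototypes[i][0])
                           + w2 * abs(angle_t - prototypes[i][1]))
```

With the 16 regions of FIG. 6B, `prototypes` would hold 16 entries, one per calibration region.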
On the other hand, the embodiment of FIG. 6D illustrates the interpolation correspondence method. In this embodiment, the mapping conversion module 1323 may apply an un-warping operation so as to convert the sector 640 drawn by the coordinate points in the distance-angle diagram into a more regular rectangle 650, and map the position of the rectangle 650 into the coordinate system 660 in which the view pattern resides. This embodiment may likewise include a training phase and an application phase, and the training phase may be implemented similarly to the foregoing embodiment. The difference is that the mapping conversion module 1323 of this embodiment further uses the un-warping operation to obtain a normalized distribution map, and may perform moving calibration on the normalized distribution map with an affine transform. Formulas (2) and (3) of the un-warping operation that yields the normalized distance-angle distribution are not reproduced in this text; the standard second-order polynomial form implied by the six coefficients per axis is:

x_T = a0 + a1·x + a2·y + a3·x·y + a4·x² + a5·y² ......(2)
y_T = b0 + b1·x + b2·y + b3·x·y + b4·x² + b5·y² ......(3)

The un-warping operation maps the pre-un-warping sampling points (X, Y) = {(x1, y1)...(xn, yn)} to the target points (X_T, Y_T) = {(x_T1, y_T1)...(x_Tn, y_Tn)}, where X and Y represent the distance L and angle θ described above, n is the number of samples, and a0~a5, b0~b5 are the un-warping conversion coefficients. The mapping conversion module 1323 can obtain the optimal solution for the coefficients a0~a5 and b0~b5 by an inverse matrix operation, and can then use these coefficients to un-warp currently unknown coordinate points.
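Applying the six-coefficient un-warping map can be sketched as follows. The quadratic term layout is a reconstruction (the original equations are not reproduced in this text), so treat the exact ordering of terms as an assumption; only the general second-order polynomial structure is implied by the coefficient names a0~a5, b0~b5.

```python
def unwarp(x, y, a, b):
    """Second-order polynomial un-warping of one (distance, angle) point.

    a, b: the six un-warping coefficients a0..a5 and b0..b5, previously
    solved from the training samples (e.g. by least squares).
    """
    terms = [1.0, x, y, x * y, x * x, y * y]
    x_t = sum(ai * t for ai, t in zip(a, terms))
    y_t = sum(bi * t for bi, t in zip(b, terms))
    return (x_t, y_t)
```

With n training pairs, stacking the `terms` rows gives an n×6 design matrix, and the coefficient vectors follow from the pseudo-inverse, matching the "inverse matrix operation" the text mentions.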
On the other hand, formula (4), by which the mapping conversion module 1323 performs moving calibration with the affine technique, is not reproduced in this text; the standard affine form implied by the coefficients a~f is:

x' = a·x + b·y + c, y' = d·x + e·y + f ......(4)

where x' and y' are the new coordinates after moving calibration, and a~f are the affine conversion coefficients. The mapping conversion module 1323 may obtain the affine conversion coefficients a~f by an inverse matrix computation, or by inputting three pairs of coordinate points at any three corners of the display unit before calibration together with the three corresponding pairs at those corners after calibration. The mapping conversion module 1323 can thereby apply moving calibration to currently unknown coordinate points, so as to eliminate the effects that image scaling, translation, rotation, and similar factors would otherwise have on the coordinate conversion. Compared with the grouping correspondence method proposed in the embodiment of FIG. 6B and FIG. 6C, the mapping conversion module 1323 can complete the coordinate conversion with fewer samples using the interpolation correspondence method.
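Recovering the six affine coefficients from three point correspondences, as the text describes, can be sketched with Cramer's rule on the 3×3 system [x y 1]·(a,b,c)ᵀ = x' (and likewise for d, e, f). This is one illustrative way to realize the "inverse matrix computation"; the function names are hypothetical.

```python
def det3(m):
    """Determinant of a 3x3 matrix given as nested lists."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def solve_affine(src, dst):
    """Recover [a, b, c, d, e, f] from three point pairs src -> dst."""
    A = [[x, y, 1.0] for x, y in src]
    d = det3(A)  # nonzero when the three source points are not collinear
    coeffs = []
    for k in range(2):           # k=0 solves (a, b, c); k=1 solves (d, e, f)
        v = [p[k] for p in dst]
        for col in range(3):     # Cramer's rule: replace one column with v
            M = [row[:] for row in A]
            for r in range(3):
                M[r][col] = v[r]
            coeffs.append(det3(M) / d)
    return coeffs
```

Using the three display corners before and after calibration as `src`/`dst` reproduces the corner-based calibration the paragraph mentions.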
It should be noted that, in an embodiment, the mapping conversion module 1323 converts the first positions, as a continuous motion, into the user's gaze trajectory over the view pattern only after the eye tracking module 1322 has sequentially obtained the plurality of first positions of the eye object in the image sequence. In another embodiment, the mapping conversion module 1323 may instead operate in synchronization with the eye tracking module 1322's acquisition of each first position of the eye object, performing the coordinate conversion on the first positions one by one so as to obtain the user's gaze trajectory. Those applying this embodiment may decide, according to their design requirements, the manner in which the gaze trajectory is obtained; the invention is not limited in this respect.
On the other hand, the example of FIG. 6E illustrates the case in which the eye object 610 includes a pupil 612 and two reflective spots 614 and 616. Similarly to the foregoing embodiments, the eye tracking module 1322 may use the black-and-white contrast variation of the image (for example, binarizing the image of the eye object 610) to find the pupil 612 and the reflective spots 614 and 616 in the eye object 610, and obtain their individual center points to represent the respective positions of the pupil 612 and the reflective spots 614 and 616. The mapping conversion module 1323 can compute the area of the triangle formed when the pupil 612 and the reflective spots 614 and 616 are not collinear, and can adaptively correct errors that may be caused by movement of the user's head. In detail, in an area algorithm that accounts for such errors, after the area of the triangle formed by the pupil 612 and the reflective spots 614 and 616 is computed, the computed area is further normalized by a normalization factor. For example, the normalization factor may be the square of the distance between the reflective spots 614 and 616 divided by 2, and the mapping conversion module 1323 divides the previously computed triangle area by this normalization factor, performing the error correction that yields the area parameter. On the other hand, the mapping conversion module 1323 uses the pupil 612 and the reflective spots 614 and 616 to compute angle parameters. In an embodiment, the mapping conversion module 1323 may obtain a first included angle α1 between a horizontal line H passing laterally through the center point of the pupil 612 and the line connecting the pupil 612 with the reflective spot 614. The mapping conversion module 1323 may likewise obtain a second included angle α2 between the horizontal line H and the line connecting the pupil 612 with the reflective spot 616. In other words, the angle parameter may be the first included angle α1, the second included angle α2, or the difference between them (i.e., α2 - α1, the angle at the pupil 612 between the line connecting the pupil 612 with the reflective spot 614 and the line connecting the pupil 612 with the reflective spot 616).
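The normalized-area and angle parameters described above can be sketched as follows, assuming 2-D image coordinates for the pupil center and the two glints. The normalization factor (|g1 − g2|²/2) follows the example given in the text; the function names are illustrative.

```python
import math

def normalized_area(pupil, g1, g2):
    """Triangle area of (pupil, glint1, glint2), divided by the
    normalization factor |g1 - g2|^2 / 2 to cancel head-distance scaling."""
    (px, py), (x1, y1), (x2, y2) = pupil, g1, g2
    area = abs((x1 - px) * (y2 - py) - (x2 - px) * (y1 - py)) / 2.0
    norm = ((x1 - x2) ** 2 + (y1 - y2) ** 2) / 2.0
    return area / norm

def glint_angles(pupil, g1, g2):
    """Angles alpha1, alpha2 (degrees) of each glint relative to a
    horizontal line through the pupil centre."""
    px, py = pupil
    a1 = math.degrees(math.atan2(g1[1] - py, g1[0] - px))
    a2 = math.degrees(math.atan2(g2[1] - py, g2[0] - px))
    return a1, a2
```

Because both the area and the angles are ratios/directions relative to the glint pair, uniform scaling of the whole eye image (head moving toward or away from the camera) leaves them unchanged, which is the point of the normalization.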
Therefore, on first use, the mapping conversion module 1323 may perform calibration through the calibration regions 1 to 18, or the calibration points, shown in FIG. 6F, so as to obtain calibration coordinates on the angle-area plane. Similarly to the training phase of the foregoing embodiments, the distribution of the areas and angles formed by the user's pupil 612 and reflective spots 614 and 616 for each region is plotted in FIG. 6G, and the mapping conversion module 1323 maps these original coordinates to the coordinates corresponding to the calibration regions 1 to 18 so as to obtain the coefficients required for coordinate conversion (for example, for an affine transformation method). In this way, after entering the application phase, the mapping conversion module 1323 can complete the coordinate conversion for the pupil 612 and the reflective spots 614 and 616 using the coefficients obtained for coordinate conversion.
The embodiments of FIG. 7 to FIG. 9 are practical application examples of the above eye-motion-based control method. FIG. 7 is described first as an example; FIG. 7 is a schematic diagram of an access control system controlled based on eye motion according to an embodiment of the invention.
The access control system 700 includes an image capturing unit 710, a processing unit 720, a lock 730, a prompting unit 740, and a door body 750. The lock 730 is disposed on the door body 750 and used to control the opening and closing of the door body 750. In this embodiment, the prompting unit 740 is, for example, an indicator light device, and indicates through its lights whether the current state of the lock 730 is the locked state or the unlocked state; however, the embodiments of the invention do not limit the form of the prompting unit.
In this embodiment, the image capturing unit 710 is disposed, for example, on the door body 750 and is covered by a cover 760, exposing an image capture area only in the image capturing direction of the image capturing unit 710, so that the user can align the eye region with the image capture area. The image capturing unit 710 can thus capture the image sequence specifically over the user's eye region, preventing others from peeping and thereby improving the security of the access control system 700. Here, whether to include the cover 760 is likewise a choice the designer may make according to design requirements, and the invention is not limited thereto.
Based on the architecture of the access control system 700, the processing unit 720 can use the image sequence captured by the image capturing unit 710, read the program modules recorded in the storage unit as described in the previous embodiments, and detect the eye motions made by the user according to the step flows of the embodiments of FIG. 2 to FIG. 6D, so as to determine whether the user's gaze trajectory matches the preset track. The processing unit 720 can then decide, according to the determination result, whether to issue a corresponding control signal to switch the lock 730 out of the locked state, so that a user whose gaze trajectory matches the preset track can open the door body 750 and enter the area behind it.
FIG. 8A and FIG. 8B show another example of the invention, illustrating a handheld eye-controlled eyepiece device 800 controlled based on eye motion according to an embodiment of the invention. The handheld eye-controlled eyepiece device 800 includes a processing unit, an image capturing unit 810, a display unit 820, a housing 830, a mirror 840, and a light source 850, and is connected to a security apparatus (for example, a safe 860) to authenticate the user. In this embodiment, the handheld eye-controlled eyepiece device 800 may be connected to the safe 860 through a wireless transmission interface (for example, short-distance wireless communication, radio frequency identification (RFID), Bluetooth, or Wi-Fi). The image capturing unit 810 and the light source 850 are both disposed within the housing 830, adjacent to the position of the window of the handheld eye-controlled eyepiece device 800. The light source 850 may be turned on when the image capturing unit 810 captures the user's eye image, or when the user approaches, so as to provide sufficient brightness; the invention is not limited thereto. The display unit 820 is disposed within the housing 830 and can be used to display a screen concerning password input information. The mirror 840 reflects the screen displayed by the display unit 820 toward the window of the handheld eye-controlled eyepiece device 800, so that the user can view, through the window, the screen content displayed by the display unit 820. The password input information is, for example, text such as "please enter the password" prompting the user to begin eye-motion input, or text indicating whether the password entered by the user is correct; the invention is not limited thereto.
Therefore, when the user wants to use the safe 860, the user can hold the handheld eye-controlled eyepiece device 800 against the eye and make eye movements. Similarly, based on the architecture of the handheld eye-controlled eyepiece device 800, the image capturing unit 810 photographs the user's eye and captures an image sequence, and the processing unit can read the program modules recorded in the storage unit described in the previous embodiments, so as to detect the eye movements made by the user according to the steps of the embodiments of FIG. 2 to FIG. 6D, thereby obtaining the gaze trajectory of the user watching the view pattern displayed by the display unit 820. The gaze trajectory serves as the input password of the safe 860 and is compared with a preset security code in trajectory form. When the security code matches the input password, the processing unit can generate verification-success information and transmit it to the safe 860, so as to open its lock. Other details of this embodiment are similar to those of the above embodiments, so reference can be made to the foregoing.
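The gaze-password comparison for the safe can be sketched as below. This is an assumption-laden illustration: the mapping from view-pattern cells to digits, the stored code, and all names are invented for the example; the patent only specifies that the gaze trajectory is compared with a preset security code in trajectory form.

```python
# Hypothetical sketch of the safe-unlocking flow: the sequence of
# view-pattern positions the user fixates on is treated as the input
# password and compared with the stored trajectory-form security code,
# as done for safe 860. The cell-to-digit layout is illustrative.

SECURITY_CODE = [1, 9, 8, 6]  # stored preset trajectory, expressed as digits

def read_gaze_password(fixations):
    """Map each fixated view-pattern cell to a digit (illustrative layout)."""
    digit_of_cell = {"A": 1, "B": 9, "C": 8, "D": 6}
    return [digit_of_cell[cell] for cell in fixations]

def verify(fixations):
    """Return the verification message sent to the security device."""
    if read_gaze_password(fixations) == SECURITY_CODE:
        return "verification-success"
    return "verification-failed"
```

In this sketch the "verification-success" message stands in for the information the processing unit transmits to the safe to open its lock.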
FIG. 9 is a schematic diagram of an eye control device 900, controlled based on eye movements, according to still another embodiment of the invention. The eye control device 900 includes an image capturing unit 910 and a processing unit. In this embodiment, the eye tracking module provides another way of tracking the eye object. In detail, the image capturing unit 910 may have a lens whose direction and angle are rotatably adjustable, so that the lens can be adjusted to look up at the user's face. For example, the lens of the image capturing unit 910 can face the user's face at an elevation angle of 45 degrees to enhance the recognizability of the user's nostril image, which helps obtain the nostril image as a reference feature for locating the eye object.
In detail, the processing unit can exploit the characteristic that the nostril regions are significantly darker than other regions, and use the intersection of the longest horizontal axis and the longest vertical axis of each nostril region as the center point of that nostril. Next, the eye tracking module calculates the spacing D between the two nostril center points to determine a starting-point coordinate (s1, t1), and then calculates a reference-point coordinate (s2, t2) from the spacing and the starting-point coordinate, where s2 = s1 + k1 × D, t2 = t1 + k2 × D, k1 = 1.6 to 1.8, k2 = 1.6 to 1.8, and preferably k1 = k2. The values of k1 and k2 can be obtained from statistical results, and the reference-point coordinate (s2, t2) obtained from the above relations falls approximately at the center point of an eye object in the facial image. In this way, by using the nostril features and statistical eye characteristics, the eye tracking module can accurately locate the user's eye object.
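The coordinate relations above can be worked through in a few lines. This is a minimal sketch under stated assumptions: the starting point (s1, t1) is taken here as the midpoint of the two nostril centers (the text does not fully specify how it is derived from the spacing), and k1 = k2 = 1.7 is simply the middle of the 1.6–1.8 range given; the function name and coordinates are illustrative.

```python
# Sketch of the nostril-based eye localization: from the two nostril
# center points, compute their spacing D and a starting point (s1, t1),
# then apply s2 = s1 + k1*D, t2 = t1 + k2*D to estimate the eye-object
# center (s2, t2).

import math

def eye_reference_point(nostril_left, nostril_right, k1=1.7, k2=1.7):
    """Estimate the eye-object center (s2, t2) from two nostril centers."""
    # Spacing D between the two nostril center points.
    d = math.dist(nostril_left, nostril_right)
    # Starting point (s1, t1): assumed here to be the nostril midpoint.
    s1 = (nostril_left[0] + nostril_right[0]) / 2
    t1 = (nostril_left[1] + nostril_right[1]) / 2
    # Reference point per the stated relations.
    return (s1 + k1 * d, t1 + k2 * d)
```

For nostril centers 10 pixels apart, the estimated eye center is offset from the midpoint by 17 pixels along each axis, matching k × D = 1.7 × 10.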
Thereby, based on the hardware architecture of the eye control device 900, the processing unit can similarly read the program modules recorded in the storage unit described in the previous embodiments, detect the eye movements made by the user according to the steps of the embodiments of FIG. 2 to FIG. 6D, and accordingly obtain the gaze trajectory of the user watching the view pattern to determine whether it conforms to the preset trajectory. The eye control device 900 can thus perform a corresponding operation according to the user's gaze trajectory, for example, detecting the gaze trajectory to unlock a computer screen, or operating the cursor of a computer window.
In summary, the eye-movement-based control method and the apparatus using the same proposed in the embodiments of the invention can detect the gaze trajectory of a user watching a view pattern, and, when the gaze trajectory is determined to conform to the preset trajectory defined relative to the view pattern, the apparatus performs the corresponding operation. Thereby, the embodiments of the invention can use the gaze trajectory as a signal trigger, and can be widely applied in fields such as security systems and eye-controlled computers.
Although the invention has been disclosed above by way of embodiments, they are not intended to limit the invention. Anyone with ordinary knowledge in the art may make some changes and refinements without departing from the spirit and scope of the invention; therefore, the protection scope of the invention shall be defined by the appended claims.
S310~S350: Steps
Claims (12)
Priority Applications (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| TW104124060A TWI533234B (en) | 2015-07-24 | 2015-07-24 | Control method based on eye's motion and apparatus using the same |
| CN201510568143.7A CN106371565A (en) | 2015-07-24 | 2015-09-09 | Control method based on eye movement and device applied by control method |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| TW104124060A TWI533234B (en) | 2015-07-24 | 2015-07-24 | Control method based on eye's motion and apparatus using the same |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| TWI533234B true TWI533234B (en) | 2016-05-11 |
| TW201705038A TW201705038A (en) | 2017-02-01 |
Family
ID=56509269
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| TW104124060A TWI533234B (en) | 2015-07-24 | 2015-07-24 | Control method based on eye's motion and apparatus using the same |
Country Status (2)
| Country | Link |
|---|---|
| CN (1) | CN106371565A (en) |
| TW (1) | TWI533234B (en) |
Cited By (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US10817595B2 (en) | 2019-02-14 | 2020-10-27 | Nanning Fugui Precision Industrial Co., Ltd. | Method of device unlocking and device utilizing the same |
Families Citing this family (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN106933349A (en) * | 2017-02-06 | 2017-07-07 | 歌尔科技有限公司 | Unlocking method, device and virtual reality device for virtual reality device |
| CN108733203A (en) * | 2017-04-20 | 2018-11-02 | 上海耕岩智能科技有限公司 | A kind of method and apparatus of eyeball tracking operation |
| CN110069960A (en) * | 2018-01-22 | 2019-07-30 | 北京亮亮视野科技有限公司 | Filming control method, system and intelligent glasses based on sight motion profile |
| CN109144250B (en) * | 2018-07-24 | 2021-12-21 | 北京七鑫易维信息技术有限公司 | Position adjusting method, device, equipment and storage medium |
| CN111951454B (en) * | 2020-10-16 | 2021-01-05 | 兰和科技(深圳)有限公司 | Fingerprint biological identification unlocking device of intelligent access control and judgment method thereof |
| CN113253846B (en) * | 2021-06-02 | 2024-04-12 | 樊天放 | HID interaction system and method based on gaze deflection trend |
Family Cites Families (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN101807110B (en) * | 2009-02-17 | 2012-07-04 | 由田新技股份有限公司 | Pupil positioning method and system |
| TWI398796B (en) * | 2009-03-27 | 2013-06-11 | Utechzone Co Ltd | Pupil tracking methods and systems, and correction methods and correction modules for pupil tracking |
| CN103699210A (en) * | 2012-09-27 | 2014-04-02 | 北京三星通信技术研究有限公司 | Mobile terminal and control method thereof |
| CN103902029B (en) * | 2012-12-26 | 2018-03-27 | 腾讯数码(天津)有限公司 | A kind of mobile terminal and its unlocking method |
Legal events (2015)
- 2015-07-24: TW application TW104124060A, patent TWI533234B, active
- 2015-09-09: CN application CN201510568143.7A, publication CN106371565A, not active (withdrawn)
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US10817595B2 (en) | 2019-02-14 | 2020-10-27 | Nanning Fugui Precision Industrial Co., Ltd. | Method of device unlocking and device utilizing the same |
| TWI727364B (en) * | 2019-02-14 | 2021-05-11 | 新加坡商鴻運科股份有限公司 | Method of device unlocking and electronic device utilizing the same |
Also Published As
| Publication number | Publication date |
|---|---|
| TW201705038A (en) | 2017-02-01 |
| CN106371565A (en) | 2017-02-01 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| TWI533234B (en) | Control method based on eye's motion and apparatus using the same | |
| US12223760B2 (en) | Systems and methods for performing fingerprint based user authentication using imagery captured using mobile devices | |
| EP2680191B1 (en) | Facial recognition | |
| EP2680192B1 (en) | Facial recognition | |
| US9361507B1 (en) | Systems and methods for performing fingerprint based user authentication using imagery captured using mobile devices | |
| EP2883189B1 (en) | Spoof detection for biometric authentication | |
| US11449590B2 (en) | Device and method for user authentication on basis of iris recognition | |
| HK40069201A (en) | Methods and systems for performing fingerprint identification | |
| HK1246928B (en) | Systems and methods for performing fingerprint based user authentication using imagery captured using mobile devices | |
| HK1246928A1 (en) | Systems and methods for performing fingerprint based user authentication using imagery captured using mobile devices | |
| HK1211721B (en) | Methods and systems for spoof detection for biometric authentication |