TWI748596B - Eye center positioning method and system thereof - Google Patents
- Publication number
- TWI748596B (application TW109127247A)
- Authority
- Taiwan (TW)
- Prior art keywords
- eye center
- eye
- center
- model
- image
Landscapes
- Image Analysis (AREA)
- Image Processing (AREA)
Description
The present invention relates to an eye center positioning method and a system thereof, and more particularly to an eye center positioning method and system that use a generative adversarial network and depth correspondence points.
Eye center (pupil) localization is the first and most important step in many vision applications, such as face recognition systems and facial expression analysis. Iris detection and localization likewise depend on eye center localization, and its accuracy directly affects every subsequent processing stage. However, conventional eye center localization methods are designed almost exclusively for frontal faces or faces with limited yaw rotation.
Under a large yaw-rotation angle, a multi-view face may have an eye occluded by the bridge of the nose, by shadow, or by eyeglasses; an eye may even vanish completely due to full occlusion, closed eyelids, sunglasses, or reflections on eyeglass lenses. Face rotation and interference from foreign objects prevent the eye region from being displayed completely, which greatly increases the difficulty of eye localization and easily causes localization errors.
To solve the above problems, the present invention provides an eye center positioning method and a system thereof. To avoid the effect of an eye disappearing due to occlusion, the method applies the Complete Representation Generative Adversarial Network (CR-GAN) and converts the depth correspondence points between the original face region and the newly generated frontal face, so as to locate the eye center effectively.
According to one embodiment of the method aspect of the present invention, an eye center positioning method is provided, which is executed by an eye center positioning system that includes a processor and a memory. The eye center positioning method includes the following steps: an extraction step, a generation step, a positioning step, a conversion step, and a correction step. The extraction step drives the processor to extract an input image, which is stored in the memory, according to a face recognition model to produce a first image. The generation step drives the processor to regenerate the first image into a second image according to a generative adversarial network model. The positioning step drives the processor to locate the eye region of the second image according to a gradient model to produce an initial eye center. The conversion step drives the processor to convert the initial eye center according to a conversion model to produce a deep eye center corresponding to the first image. The correction step drives the processor to correct the deep eye center according to a correction model to produce a precise eye center.
In this way, the eye center positioning method of the present invention uses a generative adversarial network to regenerate the original face image into a newly generated frontal face image, produces the deep eye center corresponding to the original face image through the processor, and finally obtains the corrected precise eye center.
According to the eye center positioning method of the preceding embodiment, the first image has a first coarse eye center, and the second image has a second coarse eye center. The conversion model includes a first operation model and a second operation model, and the conversion step includes a rotation variable detection step and a position prediction step. The rotation variable detection step drives the processor to compute the linear relationship between the first coarse eye center and the second coarse eye center according to the first operation model to produce depth rotation variables. The position prediction step drives the processor to compute the initial eye center according to the second operation model and the depth rotation variables to produce the deep eye center.
According to the eye center positioning method of the preceding embodiment, the first operation model includes a first equation for the first coarse eye center, second and third equations for the second coarse eye center, a first slope, a second slope, a third slope, and the depth rotation variables; the depth rotation variables are expressed in terms of the three slopes.
According to the eye center positioning method of the preceding embodiment, the second operation model includes the initial eye center, the second coarse eye center, the depth rotation variables, and the deep eye center; the deep eye center is computed from the initial eye center, the second coarse eye center, and the depth rotation variables.
According to the eye center positioning method of the preceding embodiment, the first image has a first coarse eye center, and the correction step includes a conversion sub-step and a correction sub-step. The conversion sub-step drives the processor to locate the eye region of the first image according to the gradient model to produce a boundary eye center, to compute the difference between the boundary eye center and the first coarse eye center, and finally to convert the difference according to the conversion model to produce a correction value. The correction sub-step drives the processor to correct the deep eye center according to the correction model and the correction value to produce the precise eye center.
According to the eye center positioning method of the preceding embodiment, the correction model includes the precise eye center, the correction value, the boundary eye center, the first coarse eye center, the deep eye center, and the depth rotation variables; the precise eye center is computed from the deep eye center, the correction value, and the depth rotation variables.
According to one embodiment of the structural aspect of the present invention, an eye center positioning system is provided for locating the precise eye center of an input image. The eye center positioning system includes a memory and a processor, in which the memory stores the input image, a face recognition model, a generative adversarial network model, a gradient model, a conversion model, and a correction model. The processor is electrically connected to the memory and extracts the input image according to the face recognition model to produce a first image. The processor regenerates the first image into a second image according to the generative adversarial network model, locates the eye region of the second image according to the gradient model to produce an initial eye center, converts the initial eye center according to the conversion model to produce a deep eye center corresponding to the first image, and then corrects the deep eye center according to the correction model to produce a precise eye center.
In this way, the processor locates the eye center of the input image by using the face recognition model, the generative adversarial network model, the gradient model, the conversion model, and the correction model stored in the memory.
According to the eye center positioning system of the preceding embodiment, the first image has a first coarse eye center, the second image has a second coarse eye center, and the conversion model includes a first operation model and a second operation model. The processor computes the linear relationship between the first coarse eye center and the second coarse eye center according to the first operation model to produce depth rotation variables, and computes the initial eye center according to the second operation model and the depth rotation variables to produce the deep eye center.
According to the eye center positioning system of the preceding embodiment, the first operation model includes a first equation for the first coarse eye center, second and third equations for the second coarse eye center, a first slope, a second slope, a third slope, and the depth rotation variables; the depth rotation variables are expressed in terms of the three slopes.
According to the eye center positioning system of the preceding embodiment, the second operation model includes the initial eye center, the second coarse eye center, the depth rotation variables, and the deep eye center; the deep eye center is computed from the initial eye center, the second coarse eye center, and the depth rotation variables.
According to the eye center positioning system of the preceding embodiment, the first image has a first coarse eye center, and the processor locates the eye region of the first image according to the gradient model to produce a boundary eye center and computes the difference between the boundary eye center and the first coarse eye center. The processor converts the difference according to the conversion model to produce a correction value, and then corrects the deep eye center according to the correction model and the correction value to produce the precise eye center.
According to the eye center positioning system of the preceding embodiment, the correction model includes the precise eye center, the correction value, the boundary eye center, the first coarse eye center, the deep eye center, and the depth rotation variables; the precise eye center is computed from the deep eye center, the correction value, and the depth rotation variables.
A plurality of embodiments of the present invention will be described below with reference to the drawings. For clarity, many practical details are explained in the following description. It should be understood, however, that these practical details should not be used to limit the present invention; that is, in some embodiments of the present invention, these practical details are unnecessary. In addition, to simplify the drawings, some conventional structures and elements are drawn schematically, and repeated elements may be denoted by the same reference numerals.
Furthermore, when an element (or a mechanism or a module) is described herein as being "connected", "disposed", or "coupled" to another element, it may mean that the element is directly connected, directly disposed, or directly coupled to the other element, or that the element is indirectly connected, indirectly disposed, or indirectly coupled to the other element, i.e., that other elements are interposed between them. Only when an element is explicitly described as being "directly connected", "directly disposed", or "directly coupled" to another element does it mean that no other element is interposed between them. The terms first, second, third, and so on are used merely to distinguish different elements or components and impose no limitation on the elements or components themselves; accordingly, a first element/component may also be renamed a second element/component. Moreover, the combinations of elements, components, mechanisms, and modules herein are not generally well-known, conventional, or customary combinations in this field, and whether such a combination could easily be accomplished by a person having ordinary skill in the art cannot be judged merely from whether the individual elements, components, mechanisms, or modules themselves are known.
Please refer to FIG. 1 and FIG. 2, in which FIG. 1 is a block diagram of an eye center positioning system 100 according to an embodiment of the structural aspect of the present invention, and FIG. 2 is a schematic view of an input image 200, a first image 210, and a second image 220 of the eye center positioning system 100 of FIG. 1. As shown in the figures, the eye center positioning system 100 is used to locate a precise eye center 250 of the input image 200 and includes a processor 110 and a memory 120.
The memory 120 stores the input image 200, a face recognition model 121, a generative adversarial network model 122, a gradient model 123, a conversion model 124, and a correction model 125, and the processor 110 is electrically connected to the memory 120. First, the processor 110 extracts the input image 200 according to the face recognition model 121 to produce the first image 210. Next, the processor 110 regenerates the first image 210 into the second image 220 according to the generative adversarial network model 122 and locates the eye region (not separately labeled) of the second image 220 according to the gradient model 123 to produce an initial eye center 230. The processor 110 then converts the initial eye center 230 according to the conversion model 124 to produce a deep eye center 240 corresponding to the first image 210. Finally, the processor 110 corrects the deep eye center 240 according to the correction model 125 to produce the precise eye center 250. In this way, the processor 110 locates the eye center of the input image 200 and outputs the precise eye center 250 by using the face recognition model 121, the generative adversarial network model 122, the gradient model 123, the conversion model 124, and the correction model 125 stored in the memory 120.
Please refer to FIG. 1 to FIG. 3, in which FIG. 3 is a flowchart of an eye center positioning method S100 according to an embodiment of the method aspect of the present invention. As shown, the eye center positioning method S100 can be performed by the eye center positioning system 100 and includes the following steps: an extraction step S110, a generation step S120, a positioning step S130, a conversion step S140, and a correction step S150.
The extraction step S110 drives the processor 110 to extract the input image 200, which is stored in the memory 120, according to the face recognition model 121 to produce the first image 210; in other words, the extraction step S110 extracts the face region from the input image 200. The generation step S120 drives the processor 110 to regenerate the first image 210 into the second image 220 according to the generative adversarial network model 122. The positioning step S130 drives the processor 110 to locate the eye region of the second image 220 according to the gradient model 123 to produce the initial eye center 230. The conversion step S140 drives the processor 110 to convert the initial eye center 230 according to the conversion model 124 to produce the deep eye center 240 corresponding to the first image 210. The correction step S150 drives the processor 110 to correct the deep eye center 240 according to the correction model 125 to produce the precise eye center 250.
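The patent describes steps S110–S150 only in prose. The following minimal Python sketch shows one way the flow could be wired together; the five model objects and their methods (`extract_face`, `generate_frontal`, `locate`, `to_original_view`, `refine`) are hypothetical placeholders for components the patent only names, not an API it defines.

```python
import numpy as np

def locate_eye_center(input_image: np.ndarray,
                      face_model, crgan_model,
                      gradient_model, conversion_model, correction_model):
    """Hypothetical sketch of the five-step pipeline (S110-S150)."""
    # S110 extraction: crop the face region from the input image
    first_image = face_model.extract_face(input_image)
    # S120 generation: re-render a frontal face with CR-GAN
    second_image = crgan_model.generate_frontal(first_image)
    # S130 positioning: gradient-based eye localization on the frontal face
    initial_center = gradient_model.locate(second_image)
    # S140 conversion: map the frontal estimate back to the original pose
    deep_center = conversion_model.to_original_view(initial_center,
                                                    first_image, second_image)
    # S150 correction: compensate for CR-GAN generation instability
    boundary_center = gradient_model.locate(first_image)
    precise_center = correction_model.refine(deep_center, boundary_center)
    return precise_center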
In this way, the eye center positioning method S100 of the present invention uses the generative adversarial network model 122 to regenerate the original face image (the first image 210) into a frontal face image (the second image 220), produces the deep eye center 240 corresponding to the original face image through the processor 110, and finally obtains the corrected precise eye center 250 through the correction model 125.
In detail, the generative adversarial network model 122 applies the Complete Representation Generative Adversarial Network (CR-GAN) method to regenerate the first image 210 into the second image 220. When the original face image exhibits a large yaw rotation angle or a completely vanished eye, the frontal face image produced by the CR-GAN method recovers a complete representation from the incomplete eye region (as shown in FIG. 2). Therefore, the eye center of the frontal face can be located more accurately and reasonably from the frontal view of the same original face. In addition, the gradient model 123 locates the initial eye center 230 of the second image 220 based on a gradient method; since the gradient method is a well-known technique and not the focus of the present invention, its details are omitted here.
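For concreteness, below is a minimal NumPy sketch of the common means-of-gradients eye localizer that "gradient method" usually refers to (the objective of Timm and Barth); the function name, the 0.3 gradient threshold, and the brute-force search are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def gradient_eye_center(eye_gray: np.ndarray) -> tuple[int, int]:
    """Locate the eye center in a grayscale eye patch by finding the point
    that most strong gradient vectors point toward (means of gradients).
    Brute-force O((H*W)^2) sketch, intended for small eye patches only."""
    gy, gx = np.gradient(eye_gray.astype(np.float64))
    mag = np.hypot(gx, gy)
    ys, xs = np.nonzero(mag > 0.3 * mag.max())    # keep strong gradients only
    g = np.stack([gx[ys, xs], gy[ys, xs]], axis=1)
    g /= np.linalg.norm(g, axis=1, keepdims=True)

    h, w = eye_gray.shape
    best, best_score = (w // 2, h // 2), -np.inf
    for cy in range(h):
        for cx in range(w):
            d = np.stack([xs - cx, ys - cy], axis=1).astype(np.float64)
            norms = np.linalg.norm(d, axis=1)
            keep = norms > 0                      # skip the candidate pixel
            d = d[keep] / norms[keep, None]
            # mean squared (positive) dot product of displacement and gradient
            score = np.mean(np.maximum((d * g[keep]).sum(axis=1), 0.0) ** 2)
            if score > best_score:
                best_score, best = score, (cx, cy)
    return best                                   # (x, y) estimate
```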
Please refer to FIG. 1 to FIG. 4, in which FIG. 4 is a flowchart of the conversion step S140 of the eye center positioning method S100 of the embodiment of FIG. 3. Notably, the first image 210 has a first coarse eye center CEC₁ and the second image 220 has a second coarse eye center CEC₂; both CEC₁ and CEC₂ are the centers of the eye regions rather than the pupil centers. More particularly, in the conversion step S140, the conversion model 124 may include a first operation model (not separately shown) and a second operation model (not separately shown), and the conversion step S140 may include a rotation variable detection step S141 and a position prediction step S142.
The rotation variable detection step S141 drives the processor 110 to compute the linear relationship between the first coarse eye center CEC₁ and the second coarse eye center CEC₂ according to the first operation model to produce the depth rotation variables (not separately shown). Specifically, the first operation model may include a first equation for the first coarse eye center CEC₁, second and third equations for the second coarse eye center CEC₂, the first slope of the first equation, the second slope of the second equation, the third slope of the third equation, and the depth rotation variables, which together satisfy formula (1).

In detail, the rotation variable detection step S141 computes one depth rotation variable from the two linear equations on the X and Z axes (the first and second equations) and the other depth rotation variable from the two linear equations on the X and Y axes (the first and third equations); the 2D image can therefore be converted into 3D coordinates to effectively describe the rotation relationship between the first image 210 and the second image 220.
Subsequently, the position prediction step S142 drives the processor 110 to compute the initial eye center 230 according to the second operation model and the depth rotation variables to produce the deep eye center 240. Specifically, the second operation model may include the initial eye center 230, the second coarse eye center CEC₂ with its X-axis and Y-axis coordinate values, the depth rotation variables, and the deep eye center 240, which together satisfy formula (2).
In detail, the position prediction step S142 predicts the unknown location through the depth rotation variables between the first image 210 and the second image 220; the right eye is used as an example here, but the present invention is not limited thereto. The initial eye center 230 predicted from the frontal view of the second image 220 is converted through formula (2) with the rotation variables to obtain the deep eye center 240 of the right-eye region. However, because CR-GAN generation is unstable, the deep eye center 240 cannot serve as the final localization result. Therefore, an additional correction value (not separately shown) must be computed to ensure that the eye center localization in the frontal view of the second image 220 and in the first image 210 refers to the same position.
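Formula (2) is likewise not reproduced above. A minimal sketch of one plausible form, assuming the deep eye center $P_{\text{deep}}$ is obtained by foreshortening the offset of the initial eye center $P_{\text{init}}$ from the second coarse eye center $C_2$ by the yaw and pitch angles; again an assumption, not the patent's verbatim formula:

```latex
% Hypothetical stand-in for the missing formula (2).
P_{\text{deep}} =
\begin{pmatrix} \cos\theta & 0 \\ 0 & \cos\varphi \end{pmatrix}
\bigl(P_{\text{init}} - C_2\bigr) + C_2
```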
Please refer to FIG. 1 to FIG. 5, in which FIG. 5 is a flowchart of the correction step S150 of the eye center positioning method S100 of the embodiment of FIG. 3. As shown, the correction step S150 may include a conversion sub-step S151 and a correction sub-step S152. The conversion sub-step S151 drives the processor 110 to locate the eye region of the first image 210 according to the gradient model 123 to produce a boundary eye center (not separately shown), to compute the coordinate difference between the boundary eye center and the first coarse eye center CEC₁, and finally to convert the difference according to the conversion model 124 to produce a correction value. The correction sub-step S152 drives the processor 110 to correct the deep eye center 240 according to the correction model 125 and the correction value to produce the precise eye center 250.
Specifically, the correction model 125 may include the precise eye center 250, the correction value, the boundary eye center, the first coarse eye center CEC₁, the deep eye center 240, and the depth rotation variables, which together satisfy formula (3).
In detail, the boundary eye center is obtained from the view of the first image 210, the difference between the boundary eye center and the first coarse eye center CEC₁ is computed, and the difference is then converted into the depth coordinates as the correction value. Finally, the correction value is added to the deep eye center 240 of the second image 220 and converted back to the viewing angle of the first image 210 as the true precise localization, producing the precise eye center 250 of the input image 200. In this way, both a large yaw rotation of the face and a completely occluded eye can be handled at the same time, so that the eye center is located effectively and accurately.
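Formula (3) is also not reproduced above. Under the same assumed notation, with boundary eye center $B$ and first coarse eye center $C_1$, one plausible hedged form maps the offset $B - C_1$ into the depth coordinates as the correction value $e$ and adds it to the deep eye center; this is an illustration, not the patent's verbatim formula:

```latex
% Hypothetical stand-in for the missing formula (3).
e = \begin{pmatrix} \cos\theta & 0 \\ 0 & \cos\varphi \end{pmatrix}
    \bigl(B - C_1\bigr),
\qquad
P_{\text{precise}} = P_{\text{deep}} + e
```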
Although the present invention has been disclosed in the above embodiments, they are not intended to limit the present invention. Anyone skilled in the art can make various changes and modifications without departing from the spirit and scope of the present invention; therefore, the scope of protection of the present invention shall be defined by the appended claims.
100: eye center positioning system; 110: processor; 120: memory; 121: face recognition model; 122: generative adversarial network model; 123: gradient model; 124: conversion model; 125: correction model; 200: input image; 210: first image; 220: second image; 230: initial eye center; 240: deep eye center; 250: precise eye center; S100: eye center positioning method; S110: extraction step; S120: generation step; S130: positioning step; S140: conversion step; S141: rotation variable detection step; S142: position prediction step; S150: correction step; S151: conversion sub-step; S152: correction sub-step; CEC₁: first coarse eye center; CEC₂: second coarse eye center
FIG. 1 is a block diagram of an eye center positioning system according to an embodiment of the structural aspect of the present invention; FIG. 2 is a schematic view of the input image, the first image, and the second image of the eye center positioning system of the embodiment of FIG. 1; FIG. 3 is a flowchart of an eye center positioning method according to an embodiment of the method aspect of the present invention; FIG. 4 is a flowchart of the conversion step of the eye center positioning method of the embodiment of FIG. 3; and FIG. 5 is a flowchart of the correction step of the eye center positioning method of the embodiment of FIG. 3.
Claims (10)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| TW109127247A TWI748596B (en) | 2020-08-11 | 2020-08-11 | Eye center positioning method and system thereof |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| TW109127247A TWI748596B (en) | 2020-08-11 | 2020-08-11 | Eye center positioning method and system thereof |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| TWI748596B (en) | 2021-12-01 |
| TW202207076A TW202207076A (en) | 2022-02-16 |
Family
- ID: 80680843
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| TW109127247A TWI748596B (en) | 2020-08-11 | 2020-08-11 | Eye center positioning method and system thereof |
Country Status (1)
| Country | Link |
|---|---|
| TW (1) | TWI748596B (en) |
Cited By (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US12400429B2 (en) | 2023-02-24 | 2025-08-26 | National Sun Yat-Sen University | Method and electrical device for training cross-domain classifier |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN105205480A (en) * | 2015-10-31 | 2015-12-30 | 潍坊学院 | Complex scene human eye locating method and system |
| US20190096135A1 (en) * | 2017-09-26 | 2019-03-28 | Aquifi, Inc. | Systems and methods for visual inspection based on augmented reality |
| TWI668639B (en) * | 2018-05-30 | 2019-08-11 | 瑞軒科技股份有限公司 | Facial recognition system and method |
| TWI687871B (en) * | 2019-03-28 | 2020-03-11 | 國立勤益科技大學 | Image identification system for security protection |
- 2020-08-11: Application TW109127247A filed in Taiwan; granted as TWI748596B (active)
Patent Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN105205480A (en) * | 2015-10-31 | 2015-12-30 | 潍坊学院 | Complex scene human eye locating method and system |
| CN105205480B (en) | 2015-10-31 | 2018-12-25 | 潍坊学院 | Human-eye positioning method and system in a kind of complex scene |
| US20190096135A1 (en) * | 2017-09-26 | 2019-03-28 | Aquifi, Inc. | Systems and methods for visual inspection based on augmented reality |
| TWI668639B (en) * | 2018-05-30 | 2019-08-11 | 瑞軒科技股份有限公司 | Facial recognition system and method |
| TWI687871B (en) * | 2019-03-28 | 2020-03-11 | 國立勤益科技大學 | Image identification system for security protection |
Also Published As
| Publication number | Publication date |
|---|---|
| TW202207076A (en) | 2022-02-16 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| KR102334139B1 (en) | Eye gaze tracking based upon adaptive homography mapping | |
| US11610289B2 (en) | Image processing method and apparatus, storage medium, and terminal | |
| CN111783605B (en) | A method, device, equipment and storage medium for face image recognition | |
| US11181978B2 (en) | System and method for gaze estimation | |
| CN103390152A (en) | Sight tracking system suitable for human-computer interaction and based on system on programmable chip (SOPC) | |
| CN112418251A (en) | Infrared body temperature detection method and system | |
| CN109634431B (en) | Media-free floating projection visual tracking interactive system | |
| CN112017212A (en) | Training and tracking method and system of facial key point tracking model | |
| CN114494347A (en) | Single-camera multi-mode sight tracking method and device and electronic equipment | |
| US20250069183A1 (en) | Method and apparatus of processing image, interactive device, electronic device, and storage medium | |
| CN115049738A (en) | Method and system for estimating distance between person and camera | |
| TWI748596B (en) | Eye center positioning method and system thereof | |
| CN116245940A (en) | Category-level six-degree-of-freedom object pose estimation method based on structure difference perception | |
| CN114419259B (en) | A visual positioning method and system based on physical model imaging simulation | |
| CN104463876A (en) | Adaptive-filtering-based rapid multi-circle detection method for image under complex background | |
| Perra et al. | Adaptive eye-camera calibration for head-worn devices | |
| CN114820513A (en) | Vision detection method | |
| JP6906943B2 (en) | On-board unit | |
| Li et al. | Learning to synthesize photorealistic dual-pixel images from RGBD frames | |
| KR101001184B1 (en) | Iterative 3D Face Pose Estimation Method Using Face Normalization Vectors | |
| CN111866493B (en) | Image correction method, device and equipment based on head-mounted display equipment | |
| CN110298225A (en) | A method of blocking the human face five-sense-organ positioning under environment | |
| CN117930975A (en) | Eyesight protection method and device, eye protection wearable equipment and storage medium | |
| CN116740778A (en) | A method, device, equipment and medium for processing face image samples of people wearing glasses | |
| CN113741682A (en) | Method, device and equipment for mapping fixation point and storage medium |