
TWI748596B - Eye center positioning method and system thereof - Google Patents


Info

Publication number: TWI748596B
Application number: TW109127247A
Authority: TW (Taiwan)
Prior art keywords: eye center, eye, center, model, image
Other languages: Chinese (zh)
Other versions: TW202207076A
Inventors: 許巍嚴, 鍾季叡
Original Assignee: 國立中正大學
Filing date: 2020-08-11 (application filed by 國立中正大學)
Priority: TW109127247A, priority date 2020-08-11
Application granted; publication of TWI748596B and TW202207076A

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The present disclosure provides an eye center positioning method comprising the following steps. First, an extraction step drives a processor to extract an input image according to a face recognition model to generate a first image; the input image is stored in a memory. Next, a generation step drives the processor to regenerate the first image into a second image according to a generative adversarial network model. A positioning step then drives the processor to locate an eye region of the second image according to a gradient model to generate an initial eye center. A conversion step drives the processor to convert the initial eye center according to a conversion model to generate a deep eye center corresponding to the first image. Finally, a correction step drives the processor to correct the deep eye center according to a correction model to generate a precise eye center. The eye center can thereby be positioned effectively.

Description

Eye center positioning method and system thereof

The present invention relates to an eye center positioning method and system thereof, and more particularly to an eye center positioning method and system that use a generative adversarial network and depth corresponding points.

Eye center (pupil) positioning is the first and most important step in many vision applications, such as face recognition systems and facial expression analysis. Iris detection and positioning are likewise inseparable from eye center positioning. The accuracy of eye center positioning directly affects all subsequent processing. However, conventional eye center positioning methods are considered and applied almost exclusively to frontal faces or faces with limited yaw rotation.

Large yaw-rotation angles of multi-view faces include cases where the eyes are occluded by the bridge of the nose, by shadows, or by glasses; completely disappeared eyes include complete block occlusion, closed eyes, sunglasses, and reflections on eyeglasses. Face rotation and interference from foreign objects prevent the eye region from being displayed completely, which greatly increases the difficulty of eye positioning and easily causes positioning errors.

To solve the above problems, the present invention provides an eye center positioning method and system. To avoid the effect of eyes disappearing due to occlusion during eye center positioning, the method uses the Complete Representation Generative Adversarial Network (CR-GAN) and converts the depth corresponding points between the original face region and the newly generated frontal face, so as to position the eye center effectively.

According to an embodiment of the method aspect of the present invention, an eye center positioning method is provided, executed by an eye center positioning system that includes a processor and a memory. The eye center positioning method includes the following steps: an extraction step, a generation step, a positioning step, a conversion step, and a correction step. The extraction step drives the processor to extract an input image according to a face recognition model to generate a first image, the input image being stored in the memory. The generation step drives the processor to regenerate the first image into a second image according to a generative adversarial network model. The positioning step drives the processor to locate an eye region of the second image according to a gradient model to generate an initial eye center. The conversion step drives the processor to convert the initial eye center according to a conversion model to generate a deep eye center corresponding to the first image. The correction step drives the processor to correct the deep eye center according to a correction model to generate a precise eye center.
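Read procedurally, the five steps form a straight pipeline from input image to precise eye center. The following Python sketch shows only that control flow; the component objects and their method names (extract_face, frontalize, locate, convert, correct) are hypothetical stand-ins, since the disclosure names the models but not any concrete implementation.

import numpy as np

def locate_eye_center(input_image: np.ndarray,
                      face_model, gan_model, gradient_model,
                      conversion_model, correction_model) -> tuple:
    # Extraction step: crop the face region from the input image.
    first_image = face_model.extract_face(input_image)
    # Generation step: regenerate a frontal face with the GAN (CR-GAN).
    second_image = gan_model.frontalize(first_image)
    # Positioning step: gradient-based eye localization on the frontal face.
    initial_center = gradient_model.locate(second_image)
    # Conversion step: map the frontal-view center back to the original view.
    deep_center = conversion_model.convert(initial_center, first_image, second_image)
    # Correction step: compensate for GAN instability with a correction value.
    return correction_model.correct(deep_center, first_image)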

Thereby, the eye center positioning method of the present invention uses the generative adversarial network to regenerate the original face image into a newly generated frontal face image, produces the deep eye center corresponding to the original face image through the processor, and finally obtains the corrected precise eye center.

According to the eye center positioning method of the foregoing embodiment, the first image has a first coarse eye center and the second image has a second coarse eye center. The conversion model includes a first computation model and a second computation model, and the conversion step includes a rotation variable detection step and a position prediction step. The rotation variable detection step drives the processor to compute the linear relationship between the first coarse eye center and the second coarse eye center according to the first computation model to produce a depth rotation variable. The position prediction step drives the processor to compute the initial eye center according to the second computation model and the depth rotation variable to produce the deep eye center.

According to the eye center positioning method of the foregoing embodiment, the first computation model includes a first equation L1 of the first coarse eye center, a second equation L2 and a third equation L3 of the second coarse eye center, a first slope m1, a second slope m2, a third slope m3, and the depth rotation variables face θ* and face θ', which together satisfy formula (1) of the detailed description below.

According to the eye center positioning method of the foregoing embodiment, the second computation model includes the initial eye center, denoted IF erC*, the second coarse eye center, the depth rotation variables face θ* and face θ', and the deep eye center, which together satisfy formula (2) of the detailed description below.

According to the eye center positioning method of the foregoing embodiment, the first image has a first coarse eye center, and the correction step includes a conversion sub-step and a correction sub-step. The conversion sub-step drives the processor to locate the eye region of the first image according to the gradient model to produce a boundary eye center, compute the difference between the boundary eye center and the first coarse eye center, and finally convert the difference according to the conversion model to produce a correction value. The correction sub-step drives the processor to correct the deep eye center according to the correction model and the correction value to produce the precise eye center.

According to the eye center positioning method of the foregoing embodiment, the correction model includes the precise eye center, denoted I erC*, the correction value, denoted α4, the boundary eye center, the first coarse eye center, the deep eye center, and the depth rotation variables face θ* and face θ', which together satisfy formula (3) of the detailed description below.

According to an embodiment of the structural aspect of the present invention, an eye center positioning system is provided for locating the precise eye center of an input image. The eye center positioning system includes a memory and a processor, where the memory stores the input image, a face recognition model, a generative adversarial network model, a gradient model, a conversion model, and a correction model. The processor is electrically connected to the memory. The processor extracts the input image according to the face recognition model to generate a first image, regenerates the first image into a second image according to the generative adversarial network model, and locates an eye region of the second image according to the gradient model to generate an initial eye center. The processor converts the initial eye center according to the conversion model to generate a deep eye center corresponding to the first image, and then corrects the deep eye center according to the correction model to generate a precise eye center.

Thereby, the processor locates the eye center of the input image using the face recognition model, the generative adversarial network model, the gradient model, the conversion model, and the correction model stored in the memory.

According to the eye center positioning system of the foregoing embodiment, the first image has a first coarse eye center, the second image has a second coarse eye center, and the conversion model includes a first computation model and a second computation model. The processor computes the linear relationship between the first coarse eye center and the second coarse eye center according to the first computation model to produce a depth rotation variable, and computes the initial eye center according to the second computation model and the depth rotation variable to produce the deep eye center.

According to the eye center positioning system of the foregoing embodiment, the first computation model includes a first equation L1 of the first coarse eye center, a second equation L2 and a third equation L3 of the second coarse eye center, a first slope m1, a second slope m2, a third slope m3, and the depth rotation variables face θ* and face θ', which together satisfy formula (1) of the detailed description below.

According to the eye center positioning system of the foregoing embodiment, the second computation model includes the initial eye center, denoted IF erC*, the second coarse eye center, the depth rotation variables face θ* and face θ', and the deep eye center, which together satisfy formula (2) of the detailed description below.

According to the eye center positioning system of the foregoing embodiment, the first image has a first coarse eye center. The processor locates the eye region of the first image according to the gradient model to produce a boundary eye center and computes the difference between the boundary eye center and the first coarse eye center. The processor converts the difference according to the conversion model to produce a correction value, and then corrects the deep eye center according to the correction model and the correction value to produce the precise eye center.

According to the eye center positioning system of the foregoing embodiment, the correction model includes the precise eye center, denoted I erC*, the correction value, denoted α4, the boundary eye center, the first coarse eye center, the deep eye center, and the depth rotation variables face θ* and face θ', which together satisfy formula (3) of the detailed description below.

Hereinafter, a plurality of embodiments of the present invention will be described with reference to the drawings. For the sake of clarity, many practical details are explained in the following description. It should be understood, however, that these practical details are not intended to limit the present invention; that is, in some embodiments of the present invention, these practical details are unnecessary. In addition, to simplify the drawings, some conventional structures and elements are drawn in a simple schematic manner, and repeated elements may be denoted by the same reference numerals.

In addition, when an element (or a mechanism, a module, or the like) is described herein as being "connected", "disposed", or "coupled" to another element, it may mean that the element is directly connected, directly disposed, or directly coupled to the other element, or that the element is indirectly connected, indirectly disposed, or indirectly coupled to the other element, i.e., that other elements are interposed between the two. Only when an element is explicitly described as being "directly connected", "directly disposed", or "directly coupled" to another element does it mean that no other element is interposed between them. The terms first, second, third, and so on are used only to distinguish different elements or components and impose no limitation on the elements or components themselves; accordingly, a first element or component may also be referred to as a second element or component. Moreover, the combinations of elements, components, mechanisms, and modules herein are not combinations that are generally well known, conventional, or customary in the field, and whether such a combination could easily be accomplished by a person of ordinary skill in the art cannot be judged from whether the individual elements, components, mechanisms, or modules are themselves known.

Please refer to Figs. 1 and 2 together. Fig. 1 is a block diagram of the eye center positioning system 100 according to an embodiment of the structural aspect of the present invention, and Fig. 2 is a schematic diagram of the input image 200, the first image 210, and the second image 220 of the eye center positioning system 100 of the embodiment of Fig. 1. As shown, the eye center positioning system 100 is used to locate the precise eye center 250 of the input image 200 and includes a processor 110 and a memory 120.

The memory 120 stores the input image 200, the face recognition model 121, the generative adversarial network model 122, the gradient model 123, the conversion model 124, and the correction model 125, and the processor 110 is electrically connected to the memory 120. First, the processor 110 extracts the input image 200 according to the face recognition model 121 to generate the first image 210. Next, the processor 110 regenerates the first image 210 into the second image 220 according to the generative adversarial network model 122 and locates the eye region (not separately labeled) of the second image 220 according to the gradient model 123 to generate the initial eye center 230. The processor 110 then converts the initial eye center 230 according to the conversion model 124 to generate the deep eye center 240 corresponding to the first image 210. Finally, the processor 110 corrects the deep eye center 240 according to the correction model 125 to generate the precise eye center 250. The processor 110 thus locates the eye center of the input image 200 using the models stored in the memory 120 and outputs the precise eye center 250.
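The disclosure does not fix a concrete face recognition model 121. As a stand-in for the extraction of the first image 210, the sketch below uses OpenCV's bundled Haar cascade detector and crops the largest detected face; the cascade file and the detection parameters are assumptions chosen for illustration, and a multi-view detector would be preferable for the large-yaw inputs the patent targets.

import cv2

def extract_face(input_image):
    """Crop the largest detected face region (stand-in for model 121)."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(input_image, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # keep the largest face
    return input_image[y:y + h, x:x + w]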

Please refer to Figs. 1 to 3 together. Fig. 3 is a flowchart of the steps of the eye center positioning method S100 according to an embodiment of the method aspect of the present invention. As shown, the eye center positioning method S100 can be executed by the eye center positioning system 100 and includes the following steps: an extraction step S110, a generation step S120, a positioning step S130, a conversion step S140, and a correction step S150.

The extraction step S110 drives the processor 110 to extract the input image 200 according to the face recognition model 121 to generate the first image 210, the input image 200 being stored in the memory 120; in other words, the extraction step S110 extracts the face region of the input image 200. The generation step S120 drives the processor 110 to regenerate the first image 210 into the second image 220 according to the generative adversarial network model 122. The positioning step S130 drives the processor 110 to locate the eye region of the second image 220 according to the gradient model 123 to generate the initial eye center 230. The conversion step S140 drives the processor 110 to convert the initial eye center 230 according to the conversion model 124 to generate the deep eye center 240 corresponding to the first image 210. The correction step S150 drives the processor 110 to correct the deep eye center 240 according to the correction model 125 to generate the precise eye center 250.

Thereby, the eye center positioning method S100 of the present invention uses the generative adversarial network model 122 to regenerate the original face image (the first image 210) into a frontal face image (the second image 220), produces the deep eye center 240 corresponding to the original face image through the processor 110, and finally obtains the corrected precise eye center 250 through the correction model 125.

In detail, the generative adversarial network model 122 uses the Complete Representation Generative Adversarial Network (CR-GAN) method to regenerate the first image 210 into the second image 220. When the original face image is at a large yaw-rotation angle or the eyes have completely disappeared, the frontal face image produced by the CR-GAN method recovers a complete representation from the incomplete eye region (as shown in Fig. 2). The eye center of the frontal face can therefore be located more accurately and reasonably from the frontal view of the same original face. In addition, the gradient model 123 locates the initial eye center 230 of the second image 220 based on a gradient method; the gradient method is a conventional technique and not the focus of the present invention, so its details are omitted.
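The gradient method itself is prior art and left unspecified here. One widely used instance is the means-of-gradients objective of Timm and Barth, in which the eye center is the point whose normalized displacement vectors to strong-gradient pixels best align with the image gradients, weighted toward dark (pupil-like) pixels. The NumPy sketch below is a naive O(n²) version of that objective and is an assumption about which gradient method the gradient model 123 denotes; it expects an 8-bit grayscale eye region.

import numpy as np

def gradient_eye_center(eye_region: np.ndarray) -> tuple:
    """Naive means-of-gradients eye center (for illustration only)."""
    gy, gx = np.gradient(eye_region.astype(np.float64))
    mag = np.hypot(gx, gy)
    thresh = mag.mean() + 0.3 * mag.std()
    ys, xs = np.nonzero(mag > thresh)           # keep strong gradients only
    gx_n = gx[ys, xs] / mag[ys, xs]
    gy_n = gy[ys, xs] / mag[ys, xs]
    h, w = eye_region.shape
    weight = 255.0 - eye_region                 # dark pixels (pupil) weigh more
    best, best_score = (0, 0), -np.inf
    for cy in range(h):
        for cx in range(w):
            dx, dy = xs - cx, ys - cy           # displacement to each pixel
            norm = np.hypot(dx, dy)
            norm[norm == 0] = 1.0
            dots = (dx / norm) * gx_n + (dy / norm) * gy_n
            score = weight[cy, cx] * np.mean(np.maximum(dots, 0.0) ** 2)
            if score > best_score:
                best_score, best = score, (cx, cy)
    return best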

Please refer to Figs. 1 to 4 together. Fig. 4 is a flowchart of the conversion step S140 of the eye center positioning method S100 of the embodiment of Fig. 3. Notably, the first image 210 may have a first coarse eye center CEC1 and the second image 220 may have a second coarse eye center CEC2, where CEC1 and CEC2 are the centers of the eye regions rather than the pupil centers. More particularly, in the conversion step S140, the conversion model 124 may include a first computation model (not separately illustrated) and a second computation model (not separately illustrated), and the conversion step S140 may include a rotation variable detection step S141 and a position prediction step S142.

The rotation variable detection step S141 drives the processor 110 to compute the linear relationship between the first coarse eye center CEC1 and the second coarse eye center CEC2 according to the first computation model, thereby producing the depth rotation variables (not separately illustrated). Specifically, the first computation model may include a first equation L1 of the first coarse eye center CEC1, a second equation L2 and a third equation L3 of the second coarse eye center CEC2, the first slope m1 of the first equation, the second slope m2 of the second equation, the third slope m3 of the third equation, and the depth rotation variables face θ* and face θ', which satisfy formula (1). (Formula (1) appears in the original publication only as an image.)

In detail, the rotation variable detection step S141 computes one depth rotation variable from the two linear equations on the X and Z axes (the first and second equations) and the other from the two linear equations on the X and Y axes (the first and third equations). The 2D image can therefore be converted into 3D coordinates, which effectively describes the rotation relationship between the first image 210 and the second image 220.

Next, the position prediction step S142 drives the processor 110 to compute the initial eye center 230 according to the second computation model and the depth rotation variables, thereby producing the deep eye center 240. Specifically, the second computation model may include the initial eye center 230, denoted IF erC*, the second coarse eye center CEC2, whose X-axis and Y-axis coordinate values enter the computation separately, the depth rotation variables face θ* and face θ', and the deep eye center 240, which satisfy formula (2). (Formula (2) appears in the original publication only as an image.)

In detail, the position prediction step S142 uses the depth rotation variables between the first image 210 and the second image 220 to predict the unknown location; the right eye is taken as an example, but the present invention is not limited thereto. The initial eye center 230 is predicted from the frontal view of the second image 220, and formula (2) applies the rotation variables to the initial eye center 230 to obtain the deep eye center 240 of the right-eye region. However, because CR-GAN generation is unstable, the deep eye center 240 cannot serve as the final positioning result. An additional correction value (not separately illustrated) must therefore be computed to ensure that the frontal view of the second image 220 and the eye center located in the first image 210 refer to the same position.
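Formula (2) likewise survives only as an image; the prose fixes that the initial eye center IF erC*, the second coarse eye center CEC2, and the two depth rotation variables produce the deep eye center. A minimal sketch under one assumed form, an in-plane rotation of the offset about CEC2 by face θ' followed by a yaw-like foreshortening of the X component by cos(face θ*), is given below; that form is an illustration, not the disclosed formula.

import math

def predict_deep_center(initial_center, cec2, face_theta_star, face_theta_prime):
    """Position prediction step S142 under an assumed form of formula (2)."""
    ox = initial_center[0] - cec2[0]   # offset of IF_erC* relative to CEC2
    oy = initial_center[1] - cec2[1]
    # In-plane rotation by face_theta_prime.
    rx = ox * math.cos(face_theta_prime) - oy * math.sin(face_theta_prime)
    ry = ox * math.sin(face_theta_prime) + oy * math.cos(face_theta_prime)
    # Yaw-like foreshortening of the X component by face_theta_star.
    return (cec2[0] + rx * math.cos(face_theta_star), cec2[1] + ry)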

Please refer to Figs. 1 to 5 together. Fig. 5 is a flowchart of the correction step S150 of the eye center positioning method S100 of the embodiment of Fig. 3. As shown, the correction step S150 may include a conversion sub-step S151 and a correction sub-step S152. The conversion sub-step S151 drives the processor 110 to locate the eye region of the first image 210 according to the gradient model 123 to produce a boundary eye center (not separately illustrated), compute the coordinate difference between the boundary eye center and the first coarse eye center CEC1, and finally convert the difference according to the conversion model 124 to produce a correction value. The correction sub-step S152 drives the processor 110 to correct the deep eye center 240 according to the correction model 125 and the correction value to produce the precise eye center 250.

Specifically, the correction model 125 may include the precise eye center 250, denoted I erC*, the correction value α4, the boundary eye center, the first coarse eye center CEC1, the deep eye center 240, and the depth rotation variables face θ* and face θ', which satisfy formula (3). (Formula (3) appears in the original publication only as an image.)

In detail, the boundary eye center is obtained from the view of the first image 210, the difference between the boundary eye center and the first coarse eye center CEC1 is computed, and the difference is then converted to the depth domain as the correction value. Finally, the deep eye center 240 of the second image 220 is added and converted back to the view of the first image 210 as the true, accurate position, producing the precise eye center 250 of the input image 200. Large yaw rotation of the face and fully occluded eyes can thus be handled simultaneously, and the eye center can be located effectively and precisely.
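Formula (3) is also preserved only as an image, but the data flow is explicit: locate a boundary eye center on the first image with the gradient model, take its difference from CEC1, convert that difference to the depth domain as the correction value α4, and add α4 to the deep eye center. The sketch below wires those pieces together, reusing the hypothetical gradient_eye_center and predict_deep_center helpers from the earlier sketches; converting the difference through the same assumed rotation form is an illustration only.

def correct_deep_center(eye_region_1, cec1, deep_center,
                        face_theta_star, face_theta_prime):
    """Correction step S150 as a data-flow sketch (assumed forms throughout).

    eye_region_1: grayscale eye region cropped from the first image 210.
    Assumes gradient_eye_center and predict_deep_center are in scope.
    """
    # Conversion sub-step S151: boundary eye center and its difference from CEC1.
    bx, by = gradient_eye_center(eye_region_1)
    diff = (bx - cec1[0], by - cec1[1])
    # Convert the difference to the depth domain as the correction value α4.
    alpha4 = predict_deep_center(diff, (0.0, 0.0),
                                 face_theta_star, face_theta_prime)
    # Correction sub-step S152: add α4 to the deep eye center 240.
    return (deep_center[0] + alpha4[0], deep_center[1] + alpha4[1])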

Although the present invention has been disclosed above by way of embodiments, the embodiments are not intended to limit the present invention. Anyone skilled in the art may make various changes and refinements without departing from the spirit and scope of the present invention; the scope of protection of the present invention is therefore defined by the appended claims.

100: eye center positioning system; 110: processor; 120: memory; 121: face recognition model; 122: generative adversarial network model; 123: gradient model; 124: conversion model; 125: correction model; 200: input image; 210: first image; 220: second image; 230: initial eye center; 240: deep eye center; 250: precise eye center; S100: eye center positioning method; S110: extraction step; S120: generation step; S130: positioning step; S140: conversion step; S141: rotation variable detection step; S142: position prediction step; S150: correction step; S151: conversion sub-step; S152: correction sub-step; CEC1: first coarse eye center; CEC2: second coarse eye center

Fig. 1 is a block diagram of an eye center positioning system according to an embodiment of the structural aspect of the present invention; Fig. 2 is a schematic diagram of the input image, the first image, and the second image of the eye center positioning system of the embodiment of Fig. 1; Fig. 3 is a flowchart of the steps of an eye center positioning method according to an embodiment of the method aspect of the present invention; Fig. 4 is a flowchart of the conversion step of the eye center positioning method of the embodiment of Fig. 3; and Fig. 5 is a flowchart of the correction step of the eye center positioning method of the embodiment of Fig. 3.

S100: eye center positioning method
S110: extraction step
S120: generation step
S130: positioning step
S140: conversion step
S150: correction step

Claims (10)

1. An eye center positioning method, executed by an eye center positioning system comprising a processor and a memory, the eye center positioning method comprising the following steps: an extraction step of driving the processor to extract an input image according to a face recognition model to generate a first image, wherein the first image has a first coarse eye center and the input image is stored in the memory; a generation step of driving the processor to regenerate the first image into a second image according to a generative adversarial network model; a positioning step of driving the processor to locate an eye region of the second image according to a gradient model to generate an initial eye center; a conversion step of driving the processor to convert the initial eye center according to a conversion model to generate a deep eye center corresponding to the first image; and a correction step of driving the processor to correct the deep eye center according to a correction model to generate a precise eye center, the correction step comprising: a conversion sub-step of driving the processor to locate an eye region of the first image according to the gradient model to generate a boundary eye center, calculate a difference between the boundary eye center and the first coarse eye center, and finally convert the difference according to the conversion model to generate a correction value; and a correction sub-step of driving the processor to correct the deep eye center according to the correction model and the correction value to generate the precise eye center.

2. The eye center positioning method of claim 1, wherein the second image has a second coarse eye center, the conversion model comprises a first computation model and a second computation model, and the conversion step comprises: a rotation variable detection step of driving the processor to compute a linear relationship between the first coarse eye center and the second coarse eye center according to the first computation model to generate a depth rotation variable; and a position prediction step of driving the processor to compute the initial eye center according to the second computation model and the depth rotation variable to generate the deep eye center.

3. The eye center positioning method of claim 2, wherein the first computation model comprises a first equation L1 of the first coarse eye center, a second equation L2 and a third equation L3 of the second coarse eye center, a first slope m1, a second slope m2, a third slope m3, and the depth rotation variables face θ* and face θ', which satisfy formula (1) of the description.

4. The eye center positioning method of claim 2, wherein the second computation model comprises the initial eye center, denoted IF erC*, the second coarse eye center, the depth rotation variables face θ* and face θ', and the deep eye center, which satisfy formula (2) of the description.

5. The eye center positioning method of claim 1, wherein the correction model comprises the precise eye center, denoted I erC*, the correction value, denoted α4, the boundary eye center, the first coarse eye center, the deep eye center, and the depth rotation variables face θ* and face θ', which satisfy formula (3) of the description.

6. An eye center positioning system using the eye center positioning method of claim 1 for locating the precise eye center of the input image, the eye center positioning system comprising: a memory storing the input image, the face recognition model, the generative adversarial network model, the gradient model, the conversion model, and the correction model; and a processor electrically connected to the memory, wherein the processor extracts the input image according to the face recognition model to generate the first image, the first image having the first coarse eye center; the processor regenerates the first image into the second image according to the generative adversarial network model; the processor locates the eye region of the second image according to the gradient model to generate the initial eye center; the processor converts the initial eye center according to the conversion model to generate the deep eye center corresponding to the first image; and the processor then corrects the deep eye center according to the correction model to generate the precise eye center; wherein the processor locates the eye region of the first image according to the gradient model to generate the boundary eye center, calculates the difference between the boundary eye center and the first coarse eye center, and converts the difference according to the conversion model to generate the correction value; and wherein the processor corrects the deep eye center according to the correction model and the correction value to generate the precise eye center.

7. The eye center positioning system of claim 6, wherein the second image has a second coarse eye center and the conversion model comprises a first computation model and a second computation model, wherein the processor computes a linear relationship between the first coarse eye center and the second coarse eye center according to the first computation model to generate a depth rotation variable, and the processor computes the initial eye center according to the second computation model and the depth rotation variable to generate the deep eye center.

8. The eye center positioning system of claim 7, wherein the first computation model comprises a first equation L1 of the first coarse eye center, a second equation L2 and a third equation L3 of the second coarse eye center, a first slope m1, a second slope m2, a third slope m3, and the depth rotation variables face θ* and face θ', which satisfy formula (1) of the description.

9. The eye center positioning system of claim 7, wherein the second computation model comprises the initial eye center IF erC*, the second coarse eye center, the depth rotation variables face θ* and face θ', and the deep eye center, which satisfy formula (2) of the description.

10. The eye center positioning system of claim 6, wherein the correction model comprises the precise eye center I erC*, the correction value α4, the boundary eye center, the first coarse eye center, the deep eye center, and the depth rotation variables face θ* and face θ', which satisfy formula (3) of the description.
TW109127247A 2020-08-11 2020-08-11 Eye center positioning method and system thereof TWI748596B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW109127247A TWI748596B (en) 2020-08-11 2020-08-11 Eye center positioning method and system thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW109127247A TWI748596B (en) 2020-08-11 2020-08-11 Eye center positioning method and system thereof

Publications (2)

TWI748596B (en), granted, published 2021-12-01
TW202207076A (en), published 2022-02-16

Family ID: 80680843

Family Applications (1)

TW109127247A (priority date 2020-08-11; filing date 2020-08-11): Eye center positioning method and system thereof, TWI748596B (en)

Country Status (1)

TW: TWI748596B (en)

Cited By (1)

US12400429B2 (National Sun Yat-Sen University), priority 2023-02-24, published 2025-08-26: Method and electrical device for training cross-domain classifier


Patent Citations (5)

* Cited by examiner, † Cited by third party

CN105205480A * (潍坊学院), priority 2015-10-31, published 2015-12-30: Complex scene human eye locating method and system
CN105205480B (潍坊学院), granted 2018-12-25: Human eye positioning method and system for complex scenes
US20190096135A1 * (Aquifi, Inc.), priority 2017-09-26, published 2019-03-28: Systems and methods for visual inspection based on augmented reality
TWI668639B * (瑞軒科技股份有限公司), priority 2018-05-30, published 2019-08-11: Facial recognition system and method
TWI687871B * (國立勤益科技大學), priority 2019-03-28, published 2020-03-11: Image identification system for security protection


Also Published As

TW202207076A (en), published 2022-02-16
