
CN106066696A - Gaze tracking method based on projection mapping correction and fixation point compensation under natural light - Google Patents


Info

Publication number
CN106066696A
CN106066696A CN201610409478.9A CN201610409478A CN106066696A CN 106066696 A CN106066696 A CN 106066696A CN 201610409478 A CN201610409478 A CN 201610409478A CN 106066696 A CN106066696 A CN 106066696A
Authority
CN
China
Prior art keywords
point
eyes
eye
fixation
image
Prior art date
Legal status
Granted
Application number
CN201610409478.9A
Other languages
Chinese (zh)
Other versions
CN106066696B (en)
Inventor
秦华标 (Qin Huabiao)
胡大正 (Hu Dazheng)
卓林海 (Zhuo Linhai)
Current Assignee
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date
Filing date
Publication date
Application filed by South China University of Technology SCUT filed Critical South China University of Technology SCUT
Priority to CN201610409478.9A
Publication of CN106066696A
Application granted
Publication of CN106066696B
Status: Expired - Fee Related
Anticipated expiration


Classifications

    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013 - Eye tracking input arrangements
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V40/18 - Eye characteristics, e.g. of the iris

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Ophthalmology & Optometry (AREA)
  • Multimedia (AREA)
  • Eye Examination Apparatus (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention discloses a gaze tracking method based on projection mapping correction and fixation point compensation under natural light. The method first extracts the iris centers of both eyes, the inner and outer eye corners, and the mouth corners. It then computes the projection mapping between the rectangle formed by the eye corners and mouth corners and the corresponding rectangle captured with the head at the calibration position, and uses this mapping to correct the positions of the iris centers and eye corners, eliminating the effect of head movement. Next, the corrected left and right iris centers form four vectors with the inner and outer corners of the left and right eyes, and a polynomial mapping model yields the real-time fixation point; finally, a support vector regression model compensates the fixation point. The invention provides a high-accuracy solution for gaze tracking under natural light that reduces the influence of head movement.

Description

Gaze Tracking Method Based on Projection Mapping Correction and Fixation Point Compensation under Natural Light

Technical Field

The invention relates to the technical field of gaze tracking, and in particular to a gaze tracking method based on projection mapping correction and fixation point compensation under natural light.

Background

Gaze tracking has changed the way humans interact with machines, has become a source of new technologies and system ideas, and has expanded the uses of many information systems; it is an important area of current human-computer interaction research.

Gaze tracking methods fall into contact and non-contact approaches. Camera-based non-contact methods are more user-friendly, natural, and direct, and are the mainstream direction of gaze-tracking research for human-computer interaction. Among camera-based non-contact methods, gaze-tracking algorithms that work under natural light require no auxiliary light source and are therefore easier to deploy widely. The main difficulties of this approach are: (1) accurately extracting eye-movement features when, without an auxiliary infrared source, the images suffer from illumination changes and low contrast; (2) finding a robust eye-movement vector that represents eye motion without the aid of Purkinje glints; and (3) estimating the fixation point accurately when head movement changes the eye-movement vector.

Summary of the Invention

The invention discloses a gaze tracking method based on projection mapping correction and fixation point compensation under natural light. Under natural light, the method extracts the iris centers, the inner and outer eye corners, and the mouth corners, and builds a mapping model from iris and eye-corner information to the on-screen fixation point. The method effectively removes the influence of free head movement on gaze estimation, requires only a single monocular camera, and improves both the accuracy and the real-time performance of gaze tracking with an ordinary camera.

The invention is realized through the following technical solution:

A gaze tracking method based on projection mapping correction under natural light. The method requires one ordinary camera and no additional light source, and comprises the following steps: (1) Image acquisition: the camera captures images, and face localization and eye-movement feature extraction are performed.

(2) Eye-movement information correction: a projection mapping matrix is computed from the eye-corner and mouth-corner positions, and the iris centers and the inner and outer eye corners are corrected.

(3) Preliminary fixation point estimation: the corrected iris centers and eye-corner positions form a two-dimensional eye-movement vector; a mapping from this vector to the on-screen fixation point is established, and the real-time fixation point is computed from the real-time vector.

(4) Fixation point compensation: a support vector regression model compensates the fixation point, correcting the deviation caused by head movement to obtain the final fixation point estimate.

In the above method, step (1) comprises:

a. A face detection algorithm based on Adaboost locates the face in the captured image; a shape regression method based on local binary features (Face Alignment via Regressing Local Binary Features) then determines the regions of interest around the inner and outer eye corners and the mouth corners.

b. Each corner feature is precisely located according to its specific physiological shape: the inner eye corners and mouth corners are located by FAST corner detection and candidate screening, and the outer eye corners by curve fitting.

c. The eye image is determined from the positions of the inner and outer eye corners; the gradient features of the eye image are then extracted to locate the starting point of the iris search. From that starting point, a sliding window searches along the iris edge, and ellipse fitting finally locates the iris center.

In the above method, step (2) comprises:

The position at a set distance from the midpoint of the screen is taken as the head calibration position, and the face image captured when the head is at this position and facing the screen is recorded as the calibration image. A projection mapping matrix is computed between the outer eye-corner and mouth-corner positions located in step (1) and the corresponding feature point positions in the calibration image; this matrix is then used to correct the real-time positions of the inner and outer eye corners and the iris centers.

In the above method, step (3) comprises:

a. The left and right iris centers corrected in step (2) form four eye-movement vectors with the corrected inner and outer corners of the left and right eyes; these are superimposed to obtain the corrected two-dimensional eye-movement vector.

b. With the head stationary at the calibration position, the eyes fixate calibration points on the screen; the corrected two-dimensional eye-movement vector is computed, and the parameters of the polynomial mapping model are solved from the correspondence between this vector and the calibration points. The corrected two-dimensional eye-movement vector is then obtained in real time and, combined with the polynomial mapping model, yields the preliminary fixation point estimate.

In the above method, step (4) comprises:

A support vector regression model is trained. Its inputs are the offset, on the image, of the midpoint of the segment between the inner eye corners relative to the same point in the calibration image; the ratios of the distances between the inner and outer corners of the left and right eyes on the image to the corresponding distances in the calibration image; and the angle difference between the line joining the inner eye corners on the image and the same line in the calibration image. Its output is the displacement deviation between the corresponding fixation point estimate and the true calibration point. The support vector regression model then compensates the fixation point to obtain the final estimate.

The advantages and positive effects of the invention are:

1. Natural light provides no reliable reference glint, so binocular data must carry the eye-movement information. The invention constructs eye-movement vectors from the left and right iris centers and the inner and outer eye corners, which represent eye movement more effectively, and uses a sliding-window search to locate the iris edge, improving the accuracy of iris-center localization.

2. The invention corrects the positions of feature points such as the iris center through projection mapping, and compensates the fixation point with a support vector regression model. This effectively reduces the influence of head movement on the eye-movement features, giving the gaze-tracking method better robustness to head motion.

3. The method is computationally light, and the hardware requires only one camera.

Brief Description of the Drawings

Fig. 1 is a schematic diagram of the arrangement of the display screen and camera in an embodiment of the invention.

Fig. 2 is a flowchart of the gaze tracking method in an embodiment of the invention.

Fig. 3 is a schematic diagram of the sliding window in an embodiment of the invention.

Figs. 4a and 4b show two calibration-point layouts on the screen in an embodiment of the invention.

Detailed Description

Specific embodiments of the invention are further described below with reference to the drawings.

As shown in Fig. 1, the hardware configuration requires one ordinary camera, located directly above the center of the screen, capturing face images in real time.

As shown in Fig. 2, the specific implementation steps are as follows:

Step 1: capture images in real time and extract eye-movement features.

Step 2: dynamically correct the eye-movement vector.

Step 3: construct the polynomial mapping model.

Step 4: train the fixation-point compensation model and compute the fixation point.

Step 1 is implemented as follows:

1. Initial localization of facial feature points

A face image is obtained from the camera, and a shape regression method based on local binary features (Face Alignment via Regressing Local Binary Features) performs initial localization of the facial feature points, yielding preliminary positions of the eye contours and mouth corners. On this basis, the eye region and the mouth region are extracted as regions of interest for precise localization.
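The patent does not tie this stage to a particular library. As a rough sketch, OpenCV's Haar cascade can stand in for an Adaboost face detector, and cv2.face.FacemarkLBF (an open implementation of the cited local-binary-features regression, shipped with opencv-contrib) can supply the landmarks; the cascade file and lbfmodel.yaml are assumed pretrained assets, and the landmark indices follow the common 68-point convention.

```python
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
facemark = cv2.face.createFacemarkLBF()
facemark.loadModel("lbfmodel.yaml")  # assumed pretrained 68-landmark LBF model

def locate_rois(frame_bgr):
    """Return coarse eye-corner and mouth-corner landmarks for the first face."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    ok, landmarks = facemark.fit(gray, faces)
    if not ok:
        return None
    pts = landmarks[0][0]  # 68 x 2 landmark array for the first face
    return {
        "eye_corners": pts[[36, 39, 42, 45]],   # outer/inner corners of both eyes
        "mouth_corners": pts[[48, 54]],         # left and right mouth corners
    }
```

Regions of interest for the precise localization below can then be cropped around these coarse points.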

2. Eye-movement feature extraction

The eye-movement features are the inner eye corners, the outer eye corners, the mouth corners, and the iris centers. The extraction proceeds as follows:

2.1 Inner eye-corner localization

A region of interest is first determined from the inner-corner candidates obtained by the initial localization, and FAST corner detection is applied within it to obtain corner candidates. Since points in a dense cluster of candidates are more likely to be the true corner, the candidates are screened by the number of neighboring candidates around each one, and the inner eye corner is then located quickly and accurately using its relative position within the eye.
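A minimal sketch of this screening idea, assuming illustrative values for the FAST threshold and the neighborhood radius:

```python
import cv2
import numpy as np

def inner_eye_corner(roi_gray, radius=5, fast_threshold=15):
    """Pick the FAST candidate with the most candidates nearby (densest cluster)."""
    fast = cv2.FastFeatureDetector_create(threshold=fast_threshold)
    kps = fast.detect(roi_gray, None)
    if not kps:
        return None
    pts = np.array([kp.pt for kp in kps])
    # Pairwise distances; count neighbors within `radius` for each candidate.
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    density = (d < radius).sum(axis=1)
    return tuple(pts[np.argmax(density)])
```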

2.2 Outer eye-corner localization

Adaptive threshold segmentation first separates the eye from the background to extract the eye contour. Curves are then fitted to the upper and lower eyelid contours and their intersections computed; the intersection on the left is taken as the outer eye corner.
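A sketch of the curve-fitting step, under the assumption that the eyelid contours are well modeled by quadratics; the leftmost intersection matches the convention used above (mirror the choice for the other eye):

```python
import numpy as np

def outer_eye_corner(upper_pts, lower_pts):
    """upper_pts, lower_pts: Nx2 (x, y) points on the upper/lower eyelid contour."""
    pu = np.polyfit(upper_pts[:, 0], upper_pts[:, 1], 2)
    pl = np.polyfit(lower_pts[:, 0], lower_pts[:, 1], 2)
    roots = np.roots(pu - pl)              # x-coordinates where the curves meet
    roots = roots[np.isreal(roots)].real
    if roots.size == 0:
        return None
    x = roots.min()                        # the intersection on the left
    return float(x), float(np.polyval(pu, x))
```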

2.3 Mouth-corner localization

The a component of the mouth-region image in the Lab color space is first extracted to segment the lip region. The leftmost and rightmost points of the lip region give the preliminary mouth-corner positions; OTSU segmentation is then applied to the region, local corner detection runs on the segmented image, isolated points are filtered out, and the leftmost and rightmost remaining candidates are taken as the mouth corners.
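A rough sketch of the lip segmentation: lips are redder than the surrounding skin, so the a channel of Lab separates them well, and OTSU thresholding binarizes the region. The leftmost/rightmost filtering is simplified here, and the corner detection and isolated-point removal are omitted.

```python
import cv2
import numpy as np

def mouth_corners(mouth_roi_bgr):
    lab = cv2.cvtColor(mouth_roi_bgr, cv2.COLOR_BGR2LAB)
    a = lab[:, :, 1]                          # red-green opponent component
    _, mask = cv2.threshold(a, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    left = (int(xs.min()), int(np.median(ys[xs == xs.min()])))
    right = (int(xs.max()), int(np.median(ys[xs == xs.max()])))
    return left, right                        # coarse mouth-corner pair
```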

2.4 Iris-center localization

Under partial illumination the contrast between iris and sclera is weak, and traditional edge-detection operators have difficulty detecting the iris edge precisely; illumination changes also prevent binarization from segmenting the iris region completely. The invention therefore proposes a sliding-window iris-edge search that can locate the iris edge under eye movement and pose changes, from which the iris center is obtained.

a. Locating the search starting point

First, the red component of the original eye image is extracted, and a morphological opening reduces the influence of specular reflections. The gradient features of the preprocessed eye image are then computed, and the point through which the most gradient vectors pass is taken as the search starting point. Because this method works directly on gradient features, it yields good localization even with blurred images or strongly deformed eyes.
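The description matches a means-of-gradients style localizer. The sketch below scores every candidate pixel by how well the displacement directions toward significant gradients align with those gradients, and takes the best-scoring pixel as the search start; the gradient-magnitude cutoff is an illustrative assumption, and the input is the red channel of the eye crop.

```python
import cv2
import numpy as np

def search_start_point(eye_red):
    """eye_red: single-channel red component of the eye crop (uint8)."""
    eye = cv2.morphologyEx(eye_red, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    gx = cv2.Sobel(eye, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(eye, cv2.CV_64F, 0, 1, ksize=3)
    mag = np.hypot(gx, gy)
    keep = mag > mag.mean()                   # keep only significant gradients
    gx, gy = gx[keep] / mag[keep], gy[keep] / mag[keep]
    ys, xs = np.nonzero(keep)
    h, w = eye.shape
    best, best_score = (w // 2, h // 2), -1.0
    for cy in range(h):                       # brute force is fine on a small crop
        for cx in range(w):
            dx, dy = xs - cx, ys - cy
            norm = np.hypot(dx, dy) + 1e-9
            dots = (dx * gx + dy * gy) / norm # alignment of direction and gradient
            score = np.mean(np.maximum(dots, 0.0) ** 2)
            if score > best_score:
                best, best_score = (cx, cy), score
    return best                               # (x, y) starting point near the iris
```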

b. Iris-edge search and iris-center localization

The sliding window is constructed as shown in Fig. 3: two adjacent rectangles of equal size form the left window 1 and the right window 2, with 3 marking the center of the sliding window. Starting from the iris gradient center, the window slides outward in all directions to search for the iris edge. The rotation angle $\theta_0$ of the eye can be judged from the line joining the eye corners, which sets the search range for the iris contour and avoids the parts occluded by the upper and lower eyelids, preventing contour-fitting errors.

The average pixel value within each window is computed, and the energy of the sliding window is defined as the difference between the mean intensities of the two windows:

$$E(\theta) = \left|\frac{1}{N}\sum_{i=1}^{N} I_i - \frac{1}{N}\sum_{j=1}^{N} I_j\right|$$

where $I_i$ and $I_j$ $(i = 1, 2, \ldots, N;\ j = 1, 2, \ldots, N)$ are the pixel values in the left and right windows respectively, $N$ is the number of pixels in each window, and $\theta$ is the current search direction of the sliding window. Once a search direction is set, the energy-function curve of the search window is traced starting from the search starting point; the largest peak of this curve marks the iris edge position.

After the sliding window has collected the precise set of iris contour points, an ellipse is fitted by least squares; the ellipse center is the iris center position.
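A hedged sketch of the edge search and fit: along each permitted ray from the start point, the window energy defined above is evaluated at increasing radii, its peak gives one contour point, and cv2.fitEllipse performs the least-squares fit (it needs at least five rays). The window size, radius range, and ray set are illustrative assumptions.

```python
import cv2
import numpy as np

def window_energy(img, cx, cy, theta, k=3):
    """Mean intensity of the outer window minus the inner window along the ray."""
    ux, uy = np.cos(theta), np.sin(theta)
    def mean_at(t):
        x, y = int(round(cx + t * ux)), int(round(cy + t * uy))
        patch = img[max(y - k, 0):y + k + 1, max(x - k, 0):x + k + 1]
        return patch.mean() if patch.size else 0.0
    return mean_at(+k) - mean_at(-k)          # bright sclera minus dark iris

def iris_center(eye_gray, start, angles, r_max=40):
    edge_pts = []
    for theta in angles:                      # rays chosen to avoid the eyelids
        energies = [window_energy(eye_gray,
                                  start[0] + r * np.cos(theta),
                                  start[1] + r * np.sin(theta), theta)
                    for r in range(3, r_max)]
        r_edge = 3 + int(np.argmax(energies)) # strongest transition on this ray
        edge_pts.append((start[0] + r_edge * np.cos(theta),
                         start[1] + r_edge * np.sin(theta)))
    ellipse = cv2.fitEllipse(np.array(edge_pts, dtype=np.float32))
    return ellipse[0]                         # (x, y): fitted ellipse center
```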

Step 2 is implemented as follows:

1. Eye-movement vector construction from binocular data

The left and right iris centers and the left and right inner and outer eye corners first form a set of four eye-movement vectors:

$$v_{ol} = I_l - C_{ol}, \quad v_{il} = I_l - C_{il}, \quad v_{or} = I_r - C_{or}, \quad v_{ir} = I_r - C_{ir}$$

where $v_{il}$ and $v_{ol}$ are the eye-movement vectors formed by the left-eye iris center $I_l$ with the left inner and outer corner positions $C_{il}$ and $C_{ol}$, and $v_{ir}$ and $v_{or}$ are those formed by the right-eye iris center $I_r$ with the right inner and outer corner positions $C_{ir}$ and $C_{or}$. When the head moves freely, the eyes move with it, and a vector formed directly from an eye corner and the iris center changes accordingly. The invention therefore constructs a new eye-movement vector from binocular data to characterize the fixation point, and corrects the binocular data to remove the influence of different head poses. The eye-movement vector formed from binocular data is:

$$v = \frac{1}{4}\left(v_{ol} + v_{il} + v_{ir} + v_{or}\right)$$

2. Dynamic eye-movement vector correction

During gaze estimation, head movement in depth deforms the eye image, while yaw, roll, and pitch rotate and deform it; mapping an eye-movement vector extracted from such images directly to a fixation point under head movement causes large errors. The invention therefore proposes a projection-mapping-based method to dynamically correct the eye-movement vector.

First, the image coordinates of the four feature points (the eye corners and mouth corners) are extracted from the face image captured in real time in Step 1. Let the coordinates of a feature point captured in real time be $(x_1, y_1)$, and the coordinates of the corresponding feature point on the face image captured with the head at the calibration position, facing the screen, be $(x_2, y_2)$. The two are related by the projection mapping:

$$s\begin{bmatrix} x_1 \\ y_1 \\ 1 \end{bmatrix} = H_p \begin{bmatrix} x_2 \\ y_2 \\ 1 \end{bmatrix}$$

where $s$ is again a parameter and $H_p$ is a 3×3 matrix, written as:

$$H_p = \begin{bmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{bmatrix}$$

Normalizing the matrix $H_p$ so that $h_{33} = 1$ gives:

$$\begin{aligned} s x_1 &= h_{11} x_2 + h_{12} y_2 + h_{13} \\ s y_1 &= h_{21} x_2 + h_{22} y_2 + h_{23} \\ s &= h_{31} x_2 + h_{32} y_2 + 1 \end{aligned}$$

The coordinates of a feature point captured in real time can then be corrected by the mapping:

$$x_1 = \frac{s x_1}{s} = \frac{h_{11} x_2 + h_{12} y_2 + h_{13}}{h_{31} x_2 + h_{32} y_2 + 1}, \qquad y_1 = \frac{s y_1}{s} = \frac{h_{21} x_2 + h_{22} y_2 + h_{23}}{h_{31} x_2 + h_{32} y_2 + 1}$$

Substituting the four feature points captured in real time (the outer eye corners and the mouth corners), together with their counterparts on the face image captured with the head at the calibration position facing the screen, into the above formula yields eight linear equations. Solving them gives the projection transformation matrix $H_p$ relating the current head pose to the calibration position. The image position of the iris center after head movement can likewise be corrected with $H_p$:

$$s\,\hat{I} = H_p\, I$$

The corrected iris center position $\hat{I}$ is computed from the above formula, where the hat symbol denotes the result of correcting a feature point or eye-movement vector by projection mapping. The positions of the inner and outer eye corners at the calibration position are obtained in the same way, so the corrected eye-movement vectors can be written as:

$$\hat{v}_{ol} = \hat{I}_l - \hat{C}_{ol}, \quad \hat{v}_{il} = \hat{I}_l - \hat{C}_{il}, \quad \hat{v}_{or} = \hat{I}_r - \hat{C}_{or}, \quad \hat{v}_{ir} = \hat{I}_r - \hat{C}_{ir}$$

Moreover, since the corrected inner and outer eye corners coincide with those at the calibration position, the distance between them remains constant. Keeping the information unchanged, the eye-movement vector of the invention can therefore be simplified to:

$$v = \frac{1}{2}\left(\hat{v}_{il} + \hat{v}_{or}\right)$$

which is the corrected eye-movement vector.
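In practice the whole correction reduces to one homography from four point correspondences. A minimal sketch with OpenCV, assuming the four anchor points are the two outer eye corners and the two mouth corners; here the matrix is solved directly in the live-to-calibration direction, so the corrected points come straight out of cv2.perspectiveTransform:

```python
import cv2
import numpy as np

def correct_features(live_anchors, calib_anchors, feature_pts):
    """live_anchors, calib_anchors: 4x2 arrays (outer eye corners + mouth corners).
    feature_pts: Nx2 points (iris centers, eye corners) to re-project."""
    H = cv2.getPerspectiveTransform(np.float32(live_anchors),
                                    np.float32(calib_anchors))
    pts = np.float32(feature_pts).reshape(-1, 1, 2)
    return cv2.perspectiveTransform(pts, H).reshape(-1, 2)
```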

Step 3 is implemented as follows:

1. Nine calibration points are arranged on the screen in a 3×3 grid, as shown in Fig. 4a. The mapping function to be calibrated is chosen as:

$$\begin{aligned} P_x &= a_0 + a_1 v_x + a_2 v_x^2 + a_3 v_x^3 + a_4 v_y + a_5 v_x v_y + a_6 v_x^2 v_y + a_7 v_x^3 v_y \\ P_y &= b_0 + b_1 v_x + b_2 v_x^2 + b_3 v_y + b_4 v_y^2 + b_5 v_x v_y + b_6 v_x^2 v_y \end{aligned}$$

where $(v_x, v_y)$ is the corrected eye-movement vector, $(P_x, P_y)$ is the fixation point estimate, and $a_i$ $(i = 0, 1, \ldots, 7)$ and $b_i$ $(i = 0, 1, \ldots, 6)$ are 15 unknown parameters in total.

2. The user fixates each of the nine calibration points in turn; Step 1 extracts the eye-movement features in real time, and Step 2 constructs and corrects the eye-movement vector. Each calibration point contributes two equations, for 18 equations in total, and the parameters are solved by least squares.
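A minimal sketch of the calibration solve: one design-matrix row per calibration sample, solved by least squares. The term order follows the two polynomials above (eight a-coefficients, seven b-coefficients); nine points give nine rows per system, which over-determines both.

```python
import numpy as np

def fit_mapping(v, targets):
    """v: Kx2 corrected eye-movement vectors; targets: Kx2 screen points (K >= 9)."""
    vx, vy = v[:, 0], v[:, 1]
    Ax = np.column_stack([np.ones_like(vx), vx, vx**2, vx**3,
                          vy, vx * vy, vx**2 * vy, vx**3 * vy])
    Ay = np.column_stack([np.ones_like(vx), vx, vx**2,
                          vy, vy**2, vx * vy, vx**2 * vy])
    a, *_ = np.linalg.lstsq(Ax, targets[:, 0], rcond=None)
    b, *_ = np.linalg.lstsq(Ay, targets[:, 1], rcond=None)
    return a, b

def gaze_estimate(a, b, vx, vy):
    """Evaluate (Px, Py) for one corrected eye-movement vector."""
    px = a @ np.array([1, vx, vx**2, vx**3, vy, vx * vy, vx**2 * vy, vx**3 * vy])
    py = b @ np.array([1, vx, vx**2, vy, vy**2, vx * vy, vx**2 * vy])
    return px, py
```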

Step 4 is implemented as follows:

A deviation remains between the estimate obtained by polynomial mapping of the corrected eye-movement vector and the actual fixation point position, and this deviation is related to head movement. The invention uses support vector regression to compensate for this error. The input sample vector is $X = [v_x, v_y, M_x, M_y, R_l, R_r, \theta_\Delta]$ and the output vector is $Y = [Y_x, Y_y]$, where $v = (v_x, v_y)$ is the corrected eye-movement vector; $M = (M_x, M_y)$ is the offset of the midpoint of the segment between the inner eye corners on the image from the same point at the calibration position; $R_l$ and $R_r$ are the ratios of the distances between the inner and outer corners of the left and right eyes on the image to the corresponding distances in the calibration image; and $\theta_\Delta$ is the angle difference between the line joining the inner eye corners on the image and the same line in the calibration image.

1. Training data construction.

As shown in Fig. 4b, the experimenter fixates a designated calibration point on the screen and moves the head while keeping the gaze on that point. Feature information is collected in real time according to Step 1 to form the input sample vector $X$. Meanwhile, the eye-movement vector is corrected according to Step 2, and the polynomial mapping model from Step 3 gives the fixation point estimate, whose displacement deviation from the true coordinates is $(\Delta x, \Delta y)$. Two training sets are constructed: $\{(X_1, \Delta x_1), \ldots, (X_i, \Delta x_i), \ldots, (X_N, \Delta x_N)\}$ and $\{(X_1, \Delta y_1), \ldots, (X_i, \Delta y_i), \ldots, (X_N, \Delta y_N)\}$, where $X_i$ $(i = 1, 2, \ldots, N)$ are the sample vectors, $(\Delta x_i, \Delta y_i)$ the corresponding fixation point displacement deviations, and $N$ the number of samples. The two sets are trained separately.

2. Model parameter selection. The support vector regression model of the invention uses the RBF radial basis kernel, which can regress complex relationships. A grid search finds the parameters, chiefly the penalty parameter $C$, the loss-function parameter $\varepsilon$, and the kernel parameter $\gamma$.

3. Support vector regression training is performed on each of the two training sets to obtain the optimal regression models; the corresponding fixation point compensation offset $(Y_x, Y_y)$ can then be computed from the real-time input vector $X$.

4. Fixation point estimation. The fixation point estimate $(P_x, P_y)$ computed by the polynomial mapping of Step 3 is superimposed with the offset $(Y_x, Y_y)$ from the compensation model, giving the final fixation point estimate:

$$(S_x, S_y) = (P_x, P_y) + (Y_x, Y_y)$$
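A sketch of the compensation stage with scikit-learn, as one reading of the description: two independent RBF-kernel SVRs, one per screen axis, tuned by grid search over $C$, $\varepsilon$, and $\gamma$ (the grid values are assumptions), with the predicted offsets added to the polynomial estimate.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVR

PARAM_GRID = {"C": [1, 10, 100],
              "epsilon": [0.01, 0.1, 1.0],
              "gamma": [0.01, 0.1, 1.0]}

def train_compensators(X, dx, dy):
    """X: Nx7 samples [vx, vy, Mx, My, Rl, Rr, theta_delta]; dx, dy: deviations."""
    svr_x = GridSearchCV(SVR(kernel="rbf"), PARAM_GRID).fit(X, dx)
    svr_y = GridSearchCV(SVR(kernel="rbf"), PARAM_GRID).fit(X, dy)
    return svr_x.best_estimator_, svr_y.best_estimator_

def final_gaze(px, py, svr_x, svr_y, x_sample):
    """S = P + Y: add the compensation offsets to the polynomial estimate."""
    x_sample = np.atleast_2d(x_sample)
    return px + svr_x.predict(x_sample)[0], py + svr_y.predict(x_sample)[0]
```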

Claims (5)

1. A gaze tracking method based on projection mapping correction and fixation point compensation under natural light, characterized in that the method requires one ordinary camera and no additional light source, and specifically comprises the steps of:
(1) capturing images with the camera, and performing face localization and eye-movement feature extraction;
(2) eye-movement information correction: computing a projection mapping matrix from the eye-corner and mouth-corner positions, and correcting the iris centers and the inner and outer eye-corner positions;
(3) preliminary fixation point estimation: forming a two-dimensional eye-movement vector from the corrected iris centers and inner and outer eye-corner positions, establishing a mapping from the two-dimensional eye-movement vector to the on-screen fixation point, and computing the real-time on-screen fixation point from the real-time two-dimensional vector;
(4) fixation point compensation: compensating the fixation point with a support vector regression model to correct the deviation caused by head movement, thereby obtaining the final fixation point estimate.

2. The gaze tracking method based on projection mapping correction and fixation point compensation under natural light according to claim 1, characterized in that step (1) comprises:
a. locating the face in the captured image with an Adaboost-based face detection algorithm, then determining the regions of interest of the inner and outer eye corners and the mouth corners with a regression method based on local binary features;
b. precisely locating each corner according to its specific physiological shape, obtaining the inner eye corners and mouth corners by FAST corner detection and screening, and locating the outer eye corners by curve fitting;
c. determining the eye image from the inner and outer eye-corner positions, extracting the gradient features of the eye image to locate the starting point of the iris search, searching the iris edge with a sliding window from that starting point, and finally locating the iris center by ellipse fitting.

3. The gaze tracking method based on projection mapping correction and fixation point compensation under natural light according to claim 2, characterized in that step (2) comprises:
taking the position at a set distance from the midpoint of the screen as the head calibration position; recording the face image captured when the head is at the calibration position and facing the screen as the calibration image; computing the projection mapping matrix between the outer eye-corner and mouth-corner positions located in step (1) and the corresponding feature point positions in the calibration image; and using this projection mapping matrix to correct the real-time inner and outer eye-corner positions and iris center positions.

4. The gaze tracking method based on projection mapping correction and fixation point compensation under natural light according to claim 3, characterized in that step (3) comprises:
a. forming four eye-movement vectors from the left and right iris centers corrected in step (2) and the corrected inner and outer corners of the left and right eyes, and superimposing them to obtain the corrected two-dimensional eye-movement vector;
b. with the head stationary at the calibration position, fixating calibration points on the screen, computing the corrected two-dimensional eye-movement vector, and solving the parameters of the polynomial mapping model from the correspondence between this vector and the calibration points; then obtaining the corrected two-dimensional eye-movement vector in real time and, combined with the polynomial mapping model, computing the preliminary fixation point estimate.

5. The gaze tracking method based on projection mapping correction and fixation point compensation under natural light according to claim 4, characterized in that step (4) comprises:
training a support vector regression model whose inputs are the offset, on the image, of the midpoint of the segment between the inner eye corners relative to the same point in the calibration image, the ratios of the distances between the inner and outer corners of the left and right eyes on the image to the corresponding distances in the calibration image, and the angle difference between the line joining the inner eye corners on the image and the same line in the calibration image, and whose output is the displacement deviation between the corresponding fixation point estimate and the true calibration point; and compensating the fixation point with the support vector regression model, thereby obtaining the final fixation point estimate.
CN201610409478.9A 2016-06-08 2016-06-08 Gaze tracking method based on projection mapping correction and gaze compensation under natural light Expired - Fee Related CN106066696B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610409478.9A CN106066696B (en) 2016-06-08 2016-06-08 Gaze tracking method based on projection mapping correction and gaze compensation under natural light

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610409478.9A CN106066696B (en) 2016-06-08 2016-06-08 Gaze tracking method based on projection mapping correction and gaze compensation under natural light

Publications (2)

Publication Number Publication Date
CN106066696A 2016-11-02
CN106066696B 2019-05-14

Family

ID=57421206

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610409478.9A Expired - Fee Related CN106066696B (en) 2016-06-08 2016-06-08 Gaze tracking method based on projection mapping correction and gaze compensation under natural light

Country Status (1)

Country Link
CN (1) CN106066696B (en)



Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6373961B1 (en) * 1996-03-26 2002-04-16 Eye Control Technologies, Inc. Eye controllable screen pointer
US20020105482A1 (en) * 2000-05-26 2002-08-08 Lemelson Jerome H. System and methods for controlling automatic scrolling of information on a display or screen
CN102930252B (en) * 2012-10-26 2016-05-11 广东百泰科技有限公司 A Gaze Tracking Method Based on Neural Network Head Motion Compensation
CN104360732A (en) * 2014-10-16 2015-02-18 南京大学 Compensation method and device for improving accuracy of sight line tracking system
CN105574518A (en) * 2016-01-25 2016-05-11 北京天诚盛业科技有限公司 Method and device for human face living detection

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Wang Xinliang, "Research on Gaze Tracking Algorithms under Natural Light", China Master's Theses Full-text Database (Electronic Journal), Information Science and Technology Series *
Qin Huabiao et al., "A Gaze Tracking System That Overcomes the Influence of Head Movement", Acta Electronica Sinica *
Zhao Jing, "Circular-Operator Iris Localization Algorithm Based on Two-Dimensional Wavelet Transform", Computer Technology and Development *

Cited By (34)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018121635A1 (en) * 2016-12-28 2018-07-05 北京七鑫易维信息技术有限公司 Method and device for determining fixation point mapping function and method and device for determining fixation point
US10996745B2 (en) 2016-12-28 2021-05-04 Beijing 7Invensun Technology Co., Ltd. Method and device for determining gaze point mapping function, and method and device for determining gaze point
CN106778687B (en) * 2017-01-16 2019-12-17 大连理工大学 Gaze Detection Method Based on Local Evaluation and Global Optimization
CN106778687A (en) * 2017-01-16 2017-05-31 大连理工大学 Method for viewing points detecting based on local evaluation and global optimization
CN106778710A (en) * 2017-02-17 2017-05-31 吉林大学 A kind of flight simulator dynamic view system based on kinect sensors
CN107291238A (en) * 2017-06-29 2017-10-24 深圳天珑无线科技有限公司 A kind of data processing method and device
CN107291238B (en) * 2017-06-29 2021-03-05 南京粤讯电子科技有限公司 Data processing method and device
WO2019128677A1 (en) * 2017-12-29 2019-07-04 北京七鑫易维信息技术有限公司 Method and apparatus for determining gazing point based on eye movement analysis device
CN112219222A (en) * 2018-06-26 2021-01-12 索尼公司 Motion compensation for geometry information
CN109308472A (en) * 2018-09-30 2019-02-05 华南理工大学 A 3D Line-of-Sight Estimation Method Based on Iris Projection Matching Function
CN109308472B (en) * 2018-09-30 2022-03-29 华南理工大学 Three-dimensional sight estimation method based on iris projection matching function
CN109685829A (en) * 2018-12-17 2019-04-26 成都旷视金智科技有限公司 Eye-controlling focus method, apparatus and electronic equipment based on image
CN112051918B (en) * 2019-06-05 2024-03-29 京东方科技集团股份有限公司 Human eye gaze calculation method and human eye gaze calculation system
CN112051918A (en) * 2019-06-05 2020-12-08 京东方科技集团股份有限公司 Human eye gaze calculation method and human eye gaze calculation system
CN112132755A (en) * 2019-06-25 2020-12-25 京东方科技集团股份有限公司 Method, apparatus, system and computer readable medium for correcting and demarcating pupil position
CN110543843A (en) * 2019-08-23 2019-12-06 北京工业大学 Human eye positioning and size calculation algorithm based on forward oblique projection and backward oblique projection
CN110543843B (en) * 2019-08-23 2023-12-15 北京工业大学 Human eye positioning and size calculation algorithm based on forward oblique projection and reverse oblique projection
CN110568930A (en) * 2019-09-10 2019-12-13 Oppo广东移动通信有限公司 Gaze calibration method and related equipment
CN110568930B (en) * 2019-09-10 2022-05-17 Oppo广东移动通信有限公司 Method for calibrating fixation point and related equipment
CN112906431A (en) * 2019-11-19 2021-06-04 北京眼神智能科技有限公司 Iris image segmentation method and device, electronic equipment and storage medium
CN112906431B (en) * 2019-11-19 2024-05-24 北京眼神智能科技有限公司 Iris image segmentation method and device, electronic equipment and storage medium
CN112989914A (en) * 2019-12-16 2021-06-18 辉达公司 Gaze-determining machine learning system with adaptive weighted input
US12260017B2 (en) 2019-12-16 2025-03-25 Nvidia Corporation Gaze determination machine learning system having adaptive weighting of inputs
CN111681280A (en) * 2020-06-03 2020-09-18 中国建设银行股份有限公司 Sliding verification code notch positioning method and device
US12271518B2 (en) 2020-09-28 2025-04-08 Beijing Boe Optoelectronics Technology Co., Ltd. Gaze point calculation apparatus and driving method therefor, and electronic device
CN113158879A (en) * 2021-04-19 2021-07-23 天津大学 Three-dimensional fixation point estimation and three-dimensional eye movement model establishment method based on matching characteristics
CN113158879B (en) * 2021-04-19 2022-06-10 天津大学 3D Gaze Estimation and 3D Eye Movement Model Establishment Based on Matching Features
CN115311699A (en) * 2021-04-20 2022-11-08 深圳先进技术研究院 Self-adaptive threshold value eyelash segmentation method
CN113408408A (en) * 2021-06-17 2021-09-17 杭州嘉轩信息科技有限公司 Sight tracking method combining skin color and iris characteristics
CN116052235B (en) * 2022-05-31 2023-10-20 荣耀终端有限公司 Gaze point estimation method and electronic equipment
CN116052235A (en) * 2022-05-31 2023-05-02 荣耀终端有限公司 Gaze point estimation method and electronic equipment
CN115601823A (en) * 2022-10-12 2023-01-13 南通大学(Cn) Method for tracking and evaluating concentration degree of primary and secondary school students
CN115409845A (en) * 2022-11-03 2022-11-29 成都新西旺自动化科技有限公司 Special-shaped high-precision balanced alignment method and system
CN116594502A (en) * 2023-04-23 2023-08-15 武汉学院 Eye tracking error compensation or parameter calibration method and system based on gaze trajectory

Also Published As

Publication number Publication date
CN106066696B (en) 2019-05-14

Similar Documents

Publication Publication Date Title
CN106066696A (en) The sight tracing compensated based on projection mapping correction and point of fixation under natural light
Itoh et al. Interaction-free calibration for optical see-through head-mounted displays based on 3d eye localization
CN106598221B (en) 3D direction of visual lines estimation method based on eye critical point detection
US10353465B2 (en) Iris and pupil-based gaze estimation method for head-mounted device
CN110780739B (en) Eye control auxiliary input method based on gaze point estimation
CN105760826B (en) Face tracking method and device and intelligent terminal
Jain et al. Real-time upper-body human pose estimation using a depth camera
CN108985210A (en) A kind of Eye-controlling focus method and system based on human eye geometrical characteristic
CN104063700B (en) A Method for Eye Center Location in Frontal Face Images with Natural Illumination
CN103810491B (en) Head posture estimation interest point detection method fusing depth and gray scale image characteristic points
CN102830793A (en) Sight tracking method and sight tracking device
CN101788848A (en) Eye characteristic parameter detecting method for sight line tracking system
CN104766059A (en) Rapid and accurate human eye positioning method and sight estimation method based on human eye positioning
CN107506705A (en) A kind of pupil Purkinje image eye tracking is with watching extracting method attentively
CN102930252A (en) Sight tracking method based on neural network head movement compensation
CN103761519A (en) Non-contact sight-line tracking method based on self-adaptive calibration
CN110781712B (en) Human head space positioning method based on human face detection and recognition
CN112162629A (en) Real-time pupil positioning method based on circumscribed rectangle
CN112232128A (en) A method of identifying the care needs of the elderly with disabilities based on eye tracking
Bei et al. Sitting posture detection using adaptively fused 3D features
Guo et al. Robust fovea localization based on symmetry measure
Cao et al. Gaze tracking on any surface with your phone
CN116152073B (en) Improved multi-scale fundus image stitching method based on Loftr algorithm
Cao et al. Practical gaze tracking on any surface with your phone
CN114821814B (en) Gait recognition method integrating visible light, infrared light and structured light

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee (granted publication date: 20190514)