
CN103440476A - A pupil location method in face video - Google Patents

A pupil location method in face video

Info

Publication number: CN103440476A (application CN201310376451A)
Authority: CN (China)
Prior art keywords: image, pupil, module, point, sobel
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Application number: CN2013103764510A
Other languages: Chinese (zh)
Inventors: 陈喆 (Chen Zhe), 殷福亮 (Yin Fuliang), 唐坤 (Tang Kun)
Current Assignee: Dalian University of Technology
Original Assignee: Dalian University of Technology
Application filed by Dalian University of Technology
Priority application: CN2013103764510A
Publication: CN103440476A (en)

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a method for locating pupils in a face video, and belongs to the technical field of signal processing. The method comprises a face detection module, an image preprocessing module, a coarse eye positioning module, and a fine pupil positioning module; the input video image is processed by these four modules in turn, and the position of the pupil center is finally obtained.

Description

A pupil location method in face video

Technical Field

The invention relates to a method for locating pupils in face video, and belongs to the technical field of signal processing.

Background Art

Pupil localization in face images has important application prospects in face image processing, eye-movement-based human-computer interaction, and related fields. The main pupil localization methods include region segmentation, edge extraction, gray-level projection, template matching, and the Adaboost method. The region segmentation method is easily disturbed by glasses and gives rather coarse results. The feature (edge) extraction method essentially uses the Hough transform to find an eye template, which requires extensive preprocessing, and both it and its improved variants must cope with interference caused by spectacle lenses, eyelashes, and the like. The gray-level projection method is a fast algorithm: it projects the image onto the two coordinate axes and locates the eyes from the peaks and valleys of the projections, but because it relies only on the projections, interference from black-rimmed glasses, eyebrows, hair, and so on is hard to distinguish. The template matching method must normalize the scale and orientation of the face image, and the template has to be obtained through training, so its computational cost is high. The Adaboost algorithm, trained on samples, has certain advantages for eye localization, but it strictly requires abundant training samples; eyebrows with high gray values may still be classified as eyeballs, and the size of the candidate search window limits the size of the training samples, which lowers the recognition rate on low-resolution images.

Summary of the Invention

To overcome the above shortcomings, the object of the present invention is to provide a method for locating pupils in face video.

The technical solution adopted by the present invention is as follows:

A method for locating pupils in face video comprises a face detection module, an image preprocessing module, a coarse eye positioning module, and a fine pupil positioning module; the input video image passes through the face detection module, the image preprocessing module, the coarse eye positioning module, and the fine pupil positioning module, and the position of the pupil center is finally obtained.

Principle and beneficial effects of the present invention: the methods mentioned in the Background Art do not exploit the characteristics of the pupil, so their performance is constrained by the quality of the eye image; when the image quality is poor, their localization accuracy drops. During pupil localization, glasses, eyebrows, eyelashes, hair, and so on all cause interference. To improve pupil localization performance, the characteristics of the pupil image should be fully exploited: the pupil is circular in shape, dark in color with a correspondingly low gray value, and located in the upper half of the face. The present invention exploits the radial symmetry of the pupil and proposes a pupil localization method based on integral projection and the radial symmetry transform, so as to improve pupil localization performance.

Brief Description of the Drawings

Fig. 1 Block diagram of the radial symmetry transform method.

Fig. 2 Mapping relationship of the pixels.

Fig. 3 Flow chart of the "iris and pupil localization method based on human-eye structure classification" used by Huang Xinyu and Yang Ruigang in Chinese patent publication No. 201210393147.2.

Fig. 4 Flow chart of the fast eye localization method based on a new Haar-like feature used by Yefei Chen and Jianbo Su in the paper "Fast eye localization based on a new haar-like feature" (10th World Congress on Intelligent Control and Automation, Beijing, China, 2012, 4825-4830).

Fig. 5 Functional block diagram of the technical solution of the present invention.

Fig. 6 Face detection result obtained with the face detection method of Mikael Nilsson, J. Nordberg, and Ingvar Claesson in the paper "Face detection using local SMQT features and split up SNoW classifier" (IEEE International Conference on Acoustics, Speech, and Signal Processing, Honolulu, USA, 2007, 589-592).

Fig. 7 Face image obtained from Fig. 6.

Fig. 8 Image obtained by median filtering the face image in Fig. 7.

Fig. 9 Image obtained by histogram equalization of the image in Fig. 8.

Fig. 10 Vertical projection curve of the image in Fig. 9.

Fig. 11 Left-eye and right-eye regions in Fig. 9.

Fig. 12 Image of a single eye region.

Fig. 13 Pupil localization result for a single eye.

Fig. 14 Pupil localization results for both eyes.

Detailed Description of the Embodiments

The present invention is further described below with reference to the accompanying drawings:

(1) Radial symmetry transform

The radial symmetry transform was developed from the generalized symmetry transform. It is a gradient-based object detection operator that can detect pixels with radially symmetric characteristics simply and quickly, enabling effective detection of circular targets.

In general, given the radius n of a circular target (n ∈ N, where N is the set of radii of the radially symmetric features to be detected), the corresponding result of the radial symmetry transform can be obtained. The value of this result at a point P characterizes the degree of radial symmetry of the image at that point, i.e., how likely the image is to contain a circle centered at P with radius n. As the detection radius n increases, regions with high symmetry rapidly accumulate a large radial symmetry strength value S by virtue of their radial symmetry, which enables detection of circular regions. A block diagram of the radial symmetry transform method is shown in Fig. 1.

The image I is convolved with the Sobel horizontal operator and the Sobel vertical operator to compute the edge gradient image

$$g(p) = [g_x(p),\ g_y(p)] = [I * \mathrm{Sobel}_{hor},\ I * \mathrm{Sobel}_{ver}],$$

where '*' denotes convolution, $\mathrm{Sobel}_{hor}$ is the Sobel horizontal operator, and $\mathrm{Sobel}_{ver}$ is the Sobel vertical operator:

$$\mathrm{Sobel}_{hor} = \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix}, \qquad
\mathrm{Sobel}_{ver} = \begin{bmatrix} 1 & 2 & 1 \\ 0 & 0 & 0 \\ -1 & -2 & -1 \end{bmatrix}.$$

For each radius n, the corresponding orientation projection image O_n and magnitude projection image M_n can be computed. As shown in Fig. 2, for a given point P, O_n and M_n are computed from the positively and negatively affected pixels P_{+ve} and P_{-ve}, which are functions of the gradient g(p). The positively and negatively affected pixels are defined differently: the positively affected pixel is the point at distance n from P toward which the gradient vector g(p) points, and the negatively affected pixel is the point at distance n from P toward which g(p) points in the negative direction.

From the gradient vector g(p), the positively affected pixel P_{+ve} and the negatively affected pixel P_{-ve} are computed as

$$P_{+ve}(p) = p + \mathrm{round}\!\left(\frac{g(p)}{|g(p)|}\, n\right),$$

$$P_{-ve}(p) = p - \mathrm{round}\!\left(\frac{g(p)}{|g(p)|}\, n\right), \qquad (5)$$

where "round" rounds every element of a vector to its nearest integer and |·| denotes the vector norm.

The orientation projection image O_n and the magnitude projection image M_n are both initialized to 0. For each pair of affected pixels, the value of O_n at the point P_{+ve} is increased by 1 and the value of M_n at P_{+ve} is increased by |g(p)|; correspondingly, the value of O_n at P_{-ve} is decreased by 1 and the value of M_n at P_{-ve} is decreased by |g(p)|, that is,

$$O_n[P_{+ve}(p)] = O_n[P_{+ve}(p)] + 1,$$

$$O_n[P_{-ve}(p)] = O_n[P_{-ve}(p)] - 1, \qquad (7)$$

$$M_n[P_{+ve}(p)] = M_n[P_{+ve}(p)] + |g(p)|,$$

$$M_n[P_{-ve}(p)] = M_n[P_{-ve}(p)] - |g(p)|, \qquad (9)$$

Thus O_n reflects how many of the surrounding pixels map onto point P along their gradient directions, and M_n reflects the accumulated gradient magnitudes of the surrounding points at that point.
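
A possible implementation of this accumulation step, assuming the gradient helper from the previous sketch, is shown below; it illustrates the update rules above and is not the patent's own code.

```python
# Sketch: accumulate the orientation image O_n and magnitude image M_n for one
# radius n, using the positively and negatively affected pixels defined above.
import numpy as np

def project_images(gx, gy, mag, n, grad_threshold=1e-3):
    h, w = mag.shape
    O_n = np.zeros((h, w))
    M_n = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            g = mag[y, x]
            if g < grad_threshold:                  # skip (near-)zero gradients
                continue
            # pixel offset round(g(p)/|g(p)| * n) along the gradient direction
            dx = int(round(gx[y, x] / g * n))
            dy = int(round(gy[y, x] / g * n))
            px, py = x + dx, y + dy                 # positively affected pixel P+ve
            if 0 <= px < w and 0 <= py < h:
                O_n[py, px] += 1
                M_n[py, px] += g
            qx, qy = x - dx, y - dy                 # negatively affected pixel P-ve
            if 0 <= qx < w and 0 <= qy < h:
                O_n[qy, qx] -= 1
                M_n[qy, qx] -= g
    return O_n, M_n
```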

When the detection radius is n, the radial symmetry strength value S_n is defined as

(An equation image in the original, which defines $\hat{O}_n(p)$ from $O_n(p)$, could not be recovered here.)

$$F_n(p) = \frac{M_n(p)}{k_n}\left[\frac{\hat{O}_n(p)}{k_n}\right]^{\alpha}, \qquad (11)$$

$$S_n = F_n * A_n, \qquad (10)$$

where k_n is a scale factor that normalizes O_n and M_n obtained at different radii, α is the radial control parameter, '*' denotes convolution, and A_n is a two-dimensional Gaussian kernel.

The final radial symmetry transform S is the mean of the radial symmetry results S_n over all detection radii n:

$$S = \frac{1}{|N|}\sum_{n \in N} S_n, \qquad (13)$$
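
The remaining steps can be sketched as follows. Because the defining equation for $\hat{O}_n(p)$ survives only as an image in the source, the clamping of O_n at ±k_n used here is an assumption; the values α = 2 and k_n = 9.9 are taken from the embodiment described later, and the sketch reuses edge_gradient() and project_images() from the previous sketches.

```python
# Sketch: F_n, S_n = F_n * A_n, and the final map S averaged over all radii.
import numpy as np
from scipy.ndimage import convolve

def gaussian_kernel(size, sigma):
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def symmetry_map(image, radii, alpha=2.0, k_n=9.9):
    gx, gy, mag = edge_gradient(image)              # from the earlier sketch
    S = np.zeros(mag.shape)
    for n in radii:                                 # n >= 2 assumed
        O_n, M_n = project_images(gx, gy, mag, n)
        O_hat = np.clip(O_n, -k_n, k_n)             # assumption: O_n clamped at +/- k_n
        # abs() keeps the power well defined; for alpha = 2 this equals (O_hat/k_n)^2
        F_n = (M_n / k_n) * (np.abs(O_hat) / k_n) ** alpha
        A_n = gaussian_kernel(max(int(n), 1), 0.25 * n)   # n x n Gaussian, sigma = 0.25 n
        S += convolve(F_n, A_n, mode='nearest')     # S_n = F_n * A_n
    return S / len(radii)                           # equation (13): mean over the radii
```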

The present invention uses the good circle detection properties of the radial symmetry transform for precise pupil localization.

Prior art 1 related to the present invention

Technical solution of prior art 1

In the Chinese invention patent "Iris and pupil localization method based on human-eye structure classification" (publication No. 201210393147.2), Huang Xinyu and Yang Ruigang proposed an iris and pupil localization method based on human-eye structure classification; its flow chart is shown in Fig. 3. Before precisely locating the iris and pupil boundaries, the invention uses unsupervised learning to perform automatic structural classification of the eye image. This classification can roughly locate the iris and pupil, estimate the size of the effective iris region, effectively remove outlier data that do not belong to the iris and pupil boundaries, and reduce the search space for the iris and pupil position and size. Within the reduced search space, the invention further performs constrained optimization based on the inherent characteristics of the iris and pupil to search for the optimal iris and pupil boundaries. The invention can increase the stability and accuracy of iris and pupil localization and is especially suitable for long-distance, non-invasive iris acquisition systems.

Shortcomings of prior art 1

This method requires training and strictly requires abundant training samples; moreover, eyebrows with high gray values may still be classified as eyeballs, which lowers the recognition rate.

Prior art 2 related to the present invention

Technical solution of prior art 2

In the paper "Fast eye localization based on a new haar-like feature" (10th World Congress on Intelligent Control and Automation, Beijing, China, 2012, 4825-4830), Yefei Chen and Jianbo Su proposed a fast pupil localization method. Based on the prior proportional relationships of facial features, the method first selects a suitable candidate window within the detected face region; histogram equalization is then applied to the candidate region to remove illumination effects; finally, the method proposes a new Haar-like feature for quickly and accurately locating the pupils in the candidate region. The method is simple, requires no training, and can robustly handle interference caused by eyebrows, hair, glasses, and so on. Its flow chart is shown in Fig. 4.

Shortcomings of prior art 2

The main problems of the fast eye localization method based on the new Haar-like feature are: (1) the localization error rate is high in the presence of reflections on glasses or thick dark eyebrows; (2) its robustness to illumination is still low; (3) the method is only effective for face images whose pose variation is within ±20 degrees.

Detailed Description of the Technical Solution of the Present Invention

Technical problem to be solved by the present invention

The present invention processes face images to remove the influence of interference such as glasses, eyebrows, hair, and illumination, and automatically and robustly locates the pupil positions in the target eye image, thereby providing accurate basic data for subsequent research such as face recognition and eye tracking.

Complete technical solution provided by the present invention

The present invention first obtains the face region in the image with a face detection method; then, using prior knowledge such as the pupil position, it applies the integral projection method to obtain the approximate eyebrow-eye region, i.e., the coarse localization of the eyes; finally, it uses the radial symmetry transform, a circular target detection method, to locate the pupils precisely. The block diagram of the technical solution of the present invention is shown in Fig. 5; the solution mainly comprises a face detection module, an image preprocessing module, a coarse eye positioning module, and a fine pupil positioning module.

Face detection module. The processing of the face detection module is as follows: the input of this module is the target image I_orig; the face detection method of Mikael Nilsson, J. Nordberg, and Ingvar Claesson in the paper "Face detection using local SMQT features and split up SNoW classifier" (IEEE International Conference on Acoustics, Speech, and Signal Processing, Honolulu, USA, 2007, 589-592), based on the local Successive Mean Quantization Transform (SMQT) and a Sparse Network of Winnows (SNoW) classifier, is used to detect the face image I_face containing the eyes, which is taken as the output of this module.
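
The patent relies on the published SMQT/SNoW detector of Nilsson et al., and no code for that detector is reproduced here. Purely as an illustrative stand-in for obtaining the face crop I_face, the sketch below uses OpenCV's stock Haar-cascade frontal-face detector; it is not the method cited by the patent.

```python
# Stand-in face detection sketch (OpenCV Haar cascade, not the SMQT/SNoW detector).
import cv2

def detect_face(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda r: r[2] * r[3])   # keep the largest detection
    return gray[y:y + h, x:x + w]                        # grayscale face crop I_face
```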

Image preprocessing module. The processing of the image preprocessing module is as follows: the input of this module is the face detection result I_face; after preprocessing such as median filtering and histogram equalization is applied to I_face, the preprocessed image I is output.

(1) Median filtering

A 3×3 window is used to median-filter the face image, so that every point in the image corresponds to a 3×3 window with that point at its center. The 8 points around the window center are sorted by gray value from largest to smallest, and the gray value of the window center point is then replaced by the median of the gray values in the window, i.e., the mean of the gray values of the two middle points after sorting. This completes the median filtering and yields the image I_temp.
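
A direct (unoptimized) sketch of this particular filtering rule, in which the eight neighbours are sorted and the centre is replaced by the mean of the two middle values, might look as follows; how the image border is handled is not specified in the text, so the border pixels are simply left unchanged here.

```python
# Sketch: 3x3 "median of the eight neighbours" filtering as described above.
import numpy as np

def median_filter_8(img):
    img = np.asarray(img, dtype=float)
    out = img.copy()
    h, w = img.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = img[y - 1:y + 2, x - 1:x + 2].ravel()
            neighbours = np.sort(np.delete(window, 4))         # drop the centre pixel
            out[y, x] = (neighbours[3] + neighbours[4]) / 2.0   # mean of the two middle values
    return out
```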

(2) Histogram equalization

Histogram equalization is applied to the median-filtered image I_temp. Histogram equalization maps an image, via a gray-level transform, to another image with an equalized histogram; the mapping function is a cumulative distribution function. The specific steps are as follows:

1) First compute the normalized gray-level histogram H of I_temp,

$$H(i) = \frac{n_i}{n}, \qquad i = 0, 1, \ldots, 255,$$

where n_i is the number of pixels with gray level i in the image, i = 0, 1, ..., 255, and n is the total number of pixels in the image.

2) Then compute the histogram integral H',

$$H'(i) = \sum_{0 \le j \le i} H(j),$$

3) Finally compute the histogram-equalized image I,

$$I(x, y) = H'[I_{temp}(x, y)].$$
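
These three steps map directly onto a short NumPy routine; the final rescaling of H' to the 0-255 gray range is an implementation detail not spelled out in the text.

```python
# Sketch: histogram equalization via the normalized histogram H and its integral H'.
import numpy as np

def equalize_histogram(img_temp):
    img = np.asarray(img_temp).astype(np.uint8)
    counts = np.bincount(img.ravel(), minlength=256)     # n_i for i = 0..255
    H = counts / img.size                                # normalized histogram H(i)
    H_cum = np.cumsum(H)                                 # H'(i) = sum_{j <= i} H(j)
    mapping = np.round(H_cum * 255).astype(np.uint8)     # rescale to gray levels (assumption)
    return mapping[img]                                  # I(x, y) = H'[I_temp(x, y)]
```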

Coarse eye positioning module. The processing of the coarse eye positioning module is as follows: the input of this module is the preprocessed image I; a vertical projection of I is computed, and the coordinate of the peak located between 1/3 and 1/2 of the vertical projection curve is taken, which corresponds to the middle of the nose. Using this coordinate, I_face is truncated in the vertical direction to obtain the approximate eye region of the face image. Finally, the eye region is split at the center of its width so that the two eyes are separated, giving the outputs of the module: the left-eye region I_left and the right-eye region I_right.

The specific procedure of coarse eye positioning is as follows (a code sketch is given after these steps):

(1) Compute the vertical gray projection curve P_y(x) of I using equation (20),

$$P_y(x) = \sum_{x=1}^{w} I(x, y), \qquad (20)$$

where w is the width of the image.

(2) Take the coordinate of the peak of P_y(x) between 1/3 and 1/2 of the curve; this corresponds to the middle of the nose of the face. Using this coordinate, truncate I_face in the vertical direction to obtain a rough eye region.

(3) Split the eye region in half in the vertical direction at the center of its width; the two eyes are thereby separated, giving the left-eye region I_left and the right-eye region I_right.
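
A sketch of these three steps follows. The text leaves the projection axis and the exact crop implicit, so this sketch follows equation (20) literally (a per-row sum), takes the peak between 1/3 and 1/2 of the face height as the nose row, keeps the band above it as the eye region, and splits it at the middle of its width.

```python
# Sketch: coarse eye localization by vertical gray projection and splitting.
import numpy as np

def coarse_eye_regions(face_img):
    face = np.asarray(face_img, dtype=float)
    h, w = face.shape
    proj = face.sum(axis=1)                          # P_y: one value per image row
    lo, hi = h // 3, h // 2
    nose_row = lo + int(np.argmax(proj[lo:hi]))      # peak between 1/3 and 1/2 of the height
    eye_band = face[:nose_row, :]                    # keep the band above the nose (assumption)
    left = eye_band[:, : w // 2]                     # left-eye region I_left
    right = eye_band[:, w // 2:]                     # right-eye region I_right
    return left, right
```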

Pupil positioning module

The inputs of this module are the eye regions I_left and I_right obtained from the coarse eye positioning; the radial symmetry transform is applied to I_left and I_right separately to locate the pupils precisely, giving the final pupil localization result. The radial symmetry transform is an effective operator for detecting radially symmetric regions; by adjusting the direction of the gray gradient it can detect bright spots and dark spots separately. The gradient has a positive direction and a negative direction: the positive direction points from dark to bright and the negative direction points from bright to dark, which matches the characteristics of the pupil. To detect the pupil region effectively, the circular symmetry of the pupil should be fully exploited by computing, for each pixel, the pixel it affects in the negative direction, because in the negative direction these pixels point toward the pupil. As the detection radius grows, all pixels on the pupil edge converge, in the negative direction, onto the pupil center. Pixels on the edge of a bright spot, however, do not converge onto its center in the negative direction, because in the negative direction they point away from the center. In addition, eyelashes, hair, and spectacle frames have no radially symmetric features, so their influence in the negative direction is not detected. This detection technique therefore effectively avoids interference from the above noise sources. The specific calculation steps are as follows (a sketch is given after the steps):

(1) Convolve the eye image I_left or I_right obtained from the coarse eye positioning module with the Sobel horizontal and vertical operators, and compute the edge gradient image g(p) = [g_x(p), g_y(p)].

(2) Initialize the starting search radius N = 5, S_n_max = 0, S_max = 0.

(3) If S_max ≥ S_n_max holds, perform the following steps:

(a) S_max = S_n_max;

(b) for n = 2, ..., N:

first compute P_{-ve}(p) according to equation (5), and obtain O_n(P_{-ve}(p)) and M_n(P_{-ve}(p)) from equations (7) and (9); then compute F_n(p) from equation (11), with parameters α = 2 and k_n = 9.9; finally obtain S_n from equation (10), where A_n is a two-dimensional Gaussian kernel of size n × n with variance σ = 0.25n;

(c) compute S according to equation (13), and choose its largest value as the pupil center;

(d) N = N + 2.

(4) If S_max < S_n_max, the iterative search ends, and the position of the pupil center is obtained.
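
A loose sketch of this iterative search, reusing symmetry_map() from the earlier sketch, is shown below. It follows the spirit of steps (1)-(4) — grow the radius set two at a time and stop once the best symmetry response no longer improves — but computes the full symmetry map rather than only the negative-direction contributions, and uses the absolute response so that the dark (negative-valued) pupil centre is picked up.

```python
# Sketch: iterative radius search for the pupil centre in one eye region.
import numpy as np

def locate_pupil(eye_img, alpha=2.0, k_n=9.9):
    eye = np.asarray(eye_img, dtype=float)
    best_val, best_center = -np.inf, None
    N = 5                                                    # starting search radius
    while N <= min(eye.shape) // 2:                          # safety bound on the radius
        S = symmetry_map(eye, radii=range(2, N + 1), alpha=alpha, k_n=k_n)
        response = np.abs(S)                                 # dark pupil gives a strong negative S
        val = response.max()
        if val < best_val:                                   # S_max < S_n_max: stop iterating
            break
        best_val = val
        cy, cx = np.unravel_index(response.argmax(), S.shape)
        best_center = (cx, cy)                               # pupil centre (column, row)
        N += 2
    return best_center
```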

Beneficial effects of the technical solution of the present invention

The CAS-PEAL face image database was created by the Institute of Computing Technology of the Chinese Academy of Sciences. It contains 99,450 head-and-shoulder images of 1,040 Chinese subjects, all collected in a dedicated acquisition environment and covering four main varying conditions: pose, expression, accessories, and illumination. The IMM face database was created by the Technical University of Denmark and contains 240 face images with different poses, expressions, and illumination. A total of 500 images were selected from the IMM and CAS-PEAL databases to evaluate the present invention; these 500 images cover a wide variety of faces, from frontal to side poses, from long hair to hats, and from no glasses to various kinds of glasses.

Faces are detected with the face detection method proposed by Mikael Nilsson, J. Nordberg, and Ingvar Claesson in "Face detection using local SMQT features and split up SNoW classifier" (IEEE International Conference on Acoustics, Speech, and Signal Processing, Honolulu, USA, 2007, 589-592); the result is shown in Fig. 6. Preprocessing such as median filtering and histogram equalization is then applied to the detected face image: Fig. 7 shows the face image before processing, Fig. 8 the image after median filtering, and Fig. 9 the image after histogram equalization.

The vertical projection curve of the histogram-equalized image is computed, as shown in Fig. 10. Because the gray values of the eye regions are low and the gray values of the nose region are high, an obvious "valley-peak-valley" pattern is formed on the vertical projection curve, corresponding to the eye region: the abscissa of the "peak" is the position of the middle of the nose, and the two "valleys" correspond to the left-eye and right-eye regions, as shown in Fig. 11.

The radial symmetry transform is applied to the single-eye region shown in Fig. 12 to locate the pupil; the result is shown in Fig. 13. After the two eyes are located separately, the final localization result shown in Fig. 14 is obtained; the green cross marks in the figure are the final pupil localization results.
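
Chaining the earlier sketches in the order just described gives an end-to-end toy pipeline. The file name and the returned per-eye coordinates (which are relative to the cropped eye regions, not the full frame) are illustrative details only.

```python
# Sketch: end-to-end pipeline combining the helpers from the earlier sketches.
import cv2

def locate_pupils(frame_bgr):
    face = detect_face(frame_bgr)                   # stand-in detector -> I_face
    if face is None:
        return None
    smoothed = median_filter_8(face)                # I_temp
    equalized = equalize_histogram(smoothed)        # I
    left, right = coarse_eye_regions(equalized)     # I_left, I_right
    return locate_pupil(left), locate_pupil(right)

if __name__ == "__main__":
    frame = cv2.imread("face.jpg")                  # any test image (illustrative path)
    print(locate_pupils(frame))
```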

Table 1 shows the pupil localization performance. As can be seen from Table 1, the method proposed by the present invention detects at least one pupil in almost all images; only in 8 images were both pupils located incorrectly. In addition, the rate at which both pupils were located accurately is 95.4%. Table 2 shows the distribution of the pupil positions detected by the present invention relative to the true pupil positions. Here, instead of considering whole images, the left and right eyes are considered separately, so the total number of eyes is 500 × 2 = 1000. As can be seen from Table 2, pupils with a localization error within 5 pixels account for 95.5%, while pupils with a localization error of more than 10 pixels account for only 2.6%.

Table 1 Pupil localization performance

  Eye localization                         Accuracy
  At least one pupil located accurately    98.2% (491/500)
  Both pupils located accurately           95.4% (477/500)
  Both pupils located inaccurately         1.8% (9/500)

Table 2 Detection rates for different localization errors

  Distance from the true pupil position    Count
  Within 5 pixels                          955 (95.5%)
  5 to 10 pixels                           19 (1.9%)
  10 to 15 pixels                          8 (0.8%)
  15 to 20 pixels                          6 (0.6%)
  More than 20 pixels                      12 (1.2%)

The above is only a preferred embodiment of the present invention, but the scope of protection of the present invention is not limited thereto. Any equivalent replacement or modification made, within the technical scope disclosed by the present invention, by a person skilled in the art according to the technical solution and the inventive concept of the present invention shall fall within the scope of protection of the present invention.

The abbreviations and key terms involved in the present invention are defined as follows:

RST: Radial Symmetry Transform.

SMQT: Successive Mean Quantization Transform.

SNoW: Sparse Network of Winnows.

Claims (5)

1. A method for locating pupils in a face video, characterized in that it comprises a face detection module, an image preprocessing module, a coarse eye positioning module, and a fine pupil positioning module; the input video image passes through the face detection module, the image preprocessing module, the coarse eye positioning module, and the fine pupil positioning module, and the position of the pupil center is finally obtained.

2. A method for locating pupils in a face video, characterized in that the processing of the face detection module is: the input of this module is the target image I_orig; a face detection method based on the local successive mean quantization transform and a sparse network of winnows classifier is used to detect the face image I_face containing the eyes, which is taken as the output of this module.

3. A method for locating pupils in a face video, characterized in that the processing of the image preprocessing module is:

(1) Median filtering

A 3 × 3 window is used to median-filter the face image, so that every point in the image corresponds to a 3 × 3 window with that point at its center; the 8 points around the window center are sorted by gray value from largest to smallest, and the gray value of the window center point is then replaced by the median of the gray values in the window, i.e., the mean of the gray values of the two middle points after sorting; this completes the median filtering and yields the image I_temp;

(2) Histogram equalization

Histogram equalization is applied to the median-filtered image I_temp; histogram equalization maps an image, via a gray-level transform, to another image with an equalized histogram, the mapping function being a cumulative distribution function; the specific steps are as follows:

1) First compute the normalized gray-level histogram H of I_temp,

$$H(i) = \frac{n_i}{n}, \qquad i = 0, 1, \ldots, 255,$$

where n_i is the number of pixels with gray level i in the image, i = 0, 1, ..., 255, and n is the total number of pixels in the image;

2) Then compute the histogram integral H',

$$H'(i) = \sum_{0 \le j \le i} H(j);$$

3) Finally compute the histogram-equalized image I,

$$I(x, y) = H'[I_{temp}(x, y)].$$

4. A method for locating pupils in a face video, characterized in that the processing of the coarse eye positioning module is:

(1) Compute the vertical gray projection curve P_y(x) of I,

$$P_y(x) = \sum_{x=1}^{w} I(x, y),$$

where w is the width of the image;

(2) Take the coordinate of the peak of P_y(x) between 1/3 and 1/2 of the curve, which corresponds to the middle of the nose of the face; use this coordinate to truncate I_face in the vertical direction to obtain a rough eye region;

(3) Split the eye region in half in the vertical direction at the center of its width; the two eyes are thereby separated, giving the left-eye region I_left and the right-eye region I_right.

5. A method for locating pupils in a face video, characterized in that the processing of the fine pupil positioning module is:

(1) Convolve the eye image I_left or I_right obtained from the coarse eye positioning module with the Sobel horizontal and vertical operators, and compute the edge gradient image g(p) = [g_x(p), g_y(p)];

$$g(p) = [g_x(p),\ g_y(p)] = [I * \mathrm{Sobel}_{hor},\ I * \mathrm{Sobel}_{ver}],$$

where '*' denotes convolution, $\mathrm{Sobel}_{hor}$ is the Sobel horizontal operator

$$\mathrm{Sobel}_{hor} = \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix},$$

and $\mathrm{Sobel}_{ver}$ is the Sobel vertical operator

$$\mathrm{Sobel}_{ver} = \begin{bmatrix} 1 & 2 & 1 \\ 0 & 0 & 0 \\ -1 & -2 & -1 \end{bmatrix};$$

(2) Initialize the starting search radius N = 5, S_n_max = 0, S_max = 0;

(3) If S_max ≥ S_n_max holds, perform the following steps:

(a) S_max = S_n_max;

(b) For n = 2, 3, ..., N, compute the corresponding orientation projection image O_n and magnitude projection image M_n; for a given point P, O_n and M_n are computed from the positively and negatively affected pixels P_{+ve} and P_{-ve}, which are functions of the gradient g(p); the positively and negatively affected pixels are defined differently: the positively affected pixel is the point at distance n from P toward which the gradient vector g(p) points, and the negatively affected pixel is the point at distance n from P toward which g(p) points in the negative direction;

From the gradient vector g(p), the negatively affected pixel P_{-ve} and the values O_n[P_{-ve}(p)] and M_n[P_{-ve}(p)] are computed as

$$P_{-ve}(p) = p - \mathrm{round}\!\left(\frac{g(p)}{|g(p)|}\, n\right), \qquad (5)$$

$$O_n[P_{-ve}(p)] = O_n[P_{-ve}(p)] - 1, \qquad (7)$$

$$M_n[P_{-ve}(p)] = M_n[P_{-ve}(p)] - |g(p)|, \qquad (9)$$

where "round" rounds every element of a vector to its nearest integer and |·| denotes the vector norm;

Then compute F_n(p),

$$F_n(p) = \frac{M_n(p)}{k_n}\left[\frac{\hat{O}_n(p)}{k_n}\right]^{\alpha},$$

where α is the radial control parameter, with α = 2 and k_n = 9.9;

(An equation image in the original, which defines $\hat{O}_n(p)$ from $O_n(p)$, could not be recovered here.)

where k_n is a scale factor that normalizes O_n and M_n obtained at different radii;

When the detection radius is n, the radial symmetry strength value S_n is

$$S_n = F_n * A_n, \qquad (10)$$

where '*' denotes convolution and A_n is a two-dimensional Gaussian kernel of size n × n with variance σ = 0.25n;

(c) The final radial symmetry transform S is the mean of the radial symmetry results S_n over all detection radii n; its largest value is chosen as the pupil center,

$$S = \frac{1}{|N|}\sum_{n \in N} S_n; \qquad (13)$$

(d) N = N + 2;

(4) If S_max < S_n_max, the iterative search ends, and the position of the pupil center is obtained.
CN2013103764510A 2013-08-26 2013-08-26 A pupil location method in face video Pending CN103440476A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2013103764510A CN103440476A (en) 2013-08-26 2013-08-26 A pupil location method in face video

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2013103764510A CN103440476A (en) 2013-08-26 2013-08-26 A pupil location method in face video

Publications (1)

Publication Number Publication Date
CN103440476A true CN103440476A (en) 2013-12-11

Family

ID=49694169

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2013103764510A Pending CN103440476A (en) 2013-08-26 2013-08-26 A pupil location method in face video

Country Status (1)

Country Link
CN (1) CN103440476A (en)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1686051A (en) * 2005-05-08 2005-10-26 上海交通大学 Canthus and pupil location method based on VPP and improved SUSAN
US20110164825A1 (en) * 2005-11-25 2011-07-07 Quantum Signal, Llc Dot templates for object detection in images
CN101751551A (en) * 2008-12-05 2010-06-23 比亚迪股份有限公司 Method, device, system and device for identifying face based on image

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Liu Cuixiang et al.: "Face detection algorithm based on the successive mean quantization transform", Video Engineering *
Tang Kun: "Research on facial feature point localization algorithms", China Master's Theses Full-text Database, Information Science and Technology series *

Cited By (33)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104463081A (en) * 2013-09-16 2015-03-25 展讯通信(天津)有限公司 Detection method of human eye state
CN104463080A (en) * 2013-09-16 2015-03-25 展讯通信(天津)有限公司 Detection method of human eye state
CN105893916A (en) * 2014-12-11 2016-08-24 深圳市阿图姆科技有限公司 New method for detection of face pretreatment, feature extraction and dimensionality reduction description
CN104657722A (en) * 2015-03-10 2015-05-27 无锡桑尼安科技有限公司 Eye parameter detection equipment
CN104657722B (en) * 2015-03-10 2017-03-08 吉林大学 Eye parameter detection equipment
CN104766059A (en) * 2015-04-01 2015-07-08 上海交通大学 Rapid and accurate human eye positioning method and sight estimation method based on human eye positioning
CN104766059B (en) * 2015-04-01 2018-03-06 上海交通大学 Quick accurate human-eye positioning method and the gaze estimation method based on human eye positioning
CN104835156A (en) * 2015-05-05 2015-08-12 浙江工业大学 Non-woven bag automatic positioning method based on computer vision
CN104835156B (en) * 2015-05-05 2017-10-17 浙江工业大学 A kind of non-woven bag automatic positioning method based on computer vision
CN105184269A (en) * 2015-09-15 2015-12-23 成都通甲优博科技有限责任公司 Extraction method and extraction system of iris image
CN105205480A (en) * 2015-10-31 2015-12-30 潍坊学院 Complex scene human eye locating method and system
CN105205480B (en) * 2015-10-31 2018-12-25 潍坊学院 Human-eye positioning method and system in a kind of complex scene
CN106063702A (en) * 2016-05-23 2016-11-02 南昌大学 A kind of heart rate detection system based on facial video image and detection method
CN106127160A (en) * 2016-06-28 2016-11-16 上海安威士科技股份有限公司 A kind of human eye method for rapidly positioning for iris identification
CN106203375A (en) * 2016-07-20 2016-12-07 济南大学 A kind of based on face in facial image with the pupil positioning method of human eye detection
CN106326880A (en) * 2016-09-08 2017-01-11 电子科技大学 Pupil center point positioning method
CN106919933A (en) * 2017-03-13 2017-07-04 重庆贝奥新视野医疗设备有限公司 The method and device of Pupil diameter
CN107808397A (en) * 2017-11-10 2018-03-16 京东方科技集团股份有限公司 Pupil positioning device, pupil positioning method and Eye-controlling focus equipment
CN107808397B (en) * 2017-11-10 2020-04-24 京东方科技集团股份有限公司 Pupil positioning device, pupil positioning method and sight tracking equipment
CN108182380A (en) * 2017-11-30 2018-06-19 天津大学 A kind of flake pupil intelligent measurement method based on machine learning
CN108182380B (en) * 2017-11-30 2023-06-06 天津大学 A method of intelligent fisheye pupil measurement based on machine learning
CN108090463A (en) * 2017-12-29 2018-05-29 腾讯科技(深圳)有限公司 Object control method, apparatus, storage medium and computer equipment
CN108427926A (en) * 2018-03-16 2018-08-21 西安电子科技大学 A kind of pupil positioning method in gaze tracking system
CN108648201A (en) * 2018-05-14 2018-10-12 京东方科技集团股份有限公司 Pupil positioning method and device, storage medium, electronic equipment
CN109558825A (en) * 2018-11-23 2019-04-02 哈尔滨理工大学 A kind of pupil center's localization method based on digital video image processing
CN110472521A (en) * 2019-07-25 2019-11-19 中山市奥珀金属制品有限公司 A kind of Pupil diameter calibration method and system
CN110472521B (en) * 2019-07-25 2022-12-20 张杰辉 Pupil positioning calibration method and system
CN111428680A (en) * 2020-04-07 2020-07-17 深圳市华付信息技术有限公司 Pupil positioning method based on deep learning
CN111428680B (en) * 2020-04-07 2023-10-20 深圳华付技术股份有限公司 Pupil positioning method based on deep learning
CN113366491A (en) * 2021-04-26 2021-09-07 华为技术有限公司 Eyeball tracking method, device and storage medium
WO2022226747A1 (en) * 2021-04-26 2022-11-03 华为技术有限公司 Eyeball tracking method and apparatus and storage medium
CN114020155A (en) * 2021-11-05 2022-02-08 沈阳飞机设计研究所扬州协同创新研究院有限公司 High-precision sight line positioning method based on eye tracker
CN119251217A (en) * 2024-12-03 2025-01-03 南昌虚拟现实研究院股份有限公司 A pupil detection method and device based on deep learning

Similar Documents

Publication Publication Date Title
CN103440476A (en) A pupil location method in face video
CN104091147B (en) A kind of near-infrared eyes positioning and eye state identification method
CN103632136B (en) Human eye positioning method and device
CN103632132B (en) Face detection and recognition method based on skin color segmentation and template matching
CN100458831C (en) Human face model training module and method, human face real-time certification system and method
CN101763503B (en) Face recognition method of attitude robust
CN104766059B (en) Quick accurate human-eye positioning method and the gaze estimation method based on human eye positioning
CN108614999B (en) Eye opening and closing state detection method based on deep learning
CN111401145B (en) Visible light iris recognition method based on deep learning and DS evidence theory
CN103810491B (en) Head posture estimation interest point detection method fusing depth and gray scale image characteristic points
CN102915435B (en) Multi-pose face recognition method based on face energy diagram
CN106778506A (en) A kind of expression recognition method for merging depth image and multi-channel feature
CN103902978B (en) Face datection and recognition methods
CN105956552B (en) A kind of face blacklist monitoring method
Rouhi et al. A review on feature extraction techniques in face recognition
CN103336973B (en) The eye state identification method of multiple features Decision fusion
CN106599870A (en) Face recognition method based on adaptive weighting and local characteristic fusion
CN111291701B (en) Sight tracking method based on image gradient and ellipse fitting algorithm
CN102902986A (en) Automatic gender identification system and method
CN104091157A (en) Pedestrian detection method based on feature fusion
CN108520214A (en) A finger vein recognition method based on multi-scale HOG and SVM
CN109886086B (en) Pedestrian detection method based on HOG feature and linear SVM cascade classifier
CN110728185B (en) Detection method for judging existence of handheld mobile phone conversation behavior of driver
CN106682578A (en) Human face recognition method based on blink detection
CN118430054B (en) Human face recognition method and system based on AI intelligence

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication (application publication date: 20131211)