
CN105205480A - Complex scene human eye locating method and system


Info

Publication number: CN105205480A (granted as CN105205480B)
Application number: CN201510733877.6A
Authority: CN (China)
Original language: Chinese (zh)
Inventors: 王文成, 刘云龙, 吴小进, 郑秀云
Applicant/Assignee: Weifang University
Legal status: Granted, active


Classifications

    • G06V 40/165: Human faces; Detection; Localisation; Normalisation using facial parts and geometric relationships
    • G06F 18/2411: Classification techniques relating to the classification model, based on the proximity to a decision surface, e.g. support vector machines
    • G06V 40/171: Human faces; Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G06V 40/197: Eye characteristics, e.g. of the iris; Matching; Classification


Abstract

The present invention relates to the technical field of face recognition and provides a method and system for locating human eyes in complex scenes. The method includes: performing face image processing and detection on an acquired image to generate a face image containing the pure face region; performing contrast enhancement on the face image containing the pure face region to obtain a face image that emphasizes the grayscale features of the eyes; performing preliminary eye localization on the enhanced face image to obtain eye images of candidate eye regions; and, from the positions of the eye centers in the candidate eye regions, computing and marking the coordinates of the eye centers in the originally captured image, thereby achieving fast and accurate localization of faces in large scenes.

Description

Method and system for locating human eyes in complex scenes

Technical Field

The invention belongs to the technical field of face recognition, and in particular relates to a method and system for locating human eyes in complex scenes.

Background

Face recognition is an important topic in pattern recognition research and has promising applications in information security, access control and smart cards. A very important step in both two-dimensional and three-dimensional face recognition is the detection and localization of the eyes, because the eye region contains rich information for distinguishing individuals; locating the eyes first not only speeds up recognition and detection but also reduces the complexity of the recognition algorithm. Moreover, since the positions and spacing of the two eyes are least affected by changes in illumination and expression, eye localization is the prerequisite for normalizing the position, size and angle of a face image, and is also the basis for detecting and extracting other facial parts such as the eyebrows, nose and mouth. Automatic eye localization has therefore become a basic and very important topic in face recognition research.

At present there are many methods for eye localization, mainly template-matching methods, grayscale-projection methods and classifier-based methods. Each has drawbacks:

Template-matching methods match a left-eye template and a right-eye template against the image separately. They do not require much prior knowledge, but they are sensitive to the initial position and computationally expensive.

Grayscale-projection methods are often used for eye localization because of their low computational cost, but they require extensive image preprocessing and are strongly affected by illumination and occlusion; occlusion by hair can make the algorithm fail.

Classifier-based methods mainly include support vector machines (SVM), neural networks and the iterative AdaBoost algorithm. They treat eye localization as a classification problem and achieve high accuracy, but in a large scene the eyes are weak targets, and repeatedly searching the whole image with a classifier is computationally expensive and cumbersome.

Summary of the Invention

The purpose of the present invention is to provide an eye localization method for complex scenes that can quickly and accurately locate faces in large, complex scenes.

The present invention is implemented as a method for locating human eyes in a complex scene, the method comprising the following steps:

performing face image processing and detection on the acquired image to generate a face image containing the pure face region;

performing contrast enhancement on the face image containing the pure face region to obtain a face image that emphasizes the grayscale features of the eyes;

performing preliminary eye localization on the face image that emphasizes the grayscale features of the eyes to obtain eye images of candidate eye regions;

according to the positions of the eye centers in the candidate eye regions, calculating the coordinates of the eye centers in the captured image and marking them.

As an improvement, the step of performing face image processing and detection on the captured image to generate a face image containing the pure face region specifically comprises the following steps:

converting the acquired RGB image into a color-space image;

performing skin-color model analysis and morphological processing on the color-space image to obtain a grayscale-based face image;

performing region screening on the morphologically processed face image to obtain an image of the grayscale-based face candidate region;

obtaining a face candidate screenshot from the image of the face candidate region;

converting the face candidate screenshot into a grayscale image, detecting the face region in the grayscale image, and generating a face image containing the pure face region.

As an improvement, the step of performing contrast enhancement on the face image containing the pure face region to obtain a face image emphasizing the grayscale features of the eyes specifically comprises the following steps:

performing a top-hat transform on the face image containing the pure face region;

performing a bottom-hat transform on the face image;

performing the contrast-enhancement calculation on the face image;

binarizing the contrast-enhanced face image;

filtering the binarized face image to obtain a face image that emphasizes the grayscale features of the eyes.

As an improvement, the step of performing preliminary eye localization on the face image emphasizing the grayscale features of the eyes to obtain eye images of candidate eye regions specifically comprises the following steps:

cropping the face image that emphasizes the grayscale features of the eyes and removing hair regions at the image borders;

screening the face image with the border hair regions removed, and selecting two of the regions as candidate eye regions;

drawing bounding boxes around the two selected candidate eye regions and filling them to form a binary mask image;

applying the binary mask image to the cropped grayscale face image in a matting operation to obtain a coarse left-eye image and a coarse right-eye image;

feeding the coarse left-eye image and the coarse right-eye image into a support vector machine classifier for detection and verification, and obtaining and outputting the eye images of candidate eye regions that match human eye characteristics.

As an improvement, after the step of performing preliminary eye localization on the face image emphasizing the grayscale features of the eyes to obtain the eye images of candidate eye regions, and before the step of calculating the coordinates of the eye centers in the captured image from the positions of the eye centers in the candidate eye regions, the method further comprises the following step:

locating the pupil center of each candidate eye region in the obtained eye images, thereby determining the position of the eye center in the candidate eye region.

Another object of the present invention is to provide a system for locating human eyes in complex scenes, the system comprising:

a face image generation module for performing face image processing and detection on the acquired image to generate a face image containing the pure face region;

a contrast enhancement processing module for performing contrast enhancement on the face image containing the pure face region to obtain a face image that emphasizes the grayscale features of the eyes;

a candidate eye region acquisition module for performing preliminary eye localization on the face image emphasizing the grayscale features of the eyes to obtain eye images of candidate eye regions;

an eye center calculation and marking module for calculating and marking the coordinates of the eye centers in the captured image according to the positions of the eye centers in the candidate eye regions.

As an improvement, the face image generation module specifically comprises:

a color conversion module for converting the acquired RGB image into a color-space image;

a skin-color model analysis module for performing skin-color-based model analysis on the color-space image;

a morphological operation module for performing the morphological processing to obtain a grayscale-based face image;

a region screening module for performing region screening on the morphologically processed face image to obtain an image of the grayscale-based face candidate region;

a face candidate screenshot acquisition module for obtaining a face candidate screenshot from the image of the face candidate region;

a conversion and detection module for converting the face candidate screenshot into a grayscale image, detecting the face region in the grayscale image, and generating a face image containing the pure face region.

As an improvement, the contrast enhancement processing module specifically comprises:

a top-hat transform processing module for performing a top-hat transform on the face image containing the pure face region;

a bottom-hat transform processing module for performing a bottom-hat transform on the face image;

a contrast enhancement calculation module for performing the contrast-enhancement calculation on the face image;

a binarization processing module for binarizing the contrast-enhanced face image;

a filtering processing module for filtering the binarized face image to obtain a face image that emphasizes the grayscale features of the eyes.

As an improvement, the candidate eye region acquisition module specifically comprises:

a cropping module for cropping the face image that emphasizes the grayscale features of the eyes and removing hair regions at the image borders;

a screening module for screening the face image with the border hair regions removed and selecting two of the regions as candidate eye regions;

a mask binary image forming module for drawing bounding boxes around the two selected candidate eye regions and filling them to form a binary mask image;

a matting processing module for applying the binary mask image to the cropped grayscale face image in a matting operation to obtain a coarse left-eye image and a coarse right-eye image;

a classification, detection and verification module for feeding the coarse left-eye image and the coarse right-eye image into a support vector machine classifier for detection and verification, and obtaining and outputting the eye images of candidate eye regions that match human eye characteristics.

As an improvement, the system further comprises:

a pupil center positioning module for locating the pupil center of each candidate eye region in the obtained eye images and determining the position of the eye center in the candidate eye region.

In the embodiments of the present invention, face image processing and detection are performed on the acquired image to generate a face image containing the pure face region; contrast enhancement is performed on that face image to obtain a face image that emphasizes the grayscale features of the eyes; preliminary eye localization is performed on the enhanced face image to obtain eye images of candidate eye regions; and, from the positions of the eye centers in the candidate eye regions, the coordinates of the eye centers in the captured image are calculated and marked, achieving fast and accurate localization of faces in large scenes.

Brief Description of the Drawings

Fig. 1 is a flowchart of the method for locating human eyes in complex scenes provided by the present invention;

Fig. 2 is a flowchart of the specific implementation of performing face image processing and detection on the captured image to generate a face image containing the pure face region;

Fig. 3 is a flowchart of performing contrast enhancement on the face image containing the pure face region to obtain a face image that emphasizes the grayscale features of the eyes;

Fig. 4 is a flowchart of the specific implementation of performing preliminary eye localization on the face image emphasizing the grayscale features of the eyes to obtain eye images of candidate eye regions;

Fig. 5 is a structural block diagram of the system for locating human eyes in complex scenes provided by the present invention;

Fig. 6 is a structural block diagram of the face image generation module;

Fig. 7 is a structural block diagram of the contrast enhancement processing module;

Fig. 8 is a structural block diagram of the candidate eye region acquisition module.

Detailed Description

In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only intended to explain the present invention and are not intended to limit it.

Fig. 1 shows the implementation flow of the method for locating human eyes in complex scenes provided by the present invention; the specific steps are as follows:

In step S101, face image processing and detection are performed on the acquired image to generate a face image containing the pure face region.

In step S102, contrast enhancement is performed on the face image containing the pure face region to obtain a face image that emphasizes the grayscale features of the eyes.

In step S103, preliminary eye localization is performed on the face image that emphasizes the grayscale features of the eyes, and eye images of candidate eye regions are obtained.

In step S104, the pupil center of each candidate eye region is located in the obtained eye images, and the position of the eye center in the candidate eye region is determined.

In step S105, the coordinates of the eye centers in the captured image are calculated from the positions of the eye centers in the candidate eye regions, and the eye centers are marked.

Step S104 above is a preferred option; the step of locating the eye center position may be omitted and step S105 executed directly, which is not described again here.

Fig. 2 shows the specific implementation flow of performing face image processing and detection on the captured image to generate a face image containing the pure face region, which includes the following steps:

In step S201, the acquired RGB image is converted into a color-space image.

The RGB image can be obtained in several ways, for example captured live by a camera or read as a color picture from a database.

The color-space conversion mainly converts the RGB image into the YCbCr color space, which effectively separates luminance from chrominance. The conversion model can be written as follows:

[Y, Cb, Cr, 1]^T = [ 0.2990  0.5870  0.1140    0 ;
                    -0.1687 -0.3313  0.5000  128 ;
                     0.5000 -0.4187 -0.0813  128 ;
                          0       0       0    1 ] · [R, G, B, 1]^T

Here Y represents the luminance of the color, Cb is the blue-difference chrominance component and Cr is the red-difference chrominance component; Cr and Cb together carry the chrominance information of the color and are mutually independent in two dimensions.
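For illustration only (not part of the patent text), the conversion above can be written directly with NumPy; the coefficient matrix is the BT.601 full-range form quoted in the description:

    import numpy as np

    # BT.601 full-range RGB -> YCbCr, matching the matrix in the description.
    M = np.array([[ 0.2990,  0.5870,  0.1140],
                  [-0.1687, -0.3313,  0.5000],
                  [ 0.5000, -0.4187, -0.0813]])
    OFFSET = np.array([0.0, 128.0, 128.0])

    def rgb_to_ycbcr(rgb):
        """rgb: H x W x 3 array with R, G, B in [0, 255]; returns the Y, Cb, Cr planes."""
        ycbcr = rgb.astype(np.float64) @ M.T + OFFSET
        return ycbcr[..., 0], ycbcr[..., 1], ycbcr[..., 2]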

In step S202, skin-color model analysis and morphology-based processing are performed on the color-space image to obtain a grayscale-based face image.

Because face skin color shows good clustering behavior in the YCbCr space and is well separated from the background colors, the image is segmented by computing the skin-color similarity of each individual pixel. If skin-color pixels are labeled 1 and non-skin pixels 0, a discriminant function for the skin-color region is obtained as described below:

Because the skin-color segmentation only processes the Cb and Cr components of the YCbCr space, a morphological operator must then be applied to remove isolated background regions from the face image. In these operations, erosion removes isolated noise and dilation fills the non-skin regions inside the face, so that the whole image appears as a fully filled connected region. The closing operation, written "·", is a dilation followed by an erosion; closing A by B is written A·B and is defined as follows:

A · B = (A ⊕ B) Θ B

where Θ denotes the erosion operation and ⊕ denotes the dilation operation.
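As an illustrative sketch of this stage (the fixed Cb/Cr box used as the skin-color discriminant and the 7 x 7 kernel are assumptions; the patent does not give the exact thresholds):

    import cv2
    import numpy as np

    def skin_mask(bgr, cb_range=(77, 127), cr_range=(133, 173)):
        """Binary skin map (1 = skin, 0 = background) from Cb/Cr thresholds, followed by closing."""
        ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)        # OpenCV stores Y, Cr, Cb
        cr, cb = ycrcb[..., 1], ycrcb[..., 2]
        mask = ((cb >= cb_range[0]) & (cb <= cb_range[1]) &
                (cr >= cr_range[0]) & (cr <= cr_range[1])).astype(np.uint8)
        # Closing (dilation then erosion) fills small non-skin holes inside the face
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
        return cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)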

In step S203, region screening is performed on the morphologically processed face image to obtain an image of the grayscale-based face candidate region.

After the filtering based on mathematical morphology, most of the small noise blobs in the face image have been removed, but because the background is complex there are many interfering factors, and non-face regions such as bare arms or legs may be falsely detected as face candidate regions. To remove as many non-face regions as possible, the candidates are verified against prior geometric knowledge such as shape size, aspect ratio, the major-to-minor axis ratio of an approximating ellipse and pixel occupancy; regions that are clearly not faces are discarded, and the skin-color image blocks containing faces are kept.

A face region contains non-skin areas such as the eyes, mouth and eyebrows, so it contains one or more "holes" (non-face areas). Based on this, skin-color regions that contain no holes can be removed. The number of holes in a candidate face region is therefore computed via the Euler number, which is defined as the number of connected components minus the number of holes:

E = C − H

where E, C and H are the Euler number, the number of connected components and the number of holes respectively. From the formula above:

H = C − E

Considering that a skin-color region grows as a single connected component, C = 1, so H = 1 − E.

The Euler number of each block is computed, which reflects how many holes the block contains. Because the eyes, nose and lips of a face appear as black holes after the previous steps, a threshold is set on the computed Euler number: when the Euler number of a block is greater than 0, the block is regarded as a face region and enters the next round of face-region candidates; otherwise it is regarded as a non-face region.
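A rough sketch of this hole-based screening (illustrative; the patent states the test in terms of the Euler number, and here the hole count H is obtained by labeling enclosed background pixels):

    import cv2
    import numpy as np

    def count_holes(region_mask):
        """H = C - E for a single skin blob, computed by labeling background pixels
        that are not connected to the image border (i.e. fully enclosed holes)."""
        inv = (region_mask == 0).astype(np.uint8)
        num, labels = cv2.connectedComponents(inv, connectivity=4)
        border = set(labels[0, :]) | set(labels[-1, :]) | set(labels[:, 0]) | set(labels[:, -1])
        return sum(1 for k in range(1, num) if k not in border)

    def keep_as_face_candidate(region_mask):
        # Faces contain dark cavities (eyes, nostrils, mouth) after binarization.
        return count_holes(region_mask) > 0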

In addition, the length and width of the bounding rectangle of the face region are also used in the region screening, which is not described in detail here.

In step S204, a face candidate screenshot is obtained from the image of the face candidate region.

Specifically, the obtained face candidate region is expanded by 20 pixels on each of its four sides to form an extended rectangle, ensuring as far as possible that the whole face region falls inside the extended rectangle; the extended rectangle is then saved in picture format to obtain the face candidate screenshot.

In step S205, the face candidate screenshot is converted into a grayscale image, the face region is detected in the grayscale image, and a face image containing the pure face region is generated.

The face candidate screenshot is converted into a grayscale image using a weighted average that reflects the sensitivity of the human eye to different colors, i.e.:

Y = ω_R·R + ω_G·G + ω_B·B

where ω_R, ω_G and ω_B are the weights of the color components R, G and B respectively, and Y is the pixel value of the corresponding point in the grayscale image. The parameters are set to ω_R = 0.30, ω_G = 0.59 and ω_B = 0.11, giving a grayscale image with 256 gray levels.
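A minimal sketch of this weighted grayscale conversion (illustrative; the weights are those given above):

    import numpy as np

    def to_gray(rgb):
        """Weighted-average grayscale conversion, Y = 0.30 R + 0.59 G + 0.11 B."""
        weights = np.array([0.30, 0.59, 0.11])
        return np.clip(rgb.astype(np.float64) @ weights, 0, 255).astype(np.uint8)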

Face-region detection in the grayscale image is implemented mainly with the iterative AdaBoost algorithm, as follows:

First, Haar-like rectangular features are used to describe the face, and an "integral image" is used to compute the feature values quickly. Then the AdaBoost algorithm selects the rectangular features that best represent the face to form weak classifiers, and the weak classifiers are combined into a strong classifier by weighted voting. Finally, several trained strong classifiers are connected in series to form a cascade classifier; the cascade structure effectively improves detection speed.

If no face is detected in this step, the original image is read in as a whole, grayscale-transformed, and the entire image is searched for a face. If the AdaBoost classifier still finds no face when searching the whole image, the message "no face detected" is output directly.
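For illustration, a Haar-cascade detector of this kind ships with OpenCV; the sketch below uses the stock frontal-face model as a stand-in for the cascade trained in the patent, and the scale factor and neighbor count are assumptions:

    import cv2

    # Stock OpenCV Haar cascade as a stand-in for the trained cascade described above.
    cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def detect_face(gray):
        """Return the first detected face rectangle (x, y, w, h), or None."""
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        return tuple(faces[0]) if len(faces) else None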

In this embodiment, in step S205 the face image containing the pure face region is cropped from the grayscale image; after cropping, the face image containing the pure face region is square, with an aspect ratio of 1:1.

The square face image containing the pure face region is also size-normalized, i.e. scaled to 100 × 100 pixels.

Fig. 3 shows the implementation flow of performing contrast enhancement on the face image containing the pure face region to obtain a face image that emphasizes the grayscale features of the eyes; the specific steps are as follows:

In step S301, a top-hat transform is applied to the face image containing the pure face region.

In order to emphasize the grayscale features of the eyes in the face image, contrast enhancement is performed by morphological filtering. The top-hat transform has certain high-pass filtering characteristics: the (white) top-hat operator detects peaks of the gray values in the image, while the black (bottom) hat operator detects valleys of the gray values. Using the top-hat transforms of mathematical morphology, the face image is preprocessed to weaken the influence of external illumination changes on the face recognition result and to pick out clusters of bright pixels against the background.

In morphology, erosion and dilation are the basis of mathematical morphology; they are maximum and minimum operations over the domain of definition, and all other transforms are defined as combinations of these two.

Let f(x) and b(x) be two discrete functions defined on the two-dimensional discrete spaces F and B, where f(x) is the grayscale image to be processed and b(x) is the selected structuring element. The dilation and erosion of f(x) by b(x) are then defined respectively as:

(f ⊕ b)(x) = max_{y ∈ B} ( f(x − y) + b(y) )

After the dilation operation, each gray value in the result is the maximum, over a local neighborhood, of the sum of the gray value of a point and the gray value of the corresponding point of the structuring element. Dilation expands boundary points outwards: it enlarges the object boundary and merges into the object all background points that touch it.

(f Θ b)(x) = min_{y ∈ B} ( f(x + y) − b(y) )

The erosion result is the minimum, over a local neighborhood, of the difference between the gray value of a point and that of the corresponding point of the structuring element. Erosion removes objects smaller than the structuring element and eliminates object boundary points; it shrinks boundaries inwards.

The top-hat transform of step S301 therefore proceeds as follows:

The opening of the original image f(x) is subtracted from the original image; the difference detects peaks in the image and thus extracts the foreground information. The opening is an erosion followed by a dilation, using an 8 × 8 structuring element.

In step S302, a bottom-hat transform is applied to the face image.

The bottom-hat transform is the difference between the closing of the original image f(x) and the original image; it detects valleys in the image and extracts the background information, i.e. the grayscale image is first dilated and then eroded, again using an 8 × 8 structuring element.

In step S303, the contrast-enhancement calculation is performed on the face image.

In this calculation, the image obtained by the top-hat transform in step S301 is added to the original image, and the image obtained by the bottom-hat transform in step S302 is then subtracted, yielding the contrast-enhanced face image.
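A sketch of steps S301 to S303 with OpenCV (illustrative; the 8 x 8 structuring element follows the text, everything else is an assumption):

    import cv2
    import numpy as np

    def enhance_contrast(gray):
        """enhanced = gray + tophat(gray) - blackhat(gray), with an 8 x 8 rectangular element."""
        kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (8, 8))
        tophat = cv2.morphologyEx(gray, cv2.MORPH_TOPHAT, kernel)      # peaks / foreground
        blackhat = cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, kernel)  # valleys / background
        enhanced = gray.astype(np.int16) + tophat - blackhat
        return np.clip(enhanced, 0, 255).astype(np.uint8)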

In step S304, the contrast-enhanced face image is binarized.

Suppose the image produced by step S303 is f(x, y), the binarized image is g(x, y) and the threshold is T. Then:

g(x, y) = 0,  if f(x, y) ≥ T
g(x, y) = 1,  if f(x, y) < T

where the parts with value 1 represent the target regions and the parts with value 0 represent the background.

In step S305, the binarized face image is filtered to obtain a face image that emphasizes the grayscale features of the eyes.

Specifically, a morphology-based opening is applied to the binarized face image using the structuring element [0, 1, 1, 1, 0]; this removes some vertically distributed connected regions and reduces cases in which the eyes and eyebrows are connected because of hair or other interference.

Fig. 4 shows the specific implementation flow of performing preliminary eye localization on the face image emphasizing the grayscale features of the eyes to obtain eye images of candidate eye regions, which includes the following steps:

In step S401, the face image emphasizing the grayscale features of the eyes is cropped and the hair regions at the image borders are removed.

The face image emphasizing the grayscale features of the eyes is cropped so that only its upper half is analyzed, for example by halving the image height while keeping the width unchanged.

In a face image, the presence of hair produces regions that touch the image border, and these must be removed.

First, the target regions in the cropped face image are labeled with 8-connected component labeling, so that the independent white regions can be distinguished from one another.

Then, the regions that contain border coordinates are found (since the image is 100 × 50 pixels, a region is regarded as a border-touching region if any of its horizontal coordinates equals 1 or 100, or any of its vertical coordinates equals 1 or 50).

Finally, each border-touching region is searched to see whether it contains a point inside the rectangle whose top-left corner is [26, 16] and whose bottom-right corner is [40, 85]. If it does, the parts of the image outside this rectangle are filled with 0 (black); otherwise the border-touching region itself is filled with 0 (black).
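A rough sketch of this border-hair removal (illustrative; the 100 x 50 image size and the inner rectangle follow the text, and the 1-based coordinates are converted to 0-based NumPy row/column indices):

    import cv2
    import numpy as np

    def remove_border_hair(binary, rect=((25, 15), (39, 84))):
        """binary: 50 x 100 (rows x cols) upper-half image as a 0/1 array.
        Border-touching regions are blacked out unless they reach into `rect`
        ((row0, col0), (row1, col1)); in that case everything outside rect is blacked out."""
        num, labels = cv2.connectedComponents(binary.astype(np.uint8), connectivity=8)
        h, w = binary.shape
        out = binary.copy()
        (r0, c0), (r1, c1) = rect
        for k in range(1, num):
            ys, xs = np.where(labels == k)
            touches_border = (ys.min() == 0 or xs.min() == 0 or
                              ys.max() == h - 1 or xs.max() == w - 1)
            if not touches_border:
                continue
            if np.any((ys >= r0) & (ys <= r1) & (xs >= c0) & (xs <= c1)):
                mask = np.zeros_like(out)
                mask[r0:r1 + 1, c0:c1 + 1] = out[r0:r1 + 1, c0:c1 + 1]
                out = mask
            else:
                out[labels == k] = 0
        return out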

In step S402, the face image with the border hair regions removed is screened, and two of the regions are selected as candidate eye regions.

The screening conditions on the regions are:

the height of the region is greater than its width;

the region width is less than 8 pixels;

the region area is less than 15 pixels;

that is, a region that does not satisfy these conditions is filtered out, and the pixels inside it are replaced with 0.

In this step, the selection of two regions as candidate eye regions is implemented as follows:

the eyebrows and eyes are separated, and the candidate eye regions are extracted;

after screening, the number of regions is usually 4, but exceptional cases may occur and must also be handled.

First, the number of regions is counted and the center coordinates of each region are computed;

then the number of regions is examined, and the processing depends on it:

a. If there are 4 regions, the 2 regions with the smallest vertical coordinates are selected as the candidate eye regions.

b. If there are 2 or 3 regions, a symmetric filling operation is applied to the face image. Specifically, image A is mirrored left-right to obtain image B, images A and B are combined with an exclusive-or operation to obtain image C, and the 2 regions with the smallest vertical coordinates are then selected as the candidate eye regions.

c. If there are 0, 1 or more than 4 regions, a cut-out is taken directly from the current image; the cut-out region is a 10 × 20 pixel rectangle.
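A minimal sketch of cases a and b above (illustrative; region centers are taken as connected-component centroids, which the text does not specify):

    import cv2
    import numpy as np

    def pick_eye_candidates(binary):
        """Return the labels of the two regions with the smallest vertical (row) centroids."""
        num, labels, stats, centroids = cv2.connectedComponentsWithStats(
            binary.astype(np.uint8), connectivity=8)
        if num - 1 in (2, 3):
            # symmetric filling: XOR the image with its left-right mirror, then relabel
            binary = np.logical_xor(binary, binary[:, ::-1]).astype(np.uint8)
            num, labels, stats, centroids = cv2.connectedComponentsWithStats(binary, connectivity=8)
        regions = sorted(range(1, num), key=lambda k: centroids[k][1])  # centroid is (x, y)
        return regions[:2], labels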

In step S403, bounding boxes are drawn around the two selected candidate eye regions and filled to form a binary mask image.

Specifically, the two candidate eye regions obtained by the screening are marked with boxes, i.e. the minimal rectangle of each of the two candidate eye regions is determined and marked;

the minimal rectangles are then filled: the two minimal rectangle areas are filled with pixel value 1 and the remaining parts with 0, which finally forms the binary mask image.

In step S404, the binary mask image is applied to the cropped grayscale face image in a matting operation to obtain a coarse left-eye image and a coarse right-eye image.

In step S405, the coarse left-eye image and the coarse right-eye image are fed into a support vector machine classifier for detection and verification, and the eye images of candidate eye regions that match human eye characteristics are obtained and output.

The two images above are sent to the support vector machine classifier for detection and verification; if they satisfy the human-eye criteria, the next step is executed, otherwise the system changes the parameters and tries again. (The support vector machine classifier is trained with eye samples and non-eye samples; the techniques and steps involved are mature and are not an innovation of the present invention, so they are not described in detail.)

In this embodiment, the support vector machine classifier can verify the eyes while avoiding a global search of the whole image, which reduces the amount of computation and increases the accuracy of the coarse localization.
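For illustration, an eye/non-eye verifier of this kind could be trained with scikit-learn; the patch size, kernel and raw-pixel features below are all assumptions, since the patent leaves the classifier design open:

    import numpy as np
    from sklearn.svm import SVC

    PATCH = (10, 20)  # assumed (rows, cols) size of a normalized eye patch

    def train_eye_verifier(eye_patches, non_eye_patches):
        """eye_patches / non_eye_patches: lists of grayscale arrays of shape PATCH."""
        X = np.array([p.reshape(-1) / 255.0 for p in eye_patches + non_eye_patches])
        y = np.array([1] * len(eye_patches) + [0] * len(non_eye_patches))
        return SVC(kernel="rbf", C=1.0, gamma="scale").fit(X, y)

    def is_eye(clf, patch):
        return bool(clf.predict(patch.reshape(1, -1) / 255.0)[0])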

In the embodiment of the present invention, locating the pupil center of a candidate eye region in the obtained eye images and thereby determining the position of the eye center in the candidate eye region comprises the following specific steps:

Because of interference from the eyelashes and other sources, the obtained candidate eye regions still need precise localization. The eye window obtained above consists mainly of the pupil and the white of the eye. Using the fact that the gray level of the pupil region differs strongly from that of its surroundings, the pupil position is first coarsely located and then the pupil center is precisely located, which gives the location of the eye center. The specific steps are as follows:

(1) The obtained binary image (the eye image of the candidate eye region) is used as a mask and multiplied with the corresponding grayscale image to obtain a cut-out IM1 that contains only the eye region, with background 0; the eye region is cropped out and saved as a new image.

(2) The background points in IM1 whose pixel value is 0 are found and replaced with gray level 255, giving an eye image with a white background.

(3) A threshold is obtained with a threshold-segmentation algorithm, and threshold segmentation is applied to obtain the candidate pupil region.

(4) An opening operation is applied to the candidate pupil region with the structuring element:

E = | 0 1 0 |
    | 1 1 1 |
    | 0 1 0 |

and the spurious noise points are filtered out.

(5) The pupil region is selected:

a. the white regions in the binary image are labeled;

b. the area of each region is computed;

c. the areas are then sorted;

d. the two regions with the largest areas are kept, and the pixel values of the other regions are replaced with 0.

(6) The holes in the pupil region are filled,

using the structuring element:

E = | 0 0 1 0 0 |
    | 0 1 1 1 0 |
    | 1 1 1 1 1 |
    | 0 1 1 1 0 |
    | 0 0 1 0 0 |

This operation fills the gaps caused by specular reflections in the pupil region.

(7) The pupil center is computed with the center-of-gravity (centroid) method, i.e. as the mean coordinate of the pupil pixels, using the image center point as the starting point for the boundary tracking.
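A sketch of steps (3) to (7) (illustrative; Otsu is used as a stand-in for the unspecified threshold-segmentation algorithm, and the hole filling of step (6) is approximated by a closing with the 5 x 5 element):

    import cv2
    import numpy as np

    CROSS3 = np.array([[0, 1, 0], [1, 1, 1], [0, 1, 0]], np.uint8)
    DIAMOND5 = np.array([[0, 0, 1, 0, 0],
                         [0, 1, 1, 1, 0],
                         [1, 1, 1, 1, 1],
                         [0, 1, 1, 1, 0],
                         [0, 0, 1, 0, 0]], np.uint8)

    def pupil_center(eye_gray):
        """Return the (x, y) centroid of the pupil in an eye patch whose background is 255."""
        # (3) threshold segmentation; the pupil is dark, so the threshold is inverted
        _, pupil = cv2.threshold(eye_gray, 0, 1, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
        # (4) opening with the 3 x 3 cross removes stray specks
        pupil = cv2.morphologyEx(pupil, cv2.MORPH_OPEN, CROSS3)
        # (5) keep only the two largest white regions
        num, labels, stats, _ = cv2.connectedComponentsWithStats(pupil, connectivity=8)
        keep = sorted(range(1, num), key=lambda k: stats[k, cv2.CC_STAT_AREA], reverse=True)[:2]
        pupil = np.isin(labels, keep).astype(np.uint8)
        # (6) closing with the 5 x 5 element fills gaps left by specular reflections
        pupil = cv2.morphologyEx(pupil, cv2.MORPH_CLOSE, DIAMOND5)
        # (7) centroid of the remaining pupil pixels
        ys, xs = np.nonzero(pupil)
        return float(xs.mean()), float(ys.mean())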

In the embodiment of the present invention, in step S105 described above, the absolute coordinates of the eye centers in the original image are computed; the eye centers are then marked with a "+" symbol and the eye regions are marked with rectangles, which achieves eye recognition in the complex scene.
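For illustration, the marking could be drawn with OpenCV as follows (the colors and the box size are placeholders):

    import cv2

    def mark_eye(image, center, box=(20, 12)):
        """Draw a '+' at the eye center and a rectangle around the eye region."""
        cx, cy = int(center[0]), int(center[1])
        cv2.drawMarker(image, (cx, cy), (0, 0, 255), markerType=cv2.MARKER_CROSS, markerSize=10)
        w, h = box
        cv2.rectangle(image, (cx - w // 2, cy - h // 2), (cx + w // 2, cy + h // 2), (0, 255, 0), 1)
        return image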

Fig. 5 shows the structural block diagram of the system for locating human eyes in complex scenes provided by the present invention; for ease of description, only the parts related to the present invention are shown.

The face image generation module 11 performs face image processing and detection on the acquired image and generates a face image containing the pure face region; the contrast enhancement processing module 12 performs contrast enhancement on the face image containing the pure face region to obtain a face image that emphasizes the grayscale features of the eyes; the candidate eye region acquisition module 13 performs preliminary eye localization on the face image emphasizing the grayscale features of the eyes and obtains eye images of candidate eye regions; the eye center calculation and marking module 14 calculates and marks the coordinates of the eye centers in the captured image according to the positions of the eye centers in the candidate eye regions.

The pupil center positioning module 15 locates the pupil center of each candidate eye region in the obtained eye images and determines the position of the eye center in the candidate eye region.

As shown in Fig. 6, the specific structure of the face image generation module 11 is as follows:

The color conversion module 21 converts the acquired RGB image into a color-space image; the skin-color model analysis module 22 performs skin-color-based model analysis on the color-space image; the morphological operation module 23 performs the morphological processing to obtain a grayscale-based face image; the region screening module 24 performs region screening on the morphologically processed face image to obtain an image of the grayscale-based face candidate region; the face candidate screenshot acquisition module 25 obtains a face candidate screenshot from the image of the face candidate region; the conversion and detection module 26 converts the face candidate screenshot into a grayscale image, detects the face region in the grayscale image and generates a face image containing the pure face region.

As shown in Fig. 7, the specific structure of the contrast enhancement processing module 12 is as follows:

The top-hat transform processing module 31 performs a top-hat transform on the face image containing the pure face region; the bottom-hat transform processing module 32 performs a bottom-hat transform on the face image; the contrast enhancement calculation module 33 performs the contrast-enhancement calculation on the face image; the binarization processing module 34 binarizes the contrast-enhanced face image; the filtering processing module 35 filters the binarized face image to obtain a face image that emphasizes the grayscale features of the eyes.

As shown in Fig. 8, the specific structure of the candidate eye region acquisition module 13 is as follows:

The cropping module 41 crops the face image emphasizing the grayscale features of the eyes and removes the hair regions at the image borders; the screening module 42 screens the face image with the border hair regions removed and selects two of the regions as candidate eye regions; the mask binary image forming module 43 draws bounding boxes around the two selected candidate eye regions and fills them to form a binary mask image; the matting processing module 44 applies the binary mask image to the cropped grayscale face image in a matting operation to obtain a coarse left-eye image and a coarse right-eye image; the classification, detection and verification module 45 feeds the coarse left-eye image and the coarse right-eye image into the support vector machine classifier for detection and verification, obtains the eye images of candidate eye regions that match human eye characteristics, and outputs them.

The specific implementations of the modules shown in Figs. 5 to 8 are as described in the corresponding method embodiments above and are not repeated here; they are not intended to limit the present invention.

In the embodiments of the present invention, face image processing and detection are performed on the acquired image to generate a face image containing the pure face region; contrast enhancement is performed on that face image to obtain a face image that emphasizes the grayscale features of the eyes; preliminary eye localization is performed on the enhanced face image to obtain eye images of candidate eye regions; and, from the positions of the eye centers in the candidate eye regions, the coordinates of the eye centers in the captured image are calculated and marked, achieving fast and accurate localization of faces in large scenes.

The above are only preferred embodiments of the present invention and are not intended to limit it; any modification, equivalent replacement or improvement made within the spirit and principles of the present invention shall fall within the scope of protection of the present invention.

Claims (10)

1. A method for locating human eyes in a complex scene, characterized in that the method comprises the following steps:
performing face image processing and detection on an acquired image to generate a face image containing a pure face region;
performing contrast enhancement on the face image containing the pure face region to obtain a face image in which the grayscale features of the eyes are highlighted;
performing preliminary eye localization on the face image in which the grayscale features of the eyes are highlighted, to obtain human eye images of candidate eye regions;
calculating, according to the obtained position of the eye center within a candidate eye region, the coordinates of the eye center in the acquired image, and marking them.

2. The method for locating human eyes in a complex scene according to claim 1, characterized in that the step of performing face image processing and detection on the acquired image to generate a face image containing a pure face region specifically comprises the following steps:
converting the acquired RGB image into a color-space image;
performing skin-color model analysis and morphological operations on the color-space image to obtain a grayscale-based face image;
performing region screening on the morphologically processed face image to obtain an image of grayscale-based face candidate regions;
obtaining a face candidate crop according to the obtained image of the face candidate regions;
converting the face candidate crop into a grayscale image, detecting the face region in the grayscale image, and generating a face image containing a pure face region.

3. The method for locating human eyes in a complex scene according to claim 1, characterized in that the step of performing contrast enhancement on the face image containing the pure face region to obtain a face image in which the grayscale features of the eyes are highlighted specifically comprises the following steps:
performing a top-hat transform on the face image containing the pure face region;
performing a bottom-hat transform on the face image;
performing a contrast enhancement calculation on the face image;
binarizing the contrast-enhanced face image;
filtering the binarized face image to obtain a face image in which the grayscale features of the eyes are highlighted.

4. The method for locating human eyes in a complex scene according to claim 1, characterized in that the step of performing preliminary eye localization on the face image in which the grayscale features of the eyes are highlighted to obtain human eye images of candidate eye regions specifically comprises the following steps:
cropping the face image in which the grayscale features of the eyes are highlighted and removing the peripheral hair region;
screening the face image with the peripheral hair region removed and selecting two of its regions as candidate eye regions;
marking the two selected candidate eye regions with bounding boxes and filling them to form a binary mask image;
matting the binary mask image against the cropped grayscale face image to obtain a coarse left-eye image and a coarse right-eye image;
feeding the coarse left-eye image and the coarse right-eye image into a support vector machine classifier for detection and verification, and obtaining and outputting human eye images of the candidate eye regions that conform to human eye characteristics.

5. The method for locating human eyes in a complex scene according to claim 1, characterized in that, after the step of performing preliminary eye localization on the face image in which the grayscale features of the eyes are highlighted to obtain human eye images of candidate eye regions, and before the step of calculating the coordinates of the eye center in the acquired image according to the obtained position of the eye center within the candidate eye region, the method further comprises the following step:
locating, in the obtained human eye image of the candidate eye region, the pupil center of the candidate eye region, and determining the position of the eye center within the candidate eye region.

6. A system for locating human eyes in a complex scene, characterized in that the system comprises:
a face image generation module, configured to perform face image processing and detection on an acquired image and generate a face image containing a pure face region;
a contrast enhancement processing module, configured to perform contrast enhancement on the face image containing the pure face region and obtain a face image in which the grayscale features of the eyes are highlighted;
a candidate eye region acquisition module, configured to perform preliminary eye localization on the face image in which the grayscale features of the eyes are highlighted and obtain human eye images of candidate eye regions;
an eye center calculation and marking module, configured to calculate, according to the obtained position of the eye center within a candidate eye region, the coordinates of the eye center in the acquired image, and to mark them.

7. The system for locating human eyes in a complex scene according to claim 6, characterized in that the face image generation module specifically comprises:
a color conversion module, configured to convert the acquired RGB image into a color-space image;
a skin color model analysis module, configured to perform skin-color model analysis on the color-space image;
a morphological operation module, configured to perform morphological operations and obtain a grayscale-based face image;
a region screening module, configured to perform region screening on the morphologically processed face image and obtain an image of grayscale-based face candidate regions;
a face candidate crop acquisition module, configured to obtain a face candidate crop according to the obtained image of the face candidate regions;
a conversion and detection module, configured to convert the face candidate crop into a grayscale image, detect the face region in the grayscale image, and generate a face image containing a pure face region.

8. The system for locating human eyes in a complex scene according to claim 6, characterized in that the contrast enhancement processing module specifically comprises:
a top-hat transform processing module, configured to perform a top-hat transform on the face image containing the pure face region;
a bottom-hat transform processing module, configured to perform a bottom-hat transform on the face image;
a contrast enhancement calculation module, configured to perform a contrast enhancement calculation on the face image;
a binarization processing module, configured to binarize the contrast-enhanced face image;
a filtering processing module, configured to filter the binarized face image and obtain a face image in which the grayscale features of the eyes are highlighted.

9. The system for locating human eyes in a complex scene according to claim 6, characterized in that the candidate eye region acquisition module specifically comprises:
a cropping processing module, configured to crop the face image in which the grayscale features of the eyes are highlighted and remove the peripheral hair region;
a screening module, configured to screen the face image with the peripheral hair region removed and select two of its regions as candidate eye regions;
a mask binary image forming module, configured to mark the two selected candidate eye regions with bounding boxes and fill them to form a binary mask image;
a matting processing module, configured to mat the binary mask image against the cropped grayscale face image and obtain a coarse left-eye image and a coarse right-eye image;
a classification detection and verification module, configured to feed the coarse left-eye image and the coarse right-eye image into a support vector machine classifier for detection and verification, obtain human eye images of the candidate eye regions that conform to human eye characteristics, and output them.

10. The system for locating human eyes in a complex scene according to claim 6, characterized in that the system further comprises:
a pupil center locating module, configured to locate, in the obtained human eye image of the candidate eye region, the pupil center of the candidate eye region, and to determine the position of the eye center within the candidate eye region.
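For readers who want to prototype the face-candidate step described in claims 1, 2 and 7, the following is a minimal Python/OpenCV sketch. The YCrCb color space, the fixed Cr/Cb bounds, the kernel sizes and the area/aspect-ratio screening rules are the editor's illustrative assumptions; the claims only state the general sequence (color-space conversion, skin-color model analysis, morphological operations, region screening).

```python
import cv2
import numpy as np

def find_face_candidates(bgr_image):
    # Convert the acquired image into the YCrCb color space, a common
    # choice for skin-color modelling (assumed here; not named in the claims).
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)

    # Skin-color model: fixed Cr/Cb bounds (textbook values, assumed).
    lower = np.array([0, 133, 77], dtype=np.uint8)
    upper = np.array([255, 173, 127], dtype=np.uint8)
    skin_mask = cv2.inRange(ycrcb, lower, upper)

    # Morphological opening and closing to clean up the skin mask.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    skin_mask = cv2.morphologyEx(skin_mask, cv2.MORPH_OPEN, kernel)
    skin_mask = cv2.morphologyEx(skin_mask, cv2.MORPH_CLOSE, kernel)

    # Region screening: keep connected components whose size and aspect
    # ratio are plausible for a face, and return their grayscale crops.
    num, _, stats, _ = cv2.connectedComponentsWithStats(skin_mask)
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    crops = []
    for i in range(1, num):
        x, y, w, h, area = stats[i]
        if area > 0.01 * skin_mask.size and 0.5 < w / h < 2.0:
            crops.append(((x, y), gray[y:y + h, x:x + w]))
    return crops  # list of (top-left corner, grayscale face-candidate crop)
```

Each returned crop would still need to be confirmed by a face detector, as in the last step of claim 2, before being treated as a pure face region.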
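The contrast-enhancement step of claims 3 and 8 (top-hat transform, bottom-hat transform, contrast-enhancement calculation, binarization, filtering) can be sketched as below. The structuring-element size, the use of Otsu thresholding and the final opening are assumptions made for illustration rather than values stated in the patent.

```python
import cv2

def highlight_eye_regions(face_gray, se_size=15):
    se = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (se_size, se_size))

    # Top-hat keeps small bright structures, bottom-hat keeps small dark
    # structures (eyes, brows) relative to the structuring element.
    tophat = cv2.morphologyEx(face_gray, cv2.MORPH_TOPHAT, se)
    bottomhat = cv2.morphologyEx(face_gray, cv2.MORPH_BLACKHAT, se)

    # Contrast enhancement: add the bright details and subtract the dark
    # ones, so dark eye regions stand out more against the skin.
    enhanced = cv2.subtract(cv2.add(face_gray, tophat), bottomhat)

    # Binarize the inverted image with Otsu so dark regions become blobs,
    # then filter out small speckles with an opening.
    _, eye_mask = cv2.threshold(255 - enhanced, 0, 255,
                                cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    small = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    eye_mask = cv2.morphologyEx(eye_mask, cv2.MORPH_OPEN, small)
    return enhanced, eye_mask
```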
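Claims 4 and 9 screen two candidate eye regions and verify them with a support vector machine classifier. The sketch below assumes HOG features and an RBF-kernel SVC from scikit-learn, and uses a simple "two largest blobs in the upper half of the face" rule as a stand-in for the patent's screening and masking steps; none of these choices are disclosed in the claims.

```python
import cv2
import numpy as np
from sklearn.svm import SVC

# Hypothetical feature extractor: window, block and cell sizes are assumed.
HOG = cv2.HOGDescriptor((32, 16), (8, 8), (4, 4), (4, 4), 9)

def eye_descriptor(patch):
    # Resize every candidate to a fixed window before computing HOG.
    return HOG.compute(cv2.resize(patch, (32, 16))).ravel()

def train_eye_classifier(eye_patches, non_eye_patches):
    X = np.array([eye_descriptor(p) for p in eye_patches + non_eye_patches])
    y = np.array([1] * len(eye_patches) + [0] * len(non_eye_patches))
    return SVC(kernel="rbf", gamma="scale").fit(X, y)

def pick_and_verify_eyes(face_gray, eye_mask, clf):
    # Screening: keep the two largest blobs whose centers lie in the upper
    # half of the face, cut out coarse eye crops, and verify with the SVM.
    num, _, stats, _ = cv2.connectedComponentsWithStats(eye_mask)
    blobs = [stats[i] for i in range(1, num)
             if stats[i][1] + stats[i][3] / 2 < face_gray.shape[0] / 2]
    blobs = sorted(blobs, key=lambda s: s[4], reverse=True)[:2]
    verified = []
    for x, y, w, h, _ in blobs:
        crop = face_gray[y:y + h, x:x + w]
        if clf.predict([eye_descriptor(crop)])[0] == 1:
            verified.append(((x, y), crop))
    return verified  # [(top-left corner, eye crop), ...] for accepted regions
```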
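Claims 5 and 10 locate the pupil center inside each verified eye crop, and the last step of claim 1 maps that center back into the coordinates of the acquired image. A crude darkest-blob centroid and a simple chain of crop offsets are used below as stand-ins; the patent does not state which pupil-localization operator is used.

```python
import cv2

def pupil_center(eye_gray):
    # The pupil is typically the darkest compact region of the eye crop:
    # blur, threshold the dark pixels with Otsu, and take their centroid.
    blur = cv2.GaussianBlur(eye_gray, (5, 5), 0)
    _, dark = cv2.threshold(blur, 0, 255,
                            cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    m = cv2.moments(dark, binaryImage=True)
    if m["m00"] == 0:
        return None
    return m["m10"] / m["m00"], m["m01"] / m["m00"]

def center_in_original(center_in_crop, crop_origin_in_face, face_origin_in_image):
    # Chain the offsets of the nested crops to express the eye center in
    # the coordinate frame of the originally acquired image, where it can
    # then be marked.
    cx, cy = center_in_crop
    ex, ey = crop_origin_in_face
    fx, fy = face_origin_in_image
    return fx + ex + cx, fy + ey + cy
```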
CN201510733877.6A 2015-10-31 2015-10-31 Human-eye positioning method and system in a kind of complex scene Active CN105205480B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510733877.6A CN105205480B (en) 2015-10-31 2015-10-31 Human-eye positioning method and system in a kind of complex scene

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510733877.6A CN105205480B (en) 2015-10-31 2015-10-31 Human-eye positioning method and system in a kind of complex scene

Publications (2)

Publication Number Publication Date
CN105205480A true CN105205480A (en) 2015-12-30
CN105205480B CN105205480B (en) 2018-12-25

Family

ID=54953152

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510733877.6A Active CN105205480B (en) 2015-10-31 2015-10-31 Human-eye positioning method and system in a kind of complex scene

Country Status (1)

Country Link
CN (1) CN105205480B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080118113A1 (en) * 2006-11-21 2008-05-22 Jung Sung Uk Method and apparatus for detecting eyes in face region
CN101930543A (en) * 2010-08-27 2010-12-29 南京大学 A method for adjusting eye images in selfie videos
CN102789575A (en) * 2012-07-10 2012-11-21 广东工业大学 Human eye center positioning method
CN103440476A (en) * 2013-08-26 2013-12-11 大连理工大学 A pupil location method in face video

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
张起贵 等 (Zhang Qigui et al.): "人眼快速检测技术" [Rapid Human Eye Detection Technique], 《电子设计工程》 [Electronic Design Engineering] *

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106203375A (en) * 2016-07-20 2016-12-07 济南大学 A kind of based on face in facial image with the pupil positioning method of human eye detection
CN106778913A (en) * 2017-01-13 2017-05-31 山东大学 A kind of fuzzy license plate detection method based on pixel cascade nature
CN106778913B (en) * 2017-01-13 2020-11-10 山东大学 A fuzzy license plate detection method based on pixel cascade feature
CN106981066B (en) * 2017-03-06 2019-07-12 武汉嫦娥医学抗衰机器人股份有限公司 A kind of interior face image dividing method based on the colour of skin
CN106981066A (en) * 2017-03-06 2017-07-25 武汉嫦娥医学抗衰机器人股份有限公司 A kind of interior face image dividing method based on the colour of skin
CN106960199A (en) * 2017-03-30 2017-07-18 博奥生物集团有限公司 A kind of RGB eye is as the complete extraction method in figure white of the eye region
CN108009495A (en) * 2017-11-30 2018-05-08 西安科锐盛创新科技有限公司 Fatigue driving method for early warning
CN108182422A (en) * 2018-01-26 2018-06-19 四川政安通科技有限公司 Multi-parameter identity identification method
CN108288040A (en) * 2018-01-26 2018-07-17 四川政安通科技有限公司 Multi-parameter face identification system based on face contour
CN108304792A (en) * 2018-01-26 2018-07-20 四川政安通科技有限公司 Human body biological characteristics acquisition platform
CN108734102A (en) * 2018-04-18 2018-11-02 佛山市顺德区中山大学研究院 A kind of right and left eyes recognizer based on deep learning
CN108629333A (en) * 2018-05-25 2018-10-09 厦门市美亚柏科信息股份有限公司 A kind of face image processing process of low-light (level), device, equipment and readable medium
CN109034051A (en) * 2018-07-24 2018-12-18 哈尔滨理工大学 Human-eye positioning method
CN109558812A (en) * 2018-11-13 2019-04-02 广州铁路职业技术学院(广州铁路机械学校) The extracting method and device of facial image, experience system and storage medium
CN109460044A (en) * 2019-01-10 2019-03-12 轻客小觅智能科技(北京)有限公司 A kind of robot method for homing, device and robot based on two dimensional code
CN111070207A (en) * 2019-12-20 2020-04-28 山东交通学院 Intelligent cleaning robot for ship
TWI748596B (en) * 2020-08-11 2021-12-01 國立中正大學 Eye center positioning method and system thereof
CN113327244A (en) * 2021-06-25 2021-08-31 南京爱奇艺智能科技有限公司 Handle controller LED lamp positioning method and system based on computer vision
CN113327244B (en) * 2021-06-25 2024-09-13 南京爱奇艺智能科技有限公司 Computer vision-based positioning method and system for LED lamp of handle controller

Also Published As

Publication number Publication date
CN105205480B (en) 2018-12-25

Similar Documents

Publication Publication Date Title
CN105205480A (en) Complex scene human eye locating method and system
CN112381075B (en) Method and system for carrying out face recognition under specific scene of machine room
CN101142584B (en) Method for facial features detection
CN102360421B (en) Face identification method and system based on video streaming
Chaudhuri et al. Automatic building detection from high-resolution satellite images based on morphology and internal gray variance
CN110348319A (en) A kind of face method for anti-counterfeit merged based on face depth information and edge image
CN110298297B (en) Flame identification method and device
CN102194108B (en) Smile face expression recognition method based on clustering linear discriminant analysis of feature selection
CN104166841A (en) Rapid detection identification method for specified pedestrian or vehicle in video monitoring network
KR20170006355A (en) Method of motion vector and feature vector based fake face detection and apparatus for the same
CN110929593A (en) A Real-time Saliency Pedestrian Detection Method Based on Detail Discrimination
Achyutha et al. Real time COVID-19 facemask detection using deep learning
CN103440035A (en) Gesture recognition system in three-dimensional space and recognition method thereof
Kheirkhah et al. A hybrid face detection approach in color images with complex background
CN113486712A (en) Multi-face recognition method, system and medium based on deep learning
Fernando et al. Low cost approach for real time sign language recognition
Jindal et al. Sign language detection using convolutional neural network (CNN)
KR101408344B1 (en) Face detection device
Das et al. Human face detection in color images using HSV color histogram and WLD
Işikdoğan et al. Automatic recognition of Turkish fingerspelling
Curran et al. The use of neural networks in real-time face detection
KR20210144064A (en) Apparatus and method for detecting fake faces
Parente et al. Assessing facial image accordance to iso/icao requirements
CN105760881A (en) Facial modeling detection method based on Haar classifier method
Paul et al. Automatic adaptive facial feature extraction using CDF analysis

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant