
CN100345153C - Face image recognition method based on face geometric size normalization - Google Patents

Face image recognition method based on face geometric size normalization

Info

Publication number
CN100345153C
CN100345153C · CNB200510067962XA · CN200510067962A
Authority
CN
China
Prior art keywords
face image
face
point
human face
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CNB200510067962XA
Other languages
Chinese (zh)
Other versions
CN1687959A
Inventor
苏光大
孟凯
杜成
王俊艳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tsinghua University
Original Assignee
Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University filed Critical Tsinghua University
Priority to CNB200510067962XA priority Critical patent/CN100345153C/en
Publication of CN1687959A publication Critical patent/CN1687959A/en
Application granted granted Critical
Publication of CN100345153C publication Critical patent/CN100345153C/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to a face image recognition method based on face geometric size normalization, and belongs to the technical field of image processing. The method comprises: determining the coordinates of the left and right eyeballs on the input face image and rotating the image to a horizontal position according to those coordinates to obtain face image 2; determining the coordinates of the left and right eyeballs and the mandible point of face image 2; specifying the normalized geometric size values of the face image and enlarging or reducing face image 2 to obtain face image 3, which satisfies the standard distance; and cropping face image 3 according to the coordinates of its left and right eyeballs and mandible point to obtain a standard normalized face image. Geometrically normalized face images are formed for the training set, the known faces, and the faces to be recognized, and face features are extracted; a face to be recognized is then identified against the known-face database by computing similarities and sorting by similarity. The invention improves the visual quality of face images, and the recognition rate is considerably improved.

Description

Face Image Recognition Method Based on Face Geometric Size Normalization

Technical Field

The invention belongs to the technical field of image processing, and in particular relates to a method for improving the face recognition rate.

Background

Face recognition involves many disciplines, including image processing, computer vision, and pattern recognition, and is also closely related to findings in physiology and biology on the structure of the human brain. The recognized difficulties of face recognition are:

(1) Face changes caused by age;

(2) Face diversity caused by pose;

(3) Plastic deformation of the face caused by expression;

(4) Multiplicity of face patterns caused by factors such as glasses and makeup;

(5) Differences among face images caused by illumination.

General face recognition algorithms do not normalize face images to a standard geometric size, yet such normalization affects not only the recognition rate but also the visual quality of the faces stored in the face database. Existing face geometric size normalization methods are mainly based on the distance between the two eyes; however, this distance is unstable, and the instability is especially pronounced in horizontally rotated face images.

Summary of the Invention

In order to improve the recognition rate of face recognition, the present invention proposes a face image recognition method based on face geometric size normalization. The method normalizes the size of the face image using, as its reference, the vertical distance from any point on the submandibular line to the line connecting the two eyes, and extracts face features on this basis; this can, to a certain extent, improve the recognition rate and the visual quality of the face images in the face database.

The present invention proposes a face image recognition method based on face geometric size normalization, comprising two parts: face geometric size normalization and face recognition. The face geometric size normalization comprises the following steps:

1) On the input face image, determine the coordinate position (x1, y1) of a point A on the left eyeball and the coordinate position (x2, y2) of a point B on the right eyeball, draw a straight line L1 through points A and B, and determine the coordinates (x0, y0) of the mandible point C0;

2) Calculate the angle α between the straight line L1 and the horizontal line;

The angle α between L1 and the horizontal line is obtained from formula (1), where (x1, y1) and (x2, y2) are the coordinates of the left and right eyeballs, respectively:

    α = arctan((y2 − y1) / (x2 − x1))        (1)

3) Rotate the face image by the angle −α to obtain face image 2;

The rotation expression is:

    x′ = x·cosα + y·sinα
    y′ = −x·sinα + y·cosα

where x, y are the coordinates in the input face image and x′, y′ are the coordinates in face image 2;

4) On face image 2, determine the coordinate position (x3, y3) of a point C on the left eyeball and the coordinate position (x4, y4) of a point D on the right eyeball, draw a straight line L2 through points C and D, and determine the coordinate position (x5, y5) of the mandible point E of face image 2;

5) Specify the geometric size values of the normalized face image: the width is W and the height is H. Specify the standard value H0 for the vertical distance from any point on the submandibular line to the line connecting the two eyes, the standard value H1 for its vertical distance to the lower border of the image, and the standard value H2 for the vertical distance from the line connecting the two eyes to the upper border of the image;

6) Calculate the vertical distance hy from point E to the straight line L2, and compute the image scaling factor K = hy/H0;

The vertical distance hy from point E to L2 is

    hy = y5 − (y3 + y4)/2

7) Enlarge or reduce face image 2 according to the scaling factor K to obtain face image 3, which satisfies the standard distance H0;

8) On face image 3, determine the coordinate position (x6, y6) of a point M on the left eyeball, the coordinate position (x7, y7) of a point N on the right eyeball, and the ordinate y8 of the mandible point P;

9) Crop face image 3 to obtain a standard normalized face image: remove the parts of face image 3 whose x coordinate is less than (x6 + x7)/2 − W/2 or greater than (x6 + x7)/2 + W/2, and the parts whose y coordinate is less than (y7 − H2) or greater than (y8 + H1). If the width of the cropped image is less than W or its height is less than H, use interpolation to pad the width to W or the height to H;

The face recognition comprises the following steps:

10) For the face image of each person in the training set, apply steps 1) to 9) to form a geometrically normalized face image, and extract face features from the normalized face image;

11) For the face image of each known person, apply steps 1) to 9) to form a geometrically normalized face image, extract face features from the normalized face image, and build a database containing the features of the known faces, compressed images of the known faces, and the personal identity records of the known persons;

12) For the face image of each person to be recognized, apply steps 1) to 9) to form a geometrically normalized face image, and extract face features from the normalized face image;

13) Recognize the face to be identified against the known-face database by computing similarities and sorting by similarity.

Steps 10) to 13) of the face recognition part can be implemented with existing mature techniques.

Features and Effects of the Invention

The feature of the present invention is that stable, high-precision key feature points of the moving face are selected as the reference for normalizing the geometric size of the face, achieving the best face image normalization effect and improving the visual quality of the faces in the face database. The recognition rate of the face image recognition method based on face geometric size normalization is considerably higher than that of ordinary face image recognition algorithms.

Brief Description of the Drawings

Figure 1 is the input original face image.

Figure 2 is a schematic diagram of dividing the eyeball candidate region into 9 sub-image regions according to the present invention.

Figure 3 is a schematic diagram of the positions of the two eyes and the mandible point in the original face image, and the angle α between the straight line L1 and the horizontal line.

Figure 4 shows face image 2, obtained by rotating the original face image, with the positions of the two eyes and the mandible point marked.

Figure 5 is a schematic diagram of the definition and standard geometric sizes of the face image in an embodiment of the present invention.

Figure 6 shows face image 3, obtained by scaling face image 2, with the positions of the two eyes and the mandible point marked.

Figure 7 is the normalized face image finally obtained by the present invention.

Figure 8 is a schematic diagram of extracting five components from the normalized face image: bare face, eyebrows + eyes, eyes, nose tip, and mouth.

Detailed Description of the Embodiments

An embodiment of the face image recognition method based on face geometric size normalization proposed by the present invention is described in detail with reference to the accompanying drawings. The method comprises the following steps:

1) On the input face image (shown in Figure 1), determine the coordinate position (x1, y1) of a point A on the left eyeball and the coordinate position (x2, y2) of a point B on the right eyeball, draw a straight line L1 through points A and B, and determine the coordinates (x0, y0) of the mandible point C0.

In this step, the coordinate positions of points A and B on the left and right eyeballs can be obtained in two ways: one is to mark them directly on the face image with the mouse; the other is to determine them automatically with an algorithm combining integral projection and eigenspace analysis. This embodiment adopts the second method, which comprises the following steps:

(1) Detecting the face region:

Face region detection determines the upper, lower, left, and right edge positions of the face region in the image. In this embodiment, the Sobel operator is applied to the input image to detect edges, and the position of the face region is determined by analyzing the integral projections of the edge image in the horizontal and vertical directions. The integral projections of the edge map in the horizontal and vertical directions are calculated as in formulas (1) and (2):

    H(y) = Σ_{x=0}^{M} E(x, y)        (1)

    V(x) = Σ_{y=0}^{N} E(x, y)        (2)

If the point (x, y) is a detected edge point, E(x, y) = 1; otherwise E(x, y) = 0.

The left and right edges xl, xr of the face region are determined by:

    xl = arg min_x { V(x) > V(x0)/3 }        (3)

    xr = arg max_x { V(x) > V(x0)/3 }        (4)

where x0 is the x coordinate of the maximum of the vertical integral projection of the edge map; that is, the smallest and largest x values whose vertical integral projection exceeds one third (an empirical value) of the maximum are taken as the left and right edges of the face region. The upper and lower edges yt, yb of the face region are determined by formulas (5) and (6):

    yt = (arg min_y { H(y) > (xr − xl)/10 }) + (xr − xl)/3        (5)

    yb = yt + (xr − xl) × 0.8        (6)

One third of the face region width (an empirical value) is added in formula (5) to minimize the influence of hair on the positioning result. Although this step only roughly determines the face region in the image, it guarantees that both eyes are contained in it.
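The face-region detection step above can be sketched as follows. This is a minimal illustration, not the patent's implementation: it assumes a binary NumPy edge map `E` (e.g. produced beforehand by a Sobel detector), and the function name is invented for this sketch.

```python
import numpy as np

def face_region(E):
    """Locate the face region from a binary edge map E (formulas (1)-(6)).

    E is a 2-D 0/1 array; returns (xl, xr, yt, yb)."""
    H = E.sum(axis=1)   # horizontal integral projection H(y), formula (1)
    V = E.sum(axis=0)   # vertical integral projection V(x), formula (2)

    # Left/right edges: smallest and largest x where V(x) exceeds
    # one third of its maximum V(x0), formulas (3) and (4).
    cols = np.where(V > V.max() / 3.0)[0]
    xl, xr = int(cols.min()), int(cols.max())

    # Top edge: smallest y with H(y) > (xr - xl)/10, shifted down by
    # (xr - xl)/3 to reduce the influence of hair, formula (5).
    rows = np.where(H > (xr - xl) / 10.0)[0]
    yt = int(rows.min() + (xr - xl) / 3.0)
    yb = int(yt + (xr - xl) * 0.8)   # bottom edge, formula (6)
    return xl, xr, yt, yb
```

The 1/3 and 1/10 thresholds are the empirical values stated in the text; on real images they would be tuned to the edge detector used.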

(2) Determining candidate eyeball positions:

Candidate eyeball positions are determined by analyzing the gray-level and gradient distributions of the eye region image. The point of the gradient projection histogram at which the difference between the sums of the integral projections over the 15 rows below and above it is largest is selected as the starting candidate ordinate yO of the eyeball, as in formula (7):

    yO = arg max_y ( Σ_{i=1}^{15} H(y + i) − Σ_{i=1}^{15} H(y − i) )        (7)

After yO is determined, this embodiment selects all points within the region 30 pixels (an empirical value) above yO as starting candidate points for the eyeball position. For each candidate point, the gray-level distribution of the 30×30 image region centered on it is examined. This embodiment divides that region into 9 sub-image regions, as shown in Figure 2, and calculates the gray-level integral of each sub-image region as in formula (8).

    si = Σ_{(x,y)∈region i} I(x, y),  i = 1, 2, …, 9        (8)

Here I(x, y) is the gray value at the point (x, y). Since the gray values of the eyeball are generally smaller than those of its surrounding area, a candidate point is removed if the gray-level integral si (i = 1, 2, 3, 4, 6, 7, 8, 9) of any of its sub-image regions is smaller than the gray-level integral s5 of the central sub-image region (region 5). The remaining points serve as the final candidate eyeball positions.

Because of eyeball reflections, many face images contain small bright spots inside the eyeball, which can cause good candidate points to be wrongly removed. Therefore, before calculating the gray-level integral projections, small bright spots are removed from the face image: the gray value of each point is replaced by the minimum gray value of the 9 points in the 3×3 neighborhood centered on it.
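The bright-spot removal described above is a 3×3 minimum filter; a minimal sketch (function name is illustrative, not from the patent):

```python
import numpy as np

def remove_bright_spots(img):
    """Replace each pixel with the minimum gray value of the 9 points in its
    3x3 neighborhood, suppressing small specular highlights inside the eyeball."""
    h, w = img.shape
    # Pad with edge values so border pixels also have a full 3x3 window.
    p = np.pad(img, 1, mode='edge')
    out = np.empty_like(img)
    for y in range(h):
        for x in range(w):
            out[y, x] = p[y:y + 3, x:x + 3].min()
    return out
```

In practice a vectorized filter (e.g. `scipy.ndimage.minimum_filter(img, size=3)`) computes the same result far faster; the explicit loop above just mirrors the per-pixel description in the text.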

(3) Determining the eyeball positions:

For the detected candidate points, the present invention uses eigenspace analysis (PCA) to determine the final eyeball positions. Eye region images from 9 additional groups of face images with different poses are used as the training set (the eye regions can be located manually), and 18 eigenspaces are trained for the left and right eyes respectively. For each candidate eyeball point Ci, its corresponding sub-image is projected into these 18 eigenspaces to obtain the projection vectors Pi, i = 1, 2, …, 18, and the matching error of each projection vector is defined as:

    E(Ci) = Σ_{k=1}^{D} pk² / λk        (9)

Here pk is the value of the k-th dimension of the projection vector, λk is the eigenvalue corresponding to the k-th eigenvector, and D is the number of retained eigenvectors. The matching error of the image region corresponding to each candidate point is defined as the minimum of its matching errors over the 18 eigenspaces. Among all candidate points, the one with the smallest matching error is selected as the position of the first eyeball. To avoid locating both eyeballs on the same eye, the point with the smallest matching error among the candidates whose distance from the first eyeball exceeds a given value is selected as the other eyeball position. Once the two positions are determined, the one with the smaller x coordinate is taken as the left eyeball A(x1, y1) and the one with the larger x coordinate as the right eyeball B(x2, y2).
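The matching error of formula (9) can be sketched as below. This assumes pre-trained eigenspaces (mean patch, retained eigenvectors, eigenvalues) are already available; the function names and the data layout are assumptions of this sketch, not the patent's code.

```python
import numpy as np

def matching_error(patch, mean, eigvecs, eigvals):
    """Matching error of formula (9) for one eigenspace.

    patch:   flattened candidate sub-image, shape (n,)
    mean:    mean training patch, shape (n,)
    eigvecs: retained eigenvectors as columns, shape (n, D)
    eigvals: corresponding eigenvalues lambda_k, shape (D,)
    """
    p = eigvecs.T @ (patch - mean)        # projection vector, shape (D,)
    return float(np.sum(p ** 2 / eigvals))  # sum of p_k^2 / lambda_k

def best_candidate(patches, spaces):
    """A candidate's error is the minimum of its errors over all eigenspaces
    (the patent uses 18); the candidate with the smallest error wins."""
    errors = [min(matching_error(p, *s) for s in spaces) for p in patches]
    return int(np.argmin(errors))
```

For the second eyeball, the same search would simply be restricted to candidates farther than a given distance from the first, as the text specifies.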

(4) Determining the coordinates (x0, y0) of the mandible point C0: this can be done in two ways. The first is to mark the point directly on the face image with the mouse; the second combines integral projection with the proportions of the facial organs (components).

This embodiment adopts the method combining integral projection and facial organ proportions: the mandible point is determined from the horizontal integral projection of the face region, the eyeball positions, and the proportional relationships among the facial organs. The method comprises two steps: first, detecting the candidate valley points (the valley points of the horizontal integral projection curve) that may correspond to organs; second, determining which candidate valley points correspond to which organs.

The horizontal integral projection curve is first mean-filtered, and its second derivative is then computed. The extreme points of the second derivative correspond to the peak and valley points of the projection curve; its maxima correspond to the valleys. The peaks of the second derivative are detected as candidate ordinates of the facial organs. A search over the candidate positions then finds the combination that best matches the proportions of the facial organs, which is taken as the final result. This yields the ordinate of each organ, including the ordinate y0 of the mandible point C0. The abscissa x0 of C0 is determined by the midpoint of A and B, i.e. x0 = (x1 + x2)/2. As shown in Figure 3, the points A, B, and C0 are the determined left eye, right eye, and mandible points; in the figure, the straight line L1 connects A and B, L0 is the horizontal line, and α is the angle between L1 and L0.
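The valley-detection part of the step above can be sketched as follows. It is an illustration only: the window size and the simple local-maximum test are assumptions, and the subsequent matching against organ proportions is omitted.

```python
import numpy as np

def valley_candidates(proj, win=5):
    """Candidate organ ordinates: mean-filter the horizontal integral
    projection, then take local maxima of its second derivative
    (which correspond to valley points of the projection curve)."""
    kernel = np.ones(win) / win
    smooth = np.convolve(proj, kernel, mode='same')   # mean filtering
    d2 = np.diff(smooth, 2)                           # discrete second derivative
    # d2[i] is centered on original index i + 1; keep strict positive local maxima.
    return [i + 1 for i in range(1, len(d2) - 1)
            if d2[i] > d2[i - 1] and d2[i] > d2[i + 1] and d2[i] > 0]
```

A loop over combinations of these candidates, scored against known face-organ proportions, would then pick the mandible ordinate y0 as described in the text.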

2) Calculate the angle α between the straight line L1 and the horizontal line L0.

The angle α between L1 and L0 is obtained from formula (10), where A(x1, y1) and B(x2, y2) are the coordinates of the left and right eyeballs, respectively:

    α = arctan((y2 − y1) / (x2 − x1))        (10)

3) Rotate the face image by the angle −α to obtain face image 2, shown in Figure 4. In Figure 4, C and D are the left and right eyeball points, the straight line L2 connects C and D, E is the mandible point, L3 is the horizontal submandibular line, and hy is the vertical distance from point E to the line L2.

Let the width and height of the input face image be SrcWidth and SrcHeight, and let the width and height of the rotated face image 2 be w and h, respectively.

Since the actual image rotation involves a translation of the coordinate origin, the coordinates must be offset-corrected. Let the horizontal and vertical offsets be dx and dy. Depending on the sign of α, the following relations hold:

When α > 0:

    w = INT(SrcWidth × cosα + SrcHeight × sinα);
    h = INT(SrcWidth × sinα + SrcHeight × cosα);
    dx = 0;
    dy = SrcWidth × sinα;        (10.1)

When α < 0:

    w = INT(SrcWidth × cosα − SrcHeight × sinα);
    h = INT(−SrcWidth × sinα + SrcHeight × cosα);
    dx = −SrcHeight × sinα;
    dy = 0;        (10.2)

For each point (i, j) in face image 2, let its corresponding point in the original image be (io, jo). When 0 ≤ io < SrcWidth and 0 ≤ jo < SrcHeight, a point (io, jo) in the original image corresponds to (i, j); taking the offsets dx and dy into account:

    io = INT((i − dx) × cosα − (j − dy) × sinα)
    jo = INT((i − dx) × sinα + (j − dy) × cosα)        (11)

Then Image2[j][i] = OriginalImage[jo][io]; otherwise, (io, jo) falls in the blank part of image 2, which can be assigned Image2[j][i] = 0.
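The rotation with offset correction, formulas (10.1)/(10.2) and (11), can be sketched as below for a gray-level image. The per-pixel loop follows the text literally; the function name is illustrative, and a production version would use vectorized or library routines.

```python
import numpy as np

def rotate_face(src, alpha):
    """Rotate a 2-D gray-level image by -alpha via the inverse mapping of
    formula (11), with offsets dx, dy from formulas (10.1)/(10.2)."""
    sh, sw = src.shape                 # SrcHeight, SrcWidth
    c, s = np.cos(alpha), np.sin(alpha)
    if alpha > 0:
        w = int(sw * c + sh * s)
        h = int(sw * s + sh * c)
        dx, dy = 0.0, sw * s           # formula (10.1)
    else:
        w = int(sw * c - sh * s)
        h = int(-sw * s + sh * c)
        dx, dy = -sh * s, 0.0          # formula (10.2)
    dst = np.zeros((h, w), dtype=src.dtype)
    for j in range(h):
        for i in range(w):
            io = int((i - dx) * c - (j - dy) * s)   # formula (11)
            jo = int((i - dx) * s + (j - dy) * c)
            if 0 <= io < sw and 0 <= jo < sh:
                dst[j, i] = src[jo, io]             # Image2[j][i] = OriginalImage[jo][io]
            # else: (io, jo) is outside the original image; dst stays 0 (blank)
    return dst
```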

4) On face image 2, determine the coordinate position (x3, y3) of a point C on the left eyeball and the coordinate position (x4, y4) of a point D on the right eyeball, draw a straight line L2 through points C and D, and determine the coordinate position (x5, y5) of the mandible point E of face image 2.

In this step, the coordinate positions of points C and D on the left and right eyeballs can be obtained in three ways: the first is to mark them directly on the face image with the mouse; the second is to determine them with the same method used above for A and B; the third is to calculate them from the coordinates of A and B and the angle α. This embodiment adopts the third method:

    x3 = INT(x1 × cosα + y1 × sinα + dx + 0.5);
    y3 = INT(−x1 × sinα + y1 × cosα + dy + 0.5);
    x4 = INT(x2 × cosα + y2 × sinα + dx + 0.5);
    y4 = INT(−x2 × sinα + y2 × cosα + dy + 0.5);        (12)

In this step, the coordinate position (x5, y5) of the mandible point E of face image 2 can be obtained in three ways: the first is to mark point E directly on the face image with the mouse; the second is to determine it with the same method used above for the mandible point C0; the third is to calculate the coordinates (x5, y5) of E from the coordinates of C0 and the angle α. This embodiment adopts the third method:

    x5 = INT(x0 × cosα + y0 × sinα + dx + 0.5);
    y5 = INT(−x0 × sinα + y0 × cosα + dy + 0.5);        (13)

5) Specify the geometric size values of the normalized face image, as shown in Figure 5. In Figure 5, 1 is the upper border of the image, 2 is the straight line L2 determined by the two eyes, 3 is the submandibular line L3, 4 is the lower border of the image, 5 is the right border of the image, and 6 is the left border of the image.

In this embodiment, the image width (the distance between 5 and 6) is W = 360 pixels and the height (the distance between 1 and 4) is H = 480 pixels. The standard value of the vertical distance from any point on the submandibular line to the line connecting the two eyes (the distance between 2 and 3) is H0 = 200 pixels, the standard value of its vertical distance to the lower border of the image (the distance between 3 and 4) is H1 = 28 pixels, and the standard value of the vertical distance from the line connecting the two eyes to the upper border of the image (the distance between 1 and 2) is H2 = 252 pixels.

6) Calculate the vertical distance hy from point E to the straight line L2:

    hy = y5 − (y3 + y4)/2        (14)

and compute the image scaling factor K = hy/H0.

7) Enlarge or reduce face image 2 according to the scaling factor K to obtain face image 3, which satisfies the standard distance H0, as shown in Figure 6. Figure 6 is Figure 4 after reduction; in the figure, M and N are the left and right eyeball points, L4 is the horizontal submandibular line, and P is the mandible point.

The width w3 and height h3 of face image 3 are:

    w3 = INT(K × w)
    h3 = INT(K × h)        (15)
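The rescaling of step 7) can be sketched with nearest-neighbor sampling. The patent does not specify the interpolation scheme, so nearest-neighbor is an assumption here, and the function name is illustrative.

```python
import numpy as np

def scale_image(img, K):
    """Rescale face image 2 by the factor K of step 6), giving face image 3
    with w3 = INT(K * w) and h3 = INT(K * h) as in formula (15).
    Nearest-neighbour sampling is used as a stand-in interpolation."""
    h, w = img.shape
    h3, w3 = int(K * h), int(K * w)
    ys = (np.arange(h3) / K).astype(int).clip(0, h - 1)  # source row per output row
    xs = (np.arange(w3) / K).astype(int).clip(0, w - 1)  # source column per output column
    return img[np.ix_(ys, xs)]
```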

8) On face image 3, determine the coordinate position (x6, y6) of a point M on the left eyeball, the coordinate position (x7, y7) of a point N on the right eyeball, and the ordinate of the mandible point P.

The midpoint MidPoint of the two eye points M and N then satisfies:

    MidPoint.x = (x6 + x7)/2 = INT(K × (x3 + x4)/2)
    MidPoint.y = (y6 + y7)/2 = INT(K × (y3 + y4)/2)    (16)

The ordinate y8 of point P satisfies:

    y8 = MidPoint.y + H0                     (17)
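Equations (16) and (17) together, as a sketch (function and argument names are mine; integer division stands in for INT, which agrees with it for the non-negative pixel coordinates used here):

```python
H0 = 200  # standard eye-to-jaw distance from this embodiment, in pixels

def midpoint_and_jaw(x6, y6, x7, y7, h0=H0):
    """Equation (16): midpoint of the two eye points M = (x6, y6), N = (x7, y7);
    equation (17): ordinate y8 of the mandible point P."""
    mid_x = (x6 + x7) // 2
    mid_y = (y6 + y7) // 2
    y8 = mid_y + h0
    return (mid_x, mid_y), y8

# Illustrative coordinates, not from the patent's figures.
print(midpoint_and_jaw(150, 165, 250, 167))  # ((200, 166), 366)
```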

9) Crop face image 3 to obtain the standard normalized face image.

According to the size of the standard normalized image, cut away the parts of face image 3 whose x coordinate is less than (x6 + x7)/2 − W/2 or greater than (x6 + x7)/2 + W/2, and the parts whose y coordinate is less than (y7 − H2) or greater than (y8 + H1).

In the concrete implementation, define a cropping rectangle CropRect whose left, right, top, and bottom boundary coordinates are CropRect.left, CropRect.right, CropRect.top, and CropRect.bottom. The cropping bounds are:

    CropRect.left   = (MidPoint.x − W/2) > 0  ? (MidPoint.x − W/2) : 0
    CropRect.right  = (MidPoint.x + W/2) < w3 ? (MidPoint.x + W/2) : (w3 − 1)
    CropRect.top    = (MidPoint.y − H2) > 0   ? (MidPoint.y − H2)  : 0
    CropRect.bottom = (y8 + H1) < h3          ? (y8 + H1)          : (h3 − 1)    (18)

Crop face image 3 according to these bounds. If the width of the cropped image is less than W or its height is less than H, use interpolation to pad the width to W or the height to H, yielding a normalized face image of standard size, as shown in Figure 7.
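The clamping of equation (18), sketched in Python (names are mine; the subsequent interpolation padding is only noted, not implemented):

```python
W, H1, H2 = 360, 28, 252  # standard values from this embodiment, in pixels

def crop_rect(mid_x, mid_y, y8, w3, h3):
    """Equation (18): crop bounds clamped so the rectangle never leaves
    face image 3 (of size w3 x h3).  Returns (left, right, top, bottom).
    If the resulting width < W or height < H, the patent pads by
    interpolation afterwards."""
    left = max(mid_x - W // 2, 0)
    right = min(mid_x + W // 2, w3 - 1)
    top = max(mid_y - H2, 0)
    bottom = min(y8 + H1, h3 - 1)
    return left, right, top, bottom

# Illustrative values: eye midpoint (200, 166), jaw ordinate 366, image 480x640.
# The top bound 166 - 252 = -86 is clamped to 0.
print(crop_rect(200, 166, 366, 480, 640))  # (20, 380, 0, 394)
```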

10) For the face image of each person in the training set, apply steps 1) to 9) to form a geometrically normalized face image, and from each normalized face image extract five face components of that person: the bare face, eyebrows + eyes, eyes, nose tip, and mouth. From the five components extracted from the training-set faces, use the eigenface method of principal component analysis to form the characteristic bare face, characteristic (eyes + eyebrows), characteristic eyes, characteristic nose, and characteristic mouth; Figure 8 illustrates these five components. The feature extraction and recognition algorithm of this embodiment adopts patent No. 01136577.3, a multi-mode face recognition method based on component principal component analysis.

11) For the face image of each known person, apply steps 1) to 9) to form a geometrically normalized face image. From the bare face, eyebrows + eyes, eyes, nose tip, and mouth components extracted from the known normalized faces, use the feature-projection-value analysis of the principal component analysis method to extract the projection feature values of the five components of each known face, and build a database containing these projection feature values together with the compressed image of each known face and the known person's personal identity file. The feature extraction and recognition algorithm of this step also adopts patent No. 01136577.3, a multi-mode face recognition method based on component principal component analysis.
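The database entry built in step 11 could be sketched as a simple record (all field names are mine, not the patent's; the patent only states what each entry contains):

```python
from dataclasses import dataclass

@dataclass
class KnownFaceRecord:
    """One database entry per step 11: projection feature values of the five
    face components, the compressed normalized face image, and the person's
    identity file."""
    person_id: str
    projections: dict        # component name -> projection feature value string
    compressed_image: bytes  # compressed normalized face image
    identity_file: dict      # personal identity archive

record = KnownFaceRecord(
    person_id="p001",
    projections={"bare_face": [0.12, -0.4], "eyes_brows": [0.7]},
    compressed_image=b"",    # placeholder; a real entry holds compressed pixels
    identity_file={"name": "example"},
)
print(record.person_id)  # p001
```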

12) For the face image of each person to be recognized, apply steps 1) to 9) to form a geometrically normalized face image. From the bare face, eyebrows + eyes, eyes, nose tip, and mouth components extracted from the normalized face to be recognized, use the projection-eigenvalue analysis of the principal component analysis method to extract the projection feature values of the bare face, eyes + eyebrows, eyes, nose, and mouth of the face to be recognized. The feature extraction and recognition algorithm of this step also adopts patent No. 01136577.3, a multi-mode face recognition method based on component principal component analysis.

13) Face recognition is performed using global face recognition and partial face recognition. The recognition process is as follows: the features of the face to be recognized are compared with the features of the faces stored in the database and the similarity is computed; the faces in the database are then sorted in descending order of similarity to the face to be recognized, and in this order the photographs of the retrieved persons, their personal identity files, and their similarity to the face to be recognized are displayed, so as to find the identity of the person to be recognized or of persons similar to them in appearance. The similarity between the face to be recognized and a known face is computed with formula (19).

    R = 1 − ||A − B|| / (||A|| + ||B||)      (19)

In formula (19), A is the feature projection value string of the face to be recognized, and B is the feature projection value string of a known face in the database.
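Formula (19) as a sketch, treating the projection value strings as vectors of equal length and ||·|| as the Euclidean norm (function names are mine):

```python
import math

def similarity(a, b):
    """Formula (19): R = 1 - ||A - B|| / (||A|| + ||B||), with A the feature
    projection values of the face to be recognized and B those of a known face."""
    diff = math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return 1 - diff / (norm_a + norm_b)

print(similarity([3.0, 4.0], [3.0, 4.0]))    # 1.0  (identical feature strings)
print(similarity([3.0, 4.0], [-3.0, -4.0]))  # 0.0  (opposite feature strings)
```

R reaches 1 when the two feature strings coincide and falls toward 0 as they diverge, which is what makes it usable as a ranking score in step 13.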

When this embodiment uses the global face recognition method, the feature projection values of the bare face, eyes + eyebrows, eyes, nose, and mouth of the known face are weighted in the ratio 5:6:4:3:2, the feature projection values of the same five components of the face to be recognized are weighted in the same ratio, and the similarity is then computed with formula (19).
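One way to realize the 5:6:4:3:2 weighting (an interpretation, not taken verbatim from the patent): scale each component's projection value string by its weight and concatenate, then feed the concatenated strings of both faces to formula (19):

```python
# Component weights from this embodiment; dictionary keys are my own names.
WEIGHTS = {"bare_face": 5, "eyes_brows": 6, "eyes": 4, "nose": 3, "mouth": 2}

def weight_features(components):
    """components: dict mapping component name -> list of projection values.
    Returns one concatenated feature string with each component scaled by
    its 5:6:4:3:2 weight."""
    out = []
    for name in ("bare_face", "eyes_brows", "eyes", "nose", "mouth"):
        out.extend(WEIGHTS[name] * v for v in components[name])
    return out

feats = {"bare_face": [1.0], "eyes_brows": [1.0], "eyes": [1.0],
         "nose": [1.0], "mouth": [1.0]}
print(weight_features(feats))  # [5.0, 6.0, 4.0, 3.0, 2.0]
```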

When the partial face recognition method is used, any combination of the bare face, eyes + eyebrows, eyes, nose, and mouth is selected through human-computer interaction, giving 5! = 120 combinations, that is, 120 face recognition modes in total. The feature projection values of the bare face, eyes + eyebrows, eyes, nose, and mouth are still weighted in the ratio 5:6:4:3:2. This step also adopts patent No. 01136577.3, a multi-mode face recognition method based on component principal component analysis.

Claims (10)

1. A face image recognition method based on face geometric size normalization, comprising a face geometric size normalization part and a face recognition part, wherein the face geometric size normalization comprises the following steps:

1) on the input face image, determine the coordinate position (x1, y1) of a point A on the left eyeball and the coordinate position (x2, y2) of a point B on the right eyeball, draw a straight line L1 through points A and B, and determine the coordinates (x0, y0) of the mandible point C0;

2) calculate the angle α between the straight line L1 and the horizontal:

    α = arctan((y2 − y1) / (x2 − x1))

where (x1, y1) and (x2, y2) are the coordinates of the left and right eyeball points A and B, respectively;

3) rotate the face image by the angle −α to obtain a second face image, the rotation being

    x′ = x cos α + y sin α
    y′ = −x sin α + y cos α

where x, y are the coordinates of a point on the input face image and x′, y′ the coordinates of the corresponding point on the second face image;

4) on the second face image, determine the coordinate position (x3, y3) of a point C on the left eyeball and the coordinate position (x4, y4) of a point D on the right eyeball, draw a straight line L2 through points C and D, and determine the coordinate position (x5, y5) of the mandible point E of the second face image;

5) specify the numerical geometric dimensions of the normalized face image, with width W and height H; specify the standard value H0 of the vertical distance from any point on the submandibular line to the line connecting the two eyes, the standard value H1 of its vertical distance to the lower border of the image, and the standard value H2 of the vertical distance from the line connecting the two eyes to the upper border of the image;

6) find the vertical distance hy from point E to the straight line L2,

    hy = y5 − (y3 + y4)/2

and compute the image scaling factor K = hy/H0;

7) enlarge or reduce the second face image according to the scaling factor K to obtain a third face image satisfying the standard distance H0;

8) on the third face image, determine the coordinate position (x6, y6) of a point M on the left eyeball, the coordinate position (x7, y7) of a point N on the right eyeball, and the ordinate y8 of the mandible point P, where y8 = MidPoint.y + H0;

9) crop the third face image to obtain the standard normalized face image, cutting away the parts of the third face image whose x coordinate is less than (x6 + x7)/2 − W/2 or greater than (x6 + x7)/2 + W/2, and the parts whose y coordinate is less than (y7 − H2) or greater than (y8 + H1); if the width of the cropped image is less than W or its height is less than H, pad the width to W or the height to H by interpolation;

and wherein the face recognition comprises the following steps:

10) for the face image of each person in the training set, apply steps 1) to 9) to form a geometrically normalized face image, and extract face features from the normalized face images;

11) for the face image of each known person, apply steps 1) to 9) to form a geometrically normalized face image, extract face features from the normalized face image, and build a database containing the features of the known faces, compressed images of the known faces, and the personal identity files of the known persons;

12) for the face image of each person to be recognized, apply steps 1) to 9) to form a geometrically normalized face image, and extract face features from the normalized face image;

13) recognize the face to be recognized against the known face database by computing similarities and sorting by similarity.

2. The method of claim 1, wherein in step 1) the coordinate positions of the points A and B on the left and right eyeballs are read directly on the face image with a mouse.

3. The method of claim 1, wherein in step 1) the coordinate positions of the points A and B on the left and right eyeballs are determined by a method combining integral projection with feature space analysis.

4. The method of claim 1, wherein in step 4) the coordinate positions of the points C and D on the left and right eyeballs are read directly on the face image with a mouse.

5. The method of claim 1, wherein in step 4) the coordinate positions of the points C and D on the left and right eyeballs are determined by a method combining integral projection with feature space analysis.

6. The method of claim 1, wherein in step 4) the coordinate positions of the points C and D are computed from the coordinates of A and B and the angle α.

7. The method of claim 1, wherein in steps 1) and 4) the coordinate positions of the mandible points C0 and E are read directly on the face image with a mouse.

8. The method of claim 1, wherein in steps 1) and 4) the coordinate positions of the mandible points C0 and E are determined by a method combining integral projection with face organ proportions, comprising two steps: first, detecting the candidate valley points corresponding to the organs; second, determining which candidate valley points correspond to each organ.

9. The method of claim 1, wherein in step 4) the coordinate position of the mandible point E is computed from the coordinates of C0 and the angle α.

10. The method of claim 1, wherein in step 5) the numerical geometric dimensions of the normalized face image are: width W = 360 pixels and height H = 480 pixels; the standard value of the vertical distance from any point on the submandibular line to the line connecting the two eyes is H0 = 200 pixels, the standard value of its vertical distance to the lower border of the image is H1 = 28 pixels, and the standard value of the vertical distance from the line connecting the two eyes to the upper border of the image is H2 = 252 pixels.
CNB200510067962XA 2005-04-30 2005-04-30 Man face image identifying method based on man face geometric size normalization Expired - Fee Related CN100345153C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB200510067962XA CN100345153C (en) 2005-04-30 2005-04-30 Man face image identifying method based on man face geometric size normalization


Publications (2)

Publication Number Publication Date
CN1687959A CN1687959A (en) 2005-10-26
CN100345153C true CN100345153C (en) 2007-10-24

Family

ID=35306000

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB200510067962XA Expired - Fee Related CN100345153C (en) 2005-04-30 2005-04-30 Man face image identifying method based on man face geometric size normalization

Country Status (1)

Country Link
CN (1) CN100345153C (en)


Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100440246C (en) * 2006-04-13 2008-12-03 北京中星微电子有限公司 A Face Feature Point Locating Method
CN101033955B (en) * 2007-04-18 2010-10-06 北京中星微电子有限公司 Method, device and display for implementing eyesight protection
CN101393597B (en) * 2007-09-19 2011-06-15 上海银晨智能识别科技有限公司 Method for identifying front of human face
CN101615241B (en) * 2008-06-24 2011-10-12 上海银晨智能识别科技有限公司 Method for screening certificate photos
CN101383001B (en) * 2008-10-17 2010-06-02 中山大学 A Fast and Accurate Frontal Face Discrimination Method
CN101751559B (en) * 2009-12-31 2012-12-12 中国科学院计算技术研究所 Method for detecting skin stains on face and identifying face by utilizing skin stains
JP5434708B2 (en) * 2010-03-15 2014-03-05 オムロン株式会社 Collation apparatus, digital image processing system, collation apparatus control program, computer-readable recording medium, and collation apparatus control method
JP5500194B2 (en) * 2012-03-22 2014-05-21 日本電気株式会社 Captured image processing apparatus and captured image processing method
CN102799877A (en) * 2012-09-11 2012-11-28 上海中原电子技术工程有限公司 Method and system for screening face images
CN102968775B (en) * 2012-11-02 2015-04-15 清华大学 Low-resolution face image rebuilding method based on super-resolution rebuilding technology
CN103035049A (en) * 2012-12-12 2013-04-10 山东神思电子技术股份有限公司 FPGA (Field Programmable Gate Array)-based face recognition entrance guard device and FPGA-based face recognition entrance guard method
CN105279473B (en) * 2014-07-02 2021-08-03 深圳Tcl新技术有限公司 Face image correction method and device, and face recognition method and system
CN105989331B (en) * 2015-02-11 2019-10-08 佳能株式会社 Face feature extraction element, facial feature extraction method, image processing equipment and image processing method
CN105147264A (en) * 2015-08-05 2015-12-16 上海理工大学 Diagnosis and treatment system
CN108875515A (en) * 2017-12-11 2018-11-23 北京旷视科技有限公司 Face identification method, device, system, storage medium and capture machine
CN109934948B (en) * 2019-01-10 2022-03-08 宿迁学院 Novel intelligent sign-in device and working method thereof
CN113158914B (en) * 2021-04-25 2022-01-18 胡勇 Intelligent evaluation method for dance action posture, rhythm and expression

Citations (4)

Publication number Priority date Publication date Assignee Title
CN1207532A (en) * 1997-07-31 1999-02-10 三星电子株式会社 Device and method for retrieving image information in computer
JPH11144067A (en) * 1997-11-07 1999-05-28 Nec Corp System and method for image layout and recording medium
US20030133599A1 (en) * 2002-01-17 2003-07-17 International Business Machines Corporation System method for automatically detecting neutral expressionless faces in digital images
US20050058369A1 (en) * 2003-09-09 2005-03-17 Fuji Photo Film Co., Ltd. Apparatus, method and program for generating photo card data


Cited By (4)

Publication number Priority date Publication date Assignee Title
EP3617843A1 (en) * 2012-12-10 2020-03-04 Samsung Electronics Co., Ltd. Mobile device, control method thereof, and ui display method
US11134381B2 (en) 2012-12-10 2021-09-28 Samsung Electronics Co., Ltd. Method of authenticating user of electronic device, and electronic device for performing the same
US20220007185A1 (en) 2012-12-10 2022-01-06 Samsung Electronics Co., Ltd. Method of authenticating user of electronic device, and electronic device for performing the same
US11930361B2 (en) 2012-12-10 2024-03-12 Samsung Electronics Co., Ltd. Method of wearable device displaying icons, and wearable device for performing the same

Also Published As

Publication number Publication date
CN1687959A (en) 2005-10-26

Similar Documents

Publication Publication Date Title
CN100345153C (en) Man face image identifying method based on man face geometric size normalization
CN1261905C (en) Eye position detection method and eye position detection device
CN1313979C (en) Apparatus and method for generating 3-D cartoon
CN101059836A (en) Human eye positioning and human eye state recognition method
CN1928889A (en) Image processing apparatus and method
CN103839223B (en) Image processing method and device
CN100342399C (en) Method and apparatus for extracting feature vector used for face recognition and retrieval
CN1276389C (en) Graph comparing device and graph comparing method
CN1697478A (en) image correction device
CN1975759A (en) Human face identifying method based on structural principal element analysis
CN1940961A (en) Feature point detection apparatus and method
CN101034481A (en) Method for automatically generating portrait painting
CN1866271A (en) AAM-based head pose real-time estimating method and system
CN101079952A (en) Image processing method and image processing apparatus
CN1758264A (en) Biological authentification system register method, biological authentification system and program thereof
CN1818927A (en) Fingerprint identification method and system
CN101038629A (en) Biometric authentication method and biometric authentication apparatus
CN101055618A (en) Palm grain identification method based on direction character
WO2012169251A1 (en) Image processing device, information generation device, image processing method, information generation method, control program, and recording medium
CN1894703A (en) Pattern recognition method, and device and program therefor
CN1977286A (en) Object recognition method and apparatus therefor
CN1437161A (en) Personal recognition method, personal recognition apparatus and photographic apparatus
CN1794264A (en) Method and system of real time detecting and continuous tracing human face in video frequency sequence
CN1811793A (en) Automatic positioning method for characteristic point of human faces
CN1924894A (en) Multiple attitude human face detection and track system and method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20071024

Termination date: 20150430

EXPY Termination of patent right or utility model