
CN1794265A - Method and device for distinguishing face expression based on video frequency - Google Patents


Info

Publication number
CN1794265A
CN1794265A · CN200510135670A (application CN 200510135670)
Authority
CN
China
Prior art keywords
face
image
facial expression
video
unit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN 200510135670
Other languages
Chinese (zh)
Other versions
CN100397410C (en)
Inventor
谢东海
黄英
王浩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Zhongxing Electronics Co ltd
Original Assignee
Vimicro Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vimicro Corp
Priority to CNB2005101356705A (granted as CN100397410C)
Publication of CN1794265A
Application granted
Publication of CN100397410C
Anticipated expiration
Legal status: Expired - Lifetime


Landscapes

  • Image Analysis (AREA)
  • Collating Specific Patterns (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a video-based facial expression recognition method and device. For real-time facial expression recognition from video, the method applies the ASM contour-extraction algorithm to feature-vector extraction: the face image is extracted according to the positions of the eyes, a normalized eigenface is generated from the position of the chin, and the AdaBoost algorithm selects the most effective features of the eigenface, finally achieving facial expression recognition. The method removes the influence of illumination: the face image is specially processed so that the gray-level mean and variance of the left and right halves of the face are essentially equal. Operating on the video data of an ordinary USB camera, the method automatically detects and tracks faces in real time and recognizes the four expressions most commonly shown by a frontal face, achieving good technical and commercial results.

Figure 200510135670

Description

Video-based facial expression recognition method and device

Technical Field

The present invention relates to a recognition method, and in particular to a video-based method and device for recognizing human facial expressions.

Background Art

With the deepening of human-computer interaction research and its great application prospects, facial expression recognition has become a research hotspot in pattern recognition and artificial intelligence. Real-time recognition of facial expressions, however, remains very difficult: much of the theory is still immature, and mature commercial results are scarce. The difficulty lies in the fact that the same expression varies considerably from person to person, while the differences between distinct expressions can be quite subtle. Illumination and head pose also affect recognition accuracy. Expression recognition methods are generally statistical: feature vectors are extracted from face images, a classifier is trained, and recognition is then performed.

Feature extraction is the key to successful recognition. The features currently used for expression recognition fall into two types: local features and holistic features. Facial expression recognition based on local features extracts the position, size, and relative arrangement of each person's facial features (eyebrows, eyes, nose, mouth, facial contour, and so on). Recognition based on the holistic features of the face starts from the whole face image and derives features that reflect the face as a whole. Local features involve a relatively small amount of data, but representing the entire image with a limited set of features loses useful information, and the accurate, automatic extraction of facial features is itself a hard problem.

In the prior art, it has been proposed to recognize facial expressions with a Fisher criterion function, that is, to recognize the holistic features of the face, using a back-propagation algorithm. The basic steps of this method are: a. preprocess the received image; b. extract local features of the face; c. extract holistic features; d. fuse the local and holistic features; e. finally recognize the facial expression in the received image. However, this method merely analyzes and judges the facial features more explicitly; although it can roughly reflect the expression shown on the face, it is still affected by external factors such as illumination and cannot extract expressions accurately, quickly, and automatically.

Summary of the Invention

The technical problem addressed by the present invention is that the prior art cannot accurately and automatically extract facial expressions. The invention proposes a facial expression recognition method for video that overcomes these defects: based on the holistic features of the face, it generates a standard face from the automatically extracted chin contour, then uses the AdaBoost algorithm to select the most effective features, yielding robust recognition results.

The method of the invention, designed for the video data of an ordinary USB camera, automatically detects and tracks faces in real time and recognizes the expressions commonly shown by a frontal face, in particular the four most common ones, while keeping the recognized expressions insensitive to factors such as illumination.

The object of the invention is achieved as follows:

A video-based facial expression recognition method comprises the following steps:

collecting facial expression image data from the video stream of a USB camera and preprocessing the image data;

extracting the position of the face in the preprocessed image in real time;

locating the eyes within the detected face according to an eye classifier;

extracting the image region containing the face according to the located eye positions and the information of the face classifier, and normalizing it;

locating the facial organs;

determining the position of the chin from the located facial organs, delimiting the face region in the image, and generating an eigenface to serve as a classification sample;

computing the Gabor features of the eigenface image;

selecting among the computed Gabor features;

constructing a support vector machine classifier from the selected features;

obtaining the facial expression recognition result from the constructed classifier.
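The Gabor feature computation in the steps above can be illustrated with a small sketch. The kernel formula is the standard real Gabor filter; the parameter values (kernel size 9, wavelengths 4 and 8, four orientations) are illustrative assumptions, not values taken from the patent:

```python
import math

def gabor_kernel(size, theta, lam, sigma=None, gamma=0.5, psi=0.0):
    """Real part of a 2-D Gabor kernel with orientation theta and
    wavelength lam (parameter values here are illustrative)."""
    sigma = sigma if sigma is not None else 0.56 * lam
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            # rotate coordinates into the filter's orientation
            xr = x * math.cos(theta) + y * math.sin(theta)
            yr = -x * math.sin(theta) + y * math.cos(theta)
            g = math.exp(-(xr * xr + gamma * gamma * yr * yr) / (2 * sigma * sigma))
            row.append(g * math.cos(2 * math.pi * xr / lam + psi))
        kernel.append(row)
    return kernel

# A bank over two scales and four orientations, in the spirit of FIG. 7:
bank = [gabor_kernel(9, o * math.pi / 4, lam)
        for lam in (4, 8) for o in range(4)]
```

Convolving the eigenface with each kernel of such a bank yields the Gabor feature maps from which features are later selected.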

When collecting the data input from the USB camera, the following face tracking steps are included:

before a tracking target is acquired, searching every frame to detect the presence of a face image;

if one or more faces are detected in a frame, tracking the detected faces in the next two frames, detecting and verifying the tracked faces in those two frames, and judging the detection results;

only after a face has been detected at the same position in three frames does the algorithm accept that a face image exists there, at which point the real-time face detection algorithm extracts the position of the face in the image;

if several face images are detected in the scene, selecting the largest one and tracking it in subsequent frames; tracking stops if the similarity between the tracking results of consecutive frames falls too low, or if no frontal upright face has been detected in a target's region for a long time.
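The three-frame confirmation rule above can be sketched as a small state machine. Treating "same position" as exact equality of the detection is a simplifying assumption for illustration:

```python
class FaceConfirmer:
    """Sketch of the three-frame rule: a face position is accepted only
    after detections agree in three consecutive frames."""

    def __init__(self, required=3):
        self.required = required
        self.hits = 0
        self.last = None

    def update(self, detection):
        """detection: face position for this frame, or None for a miss."""
        if detection is not None and detection == self.last:
            self.hits += 1
        else:
            # a miss resets the count; a new position restarts it at 1
            self.hits = 1 if detection is not None else 0
        self.last = detection
        return self.hits >= self.required  # True once the face is confirmed

c = FaceConfirmer()
results = [c.update(p) for p in [(10, 10), (10, 10), (10, 10), None]]
```

In a real tracker the equality test would be replaced by an overlap or similarity measure between detections.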

The normalization is realized by a resampling algorithm consisting of scaling, rotation, and translation transforms, which bring the detected eye positions into registration with the eye positions assumed by the eye classifier.

The facial organs are located with a target-extraction method, namely the Active Shape Model algorithm.

The specific steps of the Active Shape Model algorithm are:

extracting the contour information of the face from the video data and building a sample set;

normalizing and aligning the samples in the sample set, then applying a principal component analysis transform;

using the gray-level information of each control point of the transformed contour as the basis for point search;

taking the mean contour computed by principal component analysis as the initial value of the contour search and iterating to obtain the final result.
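The PCA step above can be sketched with power iteration on centered shape vectors: the mean contour is subtracted and the dominant variation direction is found without forming the covariance matrix explicitly. The toy "shapes" and the iteration count are illustrative assumptions:

```python
def mean_vector(samples):
    n = len(samples)
    return [sum(col) / n for col in zip(*samples)]

def principal_component(samples, iters=200):
    """Mean shape and first principal direction of centered shape
    vectors, via power iteration (an illustrative sketch of the PCA
    step, not the patent's exact procedure)."""
    mean = mean_vector(samples)
    centered = [[x - m for x, m in zip(s, mean)] for s in samples]
    d = len(mean)
    v = [1.0] * d
    for _ in range(iters):
        # w = C v with C = (1/n) * sum(c c^T), computed per sample
        w = [0.0] * d
        for c in centered:
            dot = sum(ci * vi for ci, vi in zip(c, v))
            for i in range(d):
                w[i] += dot * c[i]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return mean, v

# Toy shape vectors whose variation lies along the first coordinate only
mean, pc = principal_component([[0.0, 1.0], [2.0, 1.0], [4.0, 1.0]])
```

A full ASM would keep several principal directions and clamp the shape coefficients to plausible ranges during the search.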

The steps of the iterative search are:

obtaining an initial translation from the gray-level information, aligning the new contour found by the gray-level search to the mean contour, and computing the alignment parameters;

computing the shape variation from the aligned data and the statistics computed by principal component analysis;

mapping the varied shape back to its original position using the alignment parameters, giving the result of one search pass;

repeating the above search steps until the iteration converges to the final result.

Generating the eigenface comprises comparing the extracted face contour with the face in the face classifier and correcting for tilt.

Between generating the eigenface and computing its Gabor features there is a further processing step: the gray levels of the left and right halves of the eigenface are normalized so that the two halves have the same gray-level mean and variance.
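The left/right gray-level normalization above might be sketched as follows. Matching the right half to the left half's statistics, and the toy 4-pixel-wide image, are assumptions for illustration; the patent does not specify which half is the reference:

```python
def normalize_halves(img):
    """Rescale the right half of a gray image (a list of rows) so that
    its mean and standard deviation match the left half's."""
    w = len(img[0]) // 2
    left = [v for row in img for v in row[:w]]
    right = [v for row in img for v in row[w:2 * w]]
    ml = sum(left) / len(left)
    mr = sum(right) / len(right)
    sl = (sum((v - ml) ** 2 for v in left) / len(left)) ** 0.5
    sr = (sum((v - mr) ** 2 for v in right) / len(right)) ** 0.5
    scale = sl / sr if sr else 1.0
    out = []
    for row in img:
        # shift the right half to the left half's mean, scaled to its spread
        out.append(list(row[:w]) + [ml + (v - mr) * scale for v in row[w:2 * w]])
    return out

img = normalize_halves([[10, 20, 100, 140],
                        [20, 10, 140, 100]])
```

After this step both halves have mean 15 and standard deviation 5, so a strong side light no longer makes one half systematically brighter.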

A gray-level filter band is provided between the left and right halves of the eigenface.

The support vector machine classifier is a multi-class classifier, organized one-against-one, one-against-all, or as a decision tree.
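For four expressions, the one-against-one scheme trains C(4,2) = 6 pairwise classifiers and lets them vote, as FIG. 8 suggests. The sketch below uses stub pairwise classifiers in place of trained SVMs; the expression names and stub behavior are illustrative assumptions:

```python
from itertools import combinations

EXPRESSIONS = ["neutral", "smile", "angry", "surprised"]

def one_vs_one_predict(sample, binary_classifiers):
    """One-against-one voting: each pairwise classifier names the winner
    of its pair, and the class with the most votes is returned."""
    votes = {e: 0 for e in EXPRESSIONS}
    for pair, clf in binary_classifiers.items():
        votes[clf(sample)] += 1
    return max(EXPRESSIONS, key=lambda e: votes[e])

# Stub classifiers standing in for trained SVMs: each always favors
# "smile" when it appears in its pair, else the pair's first class.
stubs = {pair: (lambda s, p=pair: "smile" if "smile" in p else p[0])
         for pair in combinations(EXPRESSIONS, 2)}
result = one_vs_one_predict(None, stubs)
```

In a real system each stub would be an SVM trained on samples of just its two expressions, which keeps each binary problem small and well separated.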

The invention also provides a video-based facial expression recognition device comprising a video data acquisition unit, an image processing unit, a face information database, and a facial expression recognition unit:

the video data acquisition unit captures face images from the video and passes them to the image processing unit;

the image processing unit retrieves face information from the face information database, compares it with the captured face image, performs the computations on the face data, and passes the computed data to the facial expression recognition unit;

the facial expression recognition unit recognizes the captured face images according to the recognition information stored in the face information database.

The device further comprises a display unit that displays the recognized facial expression.

The image processing unit comprises a comparison unit, a feature generation unit, a computation unit, and a classifier unit:

the comparison unit compares the image information of the face with the image information in the face database, detects the face and the eyes, extracts the face image according to the eye positions, and passes the face information to the feature generation unit;

the feature generation unit locates the facial organs, generates the eigenface according to the chin of the face, and passes the eigenface as a sample to the computation unit;

the computation unit computes the Gabor features of the eigenface image, selects features with the AdaBoost algorithm, and passes the selected features to the classifier unit;

the classifier unit constructs a support vector machine classifier from the selected features and passes the classifier information to the facial expression recognition unit.
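The AdaBoost feature selection performed by the computation unit can be sketched with one-feature decision stumps: each boosting round picks the single feature whose threshold test best classifies the reweighted samples, so the sequence of chosen features is the selection. The tiny two-feature data set and two rounds are illustrative assumptions:

```python
import math

def adaboost_select(samples, labels, rounds=2):
    """Discrete AdaBoost over one-feature threshold stumps; returns the
    index of the feature chosen in each round (an illustrative sketch,
    not the patent's exact procedure). labels are +1/-1."""
    n, d = len(samples), len(samples[0])
    w = [1.0 / n] * n
    chosen = []
    for _ in range(rounds):
        best = None  # (weighted error, feature, threshold, polarity)
        for f in range(d):
            values = sorted(set(s[f] for s in samples))
            for t in [(a + b) / 2 for a, b in zip(values, values[1:])]:
                for pol in (1, -1):
                    err = sum(wi for wi, s, y in zip(w, samples, labels)
                              if (1 if pol * (s[f] - t) > 0 else -1) != y)
                    if best is None or err < best[0]:
                        best = (err, f, t, pol)
        err, f, t, pol = best
        err = max(err, 1e-10)                      # avoid log(0)
        alpha = 0.5 * math.log((1 - err) / err)
        chosen.append(f)
        # reweight: mistakes gain weight, correct samples lose it
        w = [wi * math.exp(-alpha * y * (1 if pol * (s[f] - t) > 0 else -1))
             for wi, s, y in zip(w, samples, labels)]
        total = sum(w)
        w = [wi / total for wi in w]
    return chosen

# Feature 1 separates the two classes perfectly; feature 0 is noise.
feats = adaboost_select([[5, 0], [3, 0], [4, 1], [6, 1]],
                        [-1, -1, 1, 1])
```

With thousands of Gabor responses per eigenface, this per-round selection is what reduces the feature vector to a compact set for the SVM.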

The video data acquisition unit further contains a video data tracking unit that tracks and detects the face data in the video and decides whether to acquire the input data.

The above technical scheme enables accurate facial expressions to be extracted automatically from video. By adopting the AdaBoost and ASM algorithms the method eliminates the influence of illumination; the face image is specially processed so that the gray-level mean and variance of its left and right halves are essentially equal; and, operating on the video data of an ordinary USB camera, the method automatically detects and tracks faces in real time and recognizes the four expressions most commonly shown by a frontal face, achieving good technical and commercial results.

Brief Description of the Drawings

FIG. 1 is a flow chart of the video-based facial expression recognition method of the present invention.

FIG. 2 is a schematic diagram of expression collection in an embodiment of the video-based facial expression recognition method of the present invention.

FIG. 3 is a schematic diagram of the shape normalization of a face image.

FIG. 4 is a schematic diagram of detection with the ASM algorithm.

FIG. 5a shows the eigenface of a captured face contour.

FIG. 5b shows the standard eigenface.

FIG. 6 is a schematic diagram of eigenface generation.

FIG. 7 is a schematic diagram of the Gabor features of an image at different scales and orientations, as used when computing the Gabor features of the eigenface image.

FIG. 8 is a schematic diagram of the one-against-one classifier of the present invention.

FIG. 9 shows the recognition results of the method of the present invention.

FIG. 10 is a structural block diagram of the device of the present invention.

Detailed Description of the Embodiments

The present invention provides a video-based facial expression recognition method designed for the video data of an ordinary USB camera. The method automatically detects and tracks faces in real time and recognizes the expressions commonly shown by a frontal face.

Referring to FIG. 1, a flow chart of the recognition method of the present invention, the specific steps are as follows:

First, the facial expression image is collected: facial expression image data are captured from the video stream of the USB camera and preprocessed.

In an embodiment of the present invention, the image collection process further includes a face tracking step. Its purpose is to detect the multiple faces in the scene in real time, to track one of them continuously (for example the largest), and to keep verifying during tracking whether the face is still present. The tracking step can detect faces rotated in depth from -20 to 20 degrees and rotated in plane from -20 to 20 degrees, and can detect faces of different skin colors, faces under different illumination, faces wearing glasses, and so on. The tracking algorithm is not affected by head pose: profile and rotated faces can be tracked as well.

The tracking step is implemented as follows:

Before a tracking target is acquired, every frame is searched to detect whether a face is present. If one or more faces are detected in a frame, those faces are tracked in the next two frames, and the tracked faces in those two frames are detected and verified to judge whether the earlier detections were real faces. Only after a face has been detected at the same position in three frames does the algorithm accept that a face exists there and go on to judge and recognize the face image. In this tracking step, if several faces are present in the scene, one of them is selected for tracking and followed in subsequent frames. If the similarity between the tracking results of consecutive frames falls too low, tracking stops; if no frontal upright face has been detected in a target's region for a long time, the target is deemed not worth tracking and is abandoned. Once the current target has been dropped, face detection resumes in subsequent images until a new face is found and tracked, and the tracking steps repeat.

Referring again to FIG. 1, after the facial expression image has been collected, the face detection step follows. Face detection in this embodiment uses a video-based real-time face detection algorithm to extract the position of the face in the preprocessed image in real time. As shown in FIG. 2, the current algorithm can recognize different expressions, for example neutral, smiling, angry, and surprised; the recognition algorithm is based on statistical principles. Before recognition with the method of the present invention, a large number of samples must first be collected: expression videos of subjects can be recorded with a USB camera, and the images containing facial expressions separated from the video files form the initial sample set used during recognition.

In the method of the present invention, the purpose of face detection is to determine the position of the face in the captured image; once the face position is known, the eyes can be detected. Referring to the eye detection step in FIG. 1, the eyes within the detected face are located according to an eye classifier: after the face region has been detected, the eye positions are determined with an eye classifier, which is generally built by statistical detection, that is, a classifier is first trained on eye samples and then used for detection.

Referring to FIG. 1 and FIG. 3, an image containing only the face is extracted according to the eye positions: the image region containing the face is extracted from the located eye positions and the information of the face classifier and is normalized. The normalization process is shown in FIG. 3: the image captured from video in FIG. 3a is mapped onto the standard face template of FIG. 3b, giving the normalized result of FIG. 3c. This is needed because, in video, the size of the face region changes with the distance between the real face and the USB camera, which is very unfavorable for the organ-location algorithm. After the eyes have been detected, an image is resampled from the original video data in which the eye positions are fixed and the line joining them is horizontal; the resampled image covers the whole face region.

The resampling algorithm is a simple scaling, rotation, and translation transform that maps the detected eyes onto the eye positions of the standard face image. The standard image size may be 120*148. The calculation formula is:

x = λ(x′cosθ + y′sinθ) + x₀

y = λ(−x′sinθ + y′cosθ) + y₀

Setting λcosθ = a and λsinθ = b, the formulas can be written as:

x = ax′ + by′ + x₀

y = −bx′ + ay′ + y₀

There are only four unknowns in these formulas; each point yields two equations, so two points suffice to solve for all the unknowns. The transform can therefore be determined from the positions of the two eyes.

The face image produced by this resampling has the same size as the pre-trained standard image, and the detected eyes (the × marks in FIG. 3), after rotation and translation, coincide with the eye positions of the standard image.
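The two-point solution described above can be written out directly: substituting the two eye correspondences into x = ax′ + by′ + x₀, y = −bx′ + ay′ + y₀ and eliminating the translation gives closed forms for a and b. The eye coordinates in the example are illustrative, not the patent's standard-face values:

```python
def eye_alignment(src_eyes, dst_eyes):
    """Solve x = a*x' + b*y' + x0, y = -b*x' + a*y' + y0 (a = lambda*cos(theta),
    b = lambda*sin(theta)) from two eye correspondences."""
    (x1, y1), (x2, y2) = src_eyes   # eyes detected in the video frame
    (X1, Y1), (X2, Y2) = dst_eyes   # eye positions in the standard face
    dx, dy = x2 - x1, y2 - y1
    dX, dY = X2 - X1, Y2 - Y1
    d2 = dx * dx + dy * dy
    a = (dX * dx + dY * dy) / d2
    b = (dX * dy - dY * dx) / d2
    x0 = X1 - a * x1 - b * y1
    y0 = Y1 + b * x1 - a * y1
    return a, b, x0, y0

def warp_point(params, p):
    """Apply the solved transform to one point."""
    a, b, x0, y0 = params
    x, y = p
    return a * x + b * y + x0, -b * x + a * y + y0

# Illustrative eyes: 40 px apart in the frame, 50 px apart in the template
params = eye_alignment(((40, 50), (80, 50)), ((35, 55), (85, 55)))
```

Iterating `warp_point` over every pixel of the standard image (with the inverse transform) produces the resampled face.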

After the face image has been extracted, referring again to FIG. 1, the facial organs are located. Organ location uses a target-extraction algorithm; in the embodiment of the present invention the ASM (Active Shape Model) algorithm is used. The purpose of this step is to extract the face region accurately and to remove irrelevant background information from the image. The method needs to determine the approximate position of the face contour; ASM introduces the statistics of known face contours as constraints that control how the contour shape may vary during the search. With ASM the face contour can be extracted quickly and accurately and the facial organs located.

The specific steps of the ASM algorithm are:

first, the contour information of the face is extracted from the video data and a sample set is built;

the samples in the sample set are then normalized and aligned, and a principal components analysis (PCA) transform is applied;

the gray-level information of each control point of the PCA-processed contour serves as the basis for point search;

the mean contour computed by PCA is then taken as the initial value of the contour search, and the search is iterated to obtain the final result.

In the ASM algorithm, the specific steps of the iterative search are:

an initial translation is obtained from the gray-level information, the new contour found by the gray-level search is aligned to the mean contour, and the alignment parameters are computed;

the shape variation is computed from the aligned data and the statistics computed by PCA;

the varied shape is mapped back to its original position using the alignment parameters, giving the result of one search pass;

the search steps are repeated until the iteration converges to the final result.

In the method of the present invention, a pyramid of images can also be introduced for coarse-to-fine search, improving both the speed and the accuracy of the search. Because PCA statistics are introduced to control the variation of the face contour, the ASM algorithm finds the contour fairly accurately and runs quickly: the iterative search converges within one second. In the scheme of the present invention, the detected eye positions can be used to set the initial position of the contour, and, to improve the precision of organ location, the images stored in the database are kept the same size as the images actually detected. In practice the AAM (Active Appearance Model) algorithm could also be used to find the face contour; since that algorithm is common in the prior art, it is not described further in this embodiment.

From the above description and FIG. 4 it can be seen that the algorithm of the present invention recovers the position of the chin well and preserves the overall shape of the contour.

Continuing with FIG. 1, after the facial organs have been located, the position of the chin is determined from them, the face region in the image is delimited, and an eigenface is generated to serve as a classification sample. When generating the eigenface, the samples used for classification should contain the main region of the face while discarding the useless information that would degrade recognition; when only frontal expression recognition is considered, the main factors affecting recognition are background and illumination. The method of the present invention extracts the chin position with the ASM algorithm, so the face region can be cut out of the image on its own as an eigenface image for expression recognition. The eigenface size is fixed; 64*64 generally satisfies the requirements on both recognition rate and speed. If the eigenface is too small the recognition rate drops; if it is too large the efficiency of the algorithm suffers.
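Cutting the face region out at the fixed 64*64 eigenface size might be sketched with nearest-neighbour resampling. The box coordinates, the toy image, and the sampling scheme are illustrative assumptions (the patent fixes only the target size):

```python
def resample_to_eigenface(img, box, size=64):
    """Nearest-neighbour resampling of the face region `box`
    (left, top, right, bottom) to a fixed size x size eigenface."""
    left, top, right, bottom = box
    w, h = right - left, bottom - top
    out = []
    for r in range(size):
        src_y = top + (r * h) // size       # nearest source row
        out.append([img[src_y][left + (c * w) // size] for c in range(size)])
    return out

# A toy 128x128 "image" whose value encodes each pixel's coordinates,
# so the sampling pattern is directly visible in the output.
img = [[(y, x) for x in range(128)] for y in range(128)]
face = resample_to_eigenface(img, (32, 32, 96, 96))
```

A production pipeline would interpolate rather than pick the nearest pixel, but the fixed-output-size structure is the same.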

Referring to FIG. 5 and FIG. 6, FIG. 5a shows a collected eigenface and FIG. 5b shows the standard eigenface. FIG. 5a carries several parallel lines from top to bottom; as their positions show, one of them marks the position of the chin. FIG. 5b shows the standard eigenface in the face classifier, obtained by training before recognition, and is marked from top to bottom with the same number of parallel lines; the corresponding line in FIG. 5b likewise marks the chin. Comparing FIG. 5a with FIG. 5b, the face contour actually extracted from the video input differs in size from the standard eigenface and may also be tilted. The method of the present invention therefore samples along the computed tilt angle: the lines in FIG. 5a correspond one-to-one to the lines in FIG. 5b, and the mapping between corresponding lines allows the actual face region to be resampled into an image exactly the size of the standard eigenface. After this resampling, a face detected in the video has been converted into a face image with the same size and orientation as the standard eigenface. This is a standardization of the collected face images: standardization here means geometrically transforming the faces detected in the video so that they agree with the standard eigenface we have set. The purpose of standardization is to simplify sample generation and feature extraction and to improve recognition accuracy.
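The geometric transform in this standardization step can be solved directly from two point correspondences, e.g. the detected eye centres versus the template eye centres. A sketch of recovering the scale/rotation/translation (similarity) transform under that assumption — the coordinates below are made-up illustrative values:

```python
import numpy as np

def align_to_template(eye_l, eye_r, tmpl_l, tmpl_r):
    """Solve the similarity transform (scale, rotation, translation)
    that maps the detected eye centres onto the template eye centres.
    Returns (R, t) so that a point p maps to R @ p + t."""
    v_src = np.array(eye_r, float) - np.array(eye_l, float)
    v_dst = np.array(tmpl_r, float) - np.array(tmpl_l, float)
    scale = np.linalg.norm(v_dst) / np.linalg.norm(v_src)
    angle = np.arctan2(v_dst[1], v_dst[0]) - np.arctan2(v_src[1], v_src[0])
    c, s = scale * np.cos(angle), scale * np.sin(angle)
    R = np.array([[c, -s], [s, c]])
    t = np.array(tmpl_l, float) - R @ np.array(eye_l, float)
    return R, t

# tilted, over-large face in the frame -> upright template coordinates
R, t = align_to_template((30, 40), (70, 44), (16, 24), (48, 24))
print(np.round(R @ np.array([70, 44]) + t))  # [48. 24.]
```

Resampling the face region then amounts to evaluating the inverse of this transform at every pixel of the 64×64 target image.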

Referring to FIG. 6, the left image is a face image extracted from the video data and the right image is the eigenface obtained after resampling, preferably 64×64 in size. Because recognition in the present invention is based on image grey-level information, illumination affects the final result. To remove the influence of illumination, the generated eigenface is processed: the grey levels of the left and right halves of the eigenface are normalized separately so that the two halves have the same grey-level mean and variance. At the same time, to avoid a grey-level jump in the middle, the method sets up a grey-level transition band between the left and right halves of the face, so that the grey level passes smoothly from the left part of the face to the right part.
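The half-wise normalization with a smooth midline can be sketched as follows; the target mean/variance (128 and 32) and the band width are assumed values for illustration, not figures from the patent:

```python
import numpy as np

def normalize_halves(face, band=8):
    """Normalize the left and right halves to a common grey-level
    mean/variance, cross-fading inside a central transition band so
    there is no grey-level jump at the midline."""
    face = face.astype(np.float64)
    h, w = face.shape
    mid = w // 2

    def norm_by(region):
        # normalize the whole image with one half's statistics
        m, s = region.mean(), region.std() + 1e-9
        return (face - m) / s * 32.0 + 128.0

    left_n = norm_by(face[:, :mid])
    right_n = norm_by(face[:, mid:])
    # blending weight: 1 on the left, 0 on the right,
    # linear ramp inside the central transition band
    wgt = np.ones(w)
    wgt[mid + band // 2:] = 0.0
    wgt[mid - band // 2: mid + band // 2] = np.linspace(1.0, 0.0, band)
    return left_n * wgt + right_n * (1.0 - wgt)

rng = np.random.default_rng(0)
out = normalize_halves(rng.random((64, 64)) * 255.0)
print(out.shape)  # (64, 64)
```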

Referring again to FIG. 1, after the eigenface has been generated, its Gabor features are computed. As shown in FIG. 7, Gabor features at 5 scales and in 6 orientations can be computed for every pixel of the eigenface image, i.e. each pixel yields a 30-dimensional vector; gathering the Gabor features of all pixels of the 64×64 image gives a 122,880-dimensional feature vector. In the actual computation, to speed things up, the present invention uses the fast Fourier transform (FFT) to compute the Gabor features.
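The FFT trick works because spatial convolution becomes pointwise multiplication in the frequency domain. A sketch of a 5-scale, 6-orientation bank applied this way — the kernel parameters (sigma, frequency) are illustrative assumptions, not the patent's exact filter design:

```python
import numpy as np

def gabor_bank_fft(image, scales=5, orientations=6):
    """Filter an image with a Gabor bank in the frequency domain.

    One FFT of the image is reused for all 30 filters; each filter is
    a Gaussian-windowed cosine carrier rotated to its orientation.
    Returns an array of shape (scales * orientations, h, w)."""
    h, w = image.shape
    F = np.fft.fft2(image)
    y, x = np.mgrid[-h // 2:h // 2, -w // 2:w // 2]
    responses = []
    for s in range(scales):
        sigma = 2.0 * (2 ** s)          # assumed envelope width
        freq = 0.25 / (2 ** s)          # assumed carrier frequency
        for o in range(orientations):
            theta = np.pi * o / orientations
            xr = x * np.cos(theta) + y * np.sin(theta)
            g = (np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2)) *
                 np.cos(2 * np.pi * freq * xr))
            G = np.fft.fft2(np.fft.ifftshift(g))   # kernel spectrum
            responses.append(np.abs(np.fft.ifft2(F * G)))
    return np.stack(responses)

img = np.random.default_rng(0).random((64, 64))
feats = gabor_bank_fft(img)
print(feats.shape, feats.size)  # (30, 64, 64) 122880
```

Note that flattening the responses of a 64×64 image indeed gives the 30 × 64 × 64 = 122,880 dimensions quoted above.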

Referring to FIG. 1, after the Gabor features of the eigenface image have been computed, a selection must be made among them. In the method of the present invention the Gabor feature vector computed from the eigenface is as large as 122,880 dimensions, which greatly burdens training and computation and makes the algorithm inefficient. The present invention therefore uses the AdaBoost algorithm to select features: the AdaBoost method extracts the most effective subset of the original vector and uses it as the classification sample. The basic principle of AdaBoost is to combine weak classifiers repeatedly into a strong classifier with high discriminative power. In the course of the AdaBoost computation we can pick out the series of features with the best discriminative ability and obtain the final classifier from the weights learned in training. The AdaBoost algorithm itself works by changing the data distribution: it sets the weight of each example according to whether that example was classified correctly in each training round and according to the overall accuracy of the previous round. The classifiers obtained in each training round are finally fused together as the decision classifier.
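The reweighting loop described above can be sketched with single-feature threshold "stumps" as the weak learners; the feature picked in each round is the one that best separates the classes under the current weights. This is a toy illustration of the selection mechanism, not the trained expression classifier:

```python
import numpy as np

def adaboost_select(X, y, rounds=5):
    """Pick discriminative feature columns with threshold stumps.

    X: (n_samples, n_features); y: labels in {-1, +1}.
    After each round the misclassified samples gain weight, so later
    rounds favour features that fix the remaining errors."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)
    chosen = []
    for _ in range(rounds):
        best = None
        for j in range(d):
            thr = X[:, j].mean()
            for sign in (1, -1):
                pred = np.where(sign * (X[:, j] - thr) > 0, 1, -1)
                err = w[pred != y].sum()
                if best is None or err < best[0]:
                    best = (err, j, pred)
        err, j, pred = best
        err = min(max(err, 1e-9), 1 - 1e-9)
        alpha = 0.5 * np.log((1 - err) / err)
        w *= np.exp(-alpha * y * pred)   # boost the hard samples
        w /= w.sum()
        chosen.append(j)
    return chosen

rng = np.random.default_rng(1)
y = np.where(rng.random(200) > 0.5, 1, -1)
X = rng.normal(size=(200, 20))
X[:, 3] += 2.0 * y               # feature 3 carries the class signal
print(adaboost_select(X, y)[0])  # 3
```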

Referring to FIG. 1 and FIG. 8, after feature selection, a support vector machine (SVM) classifier is constructed from the selected features. For example, the method of the present invention uses the AdaBoost algorithm to select 2,000 features as training samples; in practice 3,000 or 4,000 features could also be chosen, but this embodiment takes 2,000 dimensions as an example and builds the SVM classifier from them. Since this embodiment must distinguish essentially four expressions, the classifier is multi-class; a multi-class classifier is defined relative to the simpler two-class classifier. In the embodiment of the present invention at least four expressions must be recognized, and each expression can be regarded as one class, so a multi-class classifier is needed. An SVM can realize either a linear or a non-linear classifier. In the method of the present invention both can be implemented, but recognition with a linear classifier is faster, so using a linear classifier, where it does not harm the recognition rate, is a preferred embodiment of the present invention. The multi-class classifier of the present invention can be designed in several ways: one-versus-one, one-versus-rest, decision tree, and so on. One-versus-one designs a classifier between every pair of classes: with the four classes of the present invention there are six combinations, so six classifiers are constructed. With one-versus-rest, a classifier is designed between each class and all the others, giving four classifiers for four classes. A more elaborate design can use a decision tree.

In this embodiment the one-versus-one design is used for illustration; the role of a one-versus-one classifier is simply to separate two classes. In expression recognition, designing a classifier as above for every pair of classes (six combinations for four expressions) yields six one-versus-one classifiers, and with these six classifiers the four expressions can be distinguished.
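The pair enumeration and majority vote can be sketched as below. The expression names and the stand-in pairwise classifier are illustrative assumptions; only the combinatorics (C(4,2) = 6 classifiers for 4 classes) comes from the text:

```python
from itertools import combinations

CLASSES = ["neutral", "smile", "anger", "surprise"]
PAIRS = list(combinations(CLASSES, 2))  # one classifier per pair
print(len(PAIRS))  # 6

def one_vs_one_vote(predict_pair, sample):
    """Run every pairwise classifier on the sample and return the
    class with the most votes. `predict_pair(a, b, sample)` is an
    assumed callable returning the winner of the (a, b) classifier."""
    votes = {c: 0 for c in CLASSES}
    for a, b in PAIRS:
        votes[predict_pair(a, b, sample)] += 1
    return max(votes, key=votes.get)

# toy stand-in classifier that always prefers "smile" when available
result = one_vs_one_vote(
    lambda a, b, s: "smile" if "smile" in (a, b) else a, None)
print(result)  # smile
```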

The principle is illustrated in FIG. 8, where six lines represent the six classifiers: line 11 separates the neutral and smiling expressions; line 12 separates the angry and smiling expressions; line 13 separates the surprised and smiling expressions; line 21 separates the neutral and angry expressions; line 22 separates the neutral and surprised expressions; and line 23 separates the surprised and angry expressions.

Finally, referring to FIG. 1, once the SVM classifier has been obtained, the present invention can perform real-time facial expression recognition. In the implementation of the present invention, face detection is first performed on every frame of the video, and the face is then tracked and the positions of the two eyes are extracted; if tracking succeeds, expression recognition is performed on the face in the current image and the result is given in real time. Referring to FIG. 9, the left side shows the video data input from a USB camera and the small window on the right shows the result of the facial expression recognition.

The method of the present invention can be applied in a video-based facial expression recognition device. As shown in FIG. 10, the device comprises a video data acquisition unit 1, an image processing unit 2, a face information database 3 and a facial expression recognition unit 4. The video data acquisition unit 1 collects face images from the video and transmits them to the image processing unit 2. The image processing unit 2 retrieves face information from the face information database 3; the comparison unit 121 in the image processing unit 2 compares the two images, and the AdaBoost calculation unit 123 performs calculations on the face data and transmits the result to the facial expression recognition unit 4. The facial expression recognition unit 4 recognizes the collected face images according to the recognition information stored in the face information database 3. The device further comprises a display unit 5 that displays the recognized facial expression.

The image processing unit 2 comprises a comparison unit 121, a feature generation unit 122, a calculation unit 123 and a classifier unit 124. The comparison unit 121 compares the image information of the face with the image information in the face database 3, detects the face and the two eyes, extracts the face image according to the eye positions, and transmits the face image information to the feature generation unit 122. The feature generation unit 122 locates the facial organs, generates the eigenface according to the chin of the face, and transmits the eigenface to the calculation unit 123 as a sample. The calculation unit 123 computes the Gabor features of the eigenface image, selects features with the AdaBoost algorithm, and transmits the selected features to the classifier unit 124. The classifier unit 124 constructs a support vector machine classifier from the selected features and transmits the classifier information to the facial expression recognition unit 4. The video data acquisition unit 1 further contains a video data tracking unit 111, which tracks and detects the face data in the video data, decides whether to collect it, and executes the face-tracking step of the method of the present invention.

The method of the present invention automatically extracts an accurate facial expression from video. Using the AdaBoost and ASM algorithms, the method eliminates the influence of illumination: the face image is specially processed so that the grey-level mean and variance of its left and right halves are essentially the same. Moreover, working on the video data of an ordinary USB camera, the method of the present invention provides an algorithm that detects and tracks faces automatically in real time and recognizes the common expressions of a frontal face, achieving good commercial results.

The above are merely preferred embodiments of the present invention and are not intended to limit it; any modification, equivalent replacement, etc. made within the spirit and principles of the present invention shall fall within the scope of protection of the present invention.

Claims (14)

1. A video-based facial expression recognition method, characterized by comprising the steps of: collecting facial expression image data of a face from video data input by a USB camera and preprocessing the image data; extracting, in real time, the position of the face in the preprocessed image; locating the eyes of the face in the determined image according to an eye classifier; extracting the image region containing the face according to the determined eye positions and the information of a face classifier, and performing normalization; locating the facial organs; determining the position of the chin of the face according to the organ positions, determining the face region in the image, and generating an eigenface as a classification sample; computing Gabor features of the eigenface image on the basis of the classification sample; selecting among the computed Gabor features; constructing a support vector machine classifier from the selected features; and obtaining the facial expression recognition result from the constructed classifier.

2. The video-based facial expression recognition method according to claim 1, characterized in that collecting from the data input by the USB camera comprises the following face-tracking steps: before a tracking target has been acquired, searching every frame of the image to detect whether a face is present; if one or more faces are detected in a frame, tracking the detected faces in the following two frames, detecting and verifying the faces tracked in those two frames, and judging the detection result; only after a face has been detected at the same position in three frames does the algorithm consider that a face exists at that position, whereupon the real-time face detection algorithm is executed to extract the position of the face in the image; and, if several faces are detected in the scene, selecting one of them to start tracking and continuing to track it in subsequent frames; if the similarity between the tracking results of adjacent frames is too low, or no frontal upright face has been detected for a long time in the region of a tracked target, tracking is stopped.

3. The video-based facial expression recognition method according to claim 1, characterized in that the normalization is realized by a resampling algorithm: the resampling algorithm applies scaling, rotation and translation transformations so that the detected eye positions coincide with the eye positions of the eye classifier.

4. The video-based facial expression recognition method according to claim 1, characterized in that the facial organs are located with a target extraction method, the target extraction method being an active shape model algorithm.

5. The video-based facial expression recognition method according to claim 4, characterized in that the specific steps of the active shape model algorithm are: extracting contour information of the face from the video data and building a sample unit; normalizing and aligning the samples in the sample unit and then performing a principal component analysis transform; using the grey-level information of every control point in the transformed contour information as the basis for point search; and taking the average contour computed by the principal component analysis as the initial value of the contour search and performing an iterative search to obtain the face contour.

6. The video-based facial expression recognition method according to claim 5, characterized in that the steps of the iterative search are: obtaining an initial translation value from the grey-level information, aligning the new contour obtained from the grey-level search to the average contour, and computing the alignment parameter values; computing the shape variation from the aligned data and the statistics computed by the principal component analysis; mapping the varied shape back to the position of the new contour according to the alignment parameter values to obtain the result of one search; and repeating the above search steps, iterating until convergence, to obtain the face contour.

7. The video-based facial expression recognition method according to claim 1, characterized in that the eigenface is generated by comparing the extracted face contour with the face in the face classifier and adjusting the tilt.

8. The video-based facial expression recognition method according to claim 1, characterized in that between generating the eigenface and computing the Gabor features of the eigenface image there is a further step of processing the generated eigenface: the grey levels of the left and right parts of the eigenface are normalized so that the grey-level mean and variance of the two parts are the same.

9. The video-based facial expression recognition method according to claim 8, characterized in that a grey-level transition band is set between the left and right parts of the eigenface.

10. The video-based facial expression recognition method according to claim 1, characterized in that the support vector machine classifier that is built is a multi-class classifier of the one-versus-one, one-versus-rest or decision-tree form.

11. A video-based facial expression recognition device, characterized in that it comprises a video data acquisition unit, an image processing unit, a face information database and a facial expression recognition unit; the video data acquisition unit collects face images from the video and transmits them to the image processing unit; the image processing unit retrieves face information from the face information database and compares it with the collected face images, then performs calculations on the face data and transmits the calculated data to the facial expression recognition unit; and the facial expression recognition unit recognizes the collected face images according to the recognition information stored in the face information database.

12. The video-based facial expression recognition device according to claim 11, characterized in that it further comprises a display unit that displays the recognized facial expression.

13. The video-based facial expression recognition device according to claim 11, characterized in that the image processing unit comprises a comparison unit, a feature generation unit, a calculation unit and a classifier unit; the comparison unit compares the image information of the face with the image information in the face database, detects the face and the two eyes, extracts the face image according to the eye positions, and transmits the face image information to the feature generation unit; the feature generation unit locates the facial organs, generates the eigenface according to the chin of the face, and transmits the eigenface to the calculation unit as a sample; the calculation unit computes the Gabor features of the eigenface image, selects features with the AdaBoost algorithm, and transmits the selected features to the classifier unit; and the classifier unit constructs a support vector machine classifier from the selected features and transmits the classifier information to the facial expression recognition unit.

14. The video-based facial expression recognition device according to claim 11, characterized in that the video data acquisition unit further contains a video data tracking unit, which tracks and detects the face data of the video data and decides whether to collect the input data.
CNB2005101356705A 2005-12-31 2005-12-31 Video-based facial expression recognition method and device Expired - Lifetime CN100397410C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB2005101356705A CN100397410C (en) 2005-12-31 2005-12-31 Video-based facial expression recognition method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CNB2005101356705A CN100397410C (en) 2005-12-31 2005-12-31 Video-based facial expression recognition method and device

Publications (2)

Publication Number Publication Date
CN1794265A true CN1794265A (en) 2006-06-28
CN100397410C CN100397410C (en) 2008-06-25

Family

ID=36805690

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB2005101356705A Expired - Lifetime CN100397410C (en) 2005-12-31 2005-12-31 Video-based facial expression recognition method and device

Country Status (1)

Country Link
CN (1) CN100397410C (en)

Cited By (54)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008083535A1 (en) * 2007-01-11 2008-07-17 Shanghai Isvision Technologies Co. Ltd. Method for encrypting/decrypting electronic document based on human face identification
CN100426317C (en) * 2006-09-27 2008-10-15 北京中星微电子有限公司 Multiple attitude human face detection and track system and method
CN100426318C (en) * 2006-09-28 2008-10-15 北京中星微电子有限公司 AAM-based object location method
CN100444190C (en) * 2006-10-30 2008-12-17 邹采荣 A Facial Feature Localization Method Based on Weighted Active Shape Modeling
CN100447808C (en) * 2007-01-12 2008-12-31 郑文明 Method for classification human facial expression and semantics judgement quantization method
CN100556078C (en) * 2006-11-21 2009-10-28 索尼株式会社 Camera device, image processing device, and image processing method
CN101689303A (en) * 2007-06-18 2010-03-31 佳能株式会社 Facial expression recognition apparatus and method, and image capturing apparatus
CN101175187B (en) * 2006-10-31 2010-04-21 索尼株式会社 Image storage device, imaging device and image storage method
CN101226590B (en) * 2008-01-31 2010-06-02 湖南创合世纪智能技术有限公司 Method for recognizing human face
CN101206715B (en) * 2006-12-18 2010-10-06 索尼株式会社 Facial recognition device, method, Gabor filter application device and computer program
CN101944163A (en) * 2010-09-25 2011-01-12 德信互动科技(北京)有限公司 Method for realizing expression synchronization of game character through capturing face expression
CN101285677B (en) * 2007-04-12 2011-03-23 东京毅力科创株式会社 Optical metrology using a support vector machine with simulated diffraction signal inputs
CN102004906A (en) * 2010-11-18 2011-04-06 无锡中星微电子有限公司 Face identification system and method
CN102058983A (en) * 2010-11-10 2011-05-18 无锡中星微电子有限公司 Intelligent toy based on video analysis
CN101216881B (en) * 2007-12-28 2011-07-06 北京中星微电子有限公司 A method and device for automatic image acquisition
WO2011079458A1 (en) * 2009-12-31 2011-07-07 Nokia Corporation Method and apparatus for local binary pattern based facial feature localization
CN101719223B (en) * 2009-12-29 2011-09-14 西北工业大学 Identification method for stranger facial expression in static image
CN102214299A (en) * 2011-06-21 2011-10-12 电子科技大学 Method for positioning facial features based on improved ASM (Active Shape Model) algorithm
US8085996B2 (en) 2007-06-11 2011-12-27 Sony Corporation Image processing apparatus, image display apparatus, imaging apparatus, method for image processing therefor, and program
CN102306290A (en) * 2011-10-14 2012-01-04 刘伟华 Face tracking recognition technique based on video
CN101777116B (en) * 2009-12-23 2012-07-25 中国科学院自动化研究所 Method for analyzing facial expressions on basis of motion tracking
US8233678B2 (en) 2007-08-14 2012-07-31 Sony Corporation Imaging apparatus, imaging method and computer program for detecting a facial expression from a normalized face image
CN101887513B (en) * 2009-05-12 2012-11-07 联咏科技股份有限公司 Expression detection device and expression detection method thereof
CN101337128B (en) * 2008-08-20 2012-11-28 北京中星微电子有限公司 Game control method and system based on face
US8411911B2 (en) 2008-11-28 2013-04-02 Casio Computer Co., Ltd. Image processing apparatus, image processing method, and storage medium for storing program
WO2013149556A1 (en) * 2012-04-06 2013-10-10 腾讯科技(深圳)有限公司 Method and device for automatically playing expression on virtual image
CN103400105A (en) * 2013-06-26 2013-11-20 东南大学 Method identifying non-front-side facial expression based on attitude normalization
WO2014032496A1 (en) * 2012-08-28 2014-03-06 腾讯科技(深圳)有限公司 Method, device and storage medium for locating feature points on human face
CN104575495A (en) * 2013-10-21 2015-04-29 中国科学院声学研究所 Language identification method and system adopting total variable quantity factors
CN104573617A (en) * 2013-10-28 2015-04-29 季春宏 Video shooting control method
CN104767980A (en) * 2015-04-30 2015-07-08 深圳市东方拓宇科技有限公司 Real-time emotion demonstrating method, system and device and intelligent terminal
CN104951743A (en) * 2015-03-04 2015-09-30 苏州大学 Active-shape-model-algorithm-based method for analyzing face expression
CN105187721A (en) * 2015-08-31 2015-12-23 广州市幸福网络技术有限公司 An identification camera and method for rapidly extracting portrait features
CN105404878A (en) * 2015-12-11 2016-03-16 广东欧珀移动通信有限公司 A photo classification method and device
CN105678702A (en) * 2015-12-25 2016-06-15 北京理工大学 Face image sequence generation method and device based on feature tracking
CN105917305A (en) * 2013-08-02 2016-08-31 埃莫蒂安特公司 Filter and shutter based on image emotion content
CN106127829A (en) * 2016-06-28 2016-11-16 广东欧珀移动通信有限公司 A processing method, device and terminal for augmented reality
CN106127828A (en) * 2016-06-28 2016-11-16 广东欧珀移动通信有限公司 A processing method, device and mobile terminal for augmented reality
CN106157363A (en) * 2016-06-28 2016-11-23 广东欧珀移动通信有限公司 A camera method, device and mobile terminal based on augmented reality
CN106157262A (en) * 2016-06-28 2016-11-23 广东欧珀移动通信有限公司 A processing method, device and mobile terminal for augmented reality
CN106687989A (en) * 2014-10-23 2017-05-17 英特尔公司 Method and system of facial expression recognition using linear relationships within landmark subsets
CN107451560A (en) * 2017-07-31 2017-12-08 广东欧珀移动通信有限公司 User expression recognition method, device and terminal
CN107592507A (en) * 2017-09-29 2018-01-16 深圳市置辰海信科技有限公司 The method of automatic trace trap high-resolution front face photo
CN107729882A (en) * 2017-11-19 2018-02-23 济源维恩科技开发有限公司 Emotion identification decision method based on image recognition
CN108268838A (en) * 2018-01-02 2018-07-10 中国科学院福建物质结构研究所 Facial expression recognizing method and facial expression recognition system
CN108416291A (en) * 2018-03-06 2018-08-17 广州逗号智能零售有限公司 Face datection recognition methods, device and system
CN108446672A (en) * 2018-04-20 2018-08-24 武汉大学 A kind of face alignment method based on the estimation of facial contours from thick to thin
CN108583569A (en) * 2018-03-26 2018-09-28 刘福珍 A kind of collision warning device based on double moving average algorithm
CN108875519A (en) * 2017-12-19 2018-11-23 北京旷视科技有限公司 Method for checking object, device and system and storage medium
CN109727303A (en) * 2018-12-29 2019-05-07 广州华多网络科技有限公司 Video display method, system, computer equipment, storage medium and terminal
CN105095827B (en) * 2014-04-18 2019-05-17 汉王科技股份有限公司 Facial expression recognition device and method
CN110728252A (en) * 2019-10-22 2020-01-24 山西省信息产业技术研究院有限公司 Face detection method applied to regional personnel motion trail monitoring
WO2021248814A1 (en) * 2020-06-13 2021-12-16 德派(嘉兴)医疗器械有限公司 Robust visual supervision method and apparatus for home learning state of child
CN115205937A (en) * 2022-07-05 2022-10-18 上海云思智慧信息技术有限公司 Construction of face tracking network/face tracking method, system, medium and terminal

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US12437202B2 (en) 2020-10-30 2025-10-07 Microsoft Technology Licensing, Llc Human characteristic normalization with an autoencoder

Cited By (75)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100426317C (en) * 2006-09-27 2008-10-15 北京中星微电子有限公司 Multiple attitude human face detection and track system and method
CN100426318C (en) * 2006-09-28 2008-10-15 北京中星微电子有限公司 AAM-based object location method
CN100444190C (en) * 2006-10-30 2008-12-17 邹采荣 A Facial Feature Localization Method Based on Weighted Active Shape Modeling
CN101175187B (en) * 2006-10-31 2010-04-21 索尼株式会社 Image storage device, imaging device and image storage method
CN100556078C (en) * 2006-11-21 2009-10-28 索尼株式会社 Camera device, image processing device, and image processing method
US8385607B2 (en) 2006-11-21 2013-02-26 Sony Corporation Imaging apparatus, image processing apparatus, image processing method and computer program
CN101206715B (en) * 2006-12-18 2010-10-06 索尼株式会社 Facial recognition device, method, Gabor filter application device and computer program
WO2008083535A1 (en) * 2007-01-11 2008-07-17 Shanghai Isvision Technologies Co. Ltd. Method for encrypting/decrypting electronic document based on human face identification
CN100447808C (en) * 2007-01-12 2008-12-31 郑文明 Method for classifying human facial expressions and quantifying semantic judgments
CN101285677B (en) * 2007-04-12 2011-03-23 东京毅力科创株式会社 Optical metrology using a support vector machine with simulated diffraction signal inputs
US8085996B2 (en) 2007-06-11 2011-12-27 Sony Corporation Image processing apparatus, image display apparatus, imaging apparatus, method for image processing therefor, and program
CN101689303A (en) * 2007-06-18 2010-03-31 佳能株式会社 Facial expression recognition apparatus and method, and image capturing apparatus
US8233678B2 (en) 2007-08-14 2012-07-31 Sony Corporation Imaging apparatus, imaging method and computer program for detecting a facial expression from a normalized face image
CN101216881B (en) * 2007-12-28 2011-07-06 北京中星微电子有限公司 A method and device for automatic image acquisition
CN101226590B (en) * 2008-01-31 2010-06-02 湖南创合世纪智能技术有限公司 Method for recognizing human face
CN101337128B (en) * 2008-08-20 2012-11-28 北京中星微电子有限公司 Game control method and system based on face
US8411911B2 (en) 2008-11-28 2013-04-02 Casio Computer Co., Ltd. Image processing apparatus, image processing method, and storage medium for storing program
CN101887513B (en) * 2009-05-12 2012-11-07 联咏科技股份有限公司 Expression detection device and expression detection method thereof
CN101777116B (en) * 2009-12-23 2012-07-25 中国科学院自动化研究所 Method for analyzing facial expressions on basis of motion tracking
CN101719223B (en) * 2009-12-29 2011-09-14 西北工业大学 Identification method for stranger facial expression in static image
CN102640168B (en) * 2009-12-31 2016-08-03 诺基亚技术有限公司 Method and apparatus for facial Feature Localization based on local binary pattern
WO2011079458A1 (en) * 2009-12-31 2011-07-07 Nokia Corporation Method and apparatus for local binary pattern based facial feature localization
CN102640168A (en) * 2009-12-31 2012-08-15 诺基亚公司 Method and apparatus for local binary pattern based facial feature localization
US8917911B2 (en) 2009-12-31 2014-12-23 Nokia Corporation Method and apparatus for local binary pattern based facial feature localization
CN101944163A (en) * 2010-09-25 2011-01-12 德信互动科技(北京)有限公司 Method for realizing expression synchronization of game character through capturing face expression
CN102058983B (en) * 2010-11-10 2012-08-29 无锡中星微电子有限公司 Intelligent toy based on video analysis
CN102058983A (en) * 2010-11-10 2011-05-18 无锡中星微电子有限公司 Intelligent toy based on video analysis
CN102004906A (en) * 2010-11-18 2011-04-06 无锡中星微电子有限公司 Face identification system and method
CN102214299A (en) * 2011-06-21 2011-10-12 电子科技大学 Method for positioning facial features based on improved ASM (Active Shape Model) algorithm
CN102306290A (en) * 2011-10-14 2012-01-04 刘伟华 Face tracking recognition technique based on video
CN102306290B (en) * 2011-10-14 2013-10-30 刘伟华 Face tracking recognition technique based on video
CN103366782B (en) * 2012-04-06 2014-09-10 腾讯科技(深圳)有限公司 Method and device for automatically playing expressions on a virtual image
CN103366782A (en) * 2012-04-06 2013-10-23 腾讯科技(深圳)有限公司 Method and device for automatically playing expressions on a virtual image
WO2013149556A1 (en) * 2012-04-06 2013-10-10 腾讯科技(深圳)有限公司 Method and device for automatically playing expressions on a virtual image
US9457265B2 (en) 2012-04-06 2016-10-04 Tencent Technology (Shenzhen) Company Limited Method and device for automatically playing expressions on a virtual image
WO2014032496A1 (en) * 2012-08-28 2014-03-06 腾讯科技(深圳)有限公司 Method, device and storage medium for locating feature points on human face
CN103400105A (en) * 2013-06-26 2013-11-20 东南大学 Method for identifying non-frontal facial expressions based on pose normalization
CN103400105B (en) * 2013-06-26 2017-05-24 东南大学 Method for identifying non-frontal facial expressions based on pose normalization
CN105917305B (en) * 2013-08-02 2020-06-26 埃莫蒂安特公司 Filtering and Shutter Shooting Based on Image Emotional Content
CN105917305A (en) * 2013-08-02 2016-08-31 埃莫蒂安特公司 Filter and shutter based on image emotion content
CN104575495A (en) * 2013-10-21 2015-04-29 中国科学院声学研究所 Language identification method and system using total variability factors
CN104573617A (en) * 2013-10-28 2015-04-29 季春宏 Video shooting control method
CN105095827B (en) * 2014-04-18 2019-05-17 汉王科技股份有限公司 Facial expression recognition device and method
CN106687989A (en) * 2014-10-23 2017-05-17 英特尔公司 Method and system of facial expression recognition using linear relationships within landmark subsets
CN106687989B (en) * 2014-10-23 2021-06-29 英特尔公司 Method, system, readable medium and device for facial expression recognition
CN104951743A (en) * 2015-03-04 2015-09-30 苏州大学 Active-shape-model-algorithm-based method for analyzing face expression
CN104767980A (en) * 2015-04-30 2015-07-08 深圳市东方拓宇科技有限公司 Real-time emotion demonstrating method, system and device and intelligent terminal
CN104767980B (en) * 2015-04-30 2018-05-04 深圳市东方拓宇科技有限公司 A kind of real-time emotion demenstration method, system, device and intelligent terminal
CN105187721B (en) * 2015-08-31 2018-09-21 广州市幸福网络技术有限公司 Identification camera and method for rapidly extracting portrait features
CN105187721A (en) * 2015-08-31 2015-12-23 广州市幸福网络技术有限公司 An identification camera and method for rapidly extracting portrait features
CN105404878A (en) * 2015-12-11 2016-03-16 广东欧珀移动通信有限公司 A photo classification method and device
CN105678702B (en) * 2015-12-25 2018-10-19 北京理工大学 Face image sequence generation method and device based on feature tracking
CN105678702A (en) * 2015-12-25 2016-06-15 北京理工大学 Face image sequence generation method and device based on feature tracking
CN106127829B (en) * 2016-06-28 2020-06-30 Oppo广东移动通信有限公司 Augmented reality processing method and device and terminal
CN106127829A (en) * 2016-06-28 2016-11-16 广东欧珀移动通信有限公司 A processing method, device and terminal for augmented reality
CN106127828A (en) * 2016-06-28 2016-11-16 广东欧珀移动通信有限公司 A processing method, device and mobile terminal for augmented reality
CN106157262B (en) * 2016-06-28 2020-04-17 Oppo广东移动通信有限公司 Augmented reality processing method and device and mobile terminal
CN106157262A (en) * 2016-06-28 2016-11-23 广东欧珀移动通信有限公司 A processing method, device and mobile terminal for augmented reality
CN106157363A (en) * 2016-06-28 2016-11-23 广东欧珀移动通信有限公司 A camera method, device and mobile terminal based on augmented reality
CN107451560A (en) * 2017-07-31 2017-12-08 广东欧珀移动通信有限公司 User expression recognition method, device and terminal
CN107592507A (en) * 2017-09-29 2018-01-16 深圳市置辰海信科技有限公司 Method for automatically tracking and capturing high-resolution frontal face photos
CN107729882A (en) * 2017-11-19 2018-02-23 济源维恩科技开发有限公司 Emotion identification decision method based on image recognition
CN108875519A (en) * 2017-12-19 2018-11-23 北京旷视科技有限公司 Method for checking object, device and system and storage medium
CN108875519B (en) * 2017-12-19 2023-05-26 北京旷视科技有限公司 Object detection method, device and system and storage medium
CN108268838B (en) * 2018-01-02 2020-12-29 中国科学院福建物质结构研究所 Facial expression recognition method and facial expression recognition system
CN108268838A (en) * 2018-01-02 2018-07-10 中国科学院福建物质结构研究所 Facial expression recognizing method and facial expression recognition system
CN108416291A (en) * 2018-03-06 2018-08-17 广州逗号智能零售有限公司 Face datection recognition methods, device and system
CN108583569A (en) * 2018-03-26 2018-09-28 刘福珍 Collision warning device based on a double moving average algorithm
CN108446672A (en) * 2018-04-20 2018-08-24 武汉大学 Face alignment method based on coarse-to-fine facial contour estimation
CN108446672B (en) * 2018-04-20 2021-12-17 武汉大学 Face alignment method based on coarse-to-fine face shape estimation
CN109727303A (en) * 2018-12-29 2019-05-07 广州华多网络科技有限公司 Video display method, system, computer equipment, storage medium and terminal
CN110728252A (en) * 2019-10-22 2020-01-24 山西省信息产业技术研究院有限公司 Face detection method applied to regional personnel motion trail monitoring
CN110728252B (en) * 2019-10-22 2023-08-04 山西省信息产业技术研究院有限公司 Face detection method applied to regional personnel motion trail monitoring
WO2021248814A1 (en) * 2020-06-13 2021-12-16 德派(嘉兴)医疗器械有限公司 Robust visual supervision method and apparatus for home learning state of child
CN115205937A (en) * 2022-07-05 2022-10-18 上海云思智慧信息技术有限公司 Construction of face tracking network/face tracking method, system, medium and terminal

Also Published As

Publication number Publication date
CN100397410C (en) 2008-06-25

Similar Documents

Publication Publication Date Title
CN1794265A (en) Method and device for distinguishing face expression based on video frequency
CN108229362B (en) Binocular face recognition living body detection method based on access control system
CN101059836A (en) Human eye positioning and human eye state recognition method
Shao et al. Deep convolutional dynamic texture learning with adaptive channel-discriminability for 3D mask face anti-spoofing
Gu et al. Feature points extraction from faces
KR102174595B1 (en) System and method for identifying faces in unconstrained media
CN107316333B (en) A method for automatically generating Japanese cartoon portraits
CN105893946B (en) A detection method for frontal face images
CN1977286A (en) Object recognition method and apparatus therefor
CN105956552B (en) Face blacklist monitoring method
CN1794264A (en) Method and system for real-time detection and continuous tracking of human faces in video sequences
CN101055618A (en) Palm grain identification method based on direction character
CN1503194A (en) Identification method using body information to assist face information
CN111126240A (en) A three-channel feature fusion face recognition method
CN1801181A (en) Robot capable of automatically recognizing face and vehicle license plate
CN105868716A (en) Method for human face recognition based on face geometrical features
CN107330371A (en) Method, device and storage device for acquiring facial expressions of a 3D face model
CN103440476A (en) A pupil location method in face video
CN102971768A (en) State-of-posture estimation device and state-of-posture estimation method
CN109325462B (en) Face recognition living body detection method and device based on iris
CN1885310A (en) Human face model training module and method, human face real-time certification system and method
CN1932847A (en) Method for detecting human faces in color images against complex backgrounds
CN100345153C (en) Face image identification method based on face geometric size normalization
CN103218615B (en) Face judgment method
CN1710593A (en) A Hand Feature Fusion Authentication Method Based on Feature Relationship Measurement

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C41 Transfer of patent application or patent right or utility model
TR01 Transfer of patent right

Effective date of registration: 20160516

Address after: Room 105-478, No. 6 Baohua Road, Hengqin, Zhuhai, Guangdong 519031

Patentee after: Guangdong Zhongxing Electronics Co.,Ltd.

Address before: 15th Floor, Nanjing Ning Building, No. 35 Xueyuan Road, Haidian District, Beijing 100083

Patentee before: VIMICRO Corp.

CX01 Expiry of patent term
CX01 Expiry of patent term

Granted publication date: 20080625