
CN106295566A - Facial expression recognizing method and device - Google Patents


Info

Publication number
CN106295566A
CN106295566A
Authority
CN
China
Prior art keywords
expression recognition
image
face area
key point
facial expression
Prior art date
Legal status
Granted
Application number
CN201610653790.2A
Other languages
Chinese (zh)
Other versions
CN106295566B (en)
Inventor
杨松
张旭华
王百超
Current Assignee
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd filed Critical Beijing Xiaomi Mobile Software Co Ltd
Priority to CN201610653790.2A priority Critical patent/CN106295566B/en
Publication of CN106295566A publication Critical patent/CN106295566A/en
Application granted granted Critical
Publication of CN106295566B publication Critical patent/CN106295566B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174: Facial expression recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The present disclosure relates to a facial expression recognition method and device, belonging to the technical field of image recognition. The method includes: detecting a face region from an image to be recognized; obtaining key points in the face region; extracting a partial image from the face region according to the key points; and recognizing the partial image with a trained expression recognition model to obtain an expression recognition result. By using the trained expression recognition model for both feature extraction at the facial key points and expression discrimination, the present disclosure merges two separate steps into one, thereby reducing cumulative error and improving the accuracy of facial expression recognition. Moreover, because only the feature information of the partial images at the facial key points is extracted, rather than the global feature information of the entire face region, features reflecting the expression state can be extracted more accurately and effectively, further improving the accuracy of facial expression recognition.

Description

Facial expression recognition method and device

Technical Field

The present disclosure relates to the technical field of image recognition, and in particular to a facial expression recognition method and device.

Background

Facial expression recognition refers to identifying the expression state of a human face from a given face image, for example happy, sad, surprised, fearful, disgusted, or angry. Facial expression recognition is now widely applied in fields such as psychology, neuroscience, engineering, and computer science.

In the related art, facial expression recognition involves two main steps. First, a face region is detected from the image to be recognized and facial expression features are extracted from it, using a feature extraction algorithm such as HOG (Histogram of Oriented Gradients), LBP (Local Binary Patterns), or Gabor filters. Second, the expression is classified based on those features to obtain an expression recognition result, using a classification algorithm such as AdaBoost, an SVM (Support Vector Machine), or a random forest.
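To make the first step concrete, the basic LBP operator named above can be sketched in a few lines of pure Python. This is a minimal sketch for illustration, not the patent's implementation: each interior pixel is compared with its 8 neighbors, the comparison bits form an 8-bit code, and a histogram of the codes serves as a texture feature.

```python
def lbp_code(img, y, x):
    """8-bit Local Binary Pattern code for the pixel at (y, x).

    Each of the 8 neighbors (clockwise from top-left) contributes one
    bit: 1 if the neighbor is >= the center pixel, else 0.
    """
    center = img[y][x]
    neighbors = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                 (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dy, dx) in enumerate(neighbors):
        if img[y + dy][x + dx] >= center:
            code |= 1 << bit
    return code

def lbp_histogram(img):
    """256-bin histogram of LBP codes over all interior pixels."""
    h, w = len(img), len(img[0])
    hist = [0] * 256
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            hist[lbp_code(img, y, x)] += 1
    return hist
```

In the related-art pipeline, histograms of this kind (typically computed over sub-windows) would be fed to an AdaBoost cascade or SVM; the disclosure replaces this hand-crafted two-stage pipeline with a single learned model.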

Summary

Embodiments of the present disclosure provide a facial expression recognition method and device. The technical solution is as follows.

According to a first aspect of the embodiments of the present disclosure, a facial expression recognition method is provided, the method including:

detecting a face region from an image to be recognized;

obtaining key points in the face region;

extracting a partial image from the face region according to the key points;

recognizing the partial image with a trained expression recognition model to obtain an expression recognition result.

Optionally, extracting the partial image from the face region according to the key points includes:

obtaining an image patch around each key point;

superimposing or splicing the image patches in a preset order to obtain the partial image.

Optionally, obtaining the image patch around each key point includes:

for each key point, cropping an image patch of a predetermined size centered on the key point.

Optionally, recognizing the partial image with the trained expression recognition model to obtain the expression recognition result includes:

extracting feature information of the partial image with the trained expression recognition model;

determining the expression recognition result with the expression recognition model according to the feature information.

Optionally, obtaining the key points in the face region includes:

scaling the face region to a target size;

locating the key points in the scaled face region.

Optionally, the expression recognition model is a convolutional neural network model.

According to a second aspect of the embodiments of the present disclosure, a facial expression recognition device is provided, the device including:

a face detection module configured to detect a face region from an image to be recognized;

a key point acquisition module configured to obtain key points in the face region;

an image extraction module configured to extract a partial image from the face region according to the key points;

an expression recognition module configured to recognize the partial image with a trained expression recognition model to obtain an expression recognition result.

Optionally, the image extraction module includes:

an image patch acquisition submodule configured to obtain an image patch around each key point;

an image patch processing submodule configured to superimpose or splice the image patches in a preset order to obtain the partial image.

Optionally, the image patch acquisition submodule is configured to, for each key point, crop an image patch of a predetermined size centered on the key point.

Optionally, the expression recognition module includes:

a feature extraction submodule configured to extract feature information of the partial image with the trained expression recognition model;

a recognition determination submodule configured to determine the expression recognition result with the expression recognition model according to the feature information.

Optionally, the key point acquisition module includes:

a face scaling submodule configured to scale the face region to a target size;

a key point localization submodule configured to locate the key points in the scaled face region.

Optionally, the expression recognition model is a convolutional neural network model.

According to a third aspect of the embodiments of the present disclosure, a facial expression recognition device is provided, the device including:

a processor; and

a memory for storing instructions executable by the processor;

wherein the processor is configured to:

detect a face region from an image to be recognized;

obtain key points in the face region;

extract a partial image from the face region according to the key points;

recognize the partial image with a trained expression recognition model to obtain an expression recognition result.

The technical solutions provided by the embodiments of the present disclosure may include the following beneficial effects.

By detecting the face region in the image to be recognized, obtaining key points in the face region, extracting a partial image from the face region according to the key points, and recognizing the partial image with a trained expression recognition model, an expression recognition result is obtained. This addresses a problem in the related art: because feature extraction and expression discrimination were two separate steps handled by two different algorithms, errors accumulated and degraded the accuracy of facial expression recognition. Using the trained expression recognition model for both feature extraction at the facial key points and expression discrimination merges the two separate steps into one, reducing cumulative error and improving accuracy. Moreover, because only the feature information of the partial images at the facial key points is extracted, rather than the global feature information of the entire face region, features reflecting the expression state can be extracted more accurately and effectively, further improving the accuracy of facial expression recognition.

It should be understood that the foregoing general description and the following detailed description are exemplary and explanatory only, and do not limit the present disclosure.

Brief Description of the Drawings

The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain its principles.

Fig. 1 is a flowchart of a facial expression recognition method according to an exemplary embodiment;

Fig. 2A is a flowchart of a facial expression recognition method according to another exemplary embodiment;

Fig. 2B is a schematic diagram of the key point localization involved in the embodiment shown in Fig. 2A;

Fig. 2C is a schematic structural diagram of an exemplary convolutional neural network;

Fig. 3 is a block diagram of a facial expression recognition device according to an exemplary embodiment;

Fig. 4 is a block diagram of a facial expression recognition device according to another exemplary embodiment;

Fig. 5 is a block diagram of a device according to an exemplary embodiment.

Detailed Description

Exemplary embodiments will now be described in detail, with examples illustrated in the accompanying drawings. When the following description refers to the drawings, the same numerals in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure; rather, they are merely examples of devices and methods consistent with some aspects of the present disclosure as recited in the appended claims.

In the related art, feature extraction and expression discrimination are two separate steps handled by two different algorithms, so errors accumulate and degrade the accuracy of facial expression recognition. The technical solution provided by the embodiments of the present disclosure uses a trained expression recognition model for both feature extraction at the facial key points and expression discrimination, merging the two separate steps into one, thereby reducing cumulative error and improving the accuracy of facial expression recognition. Moreover, because only the feature information of the partial images at the facial key points is extracted, rather than the global feature information of the entire face region, features reflecting the expression state can be extracted more accurately and effectively, further improving the accuracy of facial expression recognition.

In the method provided by the embodiments of the present disclosure, each step may be executed by an electronic device with image processing capability, such as a personal computer, a smartphone, a tablet computer, or a server. For ease of description, the following method embodiments assume that each step is executed by the electronic device.

Fig. 1 is a flowchart of a facial expression recognition method according to an exemplary embodiment. The method may include the following steps.

In step 101, a face region is detected from the image to be recognized.

In step 102, key points in the face region are obtained.

Key points, also called feature points, facial key points, or facial feature points, are positions within the face region that reflect the expression state, including but not limited to the eyes (e.g., the eye corners, eyeball centers, and outer eye corners), the nose (e.g., the nose tip and nostril wings), the mouth (e.g., the mouth corners, lip corners, and lip edges), the chin, and the eyebrow corners.

In step 103, a partial image is extracted from the face region according to the key points.

In step 104, the partial image is recognized with a trained expression recognition model to obtain an expression recognition result.

In summary, the method provided in this embodiment addresses a problem in the related art: because feature extraction and expression discrimination were two separate steps handled by two different algorithms, errors accumulated and degraded the accuracy of facial expression recognition. Using the trained expression recognition model for both feature extraction at the facial key points and expression discrimination merges the two separate steps into one, reducing cumulative error and improving accuracy. Moreover, because only the feature information of the partial images at the facial key points is extracted, rather than the global feature information of the entire face region, features reflecting the expression state can be extracted more accurately and effectively, further improving the accuracy of facial expression recognition.

Fig. 2A is a flowchart of a facial expression recognition method according to another exemplary embodiment. The method may include the following steps.

In step 201, a face region is detected from the image to be recognized.

The electronic device detects the face region from the image to be recognized using a face detection algorithm; this embodiment does not limit the specific algorithm. For example, the face detection algorithm may combine the LBP algorithm with the AdaBoost algorithm: the LBP algorithm extracts image features from the image to be recognized, and an AdaBoost cascade classifier determines the face region from those features. The image to be recognized may be any image containing a human face.

In step 202, the face region is scaled to a target size.

Because face regions in different images vary in size, the extracted face region is scaled to a fixed target size to ensure the accuracy of subsequent feature point localization. This embodiment does not limit the target size, which may be preset according to the actual situation; for example, the target size may be 96×96 pixels.
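The scaling step can be illustrated with a minimal nearest-neighbor resize in pure Python. This is only a sketch: a production system would normally use bilinear or bicubic interpolation from an image library, and the 96×96 target below simply follows the example in the text.

```python
def resize_nearest(img, target_h, target_w):
    """Resize a 2-D image (list of rows) to target_h x target_w
    using nearest-neighbor sampling."""
    src_h, src_w = len(img), len(img[0])
    out = []
    for ty in range(target_h):
        sy = min(src_h - 1, ty * src_h // target_h)  # source row
        row = []
        for tx in range(target_w):
            sx = min(src_w - 1, tx * src_w // target_w)  # source col
            row.append(img[sy][sx])
        out.append(row)
    return out

# e.g. scale an arbitrary face crop to the fixed 96x96 target size
face = [[0] * 120 for _ in range(80)]
scaled = resize_nearest(face, 96, 96)
```

After this step, every face region entering key point localization has the same fixed dimensions, so patch sizes and key point coordinates are comparable across images.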

In step 203, the key points are located in the scaled face region.

The electronic device locates the key points in the scaled face region using a facial key point localization algorithm; this embodiment does not limit the specific algorithm. For example, the localization algorithm may be the SDM (Supervised Descent Method) algorithm.

In one example, as shown in Fig. 2B, the SDM algorithm locates multiple key points in the scaled face region 21, each shown as a small black dot in the figure. The positions and number of the key points to be located may be preset, for example 20 key points.

In step 204, an image patch around each key point is obtained.

An image patch around a key point is a patch of a predetermined size that contains the key point. In one example, for each key point, a patch of a predetermined size centered on the key point is cropped. This embodiment does not limit the predetermined size, which may be preset according to the actual situation; for example, the predetermined size may be 32×32 pixels.
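Cropping a fixed-size patch centered on a key point can be sketched as follows. The border policy (clamping the patch so it stays fully inside the image) is an assumption for the sketch; the text does not specify how patches near the image edge are handled.

```python
def crop_patch(img, cy, cx, size=32):
    """Crop a size x size patch centered on (cy, cx).

    The patch origin is clamped so the patch always lies fully inside
    the image (an assumed border policy, not stated in the text).
    """
    h, w = len(img), len(img[0])
    top = min(max(cy - size // 2, 0), h - size)
    left = min(max(cx - size // 2, 0), w - size)
    return [row[left:left + size] for row in img[top:top + size]]
```

Applied to each of the 20 key points of the example, this yields 20 patches of 32×32 pixels, which the next step assembles into the model input.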

Optionally, before key point localization, the face region may be converted into a grayscale image and the key points located in the grayscale image. Correspondingly, the cropped patches around the key points are also grayscale images.
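The grayscale conversion mentioned here is commonly done with the ITU-R BT.601 luminance weights. The patent does not specify a formula; this is a standard choice shown only for illustration.

```python
def to_gray(rgb_img):
    """Convert an RGB image (rows of (r, g, b) tuples) to grayscale
    using the BT.601 luma weights: 0.299 R + 0.587 G + 0.114 B."""
    return [[int(round(0.299 * r + 0.587 * g + 0.114 * b))
             for (r, g, b) in row]
            for row in rgb_img]
```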

In step 205, the image patches are superimposed or spliced in a preset order to obtain the partial image.

Before expression recognition, the image patches need to be integrated and fed to the expression recognition model as a whole. In one possible implementation, the patches are superimposed (stacked) in a preset order to form the model input; in another, they are spliced (tiled) in a preset order to form the model input.

Because multiple key points are usually obtained and different key points correspond to different facial positions, the patches must be superimposed or spliced in a preset order. Taking key points covering the eyes, nose, mouth, eyebrow corners, and chin as an example, the preset order may be: eyes, nose, mouth, chin, eyebrow corners. That is, whether during training of the expression recognition model or during expression recognition with the trained model, the patches cropped from any given image are superimposed or spliced in the same preset order, ensuring that the model's input data has a consistent structure.
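The two integration options, stacking patches along a leading channel axis or tiling them side by side, can be sketched in pure Python. The named key point groups and the fixed ordering below are assumptions that follow the example order given in the text.

```python
# Preset order from the example in the text: eyes, nose, mouth,
# chin, eyebrow corners (names here are illustrative).
PRESET_ORDER = ["eyes", "nose", "mouth", "chin", "brows"]

def stack_patches(patches_by_name, order=PRESET_ORDER):
    """Superimpose: stack the named s x s patches along a leading
    channel axis, always in the same preset order, giving an
    N x s x s input volume."""
    return [patches_by_name[name] for name in order]

def splice_patches(patches_by_name, order=PRESET_ORDER):
    """Splice: tile the named s x s patches horizontally in the same
    preset order, giving one s x (N * s) image."""
    ordered = [patches_by_name[name] for name in order]
    s = len(ordered[0])
    return [sum((p[row] for p in ordered), []) for row in range(s)]
```

Using a single fixed order for both training and inference is what keeps the model's input structure consistent, as the paragraph above requires.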

In step 206, the partial image is recognized with the trained expression recognition model to obtain an expression recognition result.

The electronic device extracts feature information of the partial image with the trained expression recognition model, and then determines the expression recognition result from that feature information with the same model. In this embodiment, both feature extraction and expression discrimination are performed by the expression recognition model; there is no need to complete them in two steps with two different algorithms. Merging the two separate steps into one reduces cumulative error and improves the accuracy of facial expression recognition.

In one example, the expression recognition model is a convolutional neural network (CNN) model, also called a deep convolutional neural network model. A CNN has strong feature extraction capability, so using one for both feature extraction and expression discrimination yields high accuracy. The CNN includes an input layer, at least one convolutional layer, at least one fully connected layer, and an output layer. The input to the input layer is the partial image obtained by superimposing or splicing the patches in order; the output of the output layer is a vector of length n whose entries represent the probabilities of n expressions, where n is an integer greater than 1. The convolutional layers perform feature extraction; the fully connected layers combine and abstract the features extracted by the convolutional layers into data suitable for classification by the output layer.
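The output layer described above turns the network's n raw scores into a length-n probability vector. A minimal sketch of the standard (numerically stable) softmax that such an output layer computes:

```python
import math

def softmax(logits):
    """Map n raw scores to n probabilities that sum to 1.

    Subtracting the maximum first is the usual numerically stable
    formulation; it does not change the result.
    """
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

# e.g. raw scores for n = 6 expressions (happy, sad, surprised,
# fearful, disgusted, angry) -- illustrative values only
probs = softmax([2.0, 0.1, 0.1, 0.1, 0.1, 0.1])
```

The index of the largest entry then gives the recognized expression.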

Referring to Fig. 2C, which shows an exemplary convolutional neural network structure: the network includes one input layer, three convolutional layers (first convolutional layer C1, second convolutional layer C2, and third convolutional layer C3), two fully connected layers (first fully connected layer FC4 and second fully connected layer FC5), and one output layer (a Softmax layer). Assuming 20 key points are extracted from the face region and each cropped patch is 32×32 pixels, the input to the input layer is a 20×32×32 stacked or spliced image. The three convolutional layers C1, C2, and C3 have 36, 64, and 32 convolution kernels, respectively. C1 has a stride of 2, so after C1 the height and width of the image are halved; C2 and C3 each have a stride of 1. Note that the network shown in Fig. 2C is exemplary and explanatory only and does not limit the present disclosure. In general, the more layers a convolutional neural network has, the better its results but the longer its computation time; in practice, a network with an appropriate number of layers can be designed to balance the requirements for recognition accuracy and efficiency.
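The spatial dimensions in Fig. 2C can be checked with the standard convolution output-size formula out = floor((in - k + 2p) / s) + 1. The kernel size and padding are not given in the text; the sketch below assumes 3×3 kernels with padding 1, one common choice that reproduces the halving described for C1.

```python
def conv_out(size, kernel=3, stride=1, padding=1):
    """Spatial output size of a convolution:
    floor((size - kernel + 2 * padding) / stride) + 1."""
    return (size - kernel + 2 * padding) // stride + 1

# Input: 20 stacked 32x32 patches -> a 20 x 32 x 32 volume.
s1 = conv_out(32, stride=2)  # C1, stride 2: 32 -> 16 (halved)
s2 = conv_out(s1, stride=1)  # C2, stride 1: 16 -> 16
s3 = conv_out(s2, stride=1)  # C3, stride 1: 16 -> 16

# Channel counts follow the kernel counts given in the text.
shapes = [(36, s1, s1), (64, s2, s2), (32, s3, s3)]
```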

In summary, the method provided in this embodiment uses the trained expression recognition model for both feature extraction at the facial key points and expression discrimination, merging the two separate steps into one, thereby reducing cumulative error and improving the accuracy of facial expression recognition. Moreover, because only the feature information of the partial images at the facial key points is extracted, rather than the global feature information of the entire face region, features reflecting the expression state can be extracted more accurately and effectively, further improving the accuracy of facial expression recognition.

The following are device embodiments of the present disclosure, which may be used to execute the method embodiments of the present disclosure. For details not disclosed in the device embodiments, please refer to the method embodiments of the present disclosure.

Fig. 3 is a block diagram of a facial expression recognition device according to an exemplary embodiment. The device implements the functions of the method examples above; these functions may be realized in hardware, or by hardware executing corresponding software. The device may include a face detection module 310, a key point acquisition module 320, an image extraction module 330, and an expression recognition module 340.

The face detection module 310 is configured to detect a face region from an image to be recognized.

The key point acquisition module 320 is configured to obtain key points in the face region.

The image extraction module 330 is configured to extract a partial image from the face region according to the key points.

The expression recognition module 340 is configured to recognize the partial image with a trained expression recognition model to obtain an expression recognition result.

In summary, the device provided in this embodiment uses the trained expression recognition model for both feature extraction at the facial key points and expression discrimination, merging the two separate steps into one, thereby reducing cumulative error and improving the accuracy of facial expression recognition. Moreover, because only the feature information of the partial images at the facial key points is extracted, rather than the global feature information of the entire face region, features reflecting the expression state can be extracted more accurately and effectively, further improving the accuracy of facial expression recognition.

Fig. 4 is a block diagram of a facial expression recognition device according to another exemplary embodiment. The device implements the functions of the method examples above; those functions may be realized by hardware, or by hardware executing corresponding software. The device may include: a face detection module 310, a key point acquisition module 320, an image extraction module 330, and an expression recognition module 340.

The face detection module 310 is configured to detect a face area in an image to be recognized.

The key point acquisition module 320 is configured to acquire key points in the face area.

The image extraction module 330 is configured to extract a local image from the face area according to the key points.

The expression recognition module 340 is configured to recognize the local image using a trained expression recognition model to obtain an expression recognition result.

In one example, the image extraction module 330 includes an image block acquisition submodule 330a and an image block processing submodule 330b.

The image block acquisition submodule 330a is configured to acquire an image block around each key point.

The image block processing submodule 330b is configured to superimpose or stitch the image blocks in a preset order to obtain the local image.

In one example, the image block acquisition submodule 330a is configured to, for each key point, crop an image block of a predetermined size centered on that key point.
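Cropping a fixed-size block around each key point and then combining the blocks in a preset order could look like the sketch below. The patch size, the keypoint coordinates, and the two combination layouts (side-by-side stitching vs. stacking as channels) are illustrative assumptions:

```python
import numpy as np

def crop_patch(face, kp, size=16):
    """Crop a size x size block centred on keypoint kp = (x, y)."""
    half = size // 2
    x, y = kp
    return face[y - half:y + half, x - half:x + half]

def combine_patches(patches, mode="stitch"):
    """'stitch': place blocks side by side; 'stack': superimpose as channels."""
    if mode == "stitch":
        return np.concatenate(patches, axis=1)   # H x (N*W)
    return np.stack(patches, axis=-1)            # H x W x N

face = np.arange(64 * 64, dtype=np.float32).reshape(64, 64)
keypoints = [(20, 20), (44, 20), (32, 40)]       # fixed preset order
patches = [crop_patch(face, kp) for kp in keypoints]

stitched = combine_patches(patches, "stitch")    # (16, 48) local image
stacked = combine_patches(patches, "stack")      # (16, 16, 3) local image
print(stitched.shape, stacked.shape)
```

Keeping the preset order fixed matters: the downstream model sees each keypoint's patch at a consistent position (or channel) across all inputs.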

In one example, the expression recognition module 340 includes a feature extraction submodule 340a and a recognition determination submodule 340b.

The feature extraction submodule 340a is configured to extract feature information of the local image using the trained expression recognition model.

The recognition determination submodule 340b is configured to determine the expression recognition result from the feature information using the expression recognition model.

In one example, the key point acquisition module 320 includes a face scaling submodule 320a and a key point positioning submodule 320b.

The face scaling submodule 320a is configured to scale the face area to a target size.

The key point positioning submodule 320b is configured to locate the key points in the scaled face area.
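Scaling the face area to a fixed target size before landmark detection normalizes scale across faces; keypoints found in the scaled frame can be mapped back to the original face area by the inverse scale factors. A sketch with nearest-neighbour resizing follows; the target size and the stand-in landmark model are assumptions:

```python
import numpy as np

def resize_nearest(img, target_h, target_w):
    """Nearest-neighbour resize of a 2-D image to (target_h, target_w)."""
    h, w = img.shape[:2]
    rows = np.arange(target_h) * h // target_h
    cols = np.arange(target_w) * w // target_w
    return img[rows[:, None], cols[None, :]]

def locate_keypoints(scaled):
    # Stand-in landmark model: coordinates in the scaled frame.
    return [(32, 40), (96, 40), (64, 80)]

face = np.zeros((100, 80), dtype=np.uint8)   # original face area (H=100, W=80)
TARGET = 128
scaled = resize_nearest(face, TARGET, TARGET)

# Map keypoints back to original-face coordinates.
sx, sy = face.shape[1] / TARGET, face.shape[0] / TARGET
kps_scaled = locate_keypoints(scaled)
kps_original = [(x * sx, y * sy) for x, y in kps_scaled]
print(scaled.shape, kps_original)
```

In practice a library resize (e.g. bilinear) would replace `resize_nearest`; the coordinate mapping back to the original frame is the part the pipeline relies on.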

In one example, the expression recognition model is a convolutional neural network model.
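A CNN operating on the stitched local image performs feature extraction (convolution and pooling) and classification (a fully connected layer with softmax) in one model, which is what lets the two steps merge. A toy forward pass in plain NumPy; the filter count, input shape, 7 expression classes, and random (untrained) weights are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, kernels):
    """Valid 2-D convolution: x (H, W), kernels (K, kh, kw) -> (K, H-kh+1, W-kw+1)."""
    kh, kw = kernels.shape[1:]
    H, W = x.shape
    out = np.empty((kernels.shape[0], H - kh + 1, W - kw + 1))
    for k, ker in enumerate(kernels):
        for i in range(H - kh + 1):
            for j in range(W - kw + 1):
                out[k, i, j] = np.sum(x[i:i + kh, j:j + kw] * ker)
    return out

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

local_image = rng.random((16, 48))          # stitched keypoint patches
kernels = rng.standard_normal((4, 3, 3))    # 4 conv filters (random stand-ins)
W_fc = rng.standard_normal((7, 4))          # fully connected layer, 7 classes

feat = np.maximum(conv2d(local_image, kernels), 0)  # convolution + ReLU
pooled = feat.mean(axis=(1, 2))                     # global average pooling
probs = softmax(W_fc @ pooled)                      # class probabilities
label = int(np.argmax(probs))
print(probs.shape, label)
```

A trained model would learn `kernels` and `W_fc` from labeled expression data; the forward structure is the same.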

In summary, the device provided in this embodiment uses a trained expression recognition model to perform both the feature extraction at facial key points and the expression classification, merging two formerly separate steps into one. This reduces accumulated error and improves the accuracy of facial expression recognition. Furthermore, because only the feature information of the local images at the facial key points is extracted, rather than global feature information of the entire face area, the features that reflect the expression state can be extracted more accurately and effectively, further improving recognition accuracy.

It should be noted that when the device of the above embodiments performs its functions, the division into the functional modules described above is used only as an example. In practice, the functions may be allocated to different functional modules as needed; that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above.

Regarding the devices in the above embodiments, the specific manner in which each module performs its operations has been described in detail in the embodiments of the method and will not be elaborated here.

An exemplary embodiment of the present disclosure further provides a facial expression recognition device capable of implementing the facial expression recognition method provided by the present disclosure. The device includes a processor and a memory for storing instructions executable by the processor, wherein the processor is configured to:

detect a face area in an image to be recognized;

acquire key points in the face area;

extract a local image from the face area according to the key points;

recognize the local image using a trained expression recognition model to obtain an expression recognition result.

Optionally, the processor is configured to:

acquire an image block around each key point;

superimpose or stitch the image blocks in a preset order to obtain the local image.

Optionally, the processor is configured to:

for each key point, crop an image block of a predetermined size centered on that key point.

Optionally, the processor is configured to:

extract feature information of the local image using the trained expression recognition model;

determine the expression recognition result from the feature information using the expression recognition model.

Optionally, the processor is configured to:

scale the face area to a target size;

locate the key points in the scaled face area.

Optionally, the expression recognition model is a convolutional neural network model.

Fig. 5 is a block diagram of an apparatus 500 according to an exemplary embodiment. For example, the apparatus 500 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, fitness equipment, a personal digital assistant, or the like.

Referring to Fig. 5, the apparatus 500 may include one or more of the following components: a processing component 502, a memory 504, a power component 506, a multimedia component 508, an audio component 510, an input/output (I/O) interface 512, a sensor component 514, and a communication component 516.

The processing component 502 generally controls the overall operation of the apparatus 500, such as operations associated with display, telephone calls, data communication, camera operation, and recording. The processing component 502 may include one or more processors 520 to execute instructions so as to complete all or part of the steps of the above method. In addition, the processing component 502 may include one or more modules that facilitate interaction between the processing component 502 and other components; for example, it may include a multimedia module to facilitate interaction between the multimedia component 508 and the processing component 502.

The memory 504 is configured to store various types of data to support operation of the apparatus 500. Examples of such data include instructions for any application or method operating on the apparatus 500, contact data, phonebook data, messages, pictures, videos, and the like. The memory 504 may be implemented by any type of volatile or non-volatile storage device, or a combination thereof, such as static random-access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disk.

The power component 506 provides power to the various components of the apparatus 500. It may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the apparatus 500.

The multimedia component 508 includes a screen that provides an output interface between the apparatus 500 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, it may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors may sense not only the boundary of a touch or swipe action, but also the duration and pressure associated with it. In some embodiments, the multimedia component 508 includes a front camera and/or a rear camera. When the apparatus 500 is in an operating mode such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front or rear camera may be a fixed optical lens system or may have focal length and optical zoom capability.

The audio component 510 is configured to output and/or input audio signals. For example, the audio component 510 includes a microphone (MIC), which is configured to receive external audio signals when the apparatus 500 is in an operating mode such as a call mode, a recording mode, or a voice recognition mode. The received audio signals may be further stored in the memory 504 or sent via the communication component 516. In some embodiments, the audio component 510 also includes a speaker for outputting audio signals.

The I/O interface 512 provides an interface between the processing component 502 and peripheral interface modules, which may be a keyboard, a click wheel, buttons, or the like. These buttons may include, but are not limited to, a home button, volume buttons, a start button, and a lock button.

The sensor component 514 includes one or more sensors for providing status assessments of various aspects of the apparatus 500. For example, the sensor component 514 can detect the open/closed state of the apparatus 500 and the relative positioning of components (for example, the display and the keypad of the apparatus 500), and can also detect a change in position of the apparatus 500 or of one of its components, the presence or absence of user contact with the apparatus 500, the orientation or acceleration/deceleration of the apparatus 500, and changes in its temperature. The sensor component 514 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. It may also include an optical sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 514 may further include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.

The communication component 516 is configured to facilitate wired or wireless communication between the apparatus 500 and other devices. The apparatus 500 can access a wireless network based on a communication standard, such as Wi-Fi, 2G, or 3G, or a combination thereof. In an exemplary embodiment, the communication component 516 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 516 further includes a near-field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio-frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.

In an exemplary embodiment, the apparatus 500 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above methods.

In an exemplary embodiment, there is also provided a non-transitory computer-readable storage medium including instructions, such as the memory 504 including instructions, which are executable by the processor 520 of the apparatus 500 to complete the above method. For example, the non-transitory computer-readable storage medium may be a ROM, a random-access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.

A non-transitory computer-readable storage medium: when the instructions in the storage medium are executed by the processor of the apparatus 500, the apparatus 500 is enabled to perform the above method.

It should be understood that "a plurality of" herein means two or more. "And/or" describes an association between objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A alone, both A and B, or B alone. The character "/" generally indicates an "or" relationship between the associated objects.

Other embodiments of the present disclosure will be readily apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the present disclosure that follow its general principles and include common knowledge or customary technical means in the art not disclosed herein. The specification and examples are to be considered exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.

It should be understood that the present disclosure is not limited to the precise constructions described above and shown in the drawings, and that various modifications and changes may be made without departing from its scope. The scope of the present disclosure is limited only by the appended claims.

Claims (13)

1. A facial expression recognition method, comprising:
detecting a face area in an image to be recognized;
acquiring key points in the face area;
extracting a local image from the face area according to the key points; and
recognizing the local image using a trained expression recognition model to obtain an expression recognition result.

2. The method according to claim 1, wherein extracting the local image from the face area according to the key points comprises:
acquiring an image block around each key point; and
superimposing or stitching the image blocks in a preset order to obtain the local image.

3. The method according to claim 2, wherein acquiring the image block around each key point comprises:
for each key point, cropping an image block of a predetermined size centered on that key point.

4. The method according to claim 1, wherein recognizing the local image using the trained expression recognition model to obtain the expression recognition result comprises:
extracting feature information of the local image using the trained expression recognition model; and
determining the expression recognition result from the feature information using the expression recognition model.

5. The method according to claim 1, wherein acquiring the key points in the face area comprises:
scaling the face area to a target size; and
locating the key points in the scaled face area.

6. The method according to any one of claims 1 to 5, wherein the expression recognition model is a convolutional neural network model.

7. A facial expression recognition device, comprising:
a face detection module configured to detect a face area in an image to be recognized;
a key point acquisition module configured to acquire key points in the face area;
an image extraction module configured to extract a local image from the face area according to the key points; and
an expression recognition module configured to recognize the local image using a trained expression recognition model to obtain an expression recognition result.

8. The device according to claim 7, wherein the image extraction module comprises:
an image block acquisition submodule configured to acquire an image block around each key point; and
an image block processing submodule configured to superimpose or stitch the image blocks in a preset order to obtain the local image.

9. The device according to claim 8, wherein the image block acquisition submodule is configured to, for each key point, crop an image block of a predetermined size centered on that key point.

10. The device according to claim 7, wherein the expression recognition module comprises:
a feature extraction submodule configured to extract feature information of the local image using the trained expression recognition model; and
a recognition determination submodule configured to determine the expression recognition result from the feature information using the expression recognition model.

11. The device according to claim 7, wherein the key point acquisition module comprises:
a face scaling submodule configured to scale the face area to a target size; and
a key point positioning submodule configured to locate the key points in the scaled face area.

12. The device according to any one of claims 7 to 11, wherein the expression recognition model is a convolutional neural network model.

13. A facial expression recognition device, comprising:
a processor; and
a memory for storing instructions executable by the processor;
wherein the processor is configured to:
detect a face area in an image to be recognized;
acquire key points in the face area;
extract a local image from the face area according to the key points; and
recognize the local image using a trained expression recognition model to obtain an expression recognition result.
CN201610653790.2A 2016-08-10 2016-08-10 Facial expression recognizing method and device Active CN106295566B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610653790.2A CN106295566B (en) 2016-08-10 2016-08-10 Facial expression recognizing method and device


Publications (2)

Publication Number Publication Date
CN106295566A true CN106295566A (en) 2017-01-04
CN106295566B CN106295566B (en) 2019-07-09

Family

ID=57668257


Cited By (37)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106778751A (en) * 2017-02-20 2017-05-31 迈吉客科技(北京)有限公司 A kind of non-face ROI recognition methods and device
CN107369196A (en) * 2017-06-30 2017-11-21 广东欧珀移动通信有限公司 Expression, which packs, makees method, apparatus, storage medium and electronic equipment
CN107633207A (en) * 2017-08-17 2018-01-26 平安科技(深圳)有限公司 AU characteristic recognition methods, device and storage medium
CN107633204A (en) * 2017-08-17 2018-01-26 平安科技(深圳)有限公司 Face occlusion detection method, apparatus and storage medium
CN107679448A (en) * 2017-08-17 2018-02-09 平安科技(深圳)有限公司 Eyeball action-analysing method, device and storage medium
CN107679447A (en) * 2017-08-17 2018-02-09 平安科技(深圳)有限公司 Facial characteristics point detecting method, device and storage medium
CN107832746A (en) * 2017-12-01 2018-03-23 北京小米移动软件有限公司 Expression recognition method and device
CN107886070A (en) * 2017-11-10 2018-04-06 北京小米移动软件有限公司 Verification method, device and the equipment of facial image
CN107958230A (en) * 2017-12-22 2018-04-24 中国科学院深圳先进技术研究院 Facial expression recognizing method and device
CN108073910A (en) * 2017-12-29 2018-05-25 百度在线网络技术(北京)有限公司 For generating the method and apparatus of face characteristic
CN108197593A (en) * 2018-01-23 2018-06-22 深圳极视角科技有限公司 More size face's expression recognition methods and device based on three-point positioning method
CN108304936A (en) * 2017-07-12 2018-07-20 腾讯科技(深圳)有限公司 Machine learning model training method and device, facial expression image sorting technique and device
CN108304709A (en) * 2018-01-31 2018-07-20 广东欧珀移动通信有限公司 Face unlocking method and related product
CN108304793A (en) * 2018-01-26 2018-07-20 北京易真学思教育科技有限公司 Online learning analysis system and method
WO2018133034A1 (en) * 2017-01-20 2018-07-26 Intel Corporation Dynamic emotion recognition in unconstrained scenarios
CN108399370A (en) * 2018-02-02 2018-08-14 达闼科技(北京)有限公司 The method and cloud system of Expression Recognition
CN108596221A (en) * 2018-04-10 2018-09-28 江河瑞通(北京)技术有限公司 The image-recognizing method and equipment of rod reading
CN108710829A (en) * 2018-04-19 2018-10-26 北京红云智胜科技有限公司 A method of the expression classification based on deep learning and the detection of micro- expression
CN109145871A (en) * 2018-09-14 2019-01-04 广州杰赛科技股份有限公司 Psychology and behavior recognition methods, device and storage medium
CN109190487A (en) * 2018-08-07 2019-01-11 平安科技(深圳)有限公司 Face Emotion identification method, apparatus, computer equipment and storage medium
CN109241835A (en) * 2018-07-27 2019-01-18 上海商汤智能科技有限公司 Image processing method and device, electronic equipment and storage medium
CN109271930A (en) * 2018-09-14 2019-01-25 广州杰赛科技股份有限公司 Micro- expression recognition method, device and storage medium
WO2019033568A1 (en) * 2017-08-17 2019-02-21 平安科技(深圳)有限公司 Lip movement capturing method, apparatus and storage medium
CN109934173A (en) * 2019-03-14 2019-06-25 腾讯科技(深圳)有限公司 Expression recognition method, device and electronic device
CN109977867A (en) * 2019-03-26 2019-07-05 厦门瑞为信息技术有限公司 A kind of infrared biopsy method based on machine learning multiple features fusion
CN110020638A (en) * 2019-04-17 2019-07-16 唐晓颖 Facial expression recognizing method, device, equipment and medium
CN110147805A (en) * 2018-07-23 2019-08-20 腾讯科技(深圳)有限公司 Image processing method, device, terminal and storage medium
CN111008589A (en) * 2019-12-02 2020-04-14 杭州网易云音乐科技有限公司 Face key point detection method, medium, device and computing equipment
CN112580527A (en) * 2020-12-22 2021-03-30 之江实验室 Facial expression recognition method based on convolution long-term and short-term memory network
CN112818838A (en) * 2021-01-29 2021-05-18 北京嘀嘀无限科技发展有限公司 Expression recognition method and device and electronic equipment
CN112913253A (en) * 2018-11-13 2021-06-04 北京比特大陆科技有限公司 Image processing method, apparatus, equipment, storage medium and program product
CN113051958A (en) * 2019-12-26 2021-06-29 深圳市光鉴科技有限公司 Driver state detection method, system, device and medium based on deep learning
WO2021135509A1 (en) * 2019-12-30 2021-07-08 腾讯科技(深圳)有限公司 Image processing method and apparatus, electronic device, and storage medium
CN113128309A (en) * 2020-01-10 2021-07-16 中移(上海)信息通信科技有限公司 Facial expression recognition method, device, equipment and medium
CN113807205A (en) * 2021-08-30 2021-12-17 中科尚易健康科技(北京)有限公司 Locally enhanced human meridian recognition method and device, equipment and storage medium
CN114120386A (en) * 2020-08-31 2022-03-01 腾讯科技(深圳)有限公司 Face recognition method, device, equipment and storage medium
CN117373100A (en) * 2023-12-08 2024-01-09 成都乐超人科技有限公司 Face recognition method and system based on differential quantized local binary pattern

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104346607A (en) * 2014-11-06 2015-02-11 上海电机学院 Face recognition method based on convolutional neural network
CN104850825A (en) * 2015-04-18 2015-08-19 中国计量学院 Facial image face score calculating method based on convolutional neural network
CN105005774A (en) * 2015-07-28 2015-10-28 中国科学院自动化研究所 Face relative relation recognition method based on convolutional neural network and device thereof
CN105469087A (en) * 2015-07-13 2016-04-06 百度在线网络技术(北京)有限公司 Method for identifying clothes image, and labeling method and device of clothes image
CN105654049A (en) * 2015-12-29 2016-06-08 中国科学院深圳先进技术研究院 Facial expression recognition method and device
CN105825192A (en) * 2016-03-24 2016-08-03 深圳大学 Facial expression identification method and system


Cited By (61)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018133034A1 (en) * 2017-01-20 2018-07-26 Intel Corporation Dynamic emotion recognition in unconstrained scenarios
US11151361B2 (en) 2017-01-20 2021-10-19 Intel Corporation Dynamic emotion recognition in unconstrained scenarios
CN106778751A (en) * 2017-02-20 2017-05-31 迈吉客科技(北京)有限公司 A kind of non-face ROI recognition methods and device
CN106778751B (en) * 2017-02-20 2020-08-21 迈吉客科技(北京)有限公司 Non-facial ROI (region of interest) identification method and device
WO2018149350A1 (en) * 2017-02-20 2018-08-23 迈吉客科技(北京)有限公司 Method and apparatus for recognising non-facial roi
CN107369196A (en) * 2017-06-30 2017-11-21 广东欧珀移动通信有限公司 Expression, which packs, makees method, apparatus, storage medium and electronic equipment
CN108304936B (en) * 2017-07-12 2021-11-16 腾讯科技(深圳)有限公司 Machine learning model training method and device, and expression image classification method and device
CN108304936A (en) * 2017-07-12 2018-07-20 腾讯科技(深圳)有限公司 Machine learning model training method and device, facial expression image sorting technique and device
US11537884B2 (en) 2017-07-12 2022-12-27 Tencent Technology (Shenzhen) Company Limited Machine learning model training method and device, and expression image classification method and device
US12079696B2 (en) 2017-07-12 2024-09-03 Tencent Technology (Shenzhen) Company Limited Machine learning model training method and device, and expression image classification method and device
CN107633204A (en) * 2017-08-17 2018-01-26 平安科技(深圳)有限公司 Face occlusion detection method, apparatus and storage medium
CN107633204B (en) * 2017-08-17 2019-01-29 平安科技(深圳)有限公司 Face occlusion detection method, apparatus and storage medium
WO2019033568A1 (en) * 2017-08-17 2019-02-21 平安科技(深圳)有限公司 Lip movement capturing method, apparatus and storage medium
CN107679448A (en) * 2017-08-17 2018-02-09 平安科技(深圳)有限公司 Eyeball movement analysis method, device and storage medium
US10489636B2 (en) 2017-08-17 2019-11-26 Ping An Technology (Shenzhen) Co., Ltd. Lip movement capturing method and device, and storage medium
CN107679447A (en) * 2017-08-17 2018-02-09 平安科技(深圳)有限公司 Facial feature point detection method, device and storage medium
CN107633207A (en) * 2017-08-17 2018-01-26 平安科技(深圳)有限公司 AU feature recognition method, device and storage medium
CN107633207B (en) * 2017-08-17 2018-10-12 平安科技(深圳)有限公司 AU feature recognition method, device and storage medium
CN107886070A (en) * 2017-11-10 2018-04-06 北京小米移动软件有限公司 Facial image verification method, device and equipment
CN107832746A (en) * 2017-12-01 2018-03-23 北京小米移动软件有限公司 Expression recognition method and device
CN107958230B (en) * 2017-12-22 2020-06-23 中国科学院深圳先进技术研究院 Facial expression recognition method and device
CN107958230A (en) * 2017-12-22 2018-04-24 中国科学院深圳先进技术研究院 Facial expression recognizing method and device
US11270099B2 (en) 2017-12-29 2022-03-08 Baidu Online Network Technology (Beijing) Co., Ltd. Method and apparatus for generating facial feature
CN108073910A (en) * 2017-12-29 2018-05-25 百度在线网络技术(北京)有限公司 Method and apparatus for generating facial features
CN108197593B (en) * 2018-01-23 2022-02-18 深圳极视角科技有限公司 Multi-size facial expression recognition method and device based on three-point positioning method
CN108197593A (en) * 2018-01-23 2018-06-22 深圳极视角科技有限公司 Multi-size facial expression recognition method and device based on three-point positioning method
CN108304793B (en) * 2018-01-26 2021-01-08 北京世纪好未来教育科技有限公司 Online learning analysis system and method
CN108304793A (en) * 2018-01-26 2018-07-20 北京易真学思教育科技有限公司 Online learning analysis system and method
CN108304709B (en) * 2018-01-31 2022-01-04 Oppo广东移动通信有限公司 Face unlocking method and related product
CN108304709A (en) * 2018-01-31 2018-07-20 广东欧珀移动通信有限公司 Face unlocking method and related product
CN108399370A (en) * 2018-02-02 2018-08-14 达闼科技(北京)有限公司 Expression recognition method and cloud system
CN108596221A (en) * 2018-04-10 2018-09-28 江河瑞通(北京)技术有限公司 Image recognition method and equipment for scale reading
CN108596221B (en) * 2018-04-10 2020-12-01 江河瑞通(北京)技术有限公司 Image recognition method and device for scale reading
CN108710829A (en) * 2018-04-19 2018-10-26 北京红云智胜科技有限公司 Method for expression classification and micro-expression detection based on deep learning
US12283089B2 (en) 2018-07-23 2025-04-22 Tencent Technology (Shenzhen) Company Limited Head image editing based on face expression classification
US11631275B2 (en) 2018-07-23 2023-04-18 Tencent Technology (Shenzhen) Company Limited Image processing method and apparatus, terminal, and computer-readable storage medium
CN110147805A (en) * 2018-07-23 2019-08-20 腾讯科技(深圳)有限公司 Image processing method, device, terminal and storage medium
CN109241835A (en) * 2018-07-27 2019-01-18 上海商汤智能科技有限公司 Image processing method and device, electronic equipment and storage medium
CN109190487A (en) * 2018-08-07 2019-01-11 平安科技(深圳)有限公司 Face Emotion identification method, apparatus, computer equipment and storage medium
CN109271930A (en) * 2018-09-14 2019-01-25 广州杰赛科技股份有限公司 Micro-expression recognition method, device and storage medium
CN109145871B (en) * 2018-09-14 2020-09-15 广州杰赛科技股份有限公司 Psychological behavior recognition method, device and storage medium
CN109271930B (en) * 2018-09-14 2020-11-13 广州杰赛科技股份有限公司 Micro-expression recognition method, device and storage medium
CN109145871A (en) * 2018-09-14 2019-01-04 广州杰赛科技股份有限公司 Psychological behavior recognition method, device and storage medium
CN112913253A (en) * 2018-11-13 2021-06-04 北京比特大陆科技有限公司 Image processing method, apparatus, equipment, storage medium and program product
CN109934173A (en) * 2019-03-14 2019-06-25 腾讯科技(深圳)有限公司 Expression recognition method, device and electronic device
US12094247B2 (en) 2019-03-14 2024-09-17 Tencent Technology (Shenzhen) Company Limited Expression recognition method and related apparatus
CN109934173B (en) * 2019-03-14 2023-11-21 腾讯科技(深圳)有限公司 Expression recognition method and device and electronic equipment
CN109977867A (en) * 2019-03-26 2019-07-05 厦门瑞为信息技术有限公司 Infrared liveness detection method based on machine learning multi-feature fusion
CN110020638A (en) * 2019-04-17 2019-07-16 唐晓颖 Facial expression recognizing method, device, equipment and medium
CN110020638B (en) * 2019-04-17 2023-05-12 唐晓颖 Facial expression recognition method, device, equipment and medium
CN111008589B (en) * 2019-12-02 2024-04-09 杭州网易云音乐科技有限公司 Human face key point detection method, medium, device and computing equipment
CN111008589A (en) * 2019-12-02 2020-04-14 杭州网易云音乐科技有限公司 Face key point detection method, medium, device and computing equipment
CN113051958A (en) * 2019-12-26 2021-06-29 深圳市光鉴科技有限公司 Driver state detection method, system, device and medium based on deep learning
WO2021135509A1 (en) * 2019-12-30 2021-07-08 腾讯科技(深圳)有限公司 Image processing method and apparatus, electronic device, and storage medium
CN113128309A (en) * 2020-01-10 2021-07-16 中移(上海)信息通信科技有限公司 Facial expression recognition method, device, equipment and medium
CN114120386A (en) * 2020-08-31 2022-03-01 腾讯科技(深圳)有限公司 Face recognition method, device, equipment and storage medium
CN112580527A (en) * 2020-12-22 2021-03-30 之江实验室 Facial expression recognition method based on convolutional long short-term memory network
CN112818838A (en) * 2021-01-29 2021-05-18 北京嘀嘀无限科技发展有限公司 Expression recognition method and device and electronic equipment
CN113807205A (en) * 2021-08-30 2021-12-17 中科尚易健康科技(北京)有限公司 Locally enhanced human meridian recognition method and device, equipment and storage medium
CN117373100B (en) * 2023-12-08 2024-02-23 成都乐超人科技有限公司 Face recognition method and system based on differential quantization local binary pattern
CN117373100A (en) * 2023-12-08 2024-01-09 成都乐超人科技有限公司 Face recognition method and system based on differential quantization local binary pattern

Also Published As

Publication number Publication date
CN106295566B (en) 2019-07-09

Similar Documents

Publication Publication Date Title
CN106295566B (en) Facial expression recognizing method and device
CN104850828B (en) Character recognition method and device
CN106339680B (en) Face key point positioning method and device
CN106384098B (en) Head pose detection method, device and terminal based on image
CN106295515B (en) Method and device for determining a face area in an image
WO2021031609A1 (en) Living body detection method and device, electronic apparatus and storage medium
CN105654039B (en) Image processing method and apparatus
CN106295511A (en) Face tracking method and device
CN106204435A (en) Image processing method and device
CN106528879A (en) Picture processing method and device
CN106548145A (en) Image recognition method and device
CN107239535A (en) Similar picture search method and device
CN107688781A (en) Face recognition method and device
CN106778531A (en) Face detection method and device
CN107832741A (en) Facial modeling method, apparatus and computer-readable recording medium
CN107886070A (en) Facial image verification method, device and equipment
CN106980840A (en) Face shape matching method, device and storage medium
CN104077563B (en) Face identification method and device
CN106845377A (en) Face key point positioning method and device
CN104867112B (en) Photo processing method and device
CN107480665A (en) Character detection method, device and computer-readable recording medium
CN107038428A (en) Living body identification method and device
CN107766820A (en) Image classification method and device
CN107463903A (en) Face key point positioning method and device
CN107729880A (en) Human face detection method and device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant