
CN107358169A - Facial expression recognition method and facial expression recognition device - Google Patents

Facial expression recognition method and facial expression recognition device

Info

Publication number
CN107358169A
CN107358169A (application CN201710478188.4A)
Authority
CN
China
Prior art keywords
model
expression
emotion identification
classification
mood
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710478188.4A
Other languages
Chinese (zh)
Inventor
陈书楷
钱叶青
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiamen Central Intelligent Information Technology Co Ltd
Original Assignee
Xiamen Central Intelligent Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiamen Central Intelligent Information Technology Co Ltd filed Critical Xiamen Central Intelligent Information Technology Co Ltd
Priority to CN201710478188.4A priority Critical patent/CN107358169A/en
Publication of CN107358169A publication Critical patent/CN107358169A/en
Pending legal-status Critical Current

Classifications

    • G — PHYSICS
    • G06 — COMPUTING OR CALCULATING; COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 — Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 — Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 — Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/174 — Facial expression recognition
    • G — PHYSICS
    • G06 — COMPUTING OR CALCULATING; COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 — Pattern recognition
    • G06F 18/20 — Analysing
    • G06F 18/21 — Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 — Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F 18/2148 — Generating training patterns; Bootstrap methods, characterised by the process organisation or structure, e.g. boosting cascade
    • G06F 18/24 — Classification techniques
    • G06F 18/243 — Classification techniques relating to the number of classes
    • G06F 18/2431 — Multiple classes

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The present invention, applicable to the field of information processing, provides a facial expression recognition method and a facial expression recognition device, including: building and training an emotion recognition model based on a convolutional neural network; inputting a facial image to be recognized into the emotion recognition model to output the emotion category of the facial image, the emotion category being one of positive emotion, negative emotion and neutral emotion; obtaining the expression recognition model corresponding to the emotion category; and inputting the facial image into that expression recognition model to output the expression category of the facial image. By recognizing facial expressions in a hierarchical manner and selecting a different expression recognition model according to the emotion, the present invention reduces the amount of content each recognition model needs to memorize, reduces the computational complexity of the whole expression recognition process, and improves operational efficiency. Compared with traditional facial expression recognition methods, the present invention achieves higher recognition accuracy and recognition efficiency for facial expressions.

Description

Facial expression recognition method and facial expression recognition device
Technical field
The invention belongs to the field of information processing, and more particularly relates to a facial expression recognition method and a facial expression recognition device.
Background art
The basic facial expressions of the human face fall into eight categories: anger, contempt, disgust, fear, happiness (happy), neutral, sadness and surprise. Facial expression recognition studies how to make a computer obtain and distinguish facial expressions from still images or video sequences. If a computer can accurately understand a facial expression and identify which category it belongs to, the relationship between people and computers will change to a great extent, thereby achieving a better human-computer interaction effect.
Current facial expression recognition methods are mainly based on random forest algorithms, expression-feature descent methods, or SVM (Support Vector Machine) expression classification methods. Because expressions have many attribute categories and complex rules, each recognition model in existing facial expression recognition methods needs to memorize a large amount of content, which makes the recognition process of facial expressions computationally complex and leaves the recognition accuracy and recognition efficiency of facial expressions relatively low.
Summary of the invention
The purpose of the embodiments of the present invention is to provide a facial expression recognition method and a facial expression recognition device, intended to solve the problem that, in facial expression recognition methods at the present stage, each recognition model needs to memorize a large amount of content, so that the recognition process of facial expressions is computationally complex and the recognition accuracy and recognition efficiency are relatively low.
In a first aspect, a facial expression recognition method is provided, including:
building and training an emotion recognition model based on a convolutional neural network;
inputting a facial image to be recognized into the emotion recognition model to output the emotion category of the facial image, the emotion category being one of positive emotion, negative emotion and neutral emotion;
obtaining an expression recognition model corresponding to the emotion category;
inputting the facial image into the expression recognition model to output the expression category of the facial image.
In a second aspect, a facial expression recognition device is provided, including:
a first acquisition unit for building and training an emotion recognition model based on a convolutional neural network;
an emotion recognition unit for inputting a facial image to be recognized into the emotion recognition model to output the emotion category of the facial image, the emotion category being one of positive emotion, negative emotion and neutral emotion;
a second acquisition unit for obtaining an expression recognition model corresponding to the emotion category;
an expression recognition unit for inputting the facial image into the expression recognition model to output the expression category of the facial image.
Embodiments of the present invention are implemented on the basis of different recognition models: after the emotion of a facial image is recognized, the expression recognition model corresponding to that emotion is used to further identify the expression category of the facial image. By recognizing facial expressions in a hierarchical manner and selecting a different expression recognition model according to the emotion, the method avoids identifying the facial expression directly in a single step; it therefore reduces the amount of content each recognition model needs to memorize, thereby reducing the computational complexity of the whole expression recognition process and improving operational efficiency. Compared with traditional facial expression recognition methods, the recognition accuracy and recognition efficiency of facial expressions in the embodiments of the present invention are higher.
Brief description of the drawings
Fig. 1 is a flowchart of the implementation of the facial expression recognition method provided by an embodiment of the present invention;
Fig. 2 is a flowchart of the implementation of the facial expression recognition method provided by another embodiment of the present invention;
Fig. 3 is a flowchart of the specific implementation of step S101 of the facial expression recognition method provided by an embodiment of the present invention;
Fig. 4 is the network structure of the CNN model provided by a further embodiment of the present invention;
Fig. 5 is a flowchart of the specific implementation of step S303 of the facial expression recognition method provided by a further embodiment of the present invention;
Fig. 6 is a flowchart of the specific implementation of step S102 of the facial expression recognition method provided by an embodiment of the present invention;
Fig. 7 shows sample images from the facial image test set provided by an embodiment of the present invention;
Fig. 8 is a structural block diagram of the facial expression recognition device provided by an embodiment of the present invention.
Detailed description of the embodiments
In order to make the purpose, technical scheme and advantages of the present invention clearer, the present invention is further elaborated below in conjunction with the drawings and embodiments. It should be appreciated that the specific embodiments described herein are merely illustrative of the present invention and are not intended to limit it.
The facial expression recognition method and facial expression recognition device provided by embodiments of the present invention can be applied to all intelligent terminal devices, including smartphones, tablets, palmtop computers (Personal Digital Assistant, PDA), cameras and various human-computer interaction devices, etc.
Embodiments of the present invention are implemented on the basis of a cascade of recognition models: a facial image to be recognized passes through each layer of recognition model in turn; each layer of recognition model performs one automatic recognition and classification operation and, after classification, the image automatically enters the next layer of recognition model corresponding to that class; finally, the judgment of the last layer of recognition model is taken as the expression classification result of the facial expression in the image.
Fig. 1 shows the implementation process of the facial expression recognition method provided by an embodiment of the present invention, detailed as follows:
In S101, an emotion recognition model based on a convolutional neural network is built and trained.
The emotion recognition model is a network model obtained by training on multiple facial images covering different emotion categories. Specifically, through a method based on supervised learning, a deep neural network model is generated that can automatically recognize and judge the emotion category to which a facial expression belongs.
In S102, a facial image to be recognized is input into the emotion recognition model to output the emotion category of the facial image, the emotion category being one of positive emotion, negative emotion and neutral emotion.
In this embodiment, emotion categories are divided into three major classes: positive emotion (positive), negative emotion (negative) and neutral emotion (neutral). Each facial image can only be judged as one of the three major emotion categories.
Positive emotion represents a positive mood of a person, embodied in the facial image as states such as happiness, optimism, confidence, appreciation and relaxation; negative emotion represents a negative feeling of a person, where moods that are psychologically unfavorable to body and mind, such as anxiety, tension, anger, frustration, sadness and pain, are collectively called negative emotions; neutral emotion represents an emotion category that leans toward neither side and carries no particular feeling.
Since every facial image contains multiple facial features, extracting these features and performing abstract mathematical analysis on them makes it possible to recognize the emotional state shown in the facial image. This process is completed automatically by the emotion recognition model: the facial image to be recognized only needs to be input into the model, and the emotion category to which the facial image belongs is output as a concrete emotion classification result.
In S103, the expression recognition model corresponding to the emotion category is obtained.
After the emotion category of the facial image to be recognized is obtained through S102, the facial image is transferred into a second recognition model, i.e. an expression recognition model. Moreover, this recognition model is the expression recognition model corresponding to the above emotion category.
For example, if the emotion category output by the emotion recognition model for the facial image to be recognized is positive emotion, the expression recognition model under positive emotion is obtained; if the emotion category output is negative emotion, the expression recognition model under negative emotion is obtained.
Each expression recognition model is a network model obtained by training on multiple facial images covering different expression categories. Specifically, through a method based on supervised learning, a deep neural network model is generated that can automatically recognize and judge the expression category to which a facial image belongs. The facial image database corresponding to a given expression recognition model during training consists of multiple facial images of different expressions under the known emotion category described above.
For example, the facial image database relied on when training the expression recognition model under negative emotion contains only facial images belonging to negative emotion, and the expression categories of these images may be the same or may differ. Before training, the eight major classes of basic facial expressions (anger, contempt, disgust, fear, happiness (happy), neutral, sadness and surprise) need to be assigned to the respective emotion categories, becoming the expression categories under each emotion category. For instance, after this assignment, the expression categories under negative emotion are: anger, sadness and surprise.
In S104, the facial image is input into the expression recognition model to output the expression category of the facial image.
The expression recognition model trained for a given emotion category can only be used to distinguish the expression categories under that emotion category. That is, when a facial image belonging to negative emotion is further identified using the expression recognition model under negative emotion, the expression in the facial image can only be recognized as anger, sadness or surprise. Therefore, after the facial image processed by the emotion recognition model is input again into the expression recognition model corresponding to its emotion recognition result, a specific expression category of the facial image under that emotion category can be output.
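The two-stage pipeline of S101–S104 can be sketched as a simple dispatch structure. In this minimal sketch the "models" are hypothetical stand-ins (plain lookups), not the trained CNNs of the invention; the negative-emotion grouping follows the example in the text, while the other two groupings are illustrative assumptions.

```python
# Assignment of the eight basic expressions to the three emotion categories.
# Only the negative-emotion grouping (anger, sadness, surprise) is given in
# the text; the other two groupings here are assumptions for illustration.
EXPRESSIONS_BY_EMOTION = {
    "positive": ["happy"],
    "negative": ["anger", "sadness", "surprise"],
    "neutral":  ["neutral", "contempt", "disgust", "fear"],
}

def emotion_model(face_image):
    # Stand-in for the CNN emotion recognition model of S101/S102.
    return face_image["true_emotion"]

def make_expression_model(emotion):
    # Stand-in for the per-emotion expression recognition model of S103.
    allowed = EXPRESSIONS_BY_EMOTION[emotion]
    def expression_model(face_image):
        # A real model can only answer with a category under `emotion`.
        label = face_image["true_expression"]
        return label if label in allowed else allowed[0]
    return expression_model

def recognize(face_image):
    emotion = emotion_model(face_image)                # S102
    expr_model = make_expression_model(emotion)        # S103
    return emotion, expr_model(face_image)             # S104

image = {"true_emotion": "negative", "true_expression": "sadness"}
print(recognize(image))  # -> ('negative', 'sadness')
```

The point of the structure is that each expression model only ever discriminates among the few categories of its own emotion class, which is what keeps each model small.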
Preferably, when the basic facial expressions are assigned under each emotion category, the total number of expression categories under each emotion category is no more than five. This ensures that the expression recognition model corresponding to each emotion category does not need to learn too much image feature information, reduces the content that needs to be memorized, and reduces the average total time consumed during expression recognition.
Embodiments of the present invention are implemented on the basis of different recognition models: after the emotion of a facial image is recognized, the expression recognition model corresponding to that emotion is used to further identify the expression category of the facial image. By recognizing facial expressions in a hierarchical manner and selecting a different expression recognition model according to the emotion, the method avoids identifying the facial expression directly in a single step; it therefore reduces the amount of content each recognition model needs to memorize, thereby reducing the computational complexity of the whole expression recognition process and improving operational efficiency. Compared with traditional facial expression recognition methods, the recognition accuracy and recognition efficiency of facial expressions in the embodiments of the present invention are higher.
As another embodiment of the present invention, as shown in Fig. 2, the method further includes:
In S105, if the expression category includes one or more levels of sub-expression categories, the sub-expression recognition models at each level corresponding to the expression category are obtained.
In S106, the facial image is input in turn into the sub-expression recognition models at each level, to output the sub-expression category of the facial image.
Under any emotion category there are multiple expression categories; one of those expression categories may itself have multiple sub-expression categories, and one of those sub-expression categories may in turn have multiple sub-expression categories at the next level. Therefore, in this embodiment, when a facial image is identified in S104 as belonging to a certain expression category, in order to obtain the final refined expression result of the facial image more accurately, it is necessary to judge whether the determined expression category also includes sub-expression categories of the next level.
If the determined expression category A of the facial image also includes multiple sub-expression categories of the next level, the sub-expression recognition model a under expression category A is obtained to process the input facial image and output a first-level sub-expression category B of the facial image.
Next, it is judged whether the determined sub-expression category B of the facial image also includes multiple sub-expression categories of the next level; if so, the sub-expression recognition model b under sub-expression category B is obtained to process the input facial image and output a second-level sub-expression category C of the facial image.
And so on: the above judgment is repeated until the finally obtained expression category or sub-expression category of the facial image no longer includes any subordinate sub-expression category, and the finally obtained expression category or sub-expression category is output as the expression recognition result of the facial image.
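The level-by-level refinement of S105/S106 amounts to a loop that descends a category tree until a leaf is reached. In this sketch the tree of sub-expression categories ("A", "B", "C", …) and the per-category classifier are hypothetical, since the invention does not name concrete sub-expressions.

```python
# Hypothetical tree of sub-expression categories: A -> {B, B2}, B -> {C, C2};
# "C" has no entry, so reaching it ends the refinement.
SUB_CATEGORIES = {
    "A": ["B", "B2"],
    "B": ["C", "C2"],
}

def classify(category, face_image):
    # Stand-in for the sub-expression recognition model under `category`.
    return face_image["path"].get(category, SUB_CATEGORIES[category][0])

def refine(expression_category, face_image):
    current = expression_category
    # Repeat until the current category includes no subordinate sub-expressions.
    while current in SUB_CATEGORIES:
        current = classify(current, face_image)
    return current

image = {"path": {"A": "B", "B": "C"}}
print(refine("A", image))  # -> 'C'
```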
Preferably, under any one expression category, the total number of sub-expression categories at each level is no more than five.
In this embodiment, facial expressions are recognized in a level-by-level progressive manner: the corresponding sub-expression recognition model is selected according to the sub-expression category determined at each level to perform the classification operation. This avoids identifying the facial expression directly in a single step and reduces the content each recognition model needs to memorize, thereby reducing the computational complexity of the whole expression recognition process, improving operational efficiency and improving the precision of expression recognition.
As an embodiment of the present invention, Fig. 3 shows the specific implementation flow of step S101 of the facial expression recognition method provided by an embodiment of the present invention, including:
In S301, multiple face training images of known categories are obtained.
In S302, the emotion recognition model based on a multilayer convolutional neural network and the expression recognition models are trained using the face training images.
In S303, the fitting degree of the emotion recognition model and of the expression recognition models is evaluated using a cross-entropy loss function; when the fitting degree reaches a preset threshold, each weight parameter in the emotion recognition model and the expression recognition models is adjusted by backpropagation, to obtain the trained emotion recognition model and expression recognition models.
In this embodiment, a deep learning method based on a CNN (Convolutional Neural Network) is used to train the above emotion recognition model and expression recognition models, as well as the sub-expression recognition models at each level.
Different recognition models are trained with different face training images. For the emotion recognition model, the face training images include, but are not limited to, multiple facial images under different face orientations, different emotion categories and different illumination conditions; for the expression recognition model under a given emotion category, the face training images include, but are not limited to, multiple facial images under different face orientations and different illumination conditions, with different expression categories under that emotion category.
The face training images can be obtained from the following facial expression databases, including but not limited to CACD, ck+, JP, LAP_data, face_db, Taiwanese, Chinese_imgs, Crawl_pics, MTFL (AFLW, LFW, NET_7876), IMFDB, Genki4k, as well as some celebrity image libraries and manually collected image libraries.
During training, each face training image first needs to be preprocessed: face alignment is performed on each face training image to obtain a standard frontal facial image, and the size of the standard frontal facial image is normalized to a fixed size W × H; only then is the preprocessed face training image input into the CNN for training.
As shown in Fig. 4, in this embodiment the CNN structure has 11 layers, of which 9 are convolutional layers and 2 are fully connected layers, the latter being the last two layers of the CNN structure.
In the CNN structure, the receptive field size of the filter kernel is 3 × 3; Conv denotes a convolutional layer; D is the number of color channels, e.g. D = 1 denotes a grayscale image and D = 3 a color image; N is the number of channels, representing the width of a convolutional layer; the convolution stride is 1 pixel, with zero-padding of a 1-pixel border in width and height; Avg pool is the average pooling layer, whose sampling sliding window is 4 × 4 with stride 1; FC denotes a fully connected layer; L->M denotes that L neurons are mapped to M neurons; C is the number of neurons finally output, which is also the number of classes.
In this CNN structure, the purpose of the Dropout layer is to prevent the CNN from overfitting during training: neurons in the input layer and intermediate layers are randomly set to zero, and these neurons do not participate in forward or backward propagation, their weights remaining unchanged. In this way, various interferences can be artificially added as noise to the input face training images, preventing neurons from missing detections under certain visual patterns. In addition, the Dropout layer makes the training process of the recognition model converge more slowly, and the resulting recognition model is more robust.
In the CNN model parameters provided by this embodiment, the above parameter D is preset to 3, i.e. color images are indicated, and N = 6, providing 6 channels.
As an implementation example of the present invention, for a 3-channel 72 × 72 face training image, the training process of the expression recognition model under negative emotion described above is specifically as follows:
The 3-channel face training image normalized to 72 × 72 is cropped and then normalized to 64 × 64; after that, the gray value of each pixel on the image and the gray mean of the image are obtained, and the gray mean is subtracted from the gray value of each pixel, thereby forming a 3 × 64 × 64 initial three-dimensional tensor.
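The mean-subtraction step above can be sketched in a few lines. For brevity this sketch operates on a tiny 2 × 2 single-channel "image" rather than the 3 × 64 × 64 tensor, and the cropping/normalization to 64 × 64 is omitted.

```python
# Subtract the image's gray mean from every pixel, as described in the
# preprocessing step. Shown on a 2x2 single-channel image for brevity.
def subtract_mean(pixels):
    flat = [v for row in pixels for v in row]
    mean = sum(flat) / len(flat)
    return [[v - mean for v in row] for row in pixels]

image = [[100.0, 120.0],
         [140.0, 160.0]]            # gray values; the mean is 130.0
centered = subtract_mean(image)
print(centered)  # -> [[-30.0, -10.0], [10.0, 30.0]]
```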
The above initial three-dimensional tensor is input into the CNN model. Since the face training image is a facial image of known category, its expression category can be obtained and used as the class label of the face training image. For example, the label happy indicates that the expression category of the face training image is happy.
Each initial three-dimensional tensor, corresponding to one face training image, is input into the CNN structure shown in Fig. 4. After being processed by the 1st convolutional layer of width N = 6 channels, it is mapped to a new tensor of dimension 6 × 64 × 64; after processing by the dropout layer, the resulting tensor still has dimension 6 × 64 × 64, and so on, until after processing by the 8th convolutional layer of width N = 48 channels, the three-dimensional tensor corresponding to the original face training image has become a new tensor of dimension 48 × 4 × 4. Then, processed by the average pooling layer, the dimension of the tensor becomes 48 × 1 × 1, i.e. L = 96*1*1 = 96. Finally, after the two fully connected layers, the CNN outputs a new three-dimensional tensor carrying the original expression class.
As an embodiment of the present invention, as shown in Fig. 5, the above S303 is specifically as follows:
In S501, recognition testing is carried out on multiple face test images using the emotion recognition model and the expression recognition models, and test results are obtained.
In S502, according to the test results, the confusion matrices respectively corresponding to the emotion recognition model and the expression recognition models are generated.
In S503, the recognition accuracy of the emotion recognition model and of the expression recognition models is calculated from the confusion matrices.
In S504, if the recognition accuracy of the emotion recognition model has not reached the preset value, each weight parameter in the emotion recognition model is adjusted by backpropagation, the recognition test is carried out again on the multiple face test images, and the recognition accuracy of the emotion recognition model in this test is calculated, until the recognition accuracy of the emotion recognition model reaches the preset value, whereupon the trained emotion recognition model is obtained.
In S505, if the recognition accuracy of an expression recognition model has not reached the preset value, each weight parameter in the expression recognition model is adjusted by backpropagation, the recognition test is carried out again on the multiple face test images, and the recognition accuracy of the expression recognition model in this test is calculated, until the recognition accuracy of the expression recognition model reaches the preset value, whereupon the trained expression recognition model is obtained.
On the one hand, from face training images of multiple different expression categories, the CNN model can output multiple different three-dimensional tensors carrying the original class labels. On the other hand, for each face training image, its class label is determined, and each face training image corresponds to a similar face test image; because of the high similarity between the two images, the class label of the face test image should in theory be identical to that of the face training image, but before testing, the class label of the face test image is not determined in advance. By using the CNN model to process each face test image corresponding one-to-one to a face training image, a three-dimensional tensor carrying a newly generated class label can be output.
After the class label of each face test image is obtained, i.e. the expression category of each face test image, the confusion matrix of the expression category judgments over the face test images is generated. From this confusion matrix, the training effect of the CNN network model can be evaluated.
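Evaluating a model from its confusion matrix as in S502/S503 reduces to dividing the diagonal (correct judgments) by the total number of test images. The 3 × 3 matrix below is made-up data for the three emotion categories; it is not taken from the patent's experiments.

```python
# Recognition accuracy from a confusion matrix: sum of the diagonal
# (correct judgments) over the total number of test images.
def accuracy_from_confusion(matrix):
    correct = sum(matrix[i][i] for i in range(len(matrix)))
    total = sum(sum(row) for row in matrix)
    return correct / total

# Rows: true category, columns: predicted category (illustrative counts).
confusion = [
    [50,  3,  2],   # positive
    [ 4, 45,  6],   # negative
    [ 1,  2, 37],   # neutral
]
print(accuracy_from_confusion(confusion))  # -> 0.88
```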
When the training error has not reached its minimum, in other words when the recognition accuracy of the expression categories has not reached the preset target value, the parameters of the CNN model are continually adjusted so as to maximize the probability that the three-dimensional tensor output for each face test image has the same class label as the three-dimensional tensor of the corresponding face training image. During training, the parameters of the CNN model are specifically learned using the cross-entropy loss function and the backpropagation algorithm, continually adjusting and updating each weight parameter in the CNN network model; the face test images are then tested again to obtain a new training effect.
When the training error reaches its minimum, in other words when the recognition accuracy of the expression categories reaches the preset target value, the training process of the CNN model is complete, and the CNN model is determined to be the expression recognition model under negative emotion, so that the expression category output by this expression recognition model for each facial image to be recognized is closer to the true value.
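The cross-entropy loss used above can be written out for the one-hot case: the loss is the negative log of the probability the model assigns to the true class. The probability values below are illustrative, not drawn from the patent.

```python
import math

# Cross-entropy loss for a one-hot true label: -log(p_true).
def cross_entropy(predicted_probs, true_index):
    return -math.log(predicted_probs[true_index])

# A confident, correct prediction yields a small loss ...
good = cross_entropy([0.05, 0.90, 0.05], true_index=1)
# ... while an uncertain prediction yields a larger one, which is why
# minimizing this loss by backpropagation sharpens the model's judgments.
poor = cross_entropy([0.40, 0.35, 0.25], true_index=1)
print(good < poor)  # -> True
```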
Similarly, according to the training principle of the expression recognition model under negative emotion described above, the emotion recognition model and the sub-expression recognition models at each level are obtained by training.
In this embodiment, the expression recognition models and the emotion recognition model are established by collecting face training images under various categories, or by inputting a sufficient number of face training images; the cross-entropy loss function is used to evaluate the quality of a model, and backpropagation is used to adjust the weight parameters of the CNN model, so that the model, based on supervised learning, achieves recognition performance as high as possible in practical applications and improves the recognition and classification of facial expressions. With the recognition model training method provided by this embodiment, smaller expression recognition models and emotion recognition models can be obtained, which occupy less space and have lower computational complexity; facial images can therefore be recognized faster, improving the recognition efficiency of facial expressions.
As an embodiment of the present invention, as shown in Fig. 6, the above S102 is specified as follows:
In S601, the initial three-dimensional tensor of the facial image to be identified is obtained.
The facial image to be identified is pre-processed, that is, cropped and then normalized to a fixed size W × H. After that, the gray value of each pixel of the image and the gray mean of the image are obtained, and the gray mean is subtracted from the gray value of each pixel, thereby forming the initial three-dimensional tensor.
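A minimal sketch of this pre-processing step, assuming the face region has already been located; the box coordinates, the 48 × 48 target size, and the nearest-neighbour resize are illustrative choices (the patent only specifies some fixed size W × H):

```python
import numpy as np

def preprocess(gray_image, box, size=(48, 48)):
    """Crop the face box, normalize to a fixed W x H, subtract the gray mean."""
    x0, y0, x1, y1 = box
    face = gray_image[y0:y1, x0:x1].astype(np.float64)

    # Nearest-neighbour resize to the fixed size (stand-in for any resampler).
    h, w = face.shape
    rows = (np.arange(size[1]) * h // size[1]).clip(0, h - 1)
    cols = (np.arange(size[0]) * w // size[0]).clip(0, w - 1)
    face = face[np.ix_(rows, cols)]

    # Subtract the image's own gray mean, as described above.
    face -= face.mean()

    # Add a channel axis so the result is the initial three-dimensional tensor.
    return face[:, :, np.newaxis]

img = (np.arange(100 * 100) % 256).reshape(100, 100).astype(np.uint8)
tensor = preprocess(img, box=(10, 10, 90, 90))
```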
In S602, the initial three-dimensional tensor is input into the emotion recognition model based on the SoftMax classification algorithm.
In S603, the emotion recognition model is used to calculate the probabilities that the initial three-dimensional tensor falls in the positive mood, negative mood and neutral mood respectively, and the mood class with the largest probability is output as the mood class of the facial image.
After parameter learning is complete, the SoftMax classification algorithm is added to the emotion recognition model, so that for the initial three-dimensional tensor of an input facial image to be identified, the probability that it belongs to each mood class is calculated, and the mood class with the largest probability is determined to be the mood class of the facial image.
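For the three mood classes, this final SoftMax step can be sketched as follows; the logit values are made up for illustration:

```python
import numpy as np

MOODS = ["positive", "negative", "neutral"]

def classify_mood(logits):
    # SoftMax turns the model's raw scores into per-class probabilities.
    e = np.exp(logits - np.max(logits))
    probs = e / e.sum()
    # The mood class with the largest probability is output for the image.
    return MOODS[int(np.argmax(probs))], probs

mood, probs = classify_mood(np.array([0.3, 2.1, -0.5]))
```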
Steps of this embodiment of the present invention whose realization principles are not mentioned here follow the same principles as the above embodiments and are therefore not repeated one by one.
To verify the feasibility and accuracy of the scheme provided by the embodiments of the present invention, facial expression recognition experiments were carried out on the internationally published fer2013 facial image database and compared with other facial expression recognition methods in the prior art. The facial image test set used for testing consists of 5864 face images from the fer2013 facial image database; sample images are shown in Fig. 7.
Among the above 5864 face images, 925 pictures have expression class anger, 1744 have expression class happy, 807 have expression class surprise, 1190 have expression class sadness, and 1198 pictures have mood class neutral. During testing, the test index is whether, after each picture is quantized and input into the expression recognition model or the emotion recognition model, the expression class or mood class output is identical to the correct class of the original picture.
The test results on the above facial image test set show that, in both expression recognition and emotion recognition, the scheme provided by the embodiments of the present invention outperforms other methods in the prior art. Using the trained residual model combined with the SoftMax classification algorithm, facial images of multiple mood classes and of different expression classes in the third-party expression database fer2013 were tested: the obtained recognition accuracy for mood classes is 69.08%, the trained emotion recognition model is 914 kB in size, and the average time per emotion recognition is 24 ms, 8.68% higher in accuracy than the emotion recognition method based on the Microsoft API; the recognition accuracy for expression classes is 63.24%, the expression recognition model is 915 kB in size, and the average time per expression recognition is 36 ms, 26.83% higher in accuracy than the expression recognition method based on the Microsoft API.
It should be understood that, in the embodiments of the present invention, the sequence numbers of the above processes do not imply an order of execution; the execution order of each process should be determined by its function and internal logic, and shall not constitute any limitation on the implementation of the embodiments of the present invention.
Corresponding to the facial expression recognition method provided by the embodiments of the present invention, Fig. 8 shows a structural block diagram of the facial expression recognition apparatus provided by an embodiment of the present invention. For ease of description, only the parts related to this embodiment are shown.
Referring to Fig. 8, the apparatus includes:
a training unit 81, configured to build and train an emotion recognition model based on a convolutional neural network;
an emotion recognition unit 82, configured to input a facial image to be identified into the emotion recognition model to output a mood class of the facial image, the mood class including one of positive mood, negative mood and neutral mood;
a first acquisition unit 83, configured to obtain an expression recognition model corresponding to the mood class;
an expression recognition unit 84, configured to input the facial image into the expression recognition model to output an expression class of the facial image.
Optionally, the apparatus further includes:
a second acquisition unit, configured to obtain, if the expression class includes one or more levels of sub-expression classes, the sub-expression recognition models at each level corresponding to the expression class;
a sub-expression recognition unit, configured to input the facial image into the sub-expression recognition models at each level in sequence to output a sub-expression class of the facial image.
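The cascade these units implement — mood class first, then the mood-specific expression model, then any registered sub-expression levels — can be sketched as plain function composition; every callable below is a hypothetical stand-in for a trained CNN:

```python
def recognize(image, mood_model, expression_models, sub_models):
    """Run the cascade: mood -> expression -> optional sub-expression levels."""
    mood = mood_model(image)
    expression = expression_models[mood](image)
    labels = [mood, expression]
    # Walk down the sub-expression levels registered for this expression, if any.
    for level_model in sub_models.get(expression, []):
        labels.append(level_model(image))
    return labels

# Hypothetical toy models standing in for the trained CNNs.
result = recognize(
    image=None,
    mood_model=lambda img: "negative",
    expression_models={"negative": lambda img: "sadness"},
    sub_models={"sadness": [lambda img: "grief"]},
)
```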
Optionally, the training unit 81 includes:
a first acquisition subunit, configured to obtain a plurality of face training images of known classes;
a training subunit, configured to train the emotion recognition model and expression recognition models based on a multilayer convolutional neural network using the face training images;
an adjustment subunit, configured to evaluate the fitting degree of the emotion recognition model and of the expression recognition models respectively using a cross-entropy loss function, and, when the fitting degree reaches a predetermined threshold, to adjust each weight parameter in the emotion recognition model and the expression recognition models by back-propagation so as to obtain the trained emotion recognition model and expression recognition models.
Optionally, the adjustment subunit is further configured to:
perform a recognition test on a plurality of face test images using the emotion recognition model and the expression recognition models to obtain test results;
generate confusion matrices respectively corresponding to the emotion recognition model and the expression recognition models according to the test results;
calculate the recognition accuracy of the emotion recognition model and of the expression recognition models from the confusion matrices;
if the recognition accuracy of the emotion recognition model does not reach a preset value, adjust each weight parameter in the emotion recognition model by back-propagation, then perform the recognition test on the plurality of face test images again and calculate the recognition accuracy of the emotion recognition model in this test, until the recognition accuracy of the emotion recognition model reaches the preset value, thereby obtaining the trained emotion recognition model;
if the recognition accuracy of the expression recognition model does not reach a preset value, adjust each weight parameter in the expression recognition model by back-propagation, then perform the recognition test on the plurality of face test images again and calculate the recognition accuracy of the expression recognition model in this test, until the recognition accuracy of the expression recognition model reaches the preset value, thereby obtaining the trained expression recognition model.
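The confusion-matrix bookkeeping described for the adjustment subunit can be sketched as follows; the label and prediction lists are invented for illustration:

```python
import numpy as np

def confusion_matrix(true_labels, predicted_labels, n_classes):
    # cm[i, j] counts test images of true class i that were predicted as class j.
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(true_labels, predicted_labels):
        cm[t, p] += 1
    return cm

def recognition_accuracy(cm):
    # Correct predictions sit on the diagonal of the confusion matrix.
    return cm.trace() / cm.sum()

true = [0, 0, 1, 1, 2, 2, 2, 1]   # invented ground-truth classes
pred = [0, 1, 1, 1, 2, 0, 2, 1]   # invented model outputs
cm = confusion_matrix(true, pred, n_classes=3)
acc = recognition_accuracy(cm)
```

If `acc` falls short of the preset value, the weights are adjusted by back-propagation and the test is repeated, as described above.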
Optionally, the emotion recognition unit 82 includes:
a second acquisition subunit, configured to obtain the initial three-dimensional tensor of the facial image to be identified;
an input subunit, configured to input the initial three-dimensional tensor into the emotion recognition model based on the SoftMax classification algorithm;
an output subunit, configured to calculate, using the emotion recognition model, the probabilities that the initial three-dimensional tensor falls in the positive mood, negative mood and neutral mood respectively, and to output the mood class with the largest probability as the mood class of the facial image.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented by electronic hardware, or by a combination of computer software and electronic hardware. Whether these functions are performed in hardware or in software depends on the particular application and the design constraints of the technical solution. Skilled artisans may implement the described functions in different ways for each particular application, but such implementations should not be considered beyond the scope of the present invention.
Those skilled in the art can clearly understand that, for convenience and brevity of description, the specific working processes of the systems, apparatuses and units described above may refer to the corresponding processes in the foregoing method embodiments, and are not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed systems, apparatuses and methods may be implemented in other ways. For example, the apparatus embodiments described above are merely schematic; the division of the units is only a division of logical functions, and other divisions are possible in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual couplings, direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, apparatuses or units, and may be electrical, mechanical or in other forms.
Units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network elements. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention in essence, or the part contributing to the prior art, or part of the technical solution, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk or an optical disc.
The above are only specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any person familiar with the technical field can easily conceive of changes or substitutions within the technical scope disclosed by the present invention, and these shall all be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

  1. A facial expression recognition method, characterized by comprising:
    building and training an emotion recognition model based on a convolutional neural network;
    inputting a facial image to be identified into the emotion recognition model to output a mood class of the facial image, the mood class including one of positive mood, negative mood and neutral mood;
    obtaining an expression recognition model corresponding to the mood class;
    inputting the facial image into the expression recognition model to output an expression class of the facial image.
  2. The method according to claim 1, characterized in that the method further comprises:
    if the expression class includes one or more levels of sub-expression classes, obtaining the sub-expression recognition models at each level corresponding to the expression class;
    inputting the facial image into the sub-expression recognition models at each level in sequence to output a sub-expression class of the facial image.
  3. The method according to claim 1, characterized in that building and training the emotion recognition model based on a convolutional neural network comprises:
    obtaining a plurality of face training images of known classes;
    training the emotion recognition model and expression recognition models based on a multilayer convolutional neural network using the face training images;
    evaluating the fitting degree of the emotion recognition model and of the expression recognition models respectively using a cross-entropy loss function, and, when the fitting degree reaches a predetermined threshold, adjusting each weight parameter in the emotion recognition model and the expression recognition models by back-propagation to obtain the trained emotion recognition model and expression recognition models.
  4. The method according to claim 3, characterized in that, after adjusting each weight parameter in the emotion recognition model and the expression recognition models by back-propagation, the method further comprises:
    performing a recognition test on a plurality of face test images using the emotion recognition model and the expression recognition models to obtain test results;
    generating confusion matrices respectively corresponding to the emotion recognition model and the expression recognition models according to the test results;
    calculating the recognition accuracy of the emotion recognition model and of the expression recognition models from the confusion matrices;
    if the recognition accuracy of the emotion recognition model does not reach a preset value, adjusting each weight parameter in the emotion recognition model by back-propagation, then performing the recognition test on the plurality of face test images again and calculating the recognition accuracy of the emotion recognition model in this test, until the recognition accuracy of the emotion recognition model reaches the preset value, thereby obtaining the trained emotion recognition model;
    if the recognition accuracy of the expression recognition model does not reach a preset value, adjusting each weight parameter in the expression recognition model by back-propagation, then performing the recognition test on the plurality of face test images again and calculating the recognition accuracy of the expression recognition model in this test, until the recognition accuracy of the expression recognition model reaches the preset value, thereby obtaining the trained expression recognition model.
  5. The method according to claim 1, characterized in that inputting the facial image to be identified into the emotion recognition model to output the mood class of the facial image comprises:
    obtaining an initial three-dimensional tensor of the facial image to be identified;
    inputting the initial three-dimensional tensor into the emotion recognition model based on a SoftMax classification algorithm;
    calculating, using the emotion recognition model, the probabilities that the initial three-dimensional tensor falls in the positive mood, negative mood and neutral mood respectively, and outputting the mood class with the largest probability as the mood class of the facial image.
  6. A facial expression recognition apparatus, characterized by comprising:
    a training unit, configured to build and train an emotion recognition model based on a convolutional neural network;
    an emotion recognition unit, configured to input a facial image to be identified into the emotion recognition model to output a mood class of the facial image, the mood class including one of positive mood, negative mood and neutral mood;
    a first acquisition unit, configured to obtain an expression recognition model corresponding to the mood class;
    an expression recognition unit, configured to input the facial image into the expression recognition model to output an expression class of the facial image.
  7. The apparatus according to claim 6, characterized in that the apparatus further comprises:
    a second acquisition unit, configured to obtain, if the expression class includes one or more levels of sub-expression classes, the sub-expression recognition models at each level corresponding to the expression class;
    a sub-expression recognition unit, configured to input the facial image into the sub-expression recognition models at each level in sequence to output a sub-expression class of the facial image.
  8. The apparatus according to claim 6, characterized in that the training unit comprises:
    a first acquisition subunit, configured to obtain a plurality of face training images of known classes;
    a training subunit, configured to train the emotion recognition model and expression recognition models based on a multilayer convolutional neural network using the face training images;
    an adjustment subunit, configured to evaluate the fitting degree of the emotion recognition model and of the expression recognition models respectively using a cross-entropy loss function, and, when the fitting degree reaches a predetermined threshold, to adjust each weight parameter in the emotion recognition model and the expression recognition models by back-propagation so as to obtain the trained emotion recognition model and expression recognition models.
  9. The apparatus according to claim 8, characterized in that the adjustment subunit is further configured to:
    perform a recognition test on a plurality of face test images using the emotion recognition model and the expression recognition models to obtain test results;
    generate confusion matrices respectively corresponding to the emotion recognition model and the expression recognition models according to the test results;
    calculate the recognition accuracy of the emotion recognition model and of the expression recognition models from the confusion matrices;
    if the recognition accuracy of the emotion recognition model does not reach a preset value, adjust each weight parameter in the emotion recognition model by back-propagation, then perform the recognition test on the plurality of face test images again and calculate the recognition accuracy of the emotion recognition model in this test, until the recognition accuracy of the emotion recognition model reaches the preset value, thereby obtaining the trained emotion recognition model;
    if the recognition accuracy of the expression recognition model does not reach a preset value, adjust each weight parameter in the expression recognition model by back-propagation, then perform the recognition test on the plurality of face test images again and calculate the recognition accuracy of the expression recognition model in this test, until the recognition accuracy of the expression recognition model reaches the preset value, thereby obtaining the trained expression recognition model.
  10. The apparatus according to claim 6, characterized in that the emotion recognition unit comprises:
    a second acquisition subunit, configured to obtain an initial three-dimensional tensor of the facial image to be identified;
    an input subunit, configured to input the initial three-dimensional tensor into the emotion recognition model based on a SoftMax classification algorithm;
    an output subunit, configured to calculate, using the emotion recognition model, the probabilities that the initial three-dimensional tensor falls in the positive mood, negative mood and neutral mood respectively, and to output the mood class with the largest probability as the mood class of the facial image.
CN201710478188.4A 2017-06-21 2017-06-21 A kind of facial expression recognizing method and expression recognition device Pending CN107358169A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710478188.4A CN107358169A (en) 2017-06-21 2017-06-21 A kind of facial expression recognizing method and expression recognition device

Publications (1)

Publication Number Publication Date
CN107358169A true CN107358169A (en) 2017-11-17

Family

ID=60273965

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710478188.4A Pending CN107358169A (en) 2017-06-21 2017-06-21 A kind of facial expression recognizing method and expression recognition device

Country Status (1)

Country Link
CN (1) CN107358169A (en)

Cited By (42)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108176021A (en) * 2017-12-28 2018-06-19 必革发明(深圳)科技有限公司 Treadmill safe early warning method, device and treadmill
CN108537168A (en) * 2018-04-09 2018-09-14 云南大学 Human facial expression recognition method based on transfer learning technology
CN108563978A (en) * 2017-12-18 2018-09-21 深圳英飞拓科技股份有限公司 A kind of mood detection method and device
CN108733209A (en) * 2018-03-21 2018-11-02 北京猎户星空科技有限公司 Man-machine interaction method, device, robot and storage medium
CN108764207A (en) * 2018-06-07 2018-11-06 厦门大学 A kind of facial expression recognizing method based on multitask convolutional neural networks
CN109190487A (en) * 2018-08-07 2019-01-11 平安科技(深圳)有限公司 Face Emotion identification method, apparatus, computer equipment and storage medium
CN109558032A (en) * 2018-12-05 2019-04-02 北京三快在线科技有限公司 Operation processing method, device and computer equipment
CN109615022A (en) * 2018-12-20 2019-04-12 上海智臻智能网络科技股份有限公司 The online configuration method of model and device
CN109753950A (en) * 2019-02-11 2019-05-14 河北工业大学 Dynamic facial expression recognition method
CN109784153A (en) * 2018-12-10 2019-05-21 平安科技(深圳)有限公司 Emotion identification method, apparatus, computer equipment and storage medium
CN109816893A (en) * 2019-01-23 2019-05-28 深圳壹账通智能科技有限公司 Method for sending information, device, server and storage medium
CN109830280A (en) * 2018-12-18 2019-05-31 深圳壹账通智能科技有限公司 Psychological aided analysis method, device, computer equipment and storage medium
CN109858469A (en) * 2019-03-06 2019-06-07 百度在线网络技术(北京)有限公司 Method and apparatus for output information
CN109858362A (en) * 2018-12-28 2019-06-07 浙江工业大学 A kind of mobile terminal method for detecting human face based on inversion residual error structure and angle associated losses function
CN109919001A (en) * 2019-01-23 2019-06-21 深圳壹账通智能科技有限公司 Customer service monitoring method, device, device and storage medium based on emotion recognition
CN109978996A (en) * 2019-03-28 2019-07-05 北京达佳互联信息技术有限公司 Generate method, apparatus, terminal and the storage medium of expression threedimensional model
CN109992505A (en) * 2019-03-15 2019-07-09 平安科技(深圳)有限公司 Applied program testing method, device, computer equipment and storage medium
CN110135230A (en) * 2018-02-09 2019-08-16 财团法人交大思源基金会 Expression recognition training system and expression recognition training method
CN110147822A (en) * 2019-04-16 2019-08-20 北京师范大学 A kind of moos index calculation method based on the detection of human face action unit
CN110263681A (en) * 2019-06-03 2019-09-20 腾讯科技(深圳)有限公司 The recognition methods of facial expression and device, storage medium, electronic device
CN110287895A (en) * 2019-04-17 2019-09-27 北京阳光易德科技股份有限公司 A method of emotional measurement is carried out based on convolutional neural networks
CN110309339A (en) * 2018-07-26 2019-10-08 腾讯科技(北京)有限公司 Picture tag generation method and device, terminal and storage medium
CN110428114A (en) * 2019-08-12 2019-11-08 深圳前海微众银行股份有限公司 Output of the fruit tree prediction technique, device, equipment and computer readable storage medium
CN110852360A (en) * 2019-10-30 2020-02-28 腾讯科技(深圳)有限公司 Image emotion recognition method, device, equipment and storage medium
CN111368590A (en) * 2018-12-25 2020-07-03 北京嘀嘀无限科技发展有限公司 Emotion recognition method and device, electronic equipment and storage medium
CN111582136A (en) * 2020-04-30 2020-08-25 京东方科技集团股份有限公司 Expression recognition method and device, electronic equipment and storage medium
CN112489278A (en) * 2020-11-18 2021-03-12 安徽领云物联科技有限公司 Access control identification method and system
CN112581417A (en) * 2020-12-14 2021-03-30 深圳市众采堂艺术空间设计有限公司 Facial expression obtaining, modifying and imaging system and method
CN112733803A (en) * 2021-01-25 2021-04-30 中国科学院空天信息创新研究院 Emotion recognition method and system
CN112784776A (en) * 2021-01-26 2021-05-11 山西三友和智慧信息技术股份有限公司 BPD facial emotion recognition method based on improved residual error network
CN112818150A (en) * 2021-01-22 2021-05-18 世纪龙信息网络有限责任公司 Picture content auditing method, device, equipment and medium
CN112836679A (en) * 2021-03-03 2021-05-25 青岛大学 A Fast Expression Recognition Algorithm and System Based on Dual Model Probabilistic Optimization
CN112966128A (en) * 2021-02-23 2021-06-15 武汉大学 Self-media content recommendation method based on real-time emotion recognition
CN113763531A (en) * 2020-06-05 2021-12-07 北京达佳互联信息技术有限公司 Three-dimensional face reconstruction method and device, electronic equipment and storage medium
CN114681258A (en) * 2020-12-25 2022-07-01 深圳Tcl新技术有限公司 Method for adaptively adjusting massage mode and massage equipment
CN114842529A (en) * 2022-04-11 2022-08-02 浙江柔灵科技有限公司 Juvenile emotion recognition and detection method based on computer vision and deep learning
CN114898174A (en) * 2022-04-22 2022-08-12 广州番禺电缆集团有限公司 Cable fault recognition system based on different recognition models
CN115512424A (en) * 2022-10-19 2022-12-23 中山大学 Method and system for recognizing painful facial expressions of indoor personnel based on computer vision
CN116343314A (en) * 2023-05-30 2023-06-27 之江实验室 A facial expression recognition method, device, storage medium and electronic equipment
CN116363732A (en) * 2023-03-10 2023-06-30 武汉轻工大学 Facial emotion recognition method, device, equipment and storage medium
CN117275060A (en) * 2023-09-07 2023-12-22 广州像素数据技术股份有限公司 A facial expression recognition method and related equipment based on emotion grouping
CN120472238A (en) * 2025-05-20 2025-08-12 数据空间研究院 Multi-stage progressive expression recognition method and system based on improved residual network

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104063683A (en) * 2014-06-06 2014-09-24 北京搜狗科技发展有限公司 Expression input method and device based on face identification
CN104680141A (en) * 2015-02-13 2015-06-03 华中师范大学 Motion unit layering-based facial expression recognition method and system
CN105512624A (en) * 2015-12-01 2016-04-20 天津中科智能识别产业技术研究院有限公司 Smile face recognition method and device for human face image
CN105868694A (en) * 2016-03-24 2016-08-17 中国地质大学(武汉) Dual-mode emotion identification method and system based on facial expression and eyeball movement
CN106295568A (en) * 2016-08-11 2017-01-04 上海电力学院 The mankind's naturalness emotion identification method combined based on expression and behavior bimodal
CN106650610A (en) * 2016-11-02 2017-05-10 厦门中控生物识别信息技术有限公司 Human face expression data collection method and device

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Zeng Weiliang et al.: "Research on fruit and vegetable image recognition for smart refrigerators based on convolutional neural networks", Microcomputer & Its Applications *
Tang Hao et al.: "Fully convolutional network combined with improved conditional random field-recurrent neural network for SAR image scene classification", Journal of Computer Applications *
Dong Jun: "Computing the 'Traces of the Mind': An Artificial Intelligence Approach to Tacit Knowledge", 31 December 2016, Shanghai Scientific & Technical Publishers *
Tan Feng et al.: "Analysis and Mechanism Research of Rice Blast Disease in Cold-Region Rice Based on Spectral Technology", 31 August 2016, Harbin Engineering University Press *

Cited By (60)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108563978A (en) * 2017-12-18 2018-09-21 深圳英飞拓科技股份有限公司 A kind of mood detection method and device
CN108176021A (en) * 2017-12-28 2018-06-19 必革发明(深圳)科技有限公司 Treadmill safe early warning method, device and treadmill
CN110135230B (en) * 2018-02-09 2024-01-12 财团法人交大思源基金会 Expression recognition training system and expression recognition training method
CN110135230A (en) * 2018-02-09 2019-08-16 财团法人交大思源基金会 Expression recognition training system and expression recognition training method
CN108733209A (en) * 2018-03-21 2018-11-02 北京猎户星空科技有限公司 Man-machine interaction method, device, robot and storage medium
CN108537168A (en) * 2018-04-09 2018-09-14 云南大学 Human facial expression recognition method based on transfer learning technology
CN108537168B (en) * 2018-04-09 2021-12-31 云南大学 Facial expression recognition method based on transfer learning technology
CN108764207B (en) * 2018-06-07 2021-10-19 厦门大学 A facial expression recognition method based on multi-task convolutional neural network
CN108764207A (en) * 2018-06-07 2018-11-06 厦门大学 A kind of facial expression recognizing method based on multitask convolutional neural networks
CN110309339A (en) * 2018-07-26 2019-10-08 腾讯科技(北京)有限公司 Picture tag generation method and device, terminal and storage medium
CN110309339B (en) * 2018-07-26 2024-05-31 腾讯科技(北京)有限公司 Picture tag generation method and device, terminal and storage medium
CN109190487A (en) * 2018-08-07 2019-01-11 平安科技(深圳)有限公司 Face Emotion identification method, apparatus, computer equipment and storage medium
CN109558032A (en) * 2018-12-05 2019-04-02 北京三快在线科技有限公司 Operation processing method, device and computer equipment
CN109784153A (en) * 2018-12-10 2019-05-21 平安科技(深圳)有限公司 Emotion identification method, apparatus, computer equipment and storage medium
CN109830280A (en) * 2018-12-18 2019-05-31 深圳壹账通智能科技有限公司 Psychological aided analysis method, device, computer equipment and storage medium
CN109615022A (en) * 2018-12-20 2019-04-12 上海智臻智能网络科技股份有限公司 Model online configuration method and device
CN109615022B (en) * 2018-12-20 2020-05-19 上海智臻智能网络科技股份有限公司 Model online configuration method and device
CN111368590A (en) * 2018-12-25 2020-07-03 北京嘀嘀无限科技发展有限公司 Emotion recognition method and device, electronic equipment and storage medium
CN111368590B (en) * 2018-12-25 2024-04-23 北京嘀嘀无限科技发展有限公司 Emotion recognition method and device, electronic equipment and storage medium
CN109858362A (en) * 2018-12-28 2019-06-07 浙江工业大学 A kind of mobile terminal face detection method based on inverted residual structure and angle-associated loss function
CN109919001A (en) * 2019-01-23 2019-06-21 深圳壹账通智能科技有限公司 Customer service monitoring method, apparatus, device and storage medium based on emotion recognition
CN109816893A (en) * 2019-01-23 2019-05-28 深圳壹账通智能科技有限公司 Method for sending information, device, server and storage medium
CN109753950A (en) * 2019-02-11 2019-05-14 河北工业大学 Dynamic facial expression recognition method
CN109858469A (en) * 2019-03-06 2019-06-07 百度在线网络技术(北京)有限公司 Method and apparatus for outputting information
CN109992505A (en) * 2019-03-15 2019-07-09 平安科技(深圳)有限公司 Applied program testing method, device, computer equipment and storage medium
CN109978996A (en) * 2019-03-28 2019-07-05 北京达佳互联信息技术有限公司 Generate method, apparatus, terminal and the storage medium of expression threedimensional model
CN110147822A (en) * 2019-04-16 2019-08-20 北京师范大学 A kind of mood index calculation method based on facial action unit detection
CN110287895A (en) * 2019-04-17 2019-09-27 北京阳光易德科技股份有限公司 A method of emotional measurement is carried out based on convolutional neural networks
CN110287895B (en) * 2019-04-17 2021-08-06 北京阳光易德科技股份有限公司 Method for measuring emotion based on convolutional neural network
CN110263681A (en) * 2019-06-03 2019-09-20 腾讯科技(深圳)有限公司 Facial expression recognition method and device, storage medium, electronic device
CN110263681B (en) * 2019-06-03 2021-07-27 腾讯科技(深圳)有限公司 Facial expression recognition method and device, storage medium, electronic device
US12236712B2 (en) 2019-06-03 2025-02-25 Tencent Technology (Shenzhen) Company Limited Facial expression recognition method and apparatus, electronic device and storage medium
CN110428114A (en) * 2019-08-12 2019-11-08 深圳前海微众银行股份有限公司 Fruit tree yield prediction method, device, equipment and computer readable storage medium
CN110428114B (en) * 2019-08-12 2023-05-23 深圳前海微众银行股份有限公司 Fruit tree yield prediction method, device, equipment and computer readable storage medium
CN110852360A (en) * 2019-10-30 2020-02-28 腾讯科技(深圳)有限公司 Image emotion recognition method, device, equipment and storage medium
CN110852360B (en) * 2019-10-30 2024-10-18 腾讯科技(深圳)有限公司 Image emotion recognition method, device, equipment and storage medium
US12131584B2 (en) 2020-04-30 2024-10-29 Boe Technology Group Co., Ltd Expression recognition method and apparatus, electronic device, and storage medium
CN111582136A (en) * 2020-04-30 2020-08-25 京东方科技集团股份有限公司 Expression recognition method and device, electronic equipment and storage medium
CN111582136B (en) * 2020-04-30 2024-04-16 京东方科技集团股份有限公司 Expression recognition method and device, electronic device, and storage medium
CN113763531B (en) * 2020-06-05 2023-11-28 北京达佳互联信息技术有限公司 Three-dimensional face reconstruction method and device, electronic equipment and storage medium
CN113763531A (en) * 2020-06-05 2021-12-07 北京达佳互联信息技术有限公司 Three-dimensional face reconstruction method and device, electronic equipment and storage medium
CN112489278A (en) * 2020-11-18 2021-03-12 安徽领云物联科技有限公司 Access control identification method and system
CN112581417A (en) * 2020-12-14 2021-03-30 深圳市众采堂艺术空间设计有限公司 Facial expression obtaining, modifying and imaging system and method
CN114681258B (en) * 2020-12-25 2024-04-30 深圳Tcl新技术有限公司 A method for adaptively adjusting massage mode and massage device
CN114681258A (en) * 2020-12-25 2022-07-01 深圳Tcl新技术有限公司 Method for adaptively adjusting massage mode and massage equipment
CN112818150B (en) * 2021-01-22 2024-05-07 天翼视联科技有限公司 A method, device, equipment and medium for reviewing image content
CN112818150A (en) * 2021-01-22 2021-05-18 世纪龙信息网络有限责任公司 Picture content auditing method, device, equipment and medium
CN112733803A (en) * 2021-01-25 2021-04-30 中国科学院空天信息创新研究院 Emotion recognition method and system
CN112784776A (en) * 2021-01-26 2021-05-11 山西三友和智慧信息技术股份有限公司 BPD facial emotion recognition method based on improved residual error network
CN112966128A (en) * 2021-02-23 2021-06-15 武汉大学 Self-media content recommendation method based on real-time emotion recognition
CN112836679B (en) * 2021-03-03 2022-06-14 青岛大学 A Fast Expression Recognition Algorithm and System Based on Dual Model Probabilistic Optimization
CN112836679A (en) * 2021-03-03 2021-05-25 青岛大学 A Fast Expression Recognition Algorithm and System Based on Dual Model Probabilistic Optimization
CN114842529A (en) * 2022-04-11 2022-08-02 浙江柔灵科技有限公司 Juvenile emotion recognition and detection method based on computer vision and deep learning
CN114898174A (en) * 2022-04-22 2022-08-12 广州番禺电缆集团有限公司 Cable fault recognition system based on different recognition models
CN115512424A (en) * 2022-10-19 2022-12-23 中山大学 Method and system for recognizing painful facial expressions of indoor personnel based on computer vision
CN116363732A (en) * 2023-03-10 2023-06-30 武汉轻工大学 Facial emotion recognition method, device, equipment and storage medium
CN116343314B (en) * 2023-05-30 2023-08-25 之江实验室 Expression recognition method and device, storage medium and electronic equipment
CN116343314A (en) * 2023-05-30 2023-06-27 之江实验室 A facial expression recognition method, device, storage medium and electronic equipment
CN117275060A (en) * 2023-09-07 2023-12-22 广州像素数据技术股份有限公司 A facial expression recognition method and related equipment based on emotion grouping
CN120472238A (en) * 2025-05-20 2025-08-12 数据空间研究院 Multi-stage progressive expression recognition method and system based on improved residual network

Similar Documents

Publication Publication Date Title
CN107358169A (en) A kind of facial expression recognizing method and expression recognition device
Li et al. Deep independently recurrent neural network (IndRNN)
CN108182441B (en) Parallel multi-channel convolutional neural network, construction method and image feature extraction method
Pathar et al. Human emotion recognition using convolutional neural network in real time
CN114937151A (en) Lightweight target detection method based on multi-receptive-field and attention feature pyramid
CN108764471A (en) The neural network cross-layer pruning method of feature based redundancy analysis
CN109784366A (en) Fine-grained classification method, apparatus and electronic equipment for target objects
WO2018052587A1 (en) Method and system for cell image segmentation using multi-stage convolutional neural networks
CN106991372A (en) A kind of dynamic gesture recognition method based on an interactive deep learning model
CN116521908B (en) Multimedia content personalized recommendation method based on artificial intelligence
CN104517122A (en) Image target recognition method based on optimized convolution architecture
CN108090498A (en) A kind of fiber recognition method and device based on deep learning
CN110210380B (en) An Analysis Method Based on Expression Recognition and Psychological Test to Generate Personality
CN109919085A (en) Human activity recognition method based on lightweight convolutional neural networks
CN111582396A (en) A Fault Diagnosis Method Based on Improved Convolutional Neural Network
CN109344888A (en) A kind of image recognition method, device and equipment based on convolutional neural network
CN118447317A (en) Image classification learning method based on multi-scale pulse convolutional neural network
CN114766024A (en) Method and apparatus for pruning neural networks
Sarigül et al. Comparison of different deep structures for fish classification
Liu et al. Optimized facial emotion recognition technique for assessing user experience
CN110263808A (en) A kind of image emotional semantic classification method based on LSTM network and attention mechanism
CN117273105A (en) A module construction method and device for neural network models
KR102636461B1 (en) Automated labeling method, device, and system for learning artificial intelligence models
CN109543749A (en) Drawing sentiment analysis method based on deep learning
Kim et al. Tweaking deep neural networks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 2017-11-17