TW201909036A - Multi-dimensional emotion discrimination system and method for face image based on neural network - Google Patents

Multi-dimensional emotion discrimination system and method for face image based on neural network

Info

Publication number
TW201909036A
Authority
TW
Taiwan
Prior art keywords
emotion
face
neural network
dimensional
image
Prior art date
Application number
TW107125104A
Other languages
Chinese (zh)
Inventor
簡仁賢
Original Assignee
大陸商竹間智能科技(上海)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 大陸商竹間智能科技(上海)有限公司 filed Critical 大陸商竹間智能科技(上海)有限公司
Publication of TW201909036A publication Critical patent/TW201909036A/en

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G06V40/172 Classification, e.g. identification
    • G06V40/174 Facial expression recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The present invention provides a system and method for multi-dimensional emotion discrimination of a face image based on a neural network. The system comprises: a face localization module configured to identify the face region in an image to be detected and extract the face image from the image to be detected; a feature extraction module configured to extract the emotional features of the face image; a recognition module configured to identify the emotional features to obtain emotion information; and an output module configured to output the emotion information. The system and method adapt to different face angles, skin tones and face shapes when extracting emotion feature vectors and represent the multi-dimensional emotion to which the face image belongs, thereby improving the accuracy of facial emotion analysis.

Description

Multi-dimensional emotion discrimination system and method for face image based on neural network

The invention belongs to the technical field of image processing, and particularly relates to a neural network-based multi-dimensional emotion discrimination system and method for face images.

People use facial expressions, gestures, body movements and language to convey messages and to communicate, and recognizing facial emotions is one of the most direct ways to understand the messages people convey. Traditional facial emotion recognition mainly covers the following aspects:

Emotion recognition based on keypoint detection: the region containing the face is identified, traditional algorithms locate the facial keypoints of the features and contour, and the keypoint features are extracted as the features for emotion recognition. This approach is limited by the accuracy of keypoint localization, and because it captures only the facial contour and misses changes in the facial muscles, it is too coarse and emotions are difficult to identify accurately.

On the other hand, regarding the categories used in emotion recognition: changes in human emotion are difficult to explain with discrete categories. For example, anger and sadness are not separated by a sharp line; human emotions are mixed and continuous. Representing a face with only a single emotion is too general and cannot carefully describe a person's subtle feelings.

To address these shortcomings of the prior art, the present invention provides a neural network-based multi-dimensional emotion discrimination system and method for face images that improve the accuracy of facial emotion analysis.

The neural network-based multi-dimensional emotion discrimination system for face images of the present invention comprises: a face localization module, a feature extraction module, a recognition module, and an output module. The face localization module identifies the face region in an image to be detected and uses a face detection algorithm to extract the face image from the image to be detected; the feature extraction module extracts the emotional features of the face image; the recognition module identifies the emotional features to obtain emotion information; and the output module outputs the emotion information.

In one embodiment of the present invention, the above feature extraction module extracts the emotional features of the face image through a convolutional neural network.

In one embodiment of the present invention, training the above convolutional neural network comprises training on data in which each facial image is described by a multi-dimensional emotion vector; the input of the feature extraction module is the face image, and its output is a multi-dimensional emotion vector.

In one embodiment of the present invention, the output of the feature extraction module and the output of the recognition module are both multi-dimensional vectors, and the multi-dimensional vector covers multiple emotion categories.

In one embodiment of the present invention, the above emotion categories include anger, disgust, fear, happiness, sadness, surprise, or neutrality.

The present invention further provides a neural network-based multi-dimensional emotion discrimination method for face images, applicable to the above neural network-based multi-dimensional emotion discrimination system, with steps comprising: identifying the face region in an image to be detected and using a face detection algorithm to extract the face image from the image to be detected; extracting the emotional features of the face image; identifying the emotional features to obtain emotion information; and outputting the emotion information.

In one embodiment of the present invention, the above emotion information is a multi-dimensional vector, and the multi-dimensional vector covers multiple emotion categories.

In one embodiment of the present invention, the above emotion categories include anger, disgust, fear, happiness, sadness, surprise, or neutrality.

As can be seen from the above technical solution, the neural network-based multi-dimensional emotion discrimination system and method for face images provided by the present invention adapt to different face angles, skin tones and face shapes when extracting emotion feature vectors, represent the multi-dimensional emotion to which the face image belongs, and improve the accuracy of facial emotion analysis.

In order to explain the specific embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed for describing the specific embodiments or the prior art are briefly introduced below. Throughout the drawings, similar elements or parts are generally identified by similar reference numerals. In the drawings, the elements or parts are not necessarily drawn to scale.

The embodiments of the technical solution of the present invention are described in detail below with reference to the accompanying drawings. The following embodiments are only used to explain the technical solution of the present invention more clearly; they are therefore merely examples and cannot be used to limit the protection scope of the present invention. It should be noted that, unless otherwise stated, the technical or scientific terms used in this application shall have the ordinary meanings understood by those skilled in the art to which the present invention belongs.

A neural network-based multi-dimensional emotion discrimination system for face images, as shown in Figs. 1 and 2, comprises: a face localization module 110 for identifying the face region in an image to be detected 200 and extracting the face image 210 from the image to be detected 200; face localization is performed by running a face detection algorithm (not limited to any particular machine learning method) on a picture, a video or the image to be detected 200 in order to extract the face image 210. A feature extraction module 120 extracts the emotional features 220 of the face image 210; a deep learning model is obtained by training a neural network on input face images 210, and at inference time the feature vector corresponding to the face image 210 is taken from the last feature layer of the deep learning model. A recognition module 130 identifies the emotional features 220 to obtain the emotion information 230; from the feature vector describing each input face image 210, the emotion information 230 is obtained through a multi-dimensional emotion classifier (a classifier that can describe the likelihood of different emotions occurring at the same time, with a probability output between 0 and 1 for each category). An output module 140 outputs the emotion information 230.
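As a concrete illustration of how modules 110 to 140 cooperate, the following is a minimal Python sketch of the pipeline under stated assumptions: the Haar-cascade detector, the 48×48 crop size, the flattened-pixel "features" and the linear sigmoid classifier are illustrative stand-ins only, since the patent leaves the face detector open and uses a trained convolutional neural network for feature extraction.

```python
# Minimal sketch of the four-module pipeline: face localization, feature
# extraction, multi-dimensional emotion recognition, and output. The detector
# and the linear "classifier" are placeholders, not the patented components.
import cv2
import numpy as np

EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

def locate_faces(image_bgr):
    """Face localization module: return cropped face images from the input frame."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return [image_bgr[y:y + h, x:x + w] for (x, y, w, h) in boxes]

def extract_features(face_bgr, size=(48, 48)):
    """Feature extraction module (placeholder): resize and normalize the crop.
    In the described system this step is a trained convolutional neural network."""
    face = cv2.resize(face_bgr, size).astype(np.float32) / 255.0
    return face.flatten()

def classify_emotions(features, weights, bias):
    """Recognition module: independent sigmoid scores in [0, 1] per emotion,
    so several emotions can be reported for the same face at the same time."""
    logits = features @ weights + bias
    return 1.0 / (1.0 + np.exp(-logits))

def analyze(image_bgr, weights, bias):
    """Output module: emit one {emotion: probability} mapping per detected face."""
    results = []
    for face in locate_faces(image_bgr):
        probs = classify_emotions(extract_features(face), weights, bias)
        results.append(dict(zip(EMOTIONS, probs.round(3))))
    return results

# Example call with untrained (zero) classifier parameters, purely to show shapes:
# analyze(cv2.imread("photo.jpg"), np.zeros((48 * 48 * 3, 7)), np.zeros(7))
```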

In this embodiment, the face localization module obtains the positions of all face regions in one or more images to be detected, thereby obtaining all face images in the images to be detected. The feature extraction module uses a convolutional neural network to extract the emotional features of each face image. The recognition module maps the emotional features of the face under test to emotion information. The system adapts to different face angles, skin tones and face shapes when extracting emotion feature vectors, representing the multi-dimensional emotion of the face image.

This embodiment mainly solves the following two problems: first, the poor performance of traditional face image processing techniques that use only facial keypoints as features; second, the imprecision of describing a face with only a single emotion category.

In this embodiment, the input of the system is the entire face image and its output is a multi-dimensional emotion judgment. Taking the entire image as input allows subtle changes in the facial muscles to be taken into account, achieving high-precision emotion feature extraction, and describing the output with multiple emotion dimensions allows the emotional response of the face to be described precisely.

In this embodiment, the feature extraction module extracts the emotional features of the face image through a convolutional neural network.

In this embodiment, training the convolutional neural network comprises training on data in which each facial image is described by a multi-dimensional emotion vector; the input of the feature extraction module is the face image, and its output is a multi-dimensional emotion vector. The convolutional neural network of the feature extraction module contains stacked and residual convolution and pooling layers, which makes the extraction of facial emotion features more robust and able to adapt to and learn different face angles, skin tones and face shapes. When the convolutional neural network is trained, each facial image in the training data is described by a multi-dimensional emotion vector. The feature extraction module takes the face image as input and extracts high-dimensional abstract features through the stacked convolutional layers.
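A minimal PyTorch sketch of such a feature extraction network is given below. The layer counts, channel widths, 48×48 input size and seven output dimensions are assumptions made for illustration; the patent only states that the network contains stacked and residual convolution and pooling layers and outputs a multi-dimensional emotion vector.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Two stacked 3x3 convolutions with a skip connection: one way to realize
    the 'stacked and residual' convolution layers described above."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.body(x) + x)

class EmotionFeatureNet(nn.Module):
    """Feature extraction module: whole face image in, multi-dimensional emotion
    vector out (here seven dimensions, one per emotion category)."""
    def __init__(self, num_emotions=7):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                  # pooling layer
            ResidualBlock(32),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            ResidualBlock(64),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, num_emotions)

    def forward(self, x):
        feats = self.stem(x).flatten(1)          # last feature layer (feature vector)
        return torch.sigmoid(self.head(feats))   # each dimension lies in [0, 1]

# Example: a batch of two 48x48 RGB face crops -> two 7-dimensional emotion vectors.
vectors = EmotionFeatureNet()(torch.rand(2, 3, 48, 48))
print(vectors.shape)  # torch.Size([2, 7])
```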

In this embodiment, the recognition module trains its neural network with a target function so that the recognition module can describe a multi-dimensional emotion vector. Owing to the design of the neural networks of the feature extraction module and the recognition module, the present invention can provide accurate emotion vector analysis and prediction for highly variable face images.
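The patent does not name the target function. One choice consistent with independent per-category probabilities between 0 and 1 is a per-dimension binary cross-entropy; the sketch below uses a stand-in linear model and random data purely for illustration.

```python
import torch
import torch.nn as nn

# Stand-in for the feature extraction and recognition network; any module that
# maps a face image to one logit per emotion dimension would fit here.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 48 * 48, 7))

# Each training face carries a 7-dimensional emotion label with values in [0, 1],
# so a per-dimension binary cross-entropy is one natural target function.
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

faces = torch.rand(8, 3, 48, 48)    # dummy batch of face crops
labels = torch.rand(8, 7)           # dummy multi-dimensional emotion labels

for step in range(3):               # a few illustrative optimization steps
    optimizer.zero_grad()
    loss = criterion(model(faces), labels)
    loss.backward()
    optimizer.step()
    print(f"step {step}: loss {loss.item():.4f}")
```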

In this embodiment, the output of the feature extraction module and the output of the recognition module are both multi-dimensional vectors, and the multi-dimensional vector covers multiple emotion categories. The emotion categories include angry, disgusted, scared, happy, sad, surprised or neutral; the emotion categories the invention can handle are not limited to these.
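To make the difference between a single-label reading and the multi-dimensional reading concrete, the short snippet below uses a hypothetical seven-dimensional output vector; the values and the 0.5 threshold are illustrative assumptions, not figures from the patent.

```python
EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

# Hypothetical output vector: a face read as mostly sad with some anger mixed in.
vector = [0.62, 0.08, 0.11, 0.02, 0.81, 0.05, 0.10]

# Single-label reading (the coarse description the invention argues against):
print(max(zip(vector, EMOTIONS))[1])                      # 'sad'

# Multi-dimensional reading: every category above a threshold is reported.
print([e for p, e in zip(vector, EMOTIONS) if p >= 0.5])  # ['angry', 'sad']
```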

In this embodiment, the neural network-based multi-dimensional emotion discrimination method for face images, applicable to the above neural network-based multi-dimensional emotion discrimination system, comprises: identifying the face region in the image to be detected and extracting the face image from the image to be detected; extracting the emotional features of the face image; identifying the emotional features to obtain emotion information; and outputting the emotion information.

In this embodiment, the method makes robust predictions for images of all kinds of faces, regardless of face angle and lighting. It can output multi-dimensional emotion results that precisely describe the emotional changes of a face, effectively solving the problem of facial emotion recognition.

In this embodiment, the emotion information is a multi-dimensional vector, and the multi-dimensional vector covers multiple emotion categories.

In this embodiment, the emotion categories include angry, disgusted, scared, happy, sad, surprised or neutral.

This embodiment further includes the following applications:

Application scenario 1: advertisement feedback. With a screen as the carrier, the system can be placed in public spaces or on personal computers. While customers watch an advertisement, the system analyzes the advertisement content together with the customers' facial emotional responses to learn how attractive and interesting the advertisement is to the audience, so as to predict product reception or adjust the advertisement content.

Application scenario 2: retail stores. With a camera as the carrier placed at the shelves, the system observes customers' emotional responses while they pick up items; the placement of items on the shelves can then be adjusted and customers' favorite products identified, supporting sales analysis and better sales strategies.

Application scenario 3: mobile apps. With a mobile app as the carrier, the system can analyze the user's emotions while the user watches videos or uses social software, letting an intelligent chatbot attend to the user's mood and improving the engagement between the chatbot and the user.

In summary, the neural network-based multi-dimensional emotion discrimination system and method for face images provided by the present invention adapt to different face angles, skin tones and face shapes when extracting emotion feature vectors, represent the multi-dimensional emotion to which the face image belongs, and improve the accuracy of facial emotion analysis.

Although the present invention is disclosed through the foregoing embodiments, they are not intended to limit the invention. Any person skilled in the art may, without departing from the spirit and scope of the present invention, make modifications and refinements, and such equivalent substitutions remain within the scope of patent protection of the present invention.

110‧‧‧Face localization module

120‧‧‧Feature extraction module

130‧‧‧Recognition module

140‧‧‧Output module

200‧‧‧Image to be detected

210‧‧‧Face image

220‧‧‧Emotional features

230‧‧‧Emotion information

[Fig. 1] is a structural block diagram of the neural network-based multi-dimensional emotion discrimination system for face images. [Fig. 2] illustrates the image processing flow in the neural network-based multi-dimensional emotion discrimination method for face images.

Claims (8)

1. A neural network-based multi-dimensional emotion discrimination system for face images, characterized by comprising: a face localization module for identifying the face region in an image to be detected and extracting, with a face detection algorithm, the face image from the image to be detected; a feature extraction module for extracting the emotional features of the face image; a recognition module for identifying the emotional features to obtain emotion information; and an output module for outputting the emotion information.

2. The neural network-based multi-dimensional emotion discrimination system for face images according to claim 1, wherein the feature extraction module extracts the emotional features of the face image through a convolutional neural network.

3. The neural network-based multi-dimensional emotion discrimination system for face images according to claim 2, wherein training the convolutional neural network comprises training on data in which each facial image is described by a multi-dimensional emotion vector; the input of the feature extraction module is the face image, and its output is a multi-dimensional emotion vector.

4. The neural network-based multi-dimensional emotion discrimination system for face images according to claim 1, wherein the output of the feature extraction module and the output of the recognition module are both multi-dimensional vectors, and the multi-dimensional vector covers multiple emotion categories.

5. The neural network-based multi-dimensional emotion discrimination system for face images according to claim 4, wherein the emotion categories include angry, disgusted, scared, happy, sad, surprised or neutral.

6. A neural network-based multi-dimensional emotion discrimination method for face images, applicable to the neural network-based multi-dimensional emotion discrimination system for face images of claim 1, comprising: identifying the face region in an image to be detected and extracting, with a face detection algorithm, the face image from the image to be detected; extracting the emotional features of the face image; identifying the emotional features to obtain emotion information; and outputting the emotion information.

7. The neural network-based multi-dimensional emotion discrimination method for face images according to claim 6, wherein the emotion information is a multi-dimensional vector, and the multi-dimensional vector covers multiple emotion categories.
8. The neural network-based multi-dimensional emotion discrimination method for face images according to claim 7, wherein the emotion categories include angry, disgusted, scared, happy, sad, surprised or neutral.
TW107125104A 2017-07-21 2018-07-20 Multi-dimensional emotion discrimination system and method for face image based on neural network TW201909036A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710602218.8A CN107392151A (en) 2017-07-21 2017-07-21 Face image various dimensions emotion judgement system and method based on neutral net
CN201710602218.8

Publications (1)

Publication Number Publication Date
TW201909036A true TW201909036A (en) 2019-03-01

Family

ID=60335706

Family Applications (1)

Application Number Title Priority Date Filing Date
TW107125104A TW201909036A (en) 2017-07-21 2018-07-20 Multi-dimensional emotion discrimination system and method for face image based on neural network

Country Status (2)

Country Link
CN (1) CN107392151A (en)
TW (1) TW201909036A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI767775B (en) * 2021-06-30 2022-06-11 國立陽明交通大學 Image processing based emotion recognition system and method

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10610109B2 (en) 2018-01-12 2020-04-07 Futurewei Technologies, Inc. Emotion representative image to derive health rating
CN108197667A (en) * 2018-01-30 2018-06-22 安徽斛兵信息科技有限公司 Personal abnormal emotion detection method and device based on dialogue
TWI780333B (en) 2019-06-03 2022-10-11 緯創資通股份有限公司 Method for dynamically processing and playing multimedia files and multimedia play apparatus
CN110569741A (en) * 2019-08-19 2019-12-13 昆山琪奥智能科技有限公司 Expression recognition system based on artificial intelligence
CN110909609A (en) * 2019-10-26 2020-03-24 湖北讯獒信息工程有限公司 Expression recognition method based on artificial intelligence
CN110796150B (en) * 2019-10-29 2022-09-16 中山大学 Image emotion recognition method based on emotion significant region detection
CN111108508B (en) * 2019-12-23 2023-10-13 深圳市优必选科技股份有限公司 Facial emotion recognition method, smart device and computer-readable storage medium
CN119107719A (en) * 2024-08-14 2024-12-10 湖南银行保险设备有限公司 Banking database security control system based on face recognition and dynamic password lock

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6994555B2 (en) * 2002-04-18 2006-02-07 Educcomm Llc Play cube to aid in recognizing and developing various emotional states
CN105069447B (en) * 2015-09-23 2018-05-29 河北工业大学 A kind of recognition methods of human face expression
CN106257489A (en) * 2016-07-12 2016-12-28 乐视控股(北京)有限公司 Expression recognition method and system
CN106372622A (en) * 2016-09-30 2017-02-01 北京奇虎科技有限公司 Facial expression classification method and device

Also Published As

Publication number Publication date
CN107392151A (en) 2017-11-24

Similar Documents

Publication Publication Date Title
TW201909036A (en) Multi-dimensional emotion discrimination system and method for face image based on neural network
US20240249504A1 (en) Systems and methods for improved facial attribute classification and use thereof
CN107679490B (en) Method and apparatus for detecting image quality
Rafique et al. Age and gender prediction using deep convolutional neural networks
CN107633207A (en) AU characteristic recognition methods, device and storage medium
CN108388876A (en) Image recognition method, device and related equipment
CN108229559B (en) Apparel inspection method, apparatus, electronic device, program and medium
CN107742107A (en) Facial image sorting technique, device and server
CN109409994A (en) The methods, devices and systems of analog subscriber garments worn ornaments
WO2014068567A1 (en) Method and system for predicting personality traits, capabilities and suggested interactions from images of a person
CN107463888A (en) Face mood analysis method and system based on multi-task learning and deep learning
Alrihaili et al. Music recommender system for users based on emotion detection through facial features
Gorbova et al. Going deeper in hidden sadness recognition using spontaneous micro expressions database
Tanveez et al. Facial emotional recognition system using machine learning
De Carolis et al. Soft biometrics for social adaptive robots
Singla et al. Age and gender detection using Deep Learning
Singh et al. Face emotion identification by fusing neural network and texture features: facial expression
Aslam et al. Gender classification based on isolated facial features and foggy faces using jointly trained deep convolutional neural network
Monica et al. Face and emotion recognition from real-time facial expressions using deep learning algorithms
Alugupally et al. Analysis of landmarks in recognition of face expressions
Mishra et al. Enhancing face emotion recognition with facs-based synthetic dataset using deep learning models
Khalifa et al. Deep multi-stage approach for emotional body gesture recognition in job interview
Yuvchenko et al. Human emotion recognition system using deep learning algorithms
Lopez-de-Arenosa et al. CBR tagging of emotions from facial expressions
KR20240011324A (en) Customized Makeup Techniques Recommended Display System for Individuals' Daily Emotional Information and Facial Skin Conditions