WO2019029261A1 - Micro-expression recognition method, device and storage medium - Google Patents
Micro-expression recognition method, device and storage medium
- Publication number
- WO2019029261A1 PCT/CN2018/090990 CN2018090990W
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- expression
- micro
- feature information
- video
- preset
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
Definitions
- the present application relates to the field of communications technologies, and in particular, to a micro-expression recognition method, apparatus, and storage medium.
- micro-expressions generally last only 1/25 to 1/5 of a second. Although a subconscious micro-expression may last only a moment, it easily exposes a person's true emotions. Therefore, micro-expression recognition is extraordinarily useful for analyzing a person's true mental state. With the rapid development of computer vision, pattern recognition and related disciplines, automatic micro-expression recognition technology has become quite mature. Research on automatic micro-expression recognition has developed greatly in recent years, and several standard micro-expression libraries have been established at home and abroad.
- the micro-expression libraries used by current micro-expression recognition methods are established under unnatural conditions such as expression suppression, differ considerably from people's actual life scenes, and cannot accurately reflect the true state of micro-expressions. Therefore, a micro-expression library built by capturing people's micro-expressions in their real-life state is needed, together with a recognition method, determined from that library, that better reflects the true state of micro-expressions.
- the main purpose of the present application is to provide a micro-expression recognition method, device and storage medium, which aim to solve the technical problem that the prior art cannot accurately reflect the actual state of micro-expressions.
- the present application provides a micro-expression recognition method, the method comprising the following steps:
- before the step of performing image recognition on the video to be recognized, obtaining a face in the to-be-identified video, and dividing the face according to a preset area, the method further includes:
- the step of obtaining the face part in the to-be-identified video specifically includes:
- the face portion is segmented to eliminate video segments that do not contain micro-expressions.
- the step of extracting the expression feature information of each preset area in the to-be-identified video includes:
- the contour feature information, the texture feature information, and the area feature information are respectively used as expression feature information corresponding to the preset region.
- before acquiring the to-be-identified video, the method further includes:
- a micro-expression model is established, and the micro-expression model is trained through the mapping relationship to form a preset micro-expression model.
- the method further includes:
- the establishing a mapping relationship between the micro-expression and the expression feature information includes:
- before the step of performing expression recognition on the sample video, the method further includes:
- the storing the mapping relationship to obtain a micro-expression library further includes:
- the mapping relationship is stored according to character type to obtain a micro-expression library for each type.
- the present application further provides a micro-expression recognition device, including a memory, a processor, and a micro-expression recognition program stored on the memory and operable on the processor; when the program is executed by the processor, the steps of the micro-expression recognition method described above are implemented.
- the present application further provides a storage medium on which a micro-expression recognition program is stored; when the micro-expression recognition program is executed by a processor, the steps of the micro-expression recognition method described above are implemented.
- the present invention performs image recognition on the video to be recognized, obtains the face in the to-be-identified video, and divides the face according to preset areas; extracts the expression feature information of each preset area from the to-be-identified video; compares the expression feature information with a preset micro-expression model; and determines the micro-expression in the to-be-identified video according to the comparison result. Since the video to be recognized in this embodiment is acquired in a natural state, and expression feature information is extracted for each preset area of the face, the recognition of micro-expressions is more accurate and better reflects their true state.
- FIG. 1 is a schematic structural diagram of a micro-expression recognition device in a hardware operating environment according to an embodiment of the present application
- FIG. 2 is a schematic flowchart of a first embodiment of a micro-expression recognition method according to the present application
- FIG. 3 is a schematic flowchart of a second embodiment of a micro-expression recognition method according to the present application.
- FIG. 4 is a schematic flow chart of a third embodiment of a micro-expression recognition method according to the present application.
- FIG. 1 is a schematic structural diagram of a micro-expression recognition device in a hardware operating environment according to an embodiment of the present application.
- the micro-expression recognition device may include a processor 1001, such as a CPU, a communication bus 1002, a user interface 1003, a network interface 1004, and a memory 1005.
- the communication bus 1002 is used to implement connection communication between these components.
- the user interface 1003 can include a display, and the optional user interface 1003 can also include a standard wired interface, a wireless interface.
- the network interface 1004 can optionally include a standard wired interface, a wireless interface (such as a WI-FI interface).
- the memory 1005 may be a high-speed RAM memory or a stable non-volatile memory, such as disk storage.
- the memory 1005 can also optionally be a storage device independent of the aforementioned processor 1001.
- FIG. 1 does not constitute a limitation of the micro-expression recognition device, which may include more or fewer components than illustrated, combine some components, or use a different component arrangement.
- the memory 1005, as a storage medium, may include an operating system, a network communication module, a user interface module, and a micro-expression recognition program.
- the network interface 1004 is mainly used to connect to other servers for data communication with the other servers;
- the user interface 1003 is mainly used for connecting to the user terminal and performing data communication with the user terminal;
- the micro-expression recognition device calls the micro-expression recognition program stored in the memory 1005 by the processor 1001, and performs the following operations:
- processor 1001 may call the micro-expression recognition program stored in the memory 1005, and further perform the following operations:
- processor 1001 may call the micro-expression recognition program stored in the memory 1005, and further perform the following operations:
- the face portion is segmented to eliminate video segments that do not contain micro-expressions.
- processor 1001 may call the micro-expression recognition program stored in the memory 1005, and further perform the following operations:
- the contour feature information, the texture feature information, and the area feature information are respectively used as expression feature information corresponding to the preset region.
- processor 1001 may call the micro-expression recognition program stored in the memory 1005, and further perform the following operations:
- a micro-expression model is established, and the micro-expression model is trained through the mapping relationship to form a preset micro-expression model.
- processor 1001 may call the micro-expression recognition program stored in the memory 1005, and further perform the following operations:
- the establishing a mapping relationship between the micro-expression and the expression feature information includes:
- processor 1001 may call the micro-expression recognition program stored in the memory 1005, and further perform the following operations:
- the storing the mapping relationship to obtain a micro-expression library further includes:
- mapping relationship is stored according to the character type to obtain each type of micro-expression library.
- image recognition is performed on the video to be recognized, the face in the to-be-identified video is obtained, and the face is divided according to preset areas; the expression feature information of each preset area is extracted from the to-be-identified video; the expression feature information is compared with the preset micro-expression model, and the micro-expression in the to-be-identified video is determined according to the comparison result. Since the video to be recognized in this embodiment is acquired in a natural state, and expression feature information is extracted for each preset area of the face, the recognition of micro-expressions is more accurate and better reflects their true state.
- FIG. 2 is a schematic flowchart of a first embodiment of a micro-expression recognition method according to the present application.
- the micro-expression recognition method comprises the following steps:
- Step S10 performing image recognition on the video to be recognized, obtaining a face in the to-be-identified video, and dividing the face according to a preset area;
- existing libraries store micro-expressions captured in unnatural states, such as suppressed expressions, and cannot fully reflect the true state of micro-expressions.
- the micro-expression recognition method adopted in this embodiment uses micro-expressions in a natural state: the micro-expression library is established from micro-expressions captured in a natural state, and the micro-expressions to be recognized are identified using the established library.
- the micro-expressions used in this embodiment are collected in a natural state, rather than being collected in a suppressed unnatural state.
- the to-be-identified video containing the micro-expression in the natural state is acquired, the expression feature information in the to-be-identified video is extracted, and the micro-expression in the to-be-identified video is identified according to the expression feature information.
- the feature information of each part of the face will be extracted.
- preset areas of the face that can display micro-expressions are selected in advance; the preset areas include a facial-features area, a nasolabial area, and an eyelid area. The video to be recognized is decomposed into continuous single-frame images, the face part in the to-be-identified video is obtained, and the face is divided according to the preset areas, so as to facilitate the subsequent extraction of the expression feature information of each preset area.
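The division step above can be sketched as follows. The fractional region boundaries below are illustrative assumptions, since the text does not give numeric boundaries for the facial-features, nasolabial, and eyelid areas:

```python
def divide_face(face_box):
    """Split a face bounding box (x, y, w, h) into the three preset
    regions named in the text. The fractional boundaries are
    illustrative assumptions, not values from the patent."""
    x, y, w, h = face_box
    return {
        # eyelid region: a horizontal band around the eyes
        "eyelid": (x, y + int(0.20 * h), w, int(0.20 * h)),
        # nasolabial region: the band between nose and mouth corners
        "nasolabial": (x, y + int(0.45 * h), w, int(0.25 * h)),
        # facial-features region: the inner face, used for contour work
        "facial_features": (x + int(0.10 * w), y + int(0.10 * h),
                            int(0.80 * w), int(0.80 * h)),
    }

regions = divide_face((100, 50, 200, 240))
print(regions["eyelid"])  # (100, 98, 200, 48)
```

Each region is returned in the same (x, y, w, h) convention as the input box, so the same cropping code can be reused per region.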
- the method further includes:
- the external environment also affects the micro-expression. Even with the same facial expression information, different environments still produce different micro-expressions. For example, a person may pose the same smile in two environments: in a bright, softly colored environment, the smile represents a quiet, comfortable micro-expression, whereas in a dark, narrow, dirty environment, the same smile represents a bitter, self-deprecating micro-expression. Therefore, this embodiment also extracts environment feature information and combines it with the expression feature information to determine the micro-expression in the to-be-identified video more accurately.
- the method further includes:
- the duration of a typical micro-expression is 1/25 to 1/5 of a second, while the pre-acquired video to be recognized is generally long, making the fleeting micro-expression difficult to extract. Processing the video to be recognized into clips of 1 to 2 seconds does not damage the micro-expression segments and also makes it easier to extract the expression feature information from the to-be-identified video.
- the to-be-identified video also contains background environment; when the facial expression feature information is extracted, the micro-expression is not prominent in the image, which degrades extraction. Therefore, after the environment feature information is extracted, the to-be-identified video is pre-processed, including cropping and segmentation, and converted into a micro-expression video of 1 to 2 seconds.
- the video to be recognized is cropped according to the dimensions of the face: for example, a rectangular area centered on the nose, with 1.5 times the face length as its height and 1.5 times the face width as its width, is formed, and the image of the to-be-identified video is cropped according to this rectangular area to obtain a face video.
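A minimal sketch of the crop-rectangle computation described above; the nose-centering and 1.5× factor come from the text, while the clamping to the frame edges and the function name are added assumptions:

```python
def crop_rect(nose, face_w, face_h, frame_w, frame_h, scale=1.5):
    """Rectangle centered on the nose, `scale` times the face size,
    clamped to the frame boundaries (clamping is an added assumption)."""
    cx, cy = nose
    w, h = int(scale * face_w), int(scale * face_h)
    x0 = max(0, cx - w // 2)
    y0 = max(0, cy - h // 2)
    x1 = min(frame_w, x0 + w)
    y1 = min(frame_h, y0 + h)
    return x0, y0, x1, y1

# nose at (320, 240) in a 640x480 frame, face 200 px wide, 260 px tall
print(crop_rect((320, 240), 200, 260, 640, 480))  # (170, 45, 470, 435)
```

The same rectangle is then applied to every frame of the video so the face stays framed consistently across the clip.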
- the face video is segmented, video segments that do not contain micro-expressions are removed, and the micro-expression video is obtained.
- the micro-expression video of the face is obtained, which facilitates the subsequent extraction of the expression feature information.
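One crude way to discard segments without micro-expressions is to keep only short windows that contain facial motion, assuming per-frame inter-frame differences have already been computed. The motion threshold and the non-overlapping 1-second windows are illustrative assumptions, not the patent's method:

```python
def micro_expression_windows(frame_diffs, fps=25, thresh=5.0):
    """Return (start, end) frame indices of 1-second windows whose peak
    inter-frame difference exceeds `thresh`. A crude stand-in for the
    segmentation step; threshold and window length are assumptions."""
    windows = []
    for start in range(0, len(frame_diffs) - fps + 1, fps):
        window = frame_diffs[start:start + fps]
        if max(window) > thresh:
            windows.append((start, start + fps))
    return windows

# 3 seconds of 25 fps video; motion only during the middle second
diffs = [0.5] * 25 + [8.0] * 25 + [0.3] * 25
print(micro_expression_windows(diffs))  # [(25, 50)]
```

In practice the frame differences would be computed only inside the cropped face rectangle, so background motion does not trigger a false window.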
- Step S20 extracting expression feature information of each preset area from the to-be-identified video
- the expression feature information refers to a set of data that reflects the process of the micro-expression changing, including the change duration and degree of change of each preset region of the face, such as the duration of eyebrow changes or the degree of change of the eye contour.
- the human micro-expression is presented by all parts of the face.
- the change of a single part cannot fully explain a person's micro-expression. For example, when "happy", a person not only raises the corners of the mouth: the cheeks lift, wrinkles appear, the eyelids contract, and "crow's feet" form at the corners of the eyes. These parts change together to produce the "happy" micro-expression.
- the parts affecting the human micro-expression mainly include the facial features area, the nasolabial area, and the eyelid area; therefore, in the present embodiment, these parts are selected as the preset areas.
- the to-be-identified video is clipped and segmented, and converted into a micro-expression video, and extracting the expression feature information in the micro-expression video is more convenient and quick.
- the expression feature information is extracted, that is, the change duration of each preset area and the degree of change of each preset area are extracted.
- Step S30 comparing the expression feature information with a preset micro-expression model, and determining a micro-expression in the to-be-identified video according to the comparison result.
- a preset micro-expression model is established in advance; when expression feature information is input into the preset micro-expression model, the model identifies the input expression feature information, obtains the micro-expression corresponding to it, and outputs that micro-expression, i.e., the micro-expression in the to-be-identified video is recognized.
- image recognition is performed on the video to be recognized, the face in the to-be-identified video is obtained, and the face is divided according to preset areas; the expression feature information of each preset area is extracted from the to-be-identified video; the expression feature information is compared with the preset micro-expression model, and the micro-expression in the to-be-identified video is determined according to the comparison result. Since the video to be recognized in this embodiment is acquired in a natural state, and expression feature information is extracted for each preset area of the face, the recognition of micro-expressions is more accurate and better reflects their true state.
- FIG. 3 is a schematic flowchart of a second embodiment of the micro-expression recognition method according to the present application. Based on the embodiment shown in FIG. 2, a second embodiment of the micro-expression recognition method of the present application is proposed.
- step S20 specifically includes:
- Step S201 performing contour recognition on the facial features area, and acquiring contour feature information of the facial features area;
- the facial features area is a main area that affects a human micro-expression, and the facial features area has a clear outline.
- by performing contour recognition on the facial features area, the contour feature information of that area can be acquired; the contour feature information includes the duration of changes in the facial-feature contours and the degree of change of the contours.
- the method for the contour recognition may be an edge detection algorithm, which is not limited in this embodiment.
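A toy illustration of the gradient idea behind edge-detection-based contour recognition; a real system would likely use a full operator such as Canny or Sobel rather than this single-pixel central difference:

```python
def edge_strength(gray, x, y):
    """Central-difference gradient magnitude at pixel (x, y) of a 2-D
    grayscale image; a toy stand-in for the edge-detection step."""
    gx = (gray[y][x + 1] - gray[y][x - 1]) / 2.0  # horizontal gradient
    gy = (gray[y + 1][x] - gray[y - 1][x]) / 2.0  # vertical gradient
    return (gx * gx + gy * gy) ** 0.5

# an 8x8 patch with a vertical step edge between columns 3 and 4
patch = [[0] * 4 + [255] * 4 for _ in range(8)]
print(edge_strength(patch, 3, 4))  # 127.5 at the edge
print(edge_strength(patch, 1, 4))  # 0.0 in the flat region
```

Thresholding this magnitude over the facial-features area yields a contour map whose change across frames gives the duration and degree of contour change mentioned above.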
- Step S202 performing texture analysis on the nasolabial region to obtain texture feature information of the nasolabial region
- the nasolabial region is an important region that affects the micro-expression of the human, and the nasolabial region has a texture.
- by performing texture analysis on the nasolabial region, the texture feature information of the nasolabial region can be obtained.
- the texture characteristic information includes a duration of change of the nasolabial region and a degree of change of the nasolabial fold.
- the method of the texture analysis may be a grayscale transformation or a binarization, which is not limited in this embodiment.
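A sketch of a binarization-based texture measure for the nasolabial region; the transition-count metric below is an illustrative assumption standing in for whatever texture statistic an implementation would actually use:

```python
def fold_texture_score(patch, thresh=128):
    """Binarize a grayscale patch and count black/white transitions
    along each row -- a crude texture measure for nasolabial-fold
    depth (the metric itself is an illustrative assumption)."""
    score = 0
    for row in patch:
        bits = [1 if px > thresh else 0 for px in row]
        score += sum(abs(a - b) for a, b in zip(bits, bits[1:]))
    return score

# a striped patch: deep folds produce many light/dark transitions
patch = [[200, 40, 200, 40],
         [200, 40, 200, 40]]
print(fold_texture_score(patch))  # 6
```

Tracking this score frame by frame gives the change duration and degree of change of the nasolabial fold described in the text.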
- Step S203 acquiring area feature information of the eyelid region
- the eyelid region is also an important region affecting the human micro-expression
- the skin of the eyelid region is nearly planar;
- the area feature information of the eyelid region can be obtained by calculating the area of the eyelid region in each frame of the video image; the area feature information includes the change duration of the eyelid region and the degree of change of the eyelid area.
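The area computation described above can be sketched as follows, assuming a binary eyelid mask per frame; the 5% deviation threshold and the choice of the first frame as baseline are illustrative assumptions:

```python
def eyelid_area_features(masks, rel_thresh=0.05):
    """Per-frame pixel area of the eyelid region plus the two derived
    quantities named in the text: change duration (frames whose area
    deviates from the first frame) and degree of change (maximum
    relative deviation). The 5% threshold is an assumption."""
    areas = [sum(sum(row) for row in m) for m in masks]
    base = areas[0]
    deviations = [abs(a - base) / base for a in areas]
    duration = sum(1 for d in deviations if d > rel_thresh)
    degree = max(deviations)
    return areas, duration, degree

open_eye = [[1] * 4 for _ in range(4)]                # 16 px visible
half_closed = [[1] * 4, [1] * 4, [0] * 4, [0] * 4]    # 8 px visible
areas, duration, degree = eyelid_area_features([open_eye, open_eye, half_closed])
print(areas, duration, degree)  # [16, 16, 8] 1 0.5
```

Dividing the frame count by the video frame rate converts the duration into seconds, matching the 1/25 to 1/5 second scale of a micro-expression.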
- Step S204 The contour feature information, the texture feature information, and the area feature information are respectively used as the expression feature information corresponding to the preset region.
- the contour feature information is used as the expression feature information of the facial features area, the texture feature information as that of the nasolabial region, and the area feature information as that of the eyelid region.
- the expression feature information is summarized, and the expression feature information of all the preset regions is summarized into the expression feature information corresponding to the to-be-identified video.
- different extraction methods are used for the expression feature information of each preset region, which better captures the change process of the micro-expression and provides the basis for subsequently identifying the micro-expressions in the to-be-identified video according to the expression feature information.
- FIG. 4 is a schematic flowchart of a third embodiment of a micro-expression recognition method according to the present application. Based on the embodiment shown in FIG. 2, a third embodiment of the micro-expression recognition method of the present application is proposed.
- before step S10, the method further includes:
- Step S001 classify the sample video according to a character type in the sample video, where the character type includes at least one of each preset age group, gender, and identity type;
- the embodiment provides a micro-expression recognition method, which is applied to the scenario of establishing the micro-expression library and establishing the preset micro-expression model. Pre-establishing a mapping relationship between the micro-expression and the expression feature information, and saving the mapping relationship to obtain a micro-expression library, wherein the micro-expression and the expression feature information in each set of mapping relationships are acquired according to the same sample video.
- the sample video adopts a video containing a micro-expression in a natural state, and constructs a mapping relationship between the micro-expression and the expression feature information through the micro-expression contained therein.
- the micro-expression in the sample video and the expression feature information uniquely corresponding to that micro-expression are obtained, and the mapping relationship between the micro-expression and the expression feature information corresponding to the sample video is established.
- the sample video is classified according to the type of the character, and the feature extraction of the classified video can finally obtain the micro-expression library of each character type.
- for example, the sample video is first divided into male sample videos and female sample videos according to the gender of the person in the sample video; features are then extracted from the male and female sample videos separately; and finally a male micro-expression library and a female micro-expression library are obtained.
- the sample video is classified according to each preset age group and the identity of the person, and the micro-expression library of each preset age group and the micro-expression library of each identity are obtained.
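The per-type classification above amounts to grouping sample records by a character-type key; the record fields below are assumed for illustration, not taken from the patent:

```python
from collections import defaultdict

def classify_samples(samples, keys=("gender", "age_group", "identity")):
    """Group sample records by character type; each distinct key tuple
    gets its own sub-library, mirroring the per-type micro-expression
    libraries described above."""
    libraries = defaultdict(list)
    for s in samples:
        libraries[tuple(s[k] for k in keys)].append(s["video"])
    return dict(libraries)

samples = [
    {"video": "a.mp4", "gender": "male", "age_group": "20-30", "identity": "student"},
    {"video": "b.mp4", "gender": "female", "age_group": "20-30", "identity": "student"},
    {"video": "c.mp4", "gender": "male", "age_group": "20-30", "identity": "student"},
]
libs = classify_samples(samples)
print(libs[("male", "20-30", "student")])  # ['a.mp4', 'c.mp4']
```

Restricting `keys` to a single field (e.g. `("gender",)`) reproduces the male/female split given as the example in the text.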
- Step S002 performing expression recognition on the sample video to determine a micro-expression in the sample video
- the micro-expression in the sample video is determined by performing expression recognition on the sample video.
- the six basic expressions of the human being are pre-set as the expression category, so that the recognized expressions belong to the expression category, and the six basic expressions include surprise, disgust, anger, fear, sadness and pleasure. All human expressions can be included in these six basic expression ranges. Of course, it is also possible to subdivide the expression into more types of expressions as the expression category, which is not limited in this embodiment.
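A minimal sketch of constraining recognized labels to the six preset categories named above; the synonym table is an illustrative assumption:

```python
BASIC_EXPRESSIONS = ("surprise", "disgust", "anger", "fear", "sadness", "pleasure")

def to_basic_category(label):
    """Map a recognized label onto one of the six preset expression
    categories; the synonym table is an illustrative assumption."""
    synonyms = {"happiness": "pleasure", "joy": "pleasure", "fright": "fear"}
    label = synonyms.get(label, label)
    if label not in BASIC_EXPRESSIONS:
        raise ValueError(f"unknown expression: {label}")
    return label

print(to_basic_category("joy"))  # pleasure
```

A finer-grained scheme would simply extend `BASIC_EXPRESSIONS` and the synonym table, matching the text's note that more expression types may be used as categories.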
- Step S003 Extracting environmental feature information in the sample video
- the environment has an influence on the micro-expression
- the micro-expression in the sample video is determined by the environment feature information and the expression feature information to be more accurate.
- Step S004 performing image recognition on the sample video, obtaining a face part in the sample video, and dividing a face part in the sample video according to a preset area;
- Step S005 extract expression feature information of each preset area from the sample video.
- the process of performing image recognition on the sample video, obtaining the face portion in the sample video, and dividing the face portion according to the preset regions is consistent with the process of performing image recognition on the video to be recognized and dividing the face according to the preset regions; likewise, the process of extracting the expression feature information of each preset area from the sample video is consistent with the process of extracting it from the to-be-identified video.
- Step S006 establishing a mapping relationship between the micro-expression and the expression feature information and environment feature information, and storing the mapping relationship to obtain a micro-expression library;
- a mapping relationship between the micro-expression and the expression feature information and the environment feature information may be established.
- the mapping relationship is stored to obtain a micro-expression library, and the micro-expression library includes a mapping relationship between the micro-expressions of the character types and the expression feature information and the environment feature information.
- Step S007 Establish a micro-expression model, and train the micro-expression model through the mapping relationship to form a preset micro-expression model.
- the data such as the mapping relationships stored in the micro-expression library are classified according to character type, but the data stored within each class is scattered and lacks system; training the model constructs a context for the data and completes its organization.
- the micro-expression recognition can be conveniently and quickly performed on the to-be-identified video through the trained preset micro-expression model.
- a micro-expression model is pre-established and trained through the mapping relationships to improve its recognition accuracy; since each mapping relationship is a known relationship obtained in advance, it can be used to train the micro-expression model. When the recognition accuracy of the micro-expression model reaches a certain standard, it becomes the preset micro-expression model.
- the specific process of training the micro-expression model through the mapping relationships to form the preset micro-expression model is: a set of mapping relationships is input into the micro-expression model; the micro-expression model derives a recognition result for the sample video from the environment feature information and expression feature information in the mapping relationship; and the recognition result is compared with the micro-expression in the mapping relationship to obtain a comparison result;
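The compare-and-score loop just described can be sketched as follows; `evaluate_model`, the mapping-tuple layout, and the toy model are all assumptions for illustration, not the patent's implementation:

```python
def evaluate_model(model, mappings):
    """Run each stored mapping relationship through the model and
    compare the prediction with the labelled micro-expression, as the
    training step above describes. `model` is any callable taking
    (environment_features, expression_features)."""
    correct = 0
    for env_feat, expr_feat, label in mappings:
        prediction = model(env_feat, expr_feat)
        if prediction == label:
            correct += 1
    return correct / len(mappings)

# toy model: bright environment + raised mouth corner -> "happy"
toy = lambda env, expr: "happy" if env["bright"] and expr["mouth_up"] else "neutral"
mappings = [
    ({"bright": True}, {"mouth_up": True}, "happy"),
    ({"bright": False}, {"mouth_up": True}, "neutral"),
    ({"bright": True}, {"mouth_up": False}, "happy"),
]
print(evaluate_model(toy, mappings))  # accuracy over the three mappings
```

Once this accuracy reaches the standard mentioned above, the model would be frozen as the preset micro-expression model; otherwise it remains a trial model and is trained again.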
- the recognition accuracy after this training may still not reach the standard, in which case the model cannot yet serve as the preset micro-expression model and only a trial model is obtained. Therefore, when the trial model initially performs micro-expression recognition on a trial video, the trial model is trained a second time through the mapping relationship corresponding to the trial video, so that its recognition accuracy can reach the standard.
- the steps of the secondary training specifically include:
- when the output determination result is true, the connection weights of the trial model are increased, and the correspondence between the environment feature information, the expression feature information, and the micro-expression in the trial video is established; the correspondence is saved in the micro-expression library, thereby expanding the micro-expression library;
- a sample video containing micro-expressions in a natural state is obtained, the sample video is classified by character type, the environment feature information and expression feature information of the sample video are extracted, and the mapping relationship between the micro-expression and the environment and expression feature information is established; a micro-expression library containing the mapping relationships of each preset type and a micro-expression model are established, and the micro-expression model is trained through the mapping relationships to improve its recognition accuracy, so that micro-expressions are recognized by the preset micro-expression model.
- the embodiment of the present application further provides a storage medium, where the micro-expression recognition program is stored, and when the micro-expression recognition program is executed by the processor, the following operations are implemented:
- micro-expression recognition program when executed by the processor, the following operations are also implemented:
- micro-expression recognition program when executed by the processor, the following operations are also implemented:
- the face portion is segmented to eliminate video segments that do not contain micro-expressions.
- micro-expression recognition program when executed by the processor, the following operations are also implemented:
- the contour feature information, the texture feature information, and the area feature information are respectively used as expression feature information corresponding to the preset region.
- micro-expression recognition program when executed by the processor, the following operations are also implemented:
- a micro-expression model is established, and the micro-expression model is trained through the mapping relationship to form a preset micro-expression model.
- micro-expression recognition program when executed by the processor, the following operations are also implemented:
- the establishing a mapping relationship between the micro-expression and the expression feature information includes:
- micro-expression recognition program when executed by the processor, the following operations are also implemented:
- the storing the mapping relationship to obtain a micro-expression library further includes:
- mapping relationship is stored according to the character type to obtain each type of micro-expression library.
- image recognition is performed on the video to be recognized, the face in the to-be-identified video is obtained, and the face is divided according to preset areas; the expression feature information of each preset area is extracted from the to-be-identified video; the expression feature information is compared with the preset micro-expression model, and the micro-expression in the to-be-identified video is determined according to the comparison result. Since the video to be recognized in this embodiment is acquired in a natural state, and expression feature information is extracted for each preset area of the face, the recognition of micro-expressions is more accurate and better reflects their true state.
- the method of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and certainly also by hardware, but in many cases the former is the better implementation.
- the technical solution of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a software product. The computer software product is stored in a storage medium (such as ROM/RAM, magnetic disk, or optical disc) and includes a number of instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the methods described in the various embodiments of the present application.
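The library-building step described above (storing the mapping between expression feature information and a micro-expression, keyed by character type) could be sketched as follows. The class name, the character types, and the feature representation are all illustrative assumptions; the publication does not specify any data structures.

```python
# Hypothetical sketch of the micro-expression library described in the
# embodiment: mappings from expression feature information to a
# micro-expression label, stored per character type.
from collections import defaultdict


class MicroExpressionLibrary:
    def __init__(self):
        # character type -> list of (feature_vector, micro_expression_label)
        self._by_type = defaultdict(list)

    def add_mapping(self, character_type, feature_vector, label):
        """Store the mapping between expression feature information
        and a micro-expression, keyed by character type."""
        self._by_type[character_type].append((tuple(feature_vector), label))

    def entries(self, character_type):
        """Return the micro-expression library for one character type."""
        return list(self._by_type[character_type])


lib = MicroExpressionLibrary()
lib.add_mapping("introvert", [0.12, 0.55, 0.91], "suppressed smile")
print(lib.entries("introvert"))
```

Keying the store by character type mirrors the claim's "each type of micro-expression library", so lookups at recognition time only compare against models for the relevant type.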
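The recognition flow described above (divide the face into preset areas, extract per-area expression features from the video, compare with a preset micro-expression model) could be sketched as follows. The area names, the scalar per-area features, and the Euclidean comparison are stand-in assumptions, not the embodiment's actual algorithms; face detection and area division are abstracted into pre-extracted per-frame values.

```python
# A minimal sketch of the comparison step: per-area features averaged
# over the video frames, then matched against each preset model entry.
import math

PRESET_AREAS = ("eyebrows", "eyes", "nose", "mouth")


def extract_area_features(frames, area):
    # Stand-in feature: average a scalar "intensity" per area over frames.
    values = [frame[area] for frame in frames]
    return sum(values) / len(values)


def recognize_micro_expression(frames, model):
    """Compare per-area feature information with each model entry and
    return the closest micro-expression label."""
    features = {a: extract_area_features(frames, a) for a in PRESET_AREAS}

    def distance(label):
        entry = model[label]
        return math.sqrt(sum((features[a] - entry[a]) ** 2 for a in PRESET_AREAS))

    return min(model, key=distance)


model = {
    "surprise": {"eyebrows": 0.9, "eyes": 0.8, "nose": 0.1, "mouth": 0.6},
    "contempt": {"eyebrows": 0.2, "eyes": 0.3, "nose": 0.1, "mouth": 0.7},
}
frames = [{"eyebrows": 0.85, "eyes": 0.75, "nose": 0.1, "mouth": 0.55}]
print(recognize_micro_expression(frames, model))  # closest entry: "surprise"
```

A real implementation would extract features over the 1/25–1/5 second window a micro-expression lasts, so averaging over only the relevant frames matters.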
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
Abstract
The present invention relates to a micro-expression recognition method, apparatus, and storage medium. The method comprises: performing image recognition on a video to be recognized, obtaining a face in the video to be recognized, and dividing the face according to preset regions; extracting, from the video to be recognized, expression feature information of each preset region; comparing the expression feature information with a preset micro-expression model, and determining, according to the comparison result, a micro-expression in the video to be recognized.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201710668442.7A CN107480622A (zh) | 2017-08-07 | 2017-08-07 | 微表情识别方法、装置及存储介质 |
| CN201710668442.7 | 2017-08-07 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2019029261A1 true WO2019029261A1 (fr) | 2019-02-14 |
Family
ID=60598941
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CN2018/090990 Ceased WO2019029261A1 (fr) | 2017-08-07 | 2018-06-13 | Procédé, dispositif de reconnaissance de micro-expressions et support d'informations |
Country Status (2)
| Country | Link |
|---|---|
| CN (1) | CN107480622A (fr) |
| WO (1) | WO2019029261A1 (fr) |
Cited By (10)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN110276406A (zh) * | 2019-06-26 | 2019-09-24 | 腾讯科技(深圳)有限公司 | 表情分类方法、装置、计算机设备及存储介质 |
| CN110415015A (zh) * | 2019-06-19 | 2019-11-05 | 深圳壹账通智能科技有限公司 | 产品认可度分析方法、装置、终端及计算机可读存储介质 |
| CN110458018A (zh) * | 2019-07-05 | 2019-11-15 | 深圳壹账通智能科技有限公司 | 一种测试方法、装置及计算机可读存储介质 |
| CN110781810A (zh) * | 2019-10-24 | 2020-02-11 | 合肥盛东信息科技有限公司 | 一种人脸情绪识别方法 |
| CN111178151A (zh) * | 2019-12-09 | 2020-05-19 | 量子云未来(北京)信息科技有限公司 | 基于ai技术实现人脸微表情变化识别的方法和装置 |
| CN111967295A (zh) * | 2020-06-23 | 2020-11-20 | 南昌大学 | 一种语义标签挖掘的微表情捕捉方法 |
| CN113065512A (zh) * | 2021-04-21 | 2021-07-02 | 深圳壹账通智能科技有限公司 | 人脸微表情识别方法、装置、设备及存储介质 |
| CN113515702A (zh) * | 2021-07-07 | 2021-10-19 | 北京百度网讯科技有限公司 | 内容推荐方法、模型训练方法、装置、设备及存储介质 |
| CN114005153A (zh) * | 2021-02-01 | 2022-02-01 | 南京云思创智信息科技有限公司 | 面貌多样性的个性化微表情实时识别方法 |
| CN115937953A (zh) * | 2022-12-28 | 2023-04-07 | 中国科学院长春光学精密机械与物理研究所 | 心理变化检测方法、装置、设备及存储介质 |
Families Citing this family (24)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN107480622A (zh) * | 2017-08-07 | 2017-12-15 | 深圳市科迈爱康科技有限公司 | 微表情识别方法、装置及存储介质 |
| CN107958230B (zh) * | 2017-12-22 | 2020-06-23 | 中国科学院深圳先进技术研究院 | 人脸表情识别方法及装置 |
| CN108335193A (zh) * | 2018-01-12 | 2018-07-27 | 深圳壹账通智能科技有限公司 | 全流程信贷方法、装置、设备及可读存储介质 |
| CN108537160A (zh) * | 2018-03-30 | 2018-09-14 | 平安科技(深圳)有限公司 | 基于微表情的风险识别方法、装置、设备及介质 |
| CN109145837A (zh) * | 2018-08-28 | 2019-01-04 | 厦门理工学院 | 人脸情感识别方法、装置、终端设备和存储介质 |
| CN109472206B (zh) * | 2018-10-11 | 2023-07-07 | 平安科技(深圳)有限公司 | 基于微表情的风险评估方法、装置、设备及介质 |
| CN109640104B (zh) * | 2018-11-27 | 2022-03-25 | 平安科技(深圳)有限公司 | 基于人脸识别的直播互动方法、装置、设备及存储介质 |
| CN109784175A (zh) * | 2018-12-14 | 2019-05-21 | 深圳壹账通智能科技有限公司 | 基于微表情识别的异常行为人识别方法、设备和存储介质 |
| CN109697421A (zh) * | 2018-12-18 | 2019-04-30 | 深圳壹账通智能科技有限公司 | 基于微表情的评价方法、装置、计算机设备和存储介质 |
| CN109784185A (zh) * | 2018-12-18 | 2019-05-21 | 深圳壹账通智能科技有限公司 | 基于微表情识别的客户餐饮评价自动获取方法及装置 |
| CN109830280A (zh) * | 2018-12-18 | 2019-05-31 | 深圳壹账通智能科技有限公司 | 心理辅助分析方法、装置、计算机设备和存储介质 |
| CN109766474A (zh) * | 2018-12-18 | 2019-05-17 | 深圳壹账通智能科技有限公司 | 审讯信息审核方法、装置、计算机设备和存储介质 |
| CN111353354B (zh) * | 2018-12-24 | 2024-01-23 | 杭州海康威视数字技术股份有限公司 | 一种人体应激性信息识别方法、装置及电子设备 |
| CN109800687A (zh) * | 2019-01-02 | 2019-05-24 | 深圳壹账通智能科技有限公司 | 会议效果反馈方法、装置、计算机设备和可读存储介质 |
| CN109858379A (zh) * | 2019-01-03 | 2019-06-07 | 深圳壹账通智能科技有限公司 | 笑容真诚度检测方法、装置、存储介质和电子设备 |
| CN109866230A (zh) * | 2019-01-17 | 2019-06-11 | 深圳壹账通智能科技有限公司 | 客服机器人控制方法、装置、计算机设备及存储介质 |
| CN110321845B (zh) * | 2019-07-04 | 2021-06-18 | 北京奇艺世纪科技有限公司 | 一种从视频中提取表情包的方法、装置及电子设备 |
| CN110852220B (zh) * | 2019-10-30 | 2023-08-18 | 深圳智慧林网络科技有限公司 | 人脸表情的智能识别方法、终端和计算机可读存储介质 |
| CN111028318A (zh) * | 2019-11-25 | 2020-04-17 | 天脉聚源(杭州)传媒科技有限公司 | 一种虚拟人脸合成方法、系统、装置和存储介质 |
| CN112733615B (zh) * | 2020-12-22 | 2025-04-11 | 杭州腾未科技有限公司 | 一种人脸识别方法、装置、存储介质及电子设备 |
| CN112749669B (zh) * | 2021-01-18 | 2024-02-02 | 吾征智能技术(北京)有限公司 | 一种基于人面部图像的微表情智能识别系统 |
| CN115482573B (zh) * | 2022-09-29 | 2025-10-28 | 歌尔科技有限公司 | 人脸表情识别方法、装置、设备及可读存储介质 |
| CN116392086B (zh) * | 2023-06-06 | 2023-08-25 | 浙江多模医疗科技有限公司 | 检测刺激方法、终端及存储介质 |
| CN117391746B (zh) * | 2023-10-25 | 2024-06-21 | 上海瀚泰智能科技有限公司 | 一种基于置信网络的智慧酒店顾客感知分析方法和系统 |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN103426005A (zh) * | 2013-08-06 | 2013-12-04 | 山东大学 | 微表情自动识别的建库视频自动切段方法 |
| CN104881660A (zh) * | 2015-06-17 | 2015-09-02 | 吉林纪元时空动漫游戏科技股份有限公司 | 基于gpu加速的人脸表情识别及互动方法 |
| US20150254447A1 (en) * | 2014-03-10 | 2015-09-10 | FaceToFace Biometrics, Inc. | Expression recognition in messaging systems |
| CN107480622A (zh) * | 2017-08-07 | 2017-12-15 | 深圳市科迈爱康科技有限公司 | 微表情识别方法、装置及存储介质 |
Family Cites Families (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US8581911B2 (en) * | 2008-12-04 | 2013-11-12 | Intific, Inc. | Training system and methods for dynamically injecting expression information into an animated facial mesh |
| CN105139039B (zh) * | 2015-09-29 | 2018-05-29 | 河北工业大学 | 视频序列中人脸微表情的识别方法 |
- 2017
- 2017-08-07 CN CN201710668442.7A patent/CN107480622A/zh active Pending
- 2018
- 2018-06-13 WO PCT/CN2018/090990 patent/WO2019029261A1/fr not_active Ceased
Patent Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN103426005A (zh) * | 2013-08-06 | 2013-12-04 | 山东大学 | 微表情自动识别的建库视频自动切段方法 |
| US20150254447A1 (en) * | 2014-03-10 | 2015-09-10 | FaceToFace Biometrics, Inc. | Expression recognition in messaging systems |
| CN104881660A (zh) * | 2015-06-17 | 2015-09-02 | 吉林纪元时空动漫游戏科技股份有限公司 | 基于gpu加速的人脸表情识别及互动方法 |
| CN107480622A (zh) * | 2017-08-07 | 2017-12-15 | 深圳市科迈爱康科技有限公司 | 微表情识别方法、装置及存储介质 |
Cited By (13)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN110415015A (zh) * | 2019-06-19 | 2019-11-05 | 深圳壹账通智能科技有限公司 | 产品认可度分析方法、装置、终端及计算机可读存储介质 |
| CN110276406B (zh) * | 2019-06-26 | 2023-09-01 | 腾讯科技(深圳)有限公司 | 表情分类方法、装置、计算机设备及存储介质 |
| CN110276406A (zh) * | 2019-06-26 | 2019-09-24 | 腾讯科技(深圳)有限公司 | 表情分类方法、装置、计算机设备及存储介质 |
| CN110458018A (zh) * | 2019-07-05 | 2019-11-15 | 深圳壹账通智能科技有限公司 | 一种测试方法、装置及计算机可读存储介质 |
| CN110781810A (zh) * | 2019-10-24 | 2020-02-11 | 合肥盛东信息科技有限公司 | 一种人脸情绪识别方法 |
| CN110781810B (zh) * | 2019-10-24 | 2024-02-27 | 合肥盛东信息科技有限公司 | 一种人脸情绪识别方法 |
| CN111178151A (zh) * | 2019-12-09 | 2020-05-19 | 量子云未来(北京)信息科技有限公司 | 基于ai技术实现人脸微表情变化识别的方法和装置 |
| CN111967295A (zh) * | 2020-06-23 | 2020-11-20 | 南昌大学 | 一种语义标签挖掘的微表情捕捉方法 |
| CN111967295B (zh) * | 2020-06-23 | 2024-02-13 | 南昌大学 | 一种语义标签挖掘的微表情捕捉方法 |
| CN114005153A (zh) * | 2021-02-01 | 2022-02-01 | 南京云思创智信息科技有限公司 | 面貌多样性的个性化微表情实时识别方法 |
| CN113065512A (zh) * | 2021-04-21 | 2021-07-02 | 深圳壹账通智能科技有限公司 | 人脸微表情识别方法、装置、设备及存储介质 |
| CN113515702A (zh) * | 2021-07-07 | 2021-10-19 | 北京百度网讯科技有限公司 | 内容推荐方法、模型训练方法、装置、设备及存储介质 |
| CN115937953A (zh) * | 2022-12-28 | 2023-04-07 | 中国科学院长春光学精密机械与物理研究所 | 心理变化检测方法、装置、设备及存储介质 |
Also Published As
| Publication number | Publication date |
|---|---|
| CN107480622A (zh) | 2017-12-15 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| WO2019029261A1 (fr) | Procédé, dispositif de reconnaissance de micro-expressions et support d'informations | |
| WO2019085495A1 (fr) | Procédé et appareil de reconnaissance de micro-expression, système et support de stockage lisible par ordinateur | |
| WO2015184760A1 (fr) | Procédé et appareil d'entrée de geste dans l'air | |
| WO2019041406A1 (fr) | Dispositif, terminal et procédé de reconnaissance d'image indécente et support de stockage lisible par ordinateur | |
| WO2019216593A1 (fr) | Procédé et appareil de traitement de pose | |
| WO2020246844A1 (fr) | Procédé de commande de dispositif, procédé de traitement de conflit, appareil correspondant et dispositif électronique | |
| WO2019051683A1 (fr) | Procédé de photographie de lumière de remplissage, terminal mobile et support de stockage lisible par ordinateur | |
| EP3740936A1 (fr) | Procédé et appareil de traitement de pose | |
| WO2020190112A1 (fr) | Procédé, appareil, dispositif et support permettant de générer des informations de sous-titrage de données multimédias | |
| WO2021261830A1 (fr) | Procédé et appareil d'évaluation de qualité de vidéo | |
| WO2018143707A1 (fr) | Système d'evaluation de maquillage et son procédé de fonctionnement | |
| WO2017164716A1 (fr) | Procédé et dispositif de traitement d'informations multimédia | |
| WO2021132851A1 (fr) | Dispositif électronique, système de soins du cuir chevelu et son procédé de commande | |
| WO2019051895A1 (fr) | Procédé et dispositif de commande de terminal, et support de stockage | |
| WO2019051899A1 (fr) | Procédé et dispositif de commande de terminaux, et support d'informations | |
| WO2019051890A1 (fr) | Procédé et dispositif de commande de terminal et support de stockage lisible par ordinateur | |
| WO2019205323A1 (fr) | Climatiseur et procédé et dispositif de réglage de paramètre associé, et support d'informations lisible | |
| WO2020233061A1 (fr) | Procédé, système et dispositif de détection de silence, et support de stockage lisible par ordinateur | |
| WO2013009020A2 (fr) | Procédé et appareil de génération d'informations de traçage de visage de spectateur, support d'enregistrement pour ceux-ci et appareil d'affichage tridimensionnel | |
| WO2024136012A1 (fr) | Procédé et système de génération d'un objet à mouvement tridimensionnel sur la base d'une intelligence artificielle | |
| WO2018166236A1 (fr) | Procédé, appareil et dispositif de reconnaissance de facture de règlement de revendication, et support d'informations lisible par ordinateur | |
| WO2013022226A4 (fr) | Procédé et appareil de génération d'informations personnelles d'un client, support pour leur enregistrement et système pos | |
| WO2015133699A1 (fr) | Appareil de reconnaissance d'objet, et support d'enregistrement sur lequel un procédé un et programme informatique pour celui-ci sont enregistrés | |
| WO2018120459A1 (fr) | Procédé, appareil et dispositif de vérification de l'authenticité d'une image, et support de stockage et extrémité de service | |
| WO2019041851A1 (fr) | Procédé de conseil après-vente d'appareil ménager, dispositif électronique et support de stockage lisible par ordinateur |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 18843914 Country of ref document: EP Kind code of ref document: A1 |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |
|
| 122 | Ep: pct application non-entry in european phase |
Ref document number: 18843914 Country of ref document: EP Kind code of ref document: A1 |