
CN111611860A - Micro-expression detection method and detection system - Google Patents

Micro-expression detection method and detection system

Info

Publication number
CN111611860A
Authority
CN
China
Prior art keywords
data
time
expression
electroencephalogram
micro
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010321480.7A
Other languages
Chinese (zh)
Other versions
CN111611860B (en)
Inventor
刘光远
赵兴骢
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southwest University
Original Assignee
Southwest University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southwest University
Priority to CN202010321480.7A
Publication of CN111611860A
Application granted
Publication of CN111611860B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/015 Input arrangements based on nervous system activity detection, e.g. brain waves [EEG] detection, electromyograms [EMG] detection, electrodermal response detection
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174 Facial expression recognition
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/467 Encoded features or binary features, e.g. local binary patterns [LBP]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Human Computer Interaction (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Biomedical Technology (AREA)
  • Dermatology (AREA)
  • Neurology (AREA)
  • Neurosurgery (AREA)
  • Image Analysis (AREA)
  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)

Abstract

A method for detecting the occurrence of micro-expressions: 1) before a stimulus induces a micro-expression, record the subject's normal EEG data and normal facial video data; 2) induce the micro-expression with a stimulus and record EEG data and facial video data; 3) mark timestamp data and match the EEG time data with the facial video time data; 4) obtain the onset time Tsm at which the EEG data reacts, together with the start, end and apex frames of the facial expression change; 5) determine whether a micro-expression occurred. The invention links the EEG signal and the facial image information through timestamps, processes the EEG signal and the facial video data separately, and then combines the two results to decide whether a micro-expression occurred, giving a determination with high accuracy.

Description

Micro-expression occurrence detection method and detection system

Technical Field

The invention relates to the technical field of EEG signal and facial video signal processing, and in particular to a method for detecting whether a micro-expression has occurred.

Background

As a brief facial expression lasting between 1/25 s and 1/2 s, a micro-expression is considered a natural emotional display that is difficult to control when people suppress their feelings or try to hide their true emotions. It is therefore a very important cue in lie detection, and its significance and broad range of application scenarios have attracted increasing attention.

At present, micro-expressions are detected mainly by expression-recognition methods. To a certain extent these methods can isolate a specific expression state from a given static image or dynamic video sequence, infer the psychological state of the subject, and thus let a computer understand and recognize facial expressions; however, their detection accuracy is low and they are prone to misjudgment. With the development of EEG technology, its high temporal resolution and high sensitivity to brain activity offer a more direct means of detecting the occurrence of micro-expressions, but expression and emotion detection based on EEG signals alone is limited by the single detection modality, and its accuracy is also low.

Publication CN109344816A discloses "A method for real-time detection of facial movements based on EEG signals". By associating EEG signals with the corresponding facial-action pictures in time, the EEG signal corresponding to each frame can be extracted; EEG features are then extracted and a BP neural network is trained as a facial-action detection model, so that three types of facial actions can be recognized from EEG signals. Its shortcomings are: 1. the patent does not disclose the specific way in which EEG signals and facial-action pictures are associated in time, and since both EEG processing and facial-image recognition incur delays, it cannot solve the data-synchronization problem; 2. building a BP neural network for facial-action detection requires heavy computation and data processing and cannot handle large amounts of data; 3. in essence the patent still recognizes facial pictures from EEG signals, rather than recognizing the EEG signals and the facial pictures separately and then combining the judgments, so its accuracy is insufficient.

Summary of the Invention

One object of the present invention is to provide a method for detecting the occurrence of micro-expressions. It associates the EEG signal with the facial image information through timestamps, processes the EEG signal and the facial video data separately, and then combines the two results to decide whether a micro-expression occurred, giving a determination with high accuracy.

This object of the invention is achieved by the following technical solution, which comprises steps before and after a stimulus induces the micro-expression. Before the stimulus, the subject's normal EEG data and normal facial video data are recorded; after the stimulus induces the micro-expression, the following steps are performed:

1) Induce the micro-expression with a stimulus, and record EEG data and facial video data;

2) Mark timestamp data for each segment of EEG data and each frame of facial video data, and match them to generate EEG time data and facial video time data;

3) Process the EEG data, EEG time data, facial video data and facial video time data to obtain the EEG micro-expression occurrence time Tsm, judged from the EEG data and EEG time data, and the start, end and apex frames of the facial expression change, judged from the facial video data and facial video time data;

4) Determine whether a micro-expression occurred from the EEG micro-expression occurrence time Tsm and the start, end and apex frames of the facial expression change obtained in step 3).

Further, the specific method of recording the subject's normal EEG data and normal facial video data, and of recording the EEG data and facial video data in step 1), is:

Normal and stimulus-period EEG data are acquired and recorded from 128 electrodes at a 1024 Hz sampling rate using the Biosemi Active system; normal and stimulus-period facial video data are acquired and recorded at 80 frames per second by the system's high-speed camera.

Further, the specific method of marking timestamp data for each segment of EEG data and each frame of facial video data in step 2) is:

The time-synchronization module of the Biosemi Active system transmits its timestamp data synchronously to the system's EEG acquisition module and high-speed-camera acquisition module, so that each segment of EEG data acquired by the EEG acquisition module and each frame of facial video data acquired by the high-speed camera contain synchronized timestamp data; this generates the EEG time data and the facial video time data.
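By way of illustration only, the shared-clock stamping of the two streams might look like the Python sketch below; the stream class, the method names and the use of a software monotonic clock are assumptions for illustration, not part of the Biosemi interface.

```python
import time
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class TimestampedStream:
    """Collects samples of one modality together with shared-clock stamps."""
    name: str
    records: List[Tuple[int, object]] = field(default_factory=list)

    def push(self, sample, clock_ns: int) -> None:
        # Every record carries a timestamp from the same master clock, so
        # EEG segments and video frames can later be matched purely by time.
        self.records.append((clock_ns, sample))

def master_clock_ns() -> int:
    # Stand-in for the hardware time-synchronization module.
    return time.monotonic_ns()

eeg_stream = TimestampedStream("eeg")      # 1024 Hz EEG segments
video_stream = TimestampedStream("video")  # 80 fps facial frames

# At acquisition time both modules stamp their data with the same clock:
eeg_stream.push(sample=[0.0] * 128, clock_ns=master_clock_ns())
video_stream.push(sample="frame_0001", clock_ns=master_clock_ns())
```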

Further, the specific steps of processing the EEG data and EEG time data in step 3) are as follows:

3-1) Using the normal EEG data as baseline data, compute the baseline power spectral density (PSD) of the Gamma band for the left temporal channel D23, the right temporal channel A09 and the prefrontal channel B26. The PSD expresses the power carried per unit of frequency; the baseline PSD is computed as

$$\mathrm{PSD}(k) = \frac{1}{N}\,\lvert X(k)\rvert^{2}$$

where X(k) is the Fourier transform of a sequence of length N and k is the frequency;
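By way of illustration, the baseline Gamma-band PSD of one channel could be computed as follows; the periodogram form matches the formula above, while the 30 to 80 Hz Gamma range and the function name are assumptions.

```python
import numpy as np

def gamma_psd(x: np.ndarray, fs: float = 1024.0,
              band: tuple = (30.0, 80.0)) -> float:
    """Mean periodogram PSD of x restricted to the Gamma band.

    Implements PSD(k) = |X(k)|^2 / N, then averages over the band.
    """
    n = len(x)
    psd = (np.abs(np.fft.rfft(x)) ** 2) / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return float(psd[mask].mean())

# Baseline value for one channel (e.g. D23) from resting-state data:
rng = np.random.default_rng(0)
baseline_d23 = gamma_psd(rng.standard_normal(2 * 1024))  # 2 s at 1024 Hz
```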

3-2) For the EEG time data, set the sliding-window length to W = 2 s, with 2*(1/fs) as the sliding time, where fs is the EEG sampling frequency. Compute the Gamma-band PSD of the left temporal channel D23, the right temporal channel A09 and the prefrontal channel B26 within each 2 s window and compare it with the corresponding channel's baseline PSD. If the PSD of any of the channels D23, A09 and B26 exceeds its baseline value, assume that the EEG data have changed and go to step 3-3); windows in which the PSD of every channel stays below the baseline average PSD are not processed further, and the EEG data are deemed unchanged;
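Step 3-2) then reduces to a sliding-window comparison, as in this minimal sketch, which reuses gamma_psd from the previous example:

```python
import numpy as np

def scan_channel(signal, baseline_psd, fs=1024.0, win_s=2.0):
    """Slide a 2 s window over one channel in steps of 2 samples and yield
    the start index of every window whose Gamma-band PSD exceeds the
    channel's baseline (candidate EEG changes); quieter windows are skipped."""
    signal = np.asarray(signal)
    w = int(win_s * fs)
    step = 2  # the text slides by 2*(1/fs) seconds, i.e. 2 samples
    for start in range(0, len(signal) - w + 1, step):
        if gamma_psd(signal[start:start + w], fs) > baseline_psd:
            yield start
```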

3-3) Take the data within the 2 s sliding window (W = 2 s) in which the EEG data are assumed to have changed, and first compute the energy E over these 2 s:

$$E = \sum_{n=1}^{N} x(n)^{2}$$

where x(n) is the signal amplitude and N is the data length, i.e. 2 s of data. Take half the mean energy as the threshold G = E/2. Compare the energy values with G: if En > G and the sampled energies keep reaching the threshold for 5 ms (E1 ... En > G), the time Tn is provisionally taken as the response onset Ts. At the same time, set the contrast threshold PR according to the formula of Figure BDA0002461597070000032 and compute the contrast value PRn of the n sampling points before Ts; if |PRn − PRn−1| = 0, the time corresponding to the first such sampling point PRn is taken as the response onset Ts. If the onset Ts is found, go to step 3-4); if it is not found, return to step 3-2);
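A hedged sketch of the onset search follows. The PR contrast check is omitted because its exact formula appears only as an image in the patent, and reading the energy comparison per sample is an interpretation of the text.

```python
import numpy as np

def response_onset(x: np.ndarray, fs: float = 1024.0):
    """Energy-threshold search for the response onset Ts in one 2 s window.

    Per-sample energies e[n] = x[n]^2 are compared with G = half their
    mean, and Ts is the first sample from which every sample in the
    following 5 ms stays above G.
    """
    e = x ** 2
    g = 0.5 * e.mean()          # threshold G = E/2
    hold = int(0.005 * fs)      # 5 ms of consecutive supra-threshold samples
    for n in range(len(e) - hold):
        if np.all(e[n:n + hold] > g):
            return n / fs       # candidate onset Ts, in seconds
    return None                 # not found: fall back to step 3-2)
```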

3-4) At the response onset Ts, take the channel onset times Ts of the left temporal channel D23, the right temporal channel A09 and the prefrontal channel B26 respectively. If TsD23 − TsB26 > 0 or TsD23 − TsA09 > 0, and the condition of Figure BDA0002461597070000033 is also satisfied, record this time point as the EEG micro-expression occurrence time Tsm; if the conditions are not met, return to step 3-2);

3-5) Obtain the EEG micro-expression occurrence time Tsm from the EEG data.

Further, the specific steps of processing the facial video data and facial video time data in step 3) are:

3-6) Detect the face, locating its exact position in the original image of each frame;

3-7) Perform face alignment and facial reference-point localization on the faces in the captured video; from the input face image, automatically locate the key facial points with a constrained local model (CLM);

3-8) Extract deformation-based expression features: using the CLM labelling of the facial feature points, obtain the coordinates of the facial reference points, compute the slope information relating these reference points, and extract deformation-based expression features. At the same time, track the key points in the three regions, extract the corresponding displacement information, extract the distances between specific feature points in the expression pictures, and subtract the corresponding distances of the calm picture to obtain the distance-change information, from which motion-based expression features are extracted;

3-9) Obtain the start, end and apex frames from the extracted facial feature data. Compare the feature-point distances with those of the calm picture to get the distance difference k, and set a threshold R: the first frame with k > R is judged to be the start frame; comparing the images after the start frame, the frame with the largest k is judged to be the apex frame, and the first subsequent frame with k < R is taken as the end frame.
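Step 3-9) can be stated compactly; the sketch assumes k_per_frame holds the per-frame distance difference k against the calm picture and R is the fixed threshold.

```python
def expression_frames(k_per_frame, R):
    """Return (start, apex, end) frame indices from the distance series k.

    start: first frame with k > R; apex: frame with the largest k from the
    start onward; end: first frame from the apex with k < R (None if absent).
    """
    start = next((i for i, k in enumerate(k_per_frame) if k > R), None)
    if start is None:
        return None
    apex = max(range(start, len(k_per_frame)), key=lambda i: k_per_frame[i])
    end = next((i for i in range(apex, len(k_per_frame))
                if k_per_frame[i] < R), None)
    return start, apex, end

# A brief bump above R = 1.0: start frame 3, apex frame 4, end frame 7.
print(expression_frames([0.2, 0.5, 0.8, 1.2, 1.9, 1.4, 1.1, 0.6], R=1.0))
```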

Further, the specific steps of step 3-6), detecting the face and locating its exact position in the original image of each frame, are:

3-6-1) Extract the response image with the local binary pattern (LBP);

3-6-2) Process the response image with the AdaBoost algorithm to separate the face region. The LBP algorithm first scans each pixel of the original image row by row; using the grey value of each pixel as a threshold, it binarizes the 3×3 neighbouring points around it and assembles them in order into an 8-bit binary number, whose value (0 to 255) becomes the response of that pixel.
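The 3×3 LBP response described above translates directly into code; a straightforward, unoptimized sketch:

```python
import numpy as np

def lbp_image(gray: np.ndarray) -> np.ndarray:
    """8-bit local binary pattern: the 3x3 neighbours of each pixel are
    binarized against the centre grey value and packed clockwise into a
    number in 0..255 that becomes that pixel's response."""
    h, w = gray.shape
    out = np.zeros((h, w), dtype=np.uint8)
    # Clockwise neighbour offsets starting at the top-left corner.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            centre = gray[y, x]
            code = 0
            for bit, (dy, dx) in enumerate(offsets):
                if gray[y + dy, x + dx] >= centre:
                    code |= 1 << (7 - bit)
            out[y, x] = code
    return out
```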

Further, the specific steps of step 3-7), automatically locating the key facial points with the CLM from the input face image, are as follows:

3-7-1) Model the shape of the face: for M pictures, each with N feature points whose coordinates are (xi, yi), the vector of the N feature-point coordinates of one image is written x = [x1 y1 x2 y2 … xN yN]T. The average face coordinates over all images are

$$\bar{x} = \frac{1}{M}\sum_{i=1}^{M} x_i$$

Computing the difference between each sample shape and the average face coordinates yields a zero-mean shape-change matrix X. Applying PCA to X gives the principal components of face variation; denote the eigenvalues λi and the corresponding eigenvectors pi. The eigenvectors of the k largest eigenvalues form an orthogonal matrix P = (p1, p2, …, pk). The shape-change weight vector b = (b1, b2, …, bk)T, whose components give the magnitude along the corresponding eigenvectors, is

$$b = P^{T}(x - \bar{x})$$

For any face-detection image, the sample shape vector can then be expressed as

$$x = \bar{x} + P\,b$$
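Step 3-7-1) is the standard point-distribution model; a compact sketch, assuming the training shapes are already aligned:

```python
import numpy as np

def train_shape_model(shapes: np.ndarray, k: int):
    """shapes: (M, 2N) matrix, one row [x1 y1 ... xN yN] per image.

    Returns the mean shape and the k principal modes P, so that any shape
    is approximated as x = mean + P @ b with b = P.T @ (x - mean).
    """
    mean = shapes.mean(axis=0)
    X = shapes - mean                       # zero-mean shape-change matrix
    cov = (X.T @ X) / len(shapes)
    eigvals, eigvecs = np.linalg.eigh(cov)  # ascending eigenvalues
    order = np.argsort(eigvals)[::-1][:k]   # keep the k largest modes
    P = eigvecs[:, order]                   # orthogonal (2N, k) basis
    return mean, P

def shape_params(x, mean, P):
    return P.T @ (x - mean)                 # weight vector b
```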

3-7-2) Build a patch model for each feature point: take a fixed-size patch region around each feature point and mark the patch region containing the feature point as a positive sample; then crop patches of the same size from non-feature-point regions and mark them as negative samples. Each feature point has r patches in total, assembled into a vector (x(1), x(2), … x(r))T; each image in the sample set thus contributes labelled pairs (x(i), y(i)) with y(i) ∈ {−1, 1}, i = 1, 2, …, r, where y(i) = 1 marks a positive sample and y(i) = −1 a negative sample. The trained linear support vector machine is

$$f(x) = \sum_{i=1}^{M_s} \alpha_i\, x_i^{T} x + b$$

where xi denotes the support vectors of the sample set, αi are the weight coefficients, Ms is the number of support vectors per feature point and b is the offset. Equivalently, y(i) = WT · x(i) + θ, where WT = [W1 W2 … Wn] contains the weight coefficients of the support vectors and θ is the offset;
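At test time the per-patch linear SVM is a single dot product. The sketch below trains one patch classifier on hypothetical, randomly generated patch vectors, merely to show the y = WT x + θ form used in the text.

```python
import numpy as np
from sklearn.svm import LinearSVC

# Hypothetical training data: rows are flattened 9x9 patches around one
# landmark (positives) or cropped from non-landmark regions (negatives).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(1.0, 1.0, size=(50, 81)),   # label +1
               rng.normal(0.0, 1.0, size=(50, 81))])  # label -1
y = np.array([1] * 50 + [-1] * 50)

svm = LinearSVC().fit(X, y)
W, theta = svm.coef_.ravel(), svm.intercept_[0]
score = X[0] @ W + theta   # patch response, positive for landmark-like patches
```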

3-7-3) Face-point fitting: perform a local search within the constrained region around the currently estimated feature-point position, generating a response map R(x, y) for each feature point. Fit a quadratic function to the response map: assuming R(x, y) attains its maximum at (x0, y0) within the neighbourhood, the position can be fitted with the quadratic function r(x, y) = a(x − x0)2 + b(y − y0)2 + c, where a, b and c are its coefficients. The least-squares criterion δ = min Σx,y [R(x, y) − r(x, y)]2 gives the minimum error between r(x, y) and R(x, y). Adding the deformation-constraint cost function yields the objective function for the feature-point search, expressed as in Figure BDA0002461597070000051. Each optimization of this objective yields a new feature-point position, which is updated iteratively until convergence to the maximum.

Further, the determination rule used in step 4) is:

Judge from the EEG activity response time whether a micro-expression may have occurred. If so, search for expression changes within the time threshold TL, taking the EEG micro-expression occurrence time Tsm as the starting point. If the times of the start and end frames show an expression occurring within 500 ms, a micro-expression is finally judged to have occurred; if no expression occurs within 500 ms, no micro-expression is judged to have occurred.
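The fusion rule can be written in a few lines. Times are in milliseconds, and treating the 500 ms test as a bound on the start-to-end duration of the expression is an interpretation of the text.

```python
def micro_expression_occurred(t_sm_ms, start_ms, end_ms, tl_ms=500):
    """Fuse EEG and video evidence: declare a micro-expression only if an
    expression starts within TL after the EEG time Tsm and its whole
    start-to-end span lasts no more than 500 ms."""
    if t_sm_ms is None or start_ms is None or end_ms is None:
        return False
    starts_in_window = t_sm_ms <= start_ms <= t_sm_ms + tl_ms
    brief_enough = (end_ms - start_ms) <= 500
    return starts_in_window and brief_enough
```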

Further, the time threshold TL is 500 ms to 1000 ms.

Another object of the present invention is to provide a micro-expression occurrence detection system.

This object of the invention is achieved by the following technical solution, comprising:

a data acquisition module, for recording the normal EEG data and normal facial video data before the stimulus induces the micro-expression, and the EEG data and facial video data after the stimulus induces the micro-expression;

a time matching module, for marking the timestamp of each segment of EEG data and each frame of facial video data, generating EEG time data and facial video time data;

a data processing module, for processing the EEG data, EEG time data, facial video data and facial video time data, and computing the EEG micro-expression occurrence time Tsm and the start, end and apex frames of the facial expression change judged from the facial video data and facial video time data;

a micro-expression determination module, for determining whether a micro-expression occurred from the EEG micro-expression occurrence time Tsm and the start, end and apex frames of the facial expression change.
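For illustration only, the four modules map onto a small pipeline skeleton; the class and method names are hypothetical, not the patented implementation.

```python
class MicroExpressionDetector:
    """Pipeline mirroring the four modules of the detection system."""

    def acquire(self):
        """Data acquisition: baseline and stimulus-period EEG and video."""
        ...

    def synchronize(self, eeg, video):
        """Time matching: attach shared timestamps to both streams."""
        ...

    def process(self, eeg_t, video_t):
        """Data processing: compute Tsm and the start/apex/end frames."""
        ...

    def decide(self, t_sm, start, apex, end):
        """Micro-expression determination: fuse the two results."""
        ...
```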

Owing to the above technical solution, the present invention has the following advantages:

1. By associating the EEG data and the facial video data through timestamps, the invention solves the latency problem of EEG-signal and facial-image recognition and achieves data synchronization. 2. The invention combines the time periods in which the EEG data and the facial video data change to decide whether a micro-expression occurred; the decision method is simple and saves a great deal of computing time and resources. 3. The application processes the EEG signal and the facial video data separately and then combines the results to decide whether a micro-expression occurred, so the determination is highly accurate.

Other advantages, objects and features of the present invention will be set forth to some extent in the following description and, to some extent, will be apparent to those skilled in the art from a study of the following, or may be learned from the practice of the present invention. The objects and other advantages of the present invention may be realized and attained by the following description and claims.

Brief Description of the Drawings

The accompanying drawings of the present invention are described below.

FIG. 1 is a schematic flow chart of the present invention.

Detailed Description

The present invention is further described below with reference to the accompanying drawings and an embodiment.

Embodiment:

In a method for detecting the occurrence of micro-expressions, the subject's normal EEG data and normal facial video data are recorded before the stimulus induces the micro-expression. In the 128-channel EEG data, channel D23 is the most representative channel of the left temporal lobe, channel A09 the most representative channel of the right temporal lobe, and channel B26 the most representative channel of the prefrontal lobe. The facial model is trained with 66 facial coordinates and outputs 49 facial-point coordinates; the facial points are extracted after correcting the head pose in the calm state. Each subject's neutral face is aligned with the average face of all subjects, and all tracked points of the sequence are recorded using this alignment. A reference point is generated by averaging the coordinates of the inner corners of the eye and nose landmarks. The distances of 38 points, covering the eyebrows, eyes and lips, to the reference point are computed and averaged.

The specific method after the stimulus induces the micro-expression is as follows:

1) Induce the micro-expression with a stimulus, and record EEG data and facial video data;

2) Mark timestamp data for each segment of EEG data and each frame of facial video data, and match them to generate EEG time data and facial video time data;

3) Process the EEG data, EEG time data, facial video data and facial video time data to obtain Tsm, the time at which the EEG indicates that a micro-expression occurred, and the start, end and apex frames of the facial expression change;

4) Determine whether a micro-expression occurred from the Tsm obtained in step 3) and the start, end and apex frames of the facial expression change.

The specific method of recording EEG data and facial video data in step 1) is: normal and stimulus-period EEG data are acquired and recorded from 128 electrodes at a 1024 Hz sampling rate using the Biosemi Active system; normal and stimulus-period facial video data are acquired and recorded at 80 frames per second by the system's high-speed camera.

The specific method of marking timestamp data for each segment of EEG data and each frame of facial video data in step 2) is: the time-synchronization module of the Biosemi Active system transmits its timestamp data synchronously to the system's EEG acquisition module and high-speed-camera acquisition module, so that each segment of EEG data acquired by the EEG acquisition module and each frame of facial video data acquired by the high-speed camera contain synchronized timestamp data; this generates the EEG time data and the facial video time data.

The specific steps for processing the EEG data and EEG time data are as follows:

3-1) Using the normal EEG data as baseline data, compute the baseline power spectral density (PSD) of the Gamma band for the left temporal channel D23, the right temporal channel A09 and the prefrontal channel B26 in the normal state. The PSD expresses the power carried per unit of frequency; the baseline PSD is computed as

$$\mathrm{PSD}(k) = \frac{1}{N}\,\lvert X(k)\rvert^{2}$$

where X(k) is the Fourier transform of a sequence of length N and k is the frequency;

3-2) For the EEG time data, set the sliding-window length to W = 2 s, with 2*(1/fs) as the sliding time, where fs is the EEG sampling frequency. Compute the Gamma-band PSD of the left temporal channel D23, the right temporal channel A09 and the prefrontal channel B26 within each 2 s window and compare it with the corresponding channel's baseline PSD. If the PSD of any of the channels D23, A09 and B26 exceeds its baseline value, assume that the EEG data have changed and go to step 3-3); windows in which the PSD of every channel stays below the baseline average PSD are not processed further, and the EEG data are deemed unchanged;

3-3) Take the data within the 2 s sliding window (W = 2 s) in which the EEG data are assumed to have changed, and first compute the energy Ei over these 2 s:

$$E_i = \sum_{k=1}^{N} \lvert X_i(k)\rvert^{2}$$

where Xi(k) is the result of the FFT of the EEG signal and N is the data length, i.e. 2 s of data. Take half the mean energy as the threshold G = E/2. Compare the energy value E with the threshold G: if En > G and the sampled energies keep reaching the threshold for 5 ms (E1 ... En > G), the time Tn is provisionally taken as the onset time Ts of the response in the different brain-region channels. At the same time, set the contrast threshold PR according to the formula of Figure BDA0002461597070000073 and compute the contrast value PRn of the n sampling points before Ts; if |PRn − PRn−1| = 0, the time corresponding to the first such sampling point PRn is taken as the onset time of the response in the different brain-region channels. If the onset Ts is found, go to step 3-4); if it is not found, return to step 3-2);

3-4) At the response onset Ts, take the onset times Ts of the left temporal channel D23, the right temporal channel A09 and the prefrontal channel B26 respectively. If TsD23 − TsB26 > 0 or TsD23 − TsA09 > 0, that is, the onset of channel D23 precedes the onset of channel B26, or the onset of channel D23 precedes the onset of channel A09, and the condition of Figure BDA0002461597070000081 is also satisfied, record Tsm, the time at which the EEG indicates that a micro-expression occurred; if the conditions are not met, return to step 3-2);

3-5) Obtain the time Tsm at which the EEG indicates that a micro-expression occurred.

The specific steps of processing the facial video data and facial video time data in step 3) are:

3-6) Detect the face, locating its exact position in the original image of each frame;

3-7) Perform face alignment and facial reference-point localization on the faces in the captured video; from the input face image, automatically locate the key facial points with a constrained local model (CLM);

3-8) Extract deformation-based expression features: using the CLM labelling of the facial feature points, obtain the coordinates of the facial reference points, compute the slope information relating these reference points, and extract deformation-based expression features. At the same time, track the key points in the three regions, extract the corresponding displacement information, extract the distances between specific feature points in the expression pictures, and subtract the corresponding distances of the calm picture to obtain the distance-change information, from which motion-based expression features are extracted;

3-9) Obtain the start, end and apex frames from the extracted facial feature data. Compare the feature-point distances with those of the calm picture to get the distance difference k, and set a threshold R: the first frame with k > R is judged to be the start frame; comparing the images after the start frame, the frame with the largest k is judged to be the apex frame, and the first subsequent frame with k < R is taken as the end frame.

The specific steps of step 3-6), detecting the face and locating its exact position in the original image of each frame, are:

3-6-1) Extract the response image with the local binary pattern (LBP);

3-6-2) Process the response image with the AdaBoost algorithm to separate the face region. The LBP algorithm first scans each pixel of the original image row by row; using the grey value of each pixel as a threshold, it binarizes the 3×3 neighbouring points around it and assembles them in order into an 8-bit binary number, whose value (0 to 255) becomes the response of that pixel.

The specific steps of step 3-7), automatically locating the key facial points with the CLM from the input face image, are as follows:

3-7-1) Model the shape of the face: for M pictures, each with N feature points whose coordinates are (xi, yi), the vector of the N feature-point coordinates of one image is written x = [x1 y1 x2 y2 … xN yN]T. The average face coordinates over all images are

$$\bar{x} = \frac{1}{M}\sum_{i=1}^{M} x_i$$

Computing the difference between each sample shape and the average face coordinates yields a zero-mean shape-change matrix X. Applying PCA to X gives the principal components of face variation; denote the eigenvalues λi and the corresponding eigenvectors pi. The eigenvectors of the k largest eigenvalues form an orthogonal matrix P = (p1, p2, …, pk). The shape-change weight vector b = (b1, b2, …, bk)T, whose components give the magnitude along the corresponding eigenvectors, is

$$b = P^{T}(x - \bar{x})$$

For any face-detection image, the sample shape vector can then be expressed as

$$x = \bar{x} + P\,b$$

3-7-2) Build a patch model for each feature point: take a fixed-size patch region around each feature point and mark the patch region containing the feature point as a positive sample; then crop patches of the same size from non-feature-point regions and mark them as negative samples. Each feature point has r patches in total, assembled into a vector (x(1), x(2), … x(r))T; each image in the sample set thus contributes labelled pairs (x(i), y(i)) with y(i) ∈ {−1, 1}, i = 1, 2, …, r, where y(i) = 1 marks a positive sample and y(i) = −1 a negative sample. The trained linear support vector machine is

$$f(x) = \sum_{i=1}^{M_s} \alpha_i\, x_i^{T} x + b$$

where xi denotes the support vectors of the sample set, αi are the weight coefficients, Ms is the number of support vectors per feature point and b is the offset. Equivalently, y(i) = WT · x(i) + θ, where WT = [W1 W2 … Wn] contains the weight coefficients of the support vectors and θ is the offset;

3-7-3) Face-point fitting: perform a local search within the constrained region around the currently estimated feature-point position, generating a response map R(x, y) for each feature point. Fit a quadratic function to the response map: assuming R(x, y) attains its maximum at (x0, y0) within the neighbourhood, the position can be fitted with the quadratic function r(x, y) = a(x − x0)2 + b(y − y0)2 + c, where a, b and c are its coefficients. The least-squares criterion δ = min Σx,y [R(x, y) − r(x, y)]2 gives the minimum error between r(x, y) and R(x, y). Adding the deformation-constraint cost function yields the objective function for the feature-point search, expressed as in Figure BDA0002461597070000096. Each optimization of this objective yields a new feature-point position, which is updated iteratively until convergence to the maximum.

The determination rule in step 4) is: judge from the EEG activity response whether a micro-expression may have occurred. If so, search for expression changes within the time threshold TL, generally 500 ms to 1000 ms, taking the micro-expression occurrence time Tsm as the starting point. If the times of the start and end frames show an expression occurring within 500 ms, a micro-expression is finally judged to have occurred; if no expression occurs within 500 ms, no micro-expression is judged to have occurred.

Those skilled in the art will appreciate that embodiments of the present application may be provided as a method, a system or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM and optical storage) containing computer-usable program code.

The present application is described with reference to flowcharts and/or block diagrams of methods, devices (systems) and computer program products according to embodiments of the present application. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, special-purpose computer, embedded processor or other programmable data-processing device to produce a machine, such that the instructions executed by that processor produce means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data-processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

These computer program instructions may also be loaded onto a computer or other programmable data-processing device, causing a series of operational steps to be performed on it to produce a computer-implemented process, such that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

Finally, it should be noted that the above embodiment is intended only to illustrate, not to limit, the technical solution of the present invention. Although the present invention has been described in detail with reference to the above embodiment, those of ordinary skill in the art should understand that the specific embodiments of the present invention may still be modified or equivalently replaced, and any modification or equivalent replacement that does not depart from the spirit and scope of the present invention shall fall within the protection scope of the claims of the present invention.

Claims (10)

1. A method for detecting the occurrence of a micro-expression, comprising a step before a stimulus induces the micro-expression, namely recording normal electroencephalogram (EEG) data and normal facial video data of a subject, characterized in that the method comprises the following steps after the stimulus induces the micro-expression:
1) inducing the micro-expression with the stimulus, and recording EEG data and facial video data;
2) marking timestamp data for each segment of EEG data and each frame of facial video data, and matching them to generate EEG time data and facial video time data;
3) processing the EEG data, the EEG time data, the facial video data and the facial video time data to obtain the EEG micro-expression occurrence time Tsm judged from the EEG data and the EEG time data, and obtaining the start, end and apex frames of the facial expression change judged from the facial video data and the facial video time data;
4) judging whether the micro-expression occurred according to the micro-expression occurrence time Tsm from the EEG processed in step 3) and the start, end and apex frames of the facial expression change.
2. The method for detecting the occurrence of a micro-expression according to claim 1, wherein the normal EEG data and the normal facial video data of the subject are recorded, and the specific method of recording the EEG data and the facial video data in step 1) comprises:
acquiring and recording normal and stimulus-period EEG data from 128 electrodes at a 1024 Hz sampling rate using the Biosemi Active system; and acquiring and recording normal and stimulus-period facial video data at a rate of 80 frames per second by the high-speed camera of the Biosemi Active system.
3. The micro-expression occurrence detection method according to claim 1, wherein the specific method of marking time stamp data for each segment of electroencephalogram data and each frame of facial video data in step 2) is:
the time synchronization module of the Biosemi Active system is utilized to synchronously transmit the timestamp data in the time synchronization module to the electroencephalogram acquisition module and the high-speed camera acquisition module of the Biosemi Active system, so that each segment of electroencephalogram data acquired by the electroencephalogram acquisition module and each frame of facial video data acquired by the high-speed camera comprise the synchronous timestamp data, namely, the electroencephalogram time data and the facial video time data are generated.
4. The micro-expression occurrence detection method according to claim 1, wherein the specific steps of processing the EEG data and the EEG time data in step 3) are as follows:
3-1) using the normal EEG data as baseline data, calculating the baseline Gamma-band power spectral density (PSD) values of the left temporal channel D23, the right temporal channel A09 and the prefrontal channel B26, the PSD representing the power carried per unit of frequency and the baseline PSD being calculated as

$$\mathrm{PSD}(k) = \frac{1}{N}\,\lvert X(k)\rvert^{2}$$

where X(k) represents the Fourier transform of a sequence of length N and k represents the frequency;
3-2) for the EEG time data, setting the sliding-window length to W = 2 s, with 2*(1/fs) as the sliding time and fs as the EEG sampling frequency; calculating the Gamma-band PSD values of the channels D23, A09 and B26 within the 2 s sliding window and comparing them with the baseline PSD values of the corresponding channels; if the PSD value of any of the channels D23, A09 and B26 is higher than its baseline PSD value, assuming that the EEG data have changed and turning to step 3-3); if the PSD values of the channels D23, A09 and B26 are lower than the baseline average PSD value, performing no processing and determining that the EEG data are unchanged;
3-3) taking the data within the 2 s sliding window in which the EEG data are assumed to have changed and first calculating the energy value E within these 2 s using

$$E = \sum_{n=1}^{N} x(n)^{2}$$

where x(n) is the signal amplitude and N is the data length, i.e. 2 s of data; taking half the mean energy as the threshold G = E/2; comparing the energy value E with the threshold G: if En > G and the sampled energies keep reaching the threshold within 5 ms (E1 ... En > G), provisionally taking Tn as the response onset Ts; at the same time setting the contrast threshold PR according to the formula of Figure FDA0002461597060000022 and calculating the contrast value PRn of the n sampling points before the onset Ts; if |PRn − PRn−1| = 0, taking the time corresponding to the first sampling point PRn as the response onset Ts; if the onset Ts is found, turning to step 3-4); if it is not found, returning to step 3-2);
3-4) at the response onset Ts, respectively taking the brain-region channel onset times Ts of the channels D23, A09 and B26; if TsD23 − TsB26 > 0 or TsD23 − TsA09 > 0, and the condition of Figure FDA0002461597060000023 is satisfied, recording the time point as the EEG micro-expression occurrence time Tsm; if the conditions are not met, returning to step 3-2);
3-5) obtaining the EEG micro-expression occurrence time Tsm from the EEG data.
5. The method according to claim 4, wherein the step 3) of processing the face video data and the face video time data comprises the following steps:
3-6) detecting the human face, and detecting the specific position of the human face from the original image of each frame;
3-7) carrying out face alignment and facial reference-point localization on the faces in the face video acquisition, and automatically locating the key facial points with a constrained local model (CLM) from the input face image;
3-8) extracting expression characteristics based on deformation: utilizing CLM to label the facial feature points, obtaining the coordinates of the facial reference points, calculating the related slope information between the facial reference points, and extracting expression features based on deformation; simultaneously tracking key points in the three regions, extracting corresponding displacement information, extracting distance information between specific feature points of the expression pictures, subtracting the distance from the calm pictures to obtain change information of the distance, and extracting expression features based on movement;
3-9) obtaining the start-stop frames and the top frame from the extracted facial feature data: a threshold R is set for the difference k between the feature-point distances of the current picture and those of the calm picture; the first frame image with k > R is judged to be the initial frame; among the images after the initial frame, the frame image with the maximum k value is judged to be the top frame, and the first frame thereafter with k < R is taken as the termination frame.
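Step 3-9) reduces to a simple threshold rule over per-frame feature-point distances. A sketch under the assumption that `dists` holds one scalar distance measure per frame and `calm` the same measure from the neutral picture (onset/apex/offset correspond to the claim's initial, top, and termination frames):

```python
import numpy as np

def onset_apex_offset(dists, calm, R):
    """Return (initial, top, termination) frame indices, or None."""
    k = np.abs(np.asarray(dists) - calm)       # distance difference vs. calm picture
    over = np.flatnonzero(k > R)
    if over.size == 0:
        return None                            # no expression change found
    onset = int(over[0])                       # first frame with k > R
    apex = onset + int(np.argmax(k[onset:]))   # frame with maximum k after onset
    after = np.flatnonzero(k[apex:] < R)
    offset = int(apex + after[0]) if after.size else len(k) - 1
    return onset, apex, offset
```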
6. The micro-expression occurrence detection method according to claim 5, wherein the step 3-6) of detecting the human face comprises the specific steps of:
3-6-1) extracting a response image by adopting a Local Binary Pattern (LBP);
3-6-2) processing the response image with the AdaBoost algorithm to separate the face region; the LBP algorithm first scans each pixel of the original image row by row, binarizes the eight 3 × 3 neighbours of each pixel using the grey value of that pixel as the threshold, forms an 8-bit binary number from the results in order, and takes the value of that binary number (0-255) as the response of the pixel.
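A minimal sketch of the 3×3 LBP response image of step 3-6-2); the clockwise neighbour order is an assumption, as the claim only says the bits are taken "in order":

```python
import numpy as np

def lbp_response(img):
    """img: 2-D uint8 grey image; returns the LBP response image (0-255)."""
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint8)
    # the 8 neighbours of a 3x3 window, clockwise from top-left (assumed order)
    nbrs = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            c = img[y, x]
            code = 0
            for bit, (dy, dx) in enumerate(nbrs):
                code |= int(img[y + dy, x + dx] >= c) << bit
            out[y, x] = code
    return out
```

The response image would then be handed to the AdaBoost stage to isolate the face region.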
7. The micro-expression occurrence detection method of claim 6, wherein the specific steps of automatically locating the key facial feature points with the CLM from the input face image in step 3-7) are as follows:
3-7-1) modelling the shape of the human face model: for M pictures, each with N feature points whose coordinates are (x_i, y_i), the vector composed of the coordinates of the N feature points on one image is x = [x_1 y_1 x_2 y_2 ... x_N y_N]^T; the mean face coordinates over all images are then

\bar{x} = \frac{1}{M} \sum_{i=1}^{M} x_i
the difference between the shape of each sample image and the mean face coordinates is calculated to obtain a zero-mean shape-change matrix X; applying PCA to X yields the principal components of face variation, with eigenvalues denoted λ_i and corresponding eigenvectors p_i; the eigenvectors corresponding to the k largest eigenvalues form the orthogonal matrix P = (p_1, p_2, ..., p_k); the shape-change weight vector is b = (b_1, b_2, ..., b_k)^T, each component of which gives the magnitude in the direction of the corresponding eigenvector:

b = P^T (x - \bar{x})
for any face detection image, the sample shape vector can then be expressed as:

x \approx \bar{x} + P b
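The point-distribution model of step 3-7-1) is standard PCA over landmark vectors. A sketch assuming `shapes` is an (M, 2N) array of flattened coordinates:

```python
import numpy as np

def build_shape_model(shapes, k):
    """Return the mean shape and the top-k PCA basis P."""
    mean = shapes.mean(axis=0)                  # mean face x_bar
    X = shapes - mean                           # zero-mean shape-change matrix
    # PCA via SVD of the centred data; rows of Vt are principal directions
    _, s, Vt = np.linalg.svd(X, full_matrices=False)
    P = Vt[:k].T                                # (2N, k) orthogonal basis
    return mean, P

def encode(shape, mean, P):
    return P.T @ (shape - mean)                 # weight vector b

def decode(b, mean, P):
    return mean + P @ b                         # x ~= x_bar + P b
```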
3-7-2) establishing a patch model for each feature point: a patch region of fixed size is taken around each feature point and labelled as a positive sample; a patch of the same size is then cut from a non-feature-point region and labelled as a negative sample; each feature point has r patches in total, grouped into a vector (x^{(1)}, x^{(2)}, ..., x^{(r)})^T; for each image in the sample set there is a labelled training set

\{(x^{(i)}, y^{(i)})\}, \quad i = 1, 2, \ldots, r
where y^{(i)} ∈ {-1, 1}, i = 1, 2, ..., r, with y^{(i)} = 1 marking a positive sample and y^{(i)} = -1 a negative sample; the trained linear support vector machine is

f(x) = \sum_{i=1}^{M_s} \alpha_i y_i \, x_i^T x + b
where x_i denotes a support vector of the sample set, α_i is a weight coefficient, M_s is the number of support vectors for each feature point, and b is an offset; from this one obtains y^{(i)} = W^T · x^{(i)} + θ, where W^T = [W_1 W_2 ... W_n] contains the weight coefficients of the support vectors and θ is the bias term;
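The per-landmark patch expert of step 3-7-2) is a linear SVM. A sketch using scikit-learn's LinearSVC as a stand-in trainer (the claim does not name an implementation, and the patch handling is an assumption):

```python
import numpy as np
from sklearn.svm import LinearSVC

def train_patch_expert(pos_patches, neg_patches):
    """pos/neg_patches: lists of equally sized 2-D arrays taken around and
    away from one landmark; returns a fitted linear classifier."""
    X = np.array([p.ravel() for p in pos_patches + neg_patches], dtype=float)
    y = np.array([1] * len(pos_patches) + [-1] * len(neg_patches))
    clf = LinearSVC()            # linear SVM: f(x) = w.x + theta
    clf.fit(X, y)
    return clf

def response(clf, patch):
    """Signed distance to the separating hyperplane for one patch."""
    return clf.decision_function(patch.ravel()[None, :])[0]
```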
3-7-3) fitting the face points: a response map R(x, y) is generated for each feature point by a local search over the neighbourhood of the currently estimated feature point position; a quadratic function is fitted to the response map: assuming R(x, y) attains its maximum at (x_0, y_0) in the search region, the quadratic function r(x, y) = a(x - x_0)^2 + b(y - y_0)^2 + c may be used, where a, b and c are its coefficients; the least-squares criterion min \sum_{x,y} [R(x, y) - r(x, y)]^2 gives the minimum error between R(x, y) and r(x, y); adding the deformation-constraint cost yields the objective function for the feature-point search, which can be expressed as

f(b) = \sum_{i=1}^{N} r_i(x_i, y_i) - \beta \sum_{j=1}^{k} \frac{b_j^2}{\lambda_j}

(the exact form is given only as formula image FDA0002461597060000041 in the original filing; the expression above is the standard CLM objective consistent with the surrounding text)
each optimization of the objective function yields new feature-point positions, which are updated iteratively until the maximum converges.
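The quadratic fit of step 3-7-3) can be solved in closed form by least squares. A sketch assuming the response map is a dense 2-D grid:

```python
import numpy as np

def fit_quadratic(R):
    """R: 2-D response map; returns the peak (x0, y0) and coefficients (a, b, c)
    of r(x, y) = a(x - x0)^2 + b(y - y0)^2 + c fitted by least squares."""
    y0, x0 = np.unravel_index(np.argmax(R), R.shape)   # peak of the map
    ys, xs = np.mgrid[0:R.shape[0], 0:R.shape[1]]
    # design matrix over the basis [ (x-x0)^2, (y-y0)^2, 1 ]
    A = np.stack([(xs - x0).ravel() ** 2,
                  (ys - y0).ravel() ** 2,
                  np.ones(R.size)], axis=1)
    coeffs, *_ = np.linalg.lstsq(A, R.ravel(), rcond=None)
    a, b, c = coeffs
    return (x0, y0), (a, b, c)
```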
8. The micro-expression occurrence detection method according to claim 7, wherein the determination rule in step 5) is:
whether a micro-expression occurs is judged from the moment of electroencephalogram activity: taking the electroencephalogram micro-expression occurrence time T_sm as the starting time, search for an expression change within the time threshold TL; if, according to the times of the starting and ending frames of the expression, an expression occurring within 500 ms is found, a micro-expression is finally judged to have occurred; if no expression occurring within 500 ms appears, no micro-expression is finally judged to have occurred.
9. The method of claim 8, wherein the time threshold TL is 500 ms to 1000 ms.
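Combining claims 8 and 9, the final decision might look like the following sketch, with times in seconds and TL defaulting to the upper bound of 1000 ms (both assumptions):

```python
def micro_expression_occurred(t_sm, onset_t, offset_t, tl=1.0):
    """t_sm: EEG micro-expression occurrence time; onset_t/offset_t: expression
    start and end times from the video; tl: time threshold TL (0.5-1.0 s)."""
    within_window = t_sm <= onset_t <= t_sm + tl   # change found inside TL
    short_enough = (offset_t - onset_t) <= 0.5     # expression lasts <= 500 ms
    return within_window and short_enough
```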
10. A system for microexpression detection using the detection method of any one of claims 1 to 9, wherein said detection system comprises:
the data acquisition module is used for recording normal electroencephalogram data and normal facial video data before the micro expression is induced by stimulation; recording the electroencephalogram data and the facial video data after the micro expression is induced by stimulation;
the time matching module is used for marking the time stamp of each section of electroencephalogram data and each frame of face video data to generate electroencephalogram time data and face video time data;
the data processing module is used for processing the electroencephalogram data, the electroencephalogram time data, the face video data and the face video time data, calculating the electroencephalogram micro-expression occurrence time T_sm, and determining the start-stop frames and the top frame of the facial expression change from the face video data and the face video time data;
a micro-expression judgment module, for judging whether a micro-expression occurs according to the electroencephalogram micro-expression occurrence time T_sm and the start-stop frames and top frame of the facial expression change.
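Purely as an illustration of how the four claimed modules could be wired together (the class and method names here are invented for the sketch, not part of the claims):

```python
from dataclasses import dataclass

@dataclass
class MicroExpressionDetector:
    acquisition: object     # data acquisition module
    matcher: object         # time matching module
    processor: object       # data processing module
    judge: object           # micro-expression judgment module

    def run(self):
        eeg, video = self.acquisition.record()          # raw EEG + face video
        eeg_t, video_t = self.matcher.stamp(eeg, video) # per-segment/frame timestamps
        t_sm = self.processor.eeg_time(eeg, eeg_t)      # EEG micro-expression time
        frames = self.processor.expression_frames(video, video_t)
        return self.judge.decide(t_sm, frames)
```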
CN202010321480.7A 2020-04-22 2020-04-22 Micro-expression occurrence detection method and detection system Active CN111611860B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010321480.7A CN111611860B (en) 2020-04-22 2020-04-22 Micro-expression occurrence detection method and detection system

Publications (2)

Publication Number Publication Date
CN111611860A true CN111611860A (en) 2020-09-01
CN111611860B CN111611860B (en) 2022-06-28

Family

ID=72204767

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010321480.7A Active CN111611860B (en) 2020-04-22 2020-04-22 Micro-expression occurrence detection method and detection system

Country Status (1)

Country Link
CN (1) CN111611860B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112258778A (en) * 2020-10-12 2021-01-22 南京云思创智信息科技有限公司 Micro-expression real-time alarm video recording method
CN112329663A (en) * 2020-11-10 2021-02-05 西南大学 Micro-expression time detection method and device based on face image sequence
CN112949495A (en) * 2021-03-04 2021-06-11 安徽师范大学 Intelligent identification system based on big data
CN115581468A (en) * 2022-09-30 2023-01-10 脑陆(重庆)智能科技研究院有限公司 Object prediction method and device based on electroencephalogram data

Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070191691A1 (en) * 2005-05-19 2007-08-16 Martin Polanco Identification of guilty knowledge and malicious intent
US20110251511A1 (en) * 2008-07-15 2011-10-13 Petrus Wilhelmus Maria Desain Method for processing a brain wave signal and brain computer interface
US20120296569A1 (en) * 2010-01-18 2012-11-22 Elminda Ltd. Method and system for weighted analysis of neurophysiological data
US20170007165A1 (en) * 2015-07-08 2017-01-12 Samsung Electronics Company, Ltd. Emotion Evaluation
CN106874672A (en) * 2017-02-17 2017-06-20 北京太阳电子科技有限公司 A kind of method and mobile terminal for showing EEG data
CN106974621A (en) * 2017-03-16 2017-07-25 小菜儿成都信息科技有限公司 A kind of vision induction motion sickness detection method based on EEG signals gravity frequency
CN107798318A (en) * 2017-12-05 2018-03-13 四川文理学院 The method and its device of a kind of happy micro- expression of robot identification face
CN107874756A (en) * 2017-11-21 2018-04-06 博睿康科技(常州)股份有限公司 The precise synchronization method of eeg collection system and video acquisition system
CN109344816A (en) * 2018-12-14 2019-02-15 中航华东光电(上海)有限公司 A method of based on brain electricity real-time detection face action
CN109730701A (en) * 2019-01-03 2019-05-10 中国电子科技集团公司电子科学研究院 Method and device for acquiring emotional data
CN109766917A (en) * 2018-12-18 2019-05-17 深圳壹账通智能科技有限公司 Interview video data processing method, device, computer equipment and storage medium
CN109901711A (en) * 2019-01-29 2019-06-18 西安交通大学 By the asynchronous real-time brain prosecutor method of the micro- expression EEG signals driving of weak Muscle artifacts
CN109934145A (en) * 2019-03-05 2019-06-25 浙江强脑科技有限公司 Mood degree assists method of adjustment, smart machine and computer readable storage medium
CN109984759A (en) * 2019-03-15 2019-07-09 北京数字新思科技有限公司 The acquisition methods and device of individual emotional information
US20190347476A1 (en) * 2018-05-09 2019-11-14 Korea Advanced Institute Of Science And Technology Method for estimating human emotions using deep psychological affect network and system therefor
CN110680313A (en) * 2019-09-30 2020-01-14 北京工业大学 A classification method of epilepsy period based on pulse burst intelligence algorithm combined with STFT-PSD and PCA

Non-Patent Citations (10)

* Cited by examiner, † Cited by third party
Title
XUCHENG LIU ET AL: "Emotion Recognition and Dynamic Functional Connectivity Analysis Based on EEG", 《IEEE ACCESS》, 2 October 2019 (2019-10-02), pages 143293 *
ZHANG MEIYAN: "EEG-based vigilance state detection method and experimental research", 《China Masters' Theses Full-text Database, Medicine & Health Sciences》, no. 02, 15 February 2020 (2020-02-15), pages 080-26 *
YANG HAO ET AL: "Recognition of emotional states represented by EEG signals based on deep belief networks", 《Journal of Biomedical Engineering》, no. 02, 25 April 2018 (2018-04-25), pages 182-190 *
DUAN RUONAN: "Video-evoked emotion recognition based on EEG signals", 《China Masters' Theses Full-text Database, Information Science & Technology》, no. 07, 15 July 2015 (2015-07-15), pages 136-107 *
JIA BEI ET AL: "Influence of stimulus style on accuracy in a simulated-reading brain-computer interface", 《Computer & Digital Engineering》, vol. 43, no. 02, 20 February 2015 (2015-02-20), pages 286-290 *


Also Published As

Publication number Publication date
CN111611860B (en) 2022-06-28

Similar Documents

Publication Publication Date Title
CN111611860B (en) Micro-expression occurrence detection method and detection system
CN109919977B (en) Video motion person tracking and identity recognition method based on time characteristics
CN109472198B (en) Gesture robust video smiling face recognition method
CA2934514C (en) System and method for identifying faces in unconstrained media
CN102360421B (en) Face identification method and system based on video streaming
Poornima et al. Attendance monitoring system using facial recognition with audio output and gender classification
CN109993068B (en) A non-contact human emotion recognition method based on heart rate and facial features
Abiyev et al. Personal iris recognition using neural network
CN103902978A (en) Face detection and identification method
CN110390308A (en) A Video Action Recognition Method Based on Spatio-temporal Adversarial Generative Network
CN110909678B (en) A face recognition method and system based on width learning network feature extraction
Phuong et al. An eye blink detection technique in video surveillance based on eye aspect ratio
CN107103293B (en) A gaze point estimation method based on correlation entropy
CN114373091A (en) Gait recognition method based on deep learning fusion SVM
Prema et al. A review: Face recognition techniques for differentiate similar faces and twin faces
CN116645717A (en) A micro-expression recognition method and system based on PCANet+ and LSTM
CN115100722A (en) DeepFake detection method and device, computer equipment and storage medium
JP3980464B2 (en) Method for extracting nose position, program for causing computer to execute method for extracting nose position, and nose position extracting apparatus
CN119625806A (en) A multimodal recognition and analysis system of facial dynamic features based on artificial intelligence
CN115935140B (en) A method for identifying master transport migration components based on Riemannian manifold tangent space alignment
Samangooei et al. On acquisition and analysis of a dataset comprising of gait, ear and semantic data
CN113269080B (en) Palm vein identification method based on multi-channel convolutional neural network
CN117558044A (en) Face recognition method for wearing mask based on deep learning
Park Face Recognition: face in video, age invariance, and facial marks
Ma et al. Feature extraction method for lip-reading under variant lighting conditions

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant