CN110148468B - Method and device for dynamic face image reconstruction
- Publication number: CN110148468B (application CN201910382834.6A)
- Authority: CN (China)
- Prior art keywords: image, face, response data, neural response, type
- Legal status: Expired - Fee Related
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2411—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/50—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for simulation or modelling of medical disorders
Abstract
The method and device for dynamic face image reconstruction provided by the present invention address the fact that dynamic face images mainly convey high-level visual feature information and that facial features of different attributes are processed by different high-level cognitive brain regions. Using three kinds of high-level feature information of different attributes and three different high-level cognitive brain regions, the first, second and third types of neural response data corresponding to the three kinds of high-level facial feature information are acquired. At the same time, models mapping the dynamic face from visual image space to the brain perception space of the different high-level cognitive brain regions are constructed, together with the multi-dimensional mapping relationships between these models, so as to obtain a basic face image, a facial expression image and a face identity image. Multi-dimensional facial features are thereby reconstructed and a dynamic face image is obtained. Dynamic face images perceived by certain patients can be reconstructed, giving a deeper understanding of the mechanisms of cognitive impairment in mental disorders.
Description
Technical Field
The present invention relates to image processing technology, and in particular to a method and device for reconstructing a dynamic face image.
Background Art
Reconstructing perceived visual objects from brain neural signals is a frontier technology field that has attracted widespread attention. It refers to collecting functional Magnetic Resonance Imaging (fMRI) signals of the human brain and, with the help of image processing and machine learning algorithms, restoring the visual image that was seen. The human face is the most frequently encountered and most important visual percept in our understanding of nature and in social interaction. Patients with certain cognitive and mental disorders, such as prosopagnosia, autism, Alzheimer's disease and Parkinson's disease, have deficits in recognizing the high-level feature attributes of dynamic faces. Therefore, face reconstruction technology is needed to reconstruct an image of the face imagined in the brain of the user to be tested.
In the prior art, Principal Component Analysis (PCA) is used to establish a single linear mapping relationship between eigenfaces and neural response signals in order to reconstruct face images.
However, the prior art can only reconstruct static face pictures, which is difficult to meet the need for reconstructing multi-dimensional face information in the field of image reconstruction.
Summary of the Invention
Embodiments of the present invention provide a method and device for reconstructing a dynamic face image, which realize dynamic face reconstruction, simultaneously reconstruct expression features and identity features in the reconstructed dynamic face image, enrich the reconstructed information, and improve the accuracy of face reconstruction.
A first aspect of the embodiments of the present invention provides a method for reconstructing a dynamic face image, including:
extracting a first type of neural response data, and obtaining a basic face image according to the first type of neural response data and a preset face image reconstruction model;
extracting a second type of neural response data, and obtaining a facial expression image according to the second type of neural response data and a preset facial expression reconstruction model;
extracting a third type of neural response data, and obtaining a face identity image according to the third type of neural response data and a preset face identity reconstruction model; and
obtaining a dynamic face image according to the basic face image, the facial expression image and the face identity image.
Optionally, in a possible implementation of the first aspect, obtaining the basic face image according to the first type of neural response data and the preset face image reconstruction model includes:
obtaining the basic face image according to the following Formula 1 and the first type of neural response data:
Formula 1: X_G_RECON = X̄ + t_test V_train^T, with s_test = (Y_test − Ȳ) U_train and t_test = s_test W_train,
where X_G_RECON is the basic face image, X̄ is the average image of the dynamic face basic image samples preset in the face image reconstruction model, Y_test is the first type of neural response data, Ȳ is the average data of the first type of neural response data samples elicited by the dynamic face basic image samples preset in the face image reconstruction model, s_test is the projection coordinate of Y_test, t_test is the projection coordinate of X_G_RECON, W_train is the s_test-t_test transformation matrix in the face image reconstruction model, U_train is the eigenvector matrix of Y_test in the face image reconstruction model, and V_train is the eigenvector matrix of the dynamic face basic image samples preset in the face image reconstruction model.
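The following is a minimal sketch of how Formula 1 could be applied at reconstruction time. It assumes the training-stage quantities (the mean image, the mean response, the eigenvector matrices U_train and V_train, and the transformation matrix W_train) have already been learned as described below; all names are illustrative and not taken from the patent.

```python
import numpy as np

def reconstruct_base_face(Y_test, X_mean, Y_mean, U_train, W_train, V_train):
    """Apply Formula 1: project a first-type neural response into the perception
    space, map it to the image space, and reconstruct the basic face image.

    Y_test : (n_voxels,) neural response of the user to be tested
    X_mean : (n_pixels,) average of the training face images
    Y_mean : (n_voxels,) average of the training neural responses
    U_train: (n_voxels, k) eigenvectors of the training neural responses
    W_train: (k, k) s_test-t_test transformation matrix
    V_train: (n_pixels, k) eigenvectors of the training face images
    """
    s_test = (Y_test - Y_mean) @ U_train   # projection coordinate in perception space
    t_test = s_test @ W_train              # mapped projection coordinate in image space
    return X_mean + t_test @ V_train.T     # back-projection plus the average image
```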
Optionally, in a possible implementation of the first aspect, before obtaining the basic face image according to the first type of neural response data and the preset face image reconstruction model, the method further includes:
obtaining dynamic face basic image training samples and training samples of the first type of neural response data elicited by the dynamic face basic image samples;
taking the dynamic face basic image samples as the output and the first type of neural response data samples as the input, performing parameter learning on the s-t transformation matrix, the eigenvectors of the first type of neural response data samples and the eigenvectors of the dynamic face basic image training samples through the following Formula 2, to obtain the s_test-t_test transformation matrix in the face image reconstruction model, the eigenvectors of the first type of neural response data samples in the face image reconstruction model, and the eigenvectors of the basic face image in the face image reconstruction model:
Formula 2: t = (X − X̄)V,  s = (Y − Ȳ)U,  t = sW,  W = (s^T s + I)^(-1) s^T t,
where X is the dynamic face basic image sample, X̄ is the average image of X, Y is the first type of neural response data sample, Ȳ is the average data of Y, s is the projection coordinate of Y, t is the projection coordinate of X, W is the s-t transformation matrix, U is the eigenvector matrix of Y, and V is the eigenvector matrix of X; and
obtaining the face image reconstruction model according to the s_test-t_test transformation matrix in the face image reconstruction model, the eigenvectors of the first type of neural response data samples in the face image reconstruction model, and the eigenvectors of the basic face image in the face image reconstruction model.
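A minimal sketch of the parameter learning of Formula 2 is given below, assuming paired training data X (dynamic face basic image samples, one vectorized image per row) and Y (the corresponding first-type neural responses, one response per row). The helper and variable names are illustrative; the closed form for W follows Formula 2.5.

```python
import numpy as np

def pca_basis(A, k):
    """Return the mean of A and its top-k eigenvectors (PCA via SVD)."""
    A_mean = A.mean(axis=0)
    _, _, Vt = np.linalg.svd(A - A_mean, full_matrices=False)
    return A_mean, Vt[:k].T                        # columns are eigenvectors

def learn_face_image_model(X, Y, k):
    """Formula 2: learn both projection spaces and the s-t transformation matrix W."""
    X_mean, V = pca_basis(X, k)                    # basic image pixel space
    Y_mean, U = pca_basis(Y, k)                    # basic dynamic face perception space
    t = (X - X_mean) @ V                           # Formula 2.1: image projections
    s = (Y - Y_mean) @ U                           # Formula 2.3: response projections
    # Formula 2.5: W = (s^T s + I)^(-1) s^T t, a regularized least-squares solution
    W = np.linalg.solve(s.T @ s + np.eye(k), s.T @ t)
    return X_mean, Y_mean, U, V, W
```

The quantities returned here are exactly those consumed by the reconstruction sketch shown under Formula 1.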
Optionally, in a possible implementation of the first aspect, obtaining the facial expression image according to the second type of neural response data and the preset facial expression reconstruction model includes:
obtaining the facial expression image according to the following Formula 3 and the second type of neural response data:
Formula 3: X_E_RECON = X̄_E + t_E_test V_E_train^T, with s_E_test = (Y_E_test − Ȳ_E) U_E_train and t_E_test = s_E_test W_E_train,
where X_E_RECON is the facial expression image, X̄_E is the average image of the dynamic facial expression image samples preset in the facial expression reconstruction model, Y_E_test is the second type of neural response data, Ȳ_E is the average data of the second type of neural response data samples elicited by the dynamic facial expression image samples preset in the facial expression reconstruction model, s_E_test is the projection coordinate of Y_E_test, t_E_test is the projection coordinate of X_E_RECON, W_E_train is the s_E_test-t_E_test transformation matrix in the facial expression reconstruction model, U_E_train is the eigenvector matrix of Y_E_test in the facial expression reconstruction model, and V_E_train is the eigenvector matrix of the dynamic facial expression image samples preset in the facial expression reconstruction model.
Optionally, in a possible implementation of the first aspect, before obtaining the facial expression image according to the second type of neural response data and the preset facial expression reconstruction model, the method further includes:
obtaining dynamic facial expression image training samples and training samples of the second type of neural response data elicited by the dynamic facial expression image samples;
taking the dynamic facial expression image samples as the output and the second type of neural response data samples as the input, performing parameter learning on the s_E-t_E transformation matrix, the eigenvectors of the second type of neural response data samples and the eigenvectors of the dynamic facial expression image training samples through the following Formula 4, to obtain the s_E_test-t_E_test transformation matrix in the facial expression reconstruction model, the eigenvectors of the second type of neural response data samples in the facial expression reconstruction model, and the eigenvectors of the facial expression image in the facial expression reconstruction model:
Formula 4: t_E = (X_E − X̄_E)V_E,  s_E = (Y_E − Ȳ_E)U_E,  t_E(id) = s_E(id) W_E(id),  W_E(id) = (s_E(id)^T s_E(id) + I)^(-1) s_E(id)^T t_E(id),
where X_E is the dynamic facial expression image sample, X̄_E is the average image of X_E, Y_E is the second type of neural response data sample, Ȳ_E is the average data of Y_E, s_E is the projection coordinate of Y_E, t_E is the projection coordinate of X_E, W_E is the s_E-t_E transformation matrix, U_E is the eigenvector matrix of Y_E, V_E is the eigenvector matrix of X_E, and id is the label of each facial identity individual; and
obtaining the facial expression reconstruction model according to the s_E_test-t_E_test transformation matrix in the facial expression reconstruction model, the eigenvectors of the second type of neural response data samples in the facial expression reconstruction model, and the eigenvectors of the facial expression image in the facial expression reconstruction model.
Optionally, in a possible implementation of the first aspect, obtaining the face identity image according to the third type of neural response data and the preset face identity reconstruction model includes:
obtaining the face identity image according to the following Formula 5 and the third type of neural response data:
Formula 5: X_I_RECON = X̄_I + t_I_test V_I_train^T, with s_I_test = (Y_I_test − Ȳ_I) U_I_train and t_I_test = s_I_test W_I_train,
where X_I_RECON is the face identity image, X̄_I is the average image of the dynamic face identity image samples preset in the face identity reconstruction model, Y_I_test is the third type of neural response data, Ȳ_I is the average data of the third type of neural response data samples elicited by the dynamic face identity image samples preset in the face identity reconstruction model, s_I_test is the projection coordinate of Y_I_test, t_I_test is the projection coordinate of X_I_RECON, W_I_train is the s_I_test-t_I_test transformation matrix in the face identity reconstruction model, U_I_train is the eigenvector matrix of Y_I_test in the face identity reconstruction model, and V_I_train is the eigenvector matrix of the dynamic face identity image samples preset in the face identity reconstruction model.
Optionally, in a possible implementation of the first aspect, before obtaining the face identity image according to the third type of neural response data and the preset face identity reconstruction model, the method further includes:
obtaining dynamic face identity image training samples and training samples of the third type of neural response data elicited by the dynamic face identity image training samples;
taking the dynamic face identity image samples as the output and the third type of neural response data samples as the input, performing parameter learning on the s_I-t_I transformation matrix, the eigenvectors of the third type of neural response data samples and the eigenvectors of the dynamic face identity image training samples through the following Formula 6, to obtain the s_I_test-t_I_test transformation matrix in the face identity reconstruction model, the eigenvectors of the third type of neural response data samples in the face identity reconstruction model, and the eigenvectors of the face identity image training samples:
Formula 6: t_I = (X_I − X̄_I)V_I,  s_I = (Y_I − Ȳ_I)U_I,  t_I(ex) = s_I(ex) W_I(ex),  W_I(ex) = (s_I(ex)^T s_I(ex) + I)^(-1) s_I(ex)^T t_I(ex),
where X_I is the dynamic face identity image sample, X̄_I is the average image of X_I, Y_I is the third type of neural response data sample, Ȳ_I is the average data of Y_I, s_I is the projection coordinate of Y_I, t_I is the projection coordinate of X_I, W_I is the s_I-t_I transformation matrix, U_I is the eigenvector matrix of Y_I, V_I is the eigenvector matrix of X_I in the face identity reconstruction model, and ex is the label of each facial expression; and
obtaining the face identity reconstruction model according to the s_I_test-t_I_test transformation matrix in the face identity reconstruction model, the eigenvectors of the third type of neural response data samples in the face identity reconstruction model, and the eigenvectors of the face identity image in the face identity reconstruction model.
Optionally, in a possible implementation of the first aspect, the first type of neural response data is neural response data obtained from the primary visual cortex of the brain of the user to be tested;
the second type of neural response data is neural response data obtained from the posterior superior temporal sulcus and the amygdala of the user to be tested; and
the third type of neural response data is neural response data obtained from the fusiform face area and the anterior temporal lobe of the user to be tested.
Optionally, in a possible implementation of the first aspect, obtaining the dynamic face image according to the basic face image, the facial expression image and the face identity image includes:
determining the average image of the basic face image, the facial expression image and the face identity image as the dynamic face image.
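As a sketch of this final fusion step, and assuming the three reconstructed images share the same resolution, the averaging could be implemented as follows (names are illustrative):

```python
import numpy as np

def fuse_dynamic_face(base_img, expression_img, identity_img):
    """Determine the dynamic face image as the average of the three reconstructions."""
    images = np.stack([base_img, expression_img, identity_img])
    return images.mean(axis=0)
```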
A second aspect of the embodiments of the present invention provides a device for reconstructing a dynamic face image, including:
a first acquisition module, configured to extract the first type of neural response data, and obtain a basic face image according to the first type of neural response data and a preset face image reconstruction model;
a second acquisition module, configured to extract the second type of neural response data, and obtain a facial expression image according to the second type of neural response data and a preset facial expression reconstruction model;
a third acquisition module, configured to extract the third type of neural response data, and obtain a face identity image according to the third type of neural response data and a preset face identity reconstruction model; and
a dynamic face image acquisition module, configured to obtain a dynamic face image according to the basic face image, the facial expression image and the face identity image.
Optionally, in a possible implementation of the second aspect, the first acquisition module is configured to obtain the basic face image according to Formula 1 above and the first type of neural response data,
where X_G_RECON is the basic face image, X̄ is the average image of the dynamic face basic image samples preset in the face image reconstruction model, Y_test is the first type of neural response data sample, Ȳ is the average data of the first type of neural response data samples elicited by the dynamic face basic image samples preset in the face image reconstruction model, s_test is the projection coordinate of Y_test, t_test is the projection coordinate of X_G_RECON, W_train is the s_test-t_test transformation matrix in the face image reconstruction model, U_train is the eigenvector matrix of Y_test in the face image reconstruction model, and V_train is the eigenvector matrix of the dynamic face basic image samples preset in the face image reconstruction model.
Optionally, in a possible implementation of the second aspect, the first acquisition module 401 is further configured to obtain dynamic face basic image training samples and training samples of the first type of neural response data elicited by the dynamic face basic image samples;
to take the dynamic face basic image samples as the output and the first type of neural response data samples as the input, and perform parameter learning on the s-t transformation matrix, the eigenvectors of the first type of neural response data samples and the eigenvectors of the dynamic face basic image training samples through Formula 2 above, so as to obtain the s_test-t_test transformation matrix in the face image reconstruction model, the eigenvectors of the first type of neural response data samples in the face image reconstruction model, and the eigenvectors of the basic face image in the face image reconstruction model,
where X is the dynamic face basic image sample, X̄ is the average image of X, Y is the first type of neural response data sample, Ȳ is the average data of Y, s is the projection coordinate of Y, t is the projection coordinate of X, W is the s-t transformation matrix, U is the eigenvector matrix of Y, and V is the eigenvector matrix of X; and
to obtain the face image reconstruction model according to the s_test-t_test transformation matrix in the face image reconstruction model, the eigenvectors of the first type of neural response data samples in the face image reconstruction model, and the eigenvectors of the basic face image in the face image reconstruction model.
Optionally, in a possible implementation of the second aspect, the second acquisition module is configured to obtain the facial expression image according to Formula 3 above and the second type of neural response data,
where X_E_RECON is the facial expression image, X̄_E is the average image of the dynamic facial expression image samples preset in the facial expression reconstruction model, Y_E_test is the second type of neural response data sample, Ȳ_E is the average data of the second type of neural response data samples elicited by the dynamic facial expression image samples preset in the facial expression reconstruction model, s_E_test is the projection coordinate of Y_E_test, t_E_test is the projection coordinate of X_E_RECON, W_E_train is the s_E_test-t_E_test transformation matrix in the facial expression reconstruction model, U_E_train is the eigenvector matrix of Y_E_test in the facial expression reconstruction model, and V_E_train is the eigenvector matrix of the dynamic facial expression image samples preset in the facial expression reconstruction model.
Optionally, in a possible implementation of the second aspect, the second acquisition module is further configured to obtain dynamic facial expression image training samples and training samples of the second type of neural response data elicited by the dynamic facial expression image samples;
to take the dynamic facial expression image samples as the output and the second type of neural response data samples as the input, and perform parameter learning on the s_E-t_E transformation matrix, the eigenvectors of the second type of neural response data samples and the eigenvectors of the dynamic facial expression image training samples through Formula 4 above, so as to obtain the s_E_test-t_E_test transformation matrix in the facial expression reconstruction model, the eigenvectors of the second type of neural response data samples in the facial expression reconstruction model, and the eigenvectors of the facial expression image in the facial expression reconstruction model,
where X_E is the dynamic facial expression image sample, X̄_E is the average image of X_E, Y_E is the second type of neural response data sample, Ȳ_E is the average data of Y_E, s_E is the projection coordinate of Y_E, t_E is the projection coordinate of X_E, W_E is the s_E-t_E transformation matrix, U_E is the eigenvector matrix of Y_E, V_E is the eigenvector matrix of X_E, and id is the label of each facial identity individual; and
to obtain the facial expression reconstruction model according to the s_E_test-t_E_test transformation matrix in the facial expression reconstruction model, the eigenvectors of the second type of neural response data samples in the facial expression reconstruction model, and the eigenvectors of the facial expression image in the facial expression reconstruction model.
Optionally, the third acquisition module is configured to obtain the face identity image according to Formula 5 above and the third type of neural response data,
where X_I_RECON is the face identity image, X̄_I is the average image of the dynamic face identity image samples preset in the face identity reconstruction model, Y_I_test is the third type of neural response data sample, Ȳ_I is the average data of the third type of neural response data samples elicited by the dynamic face identity image samples preset in the face identity reconstruction model, s_I_test is the projection coordinate of Y_I_test, t_I_test is the projection coordinate of X_I_RECON, W_I_train is the s_I_test-t_I_test transformation matrix in the face identity reconstruction model, U_I_train is the eigenvector matrix of Y_I_test in the face identity reconstruction model, and V_I_train is the eigenvector matrix of the dynamic face identity image samples preset in the face identity reconstruction model.
Optionally, in a possible implementation of the second aspect, the third acquisition module is further configured to obtain dynamic face identity image training samples and training samples of the third type of neural response data elicited by the dynamic face identity image training samples; to take the dynamic face identity image samples as the output and the third type of neural response data samples as the input, and perform parameter learning on the s_I-t_I transformation matrix, the eigenvectors of the third type of neural response data samples and the eigenvectors of the dynamic face identity image training samples through Formula 6 above, so as to obtain the s_I_test-t_I_test transformation matrix in the face identity reconstruction model, the eigenvectors of the third type of neural response data samples in the face identity reconstruction model, and the eigenvectors of the face identity image training samples,
where X_I is the dynamic face identity image sample, X̄_I is the average image of X_I, Y_I is the third type of neural response data sample, Ȳ_I is the average data of Y_I, s_I is the projection coordinate of Y_I, t_I is the projection coordinate of X_I, W_I is the s_I-t_I transformation matrix, U_I is the eigenvector matrix of Y_I, V_I is the eigenvector matrix of X_I in the face identity reconstruction model, and ex is the label of each facial expression; and
to obtain the face identity reconstruction model according to the s_I_test-t_I_test transformation matrix in the face identity reconstruction model, the eigenvectors of the third type of neural response data samples in the face identity reconstruction model, and the eigenvectors of the face identity image in the face identity reconstruction model.
Optionally, in a possible implementation of the second aspect, the first type of neural response data is neural response data obtained from the primary visual cortex of the brain of the user to be tested;
the second type of neural response data is neural response data obtained from the posterior superior temporal sulcus and the amygdala of the user to be tested; and
the third type of neural response data is neural response data obtained from the fusiform face area and the anterior temporal lobe of the user to be tested.
Optionally, in a possible implementation of the second aspect, the dynamic face image acquisition module is configured to determine the average image of the basic face image, the facial expression image and the face identity image as the dynamic face image.
The method for reconstructing a dynamic face image provided by the present invention addresses the fact that dynamic face images mainly convey high-level visual feature information and the cognitive property that facial features of different attributes are processed by different high-level cognitive brain regions. The solution uses three kinds of high-level feature information of different attributes and three different high-level cognitive brain regions to acquire the first, second and third types of neural response data corresponding to the three kinds of high-level facial feature information. At the same time, models mapping the dynamic face from visual image space to the brain perception space of the different high-level cognitive brain regions are constructed, together with the multi-dimensional mapping relationships between these models, and the basic face image, facial expression image and face identity image are obtained, thereby reconstructing multi-dimensional facial features and obtaining a dynamic face image. Dynamic face images perceived by certain patients can be reconstructed, giving us a deeper understanding of the mechanisms of cognitive impairment in mental disorders.
Brief Description of the Drawings
FIG. 1 is a schematic flowchart of a method for reconstructing a dynamic face image provided by the present invention;
FIG. 2 is a schematic diagram of signal transmission in a method for reconstructing a dynamic face image provided by the present invention;
FIG. 3 is a schematic structural diagram of a device for reconstructing a dynamic face image provided by an embodiment of the present invention; and
FIG. 4 is a schematic diagram of the hardware structure of a dynamic face image reconstruction device provided by an embodiment of the present invention.
Detailed Description of the Embodiments
In order to make the objectives, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present invention.
The terms "first", "second", "third", "fourth" and the like in the description and claims of the present invention and in the above drawings are used to distinguish similar objects, and are not necessarily used to describe a specific order or sequence. It should be understood that data used in this way are interchangeable under appropriate circumstances, so that the embodiments of the present invention described herein can be implemented in sequences other than those illustrated or described herein.
It should be understood that, in the various embodiments of the present invention, the sequence numbers of the processes do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present invention.
It should be understood that, in the present invention, "comprising" and "having" and any variations thereof are intended to cover non-exclusive inclusion. For example, a process, method, system, product or device comprising a series of steps or units is not necessarily limited to the steps or units expressly listed, but may include other steps or units that are not expressly listed or that are inherent to the process, method, product or device.
First, the terms involved in the present invention are explained.
Functional Magnetic Resonance Imaging (fMRI): an emerging neuroimaging technique whose principle is to use magnetic resonance imaging to measure the hemodynamic changes caused by neuronal activity.
A specific application scenario of the present invention is to reconstruct the dynamic face images perceived by patients with cognitive and mental disorders, such as prosopagnosia, autism, Alzheimer's disease and Parkinson's disease, who have deficits in recognizing the high-level feature attributes of dynamic faces; this can give us a deeper understanding of the mechanisms of cognitive impairment in mental disorders. Current face image reconstruction uses Principal Component Analysis (PCA) to reconstruct face images. However, the prior art does not treat high-level feature information of different attributes separately when establishing the mapping relationship, and can only reconstruct static face pictures, which is difficult to meet the need for reconstructing multi-dimensional face information in the field of image reconstruction.
The present invention provides a method for reconstructing a dynamic face image, which aims to solve the above technical problems of the prior art. It realizes dynamic face reconstruction, simultaneously reconstructs expression features and identity features in the reconstructed dynamic face image, enriches the reconstructed information, and improves the accuracy of face reconstruction.
The technical solutions of the present invention and how they solve the above technical problems are described in detail below with specific embodiments. The following specific embodiments may be combined with each other, and the same or similar concepts or processes may not be repeated in some embodiments. The embodiments of the present invention are described below with reference to the accompanying drawings.
FIG. 1 is a schematic flowchart of a method for reconstructing a dynamic face image provided by the present invention. The execution body of the method shown in FIG. 1 may be a software and/or hardware device. The method shown in FIG. 1 includes steps S101 to S104, as follows.
S101: extract the first type of neural response data, and obtain a basic face image according to the first type of neural response data and a preset face image reconstruction model.
Specifically, the first type of neural response data is neural response data obtained from the primary visual cortex of the brain of the user to be tested. Different attribute features of a face are processed by different brain regions, and the primary visual cortex perceives the pixel-level, low-level visual features of the face. The first type of neural response data is obtained by using functional magnetic resonance imaging to collect functional magnetic resonance signals from the primary visual cortex of the user to be tested; the preset face image reconstruction model then takes the first type of neural response data as input and outputs the corresponding basic face image.
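As an illustration of how such region-of-interest responses might be extracted in practice, the sketch below masks a preprocessed fMRI run with a primary visual cortex (V1) mask using the nilearn library. The choice of nilearn and the file names are assumptions made for illustration only; the patent does not prescribe a particular toolchain.

```python
from nilearn.maskers import NiftiMasker

# Hypothetical inputs: a preprocessed 4D fMRI run and a binary V1 ROI mask.
fmri_run = "sub-01_task-faces_bold_preproc.nii.gz"
v1_mask = "sub-01_V1_roi_mask.nii.gz"

masker = NiftiMasker(mask_img=v1_mask, standardize=True)
# Result shape: (n_timepoints, n_voxels); each row is one first-type response pattern.
first_type_responses = masker.fit_transform(fmri_run)
```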
S102: extract the second type of neural response data, and obtain a facial expression image according to the second type of neural response data and a preset facial expression reconstruction model.
Specifically, the second type of neural response data is neural response data obtained from the posterior superior temporal sulcus and the amygdala of the user to be tested. Facial expression features are processed by brain regions such as the posterior superior temporal sulcus and the amygdala. The second type of neural response data is obtained by using functional magnetic resonance imaging to collect functional magnetic resonance signals from the posterior superior temporal sulcus and the amygdala of the user to be tested; the preset facial expression reconstruction model then takes the second type of neural response data as input and outputs the corresponding facial expression image.
S103: extract the third type of neural response data, and obtain a face identity image according to the third type of neural response data and a preset face identity reconstruction model.
Specifically, the third type of neural response data is neural response data obtained from the fusiform face area and the anterior temporal lobe of the user to be tested. Facial identity features are processed by brain regions such as the fusiform face area and the anterior temporal lobe. The third type of neural response data is obtained by using functional magnetic resonance imaging to collect functional magnetic resonance signals from the fusiform face area and/or the anterior temporal lobe of the user to be tested; the preset face identity reconstruction model then takes the third type of neural response data as input and outputs the corresponding face identity image.
S104: obtain a dynamic face image according to the basic face image, the facial expression image and the face identity image.
Specifically, the average image of the basic face image, the facial expression image and the face identity image is determined as the dynamic face image.
In this embodiment, steps S101 to S103 are not limited by the described order of actions; steps S101 to S103 may be performed in another order or simultaneously.
The dynamic face image reconstruction method provided by the above embodiment uses three kinds of high-level feature information of different attributes and three different high-level cognitive brain regions to acquire the first, second and third types of neural response data corresponding to the three kinds of high-level facial feature information of different attributes. At the same time, models mapping the dynamic face from visual image space to the brain perception space of the different high-level cognitive brain regions are constructed, together with the multi-dimensional mapping relationships between the models, and the basic face image, facial expression image and face identity image are obtained, thereby reconstructing multi-dimensional facial features and obtaining a dynamic face image, so that dynamic face images perceived by the tested user can be reconstructed.
On the basis of the above embodiment, a specific implementation of step S101 (obtaining the basic face image according to the first type of neural response data and the preset face image reconstruction model) may be as follows.
Referring to FIG. 2, FIG. 2 is a schematic diagram of signal transmission in a method for reconstructing a dynamic face image provided by the present invention. The representation of the dynamic face in brain perception space includes a basic dynamic face perception space, a facial expression perception space and a face identity perception space; the representation of the dynamic face in image space includes a basic image pixel space, a facial image expression space and a facial image identity space.
The basic face image of step S101 is obtained according to the following Formula 1 (i.e., the face image reconstruction model) and the first type of neural response data:
Formula 1: X_G_RECON = X̄ + t_test V_train^T, with s_test = (Y_test − Ȳ) U_train and t_test = s_test W_train,
where X_G_RECON is the basic face image, X̄ is the average image of the dynamic face basic image samples preset in the face image reconstruction model, Y_test is the first type of neural response data sample, Ȳ is the average data of the first type of neural response data samples elicited by the dynamic face basic image samples preset in the face image reconstruction model, s_test is the projection coordinate of Y_test, t_test is the projection coordinate of X_G_RECON, W_train is the s_test-t_test transformation matrix in the face image reconstruction model, U_train is the eigenvector matrix of Y_test in the face image reconstruction model, and V_train is the eigenvector matrix of the dynamic face basic image samples preset in the face image reconstruction model.
On the basis of the above embodiment, before the basic face image is obtained according to the first type of neural response data and the preset face image reconstruction model, a process of learning the parameters of the face image reconstruction model may also be included, as follows.
S201: taking the dynamic face basic image samples as the output and the first type of neural response data samples as the input, parameter learning is performed on the s-t transformation matrix, the eigenvectors of the first type of neural response data samples and the eigenvectors of the dynamic face basic image training samples through Formula 2 above, to obtain the s_test-t_test transformation matrix in the face image reconstruction model, the eigenvectors of the first type of neural response data samples in the face image reconstruction model, and the eigenvectors of the basic face image in the face image reconstruction model.
In the basic image pixel space, let X_j be a dynamic face visual image, where j = 1, 2, ..., N and N is the number of dynamic face images. Representing each dynamic face in the form of a single-dimensional vector, the dynamic face basic image sample X is expressed as X = [X_1 X_2 ... X_j ... X_N].
PCA singular value decomposition is performed on X so that these samples generate a "basic image pixel space". The projection coordinate of each dynamic face image sample in the basic image pixel space is:
t = (X − X̄)V    (Formula 2.1)
where X̄ is the average image of X and V is the eigenvector matrix of X, with columns sorted from largest to smallest according to the corresponding eigenvalues. V can be written more specifically as V = [V_1, V_2, ..., V_N], the set of all (linearly independent) eigenvectors, each column being one eigenvector.
Under the dynamic face basic image sample X, each dynamic image (not limited to the images in the sample set) can be linearly represented by its projection coordinate in this space. Since this PCA-based decomposition process is reversible, any visual image can be reconstructed from its projection coordinate in the feature space, expressed as:
X = X̄ + tV^T    (Formula 2.2)
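A short numerical check of the reversibility stated for Formulas 2.1 and 2.2 (projection followed by back-projection recovers the image when all components are kept) might look like the following; the data are random and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 64))          # 20 vectorized face images, 64 pixels each
X_mean = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - X_mean, full_matrices=False)
V = Vt.T                               # columns are the eigenvectors of X

t = (X - X_mean) @ V                   # Formula 2.1: projection coordinates
X_back = X_mean + t @ V.T              # Formula 2.2: reconstruction from t
assert np.allclose(X, X_back)          # the decomposition is reversible
```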
In the basic dynamic face perception space, let Y_j be the neural response distribution of one dynamic face image within a set of primary visual cortex brain regions, where j = 1, 2, ..., N. Representing Y_j in the form of a one-dimensional vector, the neural responses of the dynamic face image sample set in the primary visual cortex, i.e., the first type of neural response data sample Y, are expressed as Y = [Y_1 Y_2 ... Y_j ... Y_N]. PCA singular value decomposition is performed on Y, and the projection coordinate of the neural response of each dynamic face image in this neural response space can be expressed as:
s = (Y − Ȳ)U    (Formula 2.3)
where Ȳ is the average data of Y and U is the eigenvector matrix of Y, with columns ordered by the corresponding eigenvalues from largest to smallest, written specifically as U = [U_1, U_2, ..., U_N].
The multi-dimensional mapping relationship expresses the projection coordinate t of a dynamic face image sample under X as a linear transformation of its projection coordinate s under Y, namely
t = sW    (Formula 2.4)
where W is the s-t transformation matrix. When t and s are known and of full rank, one way of solving for the transformation matrix W is
W = (s^T s + I)^(-1) s^T t    (Formula 2.5)
In summary, Formula 2.1, Formula 2.3, Formula 2.4 and Formula 2.5 together form Formula 2,
where X is the dynamic face basic image sample, X̄ is the average image of X, Y is the first type of neural response data sample, Ȳ is the average data of Y, s is the projection coordinate of Y, t is the projection coordinate of X, W is the s-t transformation matrix, U is the eigenvector matrix of Y, and V is the eigenvector matrix of X.
S202: the face image reconstruction model of Formula 1 is obtained according to the s_test-t_test transformation matrix in the face image reconstruction model, the eigenvectors of the first type of neural response data samples in the face image reconstruction model, and the eigenvectors of the basic face image in the face image reconstruction model.
On the basis of the above embodiment, a specific implementation of step S102 (extracting the second type of neural response data, and obtaining the facial expression image according to the second type of neural response data and the preset facial expression reconstruction model) may be as follows.
The facial expression image is obtained according to the following Formula 3 (i.e., the facial expression reconstruction model) and the second type of neural response data:
Formula 3: X_E_RECON = X̄_E + t_E_test V_E_train^T, with s_E_test = (Y_E_test − Ȳ_E) U_E_train and t_E_test = s_E_test W_E_train,
where X_E_RECON is the facial expression image, X̄_E is the average image of the dynamic facial expression image samples preset in the facial expression reconstruction model, Y_E_test is the second type of neural response data sample, Ȳ_E is the average data of the second type of neural response data samples elicited by the dynamic facial expression image samples preset in the facial expression reconstruction model, s_E_test is the projection coordinate of Y_E_test, t_E_test is the projection coordinate of X_E_RECON, W_E_train is the s_E_test-t_E_test transformation matrix in the facial expression reconstruction model, U_E_train is the eigenvector matrix of Y_E_test in the facial expression reconstruction model, and V_E_train is the eigenvector matrix of the dynamic facial expression image samples preset in the facial expression reconstruction model.
On the basis of the above embodiment, before the facial expression image is obtained according to the second type of neural response data and the preset facial expression reconstruction model, a process of learning the parameters of the facial expression reconstruction model may also be included, as follows.
S301: taking the dynamic facial expression image samples as the output and the second type of neural response data samples as the input, parameter learning is performed on the s_E-t_E transformation matrix, the eigenvectors of the second type of neural response data samples and the eigenvectors of the dynamic facial expression image training samples through Formula 4 above, to obtain the s_E_test-t_E_test transformation matrix in the facial expression reconstruction model, the eigenvectors of the second type of neural response data samples in the facial expression reconstruction model, and the eigenvectors of the facial expression image in the facial expression reconstruction model.
获取动态人脸表情图像训练样本和以所述动态人脸表情图像样本引起的第二类神经响应数据训练样本;其中,动态人脸在面部图像表情空间下,重新标记图像样本集中的N个样本图像,并将动态人脸样本集重新整合,以人脸表情图像训练样本XE的形式表示如下:Obtain the training samples of dynamic facial expression images and the training samples of the second type of neural response data caused by the dynamic facial expression image samples; wherein, the dynamic faces are in the facial image expression space, and the N samples in the image sample set are re-labeled image, and reintegrate the dynamic face sample set, which is expressed in the form of face expression image training sample X E as follows:
其中XE的每一列由同一种面部表情的P个不同面部身份单维向量拼接而成,代表一种面部表情,XE的每一行有Q个值,代表了同一个面部身份在一个图像局部位置上的Q种表情变化。Each column of X E is composed of P single-dimensional vectors of different facial identities of the same facial expression, representing a facial expression, and each row of X E has Q values, representing the same facial identity in an image part Q kinds of expression changes on the position.
对XE进行基于PCA的奇异值分解,每一种面部表情在XE下的投影坐标可以表示为:Perform PCA-based singular value decomposition on X E , the projected coordinates of each facial expression under X E can be expressed as:
是XE的平均图像,VE是XE的特征向量,特征值按从大到小的顺序排列表示为 is the average image of X E , V E is the eigenvector of X E , and the eigenvalues are arranged in descending order and expressed as
在XE下,每一种面部表情(不局限于样本中的表情种类)都可以由它在这个空间下的投影坐标表示,由于PCA的分解过程可逆,因此任何一类面部表情可以根据它在表情特征空间下的投影坐标重建出来:Under X E , each facial expression (not limited to the types of expressions in the sample) can be represented by its projected coordinates in this space. Since the decomposition process of PCA is reversible, any type of facial expression can be expressed according to its The projected coordinates in the expression feature space are reconstructed:
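The projection and reconstruction formulas themselves appear as images in the original filing and are not reproduced here, so the following is only a minimal sketch of the stated idea: PCA by singular value decomposition of the mean-centred sample matrix, projection of each column onto the eigenvectors, and the inverse mapping back to image space. The function names and the column-wise sample layout are assumptions.

```python
import numpy as np

def pca_decompose(X):
    """PCA by SVD of the mean-centred sample matrix (one sample per column)."""
    X_mean = X.mean(axis=1, keepdims=True)
    Xc = X - X_mean
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)  # Xc = U @ np.diag(S) @ Vt
    components = U                # principal axes, ordered by decreasing singular value
    coords = components.T @ Xc    # projection coordinates of every column
    return X_mean, components, coords

def pca_reconstruct(X_mean, components, coords):
    """Invert the projection: rebuild samples from their projection coordinates."""
    return X_mean + components @ coords
```

In this sketch X_E would be passed column-wise (one expression per column); pca_reconstruct recovers the samples up to the rank retained by the decomposition, which is what makes reconstruction of unseen expressions from their coordinates possible.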
In the facial-expression perception space for dynamic faces, let Y_i,e be the neural response distribution of one dynamic face image within the posterior superior temporal sulcus and the amygdala. Rearranging Y_i,e so that each column contains the neural responses to one expression category yields the second-type neural response data sample Y_E. Performing singular value decomposition on Y_E, the projection coordinates of the neural response to each facial expression under Y_E can be expressed as:
where Ȳ_E is the average of Y_E and U_E is the eigenvector matrix, with the corresponding eigenvalues arranged in descending order.
The projection coordinate t_E of a dynamic face image sample under X_E is expressed as a linear transformation of its projection coordinate s_E under Y_E; t_E and s_E are defined as:
Here id is the label of each facial identity individual, and the mapping is established as follows:
t_E(id) = s_E(id) W_E(id)    (Equation 4.4)
An analytical solution of W_E(id) can be expressed as:
W_E(id) = (s_E(id)^T s_E(id) + I)^(-1) s_E(id)^T t_E(id)    (Equation 4.5)
In summary, Equations 4.1, 4.3, 4.4 and 4.5 together form Equation 4,
where X_E is the dynamic facial expression image sample, X̄_E is the average image of X_E, Y_E is the second-type neural response data sample, Ȳ_E is the average of Y_E, s_E is the projection coordinate of Y_E, t_E is the projection coordinate of X_E, W_E is the s_E-t_E transformation matrix, U_E is the eigenvector matrix of Y_E, and V_E is the eigenvector matrix of X_E.
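Equation 4.5 is a ridge-regularised least-squares solution computed separately for each facial identity label id. A minimal sketch, assuming the coordinates for each identity are collected into matrices with one row per expression sample (the dictionary layout and variable names are illustrative, not taken from the patent text):

```python
import numpy as np

def learn_expression_mappings(s_by_id, t_by_id):
    """Per-identity W_E(id) from Equation 4.5: W = (s^T s + I)^(-1) s^T t."""
    W_by_id = {}
    for face_id, s in s_by_id.items():
        t = t_by_id[face_id]                    # image-space coordinates t_E(id)
        gram = s.T @ s + np.eye(s.shape[1])     # s^T s plus the identity regulariser
        W_by_id[face_id] = np.linalg.solve(gram, s.T @ t)
    return W_by_id
```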
S302: Obtain the facial expression reconstruction model from the s_E_test-t_E_test transformation matrix, the eigenvectors of the second-type neural response data samples, and the eigenvectors of the facial expression images in the facial expression reconstruction model.
Based on the above embodiment, step S103 (extracting the third type of neural response data and obtaining a face identity image from the third type of neural response data and a preset face identity reconstruction model) may be implemented as follows:
Obtain the face identity image from the following Equation 5 (i.e., the face identity reconstruction model) and the third type of neural response data;
where X_I_RECON is the face identity image, X̄_I is the average image of the dynamic face identity image samples preset in the face identity reconstruction model, Y_I_test is the third-type neural response data sample, Ȳ_I is the average of the third-type neural response data samples evoked by the preset dynamic face identity image samples, s_I_test is the projection coordinate of Y_I_test, t_I_test is the projection coordinate of X_I_RECON, W_I_train is the s_I_test-t_I_test transformation matrix in the face identity reconstruction model, U_I_train is the eigenvector matrix of Y_I_test in the face identity reconstruction model, and V_I_train is the eigenvector matrix of the dynamic face identity image samples preset in the face identity reconstruction model.
Based on the above embodiment, before the face identity image is obtained from the third type of neural response data and the preset face identity reconstruction model, a process of learning the parameters of the face identity reconstruction model may also be included, as follows:
S401: Taking the dynamic face identity image samples as the output and the third-type neural response data samples as the input, perform parameter learning on the s_I-t_I transformation matrix, the eigenvectors of the third-type neural response data samples, and the eigenvectors of the dynamic face identity image training samples by means of Equation 6, to obtain the s_I_test-t_I_test transformation matrix, the eigenvectors of the third-type neural response data samples, and the eigenvectors of the face identity image training samples in the face identity reconstruction model.
Obtain dynamic face identity image training samples and the third-type neural response data training samples evoked by the dynamic face identity image training samples.
In the facial-image identity space, the N labeled sample images are integrated once more and represented as the dynamic face identity image sample X_I. Each column of X_I is formed by concatenating the single-dimensional vectors of Q different facial expressions of the same facial identity and therefore represents one facial identity individual; each row of X_I contains P values, representing the P identity variations of the same facial expression at one local image position.
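X_E and X_I are thus two arrangements of the same labeled sample set: once grouped by expression and once grouped by identity. A minimal sketch of this reorganisation, assuming the vectorised images are held in a (P, Q, D) array (this storage layout is an assumption for illustration; the patent text does not prescribe it):

```python
import numpy as np

def build_sample_matrices(samples):
    """Arrange labelled face images into the expression matrix X_E and identity matrix X_I.

    samples: array of shape (P, Q, D); P facial identities, Q expressions,
    D pixels per vectorised image (assumed layout).
    """
    P, Q, D = samples.shape
    # Column q of X_E: the P identity vectors showing expression q, concatenated -> shape (P*D, Q).
    X_E = samples.transpose(1, 0, 2).reshape(Q, P * D).T
    # Column p of X_I: the Q expression vectors of identity p, concatenated -> shape (Q*D, P).
    X_I = samples.reshape(P, Q * D).T
    return X_E, X_I
```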
Perform PCA-based singular value decomposition on X_I; the projection coordinates of each facial identity individual in this new identity feature space can be expressed as:
where X̄_I is the average image of X_I and V_I is the eigenvector matrix of X_I, with the eigenvalues arranged in descending order.
Under X_I, every facial identity individual (not limited to the identity individuals present in the samples) can be represented by its projection coordinates in this space. Since the PCA decomposition is invertible, any identity individual can be reconstructed from its projection coordinates in the identity feature space:
In the face-identity perception space for dynamic faces, let Y_i,e be the neural response distribution of one dynamic face image within the fusiform face-processing area and the anterior temporal lobe. Rearranging Y_i,e so that each column contains the neural response data of one facial identity individual yields Y_I. Performing PCA-based singular value decomposition on Y_I, the projection coordinates of the neural response to each facial identity under Y_I can be expressed as:
Here Ȳ_I is the average of Y_I and U_I is the eigenvector matrix of Y_I, with the corresponding eigenvalues arranged in descending order.
The projection coordinate t_I of a dynamic face image sample in the facial-image identity space is expressed as a linear transformation of its projection coordinate s_I in the neural response space. Redefine t_I and s_I as:
Here ex is the label of each facial expression. The mapping is as follows:
t_I(ex) = s_I(ex) W_I(ex)    (Equation 6.4)
The analytical solution of W_I(ex) can be expressed as:
W_I(ex) = (s_I(ex)^T s_I(ex) + I)^(-1) s_I(ex)^T t_I(ex)    (Equation 6.5)
In summary, Equations 6.1, 6.3, 6.4 and 6.5 together form Equation 6,
where X_I is the dynamic face identity image sample, X̄_I is the average image of X_I, Y_I is the third-type neural response data sample, Ȳ_I is the average of Y_I, s_I is the projection coordinate of Y_I, t_I is the projection coordinate of X_I, W_I is the s_I-t_I transformation matrix, U_I is the eigenvector matrix of Y_I, and V_I is the eigenvector matrix of X_I.
S402: Obtain the face identity reconstruction model from the s_I_test-t_I_test transformation matrix, the eigenvectors of the third-type neural response data samples, and the eigenvectors of the face identity images in the face identity reconstruction model.
Referring to FIG. 3, a schematic structural diagram of an apparatus for dynamic face image reconstruction provided by an embodiment of the present invention, the apparatus 40 for dynamic face image reconstruction shown in FIG. 3 includes:
A first acquisition module 401, configured to extract the first type of neural response data and obtain a basic face image from the first type of neural response data and a preset face image reconstruction model.
A second acquisition module 402, configured to extract the second type of neural response data and obtain a facial expression image from the second type of neural response data and a preset facial expression reconstruction model.
A third acquisition module 403, configured to extract the third type of neural response data and obtain a face identity image from the third type of neural response data and a preset face identity reconstruction model.
A dynamic face image acquisition module 404, configured to obtain a dynamic face image from the basic face image, the facial expression image, and the face identity image.
The apparatus for dynamic face image reconstruction in the embodiment shown in FIG. 3 can correspondingly be used to execute the steps of the method embodiment shown in FIG. 1; its implementation principle and technical effect are similar and are not repeated here.
Optionally, the first acquisition module 401 is configured to obtain the basic face image from the following Equation 1 and the first type of neural response data;
where X_G_RECON is the basic face image, X̄ is the average image of the dynamic basic face image samples preset in the face image reconstruction model, Y_test is the first-type neural response data sample, Ȳ is the average of the first-type neural response data samples evoked by the preset dynamic basic face image samples, s_test is the projection coordinate of Y_test, t_test is the projection coordinate of X_G_RECON, W_train is the s_test-t_test transformation matrix in the face image reconstruction model, U_train is the eigenvector matrix of Y_test in the face image reconstruction model, and V_train is the eigenvector matrix of the dynamic basic face image samples preset in the face image reconstruction model.
Optionally, the first acquisition module 401 is further configured to obtain dynamic basic face image training samples and the first-type neural response data training samples evoked by the dynamic basic face image samples.
Taking the dynamic basic face image samples as the output and the first-type neural response data samples as the input, perform parameter learning on the s-t transformation matrix, the eigenvectors of the first-type neural response data samples, and the eigenvectors of the dynamic basic face image training samples by means of the following Equation 2, to obtain the s_test-t_test transformation matrix, the eigenvectors of the first-type neural response data samples, and the eigenvectors of the basic face image in the face image reconstruction model,
where X is the dynamic basic face image sample, X̄ is the average image of X, Y is the first-type neural response data sample, Ȳ is the average of Y, s is the projection coordinate of Y, t is the projection coordinate of X, W is the s-t transformation matrix, U is the eigenvector matrix of Y, and V is the eigenvector matrix of X; the face image reconstruction model is then obtained from the s_test-t_test transformation matrix, the eigenvectors of the first-type neural response data samples, and the eigenvectors of the basic face image in the face image reconstruction model.
Optionally, the second acquisition module 402 is configured to obtain the facial expression image from the following Equation 3 and the second type of neural response data;
where X_E_RECON is the facial expression image, X̄_E is the average image of the dynamic facial expression image samples preset in the facial expression reconstruction model, Y_E_test is the second-type neural response data sample, Ȳ_E is the average of the second-type neural response data samples evoked by the preset dynamic facial expression image samples, s_E_test is the projection coordinate of Y_E_test, t_E_test is the projection coordinate of X_E_RECON, W_E_train is the s_E_test-t_E_test transformation matrix in the facial expression reconstruction model, U_E_train is the eigenvector matrix of Y_E_test in the facial expression reconstruction model, and V_E_train is the eigenvector matrix of the dynamic facial expression image samples preset in the facial expression reconstruction model.
Optionally, the second acquisition module 402 is further configured to obtain dynamic facial expression image training samples and the second-type neural response data training samples evoked by the dynamic facial expression image samples.
Taking the dynamic facial expression image samples as the output and the second-type neural response data samples as the input, perform parameter learning on the s_E-t_E transformation matrix, the eigenvectors of the second-type neural response data samples, and the eigenvectors of the dynamic facial expression image training samples by means of the following Equation 4, to obtain the s_E_test-t_E_test transformation matrix, the eigenvectors of the second-type neural response data samples, and the eigenvectors of the facial expression images in the facial expression reconstruction model,
where X_E is the dynamic facial expression image sample, X̄_E is the average image of X_E, Y_E is the second-type neural response data sample, Ȳ_E is the average of Y_E, s_E is the projection coordinate of Y_E, t_E is the projection coordinate of X_E, W_E is the s_E-t_E transformation matrix, U_E is the eigenvector matrix of Y_E, and V_E is the eigenvector matrix of X_E; the facial expression reconstruction model is then obtained from the s_E_test-t_E_test transformation matrix, the eigenvectors of the second-type neural response data samples, and the eigenvectors of the facial expression images in the facial expression reconstruction model.
Optionally, the third acquisition module 403 is configured to obtain the face identity image from the following Equation 5 and the third type of neural response data;
where X_I_RECON is the face identity image, X̄_I is the average image of the dynamic face identity image samples preset in the face identity reconstruction model, Y_I_test is the third-type neural response data sample, Ȳ_I is the average of the third-type neural response data samples evoked by the preset dynamic face identity image samples, s_I_test is the projection coordinate of Y_I_test, t_I_test is the projection coordinate of X_I_RECON, W_I_train is the s_I_test-t_I_test transformation matrix in the face identity reconstruction model, U_I_train is the eigenvector matrix of Y_I_test in the face identity reconstruction model, and V_I_train is the eigenvector matrix of the dynamic face identity image samples preset in the face identity reconstruction model.
Optionally, the third acquisition module 403 is further configured to obtain dynamic face identity image training samples and the third-type neural response data training samples evoked by the dynamic face identity image training samples. Taking the dynamic face identity image samples as the output and the third-type neural response data samples as the input, perform parameter learning on the s_I-t_I transformation matrix, the eigenvectors of the third-type neural response data samples, and the eigenvectors of the dynamic face identity image training samples by means of the following Equation 6, to obtain the s_I_test-t_I_test transformation matrix, the eigenvectors of the third-type neural response data samples, and the eigenvectors of the face identity image training samples in the face identity reconstruction model,
where X_I is the dynamic face identity image sample, X̄_I is the average image of X_I, Y_I is the third-type neural response data sample, Ȳ_I is the average of Y_I, s_I is the projection coordinate of Y_I, t_I is the projection coordinate of X_I, W_I is the s_I-t_I transformation matrix, U_I is the eigenvector matrix of Y_I, and V_I is the eigenvector matrix of X_I;
The face identity reconstruction model is then obtained from the s_I_test-t_I_test transformation matrix, the eigenvectors of the third-type neural response data samples, and the eigenvectors of the face identity images in the face identity reconstruction model.
Optionally, the first type of neural response data is neural response data acquired from the primary visual cortex of the user under test.
The second type of neural response data is neural response data acquired from the posterior superior temporal sulcus and the amygdala of the user under test.
The third type of neural response data is neural response data acquired from the fusiform face-processing area and the anterior temporal lobe of the user under test.
Optionally, the dynamic face image acquisition module 404 is configured to determine the average image of the basic face image, the facial expression image, and the face identity image as the dynamic face image.
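The fusion step of module 404 is a plain pixel-wise average of the three reconstructed images. A minimal sketch, assuming the three reconstructions are arrays of identical shape (the function name is illustrative):

```python
import numpy as np

def fuse_dynamic_face(base_img, expression_img, identity_img):
    """Average the three reconstructed images into the final dynamic face image."""
    return (base_img + expression_img + identity_img) / 3.0
```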
Referring to FIG. 4, a schematic diagram of the hardware structure of a device provided by an embodiment of the present invention, the device 50 includes a processor 51, a memory 52, and a computer program, where:
The memory 52 is configured to store the computer program and may also be a flash memory. The computer program is, for example, an application program or a functional module implementing the above method.
The processor 51 is configured to execute the computer program stored in the memory, so as to implement the steps executed by the terminal in the above method. For details, refer to the related description in the foregoing method embodiments.
Optionally, the memory 52 may be independent of, or integrated with, the processor 51.
When the memory 52 is a component independent of the processor 51, the device may further include:
A bus 53, configured to connect the memory 52 and the processor 51.
The present invention further provides a readable storage medium in which a computer program is stored; when executed by a processor, the computer program implements the methods provided by the various embodiments described above.
The readable storage medium may be a computer storage medium or a communication medium. Communication media include any medium that facilitates the transfer of a computer program from one place to another. A computer storage medium may be any available medium accessible to a general-purpose or special-purpose computer. For example, a readable storage medium is coupled to the processor, so that the processor can read information from, and write information to, the readable storage medium. Of course, the readable storage medium may also be an integral part of the processor. The processor and the readable storage medium may be located in an application-specific integrated circuit (ASIC). In addition, the ASIC may be located in user equipment. Of course, the processor and the readable storage medium may also exist as discrete components in a communication device. The readable storage medium may be a read-only memory (ROM), a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
The present invention further provides a program product comprising execution instructions stored in a readable storage medium. At least one processor of a device can read the execution instructions from the readable storage medium, and execution of the instructions by the at least one processor causes the device to implement the dynamic face image reconstruction methods provided by the various embodiments described above.
In the above device embodiment, it should be understood that the processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the method disclosed in connection with the present invention may be directly embodied as being executed by a hardware processor, or by a combination of hardware and software modules in the processor.
Finally, it should be noted that the above embodiments are only intended to illustrate, not to limit, the technical solutions of the present invention. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be replaced by equivalents, and that such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present invention.
Claims (8)
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201910382834.6A (CN110148468B) | 2019-05-09 | 2019-05-09 | Method and device for dynamic face image reconstruction |

Applications Claiming Priority (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201910382834.6A (CN110148468B) | 2019-05-09 | 2019-05-09 | Method and device for dynamic face image reconstruction |
Publications (2)

| Publication Number | Publication Date |
|---|---|
| CN110148468A | 2019-08-20 |
| CN110148468B | 2021-06-29 |
Family

ID=67594881

Family Applications (1)

| Application Number | Title | Priority Date | Filing Date | Status |
|---|---|---|---|---|
| CN201910382834.6A (CN110148468B) | Method and device for dynamic face image reconstruction | 2019-05-09 | 2019-05-09 | Expired - Fee Related |

Country Status (1)

| Country | Link |
|---|---|
| CN (1) | CN110148468B (en) |
Families Citing this family (1)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| TWI719696B * | 2019-11-01 | 2021-02-21 | 財團法人工業技術研究院 | Face image reconstruction method and system |
Citations (5)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2008013575A2 * | 2006-01-31 | 2008-01-31 | University Of Southern California | 3D face reconstruction from 2D images |
| CN101159015A * | 2007-11-08 | 2008-04-09 | 清华大学 | A recognition method of two-dimensional face images |
| CN101320484A * | 2008-07-17 | 2008-12-10 | 清华大学 | Three-dimensional face recognition method based on full-automatic face positioning |
| CN102254154A * | 2011-07-05 | 2011-11-23 | 南京大学 | Method for authenticating human-face identity based on three-dimensional model reconstruction |
| CN109255830A * | 2018-08-31 | 2019-01-22 | 百度在线网络技术(北京)有限公司 | Three-dimensional facial reconstruction method and device |

Family Cites Families (1)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN108109198A * | 2017-12-18 | 2018-06-01 | 深圳市唯特视科技有限公司 | A three-dimensional expression reconstruction method based on cascaded regression |
Non-Patent Citations (1)

| Title |
|---|
| TechPunk, "读心术：扫描大脑活动可重建你想象中的人脸图像" (Mind reading: scanning brain activity can reconstruct the face you imagine), https://www.sohu.com/a/223687701_102883, 2018-02-23, pp. 1-5 * |
Also Published As

| Publication number | Publication date |
|---|---|
| CN110148468A | 2019-08-20 |
Similar Documents

| Publication | Title |
|---|---|
| McIntosh et al. | Partial least squares analysis of neuroimaging data: applications and advances |
| Correa et al. | Canonical correlation analysis for feature-based fusion of biomedical imaging modalities and its application to detection of associative networks in schizophrenia |
| Kringelbach et al. | A specific and rapid neural signature for parental instinct |
| Hauk et al. | Imagery or meaning? Evidence for a semantic origin of category-specific brain activity in metabolic imaging |
| Jiang et al. | Evaluation of a shape-based model of human face discrimination using fMRI and behavioral techniques |
| Delorme et al. | Independent EEG sources are dipolar |
| Matuszewski et al. | Hi4D-ADSIP 3-D dynamic facial articulation database |
| CN109875509B | Testing system and method for rehabilitation training effect of Alzheimer's patients |
| CN111568412A | Method and device for reconstructing visual image by utilizing electroencephalogram signal |
| Zhao et al. | Connectome-scale group-wise consistent resting-state network analysis in autism spectrum disorder |
| Oosterwijk et al. | Shared states: using MVPA to test neural overlap between self-focused emotion imagery and other-focused emotion understanding |
| CN117172294B | Method, system, equipment and storage medium for constructing sparse brain network |
| CN117612710B | Medical diagnosis auxiliary system based on electroencephalogram signals and artificial intelligence classification |
| Akhonda et al. | Consecutive independence and correlation transform for multimodal fusion: Application to EEG and fMRI data |
| Bieniek et al. | Early ERPs to faces and objects are driven by phase, not amplitude spectrum information: evidence from parametric, test-retest, single-subject analyses |
| CN113842152A | Electroencephalogram classification network training method, classification method, equipment and storage medium |
| Kraus et al. | Oscillatory alpha power at rest reveals an independent self: A cross-cultural investigation |
| Zhu et al. | EEG-eye movement based subject dependence, cross-subject, and cross-session emotion recognition with multidimensional homogeneous encoding space alignment |
| CN110148468B | Method and device for dynamic face image reconstruction |
| CN113052800A | Alzheimer disease image analysis method and device |
| Yang et al. | Lateralized functional connectivity of the sensorimotor cortex and its variations during complex visuomotor tasks |
| Wagner et al. | Statistical non-parametric mapping in sensor space |
| Li et al. | Transformer-based spatial-temporal feature learning for P300 |
| Fang et al. | Angular gyrus responses show joint statistical dependence with brain regions selective for different categories |
| CN116649902A | Space-time characteristic mapping system and method for brain electrical signals of schizophrenia |
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | GR01 | Patent grant | |
| | CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20210629 |