CN108009528A - Face authentication method, device, computer equipment and storage medium based on Triplet Loss - Google Patents
- Publication number: CN108009528A
- Application number: CN201711436879.4A
- Authority: CN (China)
- Legal status: Granted (status as listed by Google Patents is an assumption, not a legal conclusion)
Classifications
- G06F18/214: Pattern recognition; generating training patterns; bootstrap methods, e.g. bagging or boosting
- G06F18/22: Pattern recognition; matching criteria, e.g. proximity measures
- G06V40/172: Recognition of human faces; classification, e.g. identification
Abstract
Description
Technical Field
The present invention relates to the field of image processing, and in particular to a face authentication method, device, computer equipment and storage medium based on Triplet Loss.
Background
Face authentication compares a scene photo of a person captured on site with the ID photo from the person's identity documents to determine whether the two show the same person. Its key underlying technology is face recognition.
With the rise of deep learning, face recognition has repeatedly broken through traditional technical bottlenecks, and its performance has improved substantially. Research on deep learning for face recognition falls into two mainstream families of methods: classification learning and metric learning. Classification-based methods compute a classification loss (such as softmax loss, center loss and their variants) on the features extracted by a deep convolutional network to optimize the network. The last layer of such a network is a fully connected classification layer whose number of output nodes must match the total number of classes in the training set. These methods work well when training samples are plentiful, especially when each class has many samples, yielding good training results and generalization. However, when the number of classes reaches the hundreds of thousands or more, the parameter count of the final classification (fully connected) layer grows linearly and becomes enormous, making the network difficult to train.
The other family is metric learning, which organizes training samples into tuples (such as pairs or triplets). Instead of passing through a classification layer after the deep convolutional network, it computes a metric loss between samples (such as contrastive loss or triplet loss) directly on the convolutional feature vectors to optimize the network. Since no classification layer is trained, the number of network parameters is unaffected by growth in the number of classes, and there is no limit on the number of classes in the training set; one only needs to select same-class or different-class samples according to an appropriate strategy to construct suitable tuples. Compared with classification learning, metric learning is better suited to training data with large breadth but little depth (many classes, but few samples per class): by combining samples in different ways, abundant tuple data can be constructed for training. Metric learning also focuses on the relationships within each tuple, which gives it an inherent advantage for yes/no problems such as 1:1 face verification.
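The linear growth of the classification head described above can be made concrete. Assuming an illustrative 512-dimensional feature vector (the patent does not fix this size), the parameter count of the final fully connected layer scales as follows:

```python
def fc_head_params(feature_dim: int, num_classes: int, bias: bool = True) -> int:
    """Parameter count of a final fully connected classification layer."""
    return feature_dim * num_classes + (num_classes if bias else 0)

# With a 512-dim embedding, a 1,000-class head is modest...
small = fc_head_params(512, 1_000)       # 513,000 parameters
# ...but at 500,000 identities the head alone exceeds 256 million parameters,
large = fc_head_params(512, 500_000)     # 256,500,000 parameters
# while a metric-learning model has no classification head at all.
print(small, large)
```

This is why the metric-learning branch of the patent avoids the classification layer entirely.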
In practice, many institutions require real-name registration, for example when opening a bank account, registering a mobile phone number or opening a financial account. Real-name registration traditionally requires the user to bring an ID card to a designated location, where a staff member verifies that the person matches the ID photo before the account can be opened. With the development of Internet technology, more and more institutions offer convenient services that no longer require customers to visit designated branches: from any location, the user uploads an ID card, the image capture device of a mobile terminal takes a scene photo of the user on site, the system performs face authentication, and once authentication passes, the account is opened. However, traditional metric learning methods measure the similarity between samples with the Euclidean distance. Euclidean distance measures the absolute distance between points in space and depends directly on each point's position coordinates, which does not match the distribution of the face feature space and makes face recognition less reliable.
Summary of the Invention
In view of this, to address the low reliability of traditional face authentication methods, it is necessary to provide a face authentication method, device, computer equipment and storage medium based on Triplet Loss.
A face authentication method based on Triplet Loss, comprising:
acquiring, based on a face authentication request, an ID photo and a scene photo of a person;
performing face detection, key point localization and image preprocessing on the scene photo and the ID photo respectively, to obtain a scene face image corresponding to the scene photo and an ID face image corresponding to the ID photo;
inputting the scene face image and the ID face image into a pre-trained convolutional neural network model for face authentication, and obtaining a first feature vector corresponding to the scene face image and a second feature vector corresponding to the ID face image output by the model, wherein the convolutional neural network model is trained under the supervision of a triplet loss function;
computing the cosine distance between the first feature vector and the second feature vector; and
comparing the cosine distance with a preset threshold, and determining the face authentication result according to the comparison.
In one embodiment, the method further comprises:
acquiring labeled training samples, each comprising one ID face image and at least one scene face image labeled as belonging to the same labeled subject;
training a convolutional neural network model on the training samples, and generating the triplet elements corresponding to each training sample through OHEM (online hard example mining), the triplet elements comprising a reference sample, a positive sample and a negative sample;
training the convolutional neural network model on the triplet elements of each training sample under the supervision of the triplet loss function, wherein the triplet loss function uses the cosine distance as its metric and the model parameters are optimized by a stochastic gradient descent algorithm; and
inputting validation set data into the convolutional neural network model, and obtaining the trained convolutional neural network model for face authentication when the training termination condition is reached.
In another embodiment, the step of training the convolutional neural network model on the training samples and generating the triplet elements corresponding to each training sample through OHEM comprises:
randomly selecting an image as the reference sample, and selecting an image that belongs to the same labeled subject but to a different image category than the reference sample as the positive sample; and
according to the OHEM strategy, using the currently trained convolutional neural network model to compute the cosine distances between extracted features, and for each reference sample, selecting from the images not belonging to its labeled subject the image at the smallest distance that belongs to a different image category than the reference sample, as the negative sample for that reference sample.
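A minimal sketch of this hard-negative selection, assuming feature vectors have already been extracted by the current model; under the patent's cosine measure (larger means more similar), the "smallest-distance" negative corresponds to the candidate with the highest cosine similarity to the reference sample:

```python
import numpy as np

def cosine(x: np.ndarray, y: np.ndarray) -> float:
    """Cosine similarity between two feature vectors."""
    return float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y)))

def hardest_negative(anchor: np.ndarray, candidates: np.ndarray) -> int:
    """Index of the candidate most similar to the anchor under the cosine
    measure, i.e. the hardest negative in the OHEM sense."""
    sims = [cosine(anchor, c) for c in candidates]
    return int(np.argmax(sims))

anchor = np.array([1.0, 0.0, 0.0])
negatives = np.array([[0.0, 1.0, 0.0],   # very dissimilar to the anchor
                      [0.9, 0.1, 0.0],   # looks a lot like the anchor
                      [0.3, 0.7, 0.0]])
print(hardest_negative(anchor, negatives))  # 1
```

In training, the candidate pool would be restricted to the opposite image category (scene photos for an ID-photo anchor, and vice versa), per the pairing rule described later in the patent.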
In another embodiment, the triplet loss function includes a constraint on the cosine distance between same-class samples and a constraint on the cosine distance between different-class samples.
In another embodiment, the triplet loss function is:

L = Σ_{i=1}^{N} ( [cos(f_i^a, f_i^n) − cos(f_i^a, f_i^p) + α1]+ + [α2 − cos(f_i^a, f_i^p)]+ )

where cos(·) denotes the cosine distance, computed as cos(x, y) = x·y / (‖x‖ ‖y‖); N is the number of triplets; f_i^a denotes the feature vector of the reference sample, f_i^p the feature vector of the same-class positive sample, and f_i^n the feature vector of the different-class negative sample; [·]+ denotes max(·, 0); α1 is the inter-class margin parameter; and α2 is the intra-class margin parameter.
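The dual-margin loss described above can be sketched in NumPy. This is a reconstruction from the patent's verbal description (an inter-class margin α1 and an intra-class margin α2 on cosine similarities), with illustrative margin values rather than settings given by the patent:

```python
import numpy as np

def cosine(x, y):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y)))

def triplet_loss(anchors, positives, negatives, alpha1=0.2, alpha2=0.8):
    """Dual-margin triplet loss on cosine similarity: penalise a triplet when
    the positive is not at least alpha1 more similar to the anchor than the
    negative (inter-class term), and when the anchor-positive similarity
    falls below alpha2 (intra-class term)."""
    loss = 0.0
    for a, p, n in zip(anchors, positives, negatives):
        inter = max(cosine(a, n) - cosine(a, p) + alpha1, 0.0)  # [.]+ hinge
        intra = max(alpha2 - cosine(a, p), 0.0)                 # [.]+ hinge
        loss += inter + intra
    return loss

# A well-separated triplet incurs no loss; a violating one is penalised.
good = triplet_loss([[1.0, 0.0]], [[1.0, 0.0]], [[0.0, 1.0]])
bad = triplet_loss([[1.0, 0.0]], [[0.0, 1.0]], [[1.0, 0.0]])
print(good, bad)
```

In the patent, the gradient of this loss with respect to the feature vectors drives the stochastic gradient descent updates of the shared network.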
In another embodiment, the method further comprises: initializing with the parameters of a base model trained on a large volume of open-source face data, and adding a normalization layer and a triplet loss function layer after the feature output layer, to obtain the convolutional neural network model to be trained.
A face authentication device based on Triplet Loss, comprising an image acquisition module, an image preprocessing module, a feature acquisition module, a computation module and an authentication module, wherein:
the image acquisition module is configured to acquire an ID photo and a scene photo of a person based on a face authentication request;
the image preprocessing module is configured to perform face detection, key point localization and image preprocessing on the scene photo and the ID photo respectively, to obtain a scene face image corresponding to the scene photo and an ID face image corresponding to the ID photo;
the feature acquisition module is configured to input the scene face image and the ID face image into a pre-trained convolutional neural network model for face authentication, and to obtain the first feature vector corresponding to the scene face image and the second feature vector corresponding to the ID face image output by the model, wherein the convolutional neural network model is trained under the supervision of a triplet loss function;
the computation module is configured to compute the cosine distance between the first feature vector and the second feature vector; and
the authentication module is configured to compare the cosine distance with a preset threshold and determine the face authentication result according to the comparison.
In another embodiment, the device further comprises a sample acquisition module, a triplet acquisition module, a training module and a validation module, wherein:
the sample acquisition module is configured to acquire labeled training samples, each comprising one ID face image and at least one scene face image labeled as belonging to the same labeled subject;
the triplet acquisition module is configured to train a convolutional neural network model on the training samples and to generate the triplet elements corresponding to each training sample through OHEM, the triplet elements comprising a reference sample, a positive sample and a negative sample;
the training module is configured to train the convolutional neural network model on the triplet elements of each training sample under the supervision of the triplet loss function, wherein the triplet loss function uses the cosine distance as its metric and the model parameters are optimized by a stochastic gradient descent algorithm; and
the validation module is configured to input validation set data into the convolutional neural network model and to obtain the trained convolutional neural network model for face authentication when the training termination condition is reached.
A computer device, comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the above face authentication method based on Triplet Loss.
A storage medium storing a computer program which, when executed by a processor, implements the steps of the above face authentication method based on Triplet Loss.
In the face authentication method, device, computer equipment and storage medium based on Triplet Loss of the present invention, face authentication is performed with a pre-trained convolutional neural network. Because the convolutional neural network model is trained under the supervision of a triplet loss function, and the similarity between the scene face image and the ID face image is computed as the cosine distance between the first feature vector (of the scene face image) and the second feature vector (of the ID face image), the measure captures the angle between the feature vectors, reflecting differences in direction. This better matches the distribution of the face feature space and improves the reliability of face authentication.
Brief Description of the Drawings
Fig. 1 is a schematic structural diagram of a face authentication system based on Triplet Loss according to an embodiment;
Fig. 2 is a flowchart of a face authentication method based on Triplet Loss in an embodiment;
Fig. 3 is a flowchart of the steps of training a convolutional neural network model for face authentication in an embodiment;
Fig. 4 is a schematic diagram of the probability of sample misclassification when the inter-class margins are equal and the intra-class variance is large;
Fig. 5 is a schematic diagram of the probability of sample misclassification when the inter-class margins are equal and the intra-class variance is small;
Fig. 6 is a schematic diagram of the transfer learning process for face authentication based on Triplet Loss in an embodiment;
Fig. 7 is a schematic structural diagram of a convolutional neural network model for face authentication in an embodiment;
Fig. 8 is a schematic flowchart of a face authentication method based on Triplet Loss in an embodiment;
Fig. 9 is a structural block diagram of a face authentication device based on Triplet Loss in an embodiment;
Fig. 10 is a structural block diagram of a face authentication device based on Triplet Loss in another embodiment.
Detailed Description
Fig. 1 is a schematic structural diagram of a face authentication system based on Triplet Loss according to an embodiment. As shown in Fig. 1, the face authentication system includes a server 101 and an image capture device 102, connected over a network. The image capture device 102 captures a real-time scene photo of the user to be authenticated as well as an ID photo, and sends both to the server 101. The server 101 determines whether the person in the scene photo and the person in the ID photo are the same, thereby authenticating the identity of the user. Depending on the application scenario, the image capture device 102 may be a camera or a user terminal with a camera function: at an in-person account-opening site it may be a camera, while for opening a financial account over the Internet it may be a mobile terminal with a camera function.
In other embodiments, the face authentication system may further include a card reader for reading the ID photo stored in the chip of a certificate (such as an identity card).
Fig. 2 is a flowchart of a face authentication method based on Triplet Loss in an embodiment. As shown in Fig. 2, the method includes:
S202: based on a face authentication request, acquire an ID photo and a scene photo of a person.
The ID photo is a photo from a document that proves the person's identity, for example the photo printed on an identity card or stored in its chip. It may be obtained by photographing the document, or by reading the photo stored in the document's chip with a card reader. The document in this embodiment may be an identity card, a driver's license, a social security card, or the like.
The scene photo is a photo of the user to be authenticated in the on-site environment, captured at authentication time. The on-site environment is simply wherever the user is when the photo is taken, and is not restricted. The scene photo may be captured with a mobile terminal that has a camera function and sent to the server.
Face authentication compares the scene photo of the person captured on site with the ID photo from the identity documents to determine whether they show the same person. The face authentication request is triggered by an actual application operation; for example, a user's account-opening request triggers a face authentication request. The application prompts the user on the terminal's display to capture photos and, once capture is complete, sends them to the server for face authentication.
S204: perform face detection, key point localization and image preprocessing on the scene photo and the ID photo respectively, to obtain a scene face image corresponding to the scene photo and an ID face image corresponding to the ID photo.
Face detection identifies the photo and locates the face region in it.
Key point localization finds, within each detected face region, the positions of the facial key points in each photo. Facial key points include the eyes, the tip of the nose, the corners of the mouth, the eyebrows, and the contour points of the facial parts.
In this embodiment, the cascaded convolutional neural network MTCNN method based on multi-task joint learning may be used to perform face detection and facial key point detection simultaneously; alternatively, a face detection method based on LBP features and a facial key point detection method based on shape regression may be used.
Image preprocessing aligns and crops the face according to the positions of the detected facial key points in each picture, yielding size-normalized scene face images and ID face images. The scene face image is the face image obtained after face detection, key point localization and image preprocessing of the scene photo, and the ID face image is the face image obtained after the same processing of the ID photo.
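As an illustration of the size-normalization part of this step, a minimal crop-and-resize sketch (nearest-neighbour resampling, with a 112×112 output size chosen arbitrarily; a real alignment step would also warp the image according to the detected key points):

```python
import numpy as np

def crop_and_resize(img: np.ndarray, box, out_size=(112, 112)) -> np.ndarray:
    """Crop a face box (x0, y0, x1, y1) from an H x W x C image and
    nearest-neighbour resize it to a normalized output size."""
    x0, y0, x1, y1 = box
    face = img[y0:y1, x0:x1]
    h, w = face.shape[:2]
    rows = np.arange(out_size[0]) * h // out_size[0]  # source row per output row
    cols = np.arange(out_size[1]) * w // out_size[1]  # source col per output col
    return face[rows][:, cols]

img = np.zeros((200, 200, 3), dtype=np.uint8)         # stand-in for a photo
aligned = crop_and_resize(img, (10, 10, 110, 110))    # detected face box
print(aligned.shape)  # (112, 112, 3)
```

Both the scene photo and the ID photo would pass through the same normalization so that the CNN receives inputs of a fixed size.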
S206: input the scene face image and the ID face image into the pre-trained convolutional neural network model for face authentication, and obtain the first feature vector corresponding to the scene face image and the second feature vector corresponding to the ID face image output by the model.
The convolutional neural network model is trained in advance on training samples under the supervision of the triplet loss function. The convolutional neural network includes convolutional layers, pooling layers, activation function layers and fully connected layers, and the parameters of the neurons in each layer are determined through training. With the trained network, a forward pass yields the first feature vector of the scene face image and the second feature vector of the ID face image from the model's fully connected output layer.
A triplet is formed by randomly selecting a sample from the training set as the reference sample, then randomly selecting a sample of the same person as the positive sample and a sample of a different person as the negative sample, giving a (reference, positive, negative) triplet. Since person-to-ID comparison is mainly between an ID photo and a scene photo, rather than ID-to-ID or scene-to-scene, triplets come in two patterns: when the reference sample is an ID photo, both the positive and the negative samples are scene photos; when the reference sample is a scene photo, both the positive and the negative samples are ID photos.
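The two triplet patterns can be sketched as follows, using a hypothetical per-person data layout (the file names are placeholders, not data from the patent):

```python
import random

# Hypothetical dataset layout: one ID photo and several scene photos per person.
dataset = {
    "person_a": {"id": "a_id.jpg", "scene": ["a_s1.jpg", "a_s2.jpg"]},
    "person_b": {"id": "b_id.jpg", "scene": ["b_s1.jpg"]},
    "person_c": {"id": "c_id.jpg", "scene": ["c_s1.jpg", "c_s2.jpg"]},
}

def make_triplet(person: str, anchor_is_id: bool, rng=random):
    """Build one (anchor, positive, negative) triplet obeying the pairing rule:
    an ID-photo anchor gets scene photos as positive and negative, while a
    scene-photo anchor gets ID photos."""
    neg_person = rng.choice([p for p in dataset if p != person])
    if anchor_is_id:
        anchor = dataset[person]["id"]
        positive = rng.choice(dataset[person]["scene"])
        negative = rng.choice(dataset[neg_person]["scene"])
    else:
        anchor = rng.choice(dataset[person]["scene"])
        positive = dataset[person]["id"]
        negative = dataset[neg_person]["id"]
    return anchor, positive, negative

a, p, n = make_triplet("person_a", anchor_is_id=True)
```

In the patent's full pipeline, the negative would additionally be chosen by the OHEM strategy (hardest negative) rather than uniformly at random as here.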
For each sample in the triplet, a parameter-sharing network is trained to obtain the feature representations of the three elements. The aim of the improved triplet loss is to learn representations such that the distance between the feature representations of the reference and positive samples is as small as possible, the distance between those of the reference and negative samples is as large as possible, and there is a minimum margin between the two distances.
S208: compute the cosine distance between the first feature vector and the second feature vector.
The cosine distance, also called cosine similarity, uses the cosine of the angle between two vectors in a vector space to measure how much two individuals differ. The larger the cosine distance between the first and second feature vectors, the more similar the scene face image and the ID face image; the smaller it is, the less similar they are. The closer the cosine distance between the two images is to 1, the more likely they belong to the same person; the smaller it is, the less likely they do.
Traditional triplet loss uses the Euclidean distance to measure similarity between samples. Euclidean distance measures the absolute distance between points in space and depends directly on each point's position coordinates, which does not match the distribution of the face feature space. In this embodiment, considering the distribution of the face feature space and the practical application scenario, the cosine distance is used to measure similarity between samples. The cosine distance measures the angle between feature vectors, reflecting differences in direction rather than position, and therefore better matches the distribution of the face feature space.
Specifically, the cosine distance is computed as cos(x, y) = x·y / (‖x‖ ‖y‖), where x denotes the first feature vector and y denotes the second feature vector.
S210: compare the cosine distance with a preset threshold, and determine the face authentication result according to the comparison.
The authentication result is either a pass, meaning the ID photo and the scene photo belong to the same person, or a failure, meaning they do not.
Specifically, the cosine distance is compared with the preset threshold. When the cosine distance is greater than the preset threshold, the similarity between the ID photo and the scene photo exceeds the threshold and authentication succeeds; when the cosine distance is smaller than the preset threshold, the similarity falls below the threshold and authentication fails.
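Steps S208 and S210 can be sketched together; the 0.5 threshold and the toy vectors are illustrative values standing in for the CNN's feature vectors, not values specified by the patent:

```python
import numpy as np

def cosine_similarity(x: np.ndarray, y: np.ndarray) -> float:
    """cos(x, y) = x.y / (|x| |y|), the similarity measure used here."""
    return float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y)))

def authenticate(first_vec: np.ndarray, second_vec: np.ndarray,
                 threshold: float = 0.5) -> bool:
    """Pass authentication when the cosine similarity of the two feature
    vectors exceeds the preset threshold."""
    return cosine_similarity(first_vec, second_vec) > threshold

# Toy stand-ins for the first (scene) and second (ID) feature vectors.
print(authenticate(np.array([1.0, 0.2]), np.array([0.9, 0.3])))  # True
print(authenticate(np.array([1.0, 0.0]), np.array([0.0, 1.0])))  # False
```

In deployment, the threshold would be calibrated on a validation set to trade off false accepts against false rejects.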
In the Triplet Loss based face authentication method above, a pre-trained convolutional neural network performs the authentication. The convolutional neural network model is obtained through supervised training with a triplet loss function, and the similarity between the scene face image and the ID face image is computed as the cosine distance between the first feature vector (from the scene face image) and the second feature vector (from the ID face image). Because cosine distance measures the angle between space vectors and reflects differences in direction, it better matches the distribution of the face feature space and improves the reliability of face authentication.
In another embodiment, the face authentication method further includes a step of training the convolutional neural network model used for face authentication. Fig. 3 is a flowchart of this training step in one embodiment. As shown in Fig. 3, the step includes:
S302: Obtain labeled training samples. The training samples include one ID face image and at least one scene face image labeled as belonging to each labeled subject.

In this embodiment, a labeled subject is a person; the training samples are organized per person, with the scene face images and ID face image of the same person labeled accordingly. Specifically, the scene face images and ID face images can be obtained by performing face detection, key-point localization, and image preprocessing on the labeled scene photos and ID photos.
Face detection refers to recognizing a photo and locating the face region within it.

Key-point localization refers to obtaining, for the face region detected in each photo, the positions of the facial key points. Facial key points include the eyes, nose tip, mouth corners, eyebrows, and the contour points of the facial parts.

In this embodiment, the MTCNN method (a cascaded convolutional neural network based on multi-task joint learning) can perform face detection and facial key-point detection simultaneously; alternatively, an LBP-feature-based face detection method combined with a shape-regression-based key-point detection method can be used.

Image preprocessing refers to aligning and cropping the face according to the positions of the detected facial key points in each picture, yielding size-normalized scene face images and ID face images. A scene face image is the face image obtained after face detection, key-point localization, and image preprocessing of a scene photo; an ID face image is the face image obtained after the same processing of an ID photo.
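The crop-and-normalize step can be illustrated with a minimal NumPy sketch. The function name, the margin heuristic, and the nearest-neighbor resize are simplifying assumptions for illustration; a real pipeline would also align the face with a similarity transform before cropping:

```python
import numpy as np

def preprocess_face(image, landmarks, out_size=112, margin=0.3):
    # Crop the face around its detected key points and resize to a
    # size-normalized square.  `landmarks` is an (K, 2) array of
    # (x, y) key-point coordinates; `margin` expands the tight
    # landmark bounding box (both values are illustrative).
    xs, ys = landmarks[:, 0], landmarks[:, 1]
    w, h = xs.max() - xs.min(), ys.max() - ys.min()
    x0 = max(int(xs.min() - margin * w), 0)
    y0 = max(int(ys.min() - margin * h), 0)
    x1 = min(int(xs.max() + margin * w), image.shape[1])
    y1 = min(int(ys.max() + margin * h), image.shape[0])
    crop = image[y0:y1, x0:x1]
    # Nearest-neighbor resize to out_size x out_size.
    ri = (np.arange(out_size) * crop.shape[0] / out_size).astype(int)
    ci = (np.arange(out_size) * crop.shape[1] / out_size).astype(int)
    return crop[ri][:, ci]
```

The same preprocessing is applied to scene photos and ID photos, so both kinds of face image enter the network with identical dimensions.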
S304: Train the convolutional neural network model on the training samples, and generate the triplet elements corresponding to each training sample through OHEM; the triplet elements include a reference sample, a positive sample, and a negative sample.

There are two ways to form a triplet: when an ID photo image is the reference sample, both the positive and negative samples are scene photo images; when a scene photo image is the reference sample, both the positive and negative samples are ID photo images.

Specifically, taking an ID photo as the reference image as an example, an ID photo sample of one person is randomly selected from the training data set and called the reference sample; a scene photo sample of the same person is then randomly selected as the positive sample, and a scene photo sample of a different person is selected as the negative sample, forming a (reference sample, positive sample, negative sample) triplet.

That is, the positive sample is of the same class as the reference sample (an image of the same person), while the negative sample is of a different class (an image of a different person). The reference and positive samples of each triplet are already labeled in the training samples; the negative sample is constructed online during training with the OHEM (Online Hard Example Mining) strategy: in each iteration of network optimization, the current network performs a forward pass over the candidate triplets and selects, as the negative sample, the training image that does not belong to the same person as the reference sample and is nearest to it in cosine distance, thereby producing the triplet elements for each training sample.
In one embodiment, the step of training the convolutional neural network on the training samples and generating the triplet elements for each training sample includes the following steps S1 and S2:

S1: Randomly select an image as the reference sample, and select an image that belongs to the same labeled subject but to a different category than the reference sample as the positive sample.

Category refers to the image type; in this embodiment the categories of the training samples are scene face image and ID face image. Because face authentication is primarily a comparison between an ID photo and a scene photo, the reference and positive samples should belong to different categories: if the reference sample is a scene face image, the positive sample is an ID face image; if the reference sample is an ID face image, the positive sample is a scene face image.

S2: According to the OHEM strategy, use the currently trained convolutional neural network model to compute the cosine distances between extracted features; for each reference sample, select from the images that do not belong to the same labeled subject the one with the smallest distance and a different category from the reference sample, as the negative sample of that reference sample.

The negative sample is selected from face images labeled as a different person than the reference sample. Specifically, negatives are constructed online during training with the OHEM strategy: in each iteration of network optimization, the current network performs a forward pass over candidate triplets and selects, as the negative sample, the training image that belongs to a different person than the reference sample, is nearest to it in cosine distance, and belongs to a different category. That is, the negative sample's category differs from the reference sample's. Accordingly, if an ID photo is the reference sample of a triplet, both the positive and negative samples are scene photos; conversely, if a scene photo is the reference sample, both the positive and negative samples are ID photos.
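The hard-negative selection described above can be sketched as follows, assuming features have already been extracted by the current network. The helper names and the `(person_id, feature)` candidate format are illustrative, not part of this embodiment:

```python
import numpy as np

def cos_sim(a, b):
    # Cosine similarity between two feature vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def mine_hard_negative(anchor_feat, anchor_id, candidates):
    # OHEM-style selection: among candidate features whose person id
    # differs from the anchor's, return the index of the one most
    # similar to the anchor under cosine similarity (the "nearest"
    # non-matching image, i.e. the hardest negative).
    best_idx, best_sim = -1, -np.inf
    for i, (pid, feat) in enumerate(candidates):
        if pid == anchor_id:
            continue  # negatives must come from a different person
        sim = cos_sim(anchor_feat, feat)
        if sim > best_sim:  # most similar non-match = hardest negative
            best_idx, best_sim = i, sim
    return best_idx
```

In the full method, the candidate pool would additionally be restricted to the opposite category of the anchor (scene photos for an ID anchor, and vice versa).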
S306: Based on the triplet elements of each training sample, train the convolutional neural network model under the supervision of the triplet loss function. This triplet loss function uses cosine distance as its metric, and the model parameters are optimized with the stochastic gradient descent algorithm.

The identity verification terminal verifies a user's identity by checking whether the photo stored in the chip of the user's ID document matches the scene photo. In the data collected in the background, a single person typically has only two images, the ID photo and the scene photo captured at the moment of comparison, while the number of distinct individuals can run to tens of thousands. If data with this many classes but few samples per class were trained with a classification-based method, the classification layer would have far too many parameters and the network would be very difficult to learn, so metric learning is used instead. The typical metric learning approach is the triplet loss method, which constructs image triplets to learn an effective feature mapping under which the feature distance between same-class samples is smaller than that between different-class samples, achieving correct comparison.

The goal of triplet loss is to learn a representation in which the distance between the feature representations of the reference sample and the positive sample is as small as possible, the distance between the feature representations of the reference sample and the negative sample is as large as possible, and a minimum margin separates these two distances.
In another embodiment, the triplet loss function includes a constraint on the cosine distance between same-class samples and a constraint on the cosine distance between different-class samples.

Here, the same-class samples are the reference sample and the positive sample, and the different-class samples are the reference sample and the negative sample. The cosine distance between same-class samples is that between the reference and positive samples; the cosine distance between different-class samples is that between the reference and negative samples.

On the one hand, the original triplet loss method considers only the inter-class gap and not the intra-class gap; if the intra-class distribution is not sufficiently compact, the network's generalization ability weakens and its adaptability to the scene decreases accordingly. On the other hand, the original triplet loss method measures similarity between samples with Euclidean distance, whereas in practice, once a face model is deployed, cosine distance is more commonly used in the feature comparison stage. Euclidean distance measures the absolute distance between points in space and is directly tied to their position coordinates; cosine distance measures the angle between space vectors, reflecting differences in direction rather than position, and thus better matches the distribution of the face feature space.
The triplet loss method performs iterative optimization by constructing triplet data online, feeding it into the network, and back-propagating the triplet metric loss. Each triplet contains three images: a reference sample, a positive sample of the same class as the reference, and a negative sample of a different class, denoted (anchor, positive, negative). The basic idea of the original triplet loss is to use metric learning to make the distance between the reference and positive samples smaller than the distance between the reference and negative samples, with the difference greater than a minimum margin parameter α. The original triplet loss function is therefore:
L = Σ_{i=1}^{N} [ ‖f(x_i^a) − f(x_i^p)‖₂² − ‖f(x_i^a) − f(x_i^n)‖₂² + α ]₊

where N is the number of triplets; f(x_i^a) denotes the feature vector of the reference sample (anchor), f(x_i^p) the feature vector of the same-class positive sample, and f(x_i^n) the feature vector of the different-class negative sample; ‖·‖₂ denotes the L2 norm, i.e. Euclidean distance; and [z]₊ = max(z, 0).

As the formula shows, the original triplet loss only constrains the distances between the same-class pair (anchor, positive) and the different-class pair (anchor, negative): the margin parameter α enlarges the inter-class distance as much as possible, but the intra-class distance, i.e. the distance between same-class samples, is left unconstrained. If the intra-class distances are scattered and their variance too large, the network's generalization ability weakens and samples are more likely to be misclassified. Fig. 4 illustrates the probability of sample misclassification when the inter-class margin is the same and the intra-class variance is large, and Fig. 5 when the inter-class margin is the same and the intra-class variance is small. As shown in Figs. 4 and 5, where the shaded area represents the misclassification probability, with the same inter-class margin the misclassification probability under large intra-class variance is clearly greater than under small intra-class variance.
To address these problems, the present invention proposes an improved triplet loss method, which retains the original method's constraint on the inter-class distance while adding a constraint term on the intra-class distance, making the intra-class distribution as compact as possible. Its loss function is:

L = Σ_{i=1}^{N} { [ cos(f(x_i^a), f(x_i^n)) − cos(f(x_i^a), f(x_i^p)) + α1 ]₊ + [ α2 − cos(f(x_i^a), f(x_i^p)) ]₊ }

where cos(·,·) denotes the cosine distance, computed as cos(x, y) = (x · y) / (‖x‖ ‖y‖); N is the number of triplets; f(x_i^a), f(x_i^p), and f(x_i^n) denote the feature vectors of the reference sample, the same-class positive sample, and the different-class negative sample respectively; [z]₊ = max(z, 0); α1 is the inter-class margin parameter and α2 is the intra-class margin parameter.

Compared with the original triplet loss function, the improved function changes the metric from Euclidean distance to cosine distance, which keeps the metric consistent between the training stage and the deployment stage and improves the continuity of feature learning. The first term plays the same role as the original triplet loss, enlarging the inter-class gap; the second term adds a distance constraint on the same-class pair (the positive pair) to narrow the intra-class gap. α1 is the inter-class margin parameter, with a value range of 0 to 0.2; α2 is the intra-class margin parameter, with a value range of 0.8 to 1.0. Note that because the cosine metric yields a similarity between two samples, only triplets whose negative-pair cosine similarity exceeds the positive-pair cosine similarity minus α1 actually contribute to the first term and thus truly participate in training.
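A minimal sketch of the improved loss value for a batch of triplets, assuming the feature vectors are given as NumPy arrays. Only the forward loss is shown; gradient computation and SGD are left to the training framework:

```python
import numpy as np

def cos_sim(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def improved_triplet_loss(anchors, positives, negatives,
                          alpha1=0.2, alpha2=0.8):
    # Sum over N triplets (one entry per list) of:
    #   [cos(a, n) - cos(a, p) + alpha1]_+   (enlarge inter-class gap)
    #   [alpha2 - cos(a, p)]_+               (compact intra-class gap)
    # alpha1 in 0..0.2 and alpha2 in 0.8..1.0 per the text; the exact
    # values used here are illustrative.
    loss = 0.0
    for a, p, n in zip(anchors, positives, negatives):
        pos_sim = cos_sim(a, p)
        neg_sim = cos_sim(a, n)
        loss += max(neg_sim - pos_sim + alpha1, 0.0)  # inter-class term
        loss += max(alpha2 - pos_sim, 0.0)            # intra-class term
    return loss
```

An easy triplet (positive identical to the anchor, negative orthogonal) incurs zero loss, while a hard triplet whose negative is as similar as its positive is penalized by the α1 margin.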
Training the model with the improved triplet loss function optimizes it through back-propagation under the joint constraint of the inter-class loss and the intra-class loss, so that same-class samples lie as close as possible in the feature space while different-class samples lie as far apart as possible. This improves the model's discriminative power and thus the reliability of face authentication.

S308: Feed the validation set into the convolutional neural network; when the training end condition is reached, the trained convolutional neural network for face authentication is obtained.

Specifically, 90% of the data in the ID-and-scene image data pool is taken as the training set and the remaining 10% as the validation set. The improved triplet loss value is computed from the formula above and fed back into the convolutional neural network for iterative optimization. Meanwhile, the model's performance on the validation set is monitored; when the validation performance no longer improves, the model has converged and the training phase ends.
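The 90/10 split and the stop-when-validation-stalls rule can be sketched as follows; `train_step`, `validate`, and the `patience` parameter are illustrative placeholders for one epoch of triplet-loss SGD and a validation-performance evaluation:

```python
import random

def train_with_early_stopping(examples, train_step, validate,
                              patience=3, max_epochs=100):
    # Split the data pool 90/10 into training and validation sets,
    # then train until validation performance stops improving.
    random.shuffle(examples)
    split = int(0.9 * len(examples))
    train_set, val_set = examples[:split], examples[split:]
    best, stale = -float("inf"), 0
    for epoch in range(max_epochs):
        train_step(train_set)       # one epoch of triplet-loss SGD
        score = validate(val_set)   # e.g. verification accuracy
        if score > best:
            best, stale = score, 0  # validation still improving
        else:
            stale += 1              # no improvement this epoch
            if stale >= patience:   # converged: end the training phase
                break
    return best
```

The `patience` counter approximates "validation performance no longer improves" without stopping on a single noisy epoch.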
In the face authentication method above, on the one hand, a constraint on intra-class sample distances is added to the original triplet loss function, which enlarges the inter-class gap while shrinking the intra-class gap and improves the model's generalization ability; on the other hand, the metric of the original triplet loss is changed from Euclidean distance to cosine distance, which keeps the training and deployment metrics consistent and improves the continuity of feature learning.
In another embodiment, the step of training the convolutional neural network further includes: initializing the network with the parameters of a base model pre-trained on massive open-source face data, and adding a normalization layer and the improved triplet loss function layer after the feature output layer to obtain the convolutional neural network to be trained.

Specifically, when deep learning is applied to ID-versus-person verification, a conventional deep face recognition model trained on massive Internet face data suffers a sharp performance drop on ID-to-scene comparison in specific scenarios, while the sources of ID-and-scene data for a given application scenario are limited; direct training often yields unsatisfactory results because of insufficient samples. There is therefore a strong need for an effective method of extended training on small scene-specific data sets, to raise the accuracy of the face recognition model in the target application scenario and meet market demands.

Deep learning algorithms typically depend on training with massive data. In ID-versus-person verification, comparing an ID photo with a scene photo is a heterogeneous-sample comparison problem, and a conventional deep face recognition model trained on massive Internet face data performs much worse on it. However, ID-and-scene data sources are limited (the ID card image and the corresponding scene image of the same person are both required), so the amount of data available for training is small, and direct training gives unsatisfactory results due to insufficient samples. Consequently, when training such a model with deep learning, the idea of transfer learning is usually adopted: first train, on massive Internet face data, a base model with reliable performance on open-source test sets, then perform a second, extended training on the limited ID-and-scene data so that the model automatically learns the feature representations of the specific modality and its performance improves. This process is shown in Fig. 6.

During the second training, the entire network is initialized with the pre-trained base model parameters, and an L2 normalization layer and the improved triplet loss layer are added after the network's feature output layer. The structure of the convolutional neural network to be trained is shown in Fig. 7.
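The L2 normalization layer added after the feature output layer can be sketched as follows; with unit-length features, the cosine distance used by the loss reduces to a plain dot product:

```python
import numpy as np

def l2_normalize(features, eps=1e-12):
    # L2 normalization layer: scale each feature vector (one per row)
    # to unit length.  `eps` guards against division by zero and is an
    # implementation detail, not part of the patent text.
    norms = np.linalg.norm(features, axis=-1, keepdims=True)
    return features / np.maximum(norms, eps)
```

Applying this to the feature output before the improved triplet loss keeps the learned embedding on the unit hypersphere, matching the cosine metric used at deployment.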
In one embodiment, a schematic flowchart of a face authentication method is shown in Fig. 8; it comprises three stages: a data collection and preprocessing stage, a training stage, and a deployment stage.

In the data collection and preprocessing stage, the card reader module of the identity verification terminal reads the chip photo of the certificate and the front camera captures an on-site photo; after the face detector, key-point detector, and face alignment and cropping modules, size-normalized ID face images and scene face images are obtained.

In the training stage, 90% of the data in the ID-and-scene image data pool is taken as the training set and the remaining 10% as the validation set. Since ID-versus-person comparison is primarily between ID photos and scene photos, if an ID photo is the reference image (anchor) of a triplet, the other two images are both scene photos; conversely, if a scene photo is the reference image, the other two images are both ID photos. The OHEM strategy constructs triplets online: in each iteration of network optimization, the current network performs a forward pass over candidate triplets to screen out the valid triplets that satisfy the conditions, the improved triplet loss value is computed from the formula above, and it is fed back into the network for iterative optimization. Meanwhile, the model's performance on the validation set is monitored; when the validation performance no longer improves, the model has converged and the training phase ends.

In the deployment stage, the trained model is deployed to the identity verification terminal. Images collected by the device pass through the same preprocessing procedure as in the training stage, the feature vector of each face image is obtained by a forward pass through the network, and the similarity of the two images is obtained by computing the cosine distance. A decision is then made against the preset threshold: above the threshold, the two images are judged to belong to the same person; otherwise, to different people.
Whereas the original triplet loss function constrains only the learning of inter-class distances, the face authentication method above improves the loss function by adding an intra-class distance constraint term, allowing the network to enlarge the inter-class gap during training while shrinking the intra-class gap as much as possible, which improves the network's generalization ability and hence the model's adaptability to the scene. In addition, replacing the Euclidean distance metric of the original triplet loss with cosine distance better matches the distribution of the face feature space and keeps the training-stage and deployment-stage metrics consistent, making the comparison results more reliable.
In one embodiment, a face authentication device is provided. As shown in Fig. 9, it includes: an image acquisition module 902, an image preprocessing module 904, a feature acquisition module 906, a calculation module 908, and an authentication module 910.

The image acquisition module 902 is configured to acquire, based on a face authentication request, an ID photo and a scene photo of the person.

The image preprocessing module 904 is configured to perform face detection, key-point localization, and image preprocessing on the scene photo and the ID photo respectively, obtaining a scene face image corresponding to the scene photo and an ID face image corresponding to the ID photo.

The feature acquisition module 906 is configured to input the scene face image and the ID face image into the pre-trained convolutional neural network model for face authentication, and to obtain from the model's output a first feature vector corresponding to the scene face image and a second feature vector corresponding to the ID face image; the convolutional neural network model is obtained through supervised training with a triplet loss function.

The calculation module 908 is configured to calculate the cosine distance between the first feature vector and the second feature vector.

The authentication module 910 is configured to compare the cosine distance with a preset threshold and to determine the face authentication result according to the comparison.

The face authentication device above uses a pre-trained convolutional neural network for face authentication. The convolutional neural network model is obtained through supervised training with the improved triplet loss function, and the similarity between the scene face image and the ID face image is computed as the cosine distance between their corresponding first and second feature vectors. Because cosine distance measures the angle between space vectors, reflecting differences in direction rather than position, the device better matches the distribution of the face feature space and improves the reliability of face authentication.
如图9所示,在另一个实施例中,人脸认证装置还包括:样本获取模块912、三元组获取模块914、训练模块916和验证模块918。As shown in FIG. 9 , in another embodiment, the face authentication device further includes: a sample acquisition module 912 , a triplet acquisition module 914 , a training module 916 and a verification module 918 .
样本获取模块912,用于获取带标记的训练样本,所述训练样本包括标记了属于每个标记对象的一张证件人脸图像和至少一张场景人脸图像。The sample obtaining module 912 is configured to obtain marked training samples, where the training samples include a certificate face image and at least one scene face image marked with each marked object.
三元组获取模块914,用于根据训练样本训练卷积神经网络模型,通过OHEM产生各训练样本对应的三元组元素;三元组元素包括参考样本、正样本和负样本。The triplet acquisition module 914 is used to train the convolutional neural network model according to the training samples, and generate triplet elements corresponding to each training sample through OHEM; the triplet elements include reference samples, positive samples and negative samples.
具体地,三元组获取模块914,用于随机选择一个图像作为参考样本,选择属于同一标签对象、与参考样本类别不同的图像作为正样本,还用于根据OHEM策略,利用当前训练的卷积神经网络模型提取特征之间的余弦距离,对于每一个参考样本,从其它具有不属于同一标签对象的人脸图像中,选择距离最小、与参考样本属于不同类别的图像,作为该参考样本的负样本。Specifically, the triplet acquisition module 914 is used to randomly select an image as a reference sample, select an image that belongs to the same label object and is different from the reference sample category as a positive sample, and is also used to use the currently trained convolution The neural network model extracts the cosine distance between features. For each reference sample, from other face images that do not belong to the same label object, select the image with the smallest distance and belonging to a different category from the reference sample as the negative of the reference sample. sample.
Specifically, when an ID photo serves as the anchor sample, both the positive and negative samples are scene photos; when a scene photo serves as the anchor sample, both the positive and negative samples are ID photos.
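The OHEM negative-selection rule above can be sketched as follows (a minimal pure-Python illustration; function names are hypothetical, and cosine distance is taken here as 1 minus cosine similarity):

```python
import math

def cosine_distance(u, v):
    # Cosine distance assumed as 1 - cosine similarity.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (nu * nv)

def hardest_negative(anchor, candidates):
    # OHEM-style selection: among feature vectors of *other* subjects
    # (in the opposite category per the rule above), the hardest
    # negative is the one with the smallest cosine distance to the
    # anchor, i.e. the most confusable different-identity image.
    return min(candidates, key=lambda f: cosine_distance(anchor, f))
```

In practice the candidate pool would be restricted per the rule above (scene photos for an ID-photo anchor and vice versa); that filtering is omitted here for brevity.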
The training module 916 is configured to train the convolutional neural network model under the supervision of the triplet loss function, using the triplet elements of each training sample. This triplet loss function uses cosine distance as its metric, and the model parameters are optimized by the stochastic gradient descent algorithm.
Specifically, the improved triplet loss function imposes a constraint on the cosine distance between samples of the same class, as well as a constraint on the cosine distance between samples of different classes.
The improved triplet loss function is:

$$L = \sum_{i=1}^{N}\left\{\left[\cos\left(f_i^a, f_i^p\right) - \cos\left(f_i^a, f_i^n\right) + \alpha_1\right]_+ + \left[\cos\left(f_i^a, f_i^p\right) - \alpha_2\right]_+\right\}$$

where $\cos(\cdot)$ denotes the cosine distance, computed as $\cos(x, y) = 1 - \frac{x \cdot y}{\|x\|\,\|y\|}$; $N$ is the number of triplets; $f_i^a$ denotes the feature vector of the anchor sample, $f_i^p$ the feature vector of the same-class positive sample, and $f_i^n$ the feature vector of the different-class negative sample; $[\cdot]_+$ is defined as $[x]_+ = \max(x, 0)$; $\alpha_1$ is the inter-class margin parameter, and $\alpha_2$ is the intra-class margin parameter.
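The loss described above can be evaluated numerically as follows (a minimal pure-Python sketch, assuming the hinge form with inter-class margin α1 and intra-class cap α2 described in the text, with cosine distance taken as 1 minus cosine similarity; the margin values are illustrative, not the patent's):

```python
import math

def cosine_distance(u, v):
    # Cosine distance assumed as 1 - cosine similarity.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (nu * nv)

def improved_triplet_loss(triplets, alpha1=0.4, alpha2=0.2):
    # triplets: list of (anchor, positive, negative) feature vectors.
    # The inter-class term pushes d(a, n) beyond d(a, p) by at least
    # alpha1; the extra intra-class term additionally caps d(a, p)
    # at alpha2, shrinking the within-class spread.
    loss = 0.0
    for a, p, n in triplets:
        d_ap = cosine_distance(a, p)
        d_an = cosine_distance(a, n)
        loss += max(d_ap - d_an + alpha1, 0.0)  # inter-class margin
        loss += max(d_ap - alpha2, 0.0)         # intra-class constraint
    return loss / len(triplets)
```

A well-separated triplet (positive aligned with the anchor, negative orthogonal to it) contributes zero loss, while a swapped triplet is penalized by both terms.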
The verification module 918 is configured to feed the validation set data into the convolutional neural network model and, when the training termination condition is reached, to obtain the trained convolutional neural network model for face authentication.
In another embodiment, the face authentication device further includes a model initialization module 920, configured to initialize the network with the parameters of a base model pre-trained on large-scale open-source face data, and to add a normalization layer and a triplet loss function layer after the feature output layer, yielding the convolutional neural network to be trained. The face authentication device described above, on the one hand, adds a constraint on the intra-class sample distance to the loss function of the original triplet loss, thereby enlarging the inter-class gap while reducing the intra-class gap and improving the generalization ability of the model; on the other hand, it changes the metric of the original triplet loss from Euclidean distance to cosine distance, maintaining metric consistency between training and deployment and improving the continuity of feature learning.
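The normalization layer added after the feature output layer can be sketched as an L2 normalization (a minimal pure-Python illustration; the epsilon term is an assumed numerical safeguard, not specified by the patent):

```python
import math

def l2_normalize(features, eps=1e-12):
    # Projects an embedding onto the unit hypersphere, so that the
    # cosine distance between two normalized embeddings reduces to
    # 1 minus their dot product -- keeping the training-time metric
    # consistent with the deployed verification metric.
    norm = math.sqrt(sum(x * x for x in features))
    return [x / (norm + eps) for x in features]
```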
A computer device includes a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, the steps of the face authentication method of the above embodiments are implemented.

A storage medium stores a computer program which, when executed by a processor, implements the steps of the face authentication method of the above embodiments.
The technical features of the above embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of these technical features have been described; nevertheless, as long as a combination of these technical features involves no contradiction, it should be considered within the scope of this specification.

The above embodiments express only several implementations of the present invention, and their descriptions are relatively specific and detailed, but they should not therefore be construed as limiting the scope of the invention patent. It should be noted that a person of ordinary skill in the art may make several modifications and improvements without departing from the concept of the present invention, and these all fall within the protection scope of the present invention. Therefore, the protection scope of this invention patent shall be subject to the appended claims.
Claims (10)
Priority Applications (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201711436879.4A CN108009528B (en) | 2017-12-26 | 2017-12-26 | Triple Loss-based face authentication method and device, computer equipment and storage medium |
| PCT/CN2018/109169 WO2019128367A1 (en) | 2017-12-26 | 2018-09-30 | Face verification method and apparatus based on triplet loss, and computer device and storage medium |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201711436879.4A CN108009528B (en) | 2017-12-26 | 2017-12-26 | Triple Loss-based face authentication method and device, computer equipment and storage medium |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN108009528A true CN108009528A (en) | 2018-05-08 |
| CN108009528B CN108009528B (en) | 2020-04-07 |
Family
ID=62061566
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201711436879.4A Active CN108009528B (en) | 2017-12-26 | 2017-12-26 | Triple Loss-based face authentication method and device, computer equipment and storage medium |
Country Status (2)
| Country | Link |
|---|---|
| CN (1) | CN108009528B (en) |
| WO (1) | WO2019128367A1 (en) |
Cited By (78)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN108922542A (en) * | 2018-06-01 | 2018-11-30 | 平安科技(深圳)有限公司 | Acquisition methods, device, computer equipment and the storage medium of sample triple |
| CN108921952A (en) * | 2018-06-15 | 2018-11-30 | 深圳大学 | Object functionality prediction technique, device, computer equipment and storage medium |
| CN108921033A (en) * | 2018-06-04 | 2018-11-30 | 北京京东金融科技控股有限公司 | Face picture comparison method, device, medium and electronic equipment |
| CN108960209A (en) * | 2018-08-09 | 2018-12-07 | 腾讯科技(深圳)有限公司 | Personal identification method, device and computer readable storage medium |
| CN108960342A (en) * | 2018-08-01 | 2018-12-07 | 中国计量大学 | Based on the image similarity calculation method for improving SoftMax loss function |
| CN108985198A (en) * | 2018-07-02 | 2018-12-11 | 四川斐讯信息技术有限公司 | A kind of COS distance calculation method based on big data feature vector |
| CN109145991A (en) * | 2018-08-24 | 2019-01-04 | 北京地平线机器人技术研发有限公司 | Image group generation method, image group generating means and electronic equipment |
| CN109145956A (en) * | 2018-07-26 | 2019-01-04 | 上海慧子视听科技有限公司 | Methods of marking, device, computer equipment and storage medium |
| CN109145704A (en) * | 2018-06-14 | 2019-01-04 | 西安电子科技大学 | A kind of human face portrait recognition methods based on face character |
| CN109165589A (en) * | 2018-08-14 | 2019-01-08 | 北京颂泽科技有限公司 | Vehicle based on deep learning recognition methods and device again |
| CN109214361A (en) * | 2018-10-18 | 2019-01-15 | 康明飞(北京)科技有限公司 | A kind of face identification method and device and ticket verification method and device |
| CN109271877A (en) * | 2018-08-24 | 2019-01-25 | 北京智芯原动科技有限公司 | A kind of human figure identification method and device |
| CN109344740A (en) * | 2018-09-12 | 2019-02-15 | 上海了物网络科技有限公司 | Face identification system, method and computer readable storage medium |
| CN109359541A (en) * | 2018-09-17 | 2019-02-19 | 南京邮电大学 | A sketch face recognition method based on deep transfer learning |
| CN109492583A (en) * | 2018-11-09 | 2019-03-19 | 安徽大学 | A kind of recognition methods again of the vehicle based on deep learning |
| CN109522850A (en) * | 2018-11-22 | 2019-03-26 | 中山大学 | A kind of movement similarity estimating method based on small-sample learning |
| CN109543524A (en) * | 2018-10-18 | 2019-03-29 | 同盾控股有限公司 | A kind of image-recognizing method, device |
| CN109685121A (en) * | 2018-12-11 | 2019-04-26 | 中国科学院苏州纳米技术与纳米仿生研究所 | Training method, image search method, the computer equipment of image encrypting algorithm |
| CN109685106A (en) * | 2018-11-19 | 2019-04-26 | 深圳博为教育科技有限公司 | A kind of image-recognizing method, face Work attendance method, device and system |
| CN109711358A (en) * | 2018-12-28 | 2019-05-03 | 四川远鉴科技有限公司 | Neural network training method, face identification method and system and storage medium |
| CN109711443A (en) * | 2018-12-14 | 2019-05-03 | 平安城市建设科技(深圳)有限公司 | Floor plan recognition methods, device, equipment and storage medium neural network based |
| CN109815801A (en) * | 2018-12-18 | 2019-05-28 | 北京英索科技发展有限公司 | Face identification method and device based on deep learning |
| CN109886186A (en) * | 2019-02-18 | 2019-06-14 | 上海骏聿数码科技有限公司 | A kind of face identification method and device |
| CN109948568A (en) * | 2019-03-26 | 2019-06-28 | 东华大学 | Embedded human face identifying system based on ARM microprocessor and deep learning |
| WO2019128367A1 (en) * | 2017-12-26 | 2019-07-04 | 广州广电运通金融电子股份有限公司 | Face verification method and apparatus based on triplet loss, and computer device and storage medium |
| CN110147732A (en) * | 2019-04-16 | 2019-08-20 | 平安科技(深圳)有限公司 | Refer to vein identification method, device, computer equipment and storage medium |
| CN110147833A (en) * | 2019-05-09 | 2019-08-20 | 北京迈格威科技有限公司 | Facial image processing method, apparatus, system and readable storage medium storing program for executing |
| CN110213660A (en) * | 2019-05-27 | 2019-09-06 | 广州荔支网络技术有限公司 | Distribution method, system, computer equipment and the storage medium of program |
| CN110598840A (en) * | 2018-06-13 | 2019-12-20 | 富士通株式会社 | Knowledge migration method, information processing apparatus, and storage medium |
| CN110674688A (en) * | 2019-08-19 | 2020-01-10 | 深圳力维智联技术有限公司 | Method, system and medium for obtaining face recognition model for video surveillance scene |
| CN110738071A (en) * | 2018-07-18 | 2020-01-31 | 浙江中正智能科技有限公司 | face algorithm model training method based on deep learning and transfer learning |
| CN110765933A (en) * | 2019-10-22 | 2020-02-07 | 山西省信息产业技术研究院有限公司 | Dynamic portrait sensing comparison method applied to driver identity authentication system |
| CN110874602A (en) * | 2018-08-30 | 2020-03-10 | 北京嘀嘀无限科技发展有限公司 | Image identification method and device |
| CN110929099A (en) * | 2019-11-28 | 2020-03-27 | 杭州趣维科技有限公司 | Short video frame semantic extraction method and system based on multitask learning |
| CN111062338A (en) * | 2019-12-19 | 2020-04-24 | 厦门商集网络科技有限责任公司 | Certificate portrait consistency comparison method and system |
| CN111178249A (en) * | 2019-12-27 | 2020-05-19 | 杭州艾芯智能科技有限公司 | Face comparison method and device, computer equipment and storage medium |
| CN111325223A (en) * | 2018-12-13 | 2020-06-23 | 中国电信股份有限公司 | Deep learning model training method and device and computer readable storage medium |
| WO2020125404A1 (en) * | 2018-12-19 | 2020-06-25 | 北京世纪好未来教育科技有限公司 | Method and apparatus for constructing neural network and computer-readable medium |
| CN111368644A (en) * | 2020-02-14 | 2020-07-03 | 深圳市商汤科技有限公司 | Image processing method, device, electronic device and storage medium |
| CN111401277A (en) * | 2020-03-20 | 2020-07-10 | 深圳前海微众银行股份有限公司 | Face recognition model updating method, device, equipment and medium |
| CN111401257A (en) * | 2020-03-17 | 2020-07-10 | 天津理工大学 | Non-constraint condition face recognition method based on cosine loss |
| WO2020147408A1 (en) * | 2019-01-16 | 2020-07-23 | 平安科技(深圳)有限公司 | Facial recognition model evaluation method and apparatus, and storage medium and computer device |
| CN111461152A (en) * | 2019-01-21 | 2020-07-28 | 同方威视技术股份有限公司 | Goods detection method and device, electronic device and computer readable medium |
| CN111709313A (en) * | 2020-05-27 | 2020-09-25 | 杭州电子科技大学 | Pedestrian Re-identification Method Based on Local and Channel Combination Features |
| CN111723222A (en) * | 2019-03-19 | 2020-09-29 | Sap欧洲公司 | Image Search and Training System |
| CN111832364A (en) * | 2019-04-22 | 2020-10-27 | 普天信息技术有限公司 | A face recognition method and device |
| CN112052821A (en) * | 2020-09-15 | 2020-12-08 | 浙江智慧视频安防创新中心有限公司 | Fire fighting channel safety detection method, device, equipment and storage medium |
| CN112116021A (en) * | 2020-09-27 | 2020-12-22 | 广州华多网络科技有限公司 | A kind of gemstone similarity measurement data processing method and related equipment |
| CN112149491A (en) * | 2019-06-27 | 2020-12-29 | 罗伯特·博世有限公司 | Method for determining a trust value of a detected object |
| CN112492383A (en) * | 2020-12-03 | 2021-03-12 | 珠海格力电器股份有限公司 | Video frame generation method and device, storage medium and electronic equipment |
| CN112560880A (en) * | 2019-09-25 | 2021-03-26 | 中国电信股份有限公司 | Object classification method, object classification apparatus, and computer-readable storage medium |
| CN112580406A (en) * | 2019-09-30 | 2021-03-30 | 北京中关村科金技术有限公司 | Face comparison method and device and storage medium |
| WO2021073150A1 (en) * | 2019-10-16 | 2021-04-22 | 平安科技(深圳)有限公司 | Data detection method and apparatus, and computer device and storage medium |
| CN112949780A (en) * | 2020-04-21 | 2021-06-11 | 佳都科技集团股份有限公司 | Feature model training method, device, equipment and storage medium |
| CN113065495A (en) * | 2021-04-13 | 2021-07-02 | 深圳技术大学 | Image similarity calculation method, target object re-identification method and system |
| CN113269155A (en) * | 2021-06-28 | 2021-08-17 | 苏州市科远软件技术开发有限公司 | End-to-end face recognition method, device, equipment and storage medium |
| CN113283359A (en) * | 2021-06-02 | 2021-08-20 | 万达信息股份有限公司 | Authentication method and system for handheld certificate photo and electronic equipment |
| CN113344875A (en) * | 2021-06-07 | 2021-09-03 | 武汉象点科技有限公司 | Abnormal image detection method based on self-supervision learning |
| CN113486804A (en) * | 2021-07-07 | 2021-10-08 | 科大讯飞股份有限公司 | Object identification method, device, equipment and storage medium |
| CN113538075A (en) * | 2020-04-14 | 2021-10-22 | 阿里巴巴集团控股有限公司 | Data processing method, model training method, device and equipment |
| CN113642468A (en) * | 2021-08-16 | 2021-11-12 | 中国银行股份有限公司 | Identity authentication method and device |
| CN113688793A (en) * | 2021-09-22 | 2021-11-23 | 万章敏 | Training method of face model and face recognition system |
| CN113808318A (en) * | 2020-06-16 | 2021-12-17 | 富士胶片商业创新有限公司 | Building entrance management system, method and non-transitory computer readable medium |
| CN114065901A (en) * | 2020-07-31 | 2022-02-18 | 华为技术有限公司 | Method and device for training neural network model |
| CN114283483A (en) * | 2021-12-31 | 2022-04-05 | 中国联合网络通信集团有限公司 | Face recognition method and device |
| CN114387457A (en) * | 2021-12-27 | 2022-04-22 | 腾晖科技建筑智能(深圳)有限公司 | Face intra-class interval optimization method based on parameter adjustment |
| CN114723933A (en) * | 2020-12-18 | 2022-07-08 | 重庆中星微人工智能芯片技术有限公司 | Region information generation method and device, electronic equipment and computer readable medium |
| CN114743279A (en) * | 2022-05-13 | 2022-07-12 | 以萨技术股份有限公司 | Living body detection function generation method and device, storage medium and equipment |
| CN114882573A (en) * | 2022-06-06 | 2022-08-09 | 浙江工业大学 | Face recognition method and device based on personalized federal learning |
| CN114898454A (en) * | 2022-05-31 | 2022-08-12 | 中国第一汽车股份有限公司 | Neural network model training method and device, electronic equipment and medium |
| CN114937287A (en) * | 2022-05-31 | 2022-08-23 | 深圳市优必选科技股份有限公司 | Pedestrian re-identification method, device, computer-readable storage medium and terminal device |
| CN115422517A (en) * | 2022-09-14 | 2022-12-02 | 平安银行股份有限公司 | Identity authentication method, device, medium and equipment based on credit card |
| CN116127298A (en) * | 2023-02-22 | 2023-05-16 | 北京邮电大学 | Small sample radio frequency fingerprint identification method based on triplet loss |
| US20230169792A1 (en) * | 2021-11-26 | 2023-06-01 | GenkiTek Technology Co., Ltd. | System and method of mode selection face recognition with parallel cnns |
| WO2023093838A1 (en) * | 2021-11-25 | 2023-06-01 | 北京字跳网络技术有限公司 | Super-resolution image processing method and apparatus, and device and medium |
| CN117114972A (en) * | 2023-08-04 | 2023-11-24 | 东莞理工学院 | A method, device, terminal equipment and storage medium for generating an ID photo |
| EP4383212A1 (en) * | 2022-12-06 | 2024-06-12 | Bundesdruckerei GmbH | Authentication device and method for authenticating a person by means of an identification document associated with the person, and identification document and method for producing |
| CN118337532A (en) * | 2024-06-13 | 2024-07-12 | 浙江鹏信信息科技股份有限公司 | Zero trust-based traffic safety audit protection method and system and readable medium |
Families Citing this family (103)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN110516533B (en) * | 2019-07-11 | 2023-06-02 | 同济大学 | Pedestrian re-identification method based on depth measurement |
| CN110414431B (en) * | 2019-07-29 | 2022-12-27 | 广州像素数据技术股份有限公司 | Face recognition method and system based on elastic context relation loss function |
| CN110647880A (en) * | 2019-08-12 | 2020-01-03 | 深圳市华付信息技术有限公司 | Mobile terminal identity card image shielding judgment method |
| CN110458233B (en) * | 2019-08-13 | 2024-02-13 | 腾讯云计算(北京)有限责任公司 | Mixed granularity object recognition model training and recognition method, device and storage medium |
| CN110705357A (en) * | 2019-09-02 | 2020-01-17 | 深圳中兴网信科技有限公司 | Face recognition method and face recognition device |
| CN110555478B (en) * | 2019-09-05 | 2023-02-03 | 东北大学 | Multi-fault diagnosis method for fan based on deep metric network based on hard sample mining |
| CN111008550A (en) * | 2019-09-06 | 2020-04-14 | 上海芯灵科技有限公司 | Identification method for finger vein authentication identity based on Multiple loss function |
| CN110674637B (en) * | 2019-09-06 | 2023-07-11 | 腾讯科技(深圳)有限公司 | Character relationship recognition model training method, device, equipment and medium |
| CN110705393B (en) * | 2019-09-17 | 2023-02-03 | 中国计量大学 | A method to improve the performance of community population face recognition |
| CN110647938B (en) * | 2019-09-24 | 2022-07-15 | 北京市商汤科技开发有限公司 | Image processing method and related device |
| CN112733574B (en) * | 2019-10-14 | 2023-04-07 | 中移(苏州)软件技术有限公司 | Face recognition method and device and computer readable storage medium |
| CN110796057A (en) * | 2019-10-22 | 2020-02-14 | 上海交通大学 | Pedestrian re-identification method and device and computer equipment |
| CN110852367B (en) * | 2019-11-05 | 2023-10-31 | 上海联影智能医疗科技有限公司 | Image classification methods, computer equipment and storage media |
| CN110956098B (en) * | 2019-11-13 | 2023-05-12 | 深圳数联天下智能科技有限公司 | Image processing method and related equipment |
| CN111126360B (en) * | 2019-11-15 | 2023-03-24 | 西安电子科技大学 | Cross-domain pedestrian re-identification method based on unsupervised combined multi-loss model |
| CN111222411B (en) * | 2019-11-28 | 2023-09-01 | 中国船舶重工集团公司第七一三研究所 | Laser emission safety rapid alarm method and device |
| CN111079566B (en) * | 2019-11-28 | 2023-05-02 | 深圳市信义科技有限公司 | Large-scale face recognition model optimization system |
| CN111091089B (en) * | 2019-12-12 | 2022-07-29 | 新华三大数据技术有限公司 | Face image processing method and device, electronic equipment and storage medium |
| CN111144240B (en) * | 2019-12-12 | 2023-02-07 | 深圳数联天下智能科技有限公司 | Image processing method and related equipment |
| CN111062430B (en) * | 2019-12-12 | 2023-05-09 | 易诚高科(大连)科技有限公司 | A Pedestrian Re-ID Evaluation Method Based on Probability Density Function |
| CN111126240B (en) * | 2019-12-19 | 2023-04-07 | 西安工程大学 | Three-channel feature fusion face recognition method |
| CN111191563A (en) * | 2019-12-26 | 2020-05-22 | 三盟科技股份有限公司 | Face recognition method and system based on data sample and test data set training |
| CN111241925B (en) * | 2019-12-30 | 2023-08-18 | 新大陆数字技术股份有限公司 | Face quality assessment method, system, electronic equipment and readable storage medium |
| CN111209839B (en) * | 2019-12-31 | 2023-05-23 | 上海涛润医疗科技有限公司 | Face recognition method |
| CN111198964B (en) * | 2020-01-10 | 2023-04-25 | 中国科学院自动化研究所 | Image retrieval method and system |
| CN111274946B (en) * | 2020-01-19 | 2023-05-05 | 杭州涂鸦信息技术有限公司 | Face recognition method, system and equipment |
| CN113362096A (en) * | 2020-03-04 | 2021-09-07 | 驰众信息技术(上海)有限公司 | Frame advertisement image matching method based on deep learning |
| CN111368766B (en) * | 2020-03-09 | 2023-08-18 | 云南安华防灾减灾科技有限责任公司 | Deep learning-based cow face detection and recognition method |
| CN111539247B (en) * | 2020-03-10 | 2023-02-10 | 西安电子科技大学 | A hyperspectral face recognition method, device, electronic equipment and storage medium thereof |
| CN111429414B (en) * | 2020-03-18 | 2023-04-07 | 腾讯科技(深圳)有限公司 | Artificial intelligence-based focus image sample determination method and related device |
| CN111414862B (en) * | 2020-03-22 | 2023-03-24 | 西安电子科技大学 | Expression recognition method based on neural network fusion key point angle change |
| CN111507289A (en) * | 2020-04-22 | 2020-08-07 | 上海眼控科技股份有限公司 | Video matching method, computer device and storage medium |
| CN111582107B (en) * | 2020-04-28 | 2023-09-29 | 浙江大华技术股份有限公司 | Training method and recognition method of target re-recognition model, electronic equipment and device |
| CN111639535B (en) * | 2020-04-29 | 2023-08-22 | 深圳英飞拓智能技术有限公司 | Face recognition method and device based on deep learning |
| CN111626212B (en) * | 2020-05-27 | 2023-09-26 | 腾讯科技(深圳)有限公司 | Method and device for identifying object in picture, storage medium and electronic device |
| CN113807122B (en) * | 2020-06-11 | 2024-11-22 | 阿里巴巴(中国)网络技术有限公司 | Model training method, object recognition method and device, and storage medium |
| CN111738157B (en) * | 2020-06-23 | 2023-07-21 | 平安科技(深圳)有限公司 | Face action unit data set construction method and device and computer equipment |
| CN112257738B (en) * | 2020-07-31 | 2025-01-17 | 北京京东尚科信息技术有限公司 | Machine learning model training method, device and image classification method and device |
| CN111988614B (en) * | 2020-08-14 | 2022-09-13 | 深圳前海微众银行股份有限公司 | Hash coding optimization method and device and readable storage medium |
| EP4189632A4 (en) * | 2020-08-25 | 2023-07-05 | Huawei Technologies Co., Ltd. | DISTANCES BETWEEN DISTRIBUTIONS TO MEASURE THE IMAGE BETWEEN UNITS |
| CN112069993B (en) * | 2020-09-04 | 2024-02-13 | 西安西图之光智能科技有限公司 | Dense face detection method and system based on five-sense organ mask constraint and storage medium |
| CN112084956A (en) * | 2020-09-11 | 2020-12-15 | 上海交通大学烟台信息技术研究院 | Special face crowd screening system based on small sample learning prototype network |
| CN112287765B (en) * | 2020-09-30 | 2024-06-04 | 新大陆数字技术股份有限公司 | Face living body detection method, device, equipment and readable storage medium |
| CN112232226B (en) * | 2020-10-19 | 2025-09-30 | 戴姆勒股份公司 | Method and system for detecting target objects through discriminant models |
| CN112329826B (en) * | 2020-10-24 | 2024-10-18 | 中国人民解放军空军军医大学 | Image recognition model training method, image recognition method and device |
| CN112307968B (en) * | 2020-10-30 | 2022-11-08 | 天地伟业技术有限公司 | Face recognition feature compression method |
| CN112328786A (en) * | 2020-11-03 | 2021-02-05 | 平安科技(深圳)有限公司 | BERT-based text classification method, device, computer equipment and storage medium |
| GB2600922B (en) * | 2020-11-05 | 2024-04-10 | Thales Holdings Uk Plc | One shot learning for identifying data items similar to a query data item |
| CN112347957A (en) * | 2020-11-12 | 2021-02-09 | 广联达科技股份有限公司 | Pedestrian re-identification method, device, computer equipment and storage medium |
| CN112200154A (en) * | 2020-11-17 | 2021-01-08 | 苏州方正璞华信息技术有限公司 | A face recognition method, device, electronic device and storage medium for wearing a mask |
| CN113392866B (en) * | 2020-11-19 | 2025-08-29 | 腾讯科技(深圳)有限公司 | Image processing method, device and storage medium based on artificial intelligence |
| CN112836566A (en) * | 2020-12-01 | 2021-05-25 | 北京智云视图科技有限公司 | Multitask neural network face key point detection method for edge equipment |
| CN112836719B (en) * | 2020-12-11 | 2024-01-05 | 南京富岛信息工程有限公司 | A similarity detection method for dynamometer diagrams that combines binary classification and triplet |
| CN112580011B (en) * | 2020-12-25 | 2022-05-24 | 华南理工大学 | Portrait encryption and decryption system facing biological feature privacy protection |
| CN112861626B (en) * | 2021-01-04 | 2024-03-08 | 西北工业大学 | Fine-grained expression classification method based on small sample learning |
| CN114764937B (en) * | 2021-01-13 | 2026-02-03 | 武汉Tcl集团工业研究院有限公司 | Face alignment method and equipment |
| CN113762019B (en) * | 2021-01-22 | 2024-04-09 | 北京沃东天骏信息技术有限公司 | Feature extraction network training method, face recognition method and device |
| CN113569081B (en) * | 2021-01-26 | 2025-09-23 | 腾讯科技(北京)有限公司 | Image recognition method, device, equipment and storage medium |
| CN112836629B (en) * | 2021-02-01 | 2024-03-08 | 清华大学深圳国际研究生院 | Image classification method |
| CN112966724B (en) * | 2021-02-07 | 2024-04-09 | 惠州市博实结科技有限公司 | Method and device for classifying image single categories |
| CN112766237A (en) * | 2021-03-12 | 2021-05-07 | 东北林业大学 | Unsupervised pedestrian re-identification method based on cluster feature point clustering |
| CN113157956B (en) * | 2021-04-23 | 2022-08-05 | 雅马哈发动机(厦门)信息系统有限公司 | Picture searching method, system, mobile terminal and storage medium |
| CN113344031B (en) * | 2021-05-13 | 2022-12-27 | 清华大学 | Text classification method |
| TWI857253B (en) * | 2021-07-02 | 2024-10-01 | 瑞昱半導體股份有限公司 | Training method of neural network model and associated device |
| CN113469253B (en) * | 2021-07-02 | 2024-05-14 | 河海大学 | Electric larceny detection method based on triple twinning network |
| CN113705383A (en) * | 2021-08-12 | 2021-11-26 | 南京英诺森软件科技有限公司 | Cross-age face recognition method and system based on ternary constraint |
| CN113435545B (en) * | 2021-08-14 | 2025-03-14 | 北京达佳互联信息技术有限公司 | Image processing model training method and device |
| CN113642481A (en) * | 2021-08-17 | 2021-11-12 | 百度在线网络技术(北京)有限公司 | Identification method, training method, device, electronic device and storage medium |
| CN113569991B (en) * | 2021-08-26 | 2024-05-28 | 深圳市捷顺科技实业股份有限公司 | Person evidence comparison model training method, computer equipment and computer storage medium |
| CN113780461B (en) * | 2021-09-23 | 2022-08-05 | 中国人民解放军国防科技大学 | A Robust Neural Network Training Method Based on Feature Matching |
| CN113947790A (en) * | 2021-09-23 | 2022-01-18 | 秦桂珍 | Financial big data face recognition method and financial management system |
| CN113887653B (en) * | 2021-10-20 | 2024-02-06 | 西安交通大学 | Positioning method and system for tight coupling weak supervision learning based on ternary network |
| CN114299306B (en) * | 2021-10-22 | 2025-01-21 | 腾讯科技(深圳)有限公司 | Method for obtaining image retrieval model, image retrieval method, device and equipment |
| CN114049479A (en) * | 2021-11-10 | 2022-02-15 | 苏州魔视智能科技有限公司 | Self-supervision fisheye camera image feature point extraction method and device and storage medium |
| CN116129479A (en) * | 2021-11-11 | 2023-05-16 | 北京眼神智能科技有限公司 | Convolutional neural network training and face detection living method, device, medium and equipment |
| CN114241017B (en) * | 2021-11-16 | 2025-06-27 | 东软医疗系统股份有限公司 | Image registration method, device, storage medium and computer equipment |
| CN114511897A (en) * | 2021-12-21 | 2022-05-17 | 特斯联科技集团有限公司 | Identity recognition method, system, storage medium and server |
| CN114445871A (en) * | 2021-12-22 | 2022-05-06 | 天翼云科技有限公司 | Intelligent video monitoring method and system for face recognition |
| CN114333015B (en) * | 2021-12-29 | 2025-05-20 | 以萨技术股份有限公司 | Training method and device for face recognition model and electronic equipment |
| CN114491476A (en) * | 2022-01-24 | 2022-05-13 | 支付宝(杭州)信息技术有限公司 | Image similarity matching method and device with reliable generalization |
| CN114581963B (en) * | 2022-02-25 | 2025-05-13 | 浙江工业大学 | OCT fingerprint section image authenticity detection method based on reconstruction difference |
| CN114863425B (en) * | 2022-03-31 | 2024-10-29 | 哈尔滨工程大学 | Urine erythrocyte classification method based on supervised contrast learning |
| CN114708488B (en) * | 2022-04-26 | 2026-02-03 | 桂林电子科技大学 | YOLOv5 neural network improvement method based on intra-class inter-class correlation |
| CN114882558B (en) * | 2022-04-29 | 2024-02-23 | 陕西师范大学 | Real-time identity authentication method in learning scenarios based on face recognition technology |
| CN114663965B (en) * | 2022-05-24 | 2022-10-21 | 之江实验室 | Testimony comparison method and device based on two-stage alternative learning |
| CN114863124B (en) * | 2022-05-25 | 2025-07-29 | 小荷医疗器械(海南)有限公司 | Model training method, polyp detection method, corresponding device, medium and equipment |
| CN114926445B (en) * | 2022-05-31 | 2024-03-26 | 哈尔滨工业大学 | Small sample crop disease image identification method and system based on twin network |
| CN115063374B (en) * | 2022-06-24 | 2025-09-16 | 合肥的卢深视科技有限公司 | Model training, face image quality scoring method, electronic equipment and storage medium |
| CN114817888A (en) * | 2022-06-27 | 2022-07-29 | 中国信息通信研究院 | Certificate registration and issuance method, device and storage medium |
| CN115661569B (en) * | 2022-10-17 | 2025-12-12 | 北京航空航天大学 | A high-precision, fine-grained SAR target detection method |
| CN116152933A (en) * | 2022-12-12 | 2023-05-23 | 支付宝(杭州)信息技术有限公司 | A training method, device, equipment and storage medium for an anomaly detection model |
| CN116206355A (en) * | 2023-04-25 | 2023-06-02 | 鹏城实验室 | Face recognition model training, image registration and face recognition method and device |
| CN116910604A (en) * | 2023-06-07 | 2023-10-20 | 中国银行股份有限公司 | User classification methods, devices, computer equipment, storage media and program products |
| CN116959064B (en) * | 2023-06-25 | 2024-04-26 | 上海腾桥信息技术有限公司 | Certificate verification method and device, computer equipment and storage medium |
| CN116977461B (en) * | 2023-06-30 | 2024-03-08 | 北京开普云信息科技有限公司 | Portrait generation method, device, storage medium and equipment for specific scene |
| CN117935380B (en) * | 2024-01-29 | 2025-09-26 | 华南理工大学 | A fraud detection method and system for makeup faces |
| CN118038277B (en) * | 2024-03-27 | 2025-03-18 | 杭州电子科技大学 | A robot scene recognition method based on lifelong learning |
| CN118410192B (en) * | 2024-06-24 | 2024-10-11 | 广州思德医疗科技有限公司 | Endoscopic image deduplication method, endoscopic image deduplication apparatus, endoscopic image deduplication computer device, endoscopic image deduplication program product, and endoscopic image deduplication program product |
| CN119964131B (en) * | 2024-10-22 | 2025-10-31 | 湖北省烟草公司咸宁市公司 | A method, device, and electronic equipment for cigarette pack reflection recognition based on a contrastive learning model. |
| CN119138866B (en) * | 2024-11-19 | 2025-02-11 | 江西财经大学 | Signal monitoring system based on human vital signs feature extraction |
| CN119784972B (en) * | 2024-11-21 | 2025-10-24 | 北京邮电大学 | An end-to-end scatterer profile modeling method based on online hard example mining |
| CN120013929B (en) * | 2025-04-17 | 2025-06-24 | 国科大杭州高等研究院 | Wafer defect detection method based on one-shot |
| CN120223444B (en) * | 2025-05-28 | 2025-11-14 | 小元感知(葫芦岛)科技有限公司 | Remote identity authentication method based on multiple video recognition |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN106599827A (en) * | 2016-12-09 | 2017-04-26 | 浙江工商大学 | Small target rapid detection method based on deep convolution neural network |
| CN107194341A (en) * | 2017-05-16 | 2017-09-22 | 西安电子科技大学 | The many convolution neural network fusion face identification methods of Maxout and system |
Family Cites Families (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US9129216B1 (en) * | 2013-07-15 | 2015-09-08 | Xdroid Kft. | System, method and apparatus for computer aided association of relevant images with text |
| CN107423690B (en) * | 2017-06-26 | 2020-11-13 | 广东工业大学 | A face recognition method and device |
| CN108009528B (en) * | 2017-12-26 | 2020-04-07 | 广州广电运通金融电子股份有限公司 | Triple Loss-based face authentication method and device, computer equipment and storage medium |
- 2017
  - 2017-12-26 CN CN201711436879.4A patent/CN108009528B/en active Active
- 2018
  - 2018-09-30 WO PCT/CN2018/109169 patent/WO2019128367A1/en not_active Ceased
Non-Patent Citations (1)
| Title |
|---|
| Amos, Brandon, et al.: "OpenFace: A general-purpose face recognition library with mobile applications", CMU School of Computer Science, Tech. Rep. * |
Cited By (108)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2019128367A1 (en) * | 2017-12-26 | 2019-07-04 | 广州广电运通金融电子股份有限公司 | Face verification method and apparatus based on triplet loss, and computer device and storage medium |
| CN108922542A (en) * | 2018-06-01 | 2018-11-30 | 平安科技(深圳)有限公司 | Sample triplet acquisition method, device, computer equipment and storage medium |
| WO2019227613A1 (en) * | 2018-06-01 | 2019-12-05 | 平安科技(深圳)有限公司 | Method and device for obtaining triple of samples, computer device and storage medium |
| CN108921033A (en) * | 2018-06-04 | 2018-11-30 | 北京京东金融科技控股有限公司 | Face picture comparison method, device, medium and electronic equipment |
| CN110598840A (en) * | 2018-06-13 | 2019-12-20 | 富士通株式会社 | Knowledge migration method, information processing apparatus, and storage medium |
| CN110598840B (en) * | 2018-06-13 | 2023-04-18 | 富士通株式会社 | Knowledge migration method, information processing apparatus, and storage medium |
| CN109145704A (en) * | 2018-06-14 | 2019-01-04 | 西安电子科技大学 | A face portrait recognition method based on facial attributes |
| US11288538B2 (en) | 2018-06-15 | 2022-03-29 | Shenzhen University | Object functionality predication methods, computer device, and storage medium |
| CN108921952A (en) * | 2018-06-15 | 2018-11-30 | 深圳大学 | Object functionality prediction technique, device, computer equipment and storage medium |
| WO2019237450A1 (en) * | 2018-06-15 | 2019-12-19 | 深圳大学 | Object functionality prediction method, computer device, and storage medium |
| CN108985198A (en) * | 2018-07-02 | 2018-12-11 | 四川斐讯信息技术有限公司 | A cosine distance calculation method based on big data feature vectors |
| CN110738071A (en) * | 2018-07-18 | 2020-01-31 | 浙江中正智能科技有限公司 | A face algorithm model training method based on deep learning and transfer learning |
| CN109145956B (en) * | 2018-07-26 | 2021-12-14 | 上海慧子视听科技有限公司 | Scoring method, scoring device, computer equipment and storage medium |
| CN109145956A (en) * | 2018-07-26 | 2019-01-04 | 上海慧子视听科技有限公司 | Scoring method, device, computer equipment and storage medium |
| CN108960342A (en) * | 2018-08-01 | 2018-12-07 | 中国计量大学 | Image similarity calculation method based on an improved SoftMax loss function |
| CN108960209B (en) * | 2018-08-09 | 2023-07-21 | 腾讯科技(深圳)有限公司 | Identity recognition method, identity recognition device and computer readable storage medium |
| CN108960209A (en) * | 2018-08-09 | 2018-12-07 | 腾讯科技(深圳)有限公司 | Identity recognition method, device and computer-readable storage medium |
| CN109165589B (en) * | 2018-08-14 | 2021-02-23 | 北京颂泽科技有限公司 | Vehicle weight recognition method and device based on deep learning |
| CN109165589A (en) * | 2018-08-14 | 2019-01-08 | 北京颂泽科技有限公司 | Vehicle re-identification method and device based on deep learning |
| CN109271877A (en) * | 2018-08-24 | 2019-01-25 | 北京智芯原动科技有限公司 | A human figure recognition method and device |
| CN109145991B (en) * | 2018-08-24 | 2020-07-31 | 北京地平线机器人技术研发有限公司 | Image group generation method, image group generation device and electronic equipment |
| CN109145991A (en) * | 2018-08-24 | 2019-01-04 | 北京地平线机器人技术研发有限公司 | Image group generation method, image group generating means and electronic equipment |
| CN110874602A (en) * | 2018-08-30 | 2020-03-10 | 北京嘀嘀无限科技发展有限公司 | Image identification method and device |
| CN109344740A (en) * | 2018-09-12 | 2019-02-15 | 上海了物网络科技有限公司 | Face identification system, method and computer readable storage medium |
| CN109359541A (en) * | 2018-09-17 | 2019-02-19 | 南京邮电大学 | A sketch face recognition method based on deep transfer learning |
| CN109214361A (en) * | 2018-10-18 | 2019-01-15 | 康明飞(北京)科技有限公司 | A face recognition method and device, and a ticket verification method and device |
| CN109543524A (en) * | 2018-10-18 | 2019-03-29 | 同盾控股有限公司 | An image recognition method and device |
| CN109492583A (en) * | 2018-11-09 | 2019-03-19 | 安徽大学 | A vehicle re-identification method based on deep learning |
| CN109685106A (en) * | 2018-11-19 | 2019-04-26 | 深圳博为教育科技有限公司 | An image recognition method, face attendance method, device and system |
| CN109522850B (en) * | 2018-11-22 | 2023-03-10 | 中山大学 | An Action Similarity Evaluation Method Based on Few Sampling Learning |
| CN109522850A (en) * | 2018-11-22 | 2019-03-26 | 中山大学 | An action similarity evaluation method based on few-shot learning |
| CN109685121B (en) * | 2018-12-11 | 2023-07-18 | 中国科学院苏州纳米技术与纳米仿生研究所 | Image retrieval model training method, image retrieval method, computer equipment |
| CN109685121A (en) * | 2018-12-11 | 2019-04-26 | 中国科学院苏州纳米技术与纳米仿生研究所 | Image retrieval model training method, image retrieval method, and computer equipment |
| CN111325223B (en) * | 2018-12-13 | 2023-10-24 | 中国电信股份有限公司 | Training method, device and computer-readable storage medium for deep learning model |
| CN111325223A (en) * | 2018-12-13 | 2020-06-23 | 中国电信股份有限公司 | Deep learning model training method and device and computer readable storage medium |
| CN109711443A (en) * | 2018-12-14 | 2019-05-03 | 平安城市建设科技(深圳)有限公司 | Neural-network-based floor plan recognition method, device, equipment and storage medium |
| CN109815801A (en) * | 2018-12-18 | 2019-05-28 | 北京英索科技发展有限公司 | Face identification method and device based on deep learning |
| WO2020125404A1 (en) * | 2018-12-19 | 2020-06-25 | 北京世纪好未来教育科技有限公司 | Method and apparatus for constructing neural network and computer-readable medium |
| CN109711358A (en) * | 2018-12-28 | 2019-05-03 | 四川远鉴科技有限公司 | Neural network training method, face recognition method and system, and storage medium |
| WO2020147408A1 (en) * | 2019-01-16 | 2020-07-23 | 平安科技(深圳)有限公司 | Facial recognition model evaluation method and apparatus, and storage medium and computer device |
| CN111461152A (en) * | 2019-01-21 | 2020-07-28 | 同方威视技术股份有限公司 | Goods detection method and device, electronic device and computer readable medium |
| CN111461152B (en) * | 2019-01-21 | 2024-04-05 | 同方威视技术股份有限公司 | Cargo detection method and device, electronic equipment and computer readable medium |
| CN109886186A (en) * | 2019-02-18 | 2019-06-14 | 上海骏聿数码科技有限公司 | A face recognition method and device |
| CN111723222A (en) * | 2019-03-19 | 2020-09-29 | Sap欧洲公司 | Image Search and Training System |
| CN109948568A (en) * | 2019-03-26 | 2019-06-28 | 东华大学 | Embedded human face identifying system based on ARM microprocessor and deep learning |
| CN110147732A (en) * | 2019-04-16 | 2019-08-20 | 平安科技(深圳)有限公司 | Finger vein recognition method, device, computer equipment and storage medium |
| CN111832364B (en) * | 2019-04-22 | 2024-04-23 | 普天信息技术有限公司 | A face recognition method and device |
| CN111832364A (en) * | 2019-04-22 | 2020-10-27 | 普天信息技术有限公司 | A face recognition method and device |
| CN110147833A (en) * | 2019-05-09 | 2019-08-20 | 北京迈格威科技有限公司 | Facial image processing method, apparatus, system and readable storage medium |
| CN110213660B (en) * | 2019-05-27 | 2021-08-20 | 广州荔支网络技术有限公司 | Program distribution method, system, computer device and storage medium |
| CN110213660A (en) * | 2019-05-27 | 2019-09-06 | 广州荔支网络技术有限公司 | Program distribution method, system, computer equipment and storage medium |
| CN112149491A (en) * | 2019-06-27 | 2020-12-29 | 罗伯特·博世有限公司 | Method for determining a trust value of a detected object |
| CN112149491B (en) * | 2019-06-27 | 2025-09-09 | 罗伯特·博世有限公司 | Method for determining trust value of detected object |
| CN110674688B (en) * | 2019-08-19 | 2023-10-31 | 深圳力维智联技术有限公司 | Face recognition model acquisition method, system and medium for video monitoring scene |
| CN110674688A (en) * | 2019-08-19 | 2020-01-10 | 深圳力维智联技术有限公司 | Method, system and medium for obtaining face recognition model for video surveillance scene |
| CN112560880A (en) * | 2019-09-25 | 2021-03-26 | 中国电信股份有限公司 | Object classification method, object classification apparatus, and computer-readable storage medium |
| CN112580406A (en) * | 2019-09-30 | 2021-03-30 | 北京中关村科金技术有限公司 | Face comparison method and device and storage medium |
| WO2021073150A1 (en) * | 2019-10-16 | 2021-04-22 | 平安科技(深圳)有限公司 | Data detection method and apparatus, and computer device and storage medium |
| CN110765933A (en) * | 2019-10-22 | 2020-02-07 | 山西省信息产业技术研究院有限公司 | Dynamic portrait sensing comparison method applied to driver identity authentication system |
| CN110929099A (en) * | 2019-11-28 | 2020-03-27 | 杭州趣维科技有限公司 | Short video frame semantic extraction method and system based on multitask learning |
| CN111062338B (en) * | 2019-12-19 | 2023-11-17 | 厦门商集网络科技有限责任公司 | A method and system for comparing the consistency of certificates and portraits |
| CN111062338A (en) * | 2019-12-19 | 2020-04-24 | 厦门商集网络科技有限责任公司 | Certificate portrait consistency comparison method and system |
| CN111178249A (en) * | 2019-12-27 | 2020-05-19 | 杭州艾芯智能科技有限公司 | Face comparison method and device, computer equipment and storage medium |
| CN111368644B (en) * | 2020-02-14 | 2024-01-05 | 深圳市商汤科技有限公司 | Image processing method, device, electronic equipment and storage medium |
| CN111368644A (en) * | 2020-02-14 | 2020-07-03 | 深圳市商汤科技有限公司 | Image processing method, device, electronic device and storage medium |
| CN111401257A (en) * | 2020-03-17 | 2020-07-10 | 天津理工大学 | Unconstrained face recognition method based on cosine loss |
| CN111401257B (en) * | 2020-03-17 | 2022-10-04 | 天津理工大学 | A face recognition method based on cosine loss under unconstrained conditions |
| CN111401277A (en) * | 2020-03-20 | 2020-07-10 | 深圳前海微众银行股份有限公司 | Face recognition model updating method, device, equipment and medium |
| CN113538075A (en) * | 2020-04-14 | 2021-10-22 | 阿里巴巴集团控股有限公司 | Data processing method, model training method, device and equipment |
| CN112949780A (en) * | 2020-04-21 | 2021-06-11 | 佳都科技集团股份有限公司 | Feature model training method, device, equipment and storage medium |
| CN111709313A (en) * | 2020-05-27 | 2020-09-25 | 杭州电子科技大学 | Pedestrian Re-identification Method Based on Local and Channel Combination Features |
| CN111709313B (en) * | 2020-05-27 | 2022-07-29 | 杭州电子科技大学 | Pedestrian re-identification method based on local and channel combination characteristics |
| CN113808318A (en) * | 2020-06-16 | 2021-12-17 | 富士胶片商业创新有限公司 | Building entrance management system, method and non-transitory computer readable medium |
| CN113808318B (en) * | 2020-06-16 | 2025-09-26 | 富士胶片商业创新有限公司 | Building entry management system, method, and non-transitory computer-readable medium |
| CN114065901A (en) * | 2020-07-31 | 2022-02-18 | 华为技术有限公司 | Method and device for training neural network model |
| CN112052821B (en) * | 2020-09-15 | 2023-07-07 | 浙江智慧视频安防创新中心有限公司 | Fire-fighting channel safety detection method, device, equipment and storage medium |
| CN112052821A (en) * | 2020-09-15 | 2020-12-08 | 浙江智慧视频安防创新中心有限公司 | Fire fighting channel safety detection method, device, equipment and storage medium |
| CN112116021B (en) * | 2020-09-27 | 2024-12-10 | 广州华多网络科技有限公司 | A gemstone similarity measurement data processing method and related equipment |
| CN112116021A (en) * | 2020-09-27 | 2020-12-22 | 广州华多网络科技有限公司 | A gemstone similarity measurement data processing method and related equipment |
| CN112492383A (en) * | 2020-12-03 | 2021-03-12 | 珠海格力电器股份有限公司 | Video frame generation method and device, storage medium and electronic equipment |
| CN114723933A (en) * | 2020-12-18 | 2022-07-08 | 重庆中星微人工智能芯片技术有限公司 | Region information generation method and device, electronic equipment and computer readable medium |
| CN113065495A (en) * | 2021-04-13 | 2021-07-02 | 深圳技术大学 | Image similarity calculation method, target object re-identification method and system |
| CN113065495B (en) * | 2021-04-13 | 2023-07-14 | 深圳技术大学 | Image similarity calculation method, target object re-identification method and system |
| CN113283359A (en) * | 2021-06-02 | 2021-08-20 | 万达信息股份有限公司 | Authentication method and system for handheld certificate photo and electronic equipment |
| CN113344875A (en) * | 2021-06-07 | 2021-09-03 | 武汉象点科技有限公司 | Abnormal image detection method based on self-supervision learning |
| CN113269155A (en) * | 2021-06-28 | 2021-08-17 | 苏州市科远软件技术开发有限公司 | End-to-end face recognition method, device, equipment and storage medium |
| CN113486804A (en) * | 2021-07-07 | 2021-10-08 | 科大讯飞股份有限公司 | Object identification method, device, equipment and storage medium |
| CN113486804B (en) * | 2021-07-07 | 2024-02-20 | 科大讯飞股份有限公司 | Object identification method, device, equipment and storage medium |
| CN113642468A (en) * | 2021-08-16 | 2021-11-12 | 中国银行股份有限公司 | Identity authentication method and device |
| CN113688793A (en) * | 2021-09-22 | 2021-11-23 | 万章敏 | Training method of face model and face recognition system |
| WO2023093838A1 (en) * | 2021-11-25 | 2023-06-01 | 北京字跳网络技术有限公司 | Super-resolution image processing method and apparatus, and device and medium |
| US20230169792A1 (en) * | 2021-11-26 | 2023-06-01 | GenkiTek Technology Co., Ltd. | System and method of mode selection face recognition with parallel cnns |
| US12080101B2 (en) * | 2021-11-26 | 2024-09-03 | GenkiTek Technology Co., Ltd. | System and method of mode selection face recognition with parallel CNNS |
| CN114387457A (en) * | 2021-12-27 | 2022-04-22 | 腾晖科技建筑智能(深圳)有限公司 | Face intra-class interval optimization method based on parameter adjustment |
| CN114283483A (en) * | 2021-12-31 | 2022-04-05 | 中国联合网络通信集团有限公司 | Face recognition method and device |
| CN114743279A (en) * | 2022-05-13 | 2022-07-12 | 以萨技术股份有限公司 | Living body detection function generation method and device, storage medium and equipment |
| CN114898454A (en) * | 2022-05-31 | 2022-08-12 | 中国第一汽车股份有限公司 | Neural network model training method and device, electronic equipment and medium |
| CN114937287A (en) * | 2022-05-31 | 2022-08-23 | 深圳市优必选科技股份有限公司 | Pedestrian re-identification method, device, computer-readable storage medium and terminal device |
| CN114937287B (en) * | 2022-05-31 | 2025-04-18 | 深圳市优必选科技股份有限公司 | Pedestrian re-identification method, device, computer-readable storage medium and terminal device |
| US12525049B2 (en) * | 2022-05-31 | 2026-01-13 | Ubtech Robotics Corp Ltd | Person re-identification method, computer-readable storage medium, and terminal device |
| CN114882573A (en) * | 2022-06-06 | 2022-08-09 | 浙江工业大学 | Face recognition method and device based on personalized federal learning |
| CN114882573B (en) * | 2022-06-06 | 2025-04-08 | 浙江工业大学 | A facial recognition method and device based on personalized federated learning |
| CN115422517A (en) * | 2022-09-14 | 2022-12-02 | 平安银行股份有限公司 | Identity authentication method, device, medium and equipment based on credit card |
| EP4383212A1 (en) * | 2022-12-06 | 2024-06-12 | Bundesdruckerei GmbH | Authentication device and method for authenticating a person by means of an identification document associated with the person, and identification document and method for producing |
| CN116127298B (en) * | 2023-02-22 | 2024-03-19 | 北京邮电大学 | Small sample radio frequency fingerprint identification method based on triplet loss |
| CN116127298A (en) * | 2023-02-22 | 2023-05-16 | 北京邮电大学 | Small sample radio frequency fingerprint identification method based on triplet loss |
| CN117114972A (en) * | 2023-08-04 | 2023-11-24 | 东莞理工学院 | A method, device, terminal equipment and storage medium for generating an ID photo |
| CN118337532A (en) * | 2024-06-13 | 2024-07-12 | 浙江鹏信信息科技股份有限公司 | Zero trust-based traffic safety audit protection method and system and readable medium |
Also Published As
| Publication number | Publication date |
|---|---|
| WO2019128367A1 (en) | 2019-07-04 |
| CN108009528B (en) | 2020-04-07 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN108009528A (en) | Face authentication method, device, computer equipment and storage medium based on Triplet Loss | |
| US20250260780A1 (en) | Liveness Detection for Mobile Devices | |
| Shi et al. | Docface: Matching id document photos to selfies | |
| CN105844206A (en) | Identity authentication method and identity authentication device | |
| CN105654056A (en) | Face recognition method and device | |
| CN110321871B (en) | Palm vein identification system and method based on LSTM | |
| CN110414299A (en) | A computer vision-based method for kinship analysis of monkey faces | |
| CN109344709A (en) | A method for detecting fake images generated by faces | |
| CN110472495A (en) | A Deep Learning Face Recognition Method Based on Graph Reasoning Global Features | |
| CN113591603A (en) | Certificate verification method and device, electronic equipment and storage medium | |
| CN116311400A (en) | Palmprint image processing method, electronic device and storage medium | |
| CN112686191B (en) | Living body anti-counterfeiting method, system, terminal and medium based on three-dimensional information of human face | |
| CN114863499A (en) | Finger vein and palm vein identification method based on federal learning | |
| KR102215535B1 (en) | Partial face image based identity authentication method using neural network and system for the method | |
| CN117095436A (en) | Intelligent management system and method for enterprise employee information | |
| Goud et al. | Smart attendance notification system using SMTP with face recognition | |
| CN110443577A (en) | A campus attendance system based on face recognition | |
| Yuan et al. | SALM: Smartphone-based identity authentication using lip motion characteristics | |
| Yang et al. | Privileged information-based conditional structured output regression forest for facial point detection | |
| Alniemi et al. | Class Attendance System Based on Face Recognition. | |
| Prayogo et al. | A novel approach for face recognition: Yolo-based face detection and facenet | |
| Santemiz et al. | Automatic face recognition for home safety using video‐based side‐view face images | |
| Karmakar et al. | Generation of new points for training set and feature-level fusion in multimodal biometric identification | |
| Sabeenian et al. | Transition from holistic to deep learning face recognition methods | |
| Farooqui et al. | Automatic detection of fake profiles in online social network using soft computing |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||
| GR01 | Patent grant | ||
| PE01 | Entry into force of the registration of the contract for pledge of patent right |
Denomination of invention: Face authentication method, device, computer device and storage medium based on triplet loss

Effective date of registration: 20210621

Granted publication date: 20200407

Pledgee: Bank of China Co., Ltd., Guangzhou Tianhe Branch

Pledgor: GRG Banking Equipment Co.,Ltd.

Registration number: Y2021980004993
|
| PE01 | Entry into force of the registration of the contract for pledge of patent right | ||
| PC01 | Cancellation of the registration of the contract for pledge of patent right |
Date of cancellation: 20230302

Granted publication date: 20200407

Pledgee: Bank of China Co., Ltd., Guangzhou Tianhe Branch

Pledgor: GRG BANKING EQUIPMENT Co.,Ltd.

Registration number: Y2021980004993
|
| PC01 | Cancellation of the registration of the contract for pledge of patent right | ||
| CP03 | Change of name, title or address |
Address after: 510663, No. 9-11, Science Road, Science and Technology City, Guangzhou High-tech Industrial Development Zone, Guangdong

Patentee after: Guangdian Yuntong Group Co.,Ltd.

Country or region after: China

Address before: No. 9-11 Kelin Road, Science City, Guangzhou High-tech Industrial Development Zone, Guangdong Province

Patentee before: GRG BANKING EQUIPMENT Co.,Ltd.

Country or region before: China
|
| CP03 | Change of name, title or address |