
CN113076927B - Method and system for finger vein recognition based on multi-source domain migration - Google Patents

Method and system for finger vein recognition based on multi-source domain migration

Info

Publication number
CN113076927B
CN113076927B
Authority
CN
China
Prior art keywords
domain
features
network
finger vein
general
Prior art date
Legal status
Active
Application number
CN202110449007.1A
Other languages
Chinese (zh)
Other versions
CN113076927A (en)
Inventor
康文雄
钟飞
马钰儿
Current Assignee
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date
Filing date
Publication date
Application filed by South China University of Technology (SCUT)
Priority to CN202110449007.1A
Publication of CN113076927A
Application granted
Publication of CN113076927B
Legal status: Active
Anticipated expiration

Classifications

    • G06V 40/10: Recognition of biometric, human-related or animal-related patterns in image or video data; Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06F 18/214: Pattern recognition; Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06N 3/045: Neural networks; Architecture, e.g. interconnection topology; Combinations of networks
    • G06N 3/084: Neural networks; Learning methods; Backpropagation, e.g. using gradient descent
    • G06V 10/25: Image preprocessing; Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V 40/14: Vascular patterns

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)

Abstract

The finger vein recognition method based on multi-source domain migration provided by the present invention includes the following steps: first, the ROI corresponding to a finger vein image from the target domain is obtained; the ROI is then fed into the basic feature extraction network to extract basic features; the basic features are then fed into the CFTN and the DFTN respectively to obtain general features and domain-specific features; the general features and the domain-specific features are concatenated to obtain the final aggregated features; the aggregated features are searched against the existing finger vein feature database to obtain the matching scores between the input finger vein image and the finger veins already in the database; finally, the matching result of the input finger vein is output according to the matching scores. By migrating from multiple source domains to the target domain, the sample requirements of the target domain are reduced, thereby lowering the cost of collection and labeling; the general knowledge of multiple source-domain datasets is transferred to the target domain while the domain-specific features are retained, thereby maximizing the performance of transfer learning.

Description

Method and system for finger vein recognition based on multi-source domain migration

Technical Field

The invention belongs to the field of biometric recognition, and in particular relates to a finger vein recognition method and system based on multi-source domain migration.

Background Art

Finger vein recognition is a newly proposed biometric modality that is user-friendly, highly secure, and naturally resistant to forgery. More and more researchers and engineers have begun to work on finger vein recognition. In engineering practice, however, finger vein recognition is still affected by the following two problems:

Image differences: Because optical imaging sensors and near-infrared illumination differ, the finger vein images collected by different devices in different scenarios differ as well, as shown in Fig. 1, where the corresponding gray-level histograms and local binary pattern images are also listed to illustrate the differences. In addition, different near-infrared illumination intensities degrade finger vein images nonlinearly. Because this degradation is strongly nonlinear, it cannot be modeled directly, so the influence of illumination intensity cannot simply be removed. Furthermore, different image sensors have different spectral response curves, so the same finger produces different imaging results even under the same illumination. Although these two causes have been identified, no suitable method has yet been found to address them, owing to the precision of current electronic devices, the limitations of optical components, and manufacturing cost constraints. Therefore, in practical applications, unavoidable differences exist among the finger vein images collected by different devices, which prevents a deep learning model trained on one dataset from being applied well to a newly designed finger vein device.

Data scarcity: Compared with fingerprints, faces, and gestures, finger veins are a new biometric modality. The amount of training data in current finger vein databases is insufficient, and the intra-class variation is not large enough, both of which affect the performance of finger vein recognition algorithms. Most current pattern recognition and deep learning methods, however, require large datasets to obtain an effective and robust recognition model. To alleviate the data shortage, one solution is to pre-train on related datasets with sufficient data and then fine-tune or jointly train on the target dataset. But when fine-tuning from multiple source domains, deep learning models suffer from severe forgetting, and joint training discards the general features shared across the datasets.

Owing to its extremely high security and its inherent liveness detection, finger vein recognition has gained ever more influence in academia and industry. Most methods, especially deep learning based ones, tend to rely on a large amount of training data to obtain an effective and robust recognition model. In practical applications, however, collecting sufficient data for each newly designed finger vein recognition device is time-consuming, labor-intensive, and expensive. Therefore, how to obtain an optimal model under a few-shot experimental setting has become a recent research hotspot. Common approaches to this kind of few-shot learning problem are fine-tuning and joint training, but fine-tuning from multiple source domains suffers from severe forgetting, while joint training on multiple source domains discards the general features.

To address the multi-source domain problem in finger vein recognition, namely that deep learning models test poorly across multiple datasets, Kyoung Jun Noh et al. (Noh K J, Choi J, Hong J S, et al. Finger-Vein Recognition Using Heterogeneous Databases by Domain Adaption Based on a Cycle-Consistent Adversarial Network[J]. Sensors, 2021, 21(2):524) proposed using CycleGAN to improve recognition performance on heterogeneous datasets, that is, performing the transfer at the image level with CycleGAN so that the model can adapt to multiple different finger vein domains. The overall pipeline of the algorithm is: a. preprocess the collected images; b. feed the preprocessed ROI images into CycleGAN to generate domain-adapted images; c. feed the domain-adapted images into a deep neural network for feature extraction; d. use the extracted features for feature matching and retrieval to obtain the final recognition result. CycleGAN, however, is an unsupervised generative adversarial network whose main idea is to train two generator-discriminator pairs to translate images from one domain to another while requiring cycle consistency: after applying the generators in sequence, one should recover an image similar to the original under an L1 loss. A cyclic loss is therefore needed to ensure that the generator does not translate an image into a domain completely unrelated to the original. Such translation is usually unstable and typically requires the input image distribution at test time to match the training distribution exactly, a requirement that is hard to satisfy in actual use. Consequently, using CycleGAN for image-level domain adaptation degrades the quality of the original images.

To address the few-shot problem in finger vein recognition, Guoqing Wang et al. (Wang G, Sun C, Sowmya A. Learning a compact vein discrimination model with GANerated samples[J]. IEEE Transactions on Information Forensics and Security, 2019, 15:635-650) proposed a cascaded generative adversarial network (GAN) for data augmentation. A GAN, however, consists of a generator and a discriminator: the discriminator pushes D(G(Z)) toward 0 as much as possible, while the generator tries to generate high-quality samples from the same distribution so that D(G(Z)) approaches 1. When the generator and the discriminator are trained well enough, a Nash equilibrium is reached (the G(Z) generated by the generator has the same distribution as the training data, and for every discriminator input x, D(x)=0.5). GAN training is unstable for three reasons: a. It is difficult to make the pair of models (G and D simultaneously) converge. Most deep models are trained with optimization algorithms that seek low values of a loss function, and optimization is usually a reliable "downhill" process. A GAN instead requires the two players to reach an even match (equilibrium) in a game: an update that takes one model (say the generator) "downhill" may take the other model in the game (say the discriminator) "uphill". Sometimes, even if the two players eventually reach equilibrium, constantly canceling each other's progress does not bring both to a useful place at the same time. Simultaneous gradient descent on all models makes some models converge, but not all of them converge optimally. b. Mode collapse of the generator G: similar samples are generated for different inputs, in the worst case only a single sample, and the discriminator learns to reject these similar or even identical single samples. In practice, complete mode collapse is rare while partial mode collapse is common; partial mode collapse means the generator makes different images share the same color or texture theme, or different images contain different parts of the same dog. MinBatch GAN alleviates mode collapse but introduces problems such as counting, perspective, and global structure. c. Vanishing generator gradients: when the discriminator is very accurate, its loss quickly converges to 0 and can no longer provide a reliable path for updating the generator's gradients, so the generator's gradients vanish. Because the initial random noise distribution is far from the real data distribution, the two distributions barely overlap at the start of training, so the discriminator quickly learns to separate real data from generated fake data and reaches its optimum, after which the generator's gradients can no longer be updated or simply vanish. In short, GAN-based domain adaptation and data augmentation methods are relatively unstable.

Summary of the Invention

To solve the few-shot recognition and multi-source domain migration problems in the prior art, the present invention provides a finger vein recognition method based on multi-source domain migration. By migrating from multiple source domains to the target domain, the sample requirements of the target domain are reduced, thereby lowering the cost of collection and labeling; the general knowledge of multiple source-domain datasets is transferred to the target domain while the domain-specific features are retained, thereby maximizing the performance of transfer learning.

To achieve the object of the invention, the finger vein recognition method based on multi-source domain migration provided by the present invention includes the following steps:

constructing and training a decoupled transfer learning network, where the decoupled transfer learning network includes an embedding network, a general feature transformation network, and a target-domain domain-specific feature transformation network; the embedding network is used to convert finger vein images into basic features, the general feature transformation network is used to decouple the basic features into general features, and the target-domain domain-specific feature transformation network is used to decouple the basic features into domain-specific features; when the decoupled transfer learning network is trained, the target domain and multiple source domains are used, and the domain-specific features and general knowledge of the multiple source domains are migrated to the target domain to obtain the target-domain domain-specific feature transformation network;

inputting the region of interest of the finger vein image to be recognized into the embedding network to obtain the basic features;

inputting the basic features into the general feature transformation network and the target-domain domain-specific feature transformation network, respectively, to obtain the general features and the domain-specific features;

concatenating the obtained general features and domain-specific features to obtain the aggregated features;

calculating the cosine distances between the aggregated features and the features of all registered samples in the registered-sample database to obtain the matching scores between the input finger vein image and the finger veins already in the database;

outputting the matching result of the input finger vein according to the matching scores.
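
As a concrete illustration of these steps, the following is a minimal sketch of the inference pipeline in PyTorch. The module names `embed`, `cftn`, and `dftn_t` (embedding network, general feature transformation network, and target-domain DFTN) and the `gallery` tensor of registered features are placeholders of ours, not names from the patent.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def match(roi, embed, cftn, dftn_t, gallery):
    """roi: (1, C, H, W) ROI image; gallery: (K, D) registered sample features."""
    b = embed(roi)                       # basic features
    g = cftn(b)                          # general features
    h = dftn_t(b)                        # domain-specific features
    feat = torch.cat([g, h], dim=1)      # aggregated feature, shape (1, D)
    # cosine similarity against every registered sample in the database
    scores = F.cosine_similarity(feat, gallery, dim=1)   # (K,)
    ranking = torch.argsort(scores, descending=True)     # best match first
    return scores, ranking
```

The returned ranking is the high-to-low similarity ordering to which the decision step described further below applies a decision algorithm.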

Further, the steps of the method for training the decoupled transfer learning network include:

first, for the multiple source domains, randomly sampling a mini-batch from each of any two source domains, performing basic feature extraction, general feature extraction, and domain-specific feature extraction on the mini-batches of the two source domains, then computing the loss between the two mini-batches with the overall loss function and updating the DTL network with the RMSprop algorithm;

randomly sampling a mini-batch from any one source domain and from the target domain, performing basic feature extraction, general feature extraction, and domain-specific feature extraction on the mini-batches of that source domain and the target domain, then computing the loss between the two mini-batches with the overall loss function and updating the DTL network with the RMSprop algorithm;

randomly sampling a mini-batch from the target domain, performing basic feature extraction, general feature extraction, and domain-specific feature extraction on the mini-batch, then computing the loss with the target-domain loss function and updating the DTL network with the RMSprop algorithm. A sketch of this three-part schedule is given below.
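
The following sketch shows one possible shape of this training schedule. It assumes a `dtl` module like the one sketched later in this document, data loaders `src_loaders`/`tgt_loader`, and loss functions `overall_loss`/`target_loss` implementing the formulas below; all of these names are placeholders.

```python
import itertools
import torch

opt = torch.optim.RMSprop(dtl.parameters(), lr=1e-4)  # RMSprop, as specified

def step(loss):
    opt.zero_grad()
    loss.backward()
    opt.step()

for epoch in range(num_epochs):
    # (1) a mini-batch from each of any two source domains, overall loss
    for m, n in itertools.combinations(range(len(src_loaders)), 2):
        xm, ym = next(iter(src_loaders[m]))
        xn, yn = next(iter(src_loaders[n]))
        step(overall_loss(dtl, (xm, ym, m), (xn, yn, n)))
    # (2) a mini-batch from each source domain paired with the target domain
    for m in range(len(src_loaders)):
        xm, ym = next(iter(src_loaders[m]))
        xt, yt = next(iter(tgt_loader))
        step(overall_loss(dtl, (xm, ym, m), (xt, yt, "target")))
    # (3) a target-domain mini-batch alone, with the target-domain loss
    xt, yt = next(iter(tgt_loader))
    step(target_loss(dtl, xt, yt))
```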

Further, the target domain loss function is computed as follows:

$$L_{t} = L_{ce}^{DFTN} + \beta L_{center}$$

where $L_{ce}^{DFTN}$ denotes the cross-entropy loss of the DFTN, $\beta$ is the coefficient of the center loss, and $L_{center}$ denotes the center loss.

Further, the overall loss function is computed as follows:

$$L = L_{ce}^{CFTN} + L_{ce}^{DFTN} + \alpha L_{mmmd} + \beta L_{center}$$

where

$$L_{ce}^{CFTN} = -\sum_{a=1}^{N} y_a \log l_a(g)$$

$$L_{mmmd} = \sum_{m=1}^{Z} \mathrm{mmd}(g_m, g_t) + \sum_{m=1}^{Z} \sum_{n=m+1}^{Z} \mathrm{mmd}(g_m, g_n)$$

In the formulas, $L_{ce}^{CFTN}$ denotes the cross-entropy loss of the general feature transformation network CFTN; $L_{ce}^{DFTN}$ denotes the cross-entropy loss of the DFTN; $L_{mmmd}$ denotes the multi-source-domain maximum mean discrepancy loss; $L_{center}$ denotes the center loss; $\alpha$ and $\beta$ are the coefficients of the MMMD loss and the center loss, respectively; N is the number of classes in the whole mixed finger vein dataset; y denotes the one-hot encoding of the sample's class label, and $y_a$ the indicator value of the a-th class of that one-hot encoding; $l_a(g)$ is the classification score of the general feature g for the a-th class; Z is the number of source domains; $g_m$ denotes the general features of the m-th source domain; $g_t$ denotes the general features of the target domain; and $\mathrm{mmd}(g_m, g_n)$ denotes the MMD distance between two mini-batches of general features from the m-th and the n-th source domains.

Further, the classification score $l_a(g)$ in the cross-entropy loss of the general feature transformation network CFTN is computed as follows:

$$l_a(g) = \frac{e^{s\cos(\theta_a + M)}}{e^{s\cos(\theta_a + M)} + \sum_{b=1, b\neq a}^{N} e^{s\cos\theta_b}}$$

where s is a hyperparameter denoting the scaling factor; $\theta_a$ is the angle between the feature g and the corresponding hyperplane $W_a$; $\theta_b$ is the angle between the feature g and the corresponding hyperplane $W_b$; and M is the additive angular margin penalty term.

Further, the general feature transformation network includes a network structure and a pooling layer; the network structure is used to transform the general features, and the pooling layer pools the mean of each channel into a single value along the channel dimension, so that the means of the channels finally form the feature vector.

Further, the target-domain domain-specific feature transformation network includes a network structure and a pooling layer; the network structure is used to transform the features, and the pooling layer pools the mean of each channel into a single value along the channel dimension, so that the means of the channels finally form the feature vector.

Further, inputting the region of interest of the finger vein image to be recognized into the embedding network includes: subjecting the finger vein image to be recognized to grayscale conversion, edge-extraction-operator processing, finger edge acquisition, and ROI cropping to obtain the region of interest (ROI) corresponding to the image.
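
The following is an illustrative sketch of such a preprocessing chain in OpenCV. The Sobel operator, the row-energy edge localization, and the output size are assumptions of ours; the patent does not fix a particular edge operator or crop rule.

```python
import cv2
import numpy as np

def extract_roi(image_bgr, out_size=(224, 224)):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)   # grayscale conversion
    # vertical gradients respond strongly to the upper/lower finger edges
    edges = np.abs(cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3))
    profile = edges.sum(axis=1)                # edge energy per image row
    half = len(profile) // 2
    top = int(np.argmax(profile[:half]))       # strongest edge in the upper half
    bottom = half + int(np.argmax(profile[half:]))
    roi = gray[top:bottom, :]                  # crop between the finger edges
    return cv2.resize(roi, out_size)           # resize to the network input size
```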

Further, outputting the matching result of the input finger vein according to the matching scores includes: sorting the degrees of similarity between the input finger vein image and all registered samples from high to low according to the matching scores, and applying a decision algorithm to this ranking to obtain the final decision matching result.

The present invention also provides a finger vein recognition system based on multi-source domain migration for implementing the foregoing method, including: a network construction module for constructing and training a decoupled transfer learning network, where the decoupled transfer learning network includes an embedding network, a general feature transformation network, and a target-domain domain-specific feature transformation network; the embedding network is used to convert finger vein images into basic features, the general feature transformation network is used to decouple the basic features into general features, and the target-domain domain-specific feature transformation network is used to decouple the basic features into domain-specific features; when the decoupled transfer learning network is trained, the target domain and multiple source domains are used, and the domain-specific features and general knowledge of the multiple source domains are migrated to the target domain to obtain the target-domain domain-specific feature transformation network;

a basic feature extraction module for inputting the region of interest of the finger vein image to be recognized into the embedding network to obtain the basic features;

a general feature extraction module for inputting the basic features into the general feature transformation network to obtain the general features;

a domain-specific feature extraction module for inputting the basic features into the target-domain domain-specific feature transformation network to obtain the domain-specific features;

an aggregation module for concatenating the obtained general features and domain-specific features to obtain the aggregated features;

a calculation module for calculating the cosine distances between the aggregated features and the features of all registered samples in the registered-sample database to obtain the matching scores between the input finger vein image and the finger veins already in the database;

a matching module for outputting the matching result of the input finger vein according to the matching scores.

The present invention proposes a brand-new decoupled transfer learning method that learns the general knowledge shared among different finger vein databases and then transfers this extracted general knowledge to the target database. Experiments show that the proposed decoupled transfer learning method can extract well the general knowledge contained in multiple datasets collected by different devices, enabling finger vein recognition with minimal training samples.

1) Having analyzed the differences among finger vein images collected by different devices, the present invention proposes a method that decouples the basic features of finger vein images into general features and domain-specific features. The general features contain the general information used for recognition, while the domain-specific features contain features affected by acquisition factors and external factors.

2) A framework for multi-source-domain decoupled transfer recognition is proposed, enabling transfer learning from multiple source domains to a single target domain. A finger vein recognition model can therefore be trained more fully and achieves superior performance under few-shot conditions.

3) The best few-shot recognition results were obtained experimentally, showing that the present invention can improve the performance of finger vein recognition when training data are insufficient. Particularly in practical engineering applications of finger vein recognition, it avoids the large amount of manpower and material resources otherwise spent on collecting and labeling data.

Brief Description of the Drawings

Fig. 1 shows finger vein images from different datasets in the prior art, together with their GLH and LBP representations.

Fig. 2 is a schematic diagram of the decoupled transfer learning network in the finger vein recognition method based on multi-source domain migration provided by an embodiment of the present invention.

Fig. 3 is a schematic diagram of how the feature space is partitioned for different numbers of training-set classes in an embodiment of the present invention.

Fig. 4 is a schematic diagram showing that the features of finger vein images from different domains fall on different manifolds in an embodiment of the present invention.

Fig. 5 is a schematic diagram of the CFTN structure in an embodiment of the present invention.

Fig. 6 is a schematic diagram of two different manifolds after normalization and distribution alignment in an embodiment of the present invention.

Fig. 7 is a schematic diagram of the DFTN structure in an embodiment of the present invention.

Fig. 8 is a flowchart of finger vein recognition in an embodiment of the present invention.

Fig. 9 is a schematic structural diagram of the system provided by an embodiment of the present invention.

Detailed Description of the Embodiments

To make the objects, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by persons of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.

Referring to Fig. 8, the finger vein recognition method based on multi-source domain migration provided by the present invention includes the following steps:

Step 1: Construct and train a decoupled transfer learning network. The decoupled transfer learning network includes an embedding network, a general feature transformation network, and a target-domain domain-specific feature transformation network; the embedding network converts finger vein images into basic features, the general feature transformation network decouples the basic features into general features, and the target-domain domain-specific feature transformation network decouples the basic features into domain-specific features.

Step 1.1: The decoupled transfer learning network is constructed as follows:

Owing to their powerful feature extraction capability, deep learning methods have been widely used in biometric recognition. Many outstanding network structures, such as AlexNet, ResNet, MobileNet, and SENet, have been proposed and have achieved remarkable results in different fields. Generally, there are three ways to scale a deep neural network to adapt it to different tasks: width, depth, and resolution, but modifying a single scale only changes the network capacity along a single factor. Moreover, the networks proposed so far mainly focus on the structure of a single layer or a single block, and their optimal width, depth, and resolution still need to be determined through hyperparameter tuning. In practical finger vein recognition, how to select a suitable network size to maximize performance under limited computing resources remains an open problem; the trade-off between computation and performance is especially important when deep learning models are deployed on embedded platforms. Neural architecture search (NAS) can therefore be used to search network structures under different parameter budgets. The embedding network can adopt deep neural networks such as EfficientNet or NasNet, which offer strong feature extraction capability under limited computing resources.

In one embodiment of the present invention, the embedding network in the decoupled transfer learning network adopts the EfficientNet network. EfficientNet is an existing network, introduced for example in "Tan M, Le Q. Efficientnet: Rethinking model scaling for convolutional neural networks[C]//International Conference on Machine Learning. PMLR, 2019: 6105-6114."; the network used in this embodiment has the same compound scaling parameters and basic structure. In EfficientNet-B1, the width, depth, resolution, and dropout probability are 1.0, 1.0, 224, and 0.2, respectively. The main building block of the network is the mobile inverted bottleneck (MBConv) combined with squeeze-and-excitation (SE) optimization.
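
As a hedged illustration, the embedding network of such an embodiment could be instantiated with torchvision's EfficientNet-B1 (assuming a recent torchvision); using torchvision and keeping only the convolutional trunk are assumptions of ours, as the patent only names the architecture.

```python
import torch
import torchvision

backbone = torchvision.models.efficientnet_b1(weights=None)
embed = backbone.features             # convolutional trunk as embedding network F
x = torch.randn(2, 3, 224, 224)       # dummy mini-batch at resolution 224
b = embed(x)                          # basic feature map b = F(x)
print(b.shape)                        # torch.Size([2, 1280, 7, 7])
```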

The basic features b extracted by the embedding network are relatively low-level and contain all the general and domain-specific features. To decouple these two kinds of features, the present invention places a two-stream structure after the embedding network to decouple the basic features into domain-independent and domain-related knowledge. One branch is the general feature transformation network CFTN, which receives the feature map b output by the embedding network and outputs the general features g = G(b). The other branch is the target-domain domain-specific feature transformation network DFTN, which is used to extract the domain-specific features.

In one embodiment of the present invention, referring to Fig. 5, the general feature transformation network CFTN consists of a two-dimensional convolution and a global average pooling (GAP) layer. The single two-dimensional convolution transforms the general features so that the transformed features can adapt to the different source domains. To finally obtain the feature vector, global average pooling pools the mean of each channel into a single value along the channel dimension, and the means of the channels form the feature vector. Of course, the CFTN is not limited to this structure; in other embodiments, its network structure can be a combination of ordinary convolutional networks, SE modules, residual network modules, and the like, and the pooling layer can be global max pooling or SPP pooling.

The target-domain domain-specific feature transformation network receives the feature map b of the finger vein image to be recognized output by the embedding network and outputs the domain-specific features. In one embodiment of the present invention, referring to Fig. 7, the target-domain domain-specific feature transformation network consists of a two-dimensional convolution and a global average pooling (GAP) layer; the single two-dimensional convolution transforms the features so that the transformed features can be tailored to the different domains. To finally obtain the feature vector, global average pooling pools the mean of each channel into a single value along the channel dimension, and the means of the channels form the feature vector. Of course, the target-domain domain-specific feature transformation network is not limited to the foregoing structure; in other embodiments, the network structure of the DFTN can be a combination of ordinary convolutional networks, SE modules, residual network modules, and the like, and the pooling layer can be global max pooling or SPP pooling. In the example of the present invention, the network structure adopts a simple two-dimensional convolution and the pooling layer adopts global average pooling.

Step 1.2: Train the decoupled transfer learning network. During training, the decoupled transfer learning network is provided with one embedding network, one general feature transformation network CFTN, and multiple domain-specific feature transformation networks DFTN, where the multiple DFTNs include one target-domain DFTN and multiple source-domain DFTNs. The specific training process is as follows:

In one embodiment of the present invention, in the feature space, the proposed decoupled transfer learning method reduces the domain differences among different datasets and decouples the basic features into general features and domain-specific features.

The overall framework of the decoupled transfer learning network DTL during training is shown in Fig. 2. The embedding network F is used to extract the basic features b: b = F(x), where x is a finger vein image of domain D. The finger vein images x from the different domains (the source domains and the target domain) are each converted into the corresponding basic features b by the embedding network F, and these embedding networks share parameters. The basic features b are then decoupled by two feature transformation networks. The first branch is the domain-specific feature transformation network, which is tied to a specific domain; on this branch, the basic features b are transformed into the domain-specific features h. Each domain has its own DFTN, i.e., the network parameters differ per domain, and the domain-specific features produced by this branch are classified only within their corresponding domain. The other branch is the general feature transformation network, whose parameters are shared across the multiple domains; the general features g obtained on this branch are classified over the whole mixed dataset. In the test phase, a finger vein image is converted into domain-specific features and general features, and the two kinds of features are finally concatenated into the final hybrid features used for finger vein authentication.
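
The sketch below renders this two-stream structure in PyTorch. The channel sizes are placeholders; the document specifies "one 2-D convolution followed by global average pooling" for the CFTN and DFTN but not the exact widths.

```python
import torch
import torch.nn as nn

class FeatureTransform(nn.Module):
    """One 2-D convolution + global average pooling (the CFTN/DFTN shape)."""
    def __init__(self, in_ch=1280, out_ch=512):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=1)
        self.gap = nn.AdaptiveAvgPool2d(1)    # mean of each channel -> one value

    def forward(self, b):
        return self.gap(self.conv(b)).flatten(1)   # (B, out_ch) feature vector

class DTL(nn.Module):
    def __init__(self, embed, num_domains):
        super().__init__()
        self.embed = embed                     # shared embedding network F
        self.cftn = FeatureTransform()         # shared across all domains
        self.dftns = nn.ModuleList(            # one DFTN per domain
            [FeatureTransform() for _ in range(num_domains)])

    def forward(self, x, domain):
        b = self.embed(x)                      # basic features b = F(x)
        g = self.cftn(b)                       # general features g = G(b)
        h = self.dftns[domain](b)              # domain-specific features h
        return g, h
```

At test time, `torch.cat([g, h], dim=1)` gives the hybrid feature used for authentication, matching the inference sketch earlier in this document.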

The general features extracted by the general feature transformation network CFTN are domain-independent, so the present invention trains the CFTN on a mixed dataset that combines the target domain and all the source domains, where $D_1, \ldots, D_Z$ denote the datasets of the different source domains and $D_t$ denotes the dataset of the target domain.

In one embodiment of the present invention, when training on the mixed dataset, the cross-entropy loss of the general feature g can be expressed as:

$$L_{ce}^{CFTN} = -\sum_{a=1}^{N} y_a \log l_a(g)$$

where N is the number of classes in the whole mixed finger vein dataset, y denotes the one-hot encoding of the sample's class label, $y_a$ denotes the 0/1 indicator value of the a-th class of that one-hot encoding, and $l_a(g)$ is the classification score of the general feature g for the a-th class.

In one embodiment of the present invention, in classification or proxy-based metric learning, a common way to compute the classification score $l_a(g)$ is the softmax function:

$$l_a(g) = \frac{e^{W_a^{\mathrm{T}} g + B_a}}{\sum_{b=1}^{N} e^{W_b^{\mathrm{T}} g + B_b}}$$

where d denotes the length of the feature vector and N the number of classes of the mixed finger vein dataset; $\mathbb{R}^d$ denotes the d-dimensional real space; $W_a \in \mathbb{R}^d$ denotes the a-th column of the weight matrix $W \in \mathbb{R}^{d \times N}$ and $W_b \in \mathbb{R}^d$ its b-th column; $B_a$ and $B_b$ denote bias terms; $W_a$ and $W_b$ act as the class hyperplanes, and b indexes the b-th class.

To push the entire feature space onto a hypersphere, in one embodiment of the present invention the general feature g is normalized: g = g/|g|. Meanwhile, drawing on the excellent performance of the additive angular margin loss (ArcFace) in face recognition, this embodiment sets the bias term $B_b$ to 0 and normalizes each column of the weight matrix W. The inner product of $W_b$ and g can then be expressed as the cosine of an angle, $W_b^{\mathrm{T}} g = \cos\theta_b$, where $\theta_b$ denotes the angle between the feature g and $W_b$. For further optimization, the additive angular margin penalty M is added to the angle value θ. The final classification score $l_a(g)$ can therefore be expressed as:

$$l_a(g) = \frac{e^{s\cos(\theta_a + M)}}{e^{s\cos(\theta_a + M)} + \sum_{b=1, b\neq a}^{N} e^{s\cos\theta_b}}$$

where s is a hyperparameter denoting the scaling factor, $\theta_a$ is the angle between the feature g and the corresponding hyperplane $W_a$, and $\theta_b$ is the angle between the feature g and the corresponding hyperplane $W_b$. In the test phase, the present invention computes the cosine distance between the general features $g_i$ and $g_j$ of a test sample pair $x_i$ and $x_j$, where $x_i$ and $x_j$ are finger vein images from different domains and $g_i$ and $g_j$ are the general features obtained by feeding $x_i$ and $x_j$ through the network: distance $= |g_i||g_j|\cos(\theta)$. Finally, using normalization and the additive angular margin keeps the loss penalty applied in the training phase consistent with the metric used in the test phase.
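
A compact sketch of this additive angular margin score in PyTorch follows; the margin and scale defaults are illustrative, and combining the score directly with cross-entropy is one common way to realize the loss above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ArcFaceHead(nn.Module):
    """Additive angular margin classification score l_a(g)."""
    def __init__(self, feat_dim, num_classes, s=30.0, m=0.5):
        super().__init__()
        self.W = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.s, self.m = s, m                       # scale s and margin M

    def forward(self, g, labels):
        g = F.normalize(g, dim=1)                   # g <- g / |g|
        W = F.normalize(self.W, dim=1)              # unit-norm hyperplanes, bias = 0
        cos = g @ W.t()                             # cos(theta_b) for every class
        theta = torch.acos(cos.clamp(-1 + 1e-7, 1 - 1e-7))
        one_hot = F.one_hot(labels, W.size(0)).bool()
        # add the margin M only to the ground-truth angle theta_a
        cos_m = torch.where(one_hot, torch.cos(theta + self.m), cos)
        return F.cross_entropy(self.s * cos_m, labels)   # -sum_a y_a log l_a(g)
```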

The general features of the different domains obtained through the general feature transformation network are then aligned in distribution.

The dimensionality of the general features is fixed, but in general the whole feature space is not covered by all the existing feature points; an entire dataset or domain actually falls on a subspace, or low-dimensional manifold, of the feature space. If the decoupled transfer learning network were trained directly on the mixed dataset, better performance might be obtained, yet the features extracted from different domains could still fall on different manifolds; as shown in Fig. 6, manifold distribution A and manifold distribution B are two different domains.

Therefore, in one embodiment of the present invention, a normalization method is first used to compress the whole feature space onto a subspace (i.e., a hypersphere). On this hypersphere, compared with the original manifolds in the whole feature space, the distribution differences among domains narrow as the feature space shrinks. The manifold of the target domain is thus more likely to overlap with other domains, and more samples from the source domains are more likely to be projected onto the target manifold, which is equivalent to directly adding training samples in the feature space and migrating the source-domain samples to the target domain in feature space. Compared with the whole original feature space in panel (a) of Fig. 6, the distributions of the two manifolds in panel (b) overlap more easily on the normalized hypersphere, on whose surface transfer between domains is easier and metric learning is more effective.

Normalization shrinks the whole feature space to a hypersphere, allowing transfer to happen more easily and effectively. Different domains, however, may lie in different regions of the same hypersphere. Therefore, the present invention proposes the multi-source-domain maximum mean discrepancy (MMMD) to align the distributions of multiple datasets, as shown in the following equation. MMMD is defined as:

$$L_{mmmd} = \sum_{m=1}^{Z} \mathrm{mmd}(g_m, g_t) + \sum_{m=1}^{Z} \sum_{n=m+1}^{Z} \mathrm{mmd}(g_m, g_n)$$

where Z denotes the number of source domains; $g_m$ denotes the general features of the m-th source domain, $g_m = G(F(x_m))$; $g_t$ denotes the general features of the target domain, $g_t = G(F(x_t))$; and $\mathrm{mmd}(g_m, g_n)$ denotes the MMD distance between two mini-batches of general features from the m-th and the n-th source domains. MMD is a kernel trick for measuring the difference between two distributions based on a reproducing kernel Hilbert space. Suppose $X = x_1, \ldots, x_{n_1}$ and $Y = y_1, \ldots, y_{n_2}$ are two arbitrary sets, where $x_1, \ldots, x_{n_1}$ denote the samples in X and $y_1, \ldots, y_{n_2}$ the samples in Y. The two sets have the same dimensionality, and their distributions are $\mathcal{P}$ and $\mathcal{Q}$, respectively. The distance between these two distributions is computed as:

$$\mathrm{mmd}^2(X, Y) = \left\| \mathbb{E}_{x \sim \mathcal{P}}[\phi(x)] - \mathbb{E}_{y \sim \mathcal{Q}}[\phi(y)] \right\|_{\mathcal{H}}^2$$

where $\mathbb{E}_{x \sim \mathcal{P}}[\cdot]$ denotes the expectation over the distribution $\mathcal{P}$ and $\mathbb{E}_{y \sim \mathcal{Q}}[\cdot]$ the expectation over $\mathcal{Q}$; $\mathcal{H}$ denotes a generalized RKHS (Reproducing Kernel Hilbert Space); and $\phi(\cdot)$ denotes the mapping function that maps the feature map from the original sample space into the RKHS. Under the premise that $\phi$ maps into $\mathcal{H}$, a kernel function k(x, y) is chosen such that k and $\phi(\cdot)$ satisfy $k(x, y) = \langle \phi(x), \phi(y) \rangle$, where $\langle \cdot, \cdot \rangle$ denotes the inner product between vectors. In the decoupled transfer learning method, the kernel k adopts a Gaussian kernel, $k(x, y) = \exp\left(-\|x - y\|^2 / (2\sigma^2)\right)$. The distance between the two distributions $\mathcal{P}$ and $\mathcal{Q}$ is always non-negative, and it tends to 0 when the two distributions are identical and $n_1, n_2 \to \infty$.
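
A sketch of the (biased) empirical MMD with a Gaussian kernel, together with the MMMD aggregation over the source and target domains, could look as follows; the bandwidth sigma and the biased estimator are assumptions of ours.

```python
import torch

def gaussian_kernel(x, y, sigma=1.0):
    d2 = torch.cdist(x, y).pow(2)             # pairwise squared distances
    return torch.exp(-d2 / (2 * sigma ** 2))  # k(x, y) = exp(-|x-y|^2 / 2 sigma^2)

def mmd(x, y, sigma=1.0):
    """Empirical MMD^2 between feature batches x: (n1, D) and y: (n2, D)."""
    return (gaussian_kernel(x, x, sigma).mean()
            + gaussian_kernel(y, y, sigma).mean()
            - 2 * gaussian_kernel(x, y, sigma).mean())

def mmmd(source_feats, target_feat):
    """source_feats: list of (n, D) general features, one per source domain."""
    loss = sum(mmd(g_m, target_feat) for g_m in source_feats)   # source <-> target
    loss = loss + sum(mmd(source_feats[m], source_feats[n])     # source <-> source
                      for m in range(len(source_feats))
                      for n in range(m + 1, len(source_feats)))
    return loss
```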

MMD is used to measure the distribution difference between the source and target domains. While minimizing the MMD loss, the deep neural network is able to extract domain-independent features. Therefore, one embodiment of the present invention uses MMD to align the marginal distributions between the multiple source domains and the target domain. As explained above, by minimizing the distribution difference between the two manifolds A and B, the deep neural network can extract features that are almost aligned on the hypersphere. In this way, finger vein images from different domains have almost exactly the same feature distribution after passing through the deep neural network.

The features of the different domains extracted earlier are correspondingly fed into the domain-specific feature transformation networks DFTN to extract the different domain-specific features.

From the foregoing analysis, the separable features of different domains are very similar, but some domain-related features still exist among them, and these vary with the imaging characteristics of the finger vein imaging device. Therefore, in the proposed method, the other branch of the two-stream structure is the domain-specific feature transformation network DFTN, which is used to extract domain-specific features such as the grayscale information of the background, the brightness distribution, the finger surface texture, and the image gradients.

In one embodiment of the present invention, referring to FIG. 7, each domain-specific feature transformation network DFTN consists of one two-dimensional convolution and one global average pooling (GAP) layer. The DFTN receives the feature map b of each domain output by the embedding network and correspondingly outputs the domain-specific features h of each domain.
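
A rough sketch of such a branch is given below (PyTorch; the channel sizes are illustrative assumptions, since the embodiment fixes only the conv + GAP structure):

    import torch.nn as nn

    class DFTN(nn.Module):
        # Domain-specific branch: one 2-D convolution followed by global average pooling.
        def __init__(self, in_channels=256, out_channels=128):
            super().__init__()
            self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1)
            self.gap = nn.AdaptiveAvgPool2d(1)   # global average pooling (GAP)

        def forward(self, b):                    # b: feature map from the embedding network
            h = self.gap(self.conv(b))           # (N, C, 1, 1)
            return h.flatten(1)                  # domain-specific feature vector h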

In the workflow of the general feature transformation network CFTN, by normalizing the feature space onto a sphere, the features of the multiple source domains and the target domain are constrained to the same manifold as far as possible. However, directly forcing the feature distributions of the multiple source domains and the target domain to become exactly the same may cause the deep neural network to focus only on features that are relatively common across domains. In that case, some domain-specific features may be discarded during the feature extraction stage. From this point of view, in the decoupled transfer learning proposed by the present invention, since the CFTN attends only to general features, the DFTN plays the important role of preserving the domain-specific features.

This two-stream mechanism relieves the pressure on a deep neural network of having to learn both general knowledge and domain-specific knowledge in a single specific domain.

In the general feature transformation network CFTN and the domain-specific feature transformation network DFTN, the only difference in the cross-entropy loss is the number of sample classes. In the CFTN, the features are classified over the mixed dataset, so N is the number of classes in the mixed dataset. In the DFTN, N is the number of classes of a single specific domain. Therefore, to avoid repetition, the cross-entropy loss of the DFTN, L_ce^D, is not listed again; an illustrative sketch of this difference follows.

To obtain optimal parameters, the present invention uses three losses to train the deep neural network: the MMMD loss, the ArcFace loss and the center loss. The output features of the general feature transformation network CFTN are used to compute the MMMD loss, which reduces the domain discrepancy and extracts features common to multiple domains. The outputs of both the CFTN and the domain-specific feature transformation network DFTN are used to compute the ArcFace loss and the center loss. The normalized cross-entropy loss with an additive angular margin performs metric learning well on a single domain. The normalization also removes the influence of absolute magnitudes, and the additive angular margin provides an extra margin for the classification boundary. However, the intra-class distance is still not strongly constrained. Therefore, the present invention uses an additional center loss to extract more effective and robust intra-class features in Euclidean space:

L_C = (1/2) · Σ_{i=1}^{F} ||g_i − c_{y_i}||²_2

In this formula, L_C is the center loss, c_{y_i} denotes the feature center of the y_i-th class, g_i is the feature of the i-th sample and belongs to class y_i, and F denotes the mini-batch size. As with the cross-entropy loss, the number of classes of the center loss of the general feature transformation network CFTN equals the number of classes of the mixed dataset, while the number of classes of the center loss of the DFTN equals the number of classes of the single specific domain.
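
A compact sketch of this loss (PyTorch; learnable centers are a common implementation choice and are an assumption here):

    import torch
    import torch.nn as nn

    class CenterLoss(nn.Module):
        # L_C = 0.5 * sum_i ||g_i - c_{y_i}||^2 over a mini-batch of size F.
        def __init__(self, num_classes, feat_dim):
            super().__init__()
            self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))

        def forward(self, g, y):
            c = self.centers[y]                  # center of each sample's class
            return 0.5 * ((g - c) ** 2).sum()    # summed squared Euclidean distances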

As emphasized in the introduction of the MMMD loss, data in the target domain is insufficient, so it is relatively more prone to statistical error. The training strategy is therefore designed to learn knowledge from large source domains and to update the network at a small scale. In fact, how to optimize the feature extractor more effectively with only a few samples is the key to few-shot finger vein learning. To obtain more stable knowledge, the present invention first trains the network with sample pairs from the source domains, using the following loss function to make the general network adapt better to the target domain:

L = L_ce^C + L_ce^D + α·L_mmmd + β·L_C    (8)

where α and β are the coefficients of the MMMD loss and the center loss respectively. The final loss on the target domain alone can be written as:

L = L_ce^D + β·L_C    (9)

where L denotes the loss finally used for back-propagation; L_ce^C denotes the cross-entropy loss of the general feature transformation network CFTN; L_ce^D denotes the cross-entropy loss of the DFTN; L_mmmd denotes the multi-source-domain maximum mean discrepancy loss; and L_C denotes the center loss.
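
Assembled in code, formulas (8) and (9) are simple weighted sums of the individual terms (a sketch; each argument is assumed to be an already-computed scalar loss):

    def overall_loss(ce_cftn, ce_dftn, mmmd, center, alpha, beta):
        # Formula (8): L = L_ce^C + L_ce^D + alpha * L_mmmd + beta * L_C
        return ce_cftn + ce_dftn + alpha * mmmd + beta * center

    def target_domain_loss(ce_dftn, center, beta):
        # Formula (9): L = L_ce^D + beta * L_C
        return ce_dftn + beta * center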

That is, the training process is as follows:

First, for the multiple source domains, a mini-batch is randomly sampled from each of any two source domains, and basic feature extraction, general feature extraction and domain-specific feature extraction are performed on the mini-batches of the two source domains; the loss between the two mini-batches is then computed with formula (8) and the DTL network is updated with the back-propagation algorithm. Second, a mini-batch is randomly sampled from each of any one source domain and the target domain, and basic feature extraction, general feature extraction and domain-specific feature extraction are performed on the mini-batches of that source domain and the target domain; the loss between the two mini-batches is then computed with formula (8) and the DTL network is updated with the back-propagation algorithm. Finally, a mini-batch is randomly sampled from the target domain, and basic feature extraction, general feature extraction and domain-specific feature extraction are performed on it; the loss is then computed with formula (9) and the DTL network is updated with the back-propagation algorithm. A condensed sketch of this schedule is given below.
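
The three stages above could be organized per epoch roughly as follows (a sketch; sample(), compute_terms() and compute_target_terms() are hypothetical helpers standing in for the sampling and the feature/loss computations described above, and overall_loss/target_domain_loss are the functions sketched after formulas (8) and (9)):

    import itertools

    def train_epoch(model, sources, target, optimizer, alpha, beta):
        def step(loss):
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        # Stage 1: mini-batches from every pair of source domains, loss of formula (8).
        for dom_a, dom_b in itertools.combinations(sources, 2):
            terms = compute_terms(model, dom_a.sample(), dom_b.sample())   # hypothetical
            step(overall_loss(*terms, alpha, beta))
        # Stage 2: a source-domain mini-batch paired with a target-domain one, formula (8).
        for dom in sources:
            terms = compute_terms(model, dom.sample(), target.sample())    # hypothetical
            step(overall_loss(*terms, alpha, beta))
        # Stage 3: a target-domain mini-batch alone, loss of formula (9).
        ce_dftn, center = compute_target_terms(model, target.sample())     # hypothetical
        step(target_domain_loss(ce_dftn, center, beta))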

In one embodiment of the present invention, the back-propagation algorithm uses the RMSprop optimizer. Of course, in other embodiments, other algorithms may be used, such as SGD (stochastic gradient descent), Adam, or Adagrad.

Step 2: Collect finger vein images with the finger vein acquisition device.

Step 3: Input the finger vein image to be recognized into the embedding network to obtain the basic features.

First, the finger vein image to be recognized undergoes grayscale conversion, edge-extraction-operator processing, finger edge acquisition and ROI cropping to obtain the region of interest (ROI) corresponding to the image; the ROI is then input into the embedding network to extract the basic features of the finger vein image to be recognized.
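
A simplified sketch of such a preprocessing pipeline (OpenCV; the Sobel operator and the strongest-edge crop rule are assumptions for illustration, since the embodiment does not fix a particular operator here):

    import cv2
    import numpy as np

    def extract_roi(img_bgr):
        gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)             # grayscale conversion
        edges = np.abs(cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3))   # edge extraction
        rows = edges.sum(axis=1)                                     # edge energy per row
        h = len(rows)
        top = int(rows[: h // 2].argmax())                           # upper finger edge
        bottom = h // 2 + int(rows[h // 2 :].argmax())               # lower finger edge
        return gray[top:bottom, :]                                   # cropped ROI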

Step 4: Input the basic features extracted by the embedding network F into the general feature transformation network CFTN to extract the general features, and input the basic features extracted by the embedding network F into the target-domain-specific feature transformation network to extract the domain-specific features of the target domain.

Step 5: Concatenate the obtained general features and domain-specific features to obtain the aggregated features.

Step 6: Compute the cosine distance between the aggregated features and the features of all registered samples in the registered-sample database to obtain matching scores between the input finger vein image and the finger veins already in the database.
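
Steps 5 and 6 reduce to a concatenation followed by cosine scoring, e.g. (a sketch; gallery is assumed to be a matrix holding one aggregated feature per enrolled sample):

    import torch
    import torch.nn.functional as F

    def match_scores(general_feat, domain_feat, gallery):
        probe = torch.cat([general_feat, domain_feat], dim=-1)   # aggregated feature (step 5)
        # Cosine similarity between the probe and every enrolled sample (step 6).
        return F.cosine_similarity(probe.unsqueeze(0), gallery, dim=-1)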

Step 7: Sort the similarity between the input finger vein image and all registered samples from high to low according to the matching scores, and apply a decision algorithm to this ranking to obtain the final matching decision. The decision algorithm may be any of the maximum-score matching, re-matching and top-k methods. In one embodiment of the present invention, top-k is used as the decision algorithm: the K largest similarities are taken, and the registered sample corresponding to the identity that appears most frequently among them is returned as the matching result for the input finger vein.
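
The top-k decision of this embodiment could be sketched as follows (labels is assumed to map each gallery row to an enrolled identity; the names are illustrative):

    import torch
    from collections import Counter

    def topk_decision(scores, labels, k=5):
        # Keep the k highest-scoring gallery entries and vote by identity frequency.
        idx = torch.topk(scores, k).indices.tolist()
        votes = Counter(labels[i] for i in idx)
        return votes.most_common(1)[0][0]        # most frequent identity wins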

To verify the effectiveness of the proposed method, experiments were conducted on four public finger vein databases: FV-USM, MMCBNU-6000, THU-FVFDT and PolyU; the results are shown in Table 1. As can be seen from Table 1, in the single-source-domain transfer setting, with exactly the same training, validation and test sets, the equal error rate achieved by the DTL method (the method provided by the present invention) is clearly better than that of fine-tuning, proving that the DTL method is effective for single-source-domain transfer. The table also shows that the DTL method is applicable not only to single-source-domain transfer, but also performs well in multi-source-domain transfer. When performing multi-source-domain transfer on the Fusion database (the mixed dataset), it achieves better results than both cross-dataset testing and fine-tuning on the Fusion dataset. In multi-source-domain transfer, the DTL method not only transfers the features common to the multiple finger vein datasets from the multiple source domains, but also preserves the domain-specific features of each target domain. For the target domain, the knowledge of the source domains is transferred to it while its own domain-specific features are retained, so the DTL method achieves a lower equal error rate.

Table 1 (equal error rates; presented as an image in the original publication)

The present invention also provides a system for implementing the aforementioned method.

In one embodiment of the present invention, the provided finger vein recognition system based on multi-source domain transfer includes:

a network building module, configured to construct and train a decoupled transfer learning network, the decoupled transfer learning network including an embedding network, a general feature transformation network and a target-domain-specific feature transformation network, wherein the embedding network converts a finger vein image into basic features, the general feature transformation network decouples the basic features into general features, and the target-domain-specific feature transformation network decouples the basic features into domain-specific features; when training the decoupled transfer learning network, a target domain and multiple source domains are used, and the domain-specific features and general knowledge of the multiple source domains are transferred to the target domain to obtain the target-domain-specific feature transformation network;

a basic feature extraction module, configured to input the region of interest of the finger vein image to be recognized into the embedding network to obtain the basic features;

a general feature extraction module, configured to input the basic features into the general feature transformation network to obtain the general features;

a domain-specific feature extraction module, configured to input the basic features into the target-domain-specific feature transformation network to obtain the domain-specific features;

an aggregation module, configured to concatenate the obtained general features and domain-specific features to obtain the aggregated features;

a calculation module, configured to compute the cosine distance between the aggregated features and the features of all registered samples in the registered-sample database to obtain matching scores between the input finger vein image and the finger veins already in the database;

a matching module, configured to output the matching result of the input finger vein according to the matching scores.

The system provided by this embodiment has the same beneficial effects as the above method.

The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and for the identical or similar parts among the embodiments, reference may be made between them. Since the finger vein recognition system based on multi-source domain transfer disclosed in the embodiments corresponds to the method disclosed in the embodiments, its description is relatively brief; for relevant details, refer to the description of the method.

The above description of the disclosed embodiments enables those skilled in the art to implement or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be implemented in other embodiments without departing from the spirit or scope of the present invention. Therefore, the present invention is not limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (7)

1. A finger vein recognition method based on multi-source domain transfer, characterized by comprising the following steps:
constructing and training a decoupled transfer learning network, the decoupled transfer learning network comprising an embedding network, a general feature transformation network and a target-domain-specific feature transformation network, wherein the embedding network converts a finger vein image into basic features, the general feature transformation network decouples the basic features into general features, and the target-domain-specific feature transformation network decouples the basic features into domain-specific features; when training the decoupled transfer learning network, a target domain and multiple source domains are used, and the domain-specific features and general knowledge of the multiple source domains are transferred to the target domain to obtain the target-domain-specific feature transformation network;
inputting the region of interest of the finger vein image to be recognized into the embedding network to obtain the basic features;
inputting the basic features into the general feature transformation network and the target-domain-specific feature transformation network respectively to obtain the general features and the domain-specific features;
concatenating the obtained general features and domain-specific features to obtain aggregated features;
computing the cosine distance between the aggregated features and the features of all registered samples in a registered-sample database to obtain matching scores between the input finger vein image and the finger veins already in the database;
outputting the matching result of the input finger vein according to the matching scores;
wherein the steps of training the decoupled transfer learning network comprise:
first, for the multiple source domains, randomly sampling a mini-batch from each of any two source domains, performing basic feature extraction, general feature extraction and domain-specific feature extraction on the mini-batches of the two source domains, then computing the loss between the two mini-batches with the overall loss function and updating the DTL network with the back-propagation algorithm;
then, randomly sampling a mini-batch from each of any one source domain and the target domain, performing basic feature extraction, general feature extraction and domain-specific feature extraction on the mini-batches of that source domain and the target domain, then computing the loss between the two mini-batches with the overall loss function and updating the DTL network with the back-propagation algorithm;
finally, randomly sampling a mini-batch from the target domain, performing basic feature extraction, general feature extraction and domain-specific feature extraction on the mini-batch, then computing the loss with the target-domain loss function and updating the DTL network with the back-propagation algorithm;
the target-domain loss function is computed as:

L = L_ce^D + β·L_C    (9)

where L_ce^D denotes the cross-entropy loss of the DFTN, β is the coefficient of the center loss, and L_C denotes the center loss;
the overall loss function is computed as:

L = L_ce^C + L_ce^D + α·L_mmmd + β·L_C    (8)

where

L_ce^C = −(1/F) · Σ_{i=1}^{F} Σ_{a=1}^{N} y_i^(a) · log p_i^(a)

L_mmmd = Σ_{m=1}^{n_s} Σ_{n=m+1}^{n_s} D(G^m, G^n) + Σ_{m=1}^{n_s} D(G^m, G^T)

L_C = (1/2) · Σ_{i=1}^{F} ||g_i − c_{y_i}||²_2

in which L_ce^C denotes the cross-entropy loss of the general feature transformation network CFTN; L_ce^D denotes the cross-entropy loss of the DFTN; L_mmmd denotes the multi-source-domain maximum mean discrepancy loss; L_C denotes the center loss; α and β are the coefficients of the MMMD loss and the center loss respectively; N is the number of classes of the whole mixed finger vein dataset; y_i denotes the one-hot encoding of the class label of the i-th sample, and y_i^(a) denotes the indicator value of the a-th class of that one-hot encoding; p_i^(a) is the classification score of the general feature g_i belonging to the a-th class; n_s denotes the number of source domains; G^m and G^n denote the general features of the m-th and the n-th source domains; G^T denotes the general features of the target domain; D(G^m, G^n) denotes the MMD distance between two mini-batches of general features from the m-th and the n-th source domains, and D(G^m, G^T) denotes the MMD distance between two mini-batches of general features from the m-th source domain and the target domain; F denotes the mini-batch size; g_i denotes the general feature of the i-th sample, which belongs to the y_i-th class; and c_{y_i} denotes the feature center of the y_i-th class.

2. The finger vein recognition method based on multi-source domain transfer according to claim 1, wherein the classification score p_i^(a) in the cross-entropy loss of the general feature transformation network CFTN is computed as:

p_i^(y_i) = exp(s·cos(θ_{y_i} + m)) / ( exp(s·cos(θ_{y_i} + m)) + Σ_{a=1, a≠y_i}^{N} exp(s·cos θ_a) )

where s is a hyperparameter denoting the scaling factor; θ_{y_i} is the angle between the feature g_i and the corresponding hyperplane W_{y_i}; θ_a is the angle between the feature g_i and the corresponding hyperplane W_a; and m is the additive angular margin penalty term.

3. The finger vein recognition method based on multi-source domain transfer according to claim 1, wherein the general feature transformation network comprises a network structure and a pooling layer; the network structure performs feature transformation on the general features, and the pooling layer pools the mean of each channel into a single value along the channel dimension, so that the means of the channels finally form a feature vector.

4. The finger vein recognition method based on multi-source domain transfer according to claim 1, wherein the target-domain-specific feature transformation network comprises a network structure and a pooling layer; the network structure performs feature transformation on the domain-specific features, and the pooling layer pools the mean of each channel into a single value along the channel dimension, so that the means of the channels finally form a feature vector.

5. The finger vein recognition method based on multi-source domain transfer according to claim 1, wherein inputting the region of interest of the finger vein image to be recognized into the embedding network comprises: subjecting the finger vein image to be recognized to grayscale conversion, edge-extraction-operator processing, finger edge acquisition and ROI cropping to obtain the region of interest (ROI) corresponding to the image.

6. The finger vein recognition method based on multi-source domain transfer according to any one of claims 1-5, wherein outputting the matching result of the input finger vein according to the matching scores comprises: sorting the similarity between the input finger vein image and all registered samples from high to low according to the matching scores, and applying a decision algorithm to the ranking to obtain the final matching decision.

7. A finger vein recognition system based on multi-source domain transfer, characterized by being configured to implement the method of any one of claims 1-6, the system comprising:
a network building module, configured to construct and train a decoupled transfer learning network, the decoupled transfer learning network comprising an embedding network, a general feature transformation network and a target-domain-specific feature transformation network, wherein the embedding network converts a finger vein image into basic features, the general feature transformation network decouples the basic features into general features, and the target-domain-specific feature transformation network decouples the basic features into domain-specific features; when training the decoupled transfer learning network, a target domain and multiple source domains are used, and the domain-specific features and general knowledge of the multiple source domains are transferred to the target domain to obtain the target-domain-specific feature transformation network;
a basic feature extraction module, configured to input the region of interest of the finger vein image to be recognized into the embedding network to obtain the basic features;
a general feature extraction module, configured to input the basic features into the general feature transformation network to obtain the general features;
a domain-specific feature extraction module, configured to input the basic features into the target-domain-specific feature transformation network to obtain the domain-specific features;
an aggregation module, configured to concatenate the obtained general features and domain-specific features to obtain aggregated features;
a calculation module, configured to compute the cosine distance between the aggregated features and the features of all registered samples in the registered-sample database to obtain matching scores between the input finger vein image and the finger veins already in the database;
a matching module, configured to output the matching result of the input finger vein according to the matching scores.
CN202110449007.1A 2021-04-25 2021-04-25 Method and system for finger vein recognition based on multi-source domain migration Active CN113076927B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110449007.1A CN113076927B (en) 2021-04-25 2021-04-25 Method and system for finger vein recognition based on multi-source domain migration

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110449007.1A CN113076927B (en) 2021-04-25 2021-04-25 Method and system for finger vein recognition based on multi-source domain migration

Publications (2)

Publication Number Publication Date
CN113076927A CN113076927A (en) 2021-07-06
CN113076927B true CN113076927B (en) 2023-02-14

Family

ID=76618598

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110449007.1A Active CN113076927B (en) 2021-04-25 2021-04-25 Method and system for finger vein recognition based on multi-source domain migration

Country Status (1)

Country Link
CN (1) CN113076927B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113780284B (en) * 2021-09-17 2024-04-19 焦点科技股份有限公司 Logo detection method based on target detection and metric learning
CN113807371B (en) * 2021-10-08 2024-03-29 中国人民解放军国防科技大学 Unsupervised domain self-adaption method for beneficial feature alignment under similar conditions
CN113869501B (en) * 2021-10-19 2024-06-18 京东科技信息技术有限公司 Neural network generation method and device, electronic equipment and storage medium
CN114882542B (en) * 2022-04-20 2025-02-11 五邑大学 Finger vein recognition method, device and storage medium based on unsupervised domain adaptation
CN114818917B (en) * 2022-04-25 2025-06-10 五邑大学 Finger vein recognition training method, test method and related device
CN115082299B (en) * 2022-07-21 2022-11-25 中国科学院自动化研究所 Method, system and equipment for converting different source images of small samples in non-strict alignment
CN115457611B (en) * 2022-10-21 2023-04-21 中国矿业大学 Vein recognition method based on characteristic decoupling network
CN116050507B (en) * 2023-01-18 2023-12-22 合肥中科立恒智能科技有限公司 Carbon dioxide emission monitoring method and system

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107967442A (en) * 2017-09-30 2018-04-27 广州智慧城市发展研究院 A kind of finger vein identification method and system based on unsupervised learning and deep layer network
CN110163136A (en) * 2019-05-13 2019-08-23 南京邮电大学 A kind of fingerprint and finger vein bimodal recognition decision blending algorithm based on perceptron
CN111183424A (en) * 2017-08-30 2020-05-19 深圳市长桑技术有限公司 System and method for identifying users
CN112036383A (en) * 2020-11-04 2020-12-04 北京圣点云信息技术有限公司 A kind of identification method and device based on hand vein
CN112241680A (en) * 2020-09-14 2021-01-19 中国矿业大学 Multi-mode identity authentication method based on vein similar image knowledge migration network
CN112560710A (en) * 2020-12-18 2021-03-26 北京曙光易通技术有限公司 Method for constructing finger vein recognition system and finger vein recognition system
CN112597812A (en) * 2020-12-03 2021-04-02 西安格威西联科技有限公司 Finger vein identification method and system based on convolutional neural network and SIFT algorithm

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3558025B2 (en) * 2000-09-06 2004-08-25 株式会社日立製作所 Personal authentication device and method
JP4207717B2 (en) * 2003-08-26 2009-01-14 株式会社日立製作所 Personal authentication device
JP5292821B2 (en) * 2008-01-16 2013-09-18 ソニー株式会社 Vein image acquisition device and vein image acquisition method
CN107004113B (en) * 2014-12-01 2021-01-29 熵基科技股份有限公司 System and method for obtaining multi-modal biometric information
CN105518716A (en) * 2015-10-10 2016-04-20 厦门中控生物识别信息技术有限公司 Finger vein recognition method and apparatus
US9916542B2 (en) * 2016-02-02 2018-03-13 Xerox Corporation Domain adaptation by multi-noising stacked marginalized denoising encoders
CN107103364A (en) * 2017-03-28 2017-08-29 上海大学 A kind of task based on many source domain splits transfer learning Forecasting Methodology
CN108280417A (en) * 2018-01-18 2018-07-13 苏州折衍光电科技有限公司 A kind of finger vena method for quickly identifying
CN108416776B (en) * 2018-03-16 2021-04-30 京东方科技集团股份有限公司 Image recognition method, image recognition apparatus, computer product, and readable storage medium
CN109034016B (en) * 2018-07-12 2021-10-15 辽宁工业大学 A universal method for image recognition of dorsal hand veins based on S-CNN model
CN111222479B (en) * 2020-01-12 2022-02-18 杭州电子科技大学 Adaptive Radius LBP Feature Layer Fusion Recognition Method Combined with Equivalent Mode
CN111460915B (en) * 2020-03-13 2023-04-18 华南理工大学 Light weight neural network-based finger vein verification method and system
CN111738315B (en) * 2020-06-10 2022-08-12 西安电子科技大学 Image classification method based on adversarial fusion multi-source transfer learning
CN111950454B (en) * 2020-08-12 2024-04-02 辽宁工程技术大学 Finger vein recognition method based on bidirectional feature extraction

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111183424A (en) * 2017-08-30 2020-05-19 深圳市长桑技术有限公司 System and method for identifying users
CN107967442A (en) * 2017-09-30 2018-04-27 广州智慧城市发展研究院 A kind of finger vein identification method and system based on unsupervised learning and deep layer network
CN110163136A (en) * 2019-05-13 2019-08-23 南京邮电大学 A kind of fingerprint and finger vein bimodal recognition decision blending algorithm based on perceptron
CN112241680A (en) * 2020-09-14 2021-01-19 中国矿业大学 Multi-mode identity authentication method based on vein similar image knowledge migration network
CN112036383A (en) * 2020-11-04 2020-12-04 北京圣点云信息技术有限公司 A kind of identification method and device based on hand vein
CN112597812A (en) * 2020-12-03 2021-04-02 西安格威西联科技有限公司 Finger vein identification method and system based on convolutional neural network and SIFT algorithm
CN112560710A (en) * 2020-12-18 2021-03-26 北京曙光易通技术有限公司 Method for constructing finger vein recognition system and finger vein recognition system

Also Published As

Publication number Publication date
CN113076927A (en) 2021-07-06

Similar Documents

Publication Publication Date Title
CN113076927B (en) Method and system for finger vein recognition based on multi-source domain migration
CN105138973B (en) The method and apparatus of face authentication
CN107609497B (en) Real-time video face recognition method and system based on visual tracking technology
Gangwar et al. DeepIrisNet: Deep iris representation with applications in iris recognition and cross-sensor iris recognition
JP7130905B2 (en) Fast and Robust Dermatoglyphic Mark Minutia Extraction Using Feedforward Convolutional Neural Networks
Zhang et al. Combining global and minutia deep features for partial high-resolution fingerprint matching
CN106650808A (en) Image classification method based on quantum nearest-neighbor algorithm
CN109344856B (en) Offline signature identification method based on multilayer discriminant feature learning
CN108875907A (en) A kind of fingerprint identification method and device based on deep learning
CN102955855A (en) Palm print database search method based on quantum algorithms
CN114511912B (en) Cross-database micro-expression recognition method and device based on two-stream convolutional neural network
CN116092134A (en) A Fingerprint Liveness Detection Method Based on Deep Learning and Feature Fusion
CN114996688B (en) An online signature authentication system and method based on soft dynamic time warping
Ge et al. Deep and discriminative feature learning for fingerprint classification
Wu et al. Deep learning in automatic fingerprint identification
Khan et al. A common convolutional neural network model to classify plain, rolled and latent fingerprints
CN115795394A (en) Hierarchical Multimodality and Advanced Incremental Learning for Biometric Fusion Identity Recognition
CN115861779A (en) Unbiased scene graph generation method based on effective feature representation
CN115909398A (en) A cross-domain pedestrian re-identification method based on feature enhancement
Chen et al. A finger vein recognition algorithm based on deep learning
Li et al. FVGNN: A novel GNN to finger vein recognition from limited training data
CN111985434B (en) Model enhanced face recognition method, device, device and storage medium
CN118247813A (en) A person re-identification method based on adaptive optimization network structure
CN117237937A (en) A disordered parts recognition method based on PointNet++ network
CN112800959B (en) Difficult sample mining method for data fitting estimation in face recognition

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
EE01 Entry into force of recordation of patent licensing contract
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20210706

Assignee: Guangzhou Zhongjian TCM Technology Co.,Ltd.

Assignor: SOUTH CHINA University OF TECHNOLOGY

Contract record no.: X2025980003809

Denomination of invention: Finger vein recognition method and system based on multi-source domain transfer

Granted publication date: 20230214

License type: Common License

Record date: 20250218

Application publication date: 20210706

Assignee: CHARTU TECHNOLOGIES Co.,Ltd.

Assignor: SOUTH CHINA University OF TECHNOLOGY

Contract record no.: X2025980003802

Denomination of invention: Finger vein recognition method and system based on multi-source domain transfer

Granted publication date: 20230214

License type: Common License

Record date: 20250218

Application publication date: 20210706

Assignee: Guangzhou maize Technology Co.,Ltd.

Assignor: SOUTH CHINA University OF TECHNOLOGY

Contract record no.: X2025980003796

Denomination of invention: Finger vein recognition method and system based on multi-source domain transfer

Granted publication date: 20230214

License type: Common License

Record date: 20250218

OL01 Intention to license declared
OL01 Intention to license declared