
CN107545277B - Model training and identity verification method, apparatus, storage medium and computer device

Model training and identity verification method, apparatus, storage medium and computer device

Info

Publication number
CN107545277B
Authority
CN
China
Prior art keywords
image
model
identification
layer
feature map
Prior art date
Legal status
Active
Application number
CN201710687385.7A
Other languages
Chinese (zh)
Other versions
CN107545277A (en)
Inventor
李绍欣
邰颖
梁亦聪
丁守鸿
李安平
汪铖杰
李季檩
Current Assignee
Tencent Technology Shanghai Co Ltd
Original Assignee
Tencent Technology Shanghai Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shanghai Co Ltd filed Critical Tencent Technology Shanghai Co Ltd
Priority claimed from application CN201710687385.7A
Publication of CN107545277A
Application granted
Publication of CN107545277B
Legal status: Active (anticipated expiration tracked)

Classifications

    • Y: General tagging of new technological developments; general tagging of cross-sectional technologies spanning over several sections of the IPC; technical subjects covered by former USPC cross-reference art collections [XRACs] and digests
    • Y02: Technologies or applications for mitigation or adaptation against climate change
    • Y02T: Climate change mitigation technologies related to transportation
    • Y02T 10/00: Road transport of goods or passengers
    • Y02T 10/10: Internal combustion engine [ICE] based vehicles
    • Y02T 10/40: Engine management systems

Landscapes

  • Image Analysis (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The present invention relates to a model training and identity verification method, apparatus, storage medium and computer device. The model training method includes: acquiring an original image and a corresponding noisy image; inputting the original image and the noisy image into a discrimination model to obtain a first discrimination confidence; generating a denoised image of the noisy image through an image generation model; inputting the denoised image and the noisy image into the discrimination model to obtain a second discrimination confidence; and adjusting the discrimination model and the image generation model in a manner that increases the difference between the first discrimination confidence and the second discrimination confidence, and continuing training until a training end condition is met. The solution proposed in this application can overcome the problem of image distortion to a great extent.

Description

Model training and identity verification method, apparatus, storage medium and computer device

Technical Field

The present invention relates to the field of computer technology, and in particular to a model training and identity verification method, apparatus, storage medium and computer device.

Background

With the development of computer technology and advances in image processing, image-based processing methods have become increasingly diverse. For example, for security and other reasons, an original image may be processed to obtain a corresponding noisy image. At present, when a user interacts using a noisy image, the noisy image usually needs to be denoised to recover the original image for subsequent interaction.

However, traditional image denoising is mainly based on texture synthesis, which diffuses the texture of known regions of the original image into the image regions to be denoised. When denoising an image in this way, mismatches occur easily, causing serious distortion in the resulting image.

Summary of the Invention

In view of this, it is necessary to provide a model training and identity verification method, apparatus, storage medium and computer device that address the image distortion caused by traditional image denoising techniques.

A model training method, the method comprising:

acquiring an original image and a corresponding noisy image;

inputting the original image and the noisy image into a discrimination model to obtain a first discrimination confidence;

generating a denoised image of the noisy image through an image generation model;

inputting the denoised image and the noisy image into the discrimination model to obtain a second discrimination confidence; and

adjusting the discrimination model and the image generation model in a manner that increases the difference between the first discrimination confidence and the second discrimination confidence, and continuing training until a training end condition is met.

A model training apparatus, the apparatus comprising:

an image acquisition module, configured to acquire an original image and a corresponding noisy image;

a first output module, configured to input the original image and the noisy image into a discrimination model to obtain a first discrimination confidence;

an image generation module, configured to generate a denoised image of the noisy image through an image generation model;

a second output module, configured to input the denoised image and the noisy image into the discrimination model to obtain a second discrimination confidence; and

a model adjustment module, configured to adjust the discrimination model and the image generation model in a manner that increases the difference between the first discrimination confidence and the second discrimination confidence, and to continue training until a training end condition is met.

A computer-readable storage medium storing computer-readable instructions which, when executed by a processor, cause the processor to perform the steps of the above model training method.

A computer device comprising a memory and a processor, the memory storing computer-readable instructions which, when executed by the processor, cause the processor to perform the steps of the above model training method.

The above model training method, apparatus, storage medium and computer device involve training two models: an image generation model and a discrimination model. Training the image generation model amounts to learning to generate denoised images from noisy images; training the discrimination model amounts to learning, given a noisy image, to judge whether another input image is the original image or a denoised image produced by the image generation model. The image generation model thus learns to generate images ever closer to the original image in order to confuse the discrimination model, while the discrimination model learns to distinguish original images from denoised images more accurately. The two models compete with and reinforce each other, yielding better-performing models, so that image denoising with the trained image generation model largely overcomes the problem of image distortion.

An identity verification method, the method comprising:

acquiring a face image frame corresponding to a user identifier;

acquiring, from an identity document corresponding to the user identifier, a face document image with a mesh pattern;

generating a de-meshed face document image from the meshed face document image through an image generation model trained by the above model training method; and

comparing the face image frame with the de-meshed face document image to obtain an identity verification result.

An identity verification apparatus, the apparatus comprising:

an acquisition module, configured to collect a face image frame corresponding to a user identifier, and to acquire, from an identity document corresponding to the user identifier, a face document image with a mesh pattern;

a generation module, configured to generate a de-meshed face document image from the meshed face document image through an image generation model trained by the above model training method; and

a verification module, configured to compare the face image frame with the de-meshed face document image to obtain an identity verification result.

A computer-readable storage medium storing computer-readable instructions which, when executed by a processor, cause the processor to perform the steps of the above identity verification method.

A computer device comprising a memory and a processor, the memory storing computer-readable instructions which, when executed by the processor, cause the processor to perform the steps of the above identity verification method.

With the above identity verification method, apparatus, storage medium and computer device, when user identity verification is required, a face image frame corresponding to the user identifier is collected, and a meshed face document image is read from the identity document corresponding to that user identifier. An image generation model, trained by having the image generation model and the discrimination model compete with and reinforce each other, then generates a de-meshed face document image with good denoising quality and no distortion. Comparing the collected face image frame with the generated de-meshed face document image yields the identity verification result, which greatly improves the accuracy of identity verification.
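
As an illustration only, the following minimal sketch shows the comparison step of this verification flow in Python. The `generator` (the trained image generation model), the face-embedding function `embed_face` and the similarity threshold are hypothetical placeholders; none of them is specified by this disclosure.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def verify_identity(face_frame: np.ndarray,
                    meshed_id_image: np.ndarray,
                    generator,          # trained image generation model: meshed -> de-meshed
                    embed_face,         # hypothetical face-embedding function
                    threshold: float = 0.6) -> bool:
    """Compare a captured face frame with the de-meshed face document image."""
    de_meshed = generator(meshed_id_image)                    # remove the mesh pattern
    similarity = cosine_similarity(embed_face(face_frame),
                                   embed_face(de_meshed))     # compare face features
    return similarity >= threshold                            # identity verification result
```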

Brief Description of the Drawings

Fig. 1 is a diagram of an application environment of the model training method in one embodiment;

Fig. 2 is a schematic flowchart of the model training method in one embodiment;

Fig. 3 is a schematic diagram of an original image and the corresponding noisy image in one embodiment;

Fig. 4 is a schematic diagram of how the feature maps output by the layers of the image generation model are passed on in one embodiment;

Fig. 5 is a schematic flowchart of the model training method in another embodiment;

Fig. 6 is a schematic diagram of the inputs and outputs of the image generation model and the discrimination model in one embodiment;

Fig. 7 is a schematic flowchart of the identity verification method in one embodiment;

Fig. 8 is a structural block diagram of the model training apparatus in one embodiment;

Fig. 9 is a structural block diagram of the identity verification apparatus in one embodiment;

Fig. 10 is a diagram of the internal structure of a computer device in one embodiment.

Detailed Description

In order to make the objectives, technical solutions and advantages of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only intended to explain the present invention, not to limit it.

Fig. 1 is a diagram of an application environment of the model training method in one embodiment. As shown in Fig. 1, the application environment includes a terminal 110 and a server 120, which can communicate over a network. The terminal 110 may be a desktop device or a mobile terminal. The server 120 may be an independent physical server, a physical server cluster or a virtual server. The terminal 110 may obtain original images and corresponding noisy images from the Internet through the server 120, and then train the discrimination model and the image generation model on the obtained images, continuously adjusting and optimizing both models. The server 120 may likewise obtain original images and corresponding noisy images uploaded by the terminal 110 and train the discrimination model and the image generation model on them, continuously adjusting and optimizing both models.

In one embodiment, the application environment of Fig. 1 can also be applied to the identity verification method. The terminal 110 may send a collected face image frame corresponding to a user identifier to the server 120. After obtaining the face image frame, the server 120 acquires a meshed face document image from the identity document corresponding to that user identifier, generates a de-meshed face document image through the image generation model trained by the above model training method, and compares the face image frame with the de-meshed face document image to obtain the identity verification result. The terminal 110 may also execute the identity verification method locally after collecting the face image frame corresponding to the user identifier.

Fig. 2 is a schematic flowchart of the model training method in one embodiment. The model training method may be implemented by a terminal, by a server, or jointly by a terminal and a server. This embodiment is mainly described by applying the method to the server 120 in Fig. 1. Referring to Fig. 2, the model training method specifically includes the following steps:

S202: acquire an original image and a corresponding noisy image.

Here, the original image is an image that is stored directly when it is formed and carries no noise, for example an image captured directly by an image acquisition device. A noisy image is an image obtained by adding noise to the original image. Noise is data that interferes with the original image content, such as a mesh pattern or an anti-counterfeiting mark; a noisy image is therefore an original image carrying such interference data, for example an image with a mesh pattern or with an anti-counterfeiting mark. In some specific scenarios, for security and other reasons, the original image is processed to obtain a corresponding noisy image, for example the randomly added mesh pattern on an identity card or the anti-counterfeiting mark added to a passport.

In one embodiment, a training sample set is provided on the server, in which multiple pairs of original images and corresponding noisy images are stored, and the server obtains any pair of an original image and its corresponding noisy image from the training sample set.

The original images and corresponding noisy images in the training sample set may be images crawled from the Internet by the server before and after noise was added; alternatively, the server may crawl original images and then add random noise to them itself to obtain the corresponding noisy images. The original images may also be collected by the terminal through an image acquisition device, or selected by the terminal from an album; the terminal then uploads them to the server, which adds random noise to obtain the corresponding noisy images.

In one embodiment, multiple training sample sets may be provided on the server, and the user may select, through the terminal, the training sample set to be used for training. The server obtains the user-triggered selection instruction carrying a training sample set identifier detected by the terminal, extracts the identifier from the instruction, and obtains original images and corresponding noisy images from the training sample set corresponding to that identifier.

In one embodiment, the training sample set may contain only original images. After obtaining any frame of original image from the training sample set, the server adds random noise to it to obtain the corresponding noisy image, thereby obtaining the original image and the corresponding noisy image.
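
As an illustration, a minimal sketch of building such a training pair by overlaying a synthetic grid-like mesh on an original image; the spacing and intensity of the pattern are assumptions, since the disclosure does not fix a particular noise pattern.

```python
import numpy as np

def add_mesh_noise(original: np.ndarray,
                   spacing: int = 8,
                   intensity: float = 0.5,
                   seed: int = 0) -> np.ndarray:
    """Overlay a randomly offset grid pattern on an H x W x 3 image with values in [0, 1]."""
    rng = np.random.default_rng(seed)
    noisy = original.copy()
    offset_y, offset_x = rng.integers(0, spacing, size=2)
    # blend the grid lines towards white to imitate a mesh overlay
    noisy[offset_y::spacing, :, :] = (1 - intensity) * noisy[offset_y::spacing, :, :] + intensity
    noisy[:, offset_x::spacing, :] = (1 - intensity) * noisy[:, offset_x::spacing, :] + intensity
    return np.clip(noisy, 0.0, 1.0)

original = np.random.rand(128, 128, 3)   # stand-in for a real original image
noisy = add_mesh_noise(original)          # training pair: (original, noisy)
```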

In the above embodiments, the original image and the corresponding noisy image differ only in noise or resolution; apart from the noise, the image content of the two frames is identical.

S204: input the original image and the noisy image into the discrimination model to obtain a first discrimination confidence.

Here, the discrimination model is a machine learning (ML) model that has been trained to have discrimination capability. A machine learning model can acquire discrimination capability through sample learning, and may be a neural network model, a support vector machine, a logistic regression model, or the like; neural network models include, for example, convolutional neural networks, back-propagation neural networks, feedback neural networks, radial basis neural networks and self-organizing neural networks. In this embodiment, the discrimination model is used to judge whether the image input alongside the noisy image is an original image, and to output a discrimination confidence for that judgment.

The discrimination confidence corresponds one-to-one with the input image other than the noisy image, and expresses how credible it is that this other input image is the original image. The higher the discrimination confidence, the more likely the corresponding image is the original image. It should be understood that both the first discrimination confidence here and the second discrimination confidence below are discrimination confidences, just under different input conditions.

In one embodiment, the discrimination model may be a complex network model formed by interconnecting multiple layers. The discrimination model may include multiple feature transformation layers, each with its own model parameters, possibly several per layer; each parameter in a feature transformation layer applies a linear or nonlinear transformation to the input image, producing a feature map as the operation result. Each feature transformation layer receives the operation result of the previous layer and, after its own operation, outputs its result to the next layer. The model parameters are the parameters of the model structure and reflect the correspondence between the model's output and input.

Specifically, after obtaining the original image and the noisy image, the server inputs both into the discrimination model. The layers of the discrimination model apply linear or nonlinear transformations to the inputs layer by layer until the last layer completes its transformation, and the server obtains the first discrimination confidence for the current input from the output of the last layer.
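
As an illustration, a minimal PyTorch sketch of a discrimination model of the kind described here: the noisy image is concatenated channel-wise with the other input image (original or denoised) and mapped to a single confidence in [0, 1]. The depth, channel widths and use of PyTorch are assumptions, not requirements of this disclosure.

```python
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    """Discrimination model: D(image, noisy_image) -> confidence in [0, 1]."""
    def __init__(self, in_channels: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels * 2, 64, kernel_size=4, stride=2, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, kernel_size=4, stride=2, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv2d(128, 1, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, image: torch.Tensor, noisy: torch.Tensor) -> torch.Tensor:
        x = torch.cat([image, noisy], dim=1)                  # condition on the noisy image
        score_map = self.features(x)                          # per-region scores
        return torch.sigmoid(score_map.mean(dim=(1, 2, 3)))   # one confidence per sample

D = Discriminator()
original = torch.rand(1, 3, 64, 64)
noisy = torch.rand(1, 3, 64, 64)
first_confidence = D(original, noisy)                          # first discrimination confidence
```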

In one embodiment, the discrimination model may be a general-purpose machine learning model that has already been trained for discrimination. When such a general model is used for discrimination in a specific scenario, its performance is poor, so it needs to be further trained and optimized with samples dedicated to that scenario. In this embodiment, the server may obtain the model structure and model parameters of the general machine learning model and import the parameters into the discrimination model structure to obtain a discrimination model with model parameters, which then participate in training as the initial parameters of the discrimination model.

In one embodiment, the discrimination model may also be a machine learning model initialized by developers based on experience from training previous models. The server directly uses the parameters of the initialized machine learning model as the initial parameters of the discrimination model in this embodiment.

S206: generate a denoised image of the noisy image through the image generation model.

Here, the image generation model is a machine learning model that has been trained to have image generation capability. A denoised image is the image obtained by denoising a noisy image, that is, by removing the interference data from an original image that carries such data. It should be understood that both the image generation model and the discrimination model are machine learning models, but they learn different capabilities during training. In this embodiment, the image generation model is used to denoise a noisy image and generate the denoised image.

Specifically, after obtaining the noisy image, the server inputs it into the image generation model. The layers of the image generation model apply linear or nonlinear transformations to the input layer by layer until the last layer completes its transformation, and the server takes the image output by the last layer as the denoised image generated for the current noisy image.

S208: input the denoised image and the noisy image into the discrimination model to obtain a second discrimination confidence.

Specifically, after obtaining the denoised image generated for the current noisy image, the server inputs both the denoised image and the noisy image into the discrimination model. The layers of the discrimination model apply linear or nonlinear transformations to the two inputs layer by layer until the last layer completes its transformation, and the server obtains the second discrimination confidence for the current input from the output of the last layer.

S210: adjust the discrimination model and the image generation model in a manner that increases the difference between the first discrimination confidence and the second discrimination confidence, and continue training until the training end condition is met.

Here, training a model is the process of determining the model parameters of each of its feature transformation layers. When determining the model parameters, the server may first initialize the parameters of each feature transformation layer of the model to be trained and, in the subsequent training process, continuously optimize these initial parameters, taking the optimal parameters obtained as the parameters of the trained model.

In one embodiment, the training end condition may be that the number of training iterations reaches a preset number. The server counts the training iterations while training the model; when the count reaches the preset number, the server determines that the model satisfies the training end condition and ends the training.

In one embodiment, the training end condition may also be that the discrimination performance metric of the adjusted discrimination model reaches a preset level and the image generation performance metric of the adjusted image generation model reaches a preset level.

Specifically, the server compares the first discrimination confidence with the second discrimination confidence and adjusts the model parameters of the discrimination model and the image generation model in the direction that increases the difference between them. If the training stop condition is not met after adjusting the model parameters, the process returns to step S202 to continue training, and training ends when the stop condition is met.

In one embodiment, the difference between the first discrimination confidence and the second discrimination confidence may be measured by a cost function. The cost function is a function of the model parameters that measures the difference between the first and second discrimination confidences. The server may end the training when the value of the cost function falls below a preset value, obtaining a discrimination model for judging input images and an image generation model for denoising noisy images. The server may specifically choose a function such as cross entropy or mean squared error as the cost function.
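
As an illustration, a minimal sketch of one training iteration that widens the gap between the two confidences, here using the standard conditional-GAN cross-entropy losses as one possible instantiation; the disclosure names cross entropy and mean squared error as candidate cost functions but does not fix this exact formulation. `G` and `D` stand for an image generation model and a discrimination model such as those sketched elsewhere in this description.

```python
import torch
import torch.nn.functional as F

def train_step(G, D, opt_G, opt_D, original, noisy):
    """One adversarial update: D pushes the first confidence up and the second down,
    while G pushes the second confidence up so its denoised images resemble originals."""
    # --- update the discrimination model ---
    denoised = G(noisy).detach()                    # do not backpropagate into G here
    d1 = D(original, noisy)                         # first discrimination confidence
    d2 = D(denoised, noisy)                         # second discrimination confidence
    loss_D = F.binary_cross_entropy(d1, torch.ones_like(d1)) + \
             F.binary_cross_entropy(d2, torch.zeros_like(d2))
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # --- update the image generation model ---
    denoised = G(noisy)
    d2 = D(denoised, noisy)
    loss_G = F.binary_cross_entropy(d2, torch.ones_like(d2))  # make denoised images pass as originals
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()
    return loss_D.item(), loss_G.item()
```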

The above model training method involves training two models, an image generation model and a discrimination model. Training the image generation model amounts to learning to generate denoised images from noisy images; training the discrimination model amounts to learning, given a noisy image, to judge whether another input image is the original image or a denoised image produced by the image generation model. The image generation model thus learns to generate images ever closer to the original image in order to confuse the discrimination model, while the discrimination model learns to distinguish original images from denoised images more accurately. The two models compete with and reinforce each other, yielding better-performing models, so that image denoising with the trained image generation model largely overcomes the problem of image distortion.

In one embodiment, the original image is an original face document image. Step S202 includes: obtaining a set of original face document images; selecting an original face document image from the set; and adding a mesh pattern to the selected original face document image to obtain the corresponding noisy image.

Here, the original face document image is an image containing a face that is used in a document to prove the identity of the document holder. A document is a certificate or paper used to prove the identity or background of its holder, such as an identity card, a student card or a military officer's card. In some specific scenarios, for security or anti-counterfeiting reasons, a mesh pattern or anti-counterfeiting mark is added to the original face document image before the document is produced. A mesh pattern is a special texture used to interfere with the image content, such as a checked, grid or dotted pattern.

Specifically, the server is provided with a set of original face document images in which several original face document images are stored. The server selects any frame of original face document image from the set and randomly adds a mesh pattern to it to obtain the corresponding noisy image.

The original face document images in the set may be face images crawled from the network by the server that carry no mesh pattern and conform to the ID photo style; they may also be face images in ID photo style collected by the terminal through an image acquisition device, or selected by the terminal from an album, which are then used as the original face document images.

Fig. 3 shows a schematic diagram of an original image and the corresponding noisy image in one embodiment. Referring to Fig. 3, the diagram includes an original image 301 and a noisy image 302. As can be seen from the figure, the noisy image 302 can be obtained by adding a mesh pattern to the original image 301.

In this embodiment, the image generation model and the discrimination model are trained on original face document images and the noisy images obtained by adding mesh patterns to them, so that each sample provides as much useful information as possible for training and training efficiency is improved. The trained image generation model can recover a clear, mesh-free face image from a meshed noisy image, and the recovered clear face image can then be used for subsequent face recognition or identity verification.

In one embodiment, the discrimination model is a convolutional neural network model. Step S204 includes: inputting the original image and the noisy image into the discrimination model respectively; obtaining the feature map corresponding to the original image output by a convolutional layer of the discrimination model; obtaining the feature map corresponding to the noisy image output by that convolutional layer; and calculating the first discrimination confidence of the original image against the noisy image from the feature map corresponding to the original image and the feature map corresponding to the noisy image.

Here, a convolutional neural network (CNN) is a type of artificial neural network. A neural network model is a complex network model formed by interconnecting multiple layers and includes multiple feature transformation layers, such as convolutional layers and pooling (sub-sampling) layers. Each feature transformation layer of a neural network model has its own model parameters, possibly several per layer; each parameter applies a linear or nonlinear transformation to the input image, producing a feature map as the operation result. Each feature transformation layer receives the operation result of the previous layer and, after its own operation, outputs its result to the next layer.

In a convolutional layer of a convolutional neural network there are multiple feature maps, each consisting of multiple neurons, and all neurons of the same feature map share one convolution kernel. The convolution kernel is the weight of the corresponding neurons and represents one feature. Convolution kernels are generally initialized as matrices of small random numbers, and reasonable kernels are learned during network training. Convolutional layers reduce the connections between the layers of the neural network while lowering the risk of overfitting.

Sub-sampling, also called pooling, usually takes two forms: mean pooling and max pooling. Sub-sampling can be regarded as a special kind of convolution. Convolution and sub-sampling greatly simplify the complexity of the neural network and reduce the number of its parameters.
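
As a brief illustration of the two pooling forms named above (assuming PyTorch; this is not a component mandated by the disclosure):

```python
import torch
import torch.nn.functional as F

feature_map = torch.arange(16, dtype=torch.float32).reshape(1, 1, 4, 4)  # N x C x H x W

mean_pooled = F.avg_pool2d(feature_map, kernel_size=2)   # mean pooling -> 1 x 1 x 2 x 2
max_pooled = F.max_pool2d(feature_map, kernel_size=2)    # max pooling  -> 1 x 1 x 2 x 2
```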

A convolutional neural network model is a machine learning model trained with a convolutional neural network algorithm. In this embodiment, the convolutional neural network may be constructed directly or obtained by modifying an existing convolutional neural network.

Specifically, the server inputs the original image and the noisy image into the discrimination model respectively and passes them through its feature transformation layers in turn. At each feature transformation layer, the server uses the model parameters of that layer to apply a linear or nonlinear transformation to the pixel values of the pixels in the feature map output by the previous layer, and outputs the feature map of the current layer. If the current layer is the first feature transformation layer, the feature map output by the previous layer is the input original image or noisy image. The pixel value of a pixel may specifically be its RGB (Red Green Blue) three-channel colour value.

Further, the server may obtain the feature maps formed when the convolutional layers of the discrimination model operate on the pixel value matrices corresponding to the input original image and noisy image, the resulting response values constituting the feature maps. Different layers of the discrimination model extract different features. The convolutional layer from which the server obtains the feature map corresponding to the original image may be any layer of the discrimination model, or any several layers.
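
As an illustration, a minimal sketch of reading the feature map out of a chosen convolutional layer with a PyTorch forward hook; the small stand-in network and the choice of which layer to tap are assumptions.

```python
import torch
import torch.nn as nn

# Small stand-in for the discrimination model (the image pair enters as 6 channels).
model = nn.Sequential(
    nn.Conv2d(6, 64, kernel_size=4, stride=2, padding=1),
    nn.LeakyReLU(0.2),
    nn.Conv2d(64, 128, kernel_size=4, stride=2, padding=1),
)

captured = {}

def save_feature_map(module, inputs, output):
    """Forward hook: store the feature map produced by the hooked convolutional layer."""
    captured["feature_map"] = output.detach()

handle = model[2].register_forward_hook(save_feature_map)    # tap the second convolution

original = torch.rand(1, 3, 64, 64)
noisy = torch.rand(1, 3, 64, 64)
_ = model(torch.cat([original, noisy], dim=1))               # forward pass fills `captured`
feature_map = captured["feature_map"]                        # feature map for this input pair
handle.remove()
```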

In one embodiment, the feature maps corresponding to the original image and to the noisy image obtained by the server may be feature maps from which noise features have been extracted. The server may compute the similarity between the noise features in the two feature maps and calculate the corresponding discrimination confidence from that similarity. The similarity may be measured by cosine similarity or by the Hamming distance between the perceptual hashes of the images. In this embodiment, the lower the similarity between the noise features in the feature map of the original image and those in the feature map of the noisy image, the higher the discrimination confidence, that is, the more likely the input non-noisy image is the original image.
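
As an illustration of the two similarity measures named above: cosine similarity between flattened feature maps, and the Hamming distance between binary hashes (a simple average hash is used here as a stand-in, since the disclosure does not specify the perceptual hashing scheme).

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    a, b = a.ravel(), b.ravel()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def average_hash(feature_map: np.ndarray) -> np.ndarray:
    """Binary hash: 1 where the value exceeds the mean of the map, 0 elsewhere."""
    flat = feature_map.ravel()
    return (flat > flat.mean()).astype(np.uint8)

def hamming_distance(hash_a: np.ndarray, hash_b: np.ndarray) -> int:
    return int(np.count_nonzero(hash_a != hash_b))

fmap_original = np.random.rand(8, 8)   # stand-ins for real feature maps
fmap_noisy = np.random.rand(8, 8)

similarity = cosine_similarity(fmap_original, fmap_noisy)
distance = hamming_distance(average_hash(fmap_original), average_hash(fmap_noisy))
```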

In one embodiment, the feature maps corresponding to the original image and to the noisy image obtained by the server may instead be feature maps from which non-noise features have been extracted. In that case, the higher the similarity between the non-noise features in the two feature maps, the higher the discrimination confidence, that is, the more likely the input non-noisy image is the original image.

In the above embodiments, the feature maps output by the convolutional layers of the discrimination model better reflect the characteristics of the corresponding input images, so the discrimination confidence can be calculated from feature maps that reflect those characteristics. This further improves the training efficiency of the discrimination model and ensures the discrimination accuracy of the trained model.

In one embodiment, the image generation model is a convolutional neural network model that includes encoding layers and corresponding decoding layers. Step S206 includes: inputting the noisy image into the image generation model; when passing the feature map output by the current layer to the next layer in the order of the layers of the image generation model, if the current layer is a decoding layer, obtaining the feature map output by the encoding layer corresponding to the current decoding layer, fusing the obtained feature map with the feature map output by the current decoding layer, and feeding the result to the next layer; and obtaining the feature map output by the last layer of the image generation model as the denoised image.

Here, an encoding layer (encoder) extracts features from the input image and reduces its dimensionality to obtain a low-dimensional feature map. A decoding layer (decoder) raises the dimensionality of the reduced feature map to obtain an output image with the same dimensions as the input image. In this embodiment, the image generation model is a convolutional neural network model that includes encoding layers and corresponding decoding layers, where the amount by which an encoding layer reduces the dimensionality matches the amount by which the corresponding decoding layer raises it. For example, if an encoding layer reduces the image resolution to one quarter, the corresponding decoding layer enlarges the image resolution fourfold.

Specifically, after the server inputs the noisy image into the image generation model, the noisy image first undergoes the feature transformations of the encoding layers and then those of the decoding layers. Since every layer of the image generation model extracts features from the image fed into it and outputs the resulting feature map, after layer-by-layer feature extraction the features the model intends to extract are clear, but the feature maps contain less and less image information. The missing image information therefore needs to be added back into the feature maps during decoding, so that the output denoised image is closer to the original image.

In one embodiment, when passing the feature map output by the current layer to the next layer in the order of the layers of the image generation model, the server checks whether the current layer is a decoding layer or an encoding layer. If the current layer is an encoding layer, the server feeds its output feature map directly to the next layer; if the current layer is a decoding layer, the server obtains the feature map output by the encoding layer corresponding to the current decoding layer, fuses it with the feature map output by the current decoding layer, and feeds the result to the next layer, until the current layer is the last layer of the image generation model, whose output feature map is taken as the denoised image.

Here, fusing the obtained feature map with the feature map output by the current decoding layer may mean merging the pixel values at corresponding pixel positions of the two feature maps, or it may mean determining the pixel positions with missing pixel values in the feature map output by the current decoding layer and adding, at the corresponding positions, the pixel values of those positions in the obtained feature map.

For example, suppose the image generation model has an N-layer structure; the decoding layer corresponding to the i-th encoding layer is then the (N-i)-th layer. After obtaining the feature map output by the (N-i)-th layer in the order of the layers of the image generation model, the server obtains the feature map output by the i-th layer, fuses the two feature maps, and feeds the result to the (N-i+1)-th layer.
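
As an illustration, a minimal PyTorch sketch of an encoder-decoder image generation model with the fusion described above, realized here as channel-wise concatenation of the i-th encoding layer's feature map with the matching decoding layer's output (one of the two fusion options mentioned); the depth and channel widths are assumptions.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Encoder-decoder image generation model with encoder/decoder feature-map fusion."""
    def __init__(self, channels: int = 3):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(channels, 64, 4, stride=2, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU())
        self.dec1 = nn.Sequential(nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU())
        # input channels doubled because the matching encoder feature map is concatenated in
        self.dec2 = nn.ConvTranspose2d(64 + 64, channels, 4, stride=2, padding=1)

    def forward(self, noisy: torch.Tensor) -> torch.Tensor:
        e1 = self.enc1(noisy)                    # encoding layer 1 (resolution / 2)
        e2 = self.enc2(e1)                       # encoding layer 2 (resolution / 4)
        d1 = self.dec1(e2)                       # decoding layer 1 (back to resolution / 2)
        fused = torch.cat([d1, e1], dim=1)       # fuse with the matching encoder feature map
        return torch.sigmoid(self.dec2(fused))   # denoised image, same size as the input

G = Generator()
noisy = torch.rand(1, 3, 64, 64)
denoised = G(noisy)
```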

图4示出了一个实施例中图像生成模型中各层输出的特征图的传递示意图。参考图4,该示意图包括输入图像410、中间图像420和输出图像430。其中,中间图像420和输出图像430均为图像生成模型中所包括的层输出的特征图。输入图像410经过第一层编码层输出特征图421,特征图421经过第二层编码层输出特征图422,特征图422经过第二层编码层输出特征图423,特征图423第一层解码层输出特征图424,特征图424与特征图422融合后经过第二层解码层输出特征图425,特征图425与特征图421融合后经过第二层解码层输出输出图像430。Fig. 4 shows a schematic diagram of transfer of feature maps output by each layer in an image generation model in one embodiment. Referring to FIG. 4 , the schematic diagram includes an input image 410 , an intermediate image 420 and an output image 430 . Wherein, both the intermediate image 420 and the output image 430 are feature maps output by layers included in the image generation model. The input image 410 outputs a feature map 421 through the first encoding layer, and the feature map 421 outputs a feature map 422 through the second encoding layer, and the feature map 422 outputs a feature map 423 through the second encoding layer, and the feature map 423 is the first decoding layer The feature map 424 is output, and the feature map 424 is fused with the feature map 422 to output the feature map 425 through the second decoding layer, and the feature map 425 is fused with the feature map 421 to output the output image 430 through the second decoding layer.

上述实施例中,由于图像生成模型中的编码层和解码层在逐层进行特征提取时,得到的特征图中提取的特征更加明显,但包括的原始输入的图像的信息越来越少,为此,通过在对特征图进行解码操作时,将相应的编码层输出的特征图进行融合,使得后续解码层的输入图像即明确了图像特征,又能携带原始输入图像的图像信息,从而保证最终输出的图像既能去除噪声,又能够保留完整的图像信息。In the above embodiment, since the encoding layer and decoding layer in the image generation model perform feature extraction layer by layer, the features extracted in the obtained feature map are more obvious, but the information of the original input image included is less and less, as Therefore, when the feature map is decoded, the feature map output by the corresponding coding layer is fused, so that the input image of the subsequent decoding layer can not only clarify the image characteristics, but also carry the image information of the original input image, so as to ensure the final The output image can not only remove noise, but also retain complete image information.

在一个实施例中,按照增大第一鉴别置信度和第二鉴别置信度间差异的调整方式,调整鉴别模型和图像生成模型,包括:按照最大化第一鉴别置信度的调整方式,调整鉴别模型;按照最小化第二鉴别置信度的调整方式,调整鉴别模型和图像生成模型。In one embodiment, adjusting the identification model and the image generation model according to the adjustment manner of increasing the difference between the first identification confidence degree and the second identification confidence degree includes: adjusting the identification model according to the adjustment manner of maximizing the first identification confidence degree. model; adjusting the discrimination model and the image generation model according to the adjustment manner of minimizing the second discrimination confidence.

具体地,服务器可调整鉴别模型的模型参数,使得调整后的鉴别模型在输入原始图像和待噪声图像后输出的第一鉴别置信度增大。服务器还可异步调整鉴别模型和图像生成模型的模型参数,使得通过调整后的图像生成模型生成的去噪声图像,与带噪声图像输入调整后的鉴别模型后输出的第二鉴别置信度减小。Specifically, the server may adjust the model parameters of the identification model, so that the first identification confidence output by the adjusted identification model increases after inputting the original image and the image to be noised. The server can also asynchronously adjust the model parameters of the identification model and the image generation model, so that the denoised image generated by the adjusted image generation model and the second identification confidence output after inputting the adjusted identification model to the image with noise are reduced.

在一个实施例中,按照最大化第一鉴别置信度的调整方式,调整鉴别模型,包括:按照鉴别模型所包括的层的次序,逆序确定第一鉴别置信度随各层所对应的模型参数的变化率;按逆序调整鉴别模型所包括的层所对应的模型参数,使得第一鉴别置信度随相应调整的层所对应的模型参数的变化率增大。In one embodiment, adjusting the identification model according to the adjustment method of maximizing the first identification confidence degree includes: determining the first identification confidence degree in reverse order according to the order of the layers included in the identification model along with the model parameters corresponding to each layer Rate of change: adjust the model parameters corresponding to the layers included in the identification model in reverse order, so that the first identification confidence increases with the rate of change of the model parameters corresponding to the adjusted layers.

其中,第一鉴别置信度随模型参数的变化率是第一鉴别置信度随模型参数变化的速度。第一鉴别置信度随模型参数的变化率越大,第一鉴别置信度随模型参数变化得越快。最大化第一鉴别置信度的过程是求解第一鉴别置信度最大值的过程,具体可沿变化率上升方向求解极大值。Wherein, the rate of change of the first identification confidence degree with the model parameters is the speed at which the first identification confidence degree changes with the model parameters. The greater the rate of change of the first discrimination confidence degree with the model parameters, the faster the first discrimination confidence degree changes with the model parameters. The process of maximizing the first discrimination confidence degree is the process of finding the maximum value of the first discrimination confidence degree, specifically, the maximum value can be obtained along the direction of the rate of change increase.

具体地,图像被输入模型后,每经过一层则进行一次非线性变化,并将输出的运算结果作为下一层的输入。由于服务器直接将原始图像和带噪声图像输入鉴别模型进行鉴别得到第一鉴别置信度,那么服务器则可按照鉴别模型所包括的层的次序,从鉴别模型所包括的最后一层起,确定第一鉴别置信度随当前层所对应的模型参数的变化率,再依次逆序确定第一鉴别置信度随各层所对应的模型参数的变化率。服务器可再按逆序依次调整鉴别模型所包括的层所对应的模型参数,使得第一鉴别置信度随相应调整的层所对应的模型参数的变化率增大,使得鉴别模型鉴别出输入的非带噪声图像的另一图像为原始图像的鉴别结果更准确。Specifically, after the image is input into the model, it undergoes a nonlinear change every time it passes through a layer, and the output operation result is used as the input of the next layer. Since the server directly inputs the original image and the image with noise into the identification model to obtain the first identification confidence degree, then the server can determine the first identification confidence level from the last layer included in the identification model according to the order of the layers included in the identification model. The rate of change of the identification confidence degree with the model parameters corresponding to the current layer, and then determine the rate of change of the first identification confidence degree with the model parameters corresponding to each layer in reverse order. The server can adjust the model parameters corresponding to the layers included in the identification model in reverse order, so that the first identification confidence increases with the rate of change of the model parameters corresponding to the correspondingly adjusted layers, so that the identification model can identify the input non-band Another image of the noisy image is the original image for more accurate discrimination.

举例说明，假设第一鉴别置信度为 D1，按照鉴别模型所包括的层的次序，逆序第一层所对应的模型参数为 z，则 D1 随 z 的变化率为 \(\partial D_1/\partial z\)；逆序第二层所对应的模型参数为 b，则 D1 随 b 的变化率为 \(\partial D_1/\partial b\)；逆序第三层所对应的模型参数为 c，则 D1 随 c 的变化率为 \(\partial D_1/\partial c\)。在求解变化率时，链式求导会一层一层地将梯度传导到在前的层。在求解变化率至鉴别模型所包括的第一层后，服务器可按逆序依次调整模型参数 z、b、c 直至鉴别模型所包括的第一层所对应的模型参数，使得最后一层求得的变化率增大。For example, assume the first identification confidence is \(D_1\). Following the order of the layers included in the identification model, if the model parameter corresponding to the first layer in reverse order is z, the rate of change of \(D_1\) with respect to z is \(\partial D_1/\partial z\); if the parameter corresponding to the second layer in reverse order is b, the rate of change of \(D_1\) with respect to b is \(\partial D_1/\partial b\); and if the parameter corresponding to the third layer in reverse order is c, the rate of change of \(D_1\) with respect to c is \(\partial D_1/\partial c\). When these rates of change are computed, chain-rule differentiation passes the gradient layer by layer to the preceding layers. After the rates of change have been computed down to the first layer included in the identification model, the server can adjust, in reverse order, the model parameters z, b, c and so on up to the parameters corresponding to the first layer of the identification model, so that the rate of change obtained at the last layer increases.
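
As a concrete, non-patent illustration of these reverse-order rates of change, the toy Python sketch below treats the identification confidence as a composition of three scalar "layers" whose parameters, in reverse order, are z, b and c, and applies the chain rule from the last layer backwards; the function forms, values and learning rate are all assumptions made purely for illustration.

```python
import math

# Toy forward pass with three "layers"; in reverse (output-to-input) order their
# scalar parameters are z, b and c, matching the example above.  All values are
# illustrative only.
x, c, b, z = 1.5, 0.9, -0.4, 0.8

h1 = c * x                              # first layer (reverse-order third)
h2 = b * h1                             # second layer (reverse-order second)
s = z * h2                              # last layer (reverse-order first)
D1 = 1.0 / (1.0 + math.exp(-s))         # toy "first identification confidence"

# Chain rule applied from the last layer backwards.
dD1_ds = D1 * (1.0 - D1)                # derivative of the sigmoid
dD1_dz = dD1_ds * h2                    # rate of change of D1 with respect to z
dD1_dh2 = dD1_ds * z                    # gradient handed to the previous layer
dD1_db = dD1_dh2 * h1                   # rate of change of D1 with respect to b
dD1_dh1 = dD1_dh2 * b
dD1_dc = dD1_dh1 * x                    # rate of change of D1 with respect to c

# Adjusting the parameters along these rates of change makes D1 increase.
lr = 0.1
z, b, c = z + lr * dD1_dz, b + lr * dD1_db, c + lr * dD1_dc
```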

在本实施例中，通过反向传播方式求解第一鉴别置信度随鉴别模型各层所对应的模型参数的变化率，通过调节鉴别模型各层所对应的模型参数使得计算得到的变化率增大，以训练鉴别模型，使得训练得到的鉴别模型效果更优。In this embodiment, the rate of change of the first identification confidence with respect to the model parameters of each layer of the identification model is computed by back propagation, and the model parameters of each layer of the identification model are adjusted so that the computed rate of change increases, thereby training the identification model so that the trained identification model performs better.
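
A minimal PyTorch-style sketch of this adjustment step is given below. It assumes a discriminator module `D` that maps an (image, noisy image) pair to a confidence in (0, 1) and an optimizer over its parameters; maximizing the first identification confidence is written as gradient descent on the negative log-confidence, which is equivalent to the ascent described above. All names and settings are illustrative, not part of the patent.

```python
import torch

def update_discriminator(D, d_optimizer, original, noisy):
    """One adjustment step that increases the first identification confidence.

    D        -- discriminator module scoring (image, noisy image) pairs
    original -- batch of original images, shape (N, C, H, W)
    noisy    -- batch of noisy images,    shape (N, C, H, W)
    """
    first_confidence = D(original, noisy)               # D(I_i, J_i)
    # Maximizing log D(I_i, J_i) is the same as minimizing its negative.
    loss = -torch.log(first_confidence + 1e-8).mean()
    d_optimizer.zero_grad()
    loss.backward()      # computes the rates of change layer by layer, last layer first
    d_optimizer.step()   # adjusts each layer's parameters along those rates of change
    return first_confidence.mean().item()
```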

在一个实施例中，按照最小化第二鉴别置信度的调整方式，调整鉴别模型和图像生成模型，包括：依次按照图像生成模型和鉴别模型所包括的层的次序，逆序确定第二鉴别置信度随各层所对应的模型参数的变化率；按逆序，调整鉴别模型和图像生成模型所包括的层所对应的模型参数，使得第二鉴别置信度随相应调整的层所对应的模型参数的变化率减小。In one embodiment, adjusting the identification model and the image generation model in a manner that minimizes the second identification confidence includes: determining, in reverse order of the layers included in the image generation model and the identification model, the rate of change of the second identification confidence with respect to the model parameters corresponding to each layer; and adjusting, in that reverse order, the model parameters corresponding to the layers included in the identification model and the image generation model, so that the second identification confidence decreases along the rate of change of the model parameters corresponding to the correspondingly adjusted layers.

具体地，最小化第二鉴别置信度的过程是求解第二鉴别置信度最小值的过程，具体可沿变化率下降方向求解极小值。由于服务器先通过图像生成模型生成带噪声图像的去噪声图像，再将去噪声图像和带噪声图像输入鉴别模型进行鉴别，那么服务器则可依次按照图像生成模型和鉴别模型所包括的层的次序，从鉴别模型所包括的最后一层起，确定第二鉴别置信度随当前层所对应的模型参数的变化率，再依次逆序确定第二鉴别置信度随各层所对应的模型参数的变化率。服务器可再按逆序依次调整图像生成模型和鉴别模型所包括的层所对应的非线性变化算子，使得第二鉴别置信度随相应调整的层所对应的模型参数的变化率减小。Specifically, the process of minimizing the second identification confidence is a process of finding the minimum value of the second identification confidence, and the minimum can be sought along the direction in which the rate of change decreases. Since the server first generates the denoised image of the noisy image through the image generation model and then inputs the denoised image and the noisy image into the identification model for identification, the server can, following the order of the layers included in the image generation model and the identification model and starting from the last layer included in the identification model, determine the rate of change of the second identification confidence with respect to the model parameters of the current layer, and then determine, layer by layer in reverse order, the rate of change of the second identification confidence with respect to the model parameters of each layer. The server can then adjust, in that reverse order, the nonlinear transformation operators corresponding to the layers included in the image generation model and the identification model, so that the second identification confidence decreases along the rate of change of the model parameters of the correspondingly adjusted layers.

举例说明，假设第二鉴别置信度为 D2，按照图像生成模型和鉴别模型所包括的层的次序，逆序第一层所对应的模型参数为 z，则 D2 随 z 的变化率为 \(\partial D_2/\partial z\)；逆序第二层所对应的模型参数为 b，则 D2 随 b 的变化率为 \(\partial D_2/\partial b\)；逆序第三层所对应的模型参数为 c，则 D2 随 c 的变化率为 \(\partial D_2/\partial c\)。在求解变化率时，链式求导会一层一层地将梯度传导到在前的层。在求解变化率至图像生成模型所包括的第一层后，服务器可按逆序依次调整模型参数 z、b、c 直至图像生成模型所包括的第一层所对应的模型参数，使得最后一层求得的变化率减小。For example, assume the second identification confidence is \(D_2\). Following the order of the layers included in the image generation model and the identification model, if the model parameter corresponding to the first layer in reverse order is z, the rate of change of \(D_2\) with respect to z is \(\partial D_2/\partial z\); if the parameter corresponding to the second layer in reverse order is b, the rate of change of \(D_2\) with respect to b is \(\partial D_2/\partial b\); and if the parameter corresponding to the third layer in reverse order is c, the rate of change of \(D_2\) with respect to c is \(\partial D_2/\partial c\). When these rates of change are computed, chain-rule differentiation passes the gradient layer by layer to the preceding layers. After the rates of change have been computed down to the first layer included in the image generation model, the server can adjust, in reverse order, the model parameters z, b, c and so on up to the parameters corresponding to the first layer of the image generation model, so that the rate of change obtained at the last layer decreases.

在本实施例中，通过反向传播方式求解第二鉴别置信度随图像生成模型和鉴别模型各层所对应的模型参数的变化率，通过调节图像生成模型和鉴别模型各层所对应的模型参数使得计算得到的变化率减小，以训练图像生成模型和鉴别模型，使得训练得到的图像生成模型和鉴别模型效果更优。In this embodiment, the rate of change of the second identification confidence with respect to the model parameters of each layer of the image generation model and the identification model is computed by back propagation, and the model parameters of each layer of the image generation model and the identification model are adjusted so that the computed rate of change decreases, thereby training the image generation model and the identification model so that the trained models perform better.
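
The sketch below mirrors the same idea for the second identification confidence. Following the wording of this embodiment, gradients are propagated through both the identification model and the image generation model, and both are stepped so that the confidence on the (denoised, noisy) pair decreases; many adversarial setups would instead update only the generator in this step, so this is purely a sketch of the procedure described here, with illustrative names.

```python
import torch

def update_to_reduce_second_confidence(G, D, g_optimizer, d_optimizer, noisy):
    """One step that decreases the second identification confidence."""
    denoised = G(noisy)                     # the de-noised image (written G(I_i) in eq. (1))
    second_confidence = D(denoised, noisy)  # D(G(I_i), J_i)
    loss = second_confidence.mean()         # minimize the confidence directly
    g_optimizer.zero_grad()
    d_optimizer.zero_grad()
    loss.backward()                         # reverse-order rates of change through D and G
    g_optimizer.step()                      # adjust the image generation model
    d_optimizer.step()                      # and, as described in this embodiment, the discriminator
    return second_confidence.mean().item()
```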

在一个实施例中，服务器还可获取原始图像与相应的去噪声图像的内容损耗，根据第一鉴别置信度、第二鉴别置信度和内容损耗生成训练代价，按照最小化训练代价的调整方式，调整图像生成模型和鉴别模型。其中，内容损耗是指通过图像生成模型生成的去噪声图像与相应的原始图像之间在图像内容上的差异。In one embodiment, the server may also obtain the content loss between the original image and the corresponding denoised image, generate a training cost according to the first identification confidence, the second identification confidence and the content loss, and adjust the image generation model and the identification model in a manner that minimizes the training cost. Here, the content loss refers to the difference in image content between the denoised image generated by the image generation model and the corresponding original image.

在一个实施例中,训练代价具体可表示为:In one embodiment, the training cost can be specifically expressed as:

\(L = K_1[\log D(I_i, J_i)] + K_2[\log(1 - D(G(I_i), J_i))] + K_3 R(I_i, G(I_i))\)  (1)

其中，\(I_i\) 表示原始图像，\(G(I_i)\) 表示图像生成模型生成的去噪声图像，\(J_i\) 表示带噪声图像。\(D(I_i, J_i)\) 为鉴别模型在输入原始图像和带噪声图像时输出的第一鉴别置信度，\(D(G(I_i), J_i)\) 为鉴别模型在输入去噪声图像和带噪声图像时输出的第二鉴别置信度。\(R(I_i, G(I_i))\) 为内容损耗，\(K_1\)、\(K_2\) 和 \(K_3\) 分别为第一鉴别置信度、第二鉴别置信度和内容损耗所占训练代价的权重。\(K_1\) 具体可以是 \(I_i\) 在服从的概率分布下的期望，\(K_2\) 具体可以是 \(G(I_i)\) 和 \(J_i\) 在各自服从的概率分布下的联合期望。Here, \(I_i\) denotes the original image, \(G(I_i)\) denotes the denoised image generated by the image generation model, and \(J_i\) denotes the noisy image. \(D(I_i, J_i)\) is the first identification confidence output by the identification model when the original image and the noisy image are input, and \(D(G(I_i), J_i)\) is the second identification confidence output by the identification model when the denoised image and the noisy image are input. \(R(I_i, G(I_i))\) is the content loss, and \(K_1\), \(K_2\) and \(K_3\) are the weights of the first identification confidence, the second identification confidence and the content loss in the training cost, respectively. Specifically, \(K_1\) may be the expectation under the probability distribution obeyed by \(I_i\), and \(K_2\) may be the joint expectation under the probability distributions obeyed by \(G(I_i)\) and \(J_i\).

\(R(I_i, G(I_i)) = K_4[\lVert I_i - G(I_i) \rVert]\)  (2)

其中，\(K_4\) 具体可以是 \(I_i\) 和 \(G(I_i)\) 在各自服从的概率分布下的联合期望。\(\lVert I_i - G(I_i) \rVert\) 具体可以是两帧图像相应像素位的像素值差值矩阵的值。Here, \(K_4\) may specifically be the joint expectation under the probability distributions obeyed by \(I_i\) and \(G(I_i)\). \(\lVert I_i - G(I_i) \rVert\) may specifically be the value of the matrix of pixel-value differences at corresponding pixel positions of the two images.

具体地，模型训练的目的在于最小化训练代价 L，也就是最大化 \(D(I_i, J_i)\)，最小化 \(D(G(I_i), J_i)\) 和 \(R(I_i, G(I_i))\)。Specifically, the purpose of model training is to minimize the training cost L, that is, to maximize \(D(I_i, J_i)\) and to minimize \(D(G(I_i), J_i)\) and \(R(I_i, G(I_i))\).
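
One way the training cost of equations (1) and (2) might be evaluated for a batch is sketched below, assuming the expectations behind K1, K2 and K4 are approximated by batch means, K3 is a scalar weight, and the norm in equation (2) is taken as a mean absolute pixel difference; these are illustrative assumptions rather than details fixed by the text.

```python
import torch

def training_cost(D, original, noisy, denoised, k3=1.0, eps=1e-8):
    """Batch estimate of L = K1[log D(I,J)] + K2[log(1 - D(G(I),J))] + K3*R(I,G(I))."""
    first_confidence = D(original, noisy)      # D(I_i, J_i)
    second_confidence = D(denoised, noisy)     # D(G(I_i), J_i)

    term1 = torch.log(first_confidence + eps).mean()          # expectation as batch mean
    term2 = torch.log(1.0 - second_confidence + eps).mean()
    content_loss = (original - denoised).abs().mean()         # R(I_i, G(I_i)), eq. (2)

    return term1 + term2 + k3 * content_loss
```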

在本实施例中,将去噪声图像与原始图像的内容损耗,协同第一鉴别置信度和第二鉴别置信度作为反馈调节依据调节图像生成模型和鉴别模型,提高了模型训练效率,且进一步减少了模型训练过程中的过拟合风险。In this embodiment, the content loss of the denoised image and the original image is combined with the first discrimination confidence and the second discrimination confidence as the feedback adjustment basis to adjust the image generation model and the discrimination model, which improves the model training efficiency and further reduces The risk of overfitting in the model training process.

上述实施例中,通过调整图像生成模型和鉴别模型的模型参数,以最大化输入原始图像和带噪声图像得到的第一鉴别置信度,最小化输入去噪声图像和带噪声图像得到的第二鉴别置信度,不断增大第一鉴别置信度和第二鉴别置信度的差异,使得鉴别模型能够精确鉴别输入的非带噪声图像的另一图像是否为原始图像,图像生成模型能够生成更接近原始图像的去噪声图像来干扰鉴别模型,从而提高鉴别模型的鉴别准确率以及图像生成模型的图像生成效果。In the above embodiment, by adjusting the model parameters of the image generation model and the identification model, the first identification confidence obtained by inputting the original image and the image with noise is maximized, and the second identification confidence obtained by inputting the denoised image and the image with noise is minimized. Confidence, continuously increasing the difference between the first identification confidence and the second identification confidence, so that the identification model can accurately identify whether another image of the input non-noisy image is the original image, and the image generation model can generate images closer to the original The denoised image is used to interfere with the identification model, thereby improving the identification accuracy of the identification model and the image generation effect of the image generation model.

如图5所示,在一个具体地实施例中,模型训练方法具体包括以下步骤:As shown in Figure 5, in a specific embodiment, the model training method specifically includes the following steps:

S502,获取原始人脸证件图像集;从原始人脸证件图像集中选取原始人脸证件图像;为选取的原始人脸证件图像添加网纹,得到相应的带噪声图像。S502. Obtain an original face certificate image set; select an original face certificate image from the original face certificate image set; add a texture to the selected original face certificate image to obtain a corresponding noisy image.
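
The patent does not specify how the mesh is drawn in S502; the sketch below shows one simple possibility, blending a regular diagonal grid into the original face certificate image so that the (original, meshed) pair can serve as a training sample. Parameter values are illustrative.

```python
import numpy as np

def add_mesh(image, spacing=12, thickness=1, strength=0.6, value=255):
    """Overlay a simple diagonal mesh on a uint8 image of shape (H, W) or (H, W, C)."""
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Two families of diagonal lines form the mesh pattern.
    on_line = ((xs + ys) % spacing < thickness) | ((xs - ys) % spacing < thickness)
    meshed = image.astype(np.float32).copy()
    meshed[on_line] = (1.0 - strength) * meshed[on_line] + strength * value
    return meshed.astype(np.uint8)

# Usage: noisy = add_mesh(original)   # (original, noisy) forms one training pair
```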

S504，分别将原始人脸证件图像和带噪声图像输入鉴别模型；获取鉴别模型中的卷积层输出的与原始人脸证件图像对应的特征图；获取卷积层输出的与带噪声图像对应的特征图；根据与原始人脸证件图像对应的特征图和与带噪声图像对应的特征图，计算原始人脸证件图像与带噪声图像的第一鉴别置信度。S504: input the original face certificate image and the noisy image into the identification model respectively; obtain the feature map corresponding to the original face certificate image output by the convolution layer in the identification model; obtain the feature map corresponding to the noisy image output by the convolution layer; and, according to the feature map corresponding to the original face certificate image and the feature map corresponding to the noisy image, calculate the first identification confidence of the original face certificate image and the noisy image.
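
A compact PyTorch sketch of a conditional discriminator of the kind described in S504 is shown below: both images pass through shared convolution layers, and the first identification confidence is computed from the two resulting feature maps. The layer sizes and the choice to fuse the feature maps by channel concatenation are assumptions for illustration.

```python
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    """Scores how likely `image` is an original, given the noisy image as the condition."""
    def __init__(self, channels=3):
        super().__init__()
        self.features = nn.Sequential(               # shared convolutional feature extractor
            nn.Conv2d(channels, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        )
        self.score = nn.Sequential(                  # confidence from both feature maps
            nn.Conv2d(128, 1, 3, padding=1),
            nn.AdaptiveAvgPool2d(1),
            nn.Sigmoid(),
        )

    def forward(self, image, noisy):
        f_image = self.features(image)               # feature map of the judged image
        f_noisy = self.features(noisy)               # feature map of the noisy condition
        fused = torch.cat([f_image, f_noisy], dim=1)
        return self.score(fused).view(-1)            # confidence in (0, 1) per sample
```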

S506,将带噪声图像输入图像生成模型;在按照图像生成模型所包括的层的顺序,将当前层输出的特征图作为下一层的输入时,若当前层为解码层,则获取与当前解码层相应的编码层所输出的特征图,并将获取的特征图与当前解码层输出的特征图融合后输入下一层;获取图像生成模型中末层输出的特征图,得到去噪声图像。S506, input the noisy image into the image generation model; when the feature map output by the current layer is used as the input of the next layer according to the order of the layers included in the image generation model, if the current layer is the decoding layer, obtain the current decoding The feature map output by the corresponding encoding layer of the layer, and the obtained feature map is fused with the feature map output by the current decoding layer and input to the next layer; the feature map output by the last layer in the image generation model is obtained to obtain a denoised image.
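
A small PyTorch sketch of an encoder-decoder generator with the skip fusion described in S506 is shown below; channel concatenation is used as the fusion operation, and the depth and channel counts are illustrative.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Encoder-decoder that maps a noisy image to a de-noised image, with skip fusion."""
    def __init__(self, channels=3):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(channels, 32, 4, 2, 1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU())
        self.dec2 = nn.Sequential(nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU())
        # The decoder output is fused (concatenated) with enc1's feature map.
        self.dec1 = nn.ConvTranspose2d(32 + 32, channels, 4, 2, 1)

    def forward(self, noisy):
        e1 = self.enc1(noisy)                 # encoding-layer feature maps
        e2 = self.enc2(e1)
        d2 = self.dec2(e2)                    # decoding-layer output ...
        fused = torch.cat([d2, e1], dim=1)    # ... fused with the matching encoder map
        out = self.dec1(fused)                # last layer's feature map = de-noised image
        return torch.sigmoid(out)
```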

S508,分别将去噪声图像和带噪声图像输入鉴别模型;获取鉴别模型中的卷积层输出的与去噪声图像对应的特征图;获取卷积层输出的与带噪声图像对应的特征图;根据与去噪声图像对应的特征图和与带噪声图像对应的特征图,计算去噪声图像与带噪声图像的第二鉴别置信度。S508, respectively input the denoised image and the noised image into the identification model; obtain the feature map corresponding to the denoised image output by the convolution layer in the identification model; obtain the feature map corresponding to the noise image output by the convolution layer; according to The feature map corresponding to the denoised image and the feature map corresponding to the noisy image are used to calculate the second discrimination confidence between the denoised image and the noisy image.

S510，按照鉴别模型所包括的层的次序，逆序确定第一鉴别置信度随各层所对应的模型参数的变化率；按逆序调整鉴别模型所包括的层所对应的模型参数，使得第一鉴别置信度随相应调整的层所对应的模型参数的变化率增大。S510: determine, in reverse order of the layers included in the identification model, the rate of change of the first identification confidence with respect to the model parameters corresponding to each layer; and adjust, in that reverse order, the model parameters corresponding to the layers included in the identification model, so that the first identification confidence increases along the rate of change of the model parameters corresponding to the correspondingly adjusted layers.

S512，依次按照图像生成模型和鉴别模型所包括的层的次序，逆序确定第二鉴别置信度随各层所对应的模型参数的变化率；按逆序，调整鉴别模型和图像生成模型所包括的层所对应的模型参数，使得第二鉴别置信度随相应调整的层所对应的模型参数的变化率减小。S512: determine, in reverse order of the layers included in the image generation model and the identification model, the rate of change of the second identification confidence with respect to the model parameters corresponding to each layer; and adjust, in that reverse order, the model parameters corresponding to the layers included in the identification model and the image generation model, so that the second identification confidence decreases along the rate of change of the model parameters corresponding to the correspondingly adjusted layers.

S514,检测鉴别模型与图像生成模型是否满足训练结束条件;若是,则跳转至步骤S516;若否,则返回步骤S502。S514, detecting whether the identification model and the image generation model meet the training end condition; if yes, jump to step S516; if not, return to step S502.

S516,结束模型训练。S516, end model training.

在本实施例中,包括图像生成模型和鉴别模型两个模型的训练。其中,训练图像生成模型的过程在于学习生成带噪声图像的去噪声图像,训练鉴别模型的过程在于学习在给定带噪声图像的条件下,学习判断输入的另一图像是原始人脸证件图像还是通过图像生成模型生成的去噪声图像。这样图像生成模型学习生成与原始人脸证件图像更相似的图像,以干扰鉴别模型的判断,鉴别模型学习更加精准地进行原始图像和去噪声图像的判断,两个模型相互对抗,相互促进,使得训练得到的模型性能更优,从而在使用训练得到的图像生成模型进行图像去噪时,能够极大程度上克服图像失真的问题。In this embodiment, the training of two models including the image generation model and the discrimination model is included. Among them, the process of training the image generation model is to learn to generate denoised images with noisy images, and the process of training the identification model is to learn to judge whether another input image is the original face certificate image or A denoised image generated by an image generation model. In this way, the image generation model learns to generate an image that is more similar to the original face certificate image to interfere with the judgment of the identification model, and the identification model learns to judge the original image and the denoised image more accurately. The two models compete with each other and promote each other, so that The performance of the trained model is better, so that when the trained image generation model is used for image denoising, the problem of image distortion can be overcome to a great extent.
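
Tying steps S502 to S516 together, one possible training loop is sketched below. It assumes the `Generator` and `Discriminator` modules sketched earlier in this section, a data loader yielding (original, meshed) image pairs (for example built with `add_mesh`), and a fixed epoch budget as the training-end condition; all of these are illustrative choices rather than details fixed by the patent.

```python
import torch

def train(G, D, pair_loader, epochs=10, lr=2e-4, device="cpu"):
    """Alternating adversarial training loop following steps S502-S516 (sketch)."""
    g_opt = torch.optim.Adam(G.parameters(), lr=lr)
    d_opt = torch.optim.Adam(D.parameters(), lr=lr)
    for epoch in range(epochs):                    # S514: end condition = epoch budget
        for original, noisy in pair_loader:        # S502: (original, meshed) pairs
            original, noisy = original.to(device), noisy.to(device)

            # S504 + S510: raise the first identification confidence.
            d_loss = -torch.log(D(original, noisy) + 1e-8).mean()
            d_opt.zero_grad(); d_loss.backward(); d_opt.step()

            # S506 + S508 + S512: lower the second identification confidence.
            denoised = G(noisy)
            g_loss = D(denoised, noisy).mean()
            g_opt.zero_grad(); d_opt.zero_grad()
            g_loss.backward()
            g_opt.step(); d_opt.step()
    return G, D                                    # S516: training ends
```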

图6示出了一个实施例中图像生成模型和鉴别模型的输入输出示意图。参考图6，该示意图包括原始人脸证件图像601，带网纹人脸证件图像602、去网纹人脸证件图像603、鉴别模型604和图像生成网络605。将原始人脸证件图像601和带网纹人脸证件图像602输入鉴别模型604得到第一鉴别置信度。将带网纹人脸证件图像602输入图像生成网络605后得到去网纹人脸证件图像603，再将去网纹人脸证件图像603和带网纹人脸证件图像602输入鉴别模型604得到第二鉴别置信度。Fig. 6 shows a schematic diagram of the inputs and outputs of the image generation model and the identification model in one embodiment. Referring to Fig. 6, the diagram includes an original face certificate image 601, a screened face certificate image 602, a de-screened face certificate image 603, an identification model 604 and an image generation network 605. The original face certificate image 601 and the screened face certificate image 602 are input into the identification model 604 to obtain the first identification confidence. The screened face certificate image 602 is input into the image generation network 605 to obtain the de-screened face certificate image 603, and then the de-screened face certificate image 603 and the screened face certificate image 602 are input into the identification model 604 to obtain the second identification confidence.

图7为一个实施例中身份验证方法的流程示意图。该身份验证方法可以由终端实现,也可由服务器实现,还可以由终端和服务器协同实现。本实施例主要以该方法应用于上述图1中的服务器120来举例说明。参照图7,该身份验证方法具体包括如下步骤:Fig. 7 is a schematic flowchart of an identity verification method in an embodiment. The identity verification method can be implemented by a terminal, or by a server, or by cooperation of the terminal and the server. This embodiment is mainly described by taking the method applied to the server 120 in FIG. 1 above as an example. With reference to Figure 7, the identity verification method specifically includes the following steps:

S702,获取与用户标识对应的人脸图像帧。S702. Acquire a face image frame corresponding to the user identifier.

具体地,用户标识用于唯一标识一个用户。用户标识可以是包括数字、字母和符号中的至少一种字符的字符串。终端可调用摄像头采集人脸图像帧,将采集到的人脸图像帧协同用户标识发送至服务器,服务器从而获取与用户标识对应的人脸图像帧。Specifically, the user identifier is used to uniquely identify a user. The user identifier may be a character string including at least one character among numbers, letters and symbols. The terminal can call the camera to collect face image frames, and send the collected face image frames together with the user ID to the server, so that the server can obtain the face image frames corresponding to the user ID.

S704,从与用户标识对应的身份证件中,获取带网纹人脸证件图像。S704. Obtain an image of a certificate with a textured face from the identity certificate corresponding to the user identifier.

具体地，服务器可在获取用户标识后，根据用户标识查找与用户标识对应的身份证件，从查找到的身份证件中获取带网纹人脸证件图像。服务器也可通过证件扫描仪扫描读取与用户标识对应的身份证件，再从读取的身份证件中获取带网纹人脸证件图像。Specifically, after obtaining the user identifier, the server may look up the identity document corresponding to the user identifier and obtain the screened face certificate image from the found identity document. The server may also scan and read the identity document corresponding to the user identifier through a document scanner, and then obtain the screened face certificate image from the read identity document.

S706,通过上述模型训练方法训练得到的图像生成模型,生成带网纹人脸证件图像的去网纹人脸证件图像。S706, using the image generation model trained by the above model training method to generate a de-screened face certificate image with a screened face certificate image.

具体地,服务器可通过上述任一实施例中的模型训练方法训练得到的图像生成模型,生成带网纹人脸证件图像的去网纹人脸证件图像。Specifically, the server may use the image generation model trained by the model training method in any of the above embodiments to generate a de-screened face certificate image with a textured face certificate image.
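
At this step the trained image generation model is used purely for inference; a short sketch with an assumed tensor layout is shown below.

```python
import torch

def descreen(generator, screened_image):
    """Run the trained image generation model on one screened face certificate image.

    screened_image -- float tensor of shape (C, H, W), values in [0, 1] (assumed layout)
    """
    generator.eval()
    with torch.no_grad():
        denoised = generator(screened_image.unsqueeze(0))   # add a batch dimension
    return denoised.squeeze(0)
```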

S708,将人脸图像帧和去网纹人脸证件图像对比,得到身份验证结果。S708. Comparing the face image frame with the descreened face certificate image to obtain an identity verification result.

具体地，服务器可计算人脸图像帧和去网纹人脸证件图像的相似度，在计算得到的相似度超过预设相似度阈值时，判定人脸图像帧和去网纹人脸证件图像中人脸区域对应相同的用户，得到身份验证通过的结果。在计算得到的相似度低于预设相似度阈值时，判定人脸图像帧和去网纹人脸证件图像中人脸区域对应不同的用户，得到身份验证未通过的结果。Specifically, the server may calculate the similarity between the face image frame and the de-screened face certificate image. When the calculated similarity exceeds a preset similarity threshold, it determines that the face regions in the face image frame and in the de-screened face certificate image correspond to the same user, and the identity verification result is that verification passes. When the calculated similarity is lower than the preset similarity threshold, it determines that the face regions in the face image frame and in the de-screened face certificate image correspond to different users, and the identity verification result is that verification fails.
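
The comparison in S708 only requires a similarity score and a threshold; the measure itself is not fixed by the text. The sketch below assumes some face-embedding module is available and uses cosine similarity, which is one common choice, with an illustrative threshold.

```python
import torch
import torch.nn.functional as F

def verify(embed, face_frame, descreened_id_image, threshold=0.75):
    """Return True if the two faces are judged to belong to the same user.

    embed -- any module mapping a (1, C, H, W) face tensor to an embedding vector
    The threshold value is illustrative.
    """
    with torch.no_grad():
        e_live = embed(face_frame.unsqueeze(0))
        e_id = embed(descreened_id_image.unsqueeze(0))
    similarity = F.cosine_similarity(e_live, e_id, dim=1).item()
    return similarity >= threshold
```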

上述身份验证方法，在需要进行用户身份验证时，采集与用户标识对应的人脸图像帧，再从与该用户标识对应的身份证件中，读取带网纹人脸证件图像，然后通过按照图像生成模型与鉴别模型相互对抗、相互促进的方式训练得到的图像生成模型，生成去噪效果好且不失真的去网纹人脸证件图像，再将采集的人脸图像帧和生成的去网纹人脸证件图像对比，即可得到身份验证结果，极大地提高了身份验证的准确性。In the above identity verification method, when user identity verification is required, the face image frame corresponding to the user identifier is collected, the screened face certificate image is read from the identity document corresponding to the user identifier, and then a de-screened face certificate image with a good denoising effect and no distortion is generated through the image generation model trained in a manner in which the image generation model and the identification model compete against and promote each other. The collected face image frame is then compared with the generated de-screened face certificate image to obtain the identity verification result, which greatly improves the accuracy of identity verification.

如图8所示,在一个实施例中,提供了一种模型训练装置800,包括:图像获取模块801、第一输出模块802、图像生成模块803、第二输出模块804和模型调整模块805。As shown in FIG. 8 , in one embodiment, a model training device 800 is provided, including: an image acquisition module 801 , a first output module 802 , an image generation module 803 , a second output module 804 and a model adjustment module 805 .

图像获取模块801,用于获取原始图像和相应的带噪声图像。An image acquisition module 801, configured to acquire an original image and a corresponding image with noise.

第一输出模块802,用于将原始图像和带噪声图像输入鉴别模型,得到第一鉴别置信度。The first output module 802 is configured to input the original image and the image with noise into the identification model to obtain a first identification confidence.

图像生成模块803,用于通过图像生成模型,生成带噪声图像的去噪声图像。The image generation module 803 is configured to generate a denoised image of the noisy image through the image generation model.

第二输出模块804,用于将去噪声图像和带噪声图像输入鉴别模型,得到第二鉴别置信度。The second output module 804 is configured to input the denoised image and the image with noise into the identification model to obtain a second identification confidence.

模型调整模块805,用于按照增大第一鉴别置信度和第二鉴别置信度间差异的调整方式,调整鉴别模型和图像生成模型,并继续训练,直至满足训练结束条件。The model adjustment module 805 is configured to adjust the identification model and the image generation model according to the adjustment method of increasing the difference between the first identification confidence level and the second identification confidence level, and continue training until the training end condition is met.

上述模型训练装置800,包括图像生成模型和鉴别模型两个模型的训练。其中,训练图像生成模型的过程在于学习生成带噪声图像的去噪声图像,训练鉴别模型的过程在于学习在给定带噪声图像的条件下,学习判断输入的另一图像是原始图像还是通过图像生成模型生成的去噪声图像。这样图像生成模型学习生成与原始图像更相似的图像,以干扰鉴别模型的判断,鉴别模型学习更加精准地进行原始图像和去噪声图像的判断,两个模型相互对抗,相互促进,使得训练得到的模型性能更优,从而在使用训练得到的图像生成模型进行图像去噪时,能够极大程度上克服图像失真的问题。The above-mentioned model training device 800 includes the training of two models, the image generation model and the discrimination model. Among them, the process of training the image generation model is to learn to generate denoised images of noisy images, and the process of training the identification model is to learn to judge whether another input image is an original image or generated through an image under the condition of a given noisy image. The denoised image generated by the model. In this way, the image generation model learns to generate an image that is more similar to the original image to interfere with the judgment of the discrimination model, and the discrimination model learns to judge the original image and the denoised image more accurately. The performance of the model is better, so that when using the trained image generation model for image denoising, the problem of image distortion can be overcome to a great extent.

在一个实施例中，原始图像为原始人脸证件图像。图像获取模块801还用于获取原始人脸证件图像集；从原始人脸证件图像集中选取原始人脸证件图像；为选取的原始人脸证件图像添加网纹，得到相应的带噪声图像。In one embodiment, the original image is an original face certificate image. The image acquisition module 801 is further configured to acquire an original face certificate image set, select an original face certificate image from the original face certificate image set, and add a mesh to the selected original face certificate image to obtain the corresponding noisy image.

在本实施例中,以原始人脸证件图像和添加网纹后得到的带噪声图像为样本来训练图像生成模型和鉴别模型,这样每个样本均可以尽可能为模型的训练提供有用信息,提高模型训练效率,且能够训练得到可从添加网纹后得到的带噪声图像中恢复出不带网纹的清晰的人脸图像的图像生成模型,从而可利用恢复出的清晰的人脸图像进行后续的人脸识别或者身份验证。In this embodiment, the image generation model and the discrimination model are trained by using the original face certificate image and the noisy image obtained after adding mesh as samples, so that each sample can provide useful information for the training of the model as much as possible, and improve Model training efficiency, and can be trained to obtain an image generation model that can restore a clear face image without mesh from a noisy image obtained after adding mesh, so that the recovered clear face image can be used for subsequent facial recognition or identity verification.

在一个实施例中,鉴别模型为卷积神经网络模型。第一输出模块802还用于分别将原始图像和带噪声图像输入鉴别模型;获取鉴别模型中的卷积层输出的与原始图像对应的特征图;获取卷积层输出的与带噪声图像对应的特征图;根据与原始图像对应的特征图和与带噪声图像对应的特征图,计算原始图像与带噪声图像的第一鉴别置信度。In one embodiment, the discrimination model is a convolutional neural network model. The first output module 802 is also used to respectively input the original image and the image with noise into the discrimination model; obtain the feature map corresponding to the original image output by the convolution layer in the discrimination model; obtain the feature map corresponding to the noise image output by the convolution layer A feature map; according to the feature map corresponding to the original image and the feature map corresponding to the noisy image, calculate the first discrimination confidence between the original image and the noisy image.

在本实施例中，鉴别模型的卷积层所输出的特征图，可以更好地反映出相应输入图像的特性，从而可以根据反映特征的特征图计算鉴别模型的鉴别置信度，可进一步提高鉴别模型的训练效率，并保证训练出的鉴别模型的鉴别准确性。In this embodiment, the feature map output by the convolution layer of the identification model better reflects the characteristics of the corresponding input image, so that the identification confidence of the identification model can be calculated from the feature map reflecting those characteristics, which can further improve the training efficiency of the identification model and ensure the identification accuracy of the trained identification model.

在一个实施例中,图像生成模型为卷积神经网络模型;卷积神经网络模型包括编码层和相应的解码层。图像生成模块803还用于将带噪声图像输入图像生成模型;在按照图像生成模型所包括的层的顺序,将当前层输出的特征图作为下一层的输入时,若当前层为解码层,则获取与当前解码层相应的编码层所输出的特征图,并将获取的特征图与当前解码层输出的特征图融合后输入下一层;获取图像生成模型中末层输出的特征图,得到去噪声图像。In one embodiment, the image generation model is a convolutional neural network model; the convolutional neural network model includes an encoding layer and a corresponding decoding layer. The image generation module 803 is also used to input the noisy image into the image generation model; when the feature map output by the current layer is used as the input of the next layer according to the order of the layers included in the image generation model, if the current layer is a decoding layer, The feature map output by the coding layer corresponding to the current decoding layer is obtained, and the obtained feature map is fused with the feature map output by the current decoding layer and then input to the next layer; the feature map output by the last layer in the image generation model is obtained, and obtained Denoise the image.

在本实施例中,由于图像生成模型中的编码层和解码层在逐层进行特征提取时,得到的特征图中提取的特征更加明显,但包括的原始输入的图像的信息越来越少,为此,通过在对特征图进行解码操作时,将相应的编码层输出的特征图进行融合,使得后续解码层的输入图像即明确了图像特征,又能携带原始输入图像的图像信息,从而保证最终输出的图像既能去除噪声,又能够保留完整的图像信息。In this embodiment, since the encoding layer and decoding layer in the image generation model perform feature extraction layer by layer, the features extracted in the obtained feature map are more obvious, but the information of the original input image included is less and less, For this reason, when the feature map is decoded, the feature map output by the corresponding coding layer is fused, so that the input image of the subsequent decoding layer can clarify the image characteristics and carry the image information of the original input image, thus ensuring The final output image can not only remove noise, but also retain complete image information.

在一个实施例中,模型调整模块805还用于按照最大化第一鉴别置信度的调整方式,调整鉴别模型;按照最小化第二鉴别置信度的调整方式,调整鉴别模型和图像生成模型。In one embodiment, the model adjustment module 805 is further configured to adjust the identification model according to the adjustment manner of maximizing the first identification confidence; and adjust the identification model and the image generation model according to the adjustment manner of minimizing the second identification confidence.

在本实施例中,通过调整图像生成模型和鉴别模型的模型参数,以最大化输入原始图像和带噪声图像得到的第一鉴别置信度,最小化输入去噪声图像和带噪声图像得到的第二鉴别置信度,不断增大第一鉴别置信度和第二鉴别置信度的差异,使得鉴别模型能够精确鉴别输入的非带噪声图像的另一图像是否为原始图像,图像生成模型能够生成更接近原始图像的去噪声图像来干扰鉴别模型,从而提高鉴别模型的鉴别准确率以及图像生成模型的图像生成效果。In this embodiment, by adjusting the model parameters of the image generation model and the identification model, the first identification confidence obtained by inputting the original image and the noisy image is maximized, and the second identification confidence obtained by inputting the denoised image and the noisy image is minimized. Discrimination confidence, continuously increasing the difference between the first discrimination confidence and the second discrimination confidence, so that the discrimination model can accurately identify whether another image of the input non-noisy image is the original image, and the image generation model can generate images closer to the original The denoised image of the image interferes with the identification model, thereby improving the identification accuracy of the identification model and the image generation effect of the image generation model.

在一个实施例中,模型调整模块805还用于按照鉴别模型所包括的层的次序,逆序确定第一鉴别置信度随各层所对应的模型参数的变化率;按逆序调整鉴别模型所包括的层所对应的模型参数,使得第一鉴别置信度随相应调整的层所对应的模型参数的变化率增大。In one embodiment, the model adjustment module 805 is also used to determine the rate of change of the first identification confidence level with the model parameters corresponding to each layer in reverse order according to the order of the layers included in the identification model; The model parameters corresponding to the layers, so that the first identification confidence increases with the rate of change of the correspondingly adjusted model parameters corresponding to the layers.

在本实施例中，通过反向传播方式求解第一鉴别置信度随鉴别模型各层所对应的模型参数的变化率，通过调节鉴别模型各层所对应的模型参数使得计算得到的变化率增大，以训练鉴别模型，使得训练得到的鉴别模型效果更优。In this embodiment, the rate of change of the first identification confidence with respect to the model parameters of each layer of the identification model is computed by back propagation, and the model parameters of each layer of the identification model are adjusted so that the computed rate of change increases, thereby training the identification model so that the trained identification model performs better.

在一个实施例中,模型调整模块805还用于依次按照图像生成模型和鉴别模型所包括的层的次序,逆序确定第二鉴别置信度随各层所对应的模型参数的变化率;按逆序,调整鉴别模型和图像生成模型所包括的层所对应的模型参数,使得第二鉴别置信度随相应调整的层所对应的模型参数的变化率减小。In one embodiment, the model adjustment module 805 is further configured to determine the rate of change of the second discrimination confidence level with the model parameters corresponding to each layer in reverse order according to the order of the layers included in the image generation model and the discrimination model; in reverse order, The model parameters corresponding to the layers included in the identification model and the image generation model are adjusted, so that the rate of change of the second identification confidence degree with the model parameters corresponding to the adjusted layers decreases.

在本实施例中,通过反向传播方式求解第二鉴别置信度随图像生成模型和鉴别模型各层所对应的模型参数的变化率,通过调节图像生成模型和鉴别模型各层所对应的模型参数使得计算得到的变化率减小,以训练图像生成模型和鉴别模型,使得训练得到的图像生成模型和鉴别模型效果更优。In this embodiment, the rate of change of the second discrimination confidence degree with the model parameters corresponding to the layers of the image generation model and the discrimination model is solved by back propagation, and by adjusting the model parameters corresponding to the layers of the image generation model and the discrimination model The calculated rate of change is reduced to train the image generation model and the identification model, so that the effect of the trained image generation model and identification model is better.

如图9所示,在一个实施例中,提供了一种身份验证装置900,包括:获取模块901、生成模块902和验证模块903。As shown in FIG. 9 , in one embodiment, an identity verification device 900 is provided, including: an acquisition module 901 , a generation module 902 and a verification module 903 .

获取模块901,用于获取与用户标识对应的人脸图像帧;从与用户标识对应的身份证件中,获取带网纹人脸证件图像。The obtaining module 901 is configured to obtain a face image frame corresponding to the user identification; and obtain a face certificate image with a texture from the identity certificate corresponding to the user identification.

生成模块902,用于通过上述模型训练方法训练得到的图像生成模型,生成带网纹人脸证件图像的去网纹人脸证件图像。The generation module 902 is used to generate the de-screened face certificate image of the screened face certificate image using the image generation model trained by the above-mentioned model training method.

验证模块903,用于将人脸图像帧和去网纹人脸证件图像对比,得到身份验证结果。The verification module 903 is configured to compare the face image frame with the de-screened face certificate image to obtain an identity verification result.

上述身份验证装置900,在需要进行用户身份验证时,采集与用户标识对应的人脸图像帧,再从与该用户标识对应的身份证件中,读取带网纹人脸证件图像,然后通过按照图像生成模型与鉴别模型相互对抗、相互促进的方式训练得到的图像生成模型,生成去噪效果好且不失真的去网纹人脸证件图像,再将采集的人脸图像帧和生成的去网纹人脸证件图像对比,即可得到身份验证结果,极大地提高了身份验证的准确性。The above-mentioned identity verification device 900, when user identity verification is required, collects the face image frame corresponding to the user identification, and then reads the face certificate image with texture from the identity certificate corresponding to the user identification, and then passes the The image generation model obtained by training the image generation model and the identification model against each other and promoting each other can generate a de-screened face certificate image with good denoising effect and no distortion, and then combine the collected face image frame and the generated de-networked image The identity verification result can be obtained by comparing the image of the fingerprinted face certificate, which greatly improves the accuracy of identity verification.

图10示出了一个实施例中计算机设备的内部结构图。该计算机设备具体可以是图1中的终端110或服务器120。如图10所示,该计算机设备包括通过系统总线连接的处理器、非易失性存储介质、内存储器和网络接口。其中,处理器包括中央处理器和图形处理器。该计算机设备的非易失性存储介质存储有操作系统,还可存储有计算机可读指令,该计算机可读指令被处理器执行时,可使得处理器实现模型训练方法和/或身份验证方法。该中央处理器用于提供计算和控制能力,支撑整个计算机设备的运行,该图形处理器用于执行图形处理指令。该内存储器中也可储存有计算机可读指令,该计算机可读指令被所述处理器执行时,可使得所述处理器执行模型训练方法和/或身份验证方法。本领域技术人员可以理解,图10中示出的结构,仅仅是与本申请方案相关的部分结构的框图,并不构成对本申请方案所应用于其上的计算机设备的限定,具体的计算机设备可以包括比图中所示更多或更少的部件,或者组合某些部件,或者具有不同的部件布置。Figure 10 shows a diagram of the internal structure of a computer device in one embodiment. Specifically, the computer device may be the terminal 110 or the server 120 in FIG. 1 . As shown in FIG. 10, the computer device includes a processor, a non-volatile storage medium, an internal memory, and a network interface connected through a system bus. Wherein, the processor includes a central processing unit and a graphics processing unit. The non-volatile storage medium of the computer device stores an operating system, and may also store computer-readable instructions. When the computer-readable instructions are executed by the processor, the processor may implement a model training method and/or an identity verification method. The CPU is used to provide computing and control capabilities to support the operation of the entire computer device, and the graphics processor is used to execute graphics processing instructions. Computer-readable instructions may also be stored in the internal memory, and when the computer-readable instructions are executed by the processor, the processor may execute a model training method and/or an identity verification method. Those skilled in the art can understand that the structure shown in Figure 10 is only a block diagram of a part of the structure related to the solution of this application, and does not constitute a limitation to the computer equipment on which the solution of this application is applied. The specific computer equipment can be More or fewer components than shown in the figures may be included, or some components may be combined, or have a different arrangement of components.

在一个实施例中,本申请提供的应用程序处理装置可以实现为一种计算机程序的形式,所述计算机程序可在如图10所示的计算机设备上运行,所述计算机设备的非易失性存储介质可存储组成该应用程序处理装置的各个程序模块,比如,图8所示的图像获取模块801或者图9所示的获取模块901等。各个程序模块中包括计算机可读指令,所述计算机可读指令用于使所述计算机设备执行本说明书中描述的本申请各个实施例的应用程序处理方法中的步骤。In one embodiment, the application program processing device provided by the present application can be implemented as a form of computer program, and the computer program can run on the computer equipment shown in Figure 10, and the non-volatile The storage medium may store various program modules constituting the application program processing apparatus, for example, the image acquisition module 801 shown in FIG. 8 or the acquisition module 901 shown in FIG. 9 . Each program module includes computer-readable instructions, and the computer-readable instructions are used to make the computer device execute the steps in the application program processing methods of the various embodiments of the present application described in this specification.

例如,所述计算机设备可以通过如图8所示的模型训练装置800中的图像获取模块801获取原始图像和相应的带噪声图像,通过第一输出模块802将原始图像和带噪声图像输入鉴别模型,得到第一鉴别置信度,通过图像生成模块803通过图像生成模型,生成带噪声图像的去噪声图像,通过第二输出模块804将去噪声图像和带噪声图像输入鉴别模型,得到第二鉴别置信度,通过模型调整模块805按照增大第一鉴别置信度和第二鉴别置信度间差异的调整方式,调整鉴别模型和图像生成模型,并继续训练,直至满足训练结束条件。For example, the computer device can obtain the original image and the corresponding image with noise through the image acquisition module 801 in the model training device 800 as shown in FIG. , to obtain the first identification confidence, through the image generation module 803 through the image generation model, generate the denoised image of the noisy image, through the second output module 804, input the denoised image and the noisy image into the identification model, and obtain the second identification confidence Degree, through the model adjustment module 805, adjust the identification model and image generation model according to the adjustment method of increasing the difference between the first identification confidence level and the second identification confidence level, and continue training until the training end condition is met.

还例如，所述计算机设备可以通过如图9所示的身份验证装置900中的获取模块901获取与用户标识对应的人脸图像帧，从与用户标识对应的身份证件中，获取带网纹人脸证件图像，通过生成模块902通过上述模型训练方法训练得到的图像生成模型，生成带网纹人脸证件图像的去网纹人脸证件图像，通过验证模块903将人脸图像帧和去网纹人脸证件图像对比，得到身份验证结果。For another example, the computer device may acquire, through the acquisition module 901 in the identity verification apparatus 900 shown in Fig. 9, the face image frame corresponding to the user identifier and obtain the screened face certificate image from the identity document corresponding to the user identifier; generate, through the generation module 902 and the image generation model trained by the above model training method, the de-screened face certificate image of the screened face certificate image; and compare, through the verification module 903, the face image frame with the de-screened face certificate image to obtain the identity verification result.

在一个实施例中,提供了一种计算机可读存储介质,该计算机可读存储介质上存储有计算机可读指令,该计算机可读指令被处理器执行时,使得处理器执行以下步骤:获取原始图像和相应的带噪声图像;将原始图像和带噪声图像输入鉴别模型,得到第一鉴别置信度;通过图像生成模型,生成带噪声图像的去噪声图像;将去噪声图像和带噪声图像输入鉴别模型,得到第二鉴别置信度;按照增大第一鉴别置信度和第二鉴别置信度间差异的调整方式,调整鉴别模型和图像生成模型,并继续训练,直至满足训练结束条件。In one embodiment, a computer-readable storage medium is provided. Computer-readable instructions are stored on the computer-readable storage medium. When executed by a processor, the computer-readable instructions cause the processor to perform the following steps: obtain the original image and the corresponding noisy image; input the original image and the noisy image into the discriminant model to obtain the first discriminative confidence; through the image generation model, generate a denoised image of the noisy image; input the denoised image and the noisy image into the discriminator model to obtain the second identification confidence degree; adjust the identification model and the image generation model according to the adjustment method of increasing the difference between the first identification confidence degree and the second identification confidence degree, and continue training until the training end condition is met.

在一个实施例中,原始图像为原始人脸证件图像。获取原始图像和相应的带噪声图像,包括:获取原始人脸证件图像集;从原始人脸证件图像集中选取原始人脸证件图像;为选取的原始人脸证件图像添加网纹,得到相应的带噪声图像。In one embodiment, the original image is an original face certificate image. Obtain the original image and the corresponding image with noise, including: obtain the original face certificate image set; select the original face certificate image from the original face certificate image set; add mesh to the selected original face certificate image to obtain the corresponding noisy image.

在一个实施例中,鉴别模型为卷积神经网络模型。将原始图像和带噪声图像输入鉴别模型,得到第一鉴别置信度,包括:分别将原始图像和带噪声图像输入鉴别模型;获取鉴别模型中的卷积层输出的与原始图像对应的特征图;获取卷积层输出的与带噪声图像对应的特征图;根据与原始图像对应的特征图和与带噪声图像对应的特征图,计算原始图像与带噪声图像的第一鉴别置信度。In one embodiment, the discrimination model is a convolutional neural network model. Inputting the original image and the image with noise into the identification model to obtain the first identification confidence, including: inputting the original image and the image with noise into the identification model respectively; obtaining the feature map corresponding to the original image output by the convolutional layer in the identification model; Obtain the feature map corresponding to the noisy image output by the convolutional layer; calculate the first discrimination confidence between the original image and the noisy image according to the feature map corresponding to the original image and the feature map corresponding to the noisy image.

在一个实施例中，图像生成模型为卷积神经网络模型。卷积神经网络模型包括编码层和相应的解码层。通过图像生成模型，生成带噪声图像的去噪声图像，包括：将带噪声图像输入图像生成模型；在按照图像生成模型所包括的层的顺序，将当前层输出的特征图作为下一层的输入时，若当前层为解码层，则获取与当前解码层相应的编码层所输出的特征图，并将获取的特征图与当前解码层输出的特征图融合后输入下一层；获取图像生成模型中末层输出的特征图，得到去噪声图像。In one embodiment, the image generation model is a convolutional neural network model. The convolutional neural network model includes encoding layers and corresponding decoding layers. Generating the denoised image of the noisy image through the image generation model includes: inputting the noisy image into the image generation model; when, following the order of the layers included in the image generation model, the feature map output by the current layer is used as the input of the next layer, if the current layer is a decoding layer, obtaining the feature map output by the encoding layer corresponding to the current decoding layer, fusing the obtained feature map with the feature map output by the current decoding layer, and inputting the fused result into the next layer; and obtaining the feature map output by the last layer in the image generation model to obtain the denoised image.

在一个实施例中,按照增大第一鉴别置信度和第二鉴别置信度间差异的调整方式,调整鉴别模型和图像生成模型,包括:按照最大化第一鉴别置信度的调整方式,调整鉴别模型;按照最小化第二鉴别置信度的调整方式,调整鉴别模型和图像生成模型。In one embodiment, adjusting the identification model and the image generation model according to the adjustment manner of increasing the difference between the first identification confidence degree and the second identification confidence degree includes: adjusting the identification model according to the adjustment manner of maximizing the first identification confidence degree. model; adjusting the discrimination model and the image generation model according to the adjustment manner of minimizing the second discrimination confidence.

在一个实施例中,按照最大化第一鉴别置信度的调整方式,调整鉴别模型,包括:按照鉴别模型所包括的层的次序,逆序确定第一鉴别置信度随各层所对应的模型参数的变化率;按逆序调整鉴别模型所包括的层所对应的模型参数,使得第一鉴别置信度随相应调整的层所对应的模型参数的变化率增大。In one embodiment, adjusting the identification model according to the adjustment method of maximizing the first identification confidence degree includes: determining the first identification confidence degree in reverse order according to the order of the layers included in the identification model along with the model parameters corresponding to each layer Rate of change: adjust the model parameters corresponding to the layers included in the identification model in reverse order, so that the first identification confidence increases with the rate of change of the model parameters corresponding to the adjusted layers.

在一个实施例中,按照最小化第二鉴别置信度的调整方式,调整鉴别模型和图像生成模型,包括:依次按照图像生成模型和鉴别模型所包括的层的次序,逆序确定第二鉴别置信度随各层所对应的模型参数的变化率;按逆序,调整鉴别模型和图像生成模型所包括的层所对应的模型参数,使得第二鉴别置信度随相应调整的层所对应的模型参数的变化率减小。In one embodiment, adjusting the identification model and the image generation model according to the adjustment manner of minimizing the second identification confidence level includes: sequentially determining the second identification confidence level according to the order of the layers included in the image generation model and the identification model in reverse order With the rate of change of the model parameters corresponding to each layer; in reverse order, adjust the model parameters corresponding to the layers included in the identification model and the image generation model, so that the second identification confidence varies with the model parameters corresponding to the correspondingly adjusted layers rate decreases.

上述存储介质,包括图像生成模型和鉴别模型两个模型的训练。其中,训练图像生成模型的过程在于学习生成带噪声图像的去噪声图像,训练鉴别模型的过程在于学习在给定带噪声图像的条件下,学习判断输入的另一图像是原始图像还是通过图像生成模型生成的去噪声图像。这样图像生成模型学习生成与原始图像更相似的图像,以干扰鉴别模型的判断,鉴别模型学习更加精准地进行原始图像和去噪声图像的判断,两个模型相互对抗,相互促进,使得训练得到的模型性能更优,从而在使用训练得到的图像生成模型进行图像去噪时,能够极大程度上克服图像失真的问题。The above-mentioned storage medium includes the training of two models, the image generation model and the identification model. Among them, the process of training the image generation model is to learn to generate denoised images of noisy images, and the process of training the identification model is to learn to judge whether another input image is an original image or generated through an image under the condition of a given noisy image. The denoised image generated by the model. In this way, the image generation model learns to generate an image that is more similar to the original image to interfere with the judgment of the discrimination model, and the discrimination model learns to judge the original image and the denoised image more accurately. The performance of the model is better, so that when using the trained image generation model for image denoising, the problem of image distortion can be overcome to a great extent.

在一个实施例中，提供了一种计算机可读存储介质，该计算机可读存储介质上存储有计算机可读指令，该计算机可读指令被处理器执行时，使得处理器执行以下步骤：获取与用户标识对应的人脸图像帧；从与用户标识对应的身份证件中，获取带网纹人脸证件图像；通过上述模型训练方法训练得到的图像生成模型，生成带网纹人脸证件图像的去网纹人脸证件图像；将人脸图像帧和去网纹人脸证件图像对比，得到身份验证结果。In one embodiment, a computer-readable storage medium is provided, on which computer-readable instructions are stored. When executed by a processor, the computer-readable instructions cause the processor to perform the following steps: acquiring the face image frame corresponding to the user identifier; obtaining the screened face certificate image from the identity document corresponding to the user identifier; generating, through the image generation model trained by the above model training method, the de-screened face certificate image of the screened face certificate image; and comparing the face image frame with the de-screened face certificate image to obtain the identity verification result.

上述存储介质,在需要进行用户身份验证时,采集与用户标识对应的人脸图像帧,再从与该用户标识对应的身份证件中,读取带网纹人脸证件图像,然后通过按照图像生成模型与鉴别模型相互对抗、相互促进的方式训练得到的图像生成模型,生成去噪效果好且不失真的去网纹人脸证件图像,再将采集的人脸图像帧和生成的去网纹人脸证件图像对比,即可得到身份验证结果,极大地提高了身份验证的准确性。The above-mentioned storage medium, when user identity verification is required, collects the face image frame corresponding to the user ID, and then reads the image of the face certificate with texture from the ID card corresponding to the user ID, and then generates The image generation model obtained by training the model and the identification model against each other and promoting each other can generate a de-screened face certificate image with good denoising effect and no distortion, and then combine the collected face image frame and the generated de-screened face The identity verification result can be obtained by comparing the face certificate image, which greatly improves the accuracy of identity verification.

在一个实施例中,提供了一种计算机设备,包括存储器和处理器,所述存储器中储存有计算机可读指令,所述计算机可读指令被所述处理器执行时,使得所述处理器执行以下步骤:获取原始图像和相应的带噪声图像;将原始图像和带噪声图像输入鉴别模型,得到第一鉴别置信度;通过图像生成模型,生成带噪声图像的去噪声图像;将去噪声图像和带噪声图像输入鉴别模型,得到第二鉴别置信度;按照增大第一鉴别置信度和第二鉴别置信度间差异的调整方式,调整鉴别模型和图像生成模型,并继续训练,直至满足训练结束条件。In one embodiment, a computer device is provided, comprising a memory and a processor, the memory storing computer-readable instructions that, when executed by the processor, cause the processor to perform The following steps: obtain the original image and the corresponding image with noise; input the original image and the image with noise into the identification model to obtain the first identification confidence; through the image generation model, generate a denoised image of the image with noise; denoise the image and Input the noisy image into the identification model to obtain the second identification confidence degree; adjust the identification model and the image generation model according to the adjustment method of increasing the difference between the first identification confidence degree and the second identification confidence degree, and continue training until the training ends condition.

在一个实施例中,原始图像为原始人脸证件图像。获取原始图像和相应的带噪声图像,包括:获取原始人脸证件图像集;从原始人脸证件图像集中选取原始人脸证件图像;为选取的原始人脸证件图像添加网纹,得到相应的带噪声图像。In one embodiment, the original image is an original face certificate image. Obtain the original image and the corresponding image with noise, including: obtain the original face certificate image set; select the original face certificate image from the original face certificate image set; add mesh to the selected original face certificate image to obtain the corresponding noisy image.

在一个实施例中,鉴别模型为卷积神经网络模型。将原始图像和带噪声图像输入鉴别模型,得到第一鉴别置信度,包括:分别将原始图像和带噪声图像输入鉴别模型;获取鉴别模型中的卷积层输出的与原始图像对应的特征图;获取卷积层输出的与带噪声图像对应的特征图;根据与原始图像对应的特征图和与带噪声图像对应的特征图,计算原始图像与带噪声图像的第一鉴别置信度。In one embodiment, the discrimination model is a convolutional neural network model. Inputting the original image and the image with noise into the identification model to obtain the first identification confidence, including: inputting the original image and the image with noise into the identification model respectively; obtaining the feature map corresponding to the original image output by the convolutional layer in the identification model; Obtain the feature map corresponding to the noisy image output by the convolutional layer; calculate the first discrimination confidence between the original image and the noisy image according to the feature map corresponding to the original image and the feature map corresponding to the noisy image.

在一个实施例中，图像生成模型为卷积神经网络模型。卷积神经网络模型包括编码层和相应的解码层。通过图像生成模型，生成带噪声图像的去噪声图像，包括：将带噪声图像输入图像生成模型；在按照图像生成模型所包括的层的顺序，将当前层输出的特征图作为下一层的输入时，若当前层为解码层，则获取与当前解码层相应的编码层所输出的特征图，并将获取的特征图与当前解码层输出的特征图融合后输入下一层；获取图像生成模型中末层输出的特征图，得到去噪声图像。In one embodiment, the image generation model is a convolutional neural network model. The convolutional neural network model includes encoding layers and corresponding decoding layers. Generating the denoised image of the noisy image through the image generation model includes: inputting the noisy image into the image generation model; when, following the order of the layers included in the image generation model, the feature map output by the current layer is used as the input of the next layer, if the current layer is a decoding layer, obtaining the feature map output by the encoding layer corresponding to the current decoding layer, fusing the obtained feature map with the feature map output by the current decoding layer, and inputting the fused result into the next layer; and obtaining the feature map output by the last layer in the image generation model to obtain the denoised image.

在一个实施例中,按照增大第一鉴别置信度和第二鉴别置信度间差异的调整方式,调整鉴别模型和图像生成模型,包括:按照最大化第一鉴别置信度的调整方式,调整鉴别模型;按照最小化第二鉴别置信度的调整方式,调整鉴别模型和图像生成模型。In one embodiment, adjusting the identification model and the image generation model according to the adjustment manner of increasing the difference between the first identification confidence degree and the second identification confidence degree includes: adjusting the identification model according to the adjustment manner of maximizing the first identification confidence degree. model; adjusting the discrimination model and the image generation model according to the adjustment manner of minimizing the second discrimination confidence.

在一个实施例中,按照最大化第一鉴别置信度的调整方式,调整鉴别模型,包括:按照鉴别模型所包括的层的次序,逆序确定第一鉴别置信度随各层所对应的模型参数的变化率;按逆序调整鉴别模型所包括的层所对应的模型参数,使得第一鉴别置信度随相应调整的层所对应的模型参数的变化率增大。In one embodiment, adjusting the identification model according to the adjustment method of maximizing the first identification confidence degree includes: determining the first identification confidence degree in reverse order according to the order of the layers included in the identification model along with the model parameters corresponding to each layer Rate of change: adjust the model parameters corresponding to the layers included in the identification model in reverse order, so that the first identification confidence increases with the rate of change of the model parameters corresponding to the adjusted layers.

在一个实施例中,按照最小化第二鉴别置信度的调整方式,调整鉴别模型和图像生成模型,包括:依次按照图像生成模型和鉴别模型所包括的层的次序,逆序确定第二鉴别置信度随各层所对应的模型参数的变化率;按逆序,调整鉴别模型和图像生成模型所包括的层所对应的模型参数,使得第二鉴别置信度随相应调整的层所对应的模型参数的变化率减小。In one embodiment, adjusting the identification model and the image generation model according to the adjustment manner of minimizing the second identification confidence level includes: sequentially determining the second identification confidence level according to the order of the layers included in the image generation model and the identification model in reverse order With the rate of change of the model parameters corresponding to each layer; in reverse order, adjust the model parameters corresponding to the layers included in the identification model and the image generation model, so that the second identification confidence varies with the model parameters corresponding to the correspondingly adjusted layers rate decreases.

上述计算机设备,包括图像生成模型和鉴别模型两个模型的训练。其中,训练图像生成模型的过程在于学习生成带噪声图像的去噪声图像,训练鉴别模型的过程在于学习在给定带噪声图像的条件下,学习判断输入的另一图像是原始图像还是通过图像生成模型生成的去噪声图像。这样图像生成模型学习生成与原始图像更相似的图像,以干扰鉴别模型的判断,鉴别模型学习更加精准地进行原始图像和去噪声图像的判断,两个模型相互对抗,相互促进,使得训练得到的模型性能更优,从而在使用训练得到的图像生成模型进行图像去噪时,能够极大程度上克服图像失真的问题。The above-mentioned computer equipment includes the training of two models, an image generation model and a discrimination model. Among them, the process of training the image generation model is to learn to generate denoised images of noisy images, and the process of training the identification model is to learn to judge whether another input image is an original image or generated through an image under the condition of a given noisy image. The denoised image generated by the model. In this way, the image generation model learns to generate an image that is more similar to the original image to interfere with the judgment of the discrimination model, and the discrimination model learns to judge the original image and the denoised image more accurately. The performance of the model is better, so that when using the trained image generation model for image denoising, the problem of image distortion can be overcome to a great extent.

在一个实施例中，提供了一种计算机设备，包括存储器和处理器，所述存储器中储存有计算机可读指令，所述计算机可读指令被所述处理器执行时，使得所述处理器执行以下步骤：获取与用户标识对应的人脸图像帧；从与用户标识对应的身份证件中，获取带网纹人脸证件图像；通过上述模型训练方法训练得到的图像生成模型，生成带网纹人脸证件图像的去网纹人脸证件图像；将人脸图像帧和去网纹人脸证件图像对比，得到身份验证结果。In one embodiment, a computer device is provided, including a memory and a processor. The memory stores computer-readable instructions which, when executed by the processor, cause the processor to perform the following steps: acquiring the face image frame corresponding to the user identifier; obtaining the screened face certificate image from the identity document corresponding to the user identifier; generating, through the image generation model trained by the above model training method, the de-screened face certificate image of the screened face certificate image; and comparing the face image frame with the de-screened face certificate image to obtain the identity verification result.

With the above computer device, when user identity verification is required, a face image frame corresponding to a user identifier is captured, and a face document image with a reticulate pattern is read from the identity document corresponding to that user identifier. A descreened face document image that is well denoised and free of distortion is then generated by the image generation model obtained by training the image generation model and the discrimination model against each other in a mutually reinforcing manner. Comparing the captured face image frame with the generated descreened face document image yields the identity verification result, which greatly improves the accuracy of identity verification.

Those of ordinary skill in the art will understand that all or part of the processes in the methods of the above embodiments can be implemented by a computer program instructing relevant hardware. The program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the embodiments of the above methods. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), or the like.

The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as a combination of these technical features contains no contradiction, it should be considered to fall within the scope of this specification.

The above embodiments express only several implementations of the present invention, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the invention patent. It should be noted that those of ordinary skill in the art may make several modifications and improvements without departing from the concept of the present invention, and these all fall within the protection scope of the present invention. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (14)

1. A method of model training, the method comprising:
acquiring an original image and a noisy image obtained by adding noise to the original image;
inputting the original image and the noisy image into a discrimination model respectively;
acquiring a feature map corresponding to the original image output by a convolution layer in the discrimination model;
acquiring a feature map corresponding to the noisy image output by the convolution layer;
calculating a first discrimination confidence for the original image and the noisy image according to the feature map corresponding to the original image and the feature map corresponding to the noisy image;
inputting the noisy image into an image generation model;
when, in the order of the layers included in the image generation model, the feature map output by a current layer is used as the input of the next layer, if the current layer is a decoding layer, acquiring the feature map output by the coding layer corresponding to the current decoding layer, fusing the acquired feature map with the feature map output by the current decoding layer, and inputting the fused feature map into the next layer; and acquiring the feature map output by the last layer in the image generation model to obtain a denoised image;
inputting the denoised image and the noisy image into the discrimination model to obtain a second discrimination confidence;
and adjusting the discrimination model and the image generation model in a manner that increases the difference between the first discrimination confidence and the second discrimination confidence, and continuing training until a training end condition is met.
2. The method of claim 1, wherein the original image is an original face document image;
and the acquiring an original image and a noisy image obtained by adding noise to the original image comprises:
acquiring a set of original face document images;
selecting an original face document image from the set;
and adding a reticulate pattern to the selected original face document image to obtain the corresponding noisy image.
3. The method of claim 1, wherein the adjusting the discrimination model and the image generation model in a manner that increases the difference between the first discrimination confidence and the second discrimination confidence comprises:
adjusting the discrimination model in a manner that maximizes the first discrimination confidence;
and adjusting the discrimination model and the image generation model in a manner that minimizes the second discrimination confidence.
4. The method of claim 3, wherein the adjusting the discrimination model in a manner that maximizes the first discrimination confidence comprises:
determining, in reverse order of the layers included in the discrimination model, the rate of change of the first discrimination confidence with respect to the model parameters of each layer;
and adjusting, in the reverse order, the model parameters of the layers included in the discrimination model so that the first discrimination confidence increases with the rate of change of the model parameters of the correspondingly adjusted layers.
5. The method of claim 3, wherein the adjusting the discrimination model and the image generation model in a manner that minimizes the second discrimination confidence comprises:
determining, in reverse order of the layers included in the image generation model and the discrimination model, the rate of change of the second discrimination confidence with respect to the model parameters of each layer;
and adjusting, in the reverse order, the model parameters of the layers included in the discrimination model and the image generation model so that the second discrimination confidence decreases with the rate of change of the model parameters of the correspondingly adjusted layers.
6. A method of identity verification, the method comprising:
acquiring a face image frame corresponding to a user identifier;
acquiring a face document image with a reticulate pattern from an identity document corresponding to the user identifier;
generating, through an image generation model trained by the model training method according to any one of claims 1 to 5, a descreened face document image from the face document image with the reticulate pattern;
and comparing the face image frame with the descreened face document image to obtain an identity verification result.
7. A model training apparatus, the apparatus comprising:
an image acquisition module, configured to acquire an original image and a noisy image obtained by adding noise to the original image;
a first output module, configured to input the original image and the noisy image into a discrimination model respectively, acquire a feature map corresponding to the original image output by a convolution layer in the discrimination model, acquire a feature map corresponding to the noisy image output by the convolution layer, and calculate a first discrimination confidence for the original image and the noisy image according to the feature map corresponding to the original image and the feature map corresponding to the noisy image;
an image generation module, configured to input the noisy image into an image generation model; when, in the order of the layers included in the image generation model, the feature map output by a current layer is used as the input of the next layer, if the current layer is a decoding layer, acquire the feature map output by the coding layer corresponding to the current decoding layer, fuse the acquired feature map with the feature map output by the current decoding layer, and input the fused feature map into the next layer; and acquire the feature map output by the last layer in the image generation model to obtain a denoised image;
a second output module, configured to input the denoised image and the noisy image into the discrimination model to obtain a second discrimination confidence;
and a model adjustment module, configured to adjust the discrimination model and the image generation model in a manner that increases the difference between the first discrimination confidence and the second discrimination confidence, and continue training until a training end condition is met.
8. The apparatus of claim 7, wherein the original image is an original face document image;
and the image acquisition module is further configured to acquire a set of original face document images, select an original face document image from the set, and add a reticulate pattern to the selected original face document image to obtain the corresponding noisy image.
9. The apparatus of claim 7, wherein the model adjustment module is further configured to adjust the discrimination model in a manner that maximizes the first discrimination confidence, and adjust the discrimination model and the image generation model in a manner that minimizes the second discrimination confidence.
10. The apparatus of claim 9, wherein the model adjustment module is further configured to determine, in reverse order of the layers included in the discrimination model, the rate of change of the first discrimination confidence with respect to the model parameters of each layer, and adjust, in the reverse order, the model parameters of the layers included in the discrimination model so that the first discrimination confidence increases with the rate of change of the model parameters of the correspondingly adjusted layers.
11. The apparatus of claim 9, wherein the model adjustment module is further configured to determine, in reverse order of the layers included in the image generation model and the discrimination model, the rate of change of the second discrimination confidence with respect to the model parameters of each layer, and adjust, in the reverse order, the model parameters of the layers included in the discrimination model and the image generation model so that the second discrimination confidence decreases with the rate of change of the model parameters of the correspondingly adjusted layers.
12. An identity verification apparatus, the apparatus comprising:
an acquisition module, configured to acquire a face image frame corresponding to a user identifier, and acquire a face document image with a reticulate pattern from an identity document corresponding to the user identifier;
a generation module, configured to generate, through an image generation model trained by the model training method according to any one of claims 1 to 5, a descreened face document image from the face document image with the reticulate pattern;
and a verification module, configured to compare the face image frame with the descreened face document image to obtain an identity verification result.
13. A computer readable storage medium having stored thereon computer readable instructions which, when executed by a processor, cause the processor to perform the steps of the method according to any of claims 1 to 6.
14. A computer device comprising a memory and a processor, the memory having stored therein computer readable instructions which, when executed by the processor, cause the processor to perform the steps of the method of any of claims 1 to 6.
CN201710687385.7A 2017-08-11 2017-08-11 Model training, authentication method, apparatus, storage medium and computer equipment Active CN107545277B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710687385.7A CN107545277B (en) 2017-08-11 2017-08-11 Model training, authentication method, apparatus, storage medium and computer equipment

Publications (2)

Publication Number Publication Date
CN107545277A CN107545277A (en) 2018-01-05
CN107545277B (en) 2023-07-11

Family

ID=60970530

Country Status (1)

Country Link
CN (1) CN107545277B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108491794B (en) * 2018-03-22 2023-04-07 腾讯科技(深圳)有限公司 Face recognition method and device
CN108564550B (en) * 2018-04-25 2020-10-02 Oppo广东移动通信有限公司 Image processing method, device and terminal device
CN110738227B (en) * 2018-07-20 2021-10-12 马上消费金融股份有限公司 Model training method and device, recognition method, storage medium and electronic equipment
CN109711546B (en) * 2018-12-21 2021-04-06 深圳市商汤科技有限公司 Neural network training method and device, electronic equipment and storage medium
CN109685743B (en) * 2018-12-30 2023-01-17 陕西师范大学 Image Mixed Noise Removal Method Based on Noise Learning Neural Network Model
CN109981991A (en) * 2019-04-17 2019-07-05 北京旷视科技有限公司 Model training method, image processing method, device, medium and electronic equipment
CN110163827B (en) * 2019-05-28 2023-01-10 腾讯科技(深圳)有限公司 Training method of image denoising model, image denoising method, device and medium
CN111259603B (en) * 2020-01-17 2024-01-30 南京星火技术有限公司 Electronic device, model design apparatus, and computer-readable medium
CN111414856B (en) * 2020-03-19 2022-04-12 支付宝(杭州)信息技术有限公司 Face image generation method and device for realizing user privacy protection

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103984957A (en) * 2014-05-04 2014-08-13 中国科学院深圳先进技术研究院 Automatic early warning system for suspicious lesion area of capsule endoscope image
CN105426857A (en) * 2015-11-25 2016-03-23 小米科技有限责任公司 Training method and device of face recognition model
CN105760859A (en) * 2016-03-22 2016-07-13 中国科学院自动化研究所 Method and device for identifying reticulate pattern face image based on multi-task convolutional neural network

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9436890B2 (en) * 2014-01-23 2016-09-06 Samsung Electronics Co., Ltd. Method of generating feature vector, generating histogram, and learning classifier for recognition of behavior
US10043243B2 (en) * 2016-01-22 2018-08-07 Siemens Healthcare Gmbh Deep unfolding algorithm for efficient image denoising under varying noise conditions

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
吕永标; 赵建伟; 曹飞龙. Image denoising algorithm based on a composite convolutional neural network (基于复合卷积神经网络的图像去噪算法). 模式识别与人工智能 (Pattern Recognition and Artificial Intelligence), 2017, (02). *
李传朋; 秦品乐; 张晋京. Research on image denoising based on deep convolutional neural networks (基于深度卷积神经网络的图像去噪研究). 计算机工程 (Computer Engineering), 2017, (03). *

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TG01 Patent term adjustment