CN111339897A - Living body identification method, living body identification device, computer equipment and storage medium
- Publication number
- CN111339897A (application CN202010107870.4A)
- Authority
- CN
- China
- Prior art keywords
- image
- processed
- sample
- training
- living body
- Prior art date
- Legal status: Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/40—Spoof detection, e.g. liveness detection
- G06V40/45—Detection of the body part being alive
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q20/00—Payment architectures, schemes or protocols
- G06Q20/38—Payment protocols; Details thereof
- G06Q20/40—Authorisation, e.g. identification of payer or payee, verification of customer or shop credentials; Review and approval of payers, e.g. check credit lines or negative lists
- G06Q20/401—Transaction verification
- G06Q20/4014—Identity check for transactions
- G06Q20/40145—Biometric identity checks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/467—Encoded features or binary features, e.g. local binary patterns [LBP]
Abstract
The present application relates to a living body identification method, apparatus, computer device, and storage medium. The method comprises the following steps: acquiring an image to be processed, and converting the image to be processed into a first image through a conversion layer of an identification model, wherein the image to be processed and the first image correspond to different attributes, and the attributes comprise a forged image and an unforged image; performing feature extraction on the image to be processed and the first image through a recognition layer of the recognition model to obtain a feature map of the image to be processed and a feature map of the first image; determining a residual map between the image to be processed and the first image according to the feature map of the image to be processed and the feature map of the first image; and carrying out living body identification on the image to be processed based on the residual map to obtain the category of the image to be processed, wherein the category is a living body or a non-living body. By adopting the method, whether the image to be processed is a living body can be accurately identified.
Description
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method and an apparatus for identifying a living body, a computer device, and a storage medium.
Background
With the development of computer technology, living body identification technology has emerged. Living body detection is a verification technique in which a user performs specified actions according to system instructions, for example blinking, shaking the head, or speaking a string of numbers, to prevent the user from fooling the system with photographs in security-sensitive scenarios. After the user performs the instructed actions, the system carries out operations such as face detection, facial feature localization, and action detection to judge whether the user passes living body detection.
However, a malicious user may trick the liveness detection system with a video that combines the required actions, resulting in inaccurate liveness identification.
Disclosure of Invention
In view of the above, it is necessary to provide a living body identification method, apparatus, computer device, and storage medium to solve the technical problem of inaccurate living body identification.
In one embodiment, there is provided a living body identification method, the method including:
acquiring an image to be processed, and converting the image to be processed into a first image through a conversion layer of an identification model, wherein the image to be processed and the first image correspond to different attributes, and the attributes comprise a forged image and an unforged image;
performing feature extraction on the image to be processed and the first image through a recognition layer of the recognition model to obtain a feature map of the image to be processed and a feature map of the first image;
determining a residual error map between the image to be processed and the first image according to the feature map of the image to be processed and the feature map of the first image;
and carrying out living body identification on the image to be processed based on the residual image to obtain the category of the image to be processed, wherein the category is a living body or a non-living body.
In one embodiment, there is provided a living body identification apparatus, the apparatus comprising:
the conversion module is used for acquiring an image to be processed, converting the image to be processed into a first image through a conversion layer of an identification model, wherein the image to be processed and the first image correspond to different attributes, and the attributes comprise a forged image and a non-forged image;
the extraction module is used for extracting the features of the image to be processed and the first image through the identification layer of the identification model to obtain a feature map of the image to be processed and a feature map of the first image;
a determining module, configured to determine a residual map between the image to be processed and the first image according to the feature map of the image to be processed and the feature map of the first image;
and the identification module is used for carrying out living body identification on the image to be processed based on the residual error map to obtain the category of the image to be processed, wherein the category is a living body or a non-living body.
In one embodiment, there is provided a computer device comprising a memory and a processor, the memory storing a computer program, the processor implementing the following steps when executing the computer program:
acquiring an image to be processed, and converting the image to be processed into a first image through a conversion layer of an identification model, wherein the image to be processed and the first image correspond to different attributes, and the attributes comprise a forged image and an unforged image;
performing feature extraction on the image to be processed and the first image through a recognition layer of the recognition model to obtain a feature map of the image to be processed and a feature map of the first image;
determining a residual error map between the image to be processed and the first image according to the feature map of the image to be processed and the feature map of the first image;
and carrying out living body identification on the image to be processed based on the residual image to obtain the category of the image to be processed, wherein the category is a living body or a non-living body.
In one embodiment, a computer-readable storage medium is provided, having stored thereon a computer program which, when executed by a processor, performs the steps of:
acquiring an image to be processed, and converting the image to be processed into a first image through a conversion layer of an identification model, wherein the image to be processed and the first image correspond to different attributes, and the attributes comprise a forged image and an unforged image;
performing feature extraction on the image to be processed and the first image through a recognition layer of the recognition model to obtain a feature map of the image to be processed and a feature map of the first image;
determining a residual error map between the image to be processed and the first image according to the feature map of the image to be processed and the feature map of the first image;
and carrying out living body identification on the image to be processed based on the residual image to obtain the category of the image to be processed, wherein the category is a living body or a non-living body.
The living body identification method, the living body identification device, the computer equipment and the storage medium acquire an image to be processed and convert it into a first image through a conversion layer of an identification model, wherein the image to be processed and the first image correspond to different attributes, and the attributes comprise a forged image and an unforged image. Feature extraction is performed on the image to be processed and the first image through an identification layer of the identification model to obtain a feature map of each, and a residual map between the two images is determined from these feature maps, so that the difference between the image to be processed and the first image can be determined. Living body identification is then carried out on the image to be processed based on the residual map to obtain its category, living body or non-living body. The user does not need to cooperate by making any facial action; living body identification can be performed from a single image, which reduces detection cost and improves the accuracy of living body identification.
In one embodiment, a recognition model training method is provided, including:
acquiring a training image sample and a class label corresponding to the training image sample, wherein the class label comprises a living body and a non-living body;
converting the training image sample into a first image sample through a conversion layer of an identification model, wherein the training image sample and the first image sample correspond to different attributes; the attributes comprise a forged image and an unforged image;
performing feature extraction on the training image sample and the first image sample through a recognition layer of the recognition model to obtain a feature map of the training image sample and a feature map of the first image sample;
determining a residual error map between the training image sample and the first image sample according to the feature map of the training image sample and the feature map of the first image sample;
performing living body recognition on the training image sample based on the residual error image to obtain a recognition result of the training image sample;
and adjusting parameters of the recognition model and continuing training according to the difference between the recognition result of the training image sample and the corresponding class label until the preset condition is met, and obtaining the trained recognition model.
In one embodiment, there is provided a recognition model training apparatus, the apparatus including:
the acquisition module is used for acquiring a training image sample and a class label corresponding to the training image sample, wherein the class label comprises a living body and a non-living body;
the sample conversion module is used for converting the training image sample into a first image sample through a conversion layer of a recognition model, and the training image sample and the first image sample correspond to different attributes; the attributes comprise a forged image and an unforged image;
the feature extraction module is used for performing feature extraction on the training image sample and the first image sample through a recognition layer of the recognition model to obtain a feature map of the training image sample and a feature map of the first image sample;
a residual map module, configured to determine a residual map between the training image sample and the first image sample according to the feature map of the training image sample and the feature map of the first image sample;
the living body identification module is used for carrying out living body identification on the training image sample based on the residual error image to obtain an identification result of the training image sample;
and the adjusting module is used for adjusting the parameters of the recognition model and continuing training according to the difference between the recognition result of the training image sample and the corresponding class label until the preset condition is met, so that the trained recognition model is obtained.
In one embodiment, there is provided a computer device comprising a memory and a processor, the memory storing a computer program, the processor implementing the following steps when executing the computer program:
acquiring a training image sample and a class label corresponding to the training image sample, wherein the class label comprises a living body and a non-living body;
converting the training image sample into a first image sample through a conversion layer of an identification model, wherein the training image sample and the first image sample correspond to different attributes; the attributes comprise a forged image and an unforged image;
performing feature extraction on the training image sample and the first image sample through a recognition layer of the recognition model to obtain a feature map of the training image sample and a feature map of the first image sample;
determining a residual error map between the training image sample and the first image sample according to the feature map of the training image sample and the feature map of the first image sample;
performing living body recognition on the training image sample based on the residual error image to obtain a recognition result of the training image sample;
and adjusting parameters of the recognition model and continuing training according to the difference between the recognition result of the training image sample and the corresponding class label until the preset condition is met, and obtaining the trained recognition model.
In one embodiment, a computer-readable storage medium is provided, having stored thereon a computer program which, when executed by a processor, performs the steps of:
acquiring a training image sample and a class label corresponding to the training image sample, wherein the class label comprises a living body and a non-living body;
converting the training image sample into a first image sample through a conversion layer of an identification model, wherein the training image sample and the first image sample correspond to different attributes; the attributes comprise a forged image and an unforged image;
performing feature extraction on the training image sample and the first image sample through a recognition layer of the recognition model to obtain a feature map of the training image sample and a feature map of the first image sample;
determining a residual error map between the training image sample and the first image sample according to the feature map of the training image sample and the feature map of the first image sample;
performing living body recognition on the training image sample based on the residual error image to obtain a recognition result of the training image sample;
and adjusting parameters of the recognition model and continuing training according to the difference between the recognition result of the training image sample and the corresponding class label until the preset condition is met, and obtaining the trained recognition model.
According to the recognition model training method, the recognition model training device, the computer equipment and the storage medium, a training image sample and its corresponding class label are obtained, the class label being living body or non-living body, and the training image sample is converted into a first image sample through the conversion layer of the recognition model, the two samples corresponding to different attributes, namely a forged image and an unforged image. Feature extraction is performed on the training image sample and the first image sample through the recognition layer of the recognition model to obtain a feature map of each, a residual map between the two samples is determined from these feature maps, and living body recognition is performed on the training image sample based on the residual map to obtain a recognition result. Parameters of the recognition model are adjusted according to the difference between the recognition result and the corresponding class label, and training continues until a preset condition is met, yielding the trained recognition model. With the trained model, living body discrimination can be performed on a single image without the user cooperating by making any facial action, which reduces detection cost and improves the accuracy of living body identification.
Drawings
FIG. 1 is a diagram of an application environment of a living body identification method in one embodiment;
FIG. 2 is a schematic flow chart of a method for living body identification according to an embodiment;
FIG. 3 is a schematic flow chart illustrating the steps of feature extraction for a to-be-processed image and a first image by a recognition layer of a recognition model in one embodiment;
FIG. 4 is a flowchart illustrating a step of determining a feature variation between a feature map of an image to be processed and a feature map of a first image in another embodiment;
FIG. 5 is a diagram illustrating generation of a residual map between a to-be-processed image and a first image in one embodiment;
FIG. 6 is a flowchart illustrating a step of performing living body identification on an image to be processed based on a weight value corresponding to each pixel point in a residual error map in one embodiment;
FIG. 7 is a schematic diagram of living body detection of an image to be processed in one embodiment;
FIG. 8 is a schematic flow chart diagram illustrating a recognition model training method in one embodiment;
FIG. 9 is a schematic flow diagram illustrating the training steps for a generator in a conversion layer of the recognition model in one embodiment;
FIG. 10 is a schematic flow chart diagram illustrating the training steps for discriminators in a conversion layer of the recognition model in one embodiment;
FIG. 11 is an architecture diagram of a conversion layer of the recognition model in one embodiment;
FIG. 12 is an architecture diagram of a recognition model in one embodiment;
FIG. 13 is a block diagram showing the configuration of a living body identifying apparatus according to an embodiment;
FIG. 14 is a block diagram showing the structure of a recognition model training apparatus according to an embodiment;
FIG. 15 is a diagram showing an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The living body identification method provided by the application can be applied to the application environment shown in FIG. 1, in which the terminal 102 communicates with the server 104 via a network. The terminal 102 may be, but is not limited to, any of various personal computers, notebook computers, smart phones, tablet computers, and portable wearable devices, and the server 104 may be implemented as an independent server or as a server cluster formed by a plurality of servers.
In this embodiment, the terminal 102 acquires a to-be-processed image and transmits the to-be-processed image to the server 104. The server 104 receives the image to be processed, inputs the image to be processed into the identification model, and converts the image to be processed into the first image through the conversion layer of the identification model, wherein the image to be processed and the first image correspond to different attributes, and the attributes comprise a forged image and an unforeseen image. Then, the conversion layer inputs the outputted first image into the recognition layer of the recognition model. And performing feature extraction on the image to be processed and the first image through an identification layer of the identification model to obtain a feature map of the image to be processed and a feature map of the first image. And determining a residual image between the image to be processed and the first image according to the characteristic image of the image to be processed and the characteristic image of the first image. And performing living body identification on the image to be processed based on the residual image to obtain the category of the image to be processed, wherein the category is a living body or a non-living body. The server 104 then returns the category of the image to be processed to the terminal 102. Through the interaction between the terminal 102 and the server 104, the living body detection is carried out on the image to be processed at the server side, the storage space of the terminal is saved, and the image to be processed can be accurately detected to be a living body or a non-living body.
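To make the above flow concrete, the following is a minimal Python sketch of the server-side inference pipeline. All names here (recognize_liveness, model.conversion_layer, model.recognition_layer, PROB_THRESHOLD) are illustrative assumptions, not identifiers from the patent; the sketch only mirrors the four steps just described.

```python
import numpy as np

PROB_THRESHOLD = 0.5  # assumed decision threshold

def recognize_liveness(image: np.ndarray, model) -> str:
    """Classify a single image as 'living' or 'non-living'."""
    # 1. The conversion layer generates a first image whose attribute is
    #    opposite to that of the input (forged <-> unforged).
    first_image = model.conversion_layer(image)
    # 2. The recognition layer extracts a feature map from each image.
    feat_input = model.recognition_layer.extract(image)
    feat_first = model.recognition_layer.extract(first_image)
    # 3. The residual map encodes the per-pixel feature differences,
    #    normalised here to [0, 255] as described later in the text.
    diff = feat_first - feat_input
    residual = (diff - diff.min()) / (diff.max() - diff.min() + 1e-8) * 255
    # 4. The recognition layer classifies the input image, with the
    #    residual map acting as a per-pixel weight map.
    prob = model.recognition_layer.classify(image, residual)
    return "living" if prob > PROB_THRESHOLD else "non-living"
```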
It will be appreciated that, in practice, live face recognition is often used in conjunction with other techniques, such as face-based verification of user identity. Face-based identity verification has already been applied in a number of services, such as bank remote identity verification systems, face payment systems, ride-hailing driver remote authentication systems (e.g., Didi), and community access control systems.
In one embodiment, the application of the living body recognition method in the present application in a face payment scenario is as follows:
when a user initiates a payment instruction, the terminal collects a face image of the user through the camera and inputs the face image into the recognition model. And a conversion layer of the identification model identifies whether the face image is an attack image or a real image. In this embodiment, if the image acquired by the terminal is a real image, the conversion layer converts the face image into an attack image.
Then, the recognition layer of the recognition model divides the face image and the attack image to obtain each region corresponding to the face image and each region corresponding to the attack image. And calculating the characteristic value corresponding to each region, and obtaining the characteristic image of the face image according to the characteristic value corresponding to each region of the face image. And obtaining a characteristic diagram of the attack image according to the characteristic value corresponding to each region of the attack image.
And then, the recognition layer of the recognition model determines pixel points which are matched with each other in the characteristic image of the face image and the characteristic image of the attack image, and calculates the characteristic difference value between the pixel points which are matched with each other. And carrying out normalization processing on each characteristic difference value to obtain each weight value. And generating a residual image between the face image and the attack image based on the weight values, wherein each pixel point in the residual image corresponds to one weight value.
Then, the face image and the residual map are input into a classification network in the recognition layer. The first convolution layer of the classification network performs convolution processing on the features of each pixel point in the face image to obtain a first feature value corresponding to each pixel point. Corresponding pixel points in the residual map and the face image are then determined, and the weight value corresponding to each pixel point in the residual map is multiplied by the first feature value of the corresponding pixel point in the face image to obtain a second feature value for that pixel point. The first convolution layer passes the second feature values to the second convolution layer for further convolution processing, and so on until the output layer produces the class probability of the face image. The class probability is compared with a probability threshold: when the class probability is greater than the probability threshold, the face image is taken as a living body face image; when the class probability is less than or equal to the probability threshold, the face image is a non-living body face image.
And when the face image is identified to be the living body face image, the terminal executes payment operation so as to finish the face payment of the user.
By applying the living body face recognition method to the face payment scene, illegal attacks on attempted transactions can be accurately recognized through high-precision living body detection, ensuring the security of transactions and protecting the interests of companies and individuals.
In one embodiment, the living body identification method can be applied to the scene of verifying user identity during the bank account opening process. The living body recognition method in the present application does not necessarily exist in model form and can be stored directly as a living body face recognition algorithm. In the process of remote bank account opening, in order to confirm the real identity of the account opener, living face detection also needs to be performed on the account opener. The general flow is as follows: first, the user captures an image containing a human face through a camera at the front end of the application. The front end transmits the face image to the back end and invokes the live face recognition algorithm. The live face recognition algorithm performs living face detection and returns the recognition result to the front end. If a living human face is detected, verification passes; otherwise, verification fails. Applying the living body identification method to the scene of verifying user identity during bank account opening can prevent a malicious user from illegally using another person's identity to handle bank business, effectively protecting the user's personal information and property.
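As a rough illustration of this front-end/back-end call flow, here is a minimal back-end endpoint sketch assuming Flask and OpenCV; the route name, the recognize_liveness() helper (from the earlier sketch), and the response format are all hypothetical, not part of the patent.

```python
from flask import Flask, request, jsonify
import cv2
import numpy as np

app = Flask(__name__)
model = None  # trained recognition model, assumed to be loaded at startup

@app.route("/live_face_check", methods=["POST"])
def live_face_check():
    # Decode the face image uploaded by the application front end.
    raw = np.frombuffer(request.files["image"].read(), dtype=np.uint8)
    image = cv2.imdecode(raw, cv2.IMREAD_COLOR)
    # Invoke the live face recognition algorithm and return the result.
    category = recognize_liveness(image, model)  # see the earlier sketch
    return jsonify({"passed": category == "living"})
```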
In one embodiment, the living body identification method in the present application can be applied to an access control system. To improve the efficiency of identity verification, the access control system directly acquires a face image at the front end, feeds the face image into a packaged recognition model, directly judges the face image, and reports whether it is a living body face. Applying the living body face recognition method to an access control system allows the identity information of a user to be recognized quickly and accurately.
It is understood that the living body identification method provided by the present application can be applied to any scene requiring living body identification, and is not limited to the above example.
In one embodiment, as shown in FIG. 2, a living body identification method is provided, which is described by way of example as applied to the terminal in FIG. 1, and includes the following steps:
Step 202, acquiring an image to be processed, and converting the image to be processed into a first image through a conversion layer of an identification model, wherein the image to be processed and the first image correspond to different attributes, and the attributes comprise a forged image and an unforged image.
The image to be processed is an image including a face region, and may also be an image including the face and various parts of the body of the user, such as a face image, an upper body image, or a whole body image. The image to be processed may be an RGB (Red, Green, Blue) image. The recognition model is a model used for recognizing whether the image to be processed is a living body image. The recognition model can be applied to a terminal and can also be applied to a server.
The non-forged image refers to the captured source image, i.e. the real image. The forged image is an image obtained by copying or altering the face of a source image, or by combining it with other images, so as to change all or key characteristics of the source image; it is also called an attack image. For example, an image directly acquired by a user through a camera is called a source image, and an image obtained by processing the acquired image through functions such as matting, face swapping, beautification, and special effects is a forged image.
In this embodiment, when living body face recognition needs to be performed on a user's face, the terminal acquires a user image including a face region, detects and frames the region where the user's face is located, enlarges the framed region by a preset multiple about its center so as to include more background content from the user image, and crops out the enlarged region to obtain the image to be processed.
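A sketch of this preprocessing step, assuming OpenCV's Haar cascade detector and a 1.5x enlargement factor (the patent leaves the detector and the preset multiple unspecified):

```python
import cv2

def crop_face_region(user_image, scale=1.5):
    """Detect the face, enlarge the box about its center, and crop."""
    gray = cv2.cvtColor(user_image, cv2.COLOR_BGR2GRAY)
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    x, y, w, h = faces[0]                      # assume one face was found
    cx, cy = x + w // 2, y + h // 2            # center of the framed area
    half_w, half_h = int(w * scale) // 2, int(h * scale) // 2
    top, left = max(cy - half_h, 0), max(cx - half_w, 0)
    return user_image[top:cy + half_h, left:cx + half_w]
```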
Specifically, the terminal inputs the image to be processed into the recognition model. The conversion layer of the recognition model is able to generate a new image of a different attribute than the image to be processed, i.e. the first image.
In implementation, when the image to be processed is a source image, the conversion layer of the recognition model converts the source image into an attack image. When the image to be processed is an attack image, the conversion layer of the recognition model converts the attack image into a source image.
Step 204, performing feature extraction on the image to be processed and the first image through the recognition layer of the recognition model to obtain a feature map of the image to be processed and a feature map of the first image.
Specifically, the terminal can input the image to be processed into the recognition layer of the recognition model, and perform feature extraction on the image to be processed through the recognition layer to obtain a feature map of the image to be processed. The terminal can input the first image output by the conversion layer of the recognition model into the recognition layer, and performs feature extraction on the first image through the recognition layer to obtain a feature map of the first image.
In this embodiment, the terminal may perform LBP feature extraction on the image to be processed and the first image through the recognition layer of the recognition model, so as to obtain an LBP feature map corresponding to the image to be processed and an LBP feature map corresponding to the first image. LBP (Local Binary Pattern) is an operator used to describe the local texture features of an image.
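As an illustration, the LBP feature maps of the two images could be computed with scikit-image's local_binary_pattern; the patent does not prescribe any particular library, so this is only a stand-in for the recognition layer's extractor.

```python
import cv2
from skimage.feature import local_binary_pattern

def lbp_feature_map(image_bgr, points=8, radius=1):
    """Return the LBP feature map of a BGR image."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    # method="default" gives the classic 8-neighbour LBP code per pixel.
    return local_binary_pattern(gray, points, radius, method="default")

# lbp_to_process = lbp_feature_map(image_to_process)
# lbp_first = lbp_feature_map(first_image)
```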
Step 206, determining a residual map between the image to be processed and the first image according to the feature map of the image to be processed and the feature map of the first image.
The residual map is a map showing a characteristic difference between two images. In this embodiment, the residual map is used to represent the feature difference between the image to be processed and the first image.
Specifically, the terminal may calculate a feature difference between the feature map of the image to be processed and the feature map of the first image, to obtain a residual map.
In this embodiment, the terminal may calculate a feature difference between an LBP feature map corresponding to the image to be processed and an LBP feature map corresponding to the first image, to obtain a residual map. Further, the feature value corresponding to the LBP feature map of the image to be processed is subtracted from the feature value corresponding to the LBP feature map of the first image to obtain a residual map.
Step 208, performing living body identification on the image to be processed based on the residual map to obtain the category of the image to be processed, wherein the category is a living body or a non-living body.
Living body detection means detecting whether a user is a living body by collecting a user image and recognizing it.
Specifically, the terminal takes the residual map and the image to be processed as input images for the living body identification process. And the terminal performs convolution processing on the residual image and the image to be processed through the identification layer in the identification model, and can determine the position with characteristic difference in the image to be processed through the residual image so as to obtain the class probability corresponding to the image to be processed output by the identification layer. And then, determining the category corresponding to the image to be processed according to the category probability, and outputting the category corresponding to the image to be processed.
In this embodiment, when the class probability corresponding to the image to be processed is greater than the probability threshold, the class corresponding to the image to be processed is a living body. And when the class probability corresponding to the image to be processed is smaller than or equal to the probability threshold, the class corresponding to the image to be processed is a non-living body.
In the living body identification method, an image to be processed is obtained and converted into a first image through a conversion layer of an identification model, wherein the image to be processed and the first image correspond to different attributes, and the attributes comprise a forged image and an unforged image. Feature extraction is performed on the image to be processed and the first image through an identification layer of the identification model to obtain a feature map of each, and a residual map between the two images is determined from these feature maps, so that the difference between the image to be processed and the first image can be determined. Living body identification is then carried out on the image to be processed based on the residual map to obtain its category, living body or non-living body. The user does not need to cooperate by making any facial action; living body identification can be performed from a single image, which reduces detection cost and improves the accuracy of living body identification.
In an embodiment, as shown in fig. 3, the performing feature extraction on the image to be processed and the first image through the recognition layer of the recognition model to obtain a feature map of the image to be processed and a feature map of the first image includes:
Step 302, dividing the image to be processed and the first image through the recognition layer of the recognition model to obtain the regions corresponding to the image to be processed and the regions corresponding to the first image.
Specifically, the terminal inputs the image to be processed and the first image into the recognition layer of the recognition model. The image to be processed is divided into a plurality of regions through the recognition layer to obtain the regions corresponding to the image to be processed, and the first image is likewise divided into a plurality of regions to obtain the regions corresponding to the first image.
In the present embodiment, the image to be processed and the first image may be divided into the same number of regions in the same manner. The same division manner means that the regions obtained from the image to be processed correspond one-to-one with the regions of the first image. For example, if the image to be processed is divided into a nine-square grid, the first image is divided into the same nine-square grid.
And 304, determining characteristic values respectively corresponding to all the areas in the image to be processed, and determining the characteristic values respectively corresponding to all the areas in the first image.
Specifically, the terminal may obtain a region corresponding to the image to be processed, and determine a pixel value of each pixel point in the region. Then, a central pixel point in the area is determined, and the pixel value of the central pixel point is compared with the pixel values of the pixel points around the central pixel point to obtain a comparison result. And determining the characteristic value of the region according to the comparison result. In the same way, the characteristic values respectively corresponding to the regions in the image to be processed can be determined. In this same manner, the feature values corresponding to the respective regions in the first image can be determined.
In this embodiment, the terminal may perform LBP feature extraction on the image to be processed, and determine LBP values respectively corresponding to each region in the image to be processed. The terminal can extract LBP characteristics of the first image and determine LBP values corresponding to all areas in the first image.
And 308, determining a feature map of the first image according to the feature values respectively corresponding to the regions in the first image.
Specifically, the feature values respectively corresponding to the regions in the image to be processed represent key feature information of the regions, and the terminal can generate the feature map of the image to be processed according to the key feature information respectively corresponding to the regions in the image to be processed. Similarly, the feature value corresponding to each region in the first image represents the key feature information of the region, and the terminal can generate the feature map of the first image according to the key feature information corresponding to each region in the first image.
In this embodiment, the image to be processed and the first image are divided by the identification layer of the identification model to obtain each region of the image to be processed and each region of the first image, the feature values respectively corresponding to each region in the image to be processed are determined, and the feature values respectively corresponding to each region in the first image are determined, so that the key feature information of each region in the image can be obtained. Determining a feature map of the image to be processed according to the feature values respectively corresponding to the regions in the image to be processed, determining the feature map of the first image according to the feature values respectively corresponding to the regions in the first image, and generating the feature map according to the key feature information, so that the feature map contains all key feature information in the image, and feature differences existing between the image to be processed and the first image are visually displayed.
In one embodiment, the determining a residual map between the image to be processed and the first image according to the feature map of the image to be processed and the feature map of the first image includes: determining the characteristic variation between the characteristic diagram of the image to be processed and the characteristic diagram of the first image; and generating a residual image between the image to be processed and the first image according to the characteristic variation.
The feature variation refers to the feature difference between corresponding pixel points or between matched feature points in the two images.
Specifically, the terminal may determine pairs of feature points that match each other in the feature map of the image to be processed and the feature map of the first image. Then, the terminal can calculate the feature variation between two feature points in each feature point pair to obtain the feature variation corresponding to each pair of feature points. And generating a residual error map according to the characteristic variation corresponding to each pair of characteristic points. The residual map represents a difference in characteristics between the image to be processed and the first image.
In this embodiment, the terminal obtains a preset number of feature point pairs between the feature map of the image to be processed and the feature map of the first image, where the feature point pairs are feature points that match each other in the feature map of the image to be processed and the feature map of the first image. For each pair of feature points, the feature variation between the two feature points is calculated, so that the feature variation of a preset number is obtained. Then, a residual map may be generated according to a predetermined number of feature variations.
In this embodiment, the terminal may determine pairs of pixel points that match each other in the feature map of the image to be processed and the feature map of the first image. Then, the terminal can calculate the characteristic variation between two pixels in each pixel pair to obtain the characteristic variation corresponding to each pair of pixels. And generating a residual error map according to the characteristic variable quantity corresponding to each pair of pixel points. The residual map represents a difference in characteristics between the image to be processed and the first image.
In this embodiment, the feature variation between the feature map of the image to be processed and the feature map of the first image is determined, and a residual map between the image to be processed and the first image is generated according to the feature variation, so that the residual map accurately represents the feature difference between the image to be processed and the first image, and whether the image to be processed is a living body can be accurately identified based on this feature difference.
In one embodiment, as shown in FIG. 4, determining the feature variation between the feature map of the image to be processed and the feature map of the first image includes:
Step 402, determining pixel point pairs between the feature map of the image to be processed and the feature map of the first image.
The pixel point pair refers to pixel points which are matched with each other in the two images. In this embodiment, the pixel point pair refers to pixel points that are matched with each other in the feature map of the image to be processed and the feature map of the first image.
Specifically, the terminal can determine pixel points in the feature map of the image to be processed, and select pixel points in the feature map of the first image, which are matched with each pixel point in the feature map of the image to be processed, so as to obtain pixel point pairs.
In this embodiment, the terminal may select a preset number of pixel points in the feature map of the image to be processed, and select pixel points in the feature map of the first image, where the pixel points are matched with the preset number of pixel points, to obtain a preset number of pixel point pairs.
Step 404, determining a feature difference value between the two pixel points in each pixel point pair.
The feature difference value refers to the difference between the feature values corresponding to the two pixel points in a pixel point pair, and represents the feature difference between them.
Specifically, for a pixel point pair, the terminal obtains a characteristic value corresponding to each pixel point in the pixel point pair, and calculates a difference value between the characteristic values corresponding to the two pixel points, so as to obtain a characteristic difference value corresponding to the pixel point pair. According to the same processing mode, the characteristic difference value corresponding to each pixel point pair can be obtained.
The generating of the residual map between the image to be processed and the first image according to the characteristic variation comprises the following steps:
and step 406, generating a residual error map between the image to be processed and the first image according to the corresponding feature difference value of each pixel point pair.
Specifically, the terminal generates the residual map from the feature difference values respectively corresponding to the pixel point pairs. The residual map represents the feature difference between the matched pixel points in the feature map of the image to be processed and the feature map of the first image, and thereby represents the feature difference between the image to be processed and the first image.
In this embodiment, pixel point pairs between the feature map of the image to be processed and the feature map of the first image are determined, and the feature difference value between the two pixel points in each pair is computed, so that the feature difference between mutually matched pixel points in the two feature maps is obtained. A residual map is then generated from these feature difference values, so that the feature difference between the image to be processed and the first image can be visually displayed through the residual map.
In one embodiment, the generating a residual map between the image to be processed and the first image according to the corresponding feature difference value of each pixel point pair includes: normalizing the characteristic difference value corresponding to each pixel point pair to obtain a weight value corresponding to each pixel point pair; and generating a residual image between the image to be processed and the first image according to the corresponding weight value of each pixel point pair.
Specifically, after the terminal obtains the feature difference value corresponding to each pixel point pair, the terminal normalizes the feature difference values to convert them into the range [0, 255]. Each normalized feature difference value is a new value, and this new value is used as a weight value. The terminal then generates the residual map according to the weight value corresponding to each pixel point, and each pixel point in the residual map corresponds to one weight value.
In this embodiment, the feature difference value corresponding to each pixel point pair is normalized to obtain a weight value corresponding to each pixel point pair, and a residual map between the image to be processed and the first image is generated according to the weight value corresponding to each pixel point pair, so that the residual map is used as a weight map to mark a position in the image to be processed where the feature difference exists in the identification process, so that the model focuses more on the position where the feature difference exists in the identification process, and the accuracy of identifying whether the image to be processed is a living body can be improved.
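A sketch of this weight-map construction, using min-max normalisation into [0, 255]; the patent states the target range but not the exact normalisation formula, so that choice is an assumption.

```python
import numpy as np

def residual_weight_map(feat_first, feat_to_process):
    """Min-max normalise per-pixel feature differences into [0, 255]."""
    diff = feat_first.astype(np.float32) - feat_to_process.astype(np.float32)
    span = diff.max() - diff.min()
    if span == 0:                 # identical feature maps: all weights zero
        return np.zeros_like(diff)
    return (diff - diff.min()) / span * 255.0
```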
Fig. 5 is a schematic diagram of generating a residual map between the image to be processed and the first image in one embodiment. The attack sample B is the image to be processed, and the terminal converts the attack sample B into a real sample B_A; the attack sample B is a forged image, and the real sample B_A is the source image obtained by restoring the attack sample B. Then, the terminal performs feature extraction on the attack sample B and the real sample B_A through an LBP feature extraction algorithm to obtain the LBP map of B and the LBP map of B_A. The terminal can divide the LBP map of B into a plurality of regions through 3x3 windows (the window size is adjustable, including but not limited to 3x3), each region including at least 9 pixels. The pixel value of the pixel point at the center of the window is taken as a threshold value, and the pixel values of the 8 adjacent pixel points are compared with the central pixel value: when an adjacent pixel value is larger than the central pixel value, the position of the adjacent pixel point is marked as 1; otherwise, it is marked as 0. In this way, the 8 points within a 3x3 window produce an 8-bit binary number, and converting this 8-bit binary number into a decimal number gives the LBP code. The LBP code is the LBP value of the center pixel point of the window, and is used to reflect the texture information of the region. The LBP values of the 8 pixels adjacent to the center pixel can then be set to 0. Following the same processing, the LBP value of each pixel point in the plurality of regions corresponding to the LBP map of B can be obtained.
The real sample B_A is processed in the same manner as the attack sample B to obtain the LBP value of each pixel point in the plurality of regions corresponding to the LBP map of B_A. Then, the terminal can calculate LBP_{B_A} - LBP_B, that is, determine the pixel points that match each other in the attack sample B and the real sample B_A, and calculate the difference between the LBP values corresponding to the matched pixel points. Some differences may be 0, some may be positive numbers, and some negative. The terminal normalizes the difference values to convert them to between 0 and 255, and generates the residual map according to the normalized values.
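The 3x3-window LBP computation walked through above can be written directly as follows; the clockwise neighbour ordering is an assumption, since the patent fixes the window but not the bit order.

```python
import numpy as np

def lbp_code_3x3(window: np.ndarray) -> int:
    """window: 3x3 array of pixel values; returns the LBP code of its centre."""
    center = window[1, 1]
    # Clockwise from the top-left neighbour.
    neighbours = [window[0, 0], window[0, 1], window[0, 2], window[1, 2],
                  window[2, 2], window[2, 1], window[2, 0], window[1, 0]]
    # Neighbours brighter than the centre map to 1, others to 0; the
    # resulting 8-bit binary string is read as a decimal LBP code.
    bits = ["1" if n > center else "0" for n in neighbours]
    return int("".join(bits), 2)

def lbp_map_manual(gray: np.ndarray) -> np.ndarray:
    """Slide the 3x3 window over a grayscale image (borders left at 0)."""
    h, w = gray.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            out[i, j] = lbp_code_3x3(gray[i - 1:i + 2, j - 1:j + 2])
    return out
```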
In one embodiment, the performing living body recognition on the image to be processed based on the residual map to obtain the category of the image to be processed includes: acquiring a weight value corresponding to each pixel point in the residual image; and performing living body identification on the image to be processed based on the weight value corresponding to each pixel point in the residual error image to obtain the category of the image to be processed.
Specifically, the terminal may input the residual map and the image to be processed into a classification network of the recognition layer. The classification network performs convolution processing on the image to be processed and obtains the weight values corresponding to the pixel points in the residual map. The weight value corresponding to each pixel point in the residual map participates in the convolution processing of the image to be processed to emphasize the positions where feature differences exist, so that these positions become more and more salient as convolution proceeds. Through successive convolutions, the recognition model refines the feature differences present in the image to be processed and derives the class probability from them. The class probability is then compared with a probability threshold: when the class probability is greater than the probability threshold, the category of the image to be processed is a living body image; when the class probability is less than or equal to the probability threshold, the category is a non-living body image.
In this embodiment, by obtaining the weight value corresponding to each pixel point in the residual map, the living body recognition is performed on the image to be processed based on the weight value corresponding to each pixel point in the residual map, so that the residual map is used as a weight map to mark the position of the image to be processed where the characteristic difference exists in the recognition process, and the recognition model focuses more on the position of the characteristic difference, thereby improving the recognition accuracy and accurately obtaining the category corresponding to the image to be processed.
In an embodiment, as shown in fig. 6, the performing living body identification on the to-be-processed image based on the weight value corresponding to each pixel point in the residual map to obtain the category of the to-be-processed image includes:
Step 602, performing convolution processing on the image to be processed through the convolution layers in the recognition layer to obtain a first feature value corresponding to each pixel point in the image to be processed.
Specifically, the terminal inputs the image to be processed and the first image into the recognition layer of the recognition model. The convolution layers in the recognition layer perform convolution processing on the image to be processed through the convolution kernels corresponding to each layer to obtain a first feature value corresponding to each pixel point in the image to be processed.
In this embodiment, the terminal inputs the image to be processed and the first image into the first convolution layer among the recognition layers of the recognition model. The convolution kernel in the first convolution layer performs convolution processing on the image to be processed to obtain the first feature value corresponding to each pixel point.
Specifically, the convolution layers in the recognition layer may perform convolution processing on the image to be processed through convolution kernels to obtain a first characteristic value corresponding to each pixel point in the image to be processed, and then obtain the weight value corresponding to each pixel point in the residual map. A second characteristic value corresponding to each pixel point in the image to be processed is determined according to the weight value corresponding to each pixel point in the residual map and the first characteristic value of the corresponding pixel point in the image to be processed. The second characteristic value is the characteristic value output by the convolution layer.
In this embodiment, a weight value corresponding to a pixel point in the residual map is multiplied by a first characteristic value of a corresponding pixel point in the image to be processed, and the product is a second characteristic value. And multiplying the weighted value corresponding to each pixel point in the residual image by the first characteristic value of the corresponding pixel point in the image to be processed to obtain a second characteristic value corresponding to each pixel point in the image to be processed.
In this embodiment, the convolution kernels in the first convolution layer of the recognition layer perform convolution processing on the image to be processed to obtain the first characteristic value corresponding to each pixel point. The weight value corresponding to each pixel point in the residual map is multiplied by the first characteristic value of the corresponding pixel point in the image to be processed to obtain the second characteristic value, output by the first convolution layer, corresponding to each pixel point in the image to be processed.
For example, if the weight values corresponding to some pixel points in the residual map are 0, then after multiplication by the first characteristic values of the corresponding pixel points in the image to be processed, the second characteristic values are 0, indicating that there is no or little feature difference between those pixel points and the corresponding pixel points in the source image. If the weight value corresponding to a pixel point in the residual map is a positive number, then the larger the value, the larger the second characteristic value obtained by multiplying it with the first characteristic value. The larger the second characteristic value, the more obvious the feature difference between that pixel point and the corresponding pixel point in the source image, so the pixel points with obvious feature differences in the image to be processed can be screened out.
Specifically, the second characteristic value output by the convolution layer is used as the input of the next convolution layer, and convolution processing is performed on the second characteristic value corresponding to each pixel point in the image to be processed through the next convolution layer. The features output by each previous convolutional layer are used as the input of the next convolutional layer, so as to obtain the class probability output by the last convolutional layer. The category of the image to be processed is then determined according to the class probability.
In this embodiment, when the class probability corresponding to the image to be processed is greater than the probability threshold, the class corresponding to the image to be processed is a living body. And when the class probability corresponding to the image to be processed is smaller than or equal to the probability threshold, the class corresponding to the image to be processed is a non-living body.
In this embodiment, a convolution process is performed on an image to be processed through an identification layer of an identification model to obtain a first characteristic value corresponding to each pixel point in the image to be processed, and a second characteristic value corresponding to each pixel point in the image to be processed is determined according to a weight value corresponding to each pixel point in a residual map and the first characteristic value corresponding to each pixel point in the image to be processed, so that the weight value in the residual map is applied to the convolution process of the image to be processed, and therefore, the pixel points with obvious characteristic differences in the image to be processed can be screened out. And determining the category of the image to be processed according to the second characteristic value corresponding to each pixel point in the image to be processed, so that the residual image can be applied to living body detection, and whether the image to be processed belongs to a living body can be more accurately identified.
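To make the weighting mechanism concrete, the following is a minimal PyTorch sketch of a recognition layer in which the first convolution's output (the first characteristic values) is multiplied element-wise by the residual-map weights to produce the second characteristic values, which then feed the next convolution. The module structure, channel sizes, and the bilinear resize of the residual map are illustrative assumptions, not the disclosure's exact network.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightedRecognitionLayer(nn.Module):
    """First characteristic values are modulated by residual-map weights."""
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 32, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(32, 64, kernel_size=3, padding=1)
        self.head = nn.Linear(64, 1)  # one-dimensional output -> class probability

    def forward(self, image: torch.Tensor, residual: torch.Tensor) -> torch.Tensor:
        # image: (N, 3, H, W); residual: (N, 1, H, W) weight map.
        feat1 = F.relu(self.conv1(image))           # first characteristic values
        # Resize the residual weight map to the feature map size, then
        # multiply weights by first characteristic values -> second values.
        w = F.interpolate(residual, size=feat1.shape[-2:], mode="bilinear",
                          align_corners=False)
        feat2 = feat1 * w                           # second characteristic values
        feat3 = F.relu(self.conv2(feat2))           # output feeds the next layer
        pooled = feat3.mean(dim=(-2, -1))           # global average pooling
        return torch.sigmoid(self.head(pooled))     # class probability in (0, 1)
```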
In one embodiment, the performing living body recognition on the image to be processed based on the residual map to obtain the category of the image to be processed includes: extracting the features of the residual image and the image to be processed to obtain the features corresponding to the residual image and the features corresponding to the image to be processed; and performing living body identification on the image to be processed based on the characteristics of the residual image and the characteristics of the image to be processed to obtain the category of the image to be processed.
Specifically, after the terminal obtains the residual map through the recognition layer in the recognition model, the terminal can perform feature extraction on the residual map and the image to be processed to obtain features corresponding to the residual map and the image to be processed. And then, inputting the features corresponding to the residual error map and the features corresponding to the image to be processed into a classification network in an identification layer, and performing convolution processing on the features corresponding to the residual error map and the features corresponding to the image to be processed by the classification network to obtain the class probability of the image to be processed. The category of the image to be processed can be obtained by comparing the category probability with the probability threshold, and the identification layer of the identification model outputs the category corresponding to the image to be processed.
In this embodiment, the first convolutional layer in the classification network performs convolution processing on the features corresponding to the residual map and the features corresponding to the image to be processed to obtain an output feature map. Then, the feature map output by the first convolutional layer is used as the input of the second convolutional layer; the feature map output by each previous convolutional layer is used as the input of the next convolutional layer, and the class probability of the image to be processed is output by the last convolutional layer.
In this embodiment, feature extraction is performed on the residual map and the image to be processed to obtain features corresponding to the residual map and the image to be processed, so as to obtain key feature information in the residual map and key feature information in the image to be processed. The living body identification is carried out on the image to be processed based on the characteristics of the residual image and the characteristics of the image to be processed, the category of the image to be processed can be identified based on the key characteristic information, the calculation amount is reduced, and the living body detection efficiency is improved.
Fig. 7 is a schematic diagram of living body detection of an image to be processed in one embodiment. The terminal inputs the attack sample B and the residual map into the first convolutional layer for convolution processing, takes the output features of the first convolutional layer as the input of the second convolutional layer, and takes the feature map output by each previous convolutional layer as the input of the next convolutional layer until the feature map output by the last convolutional layer is obtained. The feature map output by the last convolutional layer is then predicted through an output layer to determine whether the attack sample B belongs to the living body category.
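The Fig. 7 pipeline can be sketched as a chain of convolutional layers over the image and the residual map stacked together, ending in an output layer that predicts the living-body probability. The sketch below follows that description; the number of layers and the channel widths are illustrative assumptions.

```python
import torch
import torch.nn as nn

class LivenessClassifier(nn.Module):
    """Chain of conv layers over image + residual map, per the Fig. 7 description."""
    def __init__(self, num_layers: int = 4):
        super().__init__()
        layers, in_ch = [], 4            # 3 image channels + 1 residual channel
        for i in range(num_layers):
            out_ch = 32 * (i + 1)
            layers += [nn.Conv2d(in_ch, out_ch, 3, padding=1),
                       nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True)]
            in_ch = out_ch
        self.features = nn.Sequential(*layers)   # each layer feeds the next
        self.output = nn.Linear(in_ch, 1)        # output layer -> probability

    def forward(self, image, residual):
        x = torch.cat([image, residual], dim=1)  # stack image and residual map
        x = self.features(x).mean(dim=(-2, -1))  # pool the last feature map
        return torch.sigmoid(self.output(x))
```

Comparing the returned probability with a probability threshold then yields the living or non-living decision described above.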
In one embodiment, as shown in fig. 8, there is provided a recognition model training method, including:
The training image sample is an image including a face region, and may be an image including the face and parts of the user's body, such as a face image, an upper-body image, or a whole-body image. The training image sample may be an RGB (Red, Green, Blue) image. The training image samples comprise positive sample images and negative sample images. A positive sample image refers to a collected source image, namely a real image. A negative sample image is a forged image, i.e., an image obtained by copying the source image, changing the face in the source image, or combining it with other images so as to change all or key characteristics of the source image; it is also called an attack image.
Specifically, the terminal can directly photograph a user to acquire a training image sample, or can acquire training image samples locally or from a network, and the label corresponding to each training image sample is determined through manual labeling. The positive and negative sample images among the training image samples do not need to be matched one to one, i.e., a positive sample image and a negative sample image need not be images of the same user, and the numbers of positive and negative sample images need not be the same. It is understood that the positive and negative sample images may also be matched one to one, and the numbers of positive and negative sample images may also be the same.
Specifically, the terminal inputs training image samples into the constructed recognition model. The conversion layer of the constructed recognition model is capable of generating a new image of a different attribute than the training image sample, i.e. the first image sample.
In this embodiment, when the training image sample is a source image, the source image is converted into an attack image through the conversion layer of the recognition model. When the training image sample is an attack image, the attack image is converted into a source image through the conversion layer of the recognition model.
Specifically, the terminal can input the training image sample into the recognition layer of the constructed recognition model, and perform feature extraction on the training image sample through the recognition layer to obtain a feature map of the training image sample. The terminal can input the first image sample output by the conversion layer of the identification model into the identification layer, and performs feature extraction on the first image sample through the identification layer to obtain a feature map of the first image sample.
In this embodiment, the terminal may perform LBP feature extraction on the training image sample and the first image sample through a recognition layer of the recognition model, so as to obtain an LBP feature map corresponding to the training image sample and an LBP feature map corresponding to the first image sample.
Specifically, the terminal may calculate a feature difference between the feature map of the training image sample and the feature map of the first image sample to obtain a residual map.
In this embodiment, the terminal may calculate a feature difference between the LBP feature map corresponding to the training image sample and the LBP feature map corresponding to the first image sample to obtain a residual map.
Specifically, the terminal takes the residual map and the training image sample as input images for the living body identification process. And the terminal performs convolution processing on the residual error image and the training image sample through a recognition layer in the recognition model to obtain the class probability corresponding to the training image sample output by the recognition layer. And then, determining the category corresponding to the training image sample according to the category probability, and outputting the identification result corresponding to the training image sample.
And step 812, adjusting parameters of the recognition model according to the difference between the recognition result of the training image sample and the corresponding class label, and continuing training until a preset condition is met, so as to obtain a trained recognition model.
Specifically, the terminal compares the recognition result of the training image sample output by the recognition model with the corresponding class label, and determines the difference between the two. And adjusting the parameters of the recognition model according to the difference between the two and continuing training until the preset condition is met, so as to obtain the trained recognition model.
In this embodiment, the preset condition is that the difference between the recognition result of the training image sample and the corresponding class label is smaller than a preset difference, or that the loss value output by the recognition model is smaller than a loss threshold. When either condition is met, training stops and the trained recognition model is obtained.
In this embodiment, a training image sample and a category label corresponding to the training image sample are obtained, the category label comprising living body and non-living body. The training image sample is converted into a first image sample through the conversion layer of the recognition model, the training image sample and the first image sample corresponding to different attributes, where the attributes comprise forged image and non-forged image. Feature extraction is performed on the training image sample and the first image sample through the recognition layer of the recognition model to obtain a feature map of each, a residual map between the training image sample and the first image sample is determined from the two feature maps, and living body recognition is performed on the training image sample based on the residual map to obtain a recognition result. The parameters of the recognition model are adjusted according to the difference between the recognition result and the corresponding class label, and training continues until the preset condition is met, thereby obtaining the trained recognition model. The trained recognition model can perform living body discrimination on a single image without requiring the user to cooperate with any facial action, which reduces the cost of detection and improves the accuracy of living body recognition.
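A compact sketch of one training step under this procedure is given below: the conversion layer produces the first image sample, a residual map is formed from the two feature maps, and a binary cross-entropy loss between the recognition result and the class label drives the parameter update. The methods `convert`, `extract_features`, and `classify` are hypothetical stand-ins for the conversion and recognition layers, not the disclosure's actual API.

```python
import torch
import torch.nn.functional as F

def train_step(model, optimizer, sample, label):
    """One update of the recognition model.

    label: float tensor of shape (N, 1); 1.0 for living, 0.0 for non-living.
    """
    first_image = model.convert(sample)          # conversion layer (hypothetical)
    feat_a = model.extract_features(sample)      # recognition-layer feature maps
    feat_b = model.extract_features(first_image)
    residual = (feat_a - feat_b).abs()           # residual map from feature difference
    prob = model.classify(sample, residual)      # living-body class probability
    loss = F.binary_cross_entropy(prob, label)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()  # training stops once the loss falls below the loss threshold
```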
In one embodiment, as shown in FIG. 9, the conversion layer of the recognition model includes a first generator and a second generator; the first generator converts a forged image into a non-forged image, and the second generator converts a non-forged image into a forged image;
the training mode of the generator in the conversion layer of the recognition model comprises the following steps:
Specifically, the terminal may obtain a negative sample image from the training image samples. The negative sample image is input to the first generator in the conversion layer of the recognition model. The first generator converts forged images into non-forged images, so the attribute of every image output by the first generator is a non-forged image. The negative sample image is converted into a second image by the first generator, and the attribute of the second image is a non-forged image.
Specifically, the second image output by the first generator is input into a second generator, the second generator converts the non-forged image into a forged image, and the attributes of the image output by the second generator are both forged images. The second image is converted to a third image by a second generator. The third image and the negative sample image are both forged images, and the attributes of the third image and the negative sample image are the same.
And step 908, determining the similarity between the negative sample image and the third image.
Specifically, after the negative sample image is converted into the second image, the second image is converted into the third image, and similar features exist between the negative sample image and the third image. The terminal calculates the similarity between the negative sample image and the third image. Further, the terminal may determine the feature vector corresponding to the negative sample image and the feature vector corresponding to the third image through an LBP algorithm. And calculating the similarity between the negative sample image and the third image according to the feature vector corresponding to the negative sample image and the feature vector corresponding to the third image.
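One plausible realization of this similarity computation, consistent with the LBP feature vectors mentioned above, is to compare histogram feature vectors of the two LBP maps with cosine similarity. This is a sketch of one workable measure, not the measure mandated here.

```python
import numpy as np

def lbp_similarity(lbp_a: np.ndarray, lbp_b: np.ndarray) -> float:
    """Cosine similarity between LBP histogram feature vectors (illustrative)."""
    hist_a, _ = np.histogram(lbp_a, bins=256, range=(0, 256), density=True)
    hist_b, _ = np.histogram(lbp_b, bins=256, range=(0, 256), density=True)
    denom = np.linalg.norm(hist_a) * np.linalg.norm(hist_b)
    return float(np.dot(hist_a, hist_b) / denom) if denom else 0.0
```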
And step 910, when the similarity between the negative sample image and the third image is smaller than the similarity threshold, adjusting the parameters of the first generator and the second generator and continuing training until a training stopping condition is met, so as to obtain the trained first generator and second generator.
Specifically, the terminal obtains a similarity threshold value, and compares the similarity of the negative sample image and the third image with the similarity threshold value. When the similarity between the negative sample image and the third image is smaller than the similarity threshold, it is indicated that the feature similarity between the negative sample image and the third image is not satisfactory, i.e., the performance of the second generator is not good. It also indicates that the performance of the first generator generating the second image is not satisfactory. The terminal adjusts the parameters of the first generator and the second generator and continues training until the first generator and the second generator meet the training stopping condition, and the trained first generator and second generator are obtained.
In this embodiment, the training stop condition is that the similarity between the negative sample image and the third image is greater than or equal to the similarity threshold. When the similarity between the negative sample image and the third image is greater than or equal to the similarity threshold, the feature similarity between the negative sample image and the third image meets the requirement, that is, the performance of the second generator meets the requirement. Since the third image is converted from the second image, and the second image is generated from the negative sample image by the first generator, when the performance of the second generator meets the requirement, the performance of the first generator also meets the requirement.
In this embodiment, a negative sample image is obtained from a training image sample, the negative sample image is converted into a second image by the first generator, the negative sample image and the second image have different attributes, and the second image is converted into a third image by the second generator. The third image and the negative sample image have the same attribute, whether the performance of the second generator meets the requirement can be judged by determining the similarity between the negative sample image and the third image, and whether the performance of the first generator meets the requirement can be judged by judging whether the performance of the second generator meets the requirement. And when the similarity of the negative sample image and the third image is smaller than a similarity threshold value, adjusting the parameters of the first generator and the second generator and continuing training until a training stopping condition is met, so as to obtain the trained first generator and second generator. The trained first generator can convert the forged image into the non-forged image, and the second generator converts the non-forged image into the forged image, so that one image is converted into a plurality of images with different attributes, the training image sample can be expanded, the data set is enhanced, and the cost of data acquisition is reduced.
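The cycle in steps 902-910 resembles a cycle-consistency check: a forged image is mapped into the non-forged domain and back, and the reconstruction is compared with the original. The sketch below illustrates that loop; the similarity measure (a mean absolute difference turned into a score) and the single shared optimizer over both generators are assumptions for illustration.

```python
import torch

def generator_cycle_step(gen_b_to_a, gen_a_to_b, optimizer, negative_sample,
                         similarity_threshold=0.9):
    """One cycle-consistency update for the two generators (illustrative).

    optimizer is assumed to hold the parameters of both generators.
    """
    second_image = gen_b_to_a(negative_sample)   # forged -> non-forged
    third_image = gen_a_to_b(second_image)       # non-forged -> forged again
    # Similarity between the negative sample image and the third image;
    # 1.0 means identical, lower means larger reconstruction error.
    reconstruction_error = (negative_sample - third_image).abs().mean()
    similarity = 1.0 - reconstruction_error
    if similarity < similarity_threshold:
        # Parameters of both generators are adjusted and training continues.
        optimizer.zero_grad()
        reconstruction_error.backward()
        optimizer.step()
    return similarity.item()
```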
In one embodiment, the conversion layer of the recognition model includes a first generator and a second generator; the first generator converts a forged image into a non-forged image, and the second generator converts a non-forged image into a forged image;
the training mode of the generator in the conversion layer of the recognition model comprises the following steps:
acquiring a positive sample image from the training image samples, wherein the attribute of the positive sample image is a non-forged image;
converting the positive sample image into a fourth image through a second generator, wherein the positive sample image and the fourth image have different attributes;
converting the fourth image into a fifth image through the first generator, wherein the fifth image and the positive sample image have the same attribute;
determining the similarity of the positive sample image and the fifth image;
and when the similarity between the positive sample image and the fifth image is smaller than a similarity threshold value, adjusting the parameters of the first generator and the second generator and continuing training until a training stopping condition is met, so as to obtain the trained first generator and second generator.
It will be appreciated that the generator in the conversion layer of the recognition model may also be trained using positive sample images in this way. The principle of training the generator using a positive sample image can refer to the process from step 902 to step 910, and is not described herein again.
In one embodiment, as shown in FIG. 10, a discriminator is also included in the translation layer of the recognition model; the training mode of the discriminator in the conversion layer of the recognition model comprises the following steps:
And step 1004, identifying the second image and the positive sample image through the discriminator, and determining the attribute recognition results corresponding to the second image and the positive sample image.
Wherein the discriminator is used for discriminating whether an image is a forged image or a non-forged image; that is, the discriminator is used to discriminate the attribute of the image.
Specifically, the terminal may obtain a positive sample image from the training image samples, the positive sample image being a non-forged image. Next, the terminal inputs the positive sample image and the second image into the discriminator in the conversion layer of the recognition model. The discriminator discriminates the positive sample image and the second image, and outputs an attribute recognition result for each. The second image is a non-forged image converted from a negative sample image, and an untrained discriminator may recognize the second image as a forged image.
And step 1006, when the attribute identification results corresponding to the second image and the positive sample image are different, adjusting the parameters of the discriminator and continuing training until the training stop condition is met, so as to obtain the trained discriminator.
Specifically, when the attribute recognition result of the second image output by the discriminator is different from that of the positive sample image, the discrimination capability of the discriminator does not meet the requirement. For example, the discriminator outputs the attribute recognition result of the second image as a forged image while outputting the attribute recognition result of the positive sample image as a non-forged image. The terminal adjusts the parameters of the discriminator and continues training until the training stop condition is met, obtaining the trained discriminator.
In this embodiment, the discriminator's training stop condition is that the attribute recognition results of both the second image and the positive sample image are non-forged images. Training stops when the attribute recognition results of the second image and the positive sample image output by the discriminator are both non-forged images, and the trained discriminator is obtained.
In this embodiment, a positive sample image whose attribute is a non-forged image is obtained from the training image samples; the discriminator identifies the second image and the positive sample image, which share the same attribute, and the attribute recognition results corresponding to the second image and the positive sample image are determined, so as to judge whether the discrimination performance of the discriminator meets the requirement. When the attribute recognition results corresponding to the second image and the positive sample image are different, the parameters of the discriminator are adjusted and training continues until the training stop condition is met, obtaining the trained discriminator, so that in the application process of the recognition model the trained discriminator can discriminate the attribute of the image to be processed. The attribute of the image to be converted is then determined according to the attribute of the image to be processed, the generator for processing the image to be processed is determined accordingly, and a converted image with an attribute different from that of the image to be processed is obtained accurately.
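In the same illustrative style, the sketch below trains the discriminator toward the stop condition described above: it should output the non-forged attribute for both the genuine positive sample and the generator-converted second image, and its parameters are updated until the two attribute recognition results agree. The sigmoid-probability output and loss choice are assumptions.

```python
import torch
import torch.nn.functional as F

def discriminator_step(discriminator, optimizer, positive_sample, second_image):
    """Train the discriminator until both inputs are recognized as non-forged.

    The discriminator is assumed to output a probability in (0, 1), where
    1 denotes the non-forged attribute.
    """
    second_image = second_image.detach()        # do not backprop into the generator
    pred_pos = discriminator(positive_sample)   # attribute result, positive sample
    pred_second = discriminator(second_image)   # attribute result, converted image
    target = torch.ones_like(pred_pos)          # desired attribute: non-forged
    loss = (F.binary_cross_entropy(pred_pos, target)
            + F.binary_cross_entropy(pred_second, target))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    # True once both attribute recognition results agree on non-forged.
    return bool((pred_pos.round() == 1).all()) and bool((pred_second.round() == 1).all())
```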
In one embodiment, the conversion layer of the recognition model further comprises a discriminator; the training mode of the discriminator in the conversion layer of the recognition model comprises the following steps:
acquiring a negative sample image from a training image sample, wherein the attribute of the negative sample image is a forged image;
identifying the fourth image and the negative sample image through the discriminator, and determining attribute identification results corresponding to the fourth image and the negative sample image;
and when the attribute recognition results corresponding to the fourth image and the negative sample image are different, adjusting parameters of the discriminator and continuing training until the training stopping condition is met, so that the trained discriminator is obtained.
It will be appreciated that the discriminator in the conversion layer of the recognition model may also be trained using negative sample images. The principle of training the discriminator using a negative sample image can likewise refer to the process from step 1002 to step 1006, and is not described herein again.
FIG. 11 illustrates an architecture diagram of the conversion layer in the recognition model in one embodiment. The conversion layer of the recognition model comprises 4 generators and two discriminators, and two of the 4 generators are identical. The generator BtoA is the first generator, and the generator AtoB is the second generator. The discriminator A is used for discriminating between the attack sample B and the attack sample A_B, and the discriminator B is used for discriminating between the real sample A and the real sample B_A; the attributes are attack sample and real sample. An attack sample is a forged image, and a real sample is a non-forged image. The attack sample B and the real sample A may correspond to the same user or to different users.
The terminal inputs the attack sample B into the generator BtoA to be trained, converts the attack sample B into the real sample B_A through the generator BtoA to be trained, and converts the real sample B_A into the attack sample B' through the generator AtoB to be trained. Then, the terminal calculates the similarity between the attack sample B and the attack sample B'. When the similarity is smaller than the similarity threshold, the feature difference between the attack sample B and the reconstructed attack sample B' is too obvious and the similarity is not high, indicating that the reconstruction performance of the generator AtoB is not good; from this it can be presumed that the reconstruction performance of the generator BtoA is also not good. The terminal can then adjust the parameters of the generator BtoA and the generator AtoB and train repeatedly. When the similarity between the attack sample B and the attack sample B' is greater than or equal to the similarity threshold, the feature difference between the attack sample B and the reconstructed attack sample B' is very small and the similarity is very high, indicating that the reconstruction performance of the generator AtoB meets the requirements; from this it can be presumed that the reconstruction performance of the generator BtoA also meets the requirements. The training of the generator AtoB and the generator BtoA is then finished, and the trained generator AtoB and generator BtoA are obtained.
Then, the terminal acquires the real sample B_A output by the trained generator BtoA and acquires a real sample A. The real sample B_A and the real sample A may correspond to the same user or to different users. The terminal inputs the real sample B_A and the real sample A into the discriminator B to be trained. The discriminator B to be trained identifies which of the real sample B_A and the real sample A is an image converted by the generator and which is not, and outputs the attribute recognition results of the real sample B_A and the real sample A. When the discriminator B outputs the attribute recognition result of the real sample B_A as an image converted by the generator, it indicates that the discriminator judges the real sample B_A to be an attack image rather than a real image. When the discriminator B outputs the attribute recognition result of the real sample A as a real image, it indicates that the discriminator judges the real sample A to be a real image rather than a generator-converted image. But the recognition model requires that the discriminator B recognize both the real sample B_A and the real sample A as real images rather than as images converted by the generator, so as to ensure that images converted by the generator conform to the actual situation. The terminal adjusts the parameters of the discriminator B, stops training when the discriminator B recognizes both the real sample B_A and the real sample A as real images, and thereby obtains the trained discriminator B.
In this embodiment, the terminal can input the real sample A into the generator AtoB to be trained, convert the real sample A into the attack sample A_B through the generator AtoB to be trained, and convert the attack sample A_B into the real sample A' through the generator BtoA to be trained. Then, the terminal calculates the similarity between the real sample A and the real sample A'. When the similarity is smaller than the similarity threshold, the feature difference between the real sample A and the reconstructed real sample A' is too obvious and the similarity is not high, indicating that the reconstruction performance of the generator BtoA is not good; from this it can be presumed that the reconstruction performance of the generator AtoB is also not good. The terminal can then adjust the parameters of the generator BtoA and the generator AtoB and train repeatedly. When the similarity between the real sample A and the real sample A' is greater than or equal to the similarity threshold, the feature difference between the real sample A and the reconstructed real sample A' is very small and the similarity is very high, indicating that the reconstruction performance of the generator BtoA meets the requirements; from this it can be presumed that the reconstruction performance of the generator AtoB also meets the requirements. The training of the generator AtoB and the generator BtoA is then finished, and the trained generator AtoB and generator BtoA are obtained.
Then, the terminal acquires the attack sample A_B output by the trained generator AtoB and acquires an attack sample B. The attack sample A_B and the attack sample B may correspond to the same user or to different users. The terminal inputs the attack sample A_B and the attack sample B into the discriminator A to be trained. The discriminator A to be trained identifies which of the attack sample A_B and the attack sample B is an image converted by the generator and which is not, and outputs the attribute recognition results of the attack sample A_B and the attack sample B. When the discriminator A outputs the attribute recognition result of the attack sample A_B as an image converted by the generator and the attribute recognition result of the attack sample B as an image not converted by the generator, the terminal adjusts the parameters of the discriminator A, stops training when the discriminator A recognizes both the attack sample A_B and the attack sample B as images not converted by the generator, and thereby obtains the trained discriminator A.
FIG. 12 is an architecture diagram of the recognition model in one embodiment. The generation process comprises two parts, encoding and decoding. The encoding process can adopt a deep convolutional network, generally with 5 convolution blocks, and the number of convolution blocks can be set as required; each convolution block includes three layers, namely Conv (Convolution layer), BN (Batch Normalization layer), and ReLU (Rectified Linear Unit). The decoding process can adopt a deep deconvolution network, generally similar in structure to the encoding process; each deconvolution block contains three layers, namely TransConv (Transposed Convolution layer), BN, and ReLU. Through the generation process, the input image can be converted into an image with an attribute different from that of the input image, namely a reconstructed image. The recognition process is a deep convolutional network that extracts features from the input image and the reconstructed image, determines a residual map between them, performs convolution processing on the residual map and the input image, and finally uses a convolutional layer with a one-dimensional output to determine whether the extracted features belong to a specific category, thereby obtaining the category of the input image.
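The generator structure just described (encoding with Conv-BN-ReLU blocks, decoding with mirrored transposed-convolution blocks) can be sketched as follows. Channel counts, strides, and the absence of a final output activation are illustrative assumptions.

```python
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # One encoding block: Conv -> BN -> ReLU, halving the spatial size.
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, stride=2, padding=1),
                         nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))

def deconv_block(in_ch, out_ch):
    # One decoding block: TransConv -> BN -> ReLU, doubling the spatial size.
    return nn.Sequential(nn.ConvTranspose2d(in_ch, out_ch, 4, stride=2, padding=1),
                         nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))

class Generator(nn.Module):
    """Encoder-decoder generator: 5 conv blocks, then 5 deconv blocks."""
    def __init__(self):
        super().__init__()
        chs = [3, 32, 64, 128, 256, 512]
        self.encoder = nn.Sequential(*[conv_block(chs[i], chs[i + 1])
                                       for i in range(5)])
        self.decoder = nn.Sequential(*[deconv_block(chs[5 - i], chs[4 - i])
                                       for i in range(5)])

    def forward(self, x):
        # Reconstructed image with an attribute different from the input.
        return self.decoder(self.encoder(x))
```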
In one embodiment, there is provided a living body identification method including:
the terminal obtains a training image sample and a category label corresponding to the training image sample, wherein the category label comprises a living body and a non-living body.
Then, the terminal converts the training image sample into a first image sample through the conversion layer of the recognition model, the training image sample and the first image sample corresponding to different attributes; the attributes include forged image and non-forged image.
And then, the terminal performs feature extraction on the training image sample and the first image sample through a recognition layer of the recognition model to obtain a feature map of the training image sample and a feature map of the first image sample.
Further, the terminal determines a residual map between the training image sample and the first image sample according to the feature map of the training image sample and the feature map of the first image sample.
And then, the terminal identifies the living body of the training image sample based on the residual image to obtain the identification result of the training image sample.
Further, the terminal adjusts parameters of the recognition model according to the difference between the recognition result of the training image sample and the corresponding class label and continues training until the preset condition is met, and the trained recognition model is obtained.
And then, the terminal acquires an image to be processed and converts it into a first image through the conversion layer of the recognition model, the image to be processed and the first image corresponding to different attributes, the attributes including forged image and non-forged image.
Then, the terminal divides the image to be processed and the first image through the identification layer of the identification model to obtain each region of the image to be processed and each region of the first image;
further, the terminal determines characteristic values respectively corresponding to all the areas in the image to be processed, and determines characteristic values respectively corresponding to all the areas in the first image;
further, the terminal determines a feature map of the image to be processed according to the feature values respectively corresponding to the regions in the image to be processed; and determining the characteristic map of the first image according to the characteristic values respectively corresponding to the areas in the first image.
Then, the terminal determines a pixel point pair between the feature map of the image to be processed and the feature map of the first image; and determining a characteristic difference value between two pixel points in the pixel point pairs to obtain a corresponding characteristic difference value of each pixel point pair.
And then, the terminal performs normalization processing on the characteristic difference value corresponding to each pixel point pair to obtain a weight value corresponding to each pixel point pair.
Further, the terminal generates a residual image between the image to be processed and the first image according to the corresponding weight value of each pixel point pair.
And then, the terminal acquires a weight value corresponding to each pixel point in the residual image, and performs convolution processing on the image to be processed through the identification layer of the identification model to obtain a first characteristic value corresponding to each pixel point in the image to be processed.
And then, the terminal determines a second characteristic value corresponding to each pixel point in the image to be processed according to the weight value corresponding to each pixel point in the residual image and the first characteristic value corresponding to each pixel point in the image to be processed.
Further, the terminal determines the category probability of the image to be processed according to the second characteristic value corresponding to each pixel point in the image to be processed, and compares the category probability with a probability threshold; and when the class probability is greater than the probability threshold, the image to be processed is a living body image. And when the class probability is less than or equal to the probability threshold value, the image to be processed is a non-living body image.
In this embodiment, the to-be-processed image is converted into a first image with a different attribute from the to-be-processed image through a trained conversion layer of the recognition model, and feature maps of the to-be-processed image and the first image are extracted to obtain key information of the to-be-processed image and the first image. And calculating the characteristic difference value between the matched pixel points in the characteristic diagram of the image to be processed and the characteristic diagram of the first image to generate a residual error diagram, wherein the residual error diagram represents the characteristic difference between the image to be processed and the first image. And applying the weighted value in the residual image to convolution processing of the image to be processed, so that pixel points with obvious characteristic difference in the image to be processed can be screened out. The type of the image to be processed is determined based on the pixel points with obvious characteristic difference, and whether the image to be processed belongs to a living body or not is identified more accurately. In addition, in the embodiment, the living body can be distinguished only from a single image without any face action by the cooperation of the user, so that the detection cost is reduced, and the accuracy of living body identification is improved.
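Stringing the steps above together, single-image inference proceeds as: convert, extract feature maps, build the residual weight map, run the weighted classification, and threshold the class probability. The sketch below uses the same hypothetical component names introduced earlier and assumes a batch of one.

```python
import torch

@torch.no_grad()
def recognize_liveness(model, image, prob_threshold=0.5):
    """Single-image liveness decision following the steps described above."""
    first_image = model.convert(image)                 # conversion layer
    feat_a = model.extract_features(image)             # feature map of the image
    feat_b = model.extract_features(first_image)       # feature map of first image
    diff = (feat_a - feat_b).abs()                     # per-pixel feature difference
    # Normalize differences into weight values (the residual map).
    residual = (diff - diff.min()) / (diff.max() - diff.min() + 1e-8)
    prob = model.classify(image, residual)             # class probability
    return "living" if prob.item() > prob_threshold else "non-living"
```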
In one embodiment, a living body recognition method for recognizing whether a face image of a user is a living body face is provided, including:
the terminal obtains a training image sample and a class label corresponding to the training image sample, wherein the class label comprises a living body face and a non-living body face, and the training image samples are face images.
Then, the terminal converts the training image sample into a first image sample through the conversion layer of the recognition model, the training image sample and the first image sample corresponding to different attributes; the attributes include forged image and non-forged image.
And then, the terminal performs feature extraction on the training image sample and the first image sample through a recognition layer of the recognition model to obtain a feature map of the training image sample and a feature map of the first image sample.
Further, the terminal determines a residual map between the training image sample and the first image sample according to the feature map of the training image sample and the feature map of the first image sample.
And then, the terminal identifies the living body of the training image sample based on the residual image to obtain the identification result of the training image sample.
Further, the terminal adjusts parameters of the recognition model according to the difference between the recognition result of the training image sample and the corresponding class label and continues training until the preset condition is met, and the trained recognition model is obtained.
And then, the terminal acquires an image to be processed, which is a face image, and converts it into a first image through the conversion layer of the recognition model, the image to be processed and the first image corresponding to different attributes, the attributes including forged image and non-forged image.
Then, the terminal divides the image to be processed and the first image through the identification layer of the identification model to obtain each region of the image to be processed and each region of the first image;
further, the terminal determines characteristic values respectively corresponding to all the areas in the image to be processed, and determines characteristic values respectively corresponding to all the areas in the first image;
further, the terminal determines a feature map of the image to be processed according to the feature values respectively corresponding to the regions in the image to be processed; and determining the characteristic map of the first image according to the characteristic values respectively corresponding to the areas in the first image.
Then, the terminal determines a pixel point pair between the feature map of the image to be processed and the feature map of the first image; and determining a characteristic difference value between two pixel points in the pixel point pairs to obtain a corresponding characteristic difference value of each pixel point pair.
And then, the terminal performs normalization processing on the characteristic difference value corresponding to each pixel point pair to obtain a weight value corresponding to each pixel point pair.
Further, the terminal generates a residual image between the image to be processed and the first image according to the corresponding weight value of each pixel point pair.
And then, the terminal acquires a weight value corresponding to each pixel point in the residual image, and performs convolution processing on the image to be processed through the identification layer of the identification model to obtain a first characteristic value corresponding to each pixel point in the image to be processed.
And then, the terminal determines a second characteristic value corresponding to each pixel point in the image to be processed according to the weight value corresponding to each pixel point in the residual image and the first characteristic value corresponding to each pixel point in the image to be processed.
Further, the terminal determines the category probability of the image to be processed according to the second characteristic value corresponding to each pixel point in the image to be processed, and compares the category probability with a probability threshold; and when the class probability is greater than the probability threshold, the image to be processed is a living body face image. And when the class probability is less than or equal to the probability threshold, the image to be processed is a non-living body face image.
In this embodiment, the to-be-processed image is converted into a first image with a different attribute from the to-be-processed image through a trained conversion layer of the recognition model, and feature maps of the to-be-processed image and the first image are extracted to obtain key information of the to-be-processed image and the first image. And calculating the characteristic difference value between the matched pixel points in the characteristic diagram of the image to be processed and the characteristic diagram of the first image to generate a residual error diagram, wherein the residual error diagram represents the characteristic difference between the image to be processed and the first image. And applying the weighted value in the residual image to convolution processing of the image to be processed, so that pixel points with obvious characteristic difference in the image to be processed can be screened out. The classification of the face image is determined based on the pixel points with obvious characteristic difference, and whether the face image belongs to a living face or not is identified more accurately. In addition, in the embodiment, the living human face can be distinguished from a single image without any face action by the cooperation of a user, so that the detection cost is reduced, and the accuracy of living human face recognition is improved.
It should be understood that although the various steps in the flowcharts of fig. 2-12 are shown in the order indicated by the arrows, the steps are not necessarily performed in that order. Unless explicitly stated otherwise herein, the execution of these steps is not strictly limited in order, and they may be performed in other orders. Moreover, at least some of the steps in fig. 2-12 may include multiple sub-steps or stages, which are not necessarily performed at the same time but may be performed at different times, and which are not necessarily performed in sequence but may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 13, there is provided a living body identification apparatus, which may be a part of a computer device using a software module or a hardware module, or a combination of the two, specifically comprising: a conversion module 1302, an extraction module 1304, a determination module 1306, and an identification module 1308, wherein:
the conversion module 1302 is configured to obtain an image to be processed, and convert the image to be processed into a first image through a conversion layer of the identification model, where the image to be processed and the first image correspond to different attributes, and the attributes include a forged image and an unforeseen forged image.
The extraction module 1304 is configured to perform feature extraction on the image to be processed and the first image through the identification layer of the identification model, so as to obtain a feature map of the image to be processed and a feature map of the first image.
And a determining module 1306, configured to determine a residual map between the image to be processed and the first image according to the feature map of the image to be processed and the feature map of the first image.
The identifying module 1308 is configured to perform living body identification on the image to be processed based on the residual map, so as to obtain a category of the image to be processed, where the category is a living body or a non-living body.
In the living body recognition device, an image to be processed is acquired and converted into a first image through the conversion layer of the recognition model, the image to be processed and the first image corresponding to different attributes, the attributes comprising forged image and non-forged image. Feature extraction is performed on the image to be processed and the first image through the recognition layer of the recognition model to obtain a feature map of each, and a residual map between the image to be processed and the first image is determined from the two feature maps, so that the difference between the image to be processed and the first image can be determined. Living body recognition is performed on the image to be processed based on the residual map to obtain the category of the image to be processed, the category being living body or non-living body. The user does not need to cooperate with any facial action; living body recognition can be performed from a single image, which reduces the detection cost and improves the accuracy of living body recognition.
In one embodiment, the extraction module 1304 is further configured to: dividing the image to be processed and the first image through a recognition layer of the recognition model to obtain each region of the image to be processed and each region of the first image; determining characteristic values respectively corresponding to all the areas in the image to be processed, and determining the characteristic values respectively corresponding to all the areas in the first image; determining a feature map of the image to be processed according to the feature values respectively corresponding to the regions in the image to be processed; and determining the characteristic map of the first image according to the characteristic values respectively corresponding to the areas in the first image.
In this embodiment, the image to be processed and the first image are divided by the identification layer of the identification model to obtain each region of the image to be processed and each region of the first image, the feature values respectively corresponding to each region in the image to be processed are determined, and the feature values respectively corresponding to each region in the first image are determined, so that the key feature information of each region in the image can be obtained. Determining a feature map of the image to be processed according to the feature values respectively corresponding to the regions in the image to be processed, determining the feature map of the first image according to the feature values respectively corresponding to the regions in the first image, and generating the feature map according to the key feature information, so that the feature map contains all key feature information in the image, and feature differences existing between the image to be processed and the first image are visually displayed.
In one embodiment, the determining module 1306 is further configured to: determining the characteristic variation between the characteristic diagram of the image to be processed and the characteristic diagram of the first image; and generating a residual image between the image to be processed and the first image according to the characteristic variation.
In the embodiment, the characteristic variation between the characteristic diagram of the image to be processed and the characteristic diagram of the first image is determined; and generating a residual map between the image to be processed and the first image according to the characteristic variation, so that the characteristic difference between the image to be processed and the first image can be accurately represented by the residual map, and whether the image to be processed is a living body can be accurately identified based on the characteristic difference.
In one embodiment, the determining module 1306 is further configured to: determining a pixel point pair between the feature map of the image to be processed and the feature map of the first image; determining a characteristic difference value between two pixel points in the pixel point pair to obtain a characteristic difference value corresponding to each pixel point pair; and generating a residual image between the image to be processed and the first image according to the corresponding characteristic difference value of each pixel point pair.
In this embodiment, a feature difference value between two pixel points in a pixel point pair is determined by determining a pixel point pair between a feature map of an image to be processed and a feature map of a first image, and a feature difference value corresponding to each pixel point pair is obtained, so that a feature difference between pixel points matched with each other in the two feature maps is calculated. And generating a residual error map according to the characteristic difference values, so that the characteristic difference between the image to be processed and the first image can be visually displayed through the residual error map.
In one embodiment, the determining module 1306 is further configured to: normalizing the characteristic difference value corresponding to each pixel point pair to obtain a weight value corresponding to each pixel point pair; and generating a residual image between the image to be processed and the first image according to the corresponding weight value of each pixel point pair.
In this embodiment, the feature difference value corresponding to each pixel point pair is normalized to obtain a weight value corresponding to each pixel point pair, and a residual map between the image to be processed and the first image is generated according to the weight value corresponding to each pixel point pair, so that the residual map is used as a weight map to mark a position in the image to be processed where a feature difference exists in the image to be processed in the identification process, and the accuracy of identifying whether the image to be processed is a living body can be improved.
In one embodiment, the identification module 1308 is further configured to: acquiring a weight value corresponding to each pixel point in the residual image; and performing living body identification on the image to be processed based on the weight value corresponding to each pixel point in the residual error image to obtain the category of the image to be processed.
In this embodiment, by obtaining the weight value corresponding to each pixel point in the residual map, the living body recognition is performed on the image to be processed based on the weight value corresponding to each pixel point in the residual map, so that the residual map is used as a weight map to mark the position of the image to be processed where the characteristic difference exists in the recognition process, and the recognition model focuses more on the position of the characteristic difference, thereby improving the recognition accuracy and accurately obtaining the category corresponding to the image to be processed.
In one embodiment, the identification module 1308 is further configured to: performing convolution processing on the image to be processed through the identification layer of the identification model to obtain a first characteristic value corresponding to each pixel point in the image to be processed; determining a second characteristic value corresponding to each pixel point in the image to be processed according to the weight value corresponding to each pixel point in the residual image and the first characteristic value corresponding to each pixel point in the image to be processed; and determining the category of the image to be processed according to the second characteristic value corresponding to each pixel point in the image to be processed.
In this embodiment, convolution processing is performed on the image to be processed through the identification layer of the identification model to obtain a first feature value for each pixel point, and a second feature value for each pixel point is determined from the weight value of the corresponding pixel point in the residual map together with its first feature value. The weight values of the residual map are thereby applied during the convolution of the image to be processed, so that the pixel points with pronounced feature differences can be singled out. The category of the image to be processed is then determined from the second feature values, so the residual map is put to use in living body detection, and whether the image to be processed belongs to a living body can be identified more accurately.
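An illustrative sketch of how the residual-map weights might gate the convolutional features; the layer sizes, the element-wise weighting rule, and the class names are assumptions, since the embodiment only requires that the second feature values be derived from the weights and the first feature values:

```python
import torch
import torch.nn as nn

class WeightedRecognitionHead(nn.Module):
    """Gates convolutional features with a residual-map weight before classifying."""

    def __init__(self, in_channels: int = 3, hidden: int = 32):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, hidden, kernel_size=3, padding=1)
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(hidden, 2)
        )

    def forward(self, image: torch.Tensor, weights: torch.Tensor) -> torch.Tensor:
        # image: (N, C, H, W); weights: (N, H, W) from the normalized residual map
        first = self.conv(image)               # first feature values
        second = first * weights.unsqueeze(1)  # second = weight x first, broadcast over channels
        return self.classifier(second)         # logits: living body vs non-living body
```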
In one embodiment, the identification module 1308 is further configured to: extract features from the residual map and from the image to be processed to obtain the features corresponding to the residual map and the features corresponding to the image to be processed; and perform living body identification on the image to be processed based on the features of the residual map and the features of the image to be processed to obtain the category of the image to be processed.
In this embodiment, feature extraction is performed on the residual map and on the image to be processed to obtain the features corresponding to each, that is, the key feature information of the residual map and of the image to be processed. Living body identification is then performed based on the features of the residual map together with the features of the image to be processed, so the category of the image to be processed can be identified from this key feature information; this reduces the amount of computation and improves the efficiency of living body detection.
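One possible fusion scheme, sketched under the assumption that the two feature vectors are simply concatenated before classification (the embodiment leaves the fusion operator open):

```python
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    """Classifies a sample by fusing residual-map features with image features."""

    def __init__(self, feat_dim: int = 128):
        super().__init__()
        self.fc = nn.Linear(2 * feat_dim, 2)  # two categories: living body / non-living body

    def forward(self, residual_feat: torch.Tensor, image_feat: torch.Tensor) -> torch.Tensor:
        # both inputs: (N, feat_dim), produced by some upstream feature extractor
        fused = torch.cat([residual_feat, image_feat], dim=-1)
        return self.fc(fused)
```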
In one embodiment, as shown in fig. 14, a recognition model training apparatus is provided, which may be implemented as part of a computer device by software modules, hardware modules, or a combination of the two. The apparatus specifically includes: an acquisition module 1402, a sample conversion module 1404, a feature extraction module 1406, a residual map module 1408, a living body identification module 1410, and an adjustment module 1412. Wherein:
an acquisition module 1402, configured to acquire a training image sample and a category label corresponding to the training image sample, where the category label includes a living body and a non-living body;
a sample conversion module 1404, configured to convert the training image sample into a first image sample through a conversion layer of the recognition model, where the training image sample and the first image sample correspond to different attributes, the attributes including a forged image and a non-forged image;
the feature extraction module 1406 is configured to perform feature extraction on the training image sample and the first image sample through a recognition layer of the recognition model to obtain a feature map of the training image sample and a feature map of the first image sample;
a residual map module 1408 for determining a residual map between the training image sample and the first image sample based on the feature map of the training image sample and the feature map of the first image sample;
the living body identification module 1410 is used for carrying out living body identification on the training image sample based on the residual map to obtain an identification result of the training image sample;
and an adjustment module 1412, configured to adjust parameters of the recognition model according to the difference between the recognition result of the training image sample and the corresponding class label and continue training, stopping when a preset condition is met, so that a trained recognition model is obtained.
In this embodiment, a training image sample and its corresponding category label are obtained, where the category label is either living body or non-living body. The training image sample is converted into a first image sample through the conversion layer of the recognition model, the training image sample and the first image sample corresponding to different attributes (forged image and non-forged image). Feature extraction is performed on the training image sample and the first image sample through the recognition layer of the recognition model to obtain their feature maps, a residual map between the two is determined from the feature maps, and living body recognition is performed on the training image sample based on the residual map to obtain a recognition result. The parameters of the recognition model are adjusted according to the difference between the recognition result and the corresponding class label, and training continues until a preset condition is met, yielding a trained recognition model. The trained model can perform living body discrimination on a single image without requiring the user to cooperate by performing any facial action, which reduces the cost of detection and improves the accuracy of living body identification.
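A compressed sketch of this training loop; the cross-entropy objective, the optimizer, and the epoch-budget stopping rule stand in for the unspecified preset condition, and model and loader are hypothetical stand-ins for the modules above:

```python
import torch
import torch.nn as nn

def train_recognition_model(model, loader, epochs: int = 10, lr: float = 1e-4):
    """model(images) is assumed to return class logits after internally converting
    each sample, extracting features, and building the residual map."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):                # epoch budget stands in for the preset condition
        for images, labels in loader:      # labels: 1 = living body, 0 = non-living body
            loss = loss_fn(model(images), labels)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model
```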
In one embodiment, the conversion layer of the recognition model comprises a first generator and a second generator; the first generator converts a forged image into a non-forged image, and the second generator converts a non-forged image into a forged image;
the sample conversion module 1404 is also configured to: acquire a negative sample image from the training image samples, where the attribute of the negative sample image is a forged image; convert the negative sample image into a second image through the first generator, the negative sample image and the second image having different attributes; convert the second image into a third image through the second generator, the third image and the negative sample image having the same attribute; determine the similarity between the negative sample image and the third image; and, when the similarity between the negative sample image and the third image is smaller than a similarity threshold, adjust the parameters of the first generator and the second generator and continue training until a training stopping condition is met, thus obtaining the trained first generator and second generator.
In this embodiment, a negative sample image is acquired from the training image samples and converted by the first generator into a second image with a different attribute; the second generator then converts the second image into a third image, which has the same attribute as the negative sample image. By measuring the similarity between the negative sample image and the third image, one can judge whether the performance of the second generator meets the requirement, and from that, whether the performance of the first generator does as well. When the similarity is smaller than the similarity threshold, the parameters of both generators are adjusted and training continues until the training stopping condition is met, yielding the trained first and second generators. The trained first generator can convert forged images into non-forged images, and the second generator can convert non-forged images into forged images, so one image can be turned into several images with different attributes; this expands the training image samples, augments the data set, and reduces the cost of data collection.
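This is the cycle-consistency idea familiar from CycleGAN-style training. A hedged sketch of the round-trip check follows; the L1-based similarity score, the threshold, and the generator names are assumptions:

```python
import torch

def cycle_similarity(negative: torch.Tensor, third: torch.Tensor) -> float:
    """Similarity between a negative sample and its round-trip reconstruction.

    Images are assumed normalized to [0, 1]; a value of 1.0 means the second
    generator perfectly undid the first generator's conversion.
    """
    return 1.0 - torch.abs(negative - third).mean().item()

def generators_need_training(first_gen, second_gen, negative, threshold: float = 0.9) -> bool:
    second = first_gen(negative)    # forged -> non-forged (second image)
    third = second_gen(second)      # non-forged -> forged again (third image)
    return cycle_similarity(negative, third) < threshold  # True: adjust both generators
```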
In one embodiment, the conversion layer of the recognition model further comprises a discriminator; the sample conversion module 1404 is also configured to: acquire a positive sample image from the training image samples, where the attribute of the positive sample image is a non-forged image; identify the second image and the positive sample image through the discriminator and determine the attribute identification results corresponding to the two; and, when the attribute identification results corresponding to the second image and the positive sample image are different, adjust the parameters of the discriminator and continue training until the training stopping condition is met, so as to obtain the trained discriminator.
In this embodiment, a positive sample image whose attribute is a non-forged image is acquired from the training image samples, and the discriminator identifies the second image and the positive sample image, which share the same attribute, to determine their attribute identification results; this tests whether the discrimination performance of the discriminator meets the requirement. When the attribute identification results for the second image and the positive sample image differ, the parameters of the discriminator are adjusted and training continues until the training stopping condition is met, yielding a trained discriminator. During application of the recognition model, the trained discriminator is used to discriminate the attribute of the image to be processed; the attribute of the image to be converted is determined from that of the image to be processed, which in turn determines the generator used to process it, so a converted image with an attribute different from the image to be processed can be obtained accurately.
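A sketch of one reading of this rule, under the assumption that the discriminator outputs a single attribute logit per image: training continues while the discriminator assigns different attributes to the generated second image and the real positive sample:

```python
import torch

def attributes_agree(disc, second_image: torch.Tensor, positive_image: torch.Tensor) -> bool:
    """True when the discriminator assigns the same attribute to both images.

    disc is assumed to return one logit per image; > 0 is read as 'non-forged'.
    A mismatch means the discriminator's parameters should be adjusted further.
    """
    pred_second = disc(second_image) > 0
    pred_positive = disc(positive_image) > 0
    return bool((pred_second == pred_positive).all())
```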
For the specific definition of the living body identification apparatus, reference may be made to the definition of the living body identification method above, which is not repeated here. Each module in the living body identification apparatus may be implemented wholly or partly by software, by hardware, or by a combination of the two. The modules may be embedded in, or independent of, a processor in the computer device in hardware form, or stored in a memory of the computer device in software form, so that the processor can invoke them and execute the operations corresponding to each module.
For the specific definition of the recognition model training apparatus, reference may be made to the definition of the recognition model training method above, which is not repeated here. Each module in the recognition model training apparatus may be implemented wholly or partly by software, by hardware, or by a combination of the two. The modules may be embedded in, or independent of, a processor in the computer device in hardware form, or stored in a memory of the computer device in software form, so that the processor can invoke them and execute the operations corresponding to each module.
In one embodiment, a computer device is provided, which may be a terminal; its internal structure may be as shown in fig. 15. The computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. The processor of the computer device provides computing and control capabilities. The memory of the computer device comprises a non-volatile storage medium and an internal memory; the non-volatile storage medium stores an operating system and a computer program, and the internal memory provides an environment for running them. The communication interface of the computer device is used for wired or wireless communication with an external terminal; the wireless communication can be realized through WIFI, an operator network, NFC (near field communication), or other technologies. The computer program, when executed by the processor, implements a living body identification method or a recognition model training method. The display screen of the computer device can be a liquid crystal display screen or an electronic ink display screen, and the input device can be a touch layer covering the display screen, a key, trackball, or touch pad arranged on the housing of the computer device, or an external keyboard, touch pad, or mouse.
Those skilled in the art will appreciate that the architecture shown in fig. 15 is merely a block diagram of part of the structure related to the solution of the present application and does not limit the computer devices to which this solution applies; a particular computer device may include more or fewer components than shown, combine certain components, or arrange the components differently.
In one embodiment, a computer device is further provided, which includes a memory and a processor, the memory stores a computer program, and the processor implements the steps of the above method embodiments when executing the computer program.
In an embodiment, a computer-readable storage medium is provided, in which a computer program is stored which, when being executed by a processor, carries out the steps of the above-mentioned method embodiments.
It will be understood by those skilled in the art that all or part of the processes of the methods in the above embodiments can be implemented by a computer program instructing the relevant hardware; the program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the above method embodiments. Any reference to memory, storage, database, or other medium used in the embodiments provided herein can include at least one of non-volatile and volatile memory. Non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical storage, or the like. Volatile memory can include random access memory (RAM) or an external cache. By way of illustration and not limitation, RAM can take many forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of these features are described; however, as long as a combination of technical features contains no contradiction, it should be considered within the scope of this specification.
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and modifications without departing from the concept of the present application, and these fall within its scope of protection. Therefore, the protection scope of this patent shall be subject to the appended claims.
Claims (15)
1. A living body identification method, comprising:
acquiring an image to be processed, and converting the image to be processed into a first image through a conversion layer of an identification model, wherein the image to be processed and the first image correspond to different attributes, and the attributes comprise a forged image and a non-forged image;
performing feature extraction on the image to be processed and the first image through a recognition layer of the recognition model to obtain a feature map of the image to be processed and a feature map of the first image;
determining a residual map between the image to be processed and the first image according to the feature map of the image to be processed and the feature map of the first image;
and carrying out living body identification on the image to be processed based on the residual map to obtain the category of the image to be processed, wherein the category is a living body or a non-living body.
2. The method according to claim 1, wherein the performing feature extraction on the image to be processed and the first image through a recognition layer of the recognition model to obtain a feature map of the image to be processed and a feature map of the first image comprises:
dividing the image to be processed and the first image through an identification layer of the identification model to obtain each region of the image to be processed and each region of the first image;
determining characteristic values respectively corresponding to all the areas in the image to be processed, and determining characteristic values respectively corresponding to all the areas in the first image;
determining a feature map of the image to be processed according to the feature values respectively corresponding to the regions in the image to be processed;
and determining the characteristic diagram of the first image according to the characteristic values respectively corresponding to the areas in the first image.
3. The method according to claim 1, wherein the determining a residual map between the image to be processed and the first image according to the feature map of the image to be processed and the feature map of the first image comprises:
determining the feature variation between the feature map of the image to be processed and the feature map of the first image;
and generating a residual map between the image to be processed and the first image according to the feature variation.
4. The method according to claim 3, wherein the determining a feature variation between the feature map of the image to be processed and the feature map of the first image comprises:
determining pixel point pairs between the feature map of the image to be processed and the feature map of the first image;
determining a feature difference value between the two pixel points in each pixel point pair to obtain a feature difference value corresponding to each pixel point pair;
the generating a residual map between the image to be processed and the first image according to the feature variation comprises:
and generating a residual map between the image to be processed and the first image according to the feature difference value corresponding to each pixel point pair.
5. The method according to claim 4, wherein said generating a residual map between the image to be processed and the first image according to the corresponding feature difference value of each pixel point pair comprises:
normalizing the feature difference value corresponding to each pixel point pair to obtain a weight value corresponding to each pixel point pair;
and generating a residual map between the image to be processed and the first image according to the weight value corresponding to each pixel point pair.
6. The method according to any one of claims 1 to 5, wherein the performing living body recognition on the image to be processed based on the residual map to obtain the category of the image to be processed comprises:
acquiring a weight value corresponding to each pixel point in the residual map;
and performing living body identification on the image to be processed based on the weight value corresponding to each pixel point in the residual map to obtain the category of the image to be processed.
7. The method according to claim 6, wherein the performing living body recognition on the image to be processed based on the weight value corresponding to each pixel point in the residual map to obtain the category of the image to be processed comprises:
performing convolution processing on the image to be processed through the identification layer of the identification model to obtain a first feature value corresponding to each pixel point in the image to be processed;
determining a second feature value corresponding to each pixel point in the image to be processed according to the weight value corresponding to each pixel point in the residual map and the first feature value corresponding to each pixel point in the image to be processed;
and determining the category of the image to be processed according to the second feature value corresponding to each pixel point in the image to be processed.
8. The method according to claim 1, wherein the performing living body recognition on the image to be processed based on the residual map to obtain the category of the image to be processed comprises:
extracting features of the residual map and the image to be processed to obtain the features corresponding to the residual map and the features corresponding to the image to be processed;
and performing living body identification on the image to be processed based on the features of the residual map and the features of the image to be processed to obtain the category of the image to be processed.
9. A recognition model training method, comprising:
acquiring a training image sample and a class label corresponding to the training image sample, wherein the class label comprises a living body and a non-living body;
converting the training image sample into a first image sample through a conversion layer of a recognition model, wherein the training image sample and the first image sample correspond to different attributes; the attributes include a forged image and a non-forged image;
performing feature extraction on the training image sample and the first image sample through a recognition layer of the recognition model to obtain a feature map of the training image sample and a feature map of the first image sample;
determining a residual map between the training image sample and the first image sample according to the feature map of the training image sample and the feature map of the first image sample;
performing living body recognition on the training image sample based on the residual map to obtain a recognition result of the training image sample;
and adjusting parameters of the recognition model according to the difference between the recognition result of the training image sample and the corresponding class label and continuing training until a preset condition is met, so as to obtain the trained recognition model.
10. The method of claim 9, wherein the conversion layer of the recognition model comprises a first generator and a second generator; the first generator converts a forged image into a non-forged image, and the second generator converts a non-forged image into a forged image;
the training mode of the generator in the conversion layer of the recognition model comprises the following steps:
acquiring a negative sample image from the training image sample, wherein the attribute of the negative sample image is a forged image;
converting the negative sample image into a second image through the first generator, wherein the negative sample image and the second image have different attributes;
converting the second image into a third image through the second generator, wherein the third image and the negative sample image have the same attribute;
determining similarity of the negative sample image and the third image;
and when the similarity of the negative sample image and the third image is smaller than a similarity threshold value, adjusting the parameters of the first generator and the second generator and continuing training until a training stopping condition is met, so as to obtain the trained first generator and second generator.
11. The method of claim 10, wherein the conversion layer of the recognition model further comprises a discriminator;
the training mode of the discriminator in the conversion layer of the recognition model comprises the following steps:
acquiring a positive sample image from the training image sample, wherein the attribute of the positive sample image is a non-forged image;
identifying the second image and the positive sample image through the discriminator, and determining attribute identification results corresponding to the second image and the positive sample image;
and when the attribute recognition results corresponding to the second image and the positive sample image are different, adjusting the parameters of the discriminator and continuing training until the training stopping condition is met, so as to obtain the trained discriminator.
12. A living body identification device, the device comprising:
the conversion module is used for acquiring an image to be processed, converting the image to be processed into a first image through a conversion layer of an identification model, wherein the image to be processed and the first image correspond to different attributes, and the attributes comprise a forged image and a non-forged image;
the extraction module is used for extracting the features of the image to be processed and the first image through the identification layer of the identification model to obtain a feature map of the image to be processed and a feature map of the first image;
a determining module, configured to determine a residual map between the image to be processed and the first image according to the feature map of the image to be processed and the feature map of the first image;
and the identification module is used for carrying out living body identification on the image to be processed based on the residual map to obtain the category of the image to be processed, wherein the category is a living body or a non-living body.
13. An apparatus for training a recognition model, the apparatus comprising:
the acquisition module is used for acquiring a training image sample and a class label corresponding to the training image sample, wherein the class label comprises a living body and a non-living body;
the sample conversion module is used for converting the training image sample into a first image sample through a conversion layer of a recognition model, and the training image sample and the first image sample correspond to different attributes; the attributes include a forged image and a non-forged image;
the feature extraction module is used for performing feature extraction on the training image sample and the first image sample through a recognition layer of the recognition model to obtain a feature map of the training image sample and a feature map of the first image sample;
a residual map module, configured to determine a residual map between the training image sample and the first image sample according to the feature map of the training image sample and the feature map of the first image sample;
the living body identification module is used for carrying out living body identification on the training image sample based on the residual map to obtain an identification result of the training image sample;
and the adjusting module is used for adjusting the parameters of the recognition model according to the difference between the recognition result of the training image sample and the corresponding class label and continuing training until a preset condition is met, so that the trained recognition model is obtained.
14. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method of any one of claims 1 to 11 when executing the computer program.
15. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 11.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202010107870.4A CN111339897B (en) | 2020-02-21 | 2020-02-21 | Living body identification method, living body identification device, computer device, and storage medium |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN111339897A true CN111339897A (en) | 2020-06-26 |
| CN111339897B CN111339897B (en) | 2023-07-21 |
Family
ID=71185452
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202010107870.4A Active CN111339897B (en) | 2020-02-21 | 2020-02-21 | Living body identification method, living body identification device, computer device, and storage medium |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN111339897B (en) |
Patent Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20180365794A1 (en) * | 2017-06-15 | 2018-12-20 | Samsung Electronics Co., Ltd. | Image processing apparatus and method using multi-channel feature map |
| CN108805828A (en) * | 2018-05-22 | 2018-11-13 | 腾讯科技(深圳)有限公司 | Image processing method, device, computer equipment and storage medium |
Cited By (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN111882525A (en) * | 2020-07-01 | 2020-11-03 | 上海品览数据科技有限公司 | Image reproduction detection method based on LBP watermark characteristics and fine-grained identification |
| CN111680672A (en) * | 2020-08-14 | 2020-09-18 | 腾讯科技(深圳)有限公司 | Face living body detection method, system, device, computer equipment and storage medium |
| CN112115912A (en) * | 2020-09-28 | 2020-12-22 | 腾讯科技(深圳)有限公司 | Image recognition method and device, computer equipment and storage medium |
| CN112115912B (en) * | 2020-09-28 | 2023-11-28 | 腾讯科技(深圳)有限公司 | Image recognition method, device, computer equipment and storage medium |
| CN112836625A (en) * | 2021-01-29 | 2021-05-25 | 汉王科技股份有限公司 | Face living body detection method and device and electronic equipment |
| CN113569806A (en) * | 2021-08-18 | 2021-10-29 | 浙江大华技术股份有限公司 | Face recognition method and device |
| CN114550259A (en) * | 2022-02-25 | 2022-05-27 | 展讯通信(天津)有限公司 | Face living body detection method, device and equipment |
Also Published As
| Publication number | Publication date |
|---|---|
| CN111339897B (en) | 2023-07-21 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN111339897B (en) | Living body identification method, living body identification device, computer device, and storage medium | |
| Zhang et al. | Unsupervised learning-based framework for deepfake video detection | |
| US10248954B2 (en) | Method and system for verifying user identity using card features | |
| Rattani et al. | A survey of mobile face biometrics | |
| Deb et al. | Look locally infer globally: A generalizable face anti-spoofing approach | |
| CN110222573B (en) | Face recognition method, device, computer equipment and storage medium | |
| US9189686B2 (en) | Apparatus and method for iris image analysis | |
| CN109886223B (en) | Face recognition method, bottom library input method and device and electronic equipment | |
| CN106778613A (en) | An identity verification method and device based on face segmentation area matching | |
| CN113642639B (en) | Living body detection method, living body detection device, living body detection equipment and storage medium | |
| Galdi et al. | FIRE: fast iris recognition on mobile phones by combining colour and texture features | |
| CN111414858A (en) | Face recognition method, target image determination method, device and electronic system | |
| Tapia et al. | Selfie periocular verification using an efficient super-resolution approach | |
| CN111582155A (en) | Living body detection method, living body detection device, computer equipment and storage medium | |
| CN106599841A (en) | Full face matching-based identity verifying method and device | |
| CN106650657A (en) | Authentication method and device based on full face binary matching | |
| Vijayalakshmi et al. | Finger and palm print based multibiometric authentication system with GUI interface | |
| Jagadeesh et al. | DBC based Face Recognition using DWT | |
| Amelia | Age estimation on human face image using support vector regression and texture-based features | |
| HK40025241A (en) | Living body recognition method and apparatus, computer device and storage medium | |
| CN113190858B (en) | Image processing method, system, medium and device based on privacy protection | |
| Shrestha et al. | Real-time finger-video analysis for accurate identity verification in mobile devices | |
| CN114840830B (en) | Authentication method, device, computer equipment and storage medium | |
| Gundgurti et al. | Latent Fingerprint Enhancement and Segmentation Through Advanced Deep-Learning Techniques | |
| HK40025241B (en) | Living body recognition method and apparatus, computer device and storage medium |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | REG | Reference to a national code | Ref country code: HK; Ref legal event code: DE; Ref document number: 40025241; Country of ref document: HK |
| | SE01 | Entry into force of request for substantive examination | |
| | GR01 | Patent grant | |