
CN116403079A - Adversarial sample detection method, device, computer equipment and storage medium - Google Patents


Info

Publication number: CN116403079A
Application number: CN202310371235.0A
Authority: CN (China)
Prior art keywords: sample image, adversarial, perturbation, detection
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Inventors: 林晓锐, 张锦元, 刘唱, 张磊
Current and Original Assignee: Industrial and Commercial Bank of China Ltd (ICBC) (the listed assignees may be inaccurate; Google has not performed a legal analysis)
Application filed by Industrial and Commercial Bank of China Ltd (ICBC)
Priority to CN202310371235.0A
Publication of CN116403079A

Classifications

    • G06V10/776 — Image or video recognition or understanding using pattern recognition or machine learning; processing features in feature spaces; validation and performance evaluation
    • G06V10/764 — Image or video recognition or understanding using classification, e.g. of video objects
    • G06V10/774 — Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
    • G06V40/172 — Recognition of human faces; classification, e.g. identification
    • Y02T10/40 — Engine management systems (climate change mitigation technologies related to transportation)

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Databases & Information Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The present application relates to an adversarial sample detection method, apparatus, computer device, storage medium, and computer program product in the field of artificial intelligence. The method comprises the following steps: for an original face sample image set, obtaining simulated adversarial perturbations of at least two different shape types by mimicking universal adversarial perturbations, where a universal adversarial perturbation is one generated by at least two gradient-based universal adversarial attack methods; adding the simulated adversarial perturbation of each shape type to original face sample images in the original face sample image set to obtain a simulated adversarial sample image set; and generating a target adversarial detection training set from the simulated adversarial sample image set and the original face sample image set. The target adversarial detection training set is used to train a face adversarial detection model, which detects whether a face image under test is a face adversarial sample. The method improves the detection accuracy of face adversarial samples.

Description

Adversarial sample detection method, apparatus, computer device, and storage medium
Technical Field
The present application relates to the field of artificial intelligence, and in particular to an adversarial sample detection method, apparatus, computer device, storage medium, and computer program product.
Background
In recent years, with the rapid growth in graphics processor computing power, convolutional neural networks (Convolutional Neural Networks, CNNs) have made great progress in a wide range of tasks, including image classification, pedestrian re-identification, and face recognition.
However, CNNs are vulnerable to adversarial samples: inputs carrying perturbations that are often imperceptible to humans but cause serious errors in algorithm output. Face adversarial samples, as one class of adversarial samples, seriously threaten the security of face recognition systems.
Existing face adversarial sample detection methods are usually designed for specific attacks or specific tasks; after a new attack appears, new face adversarial samples must be generated to retrain the detection model. The detection model therefore generalizes poorly and cannot accurately identify face adversarial samples produced by different attack types.
Consequently, the related art suffers from low detection accuracy for face adversarial samples.
Disclosure of Invention
In view of the foregoing, it is desirable to provide an adversarial sample detection method, apparatus, computer device, computer-readable storage medium, and computer program product that can improve the detection accuracy of face adversarial samples.
In a first aspect, the present application provides an adversarial sample detection method. The method comprises the following steps:
for an original face sample image set, obtaining simulated adversarial perturbations of at least two different shape types by mimicking universal adversarial perturbations, where a universal adversarial perturbation is one generated by at least two gradient-based universal adversarial attack methods;
adding the simulated adversarial perturbation of each shape type to original face sample images in the original face sample image set to obtain a simulated adversarial sample image set;
generating a target adversarial detection training set from the simulated adversarial sample image set and the original face sample image set, the target adversarial detection training set being used to train a face adversarial detection model, which in turn detects whether a face image under test is a face adversarial sample.
In one embodiment, obtaining, for the original face sample image set, the simulated adversarial perturbations of at least two different shape types by mimicking universal adversarial perturbations comprises:
taking a preset number of original face sample images as first original sample images, and taking the remaining images in the original face sample image set as second original sample images;
for the first original sample images, obtaining a simulated adversarial perturbation of a first shape type by mimicking the universal adversarial perturbation;
for the second original sample images, obtaining a simulated adversarial perturbation of a second shape type by mimicking the universal adversarial perturbation;
determining the simulated adversarial perturbations of the at least two different shape types from the simulated adversarial perturbations of the first and second shape types.
In one embodiment, obtaining, for the first original sample image, a simulated adversarial perturbation of a first shape type by mimicking the universal adversarial perturbation comprises:
acquiring a first all-zero matrix whose size matches the image size of the first original sample image;
for each pixel position of the first original sample image and its corresponding position in the first all-zero matrix, adding a random disturbance value at that position to obtain a first target matrix;
taking the first target matrix as a point-shaped simulated adversarial perturbation, i.e. the simulated adversarial perturbation of the first shape type.
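The point-shaped perturbation above can be sketched in a few lines. This is a minimal NumPy illustration, not the patent's implementation; the function name and the uniform range `eps` are assumptions made for demonstration.

```python
import numpy as np

def point_perturbation(image_shape, eps=8.0 / 255.0, rng=None):
    """Simulate a point-shaped (per-pixel) adversarial perturbation.

    Starts from an all-zero matrix the size of the image and adds an
    independent random disturbance value at every pixel position,
    mimicking the dense noise produced by gradient-based attacks.
    """
    rng = np.random.default_rng(rng)
    perturbation = np.zeros(image_shape, dtype=np.float32)  # first all-zero matrix
    # A random value per pixel stands in for the back-propagated gradient.
    perturbation += rng.uniform(-eps, eps, size=image_shape).astype(np.float32)
    return perturbation
```

The random values replace the classification-loss gradients that a real gradient-based attack would back-propagate, which is exactly the substitution the embodiment describes.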
In one embodiment, obtaining, for the second original sample image, a simulated adversarial perturbation of a second shape type by mimicking the universal adversarial perturbation comprises:
acquiring a second all-zero matrix whose size matches the image size of the second original sample image;
traversing, in the second all-zero matrix, the mask regions corresponding to the pixels of the second original sample image and adding random disturbance values within each mask region to obtain a second target matrix, where a mask region is a pixel region of preset size containing the corresponding pixel;
taking the second target matrix as a block-shaped simulated adversarial perturbation, i.e. the simulated adversarial perturbation of the second shape type.
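A block-shaped perturbation can be sketched as below. This is a simplified NumPy illustration under an assumption: the mask regions are tiled without overlap rather than centered on every pixel as the embodiment allows, and `block` and `eps` are assumed hyperparameters.

```python
import numpy as np

def block_perturbation(image_shape, block=4, eps=8.0 / 255.0, rng=None):
    """Simulate a block-shaped adversarial perturbation.

    Traverses the all-zero matrix in mask regions of a preset size and
    fills each region with one shared random disturbance value, giving
    the perturbation a coarse, blocky structure.
    """
    rng = np.random.default_rng(rng)
    h, w, c = image_shape
    perturbation = np.zeros(image_shape, dtype=np.float32)
    for y in range(0, h, block):
        for x in range(0, w, block):
            # One random value per mask region produces the blocky pattern.
            perturbation[y:y + block, x:x + block, :] = rng.uniform(-eps, eps)
    return perturbation
```

Contrasting this with the point-shaped case: there every pixel gets an independent value, here neighbouring pixels inside a mask region share one.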
In one embodiment, adding the simulated adversarial perturbation of each shape type to the original face sample images in the original face sample image set to obtain the simulated adversarial sample image set comprises:
adding the simulated adversarial perturbation of the first shape type to the first original sample images to obtain first simulated sample images;
adding the simulated adversarial perturbation of the second shape type to the second original sample images to obtain second simulated sample images;
determining the simulated adversarial sample image set from the first and second simulated sample images.
In one embodiment, generating the target adversarial detection training set from the simulated adversarial sample image set and the original face sample image set comprises:
obtaining an adversarial detection training set from the simulated adversarial sample image set and the original face sample image set, the adversarial detection training set comprising a plurality of adversarial detection sample images;
locating the classification-sensitive regions of the adversarial detection sample images to obtain the target adversarial detection training set, where a classification-sensitive region is a region of an adversarial detection sample image whose influence weight on classification satisfies a preset condition.
In one embodiment, locating the classification-sensitive region of the adversarial detection sample image to obtain the target adversarial detection training set comprises:
determining a random probability value corresponding to the adversarial detection sample image;
when the random probability value exceeds a preset probability threshold, locating the classification-sensitive region of the adversarial detection sample image to obtain an optimized adversarial detection sample image;
obtaining the target adversarial detection training set from the optimized adversarial detection sample images.
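The probability gate above is a simple stochastic switch. A minimal sketch, where the callable and threshold default are illustrative assumptions:

```python
import numpy as np

def maybe_optimize(image, optimize_fn, threshold=0.5, rng=None):
    """Apply the sensitive-region optimization only when a random draw
    exceeds a preset probability threshold, per the embodiment.

    `optimize_fn` is any callable returning the optimized image.
    """
    rng = np.random.default_rng(rng)
    if rng.random() > threshold:
        return optimize_fn(image)
    return image
```

Gating the optimization randomly means the training set mixes optimized and unoptimized samples, which keeps some original perturbation patterns visible to the model.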
In one embodiment, locating the classification-sensitive region of the adversarial detection sample image comprises:
inputting the adversarial detection sample image into the face adversarial detection model to be trained and performing a forgery attention map computation on it, obtaining an attention mask map corresponding to the adversarial detection sample image, where the attention mask map represents the classification sensitivity of the face adversarial detection model to each pixel in the adversarial detection sample image;
screening out target pixels from the adversarial detection sample image according to the classification sensitivity of each pixel, thereby obtaining the classification-sensitive region; the classification sensitivity of a target pixel exceeds a preset sensitivity threshold.
In one embodiment, inputting the adversarial detection sample image into the face adversarial detection model to be trained and performing the forgery attention map computation on it to obtain the corresponding attention mask map comprises:
inputting the adversarial detection sample image into the face adversarial detection model to be trained to obtain an adversarial probability value and a non-adversarial probability value for the image;
determining a disturbance gradient value for the adversarial detection sample image from the difference between the adversarial and non-adversarial probability values, the disturbance gradient value being obtained by a gradient operation on the absolute value of that difference;
for the disturbance gradient values of the color channels of each pixel, taking the maximum disturbance gradient value as that pixel's mask value;
obtaining the attention mask map from the mask values of all pixels of the adversarial detection sample image.
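The channel reduction in the last two steps can be sketched as follows. In a real pipeline the gradient tensor would come from automatic differentiation of |p_adv − p_clean| with respect to the input; here a precomputed array stands in, so this sketch shows only the per-pixel max over channels.

```python
import numpy as np

def attention_mask(perturbation_gradients):
    """Collapse per-channel disturbance gradients into an attention mask.

    `perturbation_gradients` has shape (H, W, C) and holds, for each
    colour channel of each pixel, the gradient of the absolute
    probability difference with respect to the input. The mask value of
    a pixel is the maximum gradient magnitude over its channels.
    """
    return np.abs(perturbation_gradients).max(axis=-1)
```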
In one embodiment, obtaining the optimized adversarial detection sample image comprises:
determining, from the position of the classification-sensitive region in the adversarial detection sample image, a corresponding region to be covered, obtained by expanding the classification-sensitive region from a random starting position and with a random size;
covering the region to be covered with a random-number matrix whose size matches that of the region, yielding an adversarial detection sample image with the suspected forgery removed;
taking the image with the suspected forgery removed as the optimized adversarial detection sample image.
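The covering step can be sketched as below. A minimal NumPy illustration under assumptions: the region to be covered is passed in explicitly (the random expansion is done by the caller), and pixel values are floats in [0, 1].

```python
import numpy as np

def occlude_region(image, top, left, height, width, rng=None):
    """Cover a region to be masked with a random-number matrix.

    The region is assumed to already include the random expansion of
    the classification-sensitive area; replacement values stay in the
    valid [0, 1] pixel range.
    """
    rng = np.random.default_rng(rng)
    out = image.copy()
    region = out[top:top + height, left:left + width]
    out[top:top + height, left:left + width] = rng.uniform(0.0, 1.0, size=region.shape)
    return out
```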
In a second aspect, the present application also provides an adversarial sample detection device. The device comprises:
a simulation module for obtaining, for an original face sample image set, simulated adversarial perturbations of at least two different shape types by mimicking universal adversarial perturbations, where a universal adversarial perturbation is one generated by at least two gradient-based universal adversarial attack methods;
an adding module for adding the simulated adversarial perturbation of each shape type to original face sample images in the original face sample image set to obtain a simulated adversarial sample image set;
a generation module for generating a target adversarial detection training set from the simulated adversarial sample image set and the original face sample image set, the target adversarial detection training set being used to train a face adversarial detection model, which detects whether a face image under test is a face adversarial sample.
In a third aspect, the present application also provides a computer device comprising a memory storing a computer program and a processor which, when executing the computer program, implements the following steps:
for an original face sample image set, obtaining simulated adversarial perturbations of at least two different shape types by mimicking universal adversarial perturbations, where a universal adversarial perturbation is one generated by at least two gradient-based universal adversarial attack methods;
adding the simulated adversarial perturbation of each shape type to original face sample images in the original face sample image set to obtain a simulated adversarial sample image set;
generating a target adversarial detection training set from the simulated adversarial sample image set and the original face sample image set, the target adversarial detection training set being used to train a face adversarial detection model, which detects whether a face image under test is a face adversarial sample.
In a fourth aspect, the present application also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the same steps.
In a fifth aspect, the present application also provides a computer program product comprising a computer program which, when executed by a processor, implements the same steps.
The above adversarial sample detection method, apparatus, computer device, storage medium, and computer program product obtain simulated adversarial perturbations of at least two different shape types by mimicking universal adversarial perturbations for an original face sample image set, where a universal adversarial perturbation is one generated by at least two gradient-based universal adversarial attack methods; add the simulated adversarial perturbation of each shape type to original face sample images to obtain a simulated adversarial sample image set; and generate a target adversarial detection training set from the simulated adversarial sample image set and the original face sample image set, the training set being used to train a face adversarial detection model that detects whether a face image under test is a face adversarial sample.
Although the various gradient-based universal adversarial attack methods generate different universal adversarial perturbations, these perturbations share fixed patterns and common shapes. By mimicking those fixed patterns on the original face sample image set, at least two different shape types of simulated adversarial perturbations are obtained, and adding them to the original face sample images yields a simulated adversarial sample image set that covers a variety of universal adversarial perturbations. The face adversarial detection model can therefore be trained using only the original face sample image set and the simulated adversarial sample image set, without real face adversarial samples produced by gradient-based attacks. The trained model is not limited to detecting samples from one specific attack method but can detect face adversarial samples produced by a variety of gradient-based universal attack methods, which effectively improves the performance and generalization of the model and thus the detection accuracy for face adversarial samples.
Drawings
FIG. 1 is a flow chart of an adversarial sample detection method in one embodiment;
FIG. 2 is a schematic diagram of a point-shaped simulated adversarial perturbation and a block-shaped simulated adversarial perturbation in one embodiment;
FIG. 3 is a flow chart of an adversarial sample detection method in another embodiment;
FIG. 4 is an overall block diagram of an adversarial sample detection method in one embodiment;
FIG. 5 is a block diagram of an adversarial sample detection device in one embodiment;
FIG. 6 is an internal structural diagram of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the foregoing figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the disclosure described herein may be capable of operation in sequences other than those illustrated or described herein. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the accompanying claims.
In one embodiment, as shown in FIG. 1, an adversarial sample detection method is provided. The method is described here as applied to a server, which may be a standalone server or a cluster of servers. It will be appreciated that the method may also be applied to a terminal, or to a system comprising a terminal and a server and implemented through their interaction. In this embodiment, the method includes the following steps:
step S110, aiming at an original face sample image set, obtaining simulation opposite disturbance of at least two different shape types by simulating the general opposite disturbance.
Wherein the generic challenge disturbance is a challenge disturbance generated by at least two gradient-based generic challenge methods.
Among them, gradient-based general challenge-attack methods may include, but are not limited to, general challenge-attack methods such as FGSM (Fast Gradient Sign Method ), BIM (Basic Iterative Method, basic iteration method), PGD (Projected Gradient Descent, projection gradient descent method), RFGSM (Random Fast Gradient Sign Method, random single-step attack method), MIFGSM (Momentum Iterative Fast Gradient Sign Method ), and the like.
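To make the attack family concrete, FGSM, the simplest of the methods listed, takes one step in the sign of the loss gradient. A minimal sketch, where the gradient is passed in as a plain array rather than back-propagated through a real model, and `eps` is an assumed budget:

```python
import numpy as np

def fgsm(image, gradient, eps=8.0 / 255.0):
    """Fast Gradient Sign Method: one step in the sign of the
    classification-loss gradient, clipped to the valid pixel range.

    `gradient` would normally come from back-propagation through the
    attacked model; any array of the image's shape stands in here.
    """
    adv = image + eps * np.sign(gradient)
    return np.clip(adv, 0.0, 1.0)
```

BIM and PGD iterate this step; MIFGSM adds momentum to the gradient, and RFGSM adds a random initial step. All of them produce perturbations whose shape the patent's simulated perturbations are designed to mimic.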
In a specific implementation, the server may obtain a preprocessed original face sample image set and, for the original face sample images in it, obtain simulated adversarial perturbations of at least two different shape types by mimicking the perturbations generated by at least two gradient-based universal adversarial attack methods.
In practice, the server may use a random gradient in place of the classification loss gradient computed by back-propagation, and generate the simulated adversarial perturbations of at least two shape types from this random gradient.
Preprocessing includes size unification (e.g., to 112 x 112), random cropping, random horizontal flipping, center cropping, and normalization. Preprocessing the original face sample images effectively enlarges the data set and improves the generalization of the face adversarial detection model to the mirrored and tilted images found in real scenes.
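Two of the listed preprocessing operations can be sketched as below. This is an illustrative NumPy stand-in, not the patent's pipeline: size unification and cropping are omitted, and the mean/std values are assumptions.

```python
import numpy as np

def preprocess(image, flip_prob=0.5, mean=0.5, std=0.5, rng=None):
    """Minimal preprocessing stand-in: random horizontal flip followed
    by normalisation of float pixel values.
    """
    rng = np.random.default_rng(rng)
    out = np.array(image, dtype=np.float32)
    if rng.random() < flip_prob:
        out = out[:, ::-1]  # mirror the image left-right
    return (out - mean) / std
```

In practice these operations would typically be handled by an image-augmentation library; the point here is only the order of flip-then-normalize described in the text.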
Step S120: add the simulated adversarial perturbation of each shape type to the original face sample images in the original face sample image set to obtain a simulated adversarial sample image set.
In a specific implementation, the server may add the simulated adversarial perturbation of each shape type to the designated original face sample images, take each perturbed image as a simulated sample image, and assemble the simulated sample images into the simulated adversarial sample image set.
Step S130: generate a target adversarial detection training set from the simulated adversarial sample image set and the original face sample image set.
The target adversarial detection training set is used to train a face adversarial detection model.
The face adversarial detection model detects whether a face image under test is a face adversarial sample.
In a specific implementation, the server may generate the target adversarial detection training set from the simulated adversarial sample image set and the original face sample image set and use it to train the face adversarial detection model to be trained; the trained model can then detect whether a face image under test is a face adversarial sample.
In this adversarial sample detection method, simulated adversarial perturbations of at least two different shape types are obtained by mimicking universal adversarial perturbations for an original face sample image set, where a universal adversarial perturbation is one generated by at least two gradient-based universal adversarial attack methods; the simulated adversarial perturbation of each shape type is added to original face sample images to obtain a simulated adversarial sample image set; and a target adversarial detection training set is generated from the simulated adversarial sample image set and the original face sample image set, used to train a face adversarial detection model that detects whether a face image under test is a face adversarial sample.
Although the various gradient-based universal adversarial attack methods generate different universal adversarial perturbations, these perturbations share fixed patterns and common shapes. By mimicking those fixed patterns, the simulated adversarial sample image set covers a variety of universal adversarial perturbations, so the face adversarial detection model can be trained using only the original face sample image set and the simulated adversarial sample image set, without real face adversarial samples produced by gradient-based attacks. The trained model can detect face adversarial samples produced by a variety of gradient-based universal attack methods rather than one specific attack, effectively improving its performance, generalization, and detection accuracy.
In one embodiment, for an original face sample image set, obtaining simulated countering perturbations of at least two different shape types by simulating universal countering perturbations, comprising: taking a preset number of original face sample images as first original sample images, and centralizing the original face sample images into images except the first original sample images to serve as second original sample images; aiming at a first original sample image, obtaining a simulated disturbance countermeasure of a first shape type by simulating the universal disturbance countermeasure; aiming at a second original sample image, obtaining a simulated disturbance-countering effect of a second shape type by simulating the universal disturbance-countering effect; determining simulated countering disturbances of at least two different shape types based on the simulated countering disturbances of the first shape type and the simulated countering disturbances of the second shape type.
In a specific implementation, in the process of obtaining simulated adversarial perturbations of at least two different shape types for the original face sample image set by simulating the universal adversarial perturbations, the server may take a preset number of original face sample images as the first original sample images, and take the images in the original face sample image set other than the first original sample images as the second original sample images.
In practical application, the server may take 50% of the original face sample images in the original face sample image set as the first original sample images and the remaining 50% as the second original sample images. Alternatively, the original face sample images may be divided into first and second original sample images in other proportions, which is not specifically limited here.
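The 50/50 split described above can be sketched as follows (an illustrative NumPy sketch, not part of the patent; the helper name and the shuffled split are assumptions, as the embodiment only fixes the default proportion):

```python
import numpy as np

def split_sample_set(images, first_ratio=0.5, seed=0):
    """Divide the original face sample image set into first and second
    original sample images at a configurable ratio (0.5 by default)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(images))            # shuffle before splitting
    n_first = int(len(images) * first_ratio)
    first = [images[i] for i in idx[:n_first]]
    second = [images[i] for i in idx[n_first:]]
    return first, second
```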
Then, for the first original sample images, the server may simulate the universal adversarial perturbation by using a random gradient in place of the classification loss gradient obtained through back propagation, and compute the simulated adversarial perturbation of the first shape type from that random gradient. Likewise, for the second original sample images, the server may simulate the universal adversarial perturbation by replacing the back-propagated classification loss gradient with a random gradient, and compute the simulated adversarial perturbation of the second shape type from it.
In this manner, the server may determine the simulated adversarial perturbations of at least two different shape types from the simulated adversarial perturbations of the first shape type and the simulated adversarial perturbations of the second shape type.
According to the technical solution of this embodiment, a preset number of original face sample images are taken as first original sample images, and the images in the original face sample image set other than the first original sample images are taken as second original sample images; for the first original sample images, simulated adversarial perturbations of a first shape type are obtained by simulating the universal adversarial perturbations; for the second original sample images, simulated adversarial perturbations of a second shape type are obtained by simulating the universal adversarial perturbations; and simulated adversarial perturbations of at least two different shape types are determined from the two. In this way, the original face sample image set is divided into first and second original sample images, the fixed patterns of the universal adversarial perturbations are simulated separately for each part, and simulated adversarial perturbations of two shape types are generated. Adding each simulated adversarial perturbation to its corresponding original face sample image yields the corresponding simulated sample images, and the simulated adversarial sample image set built from these simulated sample images can cover the various universal adversarial perturbations, which improves the performance and generalization capability of the face adversarial detection model so that it accurately identifies face adversarial samples.
In one embodiment, obtaining the simulated adversarial perturbation of the first shape type for a first original sample image by simulating the universal adversarial perturbations includes: acquiring a first all-zero matrix whose matrix size matches the image size of the first original sample image; for each pixel in the first original sample image, adding a random perturbation value at the corresponding pixel position in the first all-zero matrix to obtain a first target matrix; and taking the first target matrix as a point-shaped simulated adversarial perturbation, i.e., the simulated adversarial perturbation of the first shape type.
In a specific implementation, in the process of obtaining the simulated adversarial perturbation of the first shape type for a first original sample image by simulating the universal adversarial perturbations, the server may acquire a preset all-zero matrix as the first all-zero matrix, since many universal adversarial perturbations share a fixed pattern whose corresponding perturbation shape is point-like; the matrix size of the first all-zero matrix matches the image size of the first original sample image. In practical applications, the matrix size of the first all-zero matrix may equal the image matrix size of the first original sample image. It should be noted that all original face sample images in the original face sample image set have the same image size, i.e., the first and second original sample images have the same image size.
Then, for each pixel in the first original sample image, the server may add a random perturbation value at the corresponding pixel position in the first all-zero matrix to obtain a first target matrix, and take the first target matrix as the point-shaped simulated adversarial perturbation, i.e., the simulated adversarial perturbation of the first shape type. The random perturbation value may be a single-step perturbation value.
In practical application, consider an original face sample image X_real in the original face sample image set S_normal, with width W, height H and C color channels. When X_real is a first original sample image, the server uses a first all-zero matrix M_point to record the point-shaped simulated adversarial perturbation. To fully simulate adding a point-shaped perturbation at every pixel of the first original sample image, the server may traverse, in the first all-zero matrix, the positions corresponding to all pixels of X_real; for the pixel at position [h, w] in X_real, the perturbation to add at the corresponding position in the first all-zero matrix is computed from a random gradient value rs and a single-step perturbation amplitude α, namely M_point[h, w] = M_point[h, w] + α × rs, where rs is a random number taking the value 1 or −1, and α is the single-step perturbation amplitude set by the algorithm, defaulting to 1 but adjustable according to the actual situation.
In this way, the single-step perturbation calculation is repeated until the positions corresponding to all pixels of the first original sample image have been traversed in the first all-zero matrix; after the traversal, the point-shaped simulated adversarial perturbation M_point′ is obtained as the simulated adversarial perturbation of the first shape type.
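The point-shaped perturbation construction described above can be sketched as follows (an illustrative NumPy sketch rather than the patent's implementation; the function name and the single-channel simplification are assumptions):

```python
import numpy as np

def point_perturbation(h_img, w_img, alpha=1.0, seed=0):
    """Build M_point: start from an all-zero matrix matching the image
    size, then add a single-step value alpha * rs (rs randomly 1 or -1,
    standing in for a back-propagated gradient) at every pixel."""
    rng = np.random.default_rng(seed)
    m_point = np.zeros((h_img, w_img), dtype=np.float32)
    for h in range(h_img):
        for w in range(w_img):
            rs = rng.choice([1.0, -1.0])  # random gradient sign
            m_point[h, w] += alpha * rs
    return m_point
```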
According to the technical solution of this embodiment, since many universal adversarial perturbations share fixed patterns and the perturbation shape corresponding to one of those fixed patterns is point-like, a first all-zero matrix is acquired, whose matrix size matches the image size of the first original sample image; for each pixel in the first original sample image, a random perturbation value is added at the corresponding pixel position in the first all-zero matrix to obtain a first target matrix; and the first target matrix is taken as the point-shaped simulated adversarial perturbation, i.e., the simulated adversarial perturbation of the first shape type. In this way, a point-shaped perturbation is generated for every pixel of the first original sample image, producing a first target matrix whose size matches the image size and which forms the point-shaped simulated adversarial perturbation of that image, so that universal adversarial perturbations with a point-like shape can be fully simulated.
In one embodiment, obtaining the simulated adversarial perturbation of the second shape type for a second original sample image by simulating the universal adversarial perturbations includes: acquiring a second all-zero matrix whose matrix size matches the image size of the second original sample image; traversing, in the second all-zero matrix, the mask regions corresponding to each pixel of the second original sample image and adding a random perturbation value within each mask region to obtain a second target matrix, where a mask region is a pixel region of preset size containing the corresponding pixel; and taking the second target matrix as a block-shaped simulated adversarial perturbation, i.e., the simulated adversarial perturbation of the second shape type.
In a specific implementation, in the process of obtaining the simulated adversarial perturbation of the second shape type for a second original sample image by simulating the universal adversarial perturbations, since many universal adversarial perturbations share fixed patterns and the perturbation shape corresponding to another fixed pattern is block-like, in order to simulate the universal adversarial perturbations with a block-like shape the server may acquire a preset all-zero matrix as the second all-zero matrix, whose matrix size matches the image size of the second original sample image.
Then, for the second original sample image, the server may traverse, in the second all-zero matrix, the mask regions corresponding to each pixel of the second original sample image, and add a random perturbation value within each mask region to obtain a second target matrix. A mask region is a pixel region of preset size that contains the corresponding pixel; the mask region of each pixel therefore partially overlaps the mask regions of its adjacent pixels. The random perturbation value may be a single-step perturbation value.
Specifically, when the original face sample image X_real is a second original sample image, the server uses a second all-zero matrix M_block to record the block-shaped simulated adversarial perturbation. To fully simulate adding a block-shaped perturbation for every pixel of the second original sample image, the server traverses, in the second all-zero matrix, the mask regions corresponding to all pixels of X_real. For the pixel at position [h, w], the mask region may be determined from a preset mask region length sl (the preset size) by computing its upper-left and lower-right corner coordinates:

upper-left corner ordinate: top = max(h − sl, 0);

upper-left corner abscissa: lef = max(w − sl, 0);

lower-right corner abscissa: rig = min(w + sl, W);

lower-right corner ordinate: bot = min(h + sl, H).

This yields the mask region [top:bot, lef:rig] corresponding to the pixel. The mask region contains the pixel itself, i.e., the area of the mask region corresponding to the pixel at [h, w] is larger than that single pixel. For the mask region of the pixel at [h, w], the perturbation to be added over the whole region is computed from a new random gradient value rs and the single-step perturbation amplitude α: M_block[top:bot, lef:rig] = M_block[top:bot, lef:rig] + α × rs.
In this way, the single-step perturbation calculation is repeated until the mask regions corresponding to all pixels of the second original sample image have been traversed in the second all-zero matrix; after the traversal, the block-shaped simulated adversarial perturbation M_block′ is obtained as the simulated adversarial perturbation of the second shape type.
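The block-shaped construction can be sketched similarly (an illustrative single-channel NumPy sketch, with the corner formulas clamped to the image bounds as described above; names are assumptions):

```python
import numpy as np

def block_perturbation(h_img, w_img, sl=1, alpha=1.0, seed=0):
    """Build M_block: for every pixel [h, w], compute its mask region
    [top:bot, lef:rig] from the mask region length sl, then add
    alpha * rs over the whole region."""
    rng = np.random.default_rng(seed)
    m_block = np.zeros((h_img, w_img), dtype=np.float32)
    for h in range(h_img):
        for w in range(w_img):
            top, lef = max(h - sl, 0), max(w - sl, 0)
            bot, rig = min(h + sl, h_img), min(w + sl, w_img)
            rs = rng.choice([1.0, -1.0])  # new random gradient sign per region
            m_block[top:bot, lef:rig] += alpha * rs
    return m_block
```

Because the mask regions of adjacent pixels overlap, a given pixel accumulates contributions from several regions (at most four when sl = 1).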
For ease of understanding by those skilled in the art, fig. 2 provides a schematic diagram of the point-shaped and block-shaped simulated adversarial perturbations generated for an original face sample image.
According to the technical solution of this embodiment, since many universal adversarial perturbations share fixed patterns and the perturbation corresponding to another fixed pattern is block-shaped, a second all-zero matrix is acquired, whose matrix size matches the image size of the second original sample image; the mask regions corresponding to each pixel of the second original sample image are traversed in the second all-zero matrix, and a random perturbation value is added within each mask region to obtain a second target matrix, where a mask region is a pixel region of preset size containing the corresponding pixel; and the second target matrix is taken as the block-shaped simulated adversarial perturbation, i.e., the simulated adversarial perturbation of the second shape type. In this way, a block-shaped perturbation is generated for every pixel of the second original sample image, producing a second target matrix whose size matches the image size and which forms the block-shaped simulated adversarial perturbation of that image, so that universal adversarial perturbations with a block-like shape can be fully simulated.
In one embodiment, adding the simulated adversarial perturbations of each shape type to the original face sample images in the original face sample image set to obtain the simulated adversarial sample image set includes: adding the simulated adversarial perturbation of the first shape type to the first original sample image to obtain a first simulated sample image; adding the simulated adversarial perturbation of the second shape type to the second original sample image to obtain a second simulated sample image; and determining the simulated adversarial sample image set from the first simulated sample image and the second simulated sample image.
In a specific implementation, in the process of adding the simulated adversarial perturbations of each shape type to the original face sample images in the original face sample image set to obtain the simulated adversarial sample image set, the server may add the simulated adversarial perturbation of the first shape type to the first original sample image in a preset manner to obtain a first simulated sample image, and add the simulated adversarial perturbation of the second shape type to the second original sample image to obtain a second simulated sample image.
Specifically, when the simulated adversarial perturbation of the first shape type is the point-shaped perturbation M_point′ and the simulated adversarial perturbation of the second shape type is the block-shaped perturbation M_block′, the server may obtain the two simulated sample images through the following two perturbation-addition operations:

first simulated sample image, with the point-shaped simulated adversarial perturbation added: X_point = X_real + M_point′;

second simulated sample image, with the block-shaped simulated adversarial perturbation added: X_block = X_real + M_block′.

Finally, the pixel values of the two simulated sample images are compressed into [0, 255], yielding a point-shaped simulated face adversarial sample set S_point composed of the compressed first simulated sample images, and a block-shaped simulated face adversarial sample set S_block composed of the compressed second simulated sample images. From S_point and S_block, the simulated adversarial sample image set S_adv = {S_point, S_block} can be output.

In practical applications, the simulated adversarial sample image set may also be called the simulated face adversarial sample set.
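The two addition operations plus the final compression into [0, 255] can be sketched as follows (illustrative; `add_perturbation` is a hypothetical helper name):

```python
import numpy as np

def add_perturbation(x_real, m_pert):
    """Add a simulated adversarial perturbation to an original face
    sample image and compress pixel values back into [0, 255]."""
    x_adv = x_real.astype(np.float32) + m_pert
    return np.clip(x_adv, 0.0, 255.0)
```

The same helper serves both shape types: applying it with M_point′ over the first original sample images yields the point-shaped simulated face adversarial sample set, and applying it with M_block′ over the second original sample images yields the block-shaped one.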
According to the technical solution of this embodiment, a first simulated sample image is obtained by adding the simulated adversarial perturbation of the first shape type to the first original sample image; a second simulated sample image is obtained by adding the simulated adversarial perturbation of the second shape type to the second original sample image; and the simulated adversarial sample image set is determined from the first and second simulated sample images. Since the simulated adversarial perturbations of the first and second shape types simulate the perturbation shapes corresponding to the fixed patterns of the various universal adversarial perturbations, the simulated adversarial sample images in the set can cover those universal adversarial perturbations, improving the generalization capability of the face adversarial detection model so that it accurately identifies face adversarial samples.
In one embodiment, generating the target adversarial detection training set from the simulated adversarial sample image set and the original face sample image set includes: obtaining an adversarial detection training set from the simulated adversarial sample image set and the original face sample image set, the adversarial detection training set comprising a plurality of adversarial detection sample images; and locating the classification-sensitive region of each adversarial detection sample image to obtain the target adversarial detection training set.
The classification-sensitive region is the region of the adversarial detection sample image whose classification-influence weight satisfies a preset condition, i.e., the region that has a comparatively large influence weight on the model's output.
In practical applications, the adversarial detection training set may also be referred to as a face adversarial detection training set.
In a specific implementation, in the process of generating the target adversarial detection training set from the simulated adversarial sample image set and the original face sample image set, the server may obtain an adversarial detection training set from the two sets, the adversarial detection training set comprising a plurality of adversarial detection sample images.
Specifically, the simulated adversarial sample image set S_adv and the original face sample image set S_normal are combined into the adversarial detection training set S_train = {S_normal, S_adv}.
Then, through a gradient operation on each adversarial detection sample image, the server can locate the region whose classification-influence weight satisfies the preset condition as the classification-sensitive region, and obtain the target adversarial detection training set from the adversarial detection sample images whose classification-sensitive regions have been located.
According to the technical solution of this embodiment, an adversarial detection training set comprising a plurality of adversarial detection sample images is obtained from the simulated adversarial sample image set and the original face sample image set, and the classification-sensitive regions of the adversarial detection sample images are located to obtain the target adversarial detection training set. Training the face adversarial detection model on the target adversarial detection training set guides the model to attend to the classification-sensitive regions of face adversarial samples, which further improves the model's performance and strengthens its generalization capability.
In one embodiment, locating the classification-sensitive region of the adversarial detection sample image to obtain the target adversarial detection training set includes: determining a random probability value corresponding to the adversarial detection sample image; when the random probability value is greater than a preset probability threshold, locating the classification-sensitive region of the adversarial detection sample image to obtain an optimized adversarial detection sample image; and obtaining the target adversarial detection training set from the optimized adversarial detection sample images.
The random probability value corresponding to an adversarial detection sample image is a random number between 0 and 1.
In a specific implementation, in the process of locating the classification-sensitive region of the adversarial detection sample image to obtain the target adversarial detection training set, the server computes, for each adversarial detection sample image X in the training set S_train (whose width, height and number of color channels are W, H and C), a corresponding random probability value. When the random probability value is greater than the preset probability threshold, the server locates the classification-sensitive region of that adversarial detection sample image and processes the region to obtain an optimized adversarial detection sample image; the optimized adversarial detection sample images then form the target adversarial detection training set.
In practical applications, the preset probability threshold may be 0.5 or another value, which is not specifically limited here.
According to the technical solution of this embodiment, a random probability value corresponding to the adversarial detection sample image is determined; when the random probability value is greater than the preset probability threshold, the classification-sensitive region of the adversarial detection sample image is located to obtain an optimized adversarial detection sample image; and the target adversarial detection training set is obtained from the optimized adversarial detection sample images. This increases the proportion of adversarial detection sample images with located classification-sensitive regions in the target adversarial detection training set, and raises the attention that the face adversarial detection model trained on it pays to those regions, so that both the performance and the generalization capability of the model are improved, along with the detection accuracy for face adversarial samples.
In one embodiment, locating the classification-sensitive region of the adversarial detection sample image includes: inputting the adversarial detection sample image into the face adversarial detection model to be trained and performing a fake attention map calculation on it to obtain an attention mask map corresponding to the adversarial detection sample image, the attention mask map characterizing the classification sensitivity of the face adversarial detection model to each pixel of the adversarial detection sample image; and screening target pixels out of the adversarial detection sample image according to the classification sensitivity of each pixel to obtain the classification-sensitive region of the adversarial detection sample image, where the classification sensitivity of a target pixel is greater than a preset sensitivity threshold.
In a specific implementation, in the process of locating the classification-sensitive region of the adversarial detection sample image, the server may input the adversarial detection sample image into the face adversarial detection model to be trained and perform a fake attention map calculation on it, obtaining an attention mask map that characterizes the classification sensitivity of the face adversarial detection model to each pixel of the adversarial detection sample image.
In this way, according to the classification sensitivity of each pixel in the adversarial detection sample image, the server can screen out the pixels whose classification sensitivity exceeds the preset sensitivity threshold as target pixels, and then take the positions of the target pixels in the adversarial detection sample image as its classification-sensitive region. The size of the classification-sensitive region is smaller than that of the adversarial detection sample image.
In practical application, the server may instead take a preset number of pixels with the highest classification sensitivity as the target pixels.
According to the technical solution of this embodiment, the adversarial detection sample image is input into the face adversarial detection model to be trained and a fake attention map calculation is performed on it, yielding an attention mask map that characterizes the classification sensitivity of the model to each pixel of the image. The attention mask map thus highlights the regions whose classification is most affected by the added simulated adversarial perturbation; screening out the target pixels whose classification sensitivity exceeds the preset sensitivity threshold, according to the per-pixel sensitivities, yields the classification-sensitive region of the adversarial detection sample image, so that the region is located more accurately on the basis of the attention mask map.
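The screening step can be sketched as follows (illustrative; selecting the k highest-sensitivity pixels covers the preset-number variant mentioned above, and the helper name and (row, column) return format are assumptions):

```python
import numpy as np

def sensitive_pixels(mask, k):
    """Return the (h, w) coordinates of the k pixels with the highest
    classification sensitivity in the attention mask map."""
    flat = mask.reshape(-1)
    top_k = np.argsort(flat)[-k:][::-1]  # indices of the k largest values
    return [divmod(int(i), mask.shape[1]) for i in top_k]
```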
In one embodiment, inputting the adversarial detection sample image into the face adversarial detection model to be trained and performing a fake attention map calculation on it to obtain the corresponding attention mask map includes: inputting the adversarial detection sample image into the face adversarial detection model to be trained to obtain the adversarial probability value and the non-adversarial probability value corresponding to the image; determining the perturbation gradient value corresponding to the image from the difference between the adversarial and non-adversarial probability values, the perturbation gradient value being obtained by a gradient operation on the absolute value of the difference; for the perturbation gradient values of the color channels of each pixel, determining the maximum perturbation gradient value as the mask value of that pixel; and obtaining the attention mask map from the mask values of all pixels of the adversarial detection sample image.
In a specific implementation, in the process of obtaining the attention mask map by inputting the adversarial detection sample image into the face adversarial detection model to be trained and performing a fake attention map calculation on it, the server may first input the adversarial detection sample image into the model to obtain the adversarial probability value and the non-adversarial probability value corresponding to the image.
In particular, the server may input the adversarial detection sample image X into the face adversarial detection model and compute its output: two logits (unnormalized log-probabilities) that represent the classification likelihoods of the image, namely the adversarial-sample likelihood O_fake and the normal-sample likelihood O_real. O_fake serves as the adversarial-class probability value and O_real as the non-adversarial probability value.

Then, the server can compute the difference between the adversarial and non-adversarial probability values, take its absolute value, and compute the gradient of that absolute value with respect to the adversarial detection sample image X, obtaining the gradient value G = ∇_X |O_fake − O_real|, which serves as the perturbation gradient value.
The server may then take, for each pixel, the maximum perturbation gradient value over the color channels to obtain the final attention mask map. Specifically, the three RGB color channels of each pixel of the adversarial detection sample image each have their own perturbation gradient value; the server may take the largest of the three as the mask value of that pixel, and then assemble the attention mask map of the adversarial detection sample image from the mask values of all its pixels. In this way, the value of each pixel in the attention mask map is the gradient value of the corresponding pixel with respect to the classification result, and characterizes the classification sensitivity of the corresponding pixel of the adversarial detection sample image.
In practical application, the attention mask map may be computed as Mask[h, w] = max over c ∈ {R, G, B} of G[h, w, c], where G = ∇_X |O_fake − O_real| is the perturbation gradient value defined above.
According to the technical solution of this embodiment, the adversarial detection sample image is input into the face adversarial detection model to be trained to obtain its adversarial and non-adversarial probability values; the perturbation gradient value is determined by a gradient operation on the absolute value of their difference; for each pixel, the maximum perturbation gradient value over its color channels is taken as the mask value; and the attention mask map is obtained from the mask values of all pixels. In this way, the maximum gradient value of each pixel with respect to the classification result can be determined from the adversarial and non-adversarial probability values, and the resulting attention mask map's pixel values represent the classification sensitivity of the corresponding pixels of the adversarial detection sample image.
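Assuming the per-channel perturbation gradient G (the gradient of |O_fake − O_real| with respect to the input, shape H × W × 3) has already been produced by a framework's autograd, the channel-max reduction to the attention mask map can be sketched as follows (taking the absolute value of the gradient before the max is an assumption about what "largest" means here):

```python
import numpy as np

def attention_mask(grad):
    """Reduce a per-channel perturbation gradient of shape (H, W, 3)
    to an attention mask map of shape (H, W) by taking, at each pixel,
    the largest absolute gradient over the three RGB channels."""
    return np.max(np.abs(grad), axis=-1)
```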
In one embodiment, obtaining an optimized challenge detection sample image includes: determining the area to be covered corresponding to the classification sensitive area in the challenge detection sample image according to the position of the classification sensitive area in the challenge detection sample image, the area to be covered being obtained by expanding the classification sensitive area according to a random starting point position and a random size; covering the area to be covered with a random number matrix to obtain a challenge detection sample image after suspicious counterfeiting is eliminated, the matrix scale of the random number matrix matching the area scale of the area to be covered; and taking the challenge detection sample image after suspicious counterfeiting is eliminated as the optimized challenge detection sample image.
Wherein the random starting point positions comprise a random height starting point position and a random width starting point position; the random size includes a random height and a random width.
For the area to be covered corresponding to any classified sensitive area, the matrix size of the corresponding random number matrix can be equal to the area size of the area to be covered.
In a specific implementation, in the process of obtaining the optimized challenge detection sample image, the server can determine the corresponding area to be covered of the classification sensitive area in the challenge detection sample image according to the position of the classification sensitive area in the challenge detection sample image.
Specifically, the server may expand the classification sensitive area, using the random height and the random width for the area scale, and the random height starting point position and the random width starting point position for the position, so as to obtain the area to be covered corresponding to the classification sensitive area in the challenge detection sample image.
In practice, for a classification sensitive region (x_i, y_i), the server calculates the coordinates of the upper-left and lower-right corner points of the corresponding area to be covered from the random height maskh, the random width maskw, the random height starting point sh and the random width starting point sw, as follows:

Upper-left corner ordinate: top = max(x_i − sh, 0);

Lower-right corner ordinate: bot = min(x_i + (maskh − sh), H);

Upper-left corner abscissa: lef = max(y_i − sw, 0);

Lower-right corner abscissa: rig = min(y_i + (maskw − sw), W).

The area to be covered corresponding to the classification sensitive region (x_i, y_i) is then the rectangular region of the challenge detection sample image bounded by these four coordinates, i.e. rows top to bot and columns lef to rig.
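The four corner formulas above can be sketched directly in Python (function and parameter names are illustrative; the example coordinates are hypothetical):

```python
def region_to_cover(xi, yi, maskh, maskw, sh, sw, H, W):
    """Corner coordinates of the area to be covered for a classification
    sensitive point (xi, yi), clipped to the H x W image bounds."""
    top = max(xi - sh, 0)              # upper-left ordinate
    bot = min(xi + (maskh - sh), H)    # lower-right ordinate
    lef = max(yi - sw, 0)              # upper-left abscissa
    rig = min(yi + (maskw - sw), W)    # lower-right abscissa
    return top, bot, lef, rig

corners = region_to_cover(10, 20, 8, 6, 3, 2, 112, 112)
```

The `max`/`min` clipping keeps the expanded rectangle inside the image even when the random starting point pushes it past a border.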
And then, the area to be covered in the challenge detection sample image is covered by a random number matrix whose matrix size matches the area size of the area to be covered, so as to obtain the challenge detection sample image after suspected forgery is eliminated.

Specifically, for the area to be covered corresponding to the classification sensitive region (x_i, y_i), the server may cover the area with a random number matrix M_random to eliminate the suspected forgery. When the number of pixels on which suspected-forgery elimination has been performed reaches a preset number, the forgery elimination operation is stopped, yielding the challenge detection sample image processed by the classification sensitive area mask. The server can take this challenge detection sample image, after the suspected forgery is eliminated, as the optimized challenge detection sample image.
According to the technical scheme of this embodiment, the area to be covered corresponding to the classification sensitive area in the challenge detection sample image is determined according to the position of the classification sensitive area in the challenge detection sample image, the area to be covered being obtained by expanding the classification sensitive area according to a random starting point position and a random size; the area to be covered is covered with a random number matrix, whose matrix scale matches the area scale of the area to be covered, to obtain a challenge detection sample image after suspicious counterfeiting is eliminated; and the challenge detection sample image after suspicious counterfeiting is eliminated is taken as the optimized challenge detection sample image. In this way, a random number mask is applied to the classification sensitive areas of the challenge detection sample image to eliminate suspicious counterfeiting, and the resulting optimized challenge detection sample image is a data-enhanced challenge detection sample image. Performing model training with such data-enhanced images can guide the face countermeasure detection model to pay attention to the classification sensitive areas, improving the performance of the face countermeasure detection model while enhancing its generalization capability.
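A rough NumPy sketch of the random-number masking step, under the assumption that pixel values lie in [0, 1] and that the region corners have already been computed; the function name, value range and random seed are illustrative:

```python
import numpy as np

def cover_with_random(image, top, bot, lef, rig, rng=None):
    """Return a copy of an H x W x C image in which the region
    [top:bot, lef:rig] is overwritten by a random number matrix of
    matching size, erasing any suspected forgery there."""
    if rng is None:
        rng = np.random.default_rng(0)
    out = image.copy()
    out[top:bot, lef:rig] = rng.uniform(
        0.0, 1.0, size=(bot - top, rig - lef, image.shape[2]))
    return out

img = np.zeros((8, 8, 3))
masked = cover_with_random(img, 2, 5, 3, 6)
```

Only the selected rectangle is replaced; the rest of the image (and the original array) is left untouched, which mirrors the matrix-scale matching described in the text.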
In another embodiment, as shown in fig. 3, there is provided a challenge sample detection method comprising the following steps:
step S302, taking a preset number of original face sample images as first original sample images, and taking images except the first original sample images in the original face sample image set as second original sample images.
Step S304, aiming at the first original sample image, point-shaped simulated disturbance is obtained by simulating the universal disturbance.
Step S306, for the second original sample image, obtaining a block-shaped simulated disturbance by simulating the universal disturbance.
Step S308, adding the simulated countermeasure disturbance of each shape type to the original face sample image in the original face sample image set to obtain a simulated countermeasure sample image set.
Step S310, obtaining a countermeasure detection training set according to the simulated countermeasure sample image set and the original face sample image set.
In step S312, the challenge detection sample image is input into the face countermeasure detection model to be trained, and a forgery attention map calculation is performed on the challenge detection sample image, so as to obtain an attention mask map corresponding to the challenge detection sample image.
Step S314, screening out target pixel points from the challenge detection sample image according to the classification sensitivity corresponding to each pixel point in the challenge detection sample image, so as to obtain a classification sensitive area of the challenge detection sample image.
Step S316, covering the classification sensitive area with a random number matrix to obtain the challenge detection sample image after suspicious counterfeiting is eliminated.
Step S318, the challenge detection sample image after suspicious counterfeiting is eliminated is used as the optimized challenge detection sample image, so as to obtain a target challenge detection training set.
It should be noted that, for the specific limitations of the above steps, reference may be made to the specific limitations of the challenge sample detection method described above.
In some embodiments, the backbone network of the face countermeasure detection model is an InceptionNet, which is used to extract countermeasure features and classify them; in line with other countermeasure detection settings, the present application takes the two-class result (face countermeasure sample, normal face sample) as the model output. The face countermeasure detection model is trained using the target countermeasure detection training set, which can train the model without using any real face countermeasure sample while still allowing the model to detect real face countermeasure samples efficiently and accurately. During training, an Adam optimizer and a cross entropy loss may be adopted, the learning rate may be set to 0.0002, and the training batch size may be set to 64. The specific parameters may also be adjusted according to actual conditions and are not particularly limited herein.
For a sample image in the face countermeasure detection test set S_test, after the model extracts its features, a prediction vector similar to [0.7, 0.3] is generated through a softmax function. If the element with subscript 0 is greater than the element with subscript 1, the predicted label is 0, representing a normal face sample; otherwise, the predicted label is 1, representing a face countermeasure sample.
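The decision rule on the two-element prediction vector can be written as a one-line sketch (function name hypothetical):

```python
def predict_label(pred):
    """Map a two-element softmax prediction vector to a label:
    0 = normal face sample, 1 = face countermeasure sample."""
    return 0 if pred[0] > pred[1] else 1

label = predict_label([0.7, 0.3])
```

With the example vector [0.7, 0.3], the element with subscript 0 dominates, so the predicted label is 0 (a normal face sample).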
After the face countermeasure detection model is trained, the face image to be detected can be input into the face countermeasure detection model to obtain a detection result, so as to judge whether the face image to be detected is a face countermeasure sample. Specifically, the face image to be detected may be preprocessed before being input; the preprocessing may include operations such as size unification, center cropping and normalization, without random cropping or random horizontal flipping.
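A minimal sketch of such deterministic inference-time preprocessing; the crop size and normalization constants here are assumptions, not values fixed by the present application:

```python
import numpy as np

def preprocess(image, size=112, mean=0.5, std=0.5):
    """Center-crop an H x W x 3 image to size x size and normalize.
    No random crop or random horizontal flip is applied at inference."""
    h, w, _ = image.shape
    top = (h - size) // 2
    lef = (w - size) // 2
    crop = image[top:top + size, lef:lef + size]
    return (crop - mean) / std

x = preprocess(np.full((128, 128, 3), 0.5))
```

Because every operation is deterministic, the same input face image always yields the same model input, unlike the randomized augmentations used during training.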
According to the technical scheme, the face countermeasure sample data set does not need to be constructed in advance aiming at a certain general countermeasure attack method to train the face countermeasure detection model, and the generation process of the simulated countermeasure sample image set used for training is integrated in the training of the face countermeasure detection model. Meanwhile, the technical scheme does not need to access a protected face recognition system to acquire the face feature vector when the countermeasure detection is carried out, and the network structure of the face countermeasure detection model used in the method is simple and has no redundant structure.
For ease of understanding by those skilled in the art, FIG. 4 provides a general block diagram of a challenge sample detection method. As shown in fig. 4, the above method can be divided into three stages: a universal countermeasure disturbance simulation stage, for obtaining the punctiform countermeasure disturbance and the block countermeasure disturbance; a classification sensitive area mask stage, for locating the classification sensitive areas of the challenge detection sample image through forgery attention calculation and performing suspicious forgery elimination with a random number matrix; and a countermeasure detection stage, in which the input face image is detected by the backbone network of the trained face countermeasure detection model to judge whether it is a normal face sample or a face countermeasure sample.
In some embodiments, to demonstrate that the challenge sample detection method based on simulated universal countermeasure disturbance has advantages in both performance and adaptability, the present application performs verification and analysis through the following experiments:
A. experimental data set
Training set: three data sets were used for normal face samples: LFW, VGGFACE and VGGFACE2. LFW contains more than 13,000 face images collected from the Internet. VGGFACE consists of 2,622 classes of faces; each class has a text file containing the image URLs (uniform resource locators) and the corresponding face detection results. VGGFACE2 contains 3.31 million images of 9,131 identities, downloaded from Google Image Search. For face countermeasure samples, a set of simulated countermeasure sample images subjected to universal countermeasure disturbance simulation and classification sensitive area masking is used.
Test set: the test was performed using real face countermeasure samples generated by eight gradient-based general countermeasure methods (FGSM, BIM, PGD, RFGSM, MIFGSM, TIFGSM, DIFGSM, TIPIM) and the normal face samples in LFW, VGGFACE and VGGFACE2.
B. Evaluation criteria
The present application adopts the mainstream evaluation standards of face recognition research at home and abroad, using accuracy and the area under the ROC (receiver operating characteristic) curve (AUC) for the face countermeasure sample detection test. Assuming that the test set has N pictures in total, and M pictures are judged incorrectly, the accuracy (acc) of face challenge sample detection is:

acc = (N − M) / N
for the ROC curve, each point on the ROC curve reflects the sensitivity to the same signal stimulus. The horizontal axis of the ROC curve represents negative positive class ratio (FPR) and the vertical axis of the ROC curve represents true class ratio (TPR). For the two classification problems:
if a sample is and is predicted to be a positive class, it is a true class (TP);
if a sample is of the positive type and predicted to be of the negative type, it is of the false negative type (FN);
if one sample is a negative class and predicted to be a positive class, it is a true class (FP);
if a sample is negative and predicted to be negative, it is a true class (TN);
The true class rate TPR and the negative positive class rate FPR can be calculated by the following formula:
Figure BDA0004168641760000242
Figure BDA0004168641760000243
for the area under the ROC curve (AUC), the academic and industrial circles often use the AUC value as an evaluation criterion for the classifier, and the AUC can be obtained by summing the areas of the parts under the ROC curve.
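The accuracy, TPR and FPR formulas above can be sketched from confusion-matrix counts as follows (function name and example counts are illustrative):

```python
def detection_metrics(tp, fn, fp, tn):
    """Accuracy, true class rate (TPR) and negative positive class
    rate (FPR) from confusion-matrix counts."""
    total = tp + fn + fp + tn
    acc = (tp + tn) / total          # equivalently (N - M) / N
    tpr = tp / (tp + fn)             # TP / (TP + FN)
    fpr = fp / (fp + tn)             # FP / (FP + TN)
    return acc, tpr, fpr

acc, tpr, fpr = detection_metrics(tp=40, fn=10, fp=5, tn=45)
```

Sweeping the decision threshold and plotting (FPR, TPR) pairs traces the ROC curve; summing the areas of the parts under that curve gives the AUC.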
C. Experimental results
Experiments show that the detection accuracy of the method of the present application on real face countermeasure samples, generated by various gradient-based general countermeasure attack methods, can reach more than 94% on LFW and VGGFACE, and the AUC of the face countermeasure detection model can reach more than 0.99. Compared with a model trained using real face countermeasure samples generated by the FGSM algorithm (the FGSM columns in Tables 1 and 2), the method of the present application greatly improves the detection performance on the five kinds of real face countermeasure samples BIM, RFGSM, TIFGSM, DIFGSM and TIPIM; in addition, the AUC of the face countermeasure detection model on VGGFACE2 can reach more than 0.99 for various real face countermeasure samples. Compared with existing face countermeasure detection methods, the present application achieves a leading level on all three data sets. The experimental results are shown in the tables below:
TABLE 1 LFW face challenge detection performance (accuracy ACC, AUC)

(table image not reproduced)
TABLE 2 VGGFACE face challenge detection performance (accuracy ACC, AUC)

(table image not reproduced)
TABLE 3 VGGFACE face challenge detection performance (accuracy ACC, negative positive class rate FPR)

(table image not reproduced)
TABLE 4 VGGFACE2 face challenge detection performance (AUC)

(table image not reproduced)
As can be seen from Tables 3 and 4, the method of the present application exhibits superior performance compared with the current advanced algorithm AmI and Massli's method under the same experimental environment.
It should be understood that, although the steps in the flowcharts related to the embodiments described above are sequentially shown as indicated by arrows, these steps are not necessarily sequentially performed in the order indicated by the arrows. The steps are not strictly limited to the order of execution unless explicitly recited herein, and the steps may be executed in other orders. Moreover, at least some of the steps in the flowcharts described in the above embodiments may include a plurality of steps or a plurality of stages, which are not necessarily performed at the same time, but may be performed at different times, and the order of the steps or stages is not necessarily performed sequentially, but may be performed alternately or alternately with at least some of the other steps or stages.
Based on the same inventive concept, the embodiments of the present application also provide a challenge sample detection device for implementing the above-mentioned challenge sample detection method. The implementation of the solution provided by the device is similar to that described in the above method, so for the specific limitations of one or more embodiments of the challenge sample detection device provided below, reference may be made to the limitations of the challenge sample detection method above, which will not be repeated here.
In one embodiment, as shown in fig. 5, there is provided a challenge sample detection device comprising: a simulation module 510, an addition module 520, and a generation module 530, wherein:
the simulation module 510 is configured to obtain, for an original face sample image set, simulated anti-disturbance of at least two different shape types by simulating a universal anti-disturbance; the generic challenge disturbance is a challenge disturbance generated by at least two gradient-based generic challenge methods.
The adding module 520 is configured to add each shape type of simulated countermeasure disturbance to the original face sample image in the original face sample image set, so as to obtain a simulated countermeasure sample image set.
A generating module 530, configured to generate a target challenge detection training set according to the simulated challenge sample image set and the original face sample image set; the target countermeasure detection training set is used for training a face countermeasure detection model; the face countermeasure detection model is used for detecting whether the face image to be detected is a face countermeasure sample.
In one embodiment, the simulation module 510 is specifically configured to take a preset number of the original face sample images as a first original sample image, and take images in the original face sample image set except for the first original sample image as a second original sample image; obtaining a simulated disturbance countermeasure of a first shape type by simulating the universal disturbance countermeasure for the first original sample image; aiming at the second original sample image, obtaining a simulated disturbance-countering effect of a second shape type by simulating the universal disturbance-countering effect; determining the simulated countering disturbance of the at least two different shape types from the simulated countering disturbance of the first shape type and the simulated countering disturbance of the second shape type.
In one embodiment, the simulation module 510 is specifically configured to obtain a first all-zero matrix; the matrix size of the first all-zero matrix is matched with the image size of the first original sample image; for each pixel point position of the first original sample image corresponding to the pixel point position in the first all-zero matrix, adding a random disturbance value at each pixel point position to obtain a first target matrix; and taking the first target matrix as a punctiform simulation disturbance countermeasure to obtain the simulation disturbance countermeasure of the first shape type.
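As a hedged illustration of the point-shaped (punctiform) simulated disturbance described above, the sketch below starts from an all-zero matrix and adds an independent random value at every pixel position; the perturbation bound `eps` and function name are assumptions, not values fixed by the present application:

```python
import numpy as np

def point_perturbation(h, w, eps=8 / 255, rng=None):
    """Point-shaped simulated universal perturbation: an all-zero
    H x W x 3 matrix with an independent random value in [-eps, eps]
    added at every pixel position."""
    if rng is None:
        rng = np.random.default_rng(0)
    delta = np.zeros((h, w, 3))
    delta += rng.uniform(-eps, eps, size=delta.shape)
    return delta

delta = point_perturbation(16, 16)
```

Adding `delta` to a first original sample image of the same size would yield a first simulated sample image.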
In one embodiment, the simulation module 510 is specifically configured to obtain a second all-zero matrix; the matrix size of the second all-zero matrix is matched with the image size of the second original sample image; traversing mask areas corresponding to each pixel point in the second original sample image in the second all-zero matrix, and respectively adding random disturbance values in the mask areas to obtain a second target matrix; the mask area is a pixel point area with a preset size and containing the corresponding pixel points; and taking the second target matrix as a block-shaped simulation anti-disturbance to obtain the simulation anti-disturbance of the second shape type.
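A corresponding sketch for the block-shaped simulated disturbance: the all-zero matrix is traversed in fixed-size mask regions and a random value is added within each region. The block size and `eps` bound are illustrative assumptions:

```python
import numpy as np

def block_perturbation(h, w, block=4, eps=8 / 255, rng=None):
    """Block-shaped simulated universal perturbation: traverse the
    all-zero H x W x 3 matrix in block x block mask regions and add
    one random per-channel value to each region."""
    if rng is None:
        rng = np.random.default_rng(0)
    delta = np.zeros((h, w, 3))
    for i in range(0, h, block):
        for j in range(0, w, block):
            delta[i:i + block, j:j + block] += rng.uniform(-eps, eps, size=3)
    return delta

delta_b = block_perturbation(8, 8, block=4)
```

Every pixel inside a given mask region receives the same value, which produces the blocky pattern that distinguishes this shape type from the point-shaped one.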
In one embodiment, the adding module 520 is specifically configured to add the simulated anti-disturbance of the first shape type to the first original sample image, so as to obtain a first simulated sample image; adding the simulated anti-disturbance of the second shape type to the second original sample image to obtain a second simulated sample image; the set of simulated challenge sample images is determined from the first simulated sample image and the second simulated sample image.
In one embodiment, the generating module 530 is specifically configured to obtain a challenge detection training set according to the simulated challenge sample image set and the original face sample image set; the challenge detection training set comprises a plurality of challenge detection sample images; positioning a classification sensitive area of the countermeasure detection sample image to obtain the target countermeasure detection training set; and the classification sensitive area is an area of which the classification influence weight in the challenge detection sample image meets a preset condition.
In one embodiment, the generating module 530 is specifically configured to determine a random probability value corresponding to the challenge detection sample image; under the condition that the random probability value is larger than a preset probability value threshold value, locating a classification sensitive area of the countermeasure detection sample image to obtain an optimized countermeasure detection sample image; and obtaining the target challenge detection training set according to the optimized challenge detection sample image.
In one embodiment, the generating module 530 is specifically configured to input the challenge detection sample image to a face challenge detection model to be trained, perform a false attention map calculation on the challenge detection sample image, and obtain an attention mask map corresponding to the challenge detection sample image; the attention mask map is used for representing the classification sensitivity of the face countermeasure detection model to each pixel point in the countermeasure detection sample image; screening out target pixel points from the countermeasure detection sample image according to the classification sensitivity corresponding to each pixel point in the countermeasure detection sample image, so as to obtain a classification sensitive area of the countermeasure detection sample image; the classification sensitivity corresponding to the target pixel point is larger than a preset sensitivity threshold.
In one embodiment, the generating module 530 is specifically configured to obtain a challenge probability value and a non-challenge probability value corresponding to the challenge detection sample image by inputting the challenge detection sample image to the face challenge detection model to be trained; determining a disturbance gradient value corresponding to the countermeasure detection sample image according to the difference value of the countermeasure probability value and the non-countermeasure probability value; the disturbance gradient value is obtained by carrying out gradient operation according to the absolute value of the difference value; determining a maximum disturbance gradient value as a mask value corresponding to each pixel point aiming at the disturbance gradient value corresponding to the color channel of each pixel point of the countermeasure detection sample image; and obtaining the attention mask map according to the mask value corresponding to each pixel point of the countermeasure detection sample image.
In one embodiment, the generating module 530 is specifically configured to determine, according to the position of the classification sensitive area in the challenge detection sample image, the area to be covered corresponding to the classification sensitive area in the challenge detection sample image; the area to be covered is obtained by expanding the classification sensitive area according to the random starting point position and the random size; cover the area to be covered with a random number matrix to obtain a challenge detection sample image after suspicious counterfeiting is eliminated; the matrix scale of the random number matrix matches the area scale of the area to be covered; and take the challenge detection sample image after the suspected counterfeiting is eliminated as the optimized challenge detection sample image.
The various modules in the challenge sample detection device described above may be implemented in whole or in part by software, hardware, and combinations thereof. The above modules may be embedded in hardware or may be independent of a processor in the computer device, or may be stored in software in a memory in the computer device, so that the processor may call and execute operations corresponding to the above modules.
In one embodiment, a computer device is provided, which may be a server, the internal structure of which may be as shown in fig. 6. The computer device includes a processor, a memory, an Input/Output interface (I/O) and a communication interface. The processor, the memory and the input/output interface are connected through a system bus, and the communication interface is connected to the system bus through the input/output interface. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer programs, and a database. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The database of the computer device is used for storing original face sample image set data. The input/output interface of the computer device is used to exchange information between the processor and the external device. The communication interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a challenge sample detection method.
It will be appreciated by those skilled in the art that the structure shown in fig. 6 is merely a block diagram of some of the structures associated with the present application and is not limiting of the computer device to which the present application may be applied, and that a particular computer device may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
In an embodiment, there is also provided a computer device comprising a memory and a processor, the memory having stored therein a computer program, the processor implementing the steps of the method embodiments described above when the computer program is executed.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when executed by a processor, carries out the steps of the method embodiments described above.
In an embodiment, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the steps of the method embodiments described above.
It should be noted that, the user information (including, but not limited to, user equipment information, user personal information, etc.) and the data (including, but not limited to, data for analysis, stored data, presented data, etc.) referred to in the present application are information and data authorized by the user or sufficiently authorized by each party, and the collection, use and processing of the related data are required to comply with the related laws and regulations and standards of the related countries and regions.
Those skilled in the art will appreciate that implementing all or part of the above described methods may be accomplished by way of a computer program stored on a non-transitory computer readable storage medium, which when executed, may comprise the steps of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the various embodiments provided herein may include at least one of non-volatile and volatile memory. The nonvolatile Memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash Memory, optical Memory, high density embedded nonvolatile Memory, resistive random access Memory (ReRAM), magnetic random access Memory (Magnetoresistive Random Access Memory, MRAM), ferroelectric Memory (Ferroelectric Random Access Memory, FRAM), phase change Memory (Phase Change Memory, PCM), graphene Memory, and the like. Volatile memory can include random access memory (Random Access Memory, RAM) or external cache memory, and the like. By way of illustration, and not limitation, RAM can be in the form of a variety of forms, such as static random access memory (Static Random Access Memory, SRAM) or dynamic random access memory (Dynamic Random Access Memory, DRAM), and the like. The databases referred to in the various embodiments provided herein may include at least one of relational databases and non-relational databases. The non-relational database may include, but is not limited to, a blockchain-based distributed database, and the like. The processors referred to in the embodiments provided herein may be general purpose processors, central processing units, graphics processors, digital signal processors, programmable logic units, quantum computing-based data processing logic units, etc., without being limited thereto.
The technical features of the above embodiments may be arbitrarily combined, and all possible combinations of the technical features in the above embodiments are not described for brevity of description, however, as long as there is no contradiction between the combinations of the technical features, they should be considered as the scope of the description.
The above examples only represent a few embodiments of the present application, which are described in more detail and are not to be construed as limiting the scope of the present application. It should be noted that it would be apparent to those skilled in the art that various modifications and improvements could be made without departing from the spirit of the present application, which would be within the scope of the present application. Accordingly, the scope of protection of the present application shall be subject to the appended claims.

Claims (14)

1. A method of challenge sample detection, the method comprising:
aiming at an original face sample image set, obtaining simulated countermeasure disturbance of at least two different shape types by simulating universal countermeasure disturbance; the universal countermeasure disturbance is a countermeasure disturbance generated by at least two gradient-based universal countermeasure methods;
adding the simulated countermeasure disturbance of each shape type to an original face sample image in the original face sample image set to obtain a simulated countermeasure sample image set;
Generating a target countermeasure detection training set according to the simulated countermeasure sample image set and the original face sample image set; the target countermeasure detection training set is used for training a face countermeasure detection model; the face countermeasure detection model is used for detecting whether the face image to be detected is a face countermeasure sample.
2. The method according to claim 1, wherein the obtaining, for the original face sample image set, simulated adversarial perturbations of at least two different shape types by simulating a universal adversarial perturbation comprises:
taking a preset number of original face sample images as first original sample images, and taking the images in the original face sample image set other than the first original sample images as second original sample images;
obtaining a simulated adversarial perturbation of a first shape type for the first original sample images by simulating the universal adversarial perturbation;
obtaining a simulated adversarial perturbation of a second shape type for the second original sample images by simulating the universal adversarial perturbation;
determining the simulated adversarial perturbations of the at least two different shape types from the simulated adversarial perturbation of the first shape type and the simulated adversarial perturbation of the second shape type.
3. The method of claim 2, wherein the obtaining a simulated adversarial perturbation of a first shape type for the first original sample images by simulating the universal adversarial perturbation comprises:
acquiring a first all-zero matrix; the matrix size of the first all-zero matrix matches the image size of the first original sample image;
for each position in the first all-zero matrix corresponding to a pixel position of the first original sample image, adding a random perturbation value at that position to obtain a first target matrix;
taking the first target matrix as a point-shaped simulated adversarial perturbation to obtain the simulated adversarial perturbation of the first shape type.
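The point-shaped perturbation of claim 3 admits a direct reading: fill an all-zero matrix matching the image size with an independent random value at every pixel position. A minimal sketch under that reading (the uniform distribution and the `epsilon` bound are illustrative assumptions; the claim only says "random perturbation value"):

```python
import numpy as np

def point_shaped_perturbation(image: np.ndarray, epsilon: float = 8.0,
                              seed: int = 0) -> np.ndarray:
    """Build a point-shaped simulated adversarial perturbation: start from an
    all-zero matrix matching the image size, then add an independent random
    value at every pixel position."""
    rng = np.random.default_rng(seed)
    target = np.zeros(image.shape, dtype=np.float64)        # first all-zero matrix
    target += rng.uniform(-epsilon, epsilon, size=image.shape)  # per-position random values
    return target
```

Adding the returned matrix to the original image (and clipping to the valid pixel range) yields the simulated adversarial sample of claim 1.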
4. The method of claim 2, wherein the obtaining a simulated adversarial perturbation of a second shape type for the second original sample images by simulating the universal adversarial perturbation comprises:
acquiring a second all-zero matrix; the matrix size of the second all-zero matrix matches the image size of the second original sample image;
traversing, in the second all-zero matrix, the mask regions corresponding to the pixels of the second original sample image, and adding random perturbation values within each mask region to obtain a second target matrix; a mask region is a pixel region of a preset size containing the corresponding pixel;
taking the second target matrix as a block-shaped simulated adversarial perturbation to obtain the simulated adversarial perturbation of the second shape type.
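The block-shaped perturbation of claim 4 can be sketched by traversing fixed-size mask regions and writing one random value into each, giving a piecewise-constant perturbation. The tile size and the non-overlapping traversal order below are assumptions, since the claim leaves the preset region size open:

```python
import numpy as np

def block_shaped_perturbation(image: np.ndarray, region: int = 4,
                              epsilon: float = 8.0, seed: int = 0) -> np.ndarray:
    """Build a block-shaped simulated adversarial perturbation: start from an
    all-zero matrix, traverse mask regions of a preset size, and fill each
    region with one random perturbation value."""
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]
    target = np.zeros(image.shape, dtype=np.float64)   # second all-zero matrix
    for y in range(0, h, region):                      # traverse mask regions
        for x in range(0, w, region):
            target[y:y + region, x:x + region] = rng.uniform(-epsilon, epsilon)
    return target
```

In contrast to the point-shaped variant, neighboring pixels inside one mask region share the same value, which mimics spatially coherent (patch-like) universal perturbations.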
5. The method of claim 2, wherein the adding the simulated adversarial perturbation of each shape type to the original face sample images in the original face sample image set to obtain a simulated adversarial sample image set comprises:
adding the simulated adversarial perturbation of the first shape type to the first original sample images to obtain first simulated sample images;
adding the simulated adversarial perturbation of the second shape type to the second original sample images to obtain second simulated sample images;
determining the simulated adversarial sample image set from the first simulated sample images and the second simulated sample images.
6. The method of claim 1, wherein the generating a target adversarial detection training set from the simulated adversarial sample image set and the original face sample image set comprises:
obtaining an adversarial detection training set from the simulated adversarial sample image set and the original face sample image set; the adversarial detection training set comprises a plurality of adversarial detection sample images;
locating a classification-sensitive region of each adversarial detection sample image to obtain the target adversarial detection training set; a classification-sensitive region is a region of the adversarial detection sample image whose classification influence weight satisfies a preset condition.
7. The method of claim 6, wherein the locating a classification-sensitive region of each adversarial detection sample image to obtain the target adversarial detection training set comprises:
determining a random probability value corresponding to the adversarial detection sample image;
locating, in the case where the random probability value is greater than a preset probability threshold, the classification-sensitive region of the adversarial detection sample image to obtain an optimized adversarial detection sample image;
obtaining the target adversarial detection training set from the optimized adversarial detection sample image.
8. The method of claim 7, wherein the locating the classification-sensitive region of the adversarial detection sample image comprises:
inputting the adversarial detection sample image into a face adversarial detection model to be trained, and performing a forgery attention map computation on the adversarial detection sample image to obtain an attention mask map corresponding to the adversarial detection sample image; the attention mask map characterizes the classification sensitivity of the face adversarial detection model to each pixel of the adversarial detection sample image;
screening target pixels out of the adversarial detection sample image according to the classification sensitivity corresponding to each pixel of the adversarial detection sample image, so as to obtain the classification-sensitive region of the adversarial detection sample image; the classification sensitivity corresponding to a target pixel is greater than a preset sensitivity threshold.
9. The method according to claim 8, wherein the inputting the adversarial detection sample image into a face adversarial detection model to be trained, and performing a forgery attention map computation on the adversarial detection sample image to obtain an attention mask map corresponding to the adversarial detection sample image, comprises:
inputting the adversarial detection sample image into the face adversarial detection model to be trained to obtain an adversarial probability value and a non-adversarial probability value corresponding to the adversarial detection sample image;
determining a perturbation gradient value corresponding to the adversarial detection sample image according to the difference between the adversarial probability value and the non-adversarial probability value; the perturbation gradient value is obtained by a gradient operation on the absolute value of the difference;
determining, among the perturbation gradient values corresponding to the color channels of each pixel of the adversarial detection sample image, the maximum perturbation gradient value as the mask value corresponding to that pixel;
obtaining the attention mask map from the mask values corresponding to the pixels of the adversarial detection sample image.
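The computation in claim 9 can be read as: differentiate |p_adv − p_non_adv| with respect to the input image, then take, per pixel, the maximum gradient magnitude over the color channels as that pixel's mask value. The sketch below estimates the gradient by central finite differences instead of autograd, and the `score_fn` interface returning the two probability values is an assumed stand-in for the face adversarial detection model:

```python
import numpy as np

def attention_mask_map(image: np.ndarray, score_fn, delta: float = 1e-4) -> np.ndarray:
    """Forgery attention mask: gradient of |p_adv - p_non_adv| w.r.t. every
    input value, estimated by central differences; each pixel's mask value is
    the maximum gradient magnitude over its color channels."""
    def objective(x):
        p_adv, p_non_adv = score_fn(x)
        return abs(p_adv - p_non_adv)

    base = image.astype(np.float64)
    grad = np.zeros(image.shape, dtype=np.float64)
    for idx in np.ndindex(image.shape):          # one central difference per value
        up, down = base.copy(), base.copy()
        up[idx] += delta
        down[idx] -= delta
        grad[idx] = (objective(up) - objective(down)) / (2 * delta)
    return np.abs(grad).max(axis=-1)             # max over color channels per pixel
```

In practice a framework's backward pass would replace the finite-difference loop; the per-pixel channel-max reduction is the step the claim describes.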
10. The method of claim 8, wherein obtaining the optimized adversarial detection sample image comprises:
determining, according to the position of the classification-sensitive region in the adversarial detection sample image, an area to be covered corresponding to the classification-sensitive region in the adversarial detection sample image; the area to be covered is determined by expanding the classification-sensitive region according to a random starting position and a random size;
covering the area to be covered with a random number matrix to obtain an adversarial detection sample image with the suspected forgery removed; the matrix scale of the random number matrix matches the area scale of the area to be covered;
taking the adversarial detection sample image with the suspected forgery removed as the optimized adversarial detection sample image.
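The covering step of claim 10 can be sketched as: expand the classification-sensitive region (the claim expands by a random starting position and random size; a fixed margin is used here for reproducibility) and overwrite the resulting area with a random-number matrix of matching scale:

```python
import numpy as np

def cover_suspected_region(image: np.ndarray, region, expand: int = 3,
                           seed: int = 0) -> np.ndarray:
    """Expand the classification-sensitive region (y0, x0, y1, x1) by a margin
    to get the area to be covered, then overwrite that area with a
    random-number matrix of the same scale."""
    rng = np.random.default_rng(seed)
    y0, x0, y1, x1 = region
    h, w = image.shape[:2]
    y0, x0 = max(0, y0 - expand), max(0, x0 - expand)   # area to be covered
    y1, x1 = min(h, y1 + expand), min(w, x1 + expand)
    covered = image.copy()
    covered[y0:y1, x0:x1] = rng.integers(0, 256, size=covered[y0:y1, x0:x1].shape)
    return covered
```

Overwriting rather than zeroing the region keeps the covered area statistically similar to natural pixel noise, which is presumably why a random-number matrix is specified.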
11. An adversarial sample detection apparatus, the apparatus comprising:
a simulation module, configured to obtain, for an original face sample image set, simulated adversarial perturbations of at least two different shape types by simulating a universal adversarial perturbation; the universal adversarial perturbation is an adversarial perturbation generated by at least two gradient-based universal adversarial attack methods;
an adding module, configured to add the simulated adversarial perturbation of each shape type to an original face sample image in the original face sample image set to obtain a simulated adversarial sample image set;
a generation module, configured to generate a target adversarial detection training set from the simulated adversarial sample image set and the original face sample image set; the target adversarial detection training set is used for training a face adversarial detection model; the face adversarial detection model is used for detecting whether a face image to be detected is a face adversarial sample.
12. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any one of claims 1 to 10.
13. A computer-readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the steps of the method of any one of claims 1 to 10.
14. A computer program product comprising a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the method of any one of claims 1 to 10.
CN202310371235.0A 2023-04-10 2023-04-10 Adversarial sample detection method, device, computer equipment and storage medium Pending CN116403079A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310371235.0A CN116403079A (en) 2023-04-10 2023-04-10 Adversarial sample detection method, device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310371235.0A CN116403079A (en) 2023-04-10 2023-04-10 Adversarial sample detection method, device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116403079A true CN116403079A (en) 2023-07-07

Family

ID=87017549

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310371235.0A Pending CN116403079A (en) 2023-04-10 2023-04-10 Adversarial sample detection method, device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116403079A (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111652290A (en) * 2020-05-15 2020-09-11 深圳前海微众银行股份有限公司 A method and device for detecting an adversarial sample
CN111783629A (en) * 2020-06-29 2020-10-16 浙大城市学院 A face liveness detection method and device for adversarial sample attack
CN112528675A (en) * 2020-12-14 2021-03-19 成都易书桥科技有限公司 Confrontation sample defense algorithm based on local disturbance
CN113111776A (en) * 2021-04-12 2021-07-13 京东数字科技控股股份有限公司 Method, device and equipment for generating countermeasure sample and storage medium
CN113221858A (en) * 2021-06-16 2021-08-06 中国科学院自动化研究所 Method and system for defending face recognition against attack
CN113780123A (en) * 2021-08-27 2021-12-10 广州大学 Countermeasure sample generation method, system, computer device and storage medium
WO2022032549A1 (en) * 2020-08-11 2022-02-17 中国科学院自动化研究所 Anti-counterfeiting facial detection method, system and apparatus based on cross-modality conversion assistance
CN114332982A (en) * 2021-11-30 2022-04-12 浪潮(北京)电子信息产业有限公司 Face recognition model attack defense method, device, equipment and storage medium
CN114663665A (en) * 2022-02-28 2022-06-24 华南理工大学 Gradient-based confrontation sample generation method and system
CN115761859A (en) * 2022-11-29 2023-03-07 中国工商银行股份有限公司 Method, device, and computer-readable storage medium for determining an adversarial example
CN115798056A (en) * 2022-10-20 2023-03-14 招商银行股份有限公司 Face confrontation sample generation method, device and system and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
HUANG Heng; HAN Qing; LI Xiaodong: "Face security recognition method based on adversarial sample defense" (基于对抗样本防御的人脸安全识别方法), Journal of Beijing Electronic Science and Technology Institute (北京电子科技学院学报), no. 04, 15 December 2019 (2019-12-15) *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117315395A (en) * 2023-09-27 2023-12-29 北京瑞莱智慧科技有限公司 Face countermeasure sample generation method, related device, equipment and storage medium
CN119068286A (en) * 2024-08-13 2024-12-03 电子科技大学(深圳)高等研究院 Adversarial sample preparation method based on universal potential infection
CN119600382A (en) * 2024-11-19 2025-03-11 人民网股份有限公司 Method and device for generating countermeasure sample aiming at face counterfeiting detection model
CN119600382B (en) * 2024-11-19 2025-08-08 人民网股份有限公司 Adversarial sample generation method and device for face forgery detection model

Similar Documents

Publication Publication Date Title
CN111723732B (en) Optical remote sensing image change detection method, storage medium and computing equipment
CN109360232B (en) Indoor scene layout estimation method and device based on condition generation countermeasure network
CN116403079A (en) Adversarial sample detection method, device, computer equipment and storage medium
CN113554089A (en) Image classification countermeasure sample defense method and system and data processing terminal
CN113239914B (en) Classroom student facial expression recognition and classroom state assessment method and device
TW202217653A (en) Deepfake video detection system and method which can determine whether the video has been faked by detecting the changes in the human eye state in the video, using deep learning to quantify the eye characteristic behavior based on time series and then integrating statistical models
CN114241587B (en) Evaluation method and device for human face living body detection confrontation robustness
CN113420289B (en) Hidden poisoning attack defense method and device for deep learning model
CN113487600A (en) Characteristic enhancement scale self-adaptive sensing ship detection method
CN118264448A (en) A deep learning network intrusion detection model for multi-classification
CN114742170A (en) Adversarial sample generation method, model training method, image recognition method and device
CN115796243B (en) A Method and System for Detecting Adversarial Examples Based on Attention Map Differences
CN114638356A (en) Static weight guided deep neural network back door detection method and system
JP6892844B2 (en) Information processing device, information processing method, watermark detection device, watermark detection method, and program
Daryani et al. IRL-Net: Inpainted region localization network via spatial attention
CN117692210A (en) Network traffic intrusion detection method and system based on image enhancement
Ain et al. Regularized forensic efficient net: a game theory based generalized approach for video deepfakes detection
CN113570564B (en) A detection method for multi-resolution fake face videos based on multi-channel convolutional network
Dhar et al. Detecting deepfake images using deep convolutional neural network
CN118262276B (en) Method and device for detecting counterfeiting of video, electronic equipment and storage medium
CN114510715A (en) Model's functional safety testing method, device, storage medium and equipment
CN117152486B (en) An interpretability-based method for detecting adversarial examples in images
Mor Forensic ai: A novel multi-granular approach for detecting synthetic media manipulation
CN119273953A (en) Fire image detection method and device based on improved YOLOv5
CN117392074A (en) Method, apparatus, computer device and storage medium for detecting object in image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination