
CN112766065A - Mobile terminal examinee identity authentication method, device, terminal and storage medium - Google Patents

Mobile terminal examinee identity authentication method, device, terminal and storage medium

Info

Publication number
CN112766065A
CN112766065A
Authority
CN
China
Prior art keywords
face
image
network
examinee
improved
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011625484.0A
Other languages
Chinese (zh)
Inventor
马磊
陈义学
夏彬彬
侯庆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SHANDONG SHANDA OUMA SOFTWARE CO Ltd
Original Assignee
SHANDONG SHANDA OUMA SOFTWARE CO Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SHANDONG SHANDA OUMA SOFTWARE CO Ltd filed Critical SHANDONG SHANDA OUMA SOFTWARE CO Ltd
Priority to CN202011625484.0A priority Critical patent/CN112766065A/en
Publication of CN112766065A publication Critical patent/CN112766065A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G06V40/171Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Multimedia (AREA)
  • Molecular Biology (AREA)
  • Human Computer Interaction (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The invention discloses a mobile terminal examinee identity authentication method, device, terminal, and storage medium. An improved RetinaFace algorithm is adopted to obtain a face positioning frame and face feature point coordinates from an image containing a face, where the backbone network of the improved RetinaFace algorithm adopts MobileNetV3 as a multi-stage feature extraction network. The image containing the face is preprocessed based on the face positioning frame and the face feature point coordinates to obtain the face image to be authenticated. Face features are extracted from the face image to be authenticated using an improved MobileNetV3 network algorithm to obtain a face feature vector. The extracted face feature vector is compared with the vectors in a face feature library, and the examinee's identity is authenticated according to the comparison result. Because the invention adopts the improved RetinaFace algorithm and the MobileNetV3 network algorithm to process the face image for examinee identity authentication, the algorithm is better suited to mobile terminals with limited memory and processing performance: the processing runs more smoothly while the accuracy still meets the requirement.

Description

Mobile terminal examinee identity authentication method, device, terminal and storage medium
Technical Field
The invention relates to the field of examinee identity authentication, in particular to a mobile terminal examinee identity authentication method, a mobile terminal examinee identity authentication device, a mobile terminal examinee identity authentication terminal and a storage medium.
Background
Traditional examinee attendance check-in methods, typified by examinee signatures and manual verification by invigilators, suffer from low efficiency and difficulties in statistics and management. Face recognition technology performs identity authentication using the uniqueness of human biometric features; applying automated face recognition to examinee entrance authentication and autonomous check-in improves the intelligent invigilation process, raises entrance efficiency, and accurately establishes examinee identity.
Face recognition is a biometric technology that performs identity recognition based on facial feature information. It is a non-contact pattern recognition technology that combines computer image analysis, model theory, and artificial intelligence to detect characteristic facial information in a complex image scene and complete the intelligent analysis of matching and recognition. A camera or video camera is generally used to capture an image or video stream containing a face, and the face in the image is automatically detected and tracked.
In recent years, with the rapid development of computer vision and the deep application of machine learning and deep learning, current face recognition methods include eigenface-based methods, geometric-feature-based methods, deep-learning-based methods, support-vector-machine-based methods, and other comprehensive approaches. The emergence of deep learning has enabled breakthrough progress in face recognition technology.
However, existing deep-learning-based face recognition algorithms generally run well on computers with ample memory and processing performance, but are less friendly to mobile terminals with limited memory and processing power, where they run sluggishly. This does not match the trend of increasingly popular mobile terminals.
Disclosure of Invention
In order to solve the above problems, the present invention provides a mobile terminal examinee identity authentication method, apparatus, terminal, and storage medium that make a deep-learning-based face recognition algorithm suitable for authenticating examinee identity on the mobile terminal.
The technical scheme of the invention is as follows: a mobile terminal examinee identity authentication method comprises the following steps:
adopting an improved RetinaFace algorithm to obtain a face positioning frame and face characteristic point coordinates of an image containing a face; wherein, the backbone network adopting the improved RetinaFace algorithm adopts MobileNet V3 as a multi-stage feature extraction network;
preprocessing an image containing a human face based on the human face positioning frame and the coordinates of the human face characteristic points to obtain a human face image to be authenticated;
carrying out face feature extraction on a face image to be authenticated by adopting an improved MobileNet V3 network algorithm to obtain a face feature vector;
and comparing the extracted face feature vector with vectors in a face feature library, and authenticating the identity of the examinee according to the comparison result.
Further, the improved MobileNetV3 network algorithm adopted for extracting the face features of the face image to be authenticated uses a 7 × 7 × 512 separable convolution layer instead of the average pooling layer, where 7 × 7 is the convolution kernel size and 512 is the number of input feature map channels.
Further, the method for obtaining the coordinates of the face positioning frame and the face characteristic points of the face image by adopting an improved Retina face algorithm comprises the following steps:
adopting an FPN network and an SSH network to extract reinforced features;
wherein the SSH network introduces context modeling in the feature graph.
Further, preprocessing the image containing the human face based on the human face positioning frame and the coordinates of the human face characteristic points to obtain a human face image to be authenticated, which specifically comprises the following steps:
cutting out a face image through a face positioning frame;
adopting a face image quality detection and processing method to carry out light uniformity detection and compensation and brightness and blurriness detection and correction on the face image to obtain a processed face image;
and aligning the processed face images through affine transformation according to the coordinates of the face characteristic points to obtain the aligned face images serving as the face images to be authenticated.
Further, the extracted face feature vector is compared with the vectors in the face feature library, and the identity of the examinee is authenticated according to the comparison result, which specifically comprises the following steps:
comparing the extracted face feature vector with vectors in a face feature library to obtain face feature similarity;
and processing the comparison result of the face feature similarity, wherein the output result is the same person when the similarity is greater than or equal to a set threshold value, and the output result is not the same person when the similarity is smaller than the set threshold value.
Furthermore, the similarity calculation method of the human face features adopts cosine similarity.
The technical scheme of the invention also comprises a mobile terminal examinee identity authentication device which comprises,
a positioning feature point extraction module: adopting an improved RetinaFace algorithm to obtain a face positioning frame and face characteristic point coordinates of an image containing a face; wherein, the backbone network adopting the improved RetinaFace algorithm adopts MobileNet V3 as a multi-stage feature extraction network;
the face image to be authenticated extraction module: preprocessing an image containing a human face based on the human face positioning frame and the coordinates of the human face characteristic points to obtain a human face image to be authenticated;
the face feature vector extraction module: carrying out face feature extraction on a face image to be authenticated by adopting an improved MobileNet V3 network algorithm to obtain a face feature vector;
an identity authentication module: and comparing the extracted face feature vector with vectors in a face feature library, and authenticating the identity of the examinee according to the comparison result.
Further, the improved MobileNetV3 network algorithm adopted by the face feature extraction module to extract the face features of the face image to be authenticated uses a 7 × 7 × 512 separable convolution layer instead of the average pooling layer, where 7 × 7 is the convolution kernel size and 512 is the number of input feature map channels.
The technical scheme of the invention also comprises a terminal, which comprises:
a processor;
a memory for storing instructions for execution by the processor;
wherein the processor is configured to perform any of the methods described above.
The invention also includes a computer-readable storage medium storing a computer program that, when executed by a processor, implements any of the methods described above.
With the mobile terminal examinee identity authentication method, device, terminal, and storage medium of the present invention, the improved RetinaFace algorithm and the improved MobileNetV3 network algorithm are used to process the face image and authenticate the examinee's identity, with MobileNetV3 serving as the multi-stage feature extraction backbone of the improved RetinaFace algorithm. This makes the algorithm better suited to mobile terminals with limited memory and processing performance: processing runs more smoothly while the accuracy still meets the requirement.
Drawings
FIG. 1 is a schematic flow chart of a method according to an embodiment of the present invention;
FIG. 2 is a block diagram illustrating a flow of a specific implementation of an embodiment of the present invention;
fig. 3 is a schematic diagram of the face detection network structure of the improved RetinaFace algorithm in a specific implementation of the embodiment of the present invention;
fig. 4 is a schematic block diagram of the structure of the second embodiment of the present invention.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings by way of specific examples; these examples are illustrative, and the invention is not limited to the following embodiments.
The English terms used herein are explained below:
RetinaFace algorithm: a one-stage face detection model proposed by the InsightFace team in 2019, published in the paper "RetinaFace: Single-stage Dense Face Localisation in the Wild".
MobileNetV3 network algorithm: a lightweight convolutional network architecture proposed by Google.
Example one
As shown in fig. 1, the present embodiment provides an identity authentication method for a mobile terminal examinee, including the following steps:
s1, obtaining a face positioning frame and face feature point coordinates of the image containing the face by adopting an improved Retina face algorithm; wherein, the backbone network adopting the improved RetinaFace algorithm adopts MobileNet V3 as a multi-stage feature extraction network;
s2, preprocessing the image containing the human face based on the human face positioning frame and the coordinates of the human face characteristic points to obtain a human face image to be authenticated;
s3, extracting the face features of the face image to be authenticated by adopting an improved MobileNet V3 network algorithm to obtain a face feature vector;
and S4, performing feature comparison on the extracted face feature vector and the vector in the face feature library, and authenticating the identity of the examinee according to the comparison result.
In step S1, the backbone network of the improved RetinaFace algorithm adopts MobileNetV3 as the multi-stage feature extraction network, and in step S3 an improved MobileNetV3 network algorithm is adopted to extract the face features of the face image to be authenticated. By improving the RetinaFace network structure and the MobileNetV3 network structure, the two network models for face detection and face feature extraction are realized, making the algorithm better suited to the mobile terminal.
To further explain the present invention, a specific implementation method is provided below based on the above steps and the principle of the present invention, as shown in fig. 2, the specific implementation method obtains images from a surveillance video, performs face detection after preprocessing the images to extract face images, performs alignment after preprocessing the face images, then performs feature extraction and comparison, and finally determines a face recognition result according to a comparison result.
The specific implementation method comprises the following steps:
the method comprises the following steps of collecting video images of examinee entering a test room in real time and preprocessing the video images, wherein image Gaussian filtering smoothing and histogram equalization are adopted in the preprocessing process to suppress noise of the images, so that relatively ideal images I containing human faces are obtained, and the image size is h multiplied by w.
Step two: perform face detection on the image I preprocessed in step one. The detection algorithm is the improved RetinaFace algorithm: the image I containing a face is input into the RetinaFace network to obtain a face positioning frame, expressed as ((x1, y1), (x2, y2)), and the face feature point coordinates, comprising the coordinates of five feature points of the face region: the left and right eyes, the nose, and the left and right mouth corners.
The network structure of the improved RetinaFace algorithm is shown in FIG. 3. Compared with MobileNetV2, adopting MobileNetV3 as the backbone multi-stage feature extraction network of the improved RetinaFace algorithm improves target detection accuracy and efficiency.
After the backbone network processing, an FPN (Feature Pyramid Network) and an SSH (Single Stage Headless face detector) module are used for enhanced feature extraction: each layer of the FPN makes independent predictions, and top-level features are fused with lower-level features through upsampling. In particular, the SSH module introduces context modeling in the feature map, which improves the detection of small faces.
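The top-down FPN fusion just described reduces to a few lines. This NumPy sketch is an illustration rather than the patent's code, and omits the 1 × 1 lateral convolution used to match channel counts:

```python
import numpy as np

def upsample2x(feat):
    # nearest-neighbour 2x spatial upsampling: (C, H, W) -> (C, 2H, 2W)
    return feat.repeat(2, axis=1).repeat(2, axis=2)

def fpn_merge(top, lateral):
    # FPN top-down pathway: the coarser, semantically stronger top-level
    # feature is upsampled and element-wise added to the finer lateral
    # feature of the level below.
    return lateral + upsample2x(top)
```

The merged map keeps the finer level's resolution while inheriting the top level's semantics, which is what makes the independent per-level predictions effective.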
Compared with a two-stage cascade method, this design achieves good performance and improved speed. The network is trained using a multi-task loss combining strongly supervised and self-supervised terms as the loss function; the multi-task loss L is:

L = L_cls(p_i, p_i*) + λ1·p_i*·L_box(t_i, t_i*) + λ2·p_i*·L_pts(l_i, l_i*) + λ3·p_i*·L_pixel    (1)

In equation (1), L_cls(p_i, p_i*) is the face classification loss, where p_i represents the predicted probability that the i-th anchor is a face and p_i* ∈ {0, 1} represents the ground-truth value; L_box(t_i, t_i*) is the face frame regression loss, where t_i = (t_x, t_y, t_w, t_h)_i and t_i* represent the position of the prediction frame corresponding to a positive-sample anchor and the position of the real annotation frame, respectively; L_pts(l_i, l_i*) is the face key point regression function over the five predicted and ground-truth landmark coordinates; L_pixel is the self-supervised dense regression loss; and λ1, λ2, λ3 are weights balancing the terms.
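The supervised part of equation (1) can be illustrated with a toy NumPy sketch for a single anchor. Smooth-L1 regression and a binary log loss are standard choices here, the self-supervised dense term L_pixel is omitted, and the weights λ1 = 0.25 and λ2 = 0.1 follow the RetinaFace paper's defaults rather than values stated in this document:

```python
import numpy as np

def smooth_l1(pred, target):
    # Smooth-L1 (Huber-style) regression loss, summed over coordinates.
    d = np.abs(pred - target)
    return np.where(d < 1.0, 0.5 * d**2, d - 0.5).sum()

def log_loss(p, y):
    # Binary face / not-face classification loss for one anchor.
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def multitask_loss(p, y, t, t_gt, l, l_gt, lam1=0.25, lam2=0.1):
    # Supervised terms of equation (1); regression losses only
    # contribute for positive anchors (y = 1).
    L_cls = log_loss(p, y)
    L_box = smooth_l1(t, t_gt)
    L_pts = smooth_l1(l, l_gt)
    return L_cls + y * (lam1 * L_box + lam2 * L_pts)
```

For a negative anchor (y = 0), only the classification term survives, matching the p_i* gating in equation (1).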
Step three: cut out a face image using the face positioning frame obtained in step two, and apply a face image quality detection and processing method to carry out light uniformity detection and compensation and brightness and blurriness detection and correction on the face image, obtaining the processed face image.
Step four: align the face image through an affine transformation according to the coordinates of the five feature points of the face region (left and right eyes, nose, left and right mouth corners) to obtain the aligned face image, which serves as the face image to be authenticated.
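The alignment step can be sketched by estimating the 2 × 3 affine matrix that maps the five detected landmarks onto a canonical template in the least-squares sense. The template coordinates below are the widely used ArcFace 112 × 112 five-point template, an assumption for illustration rather than values given in this document:

```python
import numpy as np

# Canonical 112 x 112 five-point template (left eye, right eye, nose,
# left mouth corner, right mouth corner) -- assumed, not from the patent.
TEMPLATE = np.array([
    [38.2946, 51.6963], [73.5318, 51.5014], [56.0252, 71.7366],
    [41.5493, 92.3655], [70.7299, 92.2041],
])

def estimate_affine(src, dst):
    # Solve dst ~ A @ [x, y, 1]^T for a 2 x 3 affine matrix A
    # by linear least squares over the five point correspondences.
    n = src.shape[0]
    X = np.hstack([src, np.ones((n, 1))])        # (n, 3)
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)  # (3, 2)
    return A.T                                   # (2, 3)

def warp_points(pts, A):
    # Apply the affine transform to a set of (x, y) points.
    return np.hstack([pts, np.ones((len(pts), 1))]) @ A.T
```

Applying the estimated transform to the whole image (e.g. with an image-warping routine) yields the aligned face crop.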
Step five: input the aligned face image into the improved MobileNetV3 network and extract face features to obtain the face feature vector.
The network structure of the improved MobileNetV3 network used in this step is shown in Table 1 below.
TABLE 1. Improved MobileNetV3 network architecture
(The layer-by-layer table was rendered as an image in the source document and is not recoverable from this extraction.)
In this step, the improved MobileNetV3 network used for face feature extraction replaces pooling with separable convolution: a 7 × 7 × 512 separable convolution layer (7 × 7 is the size of the convolution kernel, 512 is the number of channels of the input feature map) replaces the global average pooling layer, which allows the network to learn different weights at different spatial positions instead of weighting them all equally. That is, depthwise (channel-by-channel) 7 × 7 convolution kernels are applied over the 512 channels in place of a dense convolution across all channels. In addition, the channel expansion multiple is reduced, which facilitates the extraction of detail features. The training process adopts the ArcFace loss function, with the following formula:
L = -(1/N) Σ_{i=1..N} log[ e^{s·cos(θ_{y_i} + m)} / ( e^{s·cos(θ_{y_i} + m)} + Σ_{j≠y_i} e^{s·cos θ_j} ) ]    (2)

In equation (2), the logit of the correctly classified label y_i is cos(θ_{y_i} + m), where θ_{y_i} is the angle between the feature vector and the weight vector of its class, s is a scale factor, and m is an additive angular margin. Since the cosine function decreases monotonically on (0, π), adding m makes the target value smaller and therefore the loss larger, so the angular margin has a more pronounced effect on the angle than a cosine-distance margin.
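A NumPy sketch of the ArcFace loss of equation (2) follows; the scale s = 64 and margin m = 0.5 are the defaults from the ArcFace paper, assumed here since this document does not state them:

```python
import numpy as np

def arcface_loss(embeddings, weights, labels, s=64.0, m=0.5):
    # Additive angular margin loss: normalize features and class weights,
    # add the margin m to the angle of the correct class, then take
    # softmax cross-entropy over the scaled logits.
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=0, keepdims=True)
    cos = np.clip(e @ w, -1.0 + 1e-7, 1.0 - 1e-7)   # (N, num_classes)
    idx = np.arange(len(labels))
    theta = np.arccos(cos[idx, labels])
    logits = s * cos
    logits[idx, labels] = s * np.cos(theta + m)      # margin on target class
    # numerically stable softmax cross-entropy
    z = logits - logits.max(axis=1, keepdims=True)
    log_prob = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_prob[idx, labels].mean()
```

With m > 0 the target logit shrinks for the same geometry, so the loss is larger than plain softmax, which is exactly the margin effect described above.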
Step six: compare the obtained face feature vector with the vectors in the face feature library to obtain the face feature similarity; cosine similarity is used as the similarity measure:
sim(A, B) = (A · B) / (‖A‖ · ‖B‖)    (3)

where A is the extracted face feature vector and B is a vector in the face feature library.
Step seven: process the comparison result of the face feature similarity; when the similarity is greater than or equal to the set threshold, the output result is the same person, and when the similarity is less than the set threshold, the output result is not the same person.
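Steps six and seven amount to a nearest-neighbour search under cosine similarity followed by thresholding. In this sketch the function names are illustrative, and the 0.85 threshold matches the experimental example that follows:

```python
import numpy as np

def cosine_similarity(a, b):
    # sim(A, B) = (A . B) / (|A| |B|)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def authenticate(query_vec, feature_library, threshold=0.85):
    # Compare the extracted feature vector with every registered vector;
    # return the best-matching label and whether it passes the threshold.
    best_label, best_sim = None, -1.0
    for label, ref_vec in feature_library.items():
        sim = cosine_similarity(query_vec, ref_vec)
        if sim > best_sim:
            best_label, best_sim = label, sim
    return best_label, best_sim >= threshold
```

For a real feature library a vectorized matrix product over normalized embeddings would replace the loop, but the decision rule is the same.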
A specific experimental example is provided below. The model training environment is a Linux system configured with an Intel(R) Xeon E5-2620 v4 @ 2.10 GHz processor, 32 GB of memory, and two NVIDIA Tesla P100 GPU cards with 16 GB of memory each. The improved RetinaFace face detection network model is obtained by training on a data set comprising about 30,000 images and 400,000 high-precision face positioning frames; the improved MobileNetV3 face feature extraction network model is obtained by training on a data set comprising about 50,000 persons, with about 12 face photos of each person in different scenes. The model deployment environment is a Huawei MediaPad M6 tablet with a HiSilicon Kirin 980 processor and 6 GB of memory. The 128-dimensional feature vectors obtained by running the examinees' registration photos through the above steps are used to build the face feature database, with labels assigned. The face similarity threshold is set to 0.85: when the similarity between the face feature vectors of the captured live image and the registration photo is greater than or equal to 0.85, they are determined to be the same person and face recognition passes; when the similarity is less than 0.85, they are determined not to be the same person and face recognition does not pass. An actual examinee entrance authentication surveillance video with a resolution of 1080 × 720 is selected, the image sequence is input using a sampling strategy of 10 frames per second, and an examinee target is taken as the example.
The realization process is as follows:
Step (1): take as input the image sequence obtained by sampling the 1080 × 720 examinee entrance video at 10 frames per second, and suppress image noise through Gaussian filtering smoothing and histogram equalization;
Step (2): input the preprocessed entrance video image of the examinee into the pre-trained improved RetinaFace face detection network model, and output the face positioning frame of the target examinee and the coordinates of the five feature points of the face region (left and right eyes, nose, left and right mouth corners);
Step (3): cut out a face image using the face positioning frame obtained in step (2), and apply a face image quality detection and processing method to carry out light uniformity detection and compensation and brightness and blurriness detection and correction on the face image, obtaining the processed face image;
Step (4): align the face image through an affine transformation according to the coordinates of the five feature points of the face region (left and right eyes, nose, left and right mouth corners) to obtain the aligned face image, and resize it to 112 × 112;
Step (5): input the aligned face image (112 × 112) into the pre-trained improved MobileNetV3 network and extract face features to obtain a 128-dimensional feature vector of the face;
Step (6): compare the obtained face feature vector with the vectors in the face feature library to obtain the face feature similarity, using cosine similarity as the similarity measure;
Step (7): process the similarity comparison result; when the similarity is greater than or equal to the set threshold of 0.85, the output result is the same person, and when the similarity is less than 0.85, the output result is not the same person.
Example two
As shown in fig. 4, on the basis of the first embodiment, the present embodiment provides an identity authentication apparatus for a mobile terminal examinee, which includes the following functional modules.
The localization feature point extracting module 101: adopting an improved RetinaFace algorithm to obtain a face positioning frame and face characteristic point coordinates of an image containing a face; wherein, the backbone network adopting the improved RetinaFace algorithm adopts MobileNet V3 as a multi-stage feature extraction network;
the face image to be authenticated extraction module 102: preprocessing an image containing a human face based on the human face positioning frame and the coordinates of the human face characteristic points to obtain a human face image to be authenticated;
the face feature vector extraction module 103: carrying out face feature extraction on a face image to be authenticated by adopting an improved MobileNet V3 network algorithm to obtain a face feature vector;
the identity authentication module 104: and comparing the extracted face feature vector with vectors in a face feature library, and authenticating the identity of the examinee according to the comparison result.
The improved MobileNetV3 network algorithm adopted by the facial feature extraction module 103 to extract facial features from a facial image to be authenticated uses a 7 × 7 × 512 separable convolution layer instead of an average pooling layer, where 7 × 7 is the size of a convolution kernel, and 512 is the number of input feature map channels.
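The replacement of global average pooling by a 7 × 7 × 512 separable (depthwise) convolution can be illustrated in NumPy. This sketch is an illustration, not the patent's implementation: average pooling is the special case in which every spatial position gets the same weight, whereas the depthwise layer learns one 7 × 7 weight map per channel:

```python
import numpy as np

def global_avg_pool(feat):
    # (C, H, W) -> (C,): every spatial position weighted equally
    return feat.mean(axis=(1, 2))

def global_depthwise_conv(feat, weights):
    # (C, H, W) with one learned H x W kernel per channel -> (C,);
    # different spatial positions can receive different weights
    return (feat * weights).sum(axis=(1, 2))

C, H, W = 512, 7, 7
feat = np.random.rand(C, H, W)

# With uniform weights 1/(H*W), the depthwise layer reduces to pooling.
uniform = np.full((C, H, W), 1.0 / (H * W))
assert np.allclose(global_depthwise_conv(feat, uniform), global_avg_pool(feat))

# Parameter count: depthwise 7*7*512 kernels versus a dense 7x7
# convolution mixing all 512 channels into 512 outputs.
assert H * W * C == 25088
assert H * W * C * C == 12845056
```

The depthwise variant therefore adds only 7 × 7 × 512 parameters while letting the network weight face regions unequally, which suits a mobile-terminal budget.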
EXAMPLE III
The present embodiments provide a terminal that includes a processor and a memory.
The memory is used for storing the execution instructions of the processor. The memory may be implemented by any type of volatile or non-volatile memory device or combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, or magnetic or optical disk. When the executable instructions in the memory are executed by the processor, the terminal is enabled to perform some or all of the steps in the above method embodiments.
The processor is the control center of the terminal; it connects the various parts of the whole electronic terminal using various interfaces and lines, and executes the various functions of the electronic terminal and/or processes data by running or executing the software programs and/or modules stored in the memory and calling the data stored in the memory. The processor may be composed of integrated circuits (ICs), for example a single packaged IC, or a plurality of packaged ICs with the same or different functions connected together.
Example four
The present embodiment provides a computer storage medium, wherein the computer storage medium may store a program, and the program may include some or all of the steps in the embodiments provided in the present invention when executed. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM) or a Random Access Memory (RAM).
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, the above-described embodiments may be implemented in whole or in part in the form of a computer program product. The computer program product comprises one or more computer instructions or computer programs. The procedures or functions according to the embodiments of the present application are wholly or partially generated when the computer instructions or the computer program are loaded or executed on a computer. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another computer readable storage medium, for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire or wirelessly.
The above disclosure covers only the preferred embodiments of the present invention, but the present invention is not limited thereto; any non-inventive changes that can be made by those skilled in the art, and any modifications and amendments made without departing from the principle of the present invention, shall fall within the protection scope of the present invention.

Claims (10)

1. A mobile terminal examinee identity authentication method is characterized by comprising the following steps:
adopting an improved RetinaFace algorithm to obtain a face positioning frame and face characteristic point coordinates of an image containing a face; wherein, the backbone network adopting the improved RetinaFace algorithm adopts MobileNet V3 as a multi-stage feature extraction network;
preprocessing an image containing a human face based on the human face positioning frame and the coordinates of the human face characteristic points to obtain a human face image to be authenticated;
carrying out face feature extraction on a face image to be authenticated by adopting an improved MobileNet V3 network algorithm to obtain a face feature vector;
and comparing the extracted face feature vector with vectors in a face feature library, and authenticating the identity of the examinee according to the comparison result.
2. The mobile terminal examinee identity authentication method according to claim 1, wherein the improved MobileNetV3 network algorithm adopted for extracting the face features of the face image to be authenticated replaces the average pooling layer with a 7 × 7 × 512 separable convolutional layer, where 7 × 7 is the convolution kernel size and 512 is the number of input feature map channels.
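A minimal NumPy sketch of the idea in claim 2: a 7 × 7 depthwise convolution with no padding, applied to a 7 × 7 × 512 feature map, collapses each channel to a single scalar (unlike average pooling, it learns per-position weights), and a following 1 × 1 pointwise step mixes the channels into an embedding. The embedding dimension of 128 and the random weights are purely illustrative assumptions.

```python
import numpy as np

def separable_conv_head(feature_map, dw_kernel, pw_weights):
    """7x7 depthwise conv (valid padding) + 1x1 pointwise conv, collapsing
    a 7x7xC feature map into a single embedding vector."""
    # Depthwise step: one 7x7 filter per channel; with a 7x7 input and no
    # padding, the output is a single scalar per channel.
    depthwise = np.einsum('hwc,hwc->c', feature_map, dw_kernel)
    # Pointwise step: a 1x1 conv mixing the C channel scalars into the
    # embedding dimension.
    return pw_weights @ depthwise  # shape: (embedding_dim,)

rng = np.random.default_rng(0)
fmap = rng.standard_normal((7, 7, 512))   # 7x7x512 input feature map
dw = rng.standard_normal((7, 7, 512))     # one 7x7 depthwise filter per channel
pw = rng.standard_normal((128, 512))      # pointwise weights; 128 is illustrative
embedding = separable_conv_head(fmap, dw, pw)
print(embedding.shape)  # (128,)
```

Compared with global average pooling, the depthwise filter weights each spatial position differently, which is the usual motivation for this substitution in lightweight face recognition heads.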
3. The mobile terminal examinee identity authentication method according to claim 1 or 2, wherein obtaining the face positioning frame and the face feature point coordinates from the image containing the face by adopting the improved RetinaFace algorithm comprises:
performing enhanced feature extraction by adopting an FPN network and an SSH network;
wherein the SSH network introduces context modeling into the feature maps.
4. The mobile terminal examinee identity authentication method according to claim 1 or 2, wherein preprocessing the image containing the face based on the face positioning frame and the face feature point coordinates to obtain the face image to be authenticated specifically comprises:
cropping out a face image through the face positioning frame;
performing light uniformity detection and compensation, as well as brightness and blur detection and correction, on the face image by adopting a face image quality detection and processing method, to obtain a processed face image; and
aligning the processed face image through affine transformation according to the face feature point coordinates, the aligned face image serving as the face image to be authenticated.
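The alignment step in claim 4 is commonly realised by estimating a least-squares affine transform from the detected landmarks to a fixed landmark template and warping the crop with it. The sketch below shows only the transform estimation; the five-point template values are an assumption for illustration (a typical layout for a 112 × 112 crop), not taken from the patent.

```python
import numpy as np

def estimate_affine(src_pts, dst_pts):
    """Least-squares 2x3 affine transform mapping src landmarks to dst."""
    n = src_pts.shape[0]
    A = np.hstack([src_pts, np.ones((n, 1))])      # (n, 3) homogeneous coords
    # Solve A @ M.T ~= dst_pts for the 2x3 affine matrix M.
    M, *_ = np.linalg.lstsq(A, dst_pts, rcond=None)
    return M.T                                      # shape (2, 3)

def apply_affine(M, pts):
    """Apply a 2x3 affine transform to an (n, 2) array of points."""
    return pts @ M[:, :2].T + M[:, 2]

# Illustrative 5-point template (eye centres, nose tip, mouth corners).
template = np.array([[38.3, 51.7], [73.5, 51.5], [56.0, 71.7],
                     [41.5, 92.4], [70.7, 92.2]])
# Synthetic "detected" landmarks: the template scaled and shifted.
detected = template * 1.1 + np.array([4.0, -2.0])
M = estimate_affine(detected, template)
aligned = apply_affine(M, detected)
print(np.max(np.abs(aligned - template)) < 1e-6)  # True
```

In practice the same 2 × 3 matrix would then be fed to an image-warping routine so that the cropped face is resampled into the canonical pose before feature extraction.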
5. The mobile terminal examinee identity authentication method according to claim 1 or 2, wherein comparing the extracted face feature vector with the vectors in the face feature library and authenticating the examinee identity according to the comparison result specifically comprises:
comparing the extracted face feature vector with the vectors in the face feature library to obtain a face feature similarity; and
processing the comparison result of the face feature similarity, wherein the output result is the same person when the similarity is greater than or equal to a set threshold, and not the same person when the similarity is less than the set threshold.
6. The mobile terminal examinee identity authentication method according to claim 5, wherein the face feature similarity is calculated using cosine similarity.
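The thresholded cosine-similarity decision of claims 5 and 6 can be sketched in a few lines; the 0.5 threshold and the toy vectors are illustrative assumptions (the patent leaves the threshold as a set value).

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two face feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def same_person(query_vec, library_vec, threshold=0.5):
    """Return True (same person) iff similarity >= the set threshold."""
    return cosine_similarity(query_vec, library_vec) >= threshold

v1 = np.array([0.2, 0.9, 0.4])
v2 = np.array([0.21, 0.88, 0.41])   # near-duplicate of v1
v3 = np.array([-0.9, 0.1, 0.3])     # dissimilar vector
print(same_person(v1, v2))  # True
print(same_person(v1, v3))  # False
```

Because cosine similarity ignores vector magnitude, embeddings are often L2-normalised in advance, in which case the cosine reduces to a plain dot product.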
7. A mobile terminal examinee identity authentication device, characterized by comprising:
a positioning and feature point extraction module, configured to obtain a face positioning frame and face feature point coordinates from an image containing a face by adopting an improved RetinaFace algorithm, wherein the backbone network of the improved RetinaFace algorithm adopts MobileNetV3 as a multi-stage feature extraction network;
a face image to be authenticated extraction module, configured to preprocess the image containing the face based on the face positioning frame and the face feature point coordinates to obtain a face image to be authenticated;
a face feature vector extraction module, configured to perform face feature extraction on the face image to be authenticated by adopting an improved MobileNetV3 network algorithm to obtain a face feature vector; and
an identity authentication module, configured to compare the extracted face feature vector with vectors in a face feature library and authenticate the identity of the examinee according to the comparison result.
8. The mobile terminal examinee identity authentication device according to claim 7, wherein the improved MobileNetV3 network algorithm adopted by the face feature vector extraction module to extract the face features of the face image to be authenticated replaces the average pooling layer with a 7 × 7 × 512 separable convolutional layer, where 7 × 7 is the convolution kernel size and 512 is the number of input feature map channels.
9. A terminal, comprising:
a processor;
a memory for storing instructions executable by the processor;
wherein the processor is configured to perform the method of any one of claims 1-6.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1 to 6.
CN202011625484.0A 2020-12-30 2020-12-30 Mobile terminal examinee identity authentication method, device, terminal and storage medium Pending CN112766065A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011625484.0A CN112766065A (en) 2020-12-30 2020-12-30 Mobile terminal examinee identity authentication method, device, terminal and storage medium


Publications (1)

Publication Number Publication Date
CN112766065A true CN112766065A (en) 2021-05-07

Family

ID=75698918

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011625484.0A Pending CN112766065A (en) 2020-12-30 2020-12-30 Mobile terminal examinee identity authentication method, device, terminal and storage medium

Country Status (1)

Country Link
CN (1) CN112766065A (en)



Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110647840A (en) * 2019-09-19 2020-01-03 天津天地基业科技有限公司 Face recognition method based on improved mobileNet V3
CN111310732A (en) * 2020-03-19 2020-06-19 广东宜教通教育有限公司 High-precision face authentication method, system, computer equipment and storage medium
CN111428606A (en) * 2020-03-19 2020-07-17 华南师范大学 Lightweight face comparison verification method facing edge calculation
CN111582224A (en) * 2020-05-19 2020-08-25 湖南视觉伟业智能科技有限公司 Face recognition system and method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
JIANKANG DENG ET AL.: "RetinaFace: Single-stage Dense Face Localisation in the Wild", arXiv:1905.00641v2 *
ZHANG ZIHAO, WANG RONG: "Face recognition method based on improved MobileFacNet network", Journal of Beijing University of Aeronautics and Astronautics *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114119953A (en) * 2021-11-25 2022-03-01 安徽百诚慧通科技有限公司 Method for quickly positioning and correcting license plate, storage medium and equipment
CN114299279A (en) * 2021-12-01 2022-04-08 北京昭衍新药研究中心股份有限公司 Unmarked group rhesus monkey motion amount estimation method based on face detection and recognition
CN114596594A (en) * 2022-01-20 2022-06-07 北京极豪科技有限公司 A fingerprint image matching method, device, medium and program product
CN114596594B (en) * 2022-01-20 2025-10-31 天津极豪科技有限公司 Fingerprint image matching method, device, medium and program product
CN118053193A (en) * 2024-04-16 2024-05-17 中国移动紫金(江苏)创新研究院有限公司 Face comparison method, device, equipment, storage medium and product of vehicle-mounted terminal
CN118053193B (en) * 2024-04-16 2024-07-26 中国移动紫金(江苏)创新研究院有限公司 Vehicle-mounted terminal face comparison method, device, equipment, storage medium and product

Similar Documents

Publication Publication Date Title
CN111310731B (en) Video recommendation method, device, equipment and storage medium based on artificial intelligence
CN102375970B (en) A kind of identity identifying method based on face and authenticate device
CN112364827B (en) Face recognition method, device, computer equipment and storage medium
JP6032921B2 (en) Object detection apparatus and method, and program
CN104123543B (en) A kind of eye movement recognition methods based on recognition of face
CN111160269A (en) A method and device for detecting facial key points
CN112766065A (en) Mobile terminal examinee identity authentication method, device, terminal and storage medium
CN110633004B (en) Interaction method, device and system based on human body posture estimation
EP2697775A1 (en) Method of detecting facial attributes
CN110705357A (en) Face recognition method and face recognition device
CN105869166B (en) A kind of human motion recognition method and system based on binocular vision
US11380010B2 (en) Image processing device, image processing method, and image processing program
CN110008806A (en) Storage medium, learning processing method, learning device and object identification device
CN114373203B (en) Image archiving method, device, terminal equipment, and computer-readable storage medium
CN107766864B (en) Method and device for extracting features and method and device for object recognition
CN114038045A (en) Cross-modal face recognition model construction method and device and electronic equipment
CN112487232B (en) Face retrieval method and related products
CN113837006A (en) Face recognition method and device, storage medium and electronic equipment
CN110569707A (en) identity recognition method and electronic equipment
CN112836682A (en) Object recognition method, device, computer equipment and storage medium in video
JP2013218605A (en) Image recognition device, image recognition method, and program
CN112446333A (en) Ball target tracking method and system based on re-detection
CN109598201B (en) Action detection method and device, electronic equipment and readable storage medium
CN110008803B (en) Methods, devices and equipment for pedestrian detection and training detectors
CN111507289A (en) Video matching method, computer device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210507