WO2020233000A1 - Face recognition method and apparatus, and computer-readable storage medium - Google Patents
Face recognition method and apparatus, and computer-readable storage medium
- Publication number
- WO2020233000A1 · PCT/CN2019/117342 · CN2019117342W
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- face
- gradient
- data
- training
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
Definitions
- This application relates to the field of artificial intelligence technology, and in particular to a face recognition method, device and computer-readable storage medium that can be used for smart security.
- Video surveillance systems are the representative systems of the security field.
- The importance of video surveillance systems in national security and urbanization management has become increasingly prominent, and the requirements for their functions and performance have risen accordingly.
- Traffic accidents and public security problems are becoming increasingly serious.
- Criminal activities such as theft and robbery committed by breaking doors or windows of experimental and scientific research buildings, office buildings, and residential communities also remain very serious.
- Security monitoring based on face recognition, at home and abroad, mainly adopts face alarm systems built on infrared sensors; however, infrared alarm systems are susceptible to interference from various heat sources, light sources, radio-frequency radiation, and hot air flows, and it is difficult for them to achieve efficient face recognition.
- This application provides a face recognition method, device, and computer-readable storage medium, the main purpose of which is to provide a technical solution that can efficiently recognize a face from video or picture data.
- a face recognition method includes:
- Step A: The data collection layer collects a face image set, a non-face image set, and a face comparison set, saves the face image set and the non-face image set as an original data set, inputs the original data set to the data processing layer, and inputs the face comparison set into the database;
- Step B: The data processing layer performs grayscale and denoising processing on the original data set to obtain a preprocessed data set, where the preprocessed data set includes a face preprocessing data set and a non-face preprocessing data set; the face preprocessing data set is input to the data cutting layer, and the non-face preprocessing data set is input to the model training layer;
- Step C: The data cutting layer receives the face preprocessing data set, performs edge detection and segmentation processing on it, obtains the face training set, and inputs it to the model training layer;
- Step D: The model training layer receives a training set consisting of the face training set and the non-face preprocessing data set, extracts the face comparison set from the database, calculates the training set according to the histogram of oriented gradients (HOG) method to obtain a gradient feature set, and inputs the gradient feature set and the face comparison set into the boosting algorithm for training; when the training accuracy of the boosting algorithm is greater than a preset threshold, the model training layer exits training;
- Step E: The data acquisition layer receives a captured image, performs grayscale and noise reduction processing on it, and inputs it to the model training layer; the model training layer determines whether the captured image contains a human face, and when it does not, outputs the result that no face is recognized;
- Step F: When the captured image contains a human face, the model training layer sequentially determines the similarity between the captured image and the face comparison set of the database based on the Euclidean distance method, and outputs the face comparison set picture with the highest similarity to complete face recognition.
- the present application also provides a face recognition device, which includes a memory and a processor.
- The memory stores a face recognition program that can run on the processor; when the face recognition program is executed by the processor, Steps A through F of the face recognition method described above are implemented.
- The present application also provides a computer-readable storage medium storing a face recognition program, which can be executed by one or more processors to implement the steps of the face recognition method described above.
- The adaptive image denoising filter reduces the impact of noise on the image, and the boosting algorithm makes good use of cascaded weak classifiers, so that the finally combined strong classifier has high classification accuracy. Therefore, the face recognition method, device, and computer-readable storage medium proposed in this application can realize an accurate face recognition function.
- FIG. 1 is a schematic flowchart of a face recognition method provided by an embodiment of this application.
- FIG. 2 is a schematic diagram of the internal structure of a face recognition device provided by an embodiment of the application.
- FIG. 3 is a schematic diagram of modules of a face recognition program in a face recognition device provided by an embodiment of the application.
- This application provides a face recognition method.
- FIG. 1 is a schematic flowchart of a face recognition method provided by an embodiment of this application.
- the method can be executed by a device, and the device can be implemented by software and/or hardware.
- the face recognition method includes:
- S1. The data collection layer collects a face image set, a non-face image set, and a face comparison set, saves the face image set and the non-face image set as an original data set, inputs the original data set to the data processing layer, and inputs the face comparison set into the database.
- The preferred embodiment of the present application deploys several video surveillance areas in a preset scene, such as an experimental or scientific research building, an office building, or a residential community, and selects images including human faces from the images captured in these video surveillance areas to form a face image set. Based on the different faces in the face image set, ID photos corresponding to those faces are collected from the relevant monitoring department, for example ID photos of criminals at large and of dishonest debtors obtained from the public security department, which form the face comparison set.
- The preferred embodiment of the present application selects images that do not include human faces from the image sets captured in these video surveillance areas, and obtains non-human target data from a preset data set, such as the COCO data set, to compose the non-face image set.
- the COCO data set is a large-scale image data set specially designed for object detection, segmentation, human key point detection, semantic segmentation and caption generation.
- S2. The data processing layer performs grayscale and denoising processing on the original data set to obtain a preprocessed data set, where the preprocessed data set includes a face preprocessing data set and a non-face preprocessing data set; the face preprocessing data set is input to the data cutting layer, and the non-face preprocessing data set is input to the model training layer.
- The grayscale step converts the data in the original data set from RGB format to a black-and-white gray format using a proportional method.
- The proportional method is as follows: obtain the R, G, and B pixel values of each pixel in the original data set, and convert the pixel to the black-and-white gray format according to the following function:
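A standard luminance-weighted conversion of this kind, with the usual ITU-R BT.601 coefficients (an assumption, since the patent's exact function is not given in this text), is:

```latex
f(x, y) = 0.299\,R(x, y) + 0.587\,G(x, y) + 0.114\,B(x, y)
```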
- the noise reduction processing adopts the following adaptive image noise reduction filtering method:
- (x, y) represents the coordinates of the image pixels in the original data set
- f(x, y) is the output data after the original data set is denoised based on the adaptive image noise reduction filtering method
- η(x, y) is the noise
- g(x,y) is the original data set
- L represents the local neighborhood at the current pixel coordinates.
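Given these symbol definitions, one standard form of an adaptive local noise reduction filter, consistent with the variables above but an assumption rather than the patent's own equation, is:

```latex
f(x, y) = g(x, y) - \frac{\sigma_{\eta}^{2}}{\sigma_{L}^{2}}\,\bigl[\,g(x, y) - m_{L}\,\bigr]
```

where m_L and σ_L² are the mean and variance of the pixels in the neighborhood L, and σ_η² is the noise variance; where the local variance is high (likely an edge), the correction term shrinks and the pixel is left nearly unchanged.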
- S3. The data cutting layer receives the face preprocessing data set, performs edge detection and segmentation processing on it to obtain a face training set, and inputs the face training set to the model training layer.
- The edge detection finds the set of pixels in the face preprocessing data set whose gray levels change sharply, and the segmentation process then reconnects this pixel set to separate the human face from the face background.
- A sharp step change means that the gray-level derivative has a maximum or minimum value.
- the preferred embodiment of the present application adopts the Canny edge detection method.
- The Canny edge detection method smooths the face preprocessing data set with a Gaussian filter, computes gradients on the smoothed data using first-order partial-derivative finite differences, applies non-maximum suppression, and completes edge detection.
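As an illustration only, this pipeline (Gaussian smoothing, then Canny's gradient computation with non-maximum suppression) can be sketched with OpenCV; the function name, kernel size, and thresholds below are assumptions, not values from this publication:

```python
# Illustrative sketch of the Canny step described above (hypothetical parameters).
import cv2

def detect_face_edges(gray_image, low_threshold=50, high_threshold=150):
    # Smooth with a Gaussian filter to suppress noise before differentiation
    smoothed = cv2.GaussianBlur(gray_image, (5, 5), 1.4)
    # cv2.Canny computes first-order finite-difference gradients, applies
    # non-maximum suppression, and links edges via hysteresis thresholding
    return cv2.Canny(smoothed, low_threshold, high_threshold)
```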
- S4. The model training layer receives a training set consisting of the face training set and the non-face preprocessing data set, extracts the face comparison set from the database, calculates the training set according to the histogram of oriented gradients (HOG) method to obtain a gradient feature set, and inputs the gradient feature set and the face comparison set into the boosting algorithm for training; when the training accuracy of the boosting algorithm is greater than a preset threshold, the model training layer exits training.
- The preferred embodiment of the present application calculates the gradient amplitude and gradient direction value of each pixel (x, y) of the data in the training set, takes the gradient amplitude as the first component and the gradient direction value as the second component to form a gradient matrix, divides the data in the gradient matrix into multiple small blocks, sums the gradient amplitudes and gradient direction values within each small block, and concatenates the summed values to form the gradient feature set.
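The per-pixel gradient formulas are not spelled out in this text; using the standard HOG definitions (an assumption), with I(x, y) the gray value at pixel (x, y):

```latex
G_x(x, y) = I(x+1, y) - I(x-1, y), \qquad G_y(x, y) = I(x, y+1) - I(x, y-1) \\
m(x, y) = \sqrt{G_x^2 + G_y^2}, \qquad \phi(x, y) = \arctan\bigl(G_y / G_x\bigr)
```

Here φ denotes the gradient direction, to avoid confusion with the classifier threshold θ used below.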
- The boosting algorithm includes the AdaBoost algorithm, and the AdaBoost algorithm includes several weak classifiers and a strong classifier;
- The weak classifier h(x, t, p, θ) is:
- t is the classification function including the gradient feature set
- x is the detection sub-window
- p is the polarity coefficient that sets the direction of the inequality
- θ is the weak classifier threshold.
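Consistent with these four symbols, the classical Viola-Jones-style decision stump (an assumption; the patent's own formula is not given in this text) is:

```latex
h(x, t, p, \theta) =
\begin{cases}
1, & p\,t(x) < p\,\theta \\
0, & \text{otherwise}
\end{cases}
```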
- The preferred embodiment of the present application trains the weak classifier h(x, t, p, θ) on the gradient feature set until the optimal threshold θ is determined, obtaining the strong classifier C(x):
- ⁇ k is the coefficient of the strong classifier C(x)
- T is the total number of the weak classifiers
- ⁇ k ⁇ k /(1- ⁇ k )
- ⁇ k is:
- w_i is the weight of the gradient feature set
- y_i is the face comparison set
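The strong classifier, its coefficient, and the error term take the classical AdaBoost forms consistent with the definitions above (these exact expressions are assumptions, since the formulas themselves are not given in this text):

```latex
C(x) =
\begin{cases}
1, & \sum_{k=1}^{T} \alpha_k h_k(x) \ge \frac{1}{2} \sum_{k=1}^{T} \alpha_k \\
0, & \text{otherwise}
\end{cases}
\qquad
\alpha_k = \log \frac{1}{\beta_k},
\qquad
\varepsilon_k = \sum_{i} w_i \,\bigl| h_k(x_i) - y_i \bigr|
```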
- When the training accuracy is greater than the preset threshold, the boosting algorithm exits training; the preset threshold is generally set to 0.97.
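A minimal training-loop sketch matching this exit condition, substituting scikit-learn's AdaBoost for the patent's own implementation (an assumption throughout), might look like:

```python
# Hypothetical sketch: grow an AdaBoost ensemble until training accuracy
# exceeds the preset threshold of 0.97.
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

def train_until_threshold(X, y, threshold=0.97, step=50, max_estimators=500):
    """X: gradient feature vectors; y: labels (1 = face, 0 = non-face)."""
    model = None
    for n_estimators in range(step, max_estimators + step, step):
        model = AdaBoostClassifier(
            # Depth-1 trees act as the weak classifiers
            # ('estimator' is 'base_estimator' in scikit-learn < 1.2)
            estimator=DecisionTreeClassifier(max_depth=1),
            n_estimators=n_estimators,
        )
        model.fit(X, y)
        if model.score(X, y) > threshold:  # exit training past the threshold
            break
    return model
```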
- S5. Receive a captured image, perform grayscale and noise reduction processing on the captured image, and then input it to the model training layer to determine whether the captured image contains a human face.
- The captured image may be an image captured by a device such as an outdoor camera or a mobile phone.
- S6. When the captured image contains a human face, the model training layer sequentially determines the similarity between the captured image and the face comparison set of the database based on the Euclidean distance method, and outputs the face comparison set picture with the highest similarity to complete face recognition.
- the Euclidean distance method is:
- a is the captured image
- y_i is the face comparison set
- n is the total amount of data in the face comparison set.
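Treating the captured image a and each comparison entry y_i as feature vectors of length m, a minimal form of the distance consistent with these definitions (an assumption; the patent's formula is not given in this text) is:

```latex
d(a, y_i) = \sqrt{\sum_{j=1}^{m} \bigl(a_j - y_{i,j}\bigr)^2}, \qquad i = 1, \dots, n
```

The face comparison set picture with the smallest distance, that is the highest similarity, is the one output.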
- The present application also provides a face recognition device.
- FIG. 2 is a schematic diagram of the internal structure of a face recognition device provided by an embodiment of this application.
- the face recognition apparatus 1 may be a PC (Personal Computer, personal computer), or a terminal device such as a smart phone, a tablet computer, or a portable computer, or a server.
- the face recognition device 1 at least includes a memory 11, a processor 12, a communication bus 13, and a network interface 14.
- the memory 11 includes at least one type of readable storage medium, and the readable storage medium includes flash memory, hard disk, multimedia card, card-type memory (for example, SD or DX memory, etc.), magnetic memory, magnetic disk, optical disk, etc.
- the memory 11 may be an internal storage unit of the face recognition device 1 in some embodiments, such as a hard disk of the face recognition device 1.
- The memory 11 may also be an external storage device of the face recognition device 1, for example, a plug-in hard disk equipped on the face recognition device 1, a smart media card (Smart Media Card, SMC), a Secure Digital (SD) card, a flash card, etc.
- the memory 11 may also include both an internal storage unit of the face recognition apparatus 1 and an external storage device.
- the memory 11 can be used not only to store application software and various data installed in the face recognition device 1, such as the code of the face recognition program 01, etc., but also to temporarily store data that has been output or will be output.
- The processor 12 may be a central processing unit (CPU), controller, microcontroller, microprocessor, or other data processing chip, and is used to run program code stored in the memory 11 or to process data, for example to execute the face recognition program 01.
- the communication bus 13 is used to realize the connection and communication between these components.
- the network interface 14 may optionally include a standard wired interface and a wireless interface (such as a WI-FI interface), and is usually used to establish a communication connection between the device 1 and other electronic devices.
- the device 1 may also include a user interface.
- the user interface may include a display (Display) and an input unit such as a keyboard (Keyboard).
- the optional user interface may also include a standard wired interface and a wireless interface.
- the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode, organic light emitting diode) touch device, etc.
- the display can also be called a display screen or a display unit as appropriate, for displaying information processed in the face recognition device 1 and for displaying a visualized user interface.
- FIG. 2 only shows the face recognition device 1 with components 11-14 and the face recognition program 01.
- The structure shown in FIG. 2 does not constitute a limitation on the face recognition device 1; the device may include fewer or more components than shown, combine certain components, or use a different arrangement of components.
- the memory 11 stores the face recognition program 01; when the processor 12 executes the face recognition program 01 stored in the memory 11, the following steps are implemented:
- Step 1: Collect a face image set, a non-face image set, and a face comparison set.
- The face image set and the non-face image set are collectively referred to as the original data set; the original data set is input to the data processing layer, and the face comparison set is input into the database.
- The preferred embodiment of the present application deploys several video surveillance areas and transfers the image sets captured in these areas to the database; images including human faces are selected from the image set stored in the database to form the face image set. Based on the different faces in the face image set, ID photos corresponding to those faces are collected from the relevant monitoring department, for example ID photos of criminals at large and of dishonest debtors obtained from the public security department, which form the face comparison set.
- The preferred embodiment of this application selects images that do not include human faces from the image set stored in the database, and selects non-human target data from a preset data set, such as the COCO data set, to form the non-face image set.
- the COCO data set is a large-scale image data set designed for object detection, segmentation, human key point detection, semantic segmentation and caption generation.
- Step 2: The data processing layer performs grayscale and denoising processing on the original data set to obtain a preprocessed data set; the preprocessed data set includes a face preprocessing data set and a non-face preprocessing data set. The face preprocessing data set is input to the data cutting layer, and the non-face preprocessing data set is input to the model training layer.
- The grayscale step converts the data in the original data set from RGB format to the black-and-white gray format using the proportional method: the R, G, and B pixel values of each pixel are obtained and each pixel is converted according to the weighted function given earlier.
- The noise reduction processing adopts the adaptive image noise reduction filtering method given earlier, where (x, y) represents the coordinates of the image pixels in the original data set, f(x, y) is the output data after the original data set is denoised, η(x, y) is the noise, g(x, y) is the original data set, and L represents the local neighborhood at the current pixel coordinates.
- Step 3: The data cutting layer receives the face preprocessing data set, performs edge detection and segmentation processing on it, obtains the face training set, and inputs it to the model training layer.
- The face training set and the non-face preprocessing data set are collectively referred to as the training set.
- The edge detection finds the set of pixels in the face preprocessing data set whose gray levels change sharply, and the segmentation process reconnects this pixel set to separate the human face from the face background. A sharp step change means that the gray-level derivative has a maximum or minimum value.
- the preferred embodiment of the present application adopts the Canny edge detection method.
- The Canny edge detection method performs smoothing filtering on the face preprocessing data set with a Gaussian filter, computes gradients on the smoothed data set using first-order partial-derivative finite differences, applies non-maximum suppression, and completes edge detection.
- Step 4: The model training layer receives the training set, extracts the face comparison set from the database, calculates the training set according to the histogram of oriented gradients (HOG) method to obtain a gradient feature set, and inputs the gradient feature set and the face comparison set into the boosting algorithm for training; training exits when the training accuracy of the boosting algorithm is greater than a preset threshold.
- The gradient amplitude and gradient direction value of each pixel (x, y) in the training set are calculated; the gradient amplitude is taken as the first component and the gradient direction value as the second component to form a gradient matrix. The data in the gradient matrix is divided into multiple small blocks, the gradient amplitudes and gradient direction values within each small block are summed, and the summed values are concatenated to form the gradient feature set, which is input into the boosting algorithm.
- The boosting algorithm of the preferred embodiment of the present application includes the AdaBoost algorithm, and the AdaBoost algorithm includes several weak classifiers and a strong classifier;
- The weak classifier h(x, t, p, θ) is as defined above, where:
- t is the classification function including the gradient feature set
- x is the detection sub-window
- p is the polarity coefficient that sets the direction of the inequality
- θ is the threshold of the weak classifier; the weak classifier h(x, t, p, θ) is trained on the gradient feature set until the optimal threshold θ is determined;
- The strong classifier C(x) then takes the form given above, where:
- ⁇ k is the coefficient of the strong classifier C(x)
- T is the total number of the weak classifiers
- ⁇ k ⁇ k /(1- ⁇ k )
- ⁇ k is:
- w i is the weight of the gradient feature set
- yi is the face control set
- When the training accuracy is greater than the preset threshold, the boosting algorithm exits training; the preset threshold is generally set to 0.97.
- Step 5: Receive a captured image, perform grayscale and noise reduction processing on the captured image, and input it to the model training layer to determine whether the captured image contains a human face.
- The captured image may be an image captured by a device such as an outdoor camera or a mobile phone.
- Step 6: When the captured image does not contain a human face, output the result that no face is recognized.
- Step 7: When the captured image contains a face, the model training layer sequentially judges the similarity between the captured image and the face comparison set of the database based on the Euclidean distance method, and outputs the face comparison set picture with the highest similarity, completing face recognition.
- The Euclidean distance method is as given above, where:
- a is the captured image
- y_i is the face comparison set
- n is the total amount of data in the face comparison set.
- The face recognition program can also be divided into one or more modules, which are stored in the memory 11 and executed by one or more processors (the processor 12 in this embodiment) to complete the application.
- the module referred to in the application refers to a series of computer program instruction segments capable of completing specific functions, and is used to describe the execution process of the face recognition program in the face recognition device.
- FIG. 3 is a schematic diagram of the program modules of the face recognition program in an embodiment of the face recognition device of this application.
- The face recognition program can be divided into a data receiving module 10, a data processing module 20, a model training module 30, and a face recognition output module 40. Illustratively:
- The data receiving module 10 is used to collect a face image set, a non-face image set, and a face comparison set; the face image set and the non-face image set are collectively referred to as the original data set, which is input into the data processing layer, while the face comparison set is input into the database.
- The data processing module 20 is configured so that the data processing layer performs grayscale and noise reduction processing on the original data set to obtain a preprocessed data set comprising a face preprocessing data set and a non-face preprocessing data set; the face preprocessing data set is input to the data cutting layer, and the non-face preprocessing data set is input to the model training layer.
- the data cutting layer receives the face preprocessing data set, performs edge detection and segmentation processing on the face preprocessing data set, and then obtains the face training set and inputs it to the model training layer.
- The face training set and the non-face preprocessing data set are collectively referred to as the training set.
- The model training module 30 is configured so that the model training layer receives the training set, extracts the face comparison set from the database, calculates the training set according to the histogram of oriented gradients (HOG) method to obtain a gradient feature set, and inputs the gradient feature set and the face comparison set into the boosting algorithm for training; training exits when the training accuracy of the boosting algorithm is greater than a preset threshold.
- The face recognition output module 40 is configured to: receive the captured image, perform grayscale and noise reduction processing on it, and input it to the model training layer; the model training layer determines whether the captured image contains a human face, and when it does not, outputs the result that no face is recognized.
- When the captured image contains a human face, the model training layer sequentially determines the similarity between the captured image and the face comparison set of the database based on the Euclidean distance method, and outputs the face comparison set picture with the highest similarity to complete face recognition.
- An embodiment of the present application also proposes a computer-readable storage medium storing a face recognition program, which can be executed by one or more processors to implement the following operations:
- Collect a face image set, a non-face image set, and a face comparison set; the face image set and the non-face image set are collectively referred to as the original data set. The original data set is input to the data processing layer, and the face comparison set is input into the database.
- The data processing layer performs grayscale and noise reduction processing on the original data set to obtain a preprocessed data set; the preprocessed data set includes a face preprocessing data set and a non-face preprocessing data set. The face preprocessing data set is input to the data cutting layer, and the non-face preprocessing data set is input to the model training layer.
- the data cutting layer receives the face preprocessing data set, performs edge detection and segmentation processing on the face preprocessing data set, and then obtains the face training set and inputs it to the model training layer.
- The face training set and the non-face preprocessing data set are collectively referred to as the training set.
- The model training layer receives the training set, extracts the face comparison set from the database, calculates the training set according to the histogram of oriented gradients (HOG) method to obtain a gradient feature set, and inputs the gradient feature set and the face comparison set into the boosting algorithm for training; training exits when the training accuracy of the boosting algorithm is greater than the preset threshold.
- Receive a captured image, perform grayscale and noise reduction processing on it, and input it to the model training layer. The model training layer determines whether the captured image contains a human face; when it does not, the result that no face is recognized is output. When the captured image contains a face, the model training layer sequentially determines the similarity between the captured image and the face comparison set of the database based on the Euclidean distance method, and outputs the face comparison set picture with the highest similarity to complete face recognition.
- The methods of the above embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, or by hardware, although in many cases the former is the better implementation.
- Based on this understanding, the technical solution of this application, in essence or in the part that contributes to the existing technology, can be embodied in the form of a software product stored in a storage medium as described above (such as ROM/RAM, a magnetic disk, or an optical disk), including several instructions that cause a terminal device (which may be a mobile phone, a computer, a server, or a network device) to execute the method described in each embodiment of the present application.
Abstract
The present invention relates to artificial intelligence technology. Disclosed is a face recognition method, comprising the steps of: collecting an original data set and a face comparison set, preprocessing them, and inputting them into a model training layer; the model training layer performing a calculation according to a histogram of oriented gradients method to obtain a gradient feature set; inputting the gradient feature set and the face comparison set into a boosting algorithm for training, and exiting training when the training accuracy of the boosting algorithm is greater than a preset threshold value; receiving a captured image, the model training layer determining whether the captured image contains a face; and, when the captured image contains a face, searching for the face with the highest similarity in the face comparison set to complete face recognition. Also provided are a face recognition apparatus and a computer-readable storage medium. The present invention can realize an accurate face recognition function.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201910417997.3 | 2019-05-20 | ||
| CN201910417997.3A CN110309709A (zh) | 2019-05-20 | 2019-05-20 | Face recognition method, device and computer-readable storage medium |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2020233000A1 (fr) | 2020-11-26 |
Family
ID=68074686
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CN2019/117342 Ceased WO2020233000A1 (fr) | 2019-05-20 | 2019-11-12 | Face recognition method and apparatus, and computer-readable storage medium |
Country Status (2)
| Country | Link |
|---|---|
| CN (1) | CN110309709A (fr) |
| WO (1) | WO2020233000A1 (fr) |
Families Citing this family (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN110309709A (zh) * | 2019-05-20 | 2019-10-08 | 平安科技(深圳)有限公司 | 人脸识别方法、装置及计算机可读存储介质 |
| CN110853047B (zh) * | 2019-10-12 | 2023-09-15 | 平安科技(深圳)有限公司 | 智能图像分割及分类方法、装置及计算机可读存储介质 |
| CN111652064B (zh) * | 2020-04-30 | 2024-06-07 | 平安科技(深圳)有限公司 | 人脸图像生成方法、电子装置及可读存储介质 |
| CN111639704A (zh) * | 2020-05-28 | 2020-09-08 | 深圳壹账通智能科技有限公司 | 目标识别方法、装置及计算机可读存储介质 |
| CN112712004B (zh) * | 2020-12-25 | 2023-09-12 | 英特灵达信息技术(深圳)有限公司 | 人脸检测系统及人脸检测方法、装置、电子设备 |
Family Cites Families (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN100568262C (zh) * | 2007-12-29 | 2009-12-09 | 浙江工业大学 | Face recognition and detection device based on multi-camera information fusion |
| CN104978550B (zh) * | 2014-04-08 | 2018-09-18 | 上海骏聿数码科技有限公司 | Face recognition method and system based on a large-scale face database |
| CN106127114A (zh) * | 2016-06-16 | 2016-11-16 | 北京数智源科技股份有限公司 | Intelligent video analysis method |
| CN106529448A (zh) * | 2016-10-27 | 2017-03-22 | 四川长虹电器股份有限公司 | Method for multi-view face detection using aggregate channel features |
- 2019-05-20: CN application CN201910417997.3A filed (publication CN110309709A, status: pending)
- 2019-11-12: PCT application PCT/CN2019/117342 filed (publication WO2020233000A1, status: ceased)
Patent Citations (9)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20130108123A1 (en) * | 2011-11-01 | 2013-05-02 | Samsung Electronics Co., Ltd. | Face recognition apparatus and method for controlling the same |
| CN102521622A (zh) * | 2011-11-18 | 2012-06-27 | 常州蓝城信息科技有限公司 | Face detection system based on advertisement delivery |
| CN102831411A (zh) * | 2012-09-07 | 2012-12-19 | 云南晟邺科技有限公司 | Fast face detection method |
| CN103605964A (zh) * | 2013-11-25 | 2014-02-26 | 上海骏聿数码科技有限公司 | Face detection method and system based on online image learning |
| CN106022254A (zh) * | 2016-05-17 | 2016-10-12 | 上海民实文化传媒有限公司 | Image recognition technology |
| CN106355138A (zh) * | 2016-08-18 | 2017-01-25 | 电子科技大学 | Face recognition method based on deep learning and key-point feature extraction |
| CN106503615A (zh) * | 2016-09-20 | 2017-03-15 | 北京工业大学 | Multi-sensor-based indoor human detection, tracking and identification system |
| CN106485273A (zh) * | 2016-10-09 | 2017-03-08 | 湖南穗富眼电子科技有限公司 | Face detection method based on HOG features and a DNN classifier |
| CN110309709A (zh) * | 2019-05-20 | 2019-10-08 | 平安科技(深圳)有限公司 | Face recognition method, device and computer-readable storage medium |
Cited By (10)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN112633192A (zh) * | 2020-12-28 | 2021-04-09 | 杭州魔点科技有限公司 | Gesture-interactive face recognition and temperature measurement method, system, device and medium |
| CN112633192B (zh) * | 2020-12-28 | 2023-08-25 | 杭州魔点科技有限公司 | Gesture-interactive face recognition and temperature measurement method, system, device and medium |
| CN113420739A (zh) * | 2021-08-24 | 2021-09-21 | 北京通建泰利特智能系统工程技术有限公司 | Neural-network-based intelligent emergency monitoring method, system and readable storage medium |
| CN113420739B (zh) * | 2021-08-24 | 2022-10-18 | 北京通建泰利特智能系统工程技术有限公司 | Neural-network-based intelligent emergency monitoring method, system and readable storage medium |
| CN114445093A (zh) * | 2022-01-27 | 2022-05-06 | 黑龙江邮政易通信息网络有限责任公司 | Product management-and-control and anti-counterfeiting traceability system |
| CN114677736A (zh) * | 2022-03-25 | 2022-06-28 | 浙江工商大学 | Hyperellipsoid-based face recognition method, device and storage medium |
| CN114677736B (zh) * | 2022-03-25 | 2024-12-27 | 浙江工商大学 | Hyperellipsoid-based face recognition method, device and storage medium |
| CN116403137A (zh) * | 2023-03-27 | 2023-07-07 | 研华科技(中国)有限公司 | High-definition video image processing method |
| CN116631035A (zh) * | 2023-05-31 | 2023-08-22 | 北京明朝万达科技股份有限公司 | Method and device for filtering face recognition output results |
| CN119580105A (zh) * | 2025-02-07 | 2025-03-07 | 华东交通大学 | Bridge crack recognition method and system |
Also Published As
| Publication number | Publication date |
|---|---|
| CN110309709A (zh) | 2019-10-08 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 19929976; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 19929976; Country of ref document: EP; Kind code of ref document: A1 |