
CN107403168A - A facial recognition system - Google Patents

A facial recognition system

Info

Publication number
CN107403168A
CN107403168A (application CN201710667710.3A)
Authority
CN
China
Prior art keywords
face
image
facial
identified
recognition
Prior art date
Legal status
Granted
Application number
CN201710667710.3A
Other languages
Chinese (zh)
Other versions
CN107403168B
Inventor
孙强
崔孔明
Current Assignee
Qingdao Lock Intelligent Technology Co Ltd
Original Assignee
Qingdao Lock Intelligent Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Qingdao Lock Intelligent Technology Co Ltd
Priority to CN201710667710.3A
Publication of CN107403168A
Application granted
Publication of CN107403168B
Legal status: Active
Anticipated expiration

Classifications

    • G06V40/165 — Detection; Localisation; Normalisation using facial parts and geometric relationships
    • G06F18/214 — Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06V10/30 — Noise filtering
    • G06V10/40 — Extraction of image or video features
    • G06V10/50 — Extraction of features by operations within image blocks or by using histograms, e.g. histogram of oriented gradients [HOG]
    • G06V10/60 — Extraction of features relating to illumination properties, e.g. using a reflectance or lighting model
    • G06V40/174 — Facial expression recognition
    • G06V10/467 — Encoded features or binary features, e.g. local binary patterns [LBP]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Software Systems (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a facial recognition system, relating to the technical field of image processing and pattern recognition. The system first determines the gender of a face image to be identified using a deep-learning method; it then combines the identified gender with a high-accuracy method based on convolutional neural networks to determine the identity of the face image; finally, it uses binocular vision technology, based on the disparity features of the face image, to determine the facial micro-expressions of the face image. Recognition thus combines the gender information, identity information, and expression information of the face image to be identified. By combining the deep-learning method, convolutional neural networks, and binocular vision technology, the system reduces the influence of factors such as illumination conditions and facial pose on the face recognition rate, reduces the amount of data computation in the face recognition process, achieves fast and highly accurate face recognition, lowers the cost of the recognition process, and improves recognition accuracy under different facial poses.

Description

A facial recognition system
Technical field
The present invention relates to the technical field of image processing and pattern recognition, and in particular to a facial recognition system.
Background technology
Facial recognition technology performs identity discrimination by analyzing the shape of facial organs and their positional relationships; it is an important biometric identification technology, widely used in fields such as security, access control, and surveillance. The main algorithms of facial recognition include template-matching methods based on geometric features, face recognition methods based on geometric features, face recognition methods based on sample learning, and face recognition methods based on texture features. Among these, face recognition methods based on facial texture features mainly rely on LBP (Local Binary Patterns) for facial feature extraction.
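As a concrete illustration of the texture-based approach, the basic 3 × 3 LBP operator can be sketched as follows. This is a minimal NumPy sketch, not the patent's implementation; the function names and the clockwise neighbour ordering are assumptions.

```python
import numpy as np

def lbp_code(patch):
    """Basic 3x3 LBP operator: compare the 8 neighbours of the centre
    pixel against the centre and pack the results into one byte.
    Neighbours are read clockwise starting at the top-left corner."""
    center = patch[1, 1]
    neighbours = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                  patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    code = 0
    for bit, n in enumerate(neighbours):
        if n >= center:
            code |= 1 << bit
    return code

def lbp_image(gray):
    """Apply the operator to every interior pixel of a grayscale image."""
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            out[i - 1, j - 1] = lbp_code(gray[i - 1:i + 2, j - 1:j + 2])
    return out
```

A texture descriptor is then typically built as a histogram of these codes over image blocks, which is how the LBP features mentioned above are usually consumed.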
At present, facial recognition systems are divided, according to the data they process, into two-dimensional face recognition systems and three-dimensional face recognition systems.
The methods used by two-dimensional face recognition systems are relatively mature: the eigenface (Eigenfaces) method proposed by Turk and Pentland as early as 1991 already achieved good recognition results. In subsequent research, face recognition methods based on neural networks, on support vector machines (SVM), and on wavelet transforms, among others, have continually emerged. However, no improvement can overcome the inherent defects of two-dimensional face recognition: changes in factors such as illumination conditions and facial pose affect the matching between the features of the image to be recognized and the image features in the sample library, thereby reducing recognition performance.
Most three-dimensional face recognition methods are based on relatively abstract spatial geometric features, such as methods that perform surface similarity matching using the iterative closest point (ICP) algorithm, or methods that locate feature points on a 3D model and extract local regions for curve matching. However, 3D face recognition technology is not yet mature: 3D data are excessively large, computational complexity is high, the amount of data computation is heavy, recognition speed is low, 3D acquisition equipment is expensive, and acquisition conditions are restricted. Therefore, the 3D face recognition of the prior art is difficult to promote in practical applications.
Summary of the invention
An embodiment of the present invention provides a facial recognition system, intended to reduce the influence of factors such as illumination conditions and facial pose on the face recognition rate, reduce the amount of data computation in the face recognition process, achieve fast and highly accurate face recognition, and lower the cost of the face recognition process.
The specific technical scheme provided by the invention is as follows:
A facial recognition system, the facial recognition system comprising:
a face image processing module, configured to determine the facial region in an image to be processed based on a facial feature point location algorithm, crop the face image, and perform noise reduction, light compensation, highlight processing, and normalization on the cropped face image to be identified;
a facial feature extraction module, configured to extract, based on the key points of the face image, the facial features of the face image to be identified, where the facial features include histogram of oriented gradients (HOG) features, local binary pattern (LBP) features, and disparity features of the facial pixels;
a big-data-based facial recognition module, configured to identify, based on big data, the gender of the face image to be identified according to the HOG features and the LBP features;
a convolutional-neural-network-based facial recognition module, configured to achieve, according to the HOG features and the LBP features and using a trained convolutional neural network model, high-accuracy facial identification of the face image to be identified under multiple angles;
a binocular-vision-based facial recognition module, configured to obtain, according to the disparity features of the facial pixels, a dedicated facial expression detection image of the face image to be identified, and to recognize the facial micro-expressions of the face image to be identified based on the dedicated facial expression detection image.
Optionally, the face image processing module specifically includes:
a multi-view facial region detection submodule, configured to convert the input color image to grayscale, perform histogram equalization, carry out face detection using frontal, left-profile, and right-profile face detectors respectively, remove face detection results whose area is smaller than a predetermined value, and obtain multi-view face images to be processed;
a face location and normalization submodule, configured to perform feature point location in the obtained multi-view face images to be processed based on a mixture-of-trees feature point model built on HOG (histogram of oriented gradients) features; after the feature points are located, the facial region is accurately determined from the facial contour feature points, and the normalization of the face image to be processed is completed by cropping and scaling the facial region image;
a face image post-processing submodule, configured to perform noise reduction, light compensation, and highlight processing on the cropped face image to be processed, obtaining the face image to be identified.
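The histogram equalization step used by the detection submodule can be sketched with the classic CDF-remapping formula. This is a minimal NumPy sketch; the function name and the handling of the degenerate fully uniform image are assumptions, and a production system would likely call an existing library routine instead.

```python
import numpy as np

def equalize_hist(gray):
    """Histogram equalization for an 8-bit grayscale image:
    remap each gray level through the normalized cumulative histogram."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]          # first non-empty bin
    total = gray.size
    # classic formula: round((cdf - cdf_min) / (total - cdf_min) * 255)
    lut = np.round((cdf - cdf_min) / max(total - cdf_min, 1) * 255)
    lut = np.clip(lut, 0, 255).astype(np.uint8)
    return lut[gray]
```

The effect is that the darkest occupied level maps to 0 and the brightest to 255, spreading intermediate levels according to how many pixels they hold.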
Optionally, the facial feature extraction module specifically includes:
an HOG feature extraction submodule, configured to perform histogram equalization on the normalized face image to be identified and extract its HOG features;
an LBP feature extraction submodule, configured to partition the normalized face image to be identified into blocks according to different partition strategies, and extract the LBP features of the block images using a mixed LBP (local binary pattern) operator;
a disparity extraction submodule, configured to compute the disparity map of the face image to be identified from depth maps of the same scene captured by two depth cameras, and obtain the disparity features of the facial pixels of the face image to be identified based on the disparity map.
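The disparity features above rest on the standard rectified-stereo relation d = f·B/Z between depth and disparity, which is how a depth map can be converted into a disparity map. The patent does not state this formula explicitly; the function name and units below are assumptions.

```python
def disparity_from_depth(depth_m, focal_px, baseline_m):
    """Disparity (in pixels) of a point at depth Z metres, for a
    rectified binocular rig with focal length f (pixels) and
    baseline B (metres): d = f * B / Z."""
    if depth_m <= 0:
        raise ValueError("depth must be positive")
    return focal_px * baseline_m / depth_m
```

Applied per pixel to a depth map, this yields the disparity map from which the per-pixel disparity features are taken; nearer points produce larger disparities.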
Optionally, the big-data-based facial recognition module specifically includes:
a deep-learning-based gender identification model construction submodule, configured to train a deep-learning gender identification model for facial gender identification from facial samples whose gender has been labeled;
a gender identification submodule, configured to perform gender identification on the face image to be identified using the deep-learning-based gender identification model.
Optionally, the deep-learning-based gender identification model construction submodule is specifically configured to:
train a first deep-learning gender identification model for facial gender identification using all the gender-labeled facial samples, where the output parameters of the first model include a first probability parameter characterizing that a gender-labeled facial sample is male and a second probability parameter characterizing that the sample is female, the first probability parameter and the second probability parameter summing to 1;
from all the gender-labeled facial samples, obtain the retraining facial samples for which the absolute value of the difference between the first probability parameter and the second probability parameter is smaller than a predetermined threshold;
train a second deep-learning gender identification model for facial gender identification using the retraining facial samples, where the output parameters of the second model include a third probability parameter characterizing that a retraining facial sample is male and a fourth probability parameter characterizing that the sample is female, the third probability parameter and the fourth probability parameter summing to 1.
The first deep-learning gender identification model is used to obtain the first probability parameter that the face image to be identified is male and the second probability parameter that it is female, the first probability parameter and the second probability parameter summing to 1:
if the difference between the first probability parameter and the second probability parameter is greater than the predetermined threshold, the face image to be identified is determined to be male;
if the difference between the second probability parameter and the first probability parameter is greater than the predetermined threshold, the face image to be identified is determined to be female;
if the absolute value of the difference between the first probability parameter and the second probability parameter is smaller than the predetermined threshold, the second deep-learning gender identification model is used to obtain the third probability parameter that the face image to be identified is male and the fourth probability parameter that it is female, the third probability parameter and the fourth probability parameter summing to 1;
the gender of the face image to be identified is then determined from the first, second, third, and fourth probability parameters.
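The two-stage decision rule above can be sketched as plain Python. This is an illustrative sketch, not the patent's code: the 0.5 cut-off applied to the second model's output and all names are assumptions, since the patent only says the final gender is determined from the four probability parameters.

```python
def classify_gender(p_male_1, threshold, second_model=None):
    """Two-stage gender decision. p_male_1 is the first model's male
    probability (its female probability is 1 - p_male_1). If the two
    probabilities are closer than `threshold`, defer to `second_model`,
    a callable returning the second (retrained) model's male probability."""
    p_female_1 = 1.0 - p_male_1
    if p_male_1 - p_female_1 > threshold:
        return "male"
    if p_female_1 - p_male_1 > threshold:
        return "female"
    # ambiguous case: ask the second-stage model trained on hard samples
    p_male_2 = second_model() if second_model else p_male_1
    return "male" if p_male_2 >= 0.5 else "female"
```

The design intent is that the second model, trained only on samples the first model found ambiguous, specializes in exactly the borderline cases.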
Optionally, the convolutional-neural-network-based facial recognition module specifically includes:
an image input submodule, configured to input the HOG features and the LBP features of the face image to be processed;
a convolutional-neural-network-based face recognition model construction submodule, configured to train a face recognition model based on a convolutional neural network from labeled facial samples;
a face recognition submodule, configured to achieve, according to the HOG features and the LBP features and using the convolutional-neural-network-based face recognition model, high-accuracy recognition of the face image under multiple angles.
Optionally, the convolutional-neural-network-based face recognition model construction submodule is specifically configured to:
build, on a server, a deep convolutional neural network for face recognition, where the network has 9 layers; the number of input-layer nodes equals the pixel size of the facial samples, and the remaining layers are configured as follows: layers 1, 3, 5, and 7 are convolutional layers C1, C2, C3, and C4, consisting of 4, 8, 8, and 12 feature maps of size 6 × 6 respectively, each neuron being connected to a 6 × 6 neighborhood of its input layer; layers 2, 4, and 6 are down-sampling layers S1, S2, and S3, where each neuron in the feature maps of layers 2, 4, and 6 is connected to a 4 × 4 neighborhood of the corresponding feature map in layers 1, 3, and 5; layer 8 is a hidden layer, in which the feature values of the 12 feature maps of C4 are arranged into a column vector to form the feature vector, and final classification is performed on this one-dimensional feature; layer 9 is the output layer, whose number of neurons is determined by the number of facial identities to be distinguished and represents how many recognition results are possible;
input the collected face images and the corresponding facial identities into the configured deep convolutional neural network, and obtain the output Op by forward propagation layer by layer;
compute the difference between the output Op and the corresponding ideal output Yp, and adjust the weight matrices by a method that minimizes the error, until a reasonable convolutional-neural-network-based face recognition model is obtained.
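The alternation of convolutional (C) and down-sampling (S) layers described above can be illustrated with a minimal NumPy sketch of the two layer types. This is illustrative only: the patent's C layers use learned 6 × 6 kernels and its S layers 4 × 4 neighborhoods, while this sketch uses toy sizes; average pooling is an assumption, since the patent does not specify the down-sampling operation.

```python
import numpy as np

def conv2d_valid(img, kernel):
    """'Valid' 2-D convolution (strictly, cross-correlation, as in most
    CNN implementations): each output neuron sees a k x k neighbourhood
    of the input, as the C layers do."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def downsample(fmap, k=2):
    """Average-pooling down-sampling layer with a k x k window,
    as the S layers do (pooling operation assumed)."""
    h, w = fmap.shape
    fmap = fmap[:h - h % k, :w - w % k]
    return fmap.reshape(h // k, k, w // k, k).mean(axis=(1, 3))
```

Stacking conv → pool pairs, then flattening the last feature maps into a column vector for classification, mirrors the C1–S3/C4 → hidden-layer → output-layer structure described above.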
Optionally, the binocular-vision-based facial recognition module specifically includes:
a color image construction submodule, configured to construct, according to the disparity value of each pixel in the disparity map of the face image, a color image of the same size as the disparity map, where the three primary color values of each pixel of the color image are related to the disparity value of the corresponding pixel in the disparity map of the face image;
a three-dimensional distance calculation submodule, configured to divide the color image into several candidate regions and, combining the disparity values of the pixels in the disparity map corresponding to each candidate region, determine the three-dimensional spatial information of each candidate region;
a facial organ selection submodule, configured to determine, according to the three-dimensional spatial information of each candidate region and preset three-dimensional spatial information thresholds of the facial organs, whether each candidate region is a facial organ region;
a face recognition submodule, configured to recognize the identity information of the face image according to the three-dimensional spatial information of the facial organs of the face image.
Optionally, the three-dimensional distance calculation submodule specifically includes:
an image segmentation unit, configured to divide the color image into several regions according to the different colors of its different regions;
a coordinate calculation unit, configured to calculate the three-dimensional spatial coordinates of each pixel according to the disparity value of each pixel in the disparity map of the face image;
a region calculation unit, configured to determine the size and position of each candidate region according to the three-dimensional spatial coordinates of the pixels that the candidate region contains.
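The coordinate calculation unit's conversion from disparity to 3-D coordinates can be sketched with the standard pinhole stereo model. The patent does not state these formulas; the function name, parameters, and units are assumptions.

```python
def pixel_to_3d(u, v, disparity, focal_px, baseline_m, cx, cy):
    """Back-project a pixel (u, v) with known disparity (pixels) into
    camera-frame 3-D coordinates using the pinhole stereo model:
        Z = f * B / d,  X = (u - cx) * Z / f,  Y = (v - cy) * Z / f,
    where (cx, cy) is the principal point, f the focal length in
    pixels, and B the baseline in metres."""
    if disparity <= 0:
        raise ValueError("disparity must be positive")
    z = focal_px * baseline_m / disparity
    x = (u - cx) * z / focal_px
    y = (v - cy) * z / focal_px
    return x, y, z
```

Aggregating these per-pixel coordinates over a candidate region gives its size and position in space, which is what the region calculation unit consumes.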
The beneficial effects of the present invention are as follows:
The facial recognition system provided by the embodiment of the present invention extracts the histogram of oriented gradients (HOG) features, the local binary pattern (LBP) features, and the disparity features of the facial pixels of the face image to be identified; it then determines the gender of the face image using a deep-learning method, combines the identified gender with a high-accuracy convolutional-neural-network method to determine the facial identity, and finally uses binocular vision technology, based on the disparity features of the face image, to determine its facial micro-expressions. Recognition thus combines the gender information, identity information, and expression information of the face image to be identified. By combining the deep-learning method, convolutional neural networks, and binocular vision technology, the system reduces the influence of factors such as illumination conditions and facial pose on the face recognition rate, reduces the amount of data computation in the face recognition process, achieves fast and highly accurate face recognition, lowers the cost of the recognition process, and improves recognition accuracy under different facial poses.
Brief description of the drawings
In order to illustrate the embodiments of the present invention or the technical schemes of the prior art more clearly, the accompanying drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the invention; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative labor.
Fig. 1 is a schematic structural diagram of a facial recognition system according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of a facial key point distribution according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of the position distribution of the facial organs according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of facial region division according to an embodiment of the present invention;
Fig. 5 is a schematic block diagram of the face image processing module 100 of an embodiment of the present invention;
Fig. 6 is a schematic block diagram of the facial feature extraction module 200 of an embodiment of the present invention;
Fig. 7 is a schematic block diagram of the big-data-based facial recognition module 300 of an embodiment of the present invention;
Fig. 8 is a schematic block diagram of the convolutional-neural-network-based facial recognition module 400 of an embodiment of the present invention;
Fig. 9 is a schematic block diagram of the binocular-vision-based facial recognition module 500 of an embodiment of the present invention.
Detailed description of the embodiments
To make the objects, technical schemes, and advantages of the present invention clearer, the invention is further described in detail below with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the invention, not all of them. Based on the embodiments of the invention, all other embodiments obtained by those of ordinary skill in the art without creative labor fall within the protection scope of the invention.
In the description of the invention, it should be understood that terms indicating orientation or positional relationships, such as "center", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", and "outer", are based on the orientations or positional relationships shown in the drawings; they are used only to facilitate and simplify the description of the invention, and do not indicate or imply that the device or element referred to must have a specific orientation or be constructed and operated in a specific orientation, and therefore should not be construed as limiting the invention.
The terms "first", "second", "third", "fourth", "fifth", "sixth", "seventh", "eighth", and "ninth" are used for descriptive purposes only and should not be understood as indicating or implying relative importance or implicitly indicating the number of the technical features concerned. Thus, a feature defined by one of these terms may explicitly or implicitly include one or more such features. In the description of the invention, unless otherwise stated, "multiple" means two or more.
The facial recognition system of the embodiment of the present invention can be applied to various intelligent terminals, such as smartphones, smart cameras, intelligent monitoring devices, and intelligent access control; it is particularly suitable for terminals that use facial recognition technology, for example security devices, monitoring devices, access control devices, and smartphones equipped with facial recognition technology and a binocular vision camera.
As shown in Fig. 1, a facial recognition system of an embodiment of the present invention includes a face image processing module 100, a facial feature extraction module 200, a big-data-based facial recognition module 300, a convolutional-neural-network-based facial recognition module 400, and a binocular-vision-based facial recognition module 500. The face image processing module 100 is configured to determine the facial region in an image to be processed based on a facial feature point location algorithm, crop the face image, and perform noise reduction, light compensation, highlight processing, and normalization on the cropped face image to be identified. The facial feature extraction module 200 is configured to extract, based on the key points of the face image, the facial features of the face image to be identified, where the extracted facial features include histogram of oriented gradients (HOG) features, local binary pattern (LBP) features, and disparity features of the facial pixels. The big-data-based facial recognition module 300 is configured to identify the gender of the face image to be identified by a deep-learning method according to the extracted HOG features and LBP features. The convolutional-neural-network-based facial recognition module 400 is configured to achieve, according to the extracted HOG features and LBP features and using a trained convolutional neural network model, high-accuracy facial identification of the face image to be identified under multiple angles. The binocular-vision-based facial recognition module 500 is configured to obtain, according to the disparity features of the facial pixels, a dedicated facial expression detection image of the face image to be identified, and to recognize the facial micro-expressions of the face image to be identified based on that image.
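The HOG features referred to throughout can be illustrated by the per-cell orientation histogram at their core. This is a simplified sketch: real HOG adds block normalization and soft bin interpolation, which are omitted here, and all names are assumptions.

```python
import numpy as np

def hog_cell_histogram(cell, n_bins=9):
    """Unsigned-gradient orientation histogram for one HOG cell:
    central differences give the gradients, and each pixel casts a
    magnitude-weighted vote into one of n_bins bins over [0, 180)
    degrees (no interpolation between bins)."""
    gx = np.zeros_like(cell, dtype=float)
    gy = np.zeros_like(cell, dtype=float)
    gx[:, 1:-1] = cell[:, 2:] - cell[:, :-2]
    gy[1:-1, :] = cell[2:, :] - cell[:-2, :]
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0
    hist = np.zeros(n_bins)
    bin_idx = np.minimum((ang / (180.0 / n_bins)).astype(int), n_bins - 1)
    np.add.at(hist, bin_idx.ravel(), mag.ravel())
    return hist
```

Concatenating such histograms over a grid of cells yields the HOG descriptor that the recognition modules consume alongside the LBP features.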
Specifically, as shown in Fig. 5, the face image processing module 100 specifically includes: a multi-view facial region detection submodule 101, configured to convert the input color image to grayscale, perform histogram equalization, carry out face detection using frontal, left-profile, and right-profile face detectors respectively, remove face detection results whose area is smaller than a predetermined value, and obtain multi-view face images to be processed; a face location and normalization submodule 102, configured to perform feature point location in the obtained multi-view face images to be processed based on a mixture-of-trees feature point model built on HOG (histogram of oriented gradients) features, accurately determine the facial region from the facial contour feature points after the feature points are located, and complete the normalization of the face image to be processed by cropping and scaling the facial region image; and a face image post-processing submodule 103, configured to perform noise reduction, light compensation, and highlight processing on the cropped face image to be processed, obtaining the face image to be identified.
Further, the multi-view facial region detection submodule 101 first converts the input color image to grayscale, performs histogram equalization, carries out face detection using frontal, left-profile, and right-profile face detectors respectively, removes face detection results whose area is smaller than the predetermined value, and obtains the multi-view face images to be processed. That is, the multi-view facial region detection submodule 101 pre-processes the input color image to obtain a grayscale image of the image to be processed, and then performs face detection on the image to be processed using the face detectors.
The process of performing face detection on the image to be processed using a face detector may be as follows. First, determine the facial contour key points, eyebrow key points, eye key points, nose key points, and mouth key points contained in the image to be processed, where the facial contour key points, eyebrow key points, eye key points, nose key points, and mouth key points respectively represent the position of the facial contour, the positions of the eyebrows, the positions of the eyes, the position of the nose, and the position of the mouth in the image to be processed.
For example, Fig. 2 shows a schematic diagram of the facial key point distribution used by a face detector of an embodiment of the present invention. As shown in Fig. 2, 83 facial key points are determined in total: 19 facial contour key points representing the position and size of the facial contour, 16 eyebrow key points representing the position and size of the eyebrows, 18 eye key points representing the position and size of the eyes, 12 nose key points representing the shape and size of the nose, and 18 mouth key points representing the position and size of the mouth. Of course, this is merely illustrative; the distribution and number of facial key points determined by the embodiment of the present invention are not confined to this.
After the facial key points contained in the image to be processed are determined, the face region in the image can be located based on the positions of the key points; detection results whose area is below the predetermined value are then removed. Further, the viewing angle of each face image can be determined from the ratios between facial key points, yielding the multi-view face images to be processed.
Further, drawing on extensive practical experience, face angles generally comprise five categories: frontal, turned left, turned right, tilted down and tilted up. Relative to a frontal face, a face turned left or right changes markedly in the horizontal direction while barely changing vertically; a face tilted down or up barely changes horizontally while changing markedly vertically. For example, Fig. 3 shows the distribution of facial features under a frontal pose. With reference to Fig. 3, the vertical distance from the eye line to the chin line and the vertical distance from the eye line to the highest point of the head each account for half of the face's vertical extent; that is, the line joining the two eyeballs is the face's centerline in the vertical direction. Likewise, the vertical distance from the nose-bottom line to the chin line and the vertical distance from the brow line to the nose-bottom line each account for one third of the face's vertical extent; that is, under a frontal angle, the ratio of the brow-line-to-nose-bottom-line vertical distance to the nose-bottom-line-to-chin-line vertical distance is 1:1. Further analysis finds that when the face angle changes from frontal to tilted up, the part of the face between the brow line and the nose-bottom line becomes farther from the camera lens than the part between the nose-bottom line and the chin line; correspondingly, the brow-to-nose vertical distance grows relative to the frontal state while the nose-to-chin vertical distance shrinks, i.e., in the tilted-up state the ratio of the brow-to-nose vertical distance to the nose-to-chin vertical distance exceeds 1:1. Conversely, when the face angle changes from frontal to tilted down, the part of the face between the brow line and the nose-bottom line becomes nearer to the camera lens than the part between the nose-bottom line and the chin line; correspondingly, the brow-to-nose vertical distance shrinks relative to the frontal state while the nose-to-chin vertical distance grows, i.e., in the tilted-down state the ratio is less than 1:1.
Based on the above analysis, the present invention can judge whether the face angle of the image to be processed is tilted down or tilted up by comparing the ratio of the brow-to-nose vertical distance to the nose-to-chin vertical distance against a first threshold Y. In view of measurement error and the variability of facial feature distributions across individuals, the first threshold Y is preferably a numeric range rather than a single value; for example, Y may be [0.8, 1.2].
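The pitch test described above can be sketched as follows (the function name is illustrative; the band defaults to the [0.8, 1.2] example for the first threshold Y):

```python
def classify_pitch(brow_to_nose, nose_to_chin, band=(0.8, 1.2)):
    """Classify head pitch from two vertical key-point distances,
    following the 1:1 frontal-face proportion described above."""
    ratio = brow_to_nose / nose_to_chin
    if ratio > band[1]:
        return "tilted up"    # brow-to-nose segment enlarged
    if ratio < band[0]:
        return "tilted down"  # brow-to-nose segment shrunk
    return "frontal"
```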
With reference to Fig. 3, on a frontal face the following five horizontal distances each account for one fifth of the overall horizontal extent of the face image: from the left ear root to the outer corner of the left eye; from the outer corner to the inner corner of the left eye; from the inner corner of the left eye to the inner corner of the right eye; from the inner corner to the outer corner of the right eye; and from the outer corner of the right eye to the right ear root. That is, the ratio between these five horizontal distances is 1:1:1:1:1. Research shows that when the face angle changes from frontal to turned left or turned right, each of the five horizontal distances changes to a different degree relative to the frontal pose. Therefore, by judging the change in the ratio between any two of the five horizontal distances, we can decide whether the face in the image is turned left or turned right.
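One simple instance of the pairwise-ratio yaw test is to compare the two ear-to-eye-corner spans; this is a sketch, and the tolerance value and the convention that turning toward a side foreshortens that side of the image are assumptions:

```python
def classify_yaw(d, tol=0.2):
    """Classify head yaw from the five horizontal distances
    d = [left ear->outer corner, left eye width, inner-corner gap,
         right eye width, outer corner->right ear], which are equal
    (1:1:1:1:1) on a frontal face."""
    ratio = d[0] / d[4]          # left span vs. right span
    if ratio < 1 - tol:
        return "turned left"     # left side foreshortened
    if ratio > 1 + tol:
        return "turned right"    # right side foreshortened
    return "frontal"
```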
After the multi-view face images to be processed are obtained, the face positioning and normalization sub-module 102 applies a mixed tree-structured feature-point model based on HOG (histogram of oriented gradients) features to locate feature points in the acquired multi-view face images. Once the feature points are located, the face region is determined precisely from the facial contour feature points (i.e., the facial contour key points), and normalization of the face image is completed by cropping and scaling the face region image. Finally, the face-image post-processing sub-module 103 applies post-processing such as denoising, fill lighting and highlighting to the cropped image, yielding the face image to be identified.
It should be noted that the denoising, fill-lighting, highlighting and other post-processing applied to the face image to be processed is not elaborated here; those skilled in the art may refer to prior-art denoising, fill-lighting and highlighting algorithms for the relevant processing, and the embodiment of the present invention places no limitation on this.
Further, with reference to Fig. 6, the facial feature extraction module 200 specifically includes: a HOG feature extraction sub-module 201, configured to apply histogram equalization to the normalized face image to be identified and extract its HOG features; an LBP feature extraction sub-module 202, configured to partition the normalized face image to be identified into blocks according to different partition strategies and extract the LBP features of the block images using mixed LBP (local binary pattern) operators; and a disparity extraction sub-module 203, configured to compute the disparity map of the face image to be identified from depth maps of the same scene captured by two depth cameras, and to obtain the disparity features of the facial pixels of the face image to be identified based on the disparity map.
For example, histogram equalization and HOG extraction can be applied directly to the normalized face image to be identified using conventional methods, which the embodiment of the present invention does not repeat. If, for instance, the normalized face image to be identified is 64x64 pixels, the extracted HOG feature has 1764 dimensions.
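The 1764-dimensional figure is consistent with the common HOG configuration of 8x8-pixel cells, 2x2-cell blocks, a one-cell block stride and 9 orientation bins (an assumption, since the patent does not state the parameters); a quick dimensionality check:

```python
def hog_dims(img_size, cell=8, block=2, bins=9):
    """Dimensionality of a HOG descriptor with square cells, square
    blocks and a block stride of one cell (Dalal-Triggs defaults)."""
    cells = img_size // cell       # cells per side
    blocks = cells - block + 1     # block positions per side
    return blocks * blocks * block * block * bins

print(hog_dims(64))  # prints 1764 = 7 * 7 * 2 * 2 * 9
```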
Specifically, the LBP feature extraction sub-module 202 may process the face image to be identified with the conventional LBP operator and compute the histogram H0 of the resulting LBP image over all of its pixels. It then partitions the normalized face image to be identified into blocks according to a coarse partition strategy, processes each block with the uniform LBP operator, computes the corresponding histograms and concatenates them in a fixed order into H1; the image is then partitioned more finely and the block images processed in the same way to obtain H2. Histograms H0, H1 and H2 are concatenated into histogram h1. Illustratively, the histograms above may be obtained with an LBP operator with 12 sampling points and a sampling radius of 2; the process is then repeated with an LBP operator with 16 sampling points and a sampling radius of 4, yielding histogram h2. Combining h1 and h2 gives the feature vector h; for example, the resulting LBP feature vector has 2872 dimensions.
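A minimal sketch of the block-histogram idea, using the basic 8-neighbour LBP at radius 1 for brevity (the patent's 12- and 16-point circular variants additionally require bilinear interpolation of the sampling circle; the function names are illustrative):

```python
import numpy as np

def lbp_codes(img):
    """Basic 8-neighbour LBP codes for the interior pixels of a
    grayscale image: each neighbour >= centre sets one bit."""
    c = img[1:-1, 1:-1]
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offs):
        nb = img[1 + dy:img.shape[0] - 1 + dy,
                 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit
    return code

def block_histogram(img, blocks=2):
    """Split the LBP code image into blocks x blocks tiles and
    concatenate their 256-bin histograms, as for H1/H2 above."""
    codes = lbp_codes(img)
    h, w = codes.shape
    hists = []
    for by in range(blocks):
        for bx in range(blocks):
            tile = codes[by * h // blocks:(by + 1) * h // blocks,
                         bx * w // blocks:(bx + 1) * w // blocks]
            hists.append(np.bincount(tile.ravel(), minlength=256))
    return np.concatenate(hists)
```

Concatenating the whole-image histogram with coarse- and fine-block histograms at two operator scales then yields the combined vector h.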
For example, with reference to Fig. 4, the LBP feature extraction sub-module 202 divides the face image to be identified into four regions: a first region, a second region, a third region and a fourth region, distributed clockwise with the first region located vertically above the fourth region. The first region covers the left half of the face above the nose-bottom line, the second region covers the right half of the face above the nose-bottom line, the third region covers the right half of the face below the nose-bottom line, and the fourth region covers the left half of the face below the nose-bottom line.
For face images to be identified under different face angles, the extraction windows used for the first, second, third and fourth regions during LBP extraction differ. For example, if the face angle of the image is frontal, the pixels in all four regions can use the same circular window for LBP extraction. If the face angle is tilted down, the pixels in the first and second regions use an elliptical window whose vertical axis is longer than its horizontal axis, while the pixels in the third and fourth regions use an elliptical window whose vertical axis is shorter than its horizontal axis. This avoids the problem that, under different face angles, the local scale ratios of the face image differ and the LBP features extracted at the same key point diverge significantly; it improves the validity of key-point-based LBP extraction under different face angles and thereby improves face recognition accuracy.
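The four-region partition described above can be sketched as plain array slicing (the function name is illustrative; in practice `nose_row` would be derived from the nose key points and the midline from the contour key points):

```python
import numpy as np

def face_quadrants(img, nose_row):
    """Split a face image into the four regions described above:
    (first, second, third, fourth) = (upper-left, upper-right,
    lower-right, lower-left), cut at the nose-bottom row and the
    vertical midline."""
    mid = img.shape[1] // 2
    first = img[:nose_row, :mid]
    second = img[:nose_row, mid:]
    third = img[nose_row:, mid:]
    fourth = img[nose_row:, :mid]
    return first, second, third, fourth
```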
Specifically, the disparity extraction sub-module 203 captures the face of the same person with two depth cameras, obtaining two different depth maps of that face. Taking one of the two depth maps as the reference image and the other as the matching image, it computes the disparity map of the face image to be identified using a disparity calculation method, and then obtains the disparity features of the facial pixels of the face image to be identified from that disparity map.
Further, with reference to Fig. 7, the big-data-based facial recognition module 300 specifically includes: a deep-learning gender recognition model construction sub-module 301, configured to train a deep-learning gender recognition model for facial gender recognition from facial samples with labeled gender; and a gender recognition sub-module 302, configured to perform gender recognition on the face image to be identified using the above deep-learning gender recognition model.
The big-data-based facial recognition module 300 first trains the deep-learning gender recognition model for facial gender recognition on facial samples with labeled gender. Once the deep-learning gender recognition model is obtained, the features of the face image to be identified are fed into it, and the gender of the face image to be identified can be determined from the model's output parameters.
Specifically, the deep-learning gender recognition model construction sub-module 301 is configured to: train a first deep-learning gender recognition model for facial gender recognition on all of the gender-labeled facial samples, where the output parameters of the first model comprise a first probability parameter characterizing that a gender-labeled facial sample is male and a second probability parameter characterizing that it is female, the first and second probability parameters summing to 1; after the first deep-learning gender recognition model is obtained, select from all the gender-labeled facial samples those for which the absolute difference between the first and second probability parameters is less than a predetermined threshold, forming the retraining facial samples; and train a second deep-learning gender recognition model for facial gender recognition on all of the retraining facial samples, where the output parameters of the second model comprise a third probability parameter characterizing that a retraining facial sample is male and a fourth probability parameter characterizing that it is female, the third and fourth probability parameters summing to 1.
Specifically, the gender recognition sub-module 302 is configured to: first use the first deep-learning gender recognition model to obtain the first probability parameter that the face image to be identified is male and the second probability parameter that it is female, the two summing to 1; then judge whether the absolute difference between the first and second probability parameters exceeds the predetermined threshold. If it does, then when the first probability parameter exceeds the second by more than the threshold the face image to be identified is determined to be male, and when the second exceeds the first by more than the threshold it is determined to be female. If the absolute difference between the first and second probability parameters is less than the predetermined threshold, the second deep-learning gender recognition model is used to obtain the third probability parameter that the face image to be identified is male and the fourth probability parameter that it is female, the third and fourth summing to 1; the gender of the face image to be identified is finally determined from the first, second, third and fourth probability parameters.
Specifically, one process for determining the gender of the face image to be identified from the first, second, third and fourth probability parameters is: if the sum of the first and third probability parameters exceeds the sum of the second and fourth probability parameters by more than the predetermined threshold, the face image to be identified is determined to be male; if the sum of the second and fourth probability parameters exceeds the sum of the first and third probability parameters by more than the predetermined threshold, the face image to be identified is determined to be female.
Alternatively, the gender of the face image to be identified may be determined from the four probability parameters as follows: if the sum of a first product, of the first probability parameter minus the second times a first weighting coefficient, and a second product, of the third probability parameter minus the fourth times a second weighting coefficient, exceeds the predetermined threshold, the face image to be identified is determined to be male; if the sum of the first product of the second probability parameter minus the first times the first weighting coefficient and the second product of the fourth probability parameter minus the third times the second weighting coefficient exceeds the predetermined threshold, the face image to be identified is determined to be female; where the first and second weighting coefficients sum to 1 and the second weighting coefficient is larger than the first.
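The two-stage decision described above can be sketched as follows. This is a sketch under stated assumptions: the function name, default threshold (one of the 0.3/0.4/0.5 examples) and weight values are illustrative, and the "undetermined" branch covers the case, not addressed explicitly in the text, where neither weighted margin exceeds the threshold:

```python
def predict_gender(p1, p2, model2, threshold=0.3, w1=0.3, w2=0.7):
    """Two-stage gender decision.  p1/p2: male/female probabilities
    from the first model (summing to 1); model2: callable returning
    (p3, p4) from the second, retrained model, consulted only for
    ambiguous faces; w2 > w1 because the second model is described
    as more accurate on gender-ambiguous samples."""
    if abs(p1 - p2) > threshold:          # confident first-stage call
        return "male" if p1 > p2 else "female"
    p3, p4 = model2()                     # second-stage probabilities
    score = (p1 - p2) * w1 + (p3 - p4) * w2
    if score > threshold:
        return "male"
    if -score > threshold:
        return "female"
    return "undetermined"
```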
It should be noted that the embodiment of the present invention sets the second weighting coefficient larger than the first because the inventor found, in the course of realizing the present invention, that the second deep-learning gender recognition model trained on the retraining facial samples is clearly more accurate than the first deep-learning gender recognition model at determining the gender of gender-ambiguous facial pictures; this setting therefore improves the accuracy of the fused, probability-based gender determination of the face image to be identified from the first, second, third and fourth probability parameters.
Secondly, it should be noted that the embodiment of the present invention does not specifically limit the size of the predetermined threshold; for example, the predetermined threshold may be 0.3, 0.4 or 0.5. Its specific value may be set manually, or may be set by the facial recognition system of the embodiment of the present invention.
In the facial recognition system of the embodiment of the present invention, the big-data-based facial recognition module 300 first trains the first deep-learning gender recognition model on all the gender-labeled facial samples, then extracts the samples for which the absolute difference between the first and second probability parameters in the model's output is less than the predetermined threshold, forming the retraining facial samples, and trains the second deep-learning gender recognition model only on the whole retraining facial sample set. This improves training on ambiguous facial samples, so that when the gender of the face image to be identified is ambiguous, the first and second deep-learning gender recognition models are used in combination to determine its gender. This raises the accuracy of gender determination for the face image to be identified, avoids the situation in which the gender cannot be determined from the first model alone because its first and second probability parameters are too close, realizes gender determination for face images at multiple angles and multiple levels of sharpness, and effectively reduces the influence of factors such as illumination conditions and facial pose on gender determination.
Further, with reference to Fig. 8, the convolutional-neural-network-based facial recognition module 400 specifically includes: an image input sub-module 401, configured to input the histogram of oriented gradients (HOG) features and local binary pattern (LBP) features of the face image to be processed; a CNN-based face recognition model construction sub-module 402, configured to train a CNN-based face recognition model for face recognition from facial samples with labeled identities; and a face recognition sub-module 403, configured to use the above CNN-based face recognition model, according to the HOG features and LBP features of the face image to be identified, to realize high-accuracy recognition of the face image to be identified under multiple angles.
Specifically, the CNN-based face recognition model construction sub-module 402 is configured to build, on the server, a convolutional neural network for face recognition. The network has 9 layers, and the number of input-layer nodes equals the pixel size of the facial samples; the remaining layer parameters are set as follows. Layers 1, 3, 5 and 7 are convolutional layers C1, C2, C3 and C4, consisting of 4, 8, 8 and 12 feature maps respectively; each neuron is connected to a 6x6 neighborhood of its input layer. Layers 2, 4 and 6 are down-sampling layers S1, S2 and S3; each neuron in the feature maps of layers 2, 4 and 6 is connected to a 4x4 neighborhood of the corresponding feature map in layers 1, 3 and 5. Layer 8 is a hidden layer that arranges the feature values of the 12 feature maps of C4 into a column vector, forming the feature vector on which the final classification is performed. Layer 9 is the output layer; the number of its neurons is determined by the number of facial identities to be distinguished, and represents how many recognition results are possible. The collected face images and their corresponding facial identities are input into the configured convolutional neural network, which produces output Op by forward propagation layer by layer; the difference between the output Op and the corresponding ideal output Yp is computed, and the weight matrices are adjusted by error minimization until a reasonable CNN-based face recognition model is obtained.
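The convolution/down-sampling pattern of one C/S layer pair can be sketched in plain numpy (a sketch, not the full network: the toy 25x25 input size, average pooling, and the assumption that the 4x4 down-sampling neighbourhoods do not overlap are all illustrative choices the patent leaves unstated):

```python
import numpy as np

def conv_valid(img, kernel):
    """'Valid' 2-D convolution: each output neuron connects to a
    kernel-sized neighbourhood of the input, as in layers C1-C4."""
    kh, kw = kernel.shape
    oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for y in range(oh):
        for x in range(ow):
            out[y, x] = np.sum(img[y:y + kh, x:x + kw] * kernel)
    return out

def downsample(fmap, n=4):
    """Non-overlapping n x n average pooling, as in layers S1-S3."""
    h, w = (fmap.shape[0] // n) * n, (fmap.shape[1] // n) * n
    return fmap[:h, :w].reshape(h // n, n, w // n, n).mean(axis=(1, 3))

fmap = conv_valid(np.ones((25, 25)), np.ones((6, 6)) / 36.0)  # C1: 20x20
pooled = downsample(fmap)                                      # S1: 5x5
```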
In the facial recognition system provided by the embodiment of the present invention, deep learning is first used to determine the gender of the face image to be identified; after the gender is determined, the convolutional-neural-network method is used, in combination with that gender, to determine the identity of the face image to be identified. Combining deep learning with convolutional neural networks for identity determination from face images can effectively improve the accuracy of identity determination for the face image to be identified.
Further, with reference to Fig. 9, the binocular-vision-based facial recognition module 500 specifically includes: a color image construction sub-module 501, configured to build, from the disparity value of each pixel in the disparity map of the face image to be identified, a color image of the same size as the disparity map, where the three primary color values of each pixel of the color image are related to the disparity value of the corresponding pixel in the disparity map; a three-dimensional distance calculation sub-module 502, configured to divide the color image into several candidate regions and determine the three-dimensional spatial information of each candidate region with reference to the disparity values of the corresponding pixels in the disparity image; a facial organ selection sub-module 503, configured to determine whether each candidate region is a facial organ region according to its three-dimensional spatial information and preset three-dimensional spatial information thresholds of facial organs; and a face recognition sub-module 504, configured to recognize the facial micro-expression of the face image to be identified according to the three-dimensional spatial information of its facial organs.
The disparity map of the face image to be identified generated by the image processing engine can be further processed by an embedded microprocessor to generate a color image of the same size as the disparity map. Specifically, the embedded microprocessor creates a new color image of the same size as the disparity map, where each pixel of the disparity image corresponds one-to-one with a pixel of the color image; the embedded microprocessor then fills the corresponding pixels of the color image with color according to the disparity value of each pixel in the disparity map.
It can be understood that, based on the disparity map after the in-painting operation, and given the distance B between the two cameras of the binocular camera and the focal length f of the camera lens, the depth of each pixel in real three-dimensional space, i.e., its Z value, can be calculated by the formula Z = B*f/d, where d is the disparity value. The corresponding pixels of the color image can then be color-filled according to the depth information. For example, the RGB (three primary color) values of each pixel of the color image can be scaled according to the fluctuation range of the depth values of all pixels, so that each pixel's RGB value falls between 0 and 255.
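The depth formula and the 0-255 scaling above can be sketched as follows (B and f are illustrative values; the patent does not fix them, and a single grayscale channel stands in for the RGB fill):

```python
import numpy as np

def depth_map(disparity, B=0.12, f=700.0):
    """Per-pixel depth Z = B*f/d; B is the camera baseline and f the
    lens focal length, both assumed values here."""
    return B * f / disparity

def depth_to_gray(depth):
    """Scale depth values into 0-255 so they can be used to
    colour-fill the corresponding pixels of the new image."""
    lo, hi = depth.min(), depth.max()
    if hi == lo:
        return np.zeros_like(depth, dtype=np.uint8)
    return np.uint8(255 * (depth - lo) / (hi - lo))
```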
The three-dimensional distance calculation sub-module 502 specifically includes: an image segmentation unit, configured to divide the color image into several regions according to the different colors of its different areas; a coordinate calculation unit, configured to calculate the three-dimensional spatial coordinates of each pixel according to the disparity value of each pixel in the disparity map of the face image to be identified; and a region calculation unit, configured to determine the size and position of each candidate region from the three-dimensional spatial coordinates of the pixels it contains.
It should be noted that pixel-level segmentation can be applied to the color image, and the segmented image divided into several candidate regions. For each candidate region marked off, its three-dimensional spatial information can be determined from the disparity values of the pixels at the corresponding position in the disparity image; the three-dimensional spatial information includes information such as the length and position of each candidate region.
Specifically, areas of identical color can be grouped into one region, so that the nose, eyes, mouth, ears, hair and so on are each divided into different regions. Then, according to the disparity value of each pixel in the disparity map, the three-dimensional spatial coordinates of each pixel are calculated using the following formulas: Z = B*f/d, X = (W/2 - u)*B/d - B/2, Y = H' - (v - H/2)*B/d, where (X, Y, Z) are the required three-dimensional coordinate values in the world coordinate system, B is the distance between the two cameras of the binocular camera, f is the focal length of the camera lens, d is the disparity value, H' is the height of the dual camera above the ground, the disparity image size is (W, H), e.g. 1280*960, and (u, v) are the coordinates of a pixel in the image coordinate system, e.g. the pixel (100, 100). Since B, f, d, H', (W, H) and (u, v) are all known quantities, the three-dimensional coordinates of the pixels of each divided candidate region can be calculated from the above formulas. Once the three-dimensional coordinate values (X, Y, Z) of each pixel of each candidate region have been calculated, the length and size information of each region can be obtained directly from differences of the computed coordinate values, and the spatial position occupied by each region can be determined from the three-dimensional coordinates of all of its pixels.
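The three coordinate formulas above translate directly into code; the default values of B, f and the camera height H' are illustrative assumptions, while the 1280x960 image size comes from the example in the text:

```python
def pixel_to_world(u, v, d, B=0.12, f=700.0, Hc=1.5, W=1280, H=960):
    """World coordinates of one pixel from the formulas above.
    Hc stands for the camera height H'; B, f and Hc are assumed."""
    Z = B * f / d                      # depth
    X = (W / 2 - u) * B / d - B / 2    # lateral offset
    Y = Hc - (v - H / 2) * B / d       # height above ground
    return X, Y, Z
```

Differencing the (X, Y, Z) values of a region's pixels then gives its physical extent.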
The binocular-vision-based facial recognition module 500 provided by the embodiment of the present invention can obtain the disparity map of the face image to be identified, derive from the disparity map a color image of the face that characterizes the position and distance of each pixel, and then, based on the disparity features in the disparity map and the color image, determine the facial organ type of each candidate region according to its three-dimensional spatial information and the preset three-dimensional spatial information thresholds of facial organs. Finally, based on the positions of and disparity information between the different facial organs, the facial micro-expression of the face image to be identified is determined. For example, from the position of the mouth corners and the shape of the mouth it can be judged whether the facial expression of the face image to be identified is happy or sad: slightly raised mouth corners with slightly parted lips indicate smiling, while drooping mouth corners and tightly knitted brows indicate sadness.
By using binocular vision technology, the binocular-vision-based facial recognition module 500 provided by the embodiment of the present invention combines the disparity map with a color image of the face image to be identified that characterizes each pixel's position and distance, realizing the determination of the positions of the different facial organs; it then combines the positions of and disparity information between the different facial organs to realize the recognition of the facial micro-expression of the face image to be identified. The facial micro-expression information can serve as facial recognition information for face recognition in intelligent access control and intelligent encryption, improving the safety and reliability of face-recognition-based smart homes and further improving face recognition accuracy.
In the facial recognition system provided by the embodiment of the present invention, the histogram of oriented gradients (HOG) features, local binary pattern (LBP) features and facial pixel disparity features of the face image to be identified are extracted separately; the gender of the face image to be identified is then determined using a deep-learning method; next, combining the gender identified by deep learning, the facial identity of the face image to be identified is determined with high accuracy using a convolutional-neural-network-based method; finally, the facial micro-expression of the face image to be identified is determined from its disparity features using binocular vision technology, realizing face recognition that combines the gender information, identity information and expression information of the face image to be identified. Combining the deep-learning method, convolutional neural networks and binocular vision technology reduces the influence of factors such as illumination conditions and facial pose on facial discrimination, reduces the data computation load of the face recognition process, realizes fast, high-accuracy face recognition, reduces the cost of the face recognition process, and improves the accuracy of face recognition under different facial poses.
The present invention is described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device, such that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or another programmable data processing device to work in a specific manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data processing device, such that a series of operational steps is performed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although preferred embodiments of the present invention have been described, those skilled in the art, once aware of the basic inventive concept, may make other changes and modifications to these embodiments. The appended claims are therefore intended to be construed as covering the preferred embodiments and all changes and modifications that fall within the scope of the present invention.
Obviously, those skilled in the art can make various changes and modifications to the embodiments of the present invention without departing from the spirit and scope of the embodiments of the present invention. If these modifications and variations of the embodiments of the present invention fall within the scope of the claims of the present invention and their technical equivalents, the present invention is also intended to encompass them.

Claims (10)

1. A facial recognition system, characterized in that the facial recognition system comprises:
a facial image processing module, configured to determine the facial region in an image to be processed based on a facial feature point location algorithm and then crop the facial image, and to perform noise reduction, light compensation, highlighting, and normalization on the cropped facial image to be identified;
a facial feature extraction module, configured to extract the facial features of the facial image to be identified based on the key points of the facial image, wherein the facial features include histogram of oriented gradients (HOG) features, local binary pattern (LBP) features, and disparity features of the facial pixels;
a big-data-based facial recognition module, configured to identify, based on big data, the gender of the facial image to be identified according to the HOG features and the LBP features;
a convolutional-neural-network-based facial recognition module, configured to achieve, according to the HOG features and the LBP features, high-accuracy facial identification of the facial image to be identified under multiple angles using a trained convolutional neural network model;
a binocular-vision-based facial recognition module, configured to obtain, according to the disparity features of the facial pixels, an image dedicated to facial expression detection for the facial image to be identified, and to recognize the facial micro-expressions of the facial image to be identified based on that dedicated facial expression detection image.
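A minimal structural sketch of how the five claimed modules could be chained is shown below; every function body is a placeholder stub (the names, return values, and feature shapes are illustrative assumptions), and only the data flow between modules reflects the claim.

```python
# Structural sketch of the five-module pipeline in claim 1. All function
# bodies are stubs; only the order and wiring of the calls follows the claim.

def process_face_image(raw_image):
    """Facial image processing module: locate, crop, denoise, normalize (stubbed)."""
    return {"normalized": raw_image}

def extract_features(face):
    """Facial feature extraction module: HOG, LBP, per-pixel disparity (stubbed)."""
    return {"hog": [0.1, 0.2], "lbp": [3, 7], "disparity": [[1, 2], [3, 4]]}

def identify_gender(features):
    """Big-data-based module: gender from HOG + LBP features (stubbed)."""
    return "male"

def identify_face(features, gender):
    """CNN-based module: high-accuracy identity under multiple angles (stubbed)."""
    return "person_42"

def identify_micro_expression(features):
    """Binocular-vision module: micro-expression from disparity features (stubbed)."""
    return "neutral"

def recognize(raw_image):
    face = process_face_image(raw_image)
    feats = extract_features(face)
    gender = identify_gender(feats)
    identity = identify_face(feats, gender)
    expression = identify_micro_expression(feats)
    return {"gender": gender, "identity": identity, "expression": expression}

result = recognize("raw pixels")
```

The final result combines the three kinds of recognition information (gender, identity, expression) that the claimed system fuses.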
2. The facial recognition system according to claim 1, characterized in that the facial image processing module specifically comprises:
a multi-view facial region detection submodule, configured to convert the input color image to grayscale, perform histogram equalization, perform face detection using frontal, left-profile, and right-profile face detectors respectively, and remove face detection results whose area is less than a predetermined value, obtaining multi-view facial images to be processed;
a face location and normalization submodule, configured to perform feature point location on the obtained multi-view facial images to be processed based on a mixture-of-trees feature point model built on HOG (histogram of oriented gradients) features; after the feature points are located, the facial region is accurately determined according to the facial contour feature points, and the normalization of the facial image to be processed is completed by cropping and scaling the facial region image;
a facial image post-processing submodule, configured to perform noise reduction, light compensation, and highlighting on the cropped facial image to be processed, obtaining the facial image to be identified.
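The histogram equalization step used by the detection submodule can be sketched in pure Python; a real system would call a library routine (e.g. OpenCV's `equalizeHist`), and the tiny 2×2 "image" below is only for illustration.

```python
# Minimal sketch of grayscale histogram equalization: map each pixel through
# the normalized cumulative distribution function (CDF) of the image histogram,
# stretching a low-contrast image across the full intensity range.

def equalize_histogram(image, levels=256):
    """Histogram-equalize a 2D grayscale image given as nested lists."""
    flat = [p for row in image for p in row]
    n = len(flat)
    # Build the histogram and its cumulative distribution.
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    cdf, running = [], 0
    for h in hist:
        running += h
        cdf.append(running)
    cdf_min = next(c for c in cdf if c > 0)
    if n == cdf_min:  # constant image: nothing to stretch
        return [row[:] for row in image]
    def remap(p):
        return round((cdf[p] - cdf_min) / (n - cdf_min) * (levels - 1))
    return [[remap(p) for p in row] for row in image]

# A low-contrast 2x2 patch is stretched across the full 0..255 range.
out = equalize_histogram([[100, 101], [101, 102]])
```

Equalizing before detection makes the subsequent frontal/profile detectors less sensitive to the illumination conditions mentioned in the description.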
3. The facial recognition system according to claim 1, characterized in that the facial feature extraction module specifically comprises:
an HOG feature extraction submodule, configured to perform histogram equalization on the normalized facial image to be identified and extract its HOG features;
an LBP feature extraction submodule, configured to partition the normalized facial image to be identified into image blocks according to different partition strategies, and to extract the LBP features of each block image using a mixed LBP (local binary pattern) operator;
a disparity extraction submodule, configured to calculate the disparity map of the facial image to be identified according to the depth maps of the same scene captured respectively by two depth cameras, and to obtain the disparity features of the facial pixels of the facial image to be identified based on the disparity map.
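The core of the LBP features named in this claim is the basic 3×3 local binary pattern operator, sketched below. The claim's "mixed LBP operator" and block-partition strategies are not specified in the text, so this shows only the standard operator, with an assumed clockwise bit ordering.

```python
# Minimal sketch of the basic 3x3 LBP operator: each of the 8 neighbors is
# compared against the center pixel; neighbors >= center contribute a 1-bit.
# The clockwise-from-top-left bit order below is a common convention, assumed
# here rather than taken from the patent.

def lbp_code(patch):
    """LBP code (0..255) of the center pixel of a 3x3 patch given as nested lists."""
    center = patch[1][1]
    # Clockwise order: top-left, top, top-right, right,
    #                  bottom-right, bottom, bottom-left, left.
    neighbors = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
                 patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    code = 0
    for value in neighbors:
        code = (code << 1) | (1 if value >= center else 0)
    return code

code = lbp_code([[5, 4, 3],
                 [6, 7, 2],
                 [9, 8, 1]])
```

Per-block histograms of such codes, concatenated across the partition, form the LBP feature vector the later modules consume.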
4. The facial recognition system according to claim 1, characterized in that the big-data-based facial recognition module specifically comprises:
a deep-learning-based gender identification model construction submodule, configured to train a deep-learning-based gender identification model for facial gender identification from facial samples with labeled gender;
a gender identification submodule, configured to perform gender identification on the facial image to be identified using the deep-learning-based gender identification model.
5. The facial recognition system according to claim 4, characterized in that the deep-learning-based gender identification model construction submodule is specifically configured to:
train a first deep-learning gender identification model for facial gender identification using all of the facial samples with labeled gender, wherein the output parameters of the first deep-learning gender identification model include a first probability parameter characterizing that a gender-labeled facial sample is male and a second probability parameter characterizing that the gender-labeled facial sample is female, the sum of the first probability parameter and the second probability parameter being 1;
obtain, from all of the facial samples with labeled gender, retraining facial samples for which the absolute value of the difference between the first probability parameter and the second probability parameter is less than a predetermined threshold;
train a second deep-learning gender identification model for facial gender identification using the retraining facial samples, wherein the output parameters of the second deep-learning gender identification model include a third probability parameter characterizing that a retraining facial sample is male and a fourth probability parameter characterizing that the retraining facial sample is female, the sum of the third probability parameter and the fourth probability parameter being 1.
6. The facial recognition system according to claim 5, characterized in that the gender identification submodule is specifically configured to:
use the first deep-learning gender identification model to obtain a first probability parameter that the facial image to be identified is male and a second probability parameter that the facial image to be identified is female, the sum of the first probability parameter and the second probability parameter being 1;
if the difference between the first probability parameter and the second probability parameter is greater than a predetermined threshold, determine that the facial image to be identified is male;
if the difference between the second probability parameter and the first probability parameter is greater than the predetermined threshold, determine that the facial image to be identified is female;
if the absolute value of the difference between the first probability parameter and the second probability parameter is less than the predetermined threshold, use the second deep-learning gender identification model to obtain a third probability parameter that the facial image to be identified is male and a fourth probability parameter that the facial image to be identified is female, the sum of the third probability parameter and the fourth probability parameter being 1;
determine the gender of the facial image to be identified according to the first probability parameter, the second probability parameter, the third probability parameter, and the fourth probability parameter.
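The two-stage decision of claims 5 and 6 can be sketched as plain control flow. The threshold value and the tie-breaking rule for the second model's outputs below are illustrative assumptions: the claims say only that the final gender is determined "according to" all four probability parameters, without fixing the combining rule.

```python
# Sketch of the cascaded gender decision in claims 5-6: the first model decides
# clearly separated cases; ambiguous cases (|p1 - p2| within a threshold) fall
# through to a second model trained specifically on such samples. The models
# themselves are stubbed as callables returning a (male, female) probability pair.

def decide_gender(p1, p2, second_model, threshold=0.2):
    """p1/p2: first-model male/female probabilities (p1 + p2 == 1)."""
    if p1 - p2 > threshold:
        return "male"
    if p2 - p1 > threshold:
        return "female"
    # Ambiguous under the first model: consult the second model.
    p3, p4 = second_model()
    return "male" if p3 >= p4 else "female"

# Stub second models returning (third, fourth) probability parameters.
confident_female = decide_gender(0.1, 0.9, lambda: (0.5, 0.5))
ambiguous = decide_gender(0.55, 0.45, lambda: (0.3, 0.7))
```

The design choice here is a classifier cascade: the second model only ever sees the hard cases, which is why claim 5 trains it on exactly the samples the first model found ambiguous.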
7. The facial recognition system according to claim 1, characterized in that the convolutional-neural-network-based facial recognition module specifically comprises:
an image input submodule, configured to input the HOG features and the LBP features of the facial image to be processed;
a convolutional-neural-network-based facial recognition model construction submodule, configured to train a convolutional-neural-network-based facial recognition model for facial recognition from labeled facial samples;
a facial recognition submodule, configured to achieve high-accuracy facial recognition of the facial image under multiple angles according to the HOG features and the LBP features, using the convolutional-neural-network-based facial recognition model.
8. The facial recognition system according to claim 7, characterized in that the convolutional-neural-network-based facial recognition model construction submodule is specifically configured to:
build, in the server, a deep convolutional neural network for facial recognition, wherein the deep convolutional neural network has 9 layers; the number of input nodes is the pixel size of a facial sample, and the parameters of the remaining layers are set as follows: the 1st, 3rd, 5th, and 7th layers are convolutional layers C1, C2, C3, and C4, composed of 4, 8, 8, and 12 feature maps of 6 × 6 respectively, with each neuron connected to a 6 × 6 neighborhood of its input layer; the 2nd, 4th, and 6th layers are down-sampling layers S1, S2, and S3, with each neuron in the feature maps of the 2nd, 4th, and 6th layers connected to a 4 × 4 neighborhood of the corresponding feature map in the 1st, 3rd, and 5th layers; the 8th layer is a hidden layer, in which the feature values of the 12 feature maps of C4 are arranged into a column vector to form a feature vector, on which the final classification and recognition of the one-dimensional features is performed; the 9th layer is the output layer, whose number of neurons is determined by the number of facial identities to be distinguished and represents how many possible recognition results there are;
input the collected facial images and their corresponding facial identities into the configured deep convolutional neural network, and obtain the output Op by forward propagation layer by layer;
calculate the difference between the output Op and the corresponding ideal output Yp, and adjust the weight matrices by the method of error minimization until a reasonable convolutional-neural-network-based facial recognition model is obtained.
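The nine-layer network described above can be written out as a declarative configuration. This is a structural sketch only: the layer names, map counts, and kernel sizes follow the claim as reconstructed, and nothing here builds or trains an actual network.

```python
# The 9-layer architecture of claim 8 as a declarative layer list: alternating
# convolution (C1..C4) and down-sampling (S1..S3) layers, a hidden layer that
# flattens C4's 12 feature maps into a column vector, and the output layer.
# Kernel/window sizes and map counts follow the claim; no weights are involved.

layers = [
    {"type": "conv",    "name": "C1", "maps": 4,  "kernel": (6, 6)},   # layer 1
    {"type": "pool",    "name": "S1", "window": (4, 4)},               # layer 2
    {"type": "conv",    "name": "C2", "maps": 8,  "kernel": (6, 6)},   # layer 3
    {"type": "pool",    "name": "S2", "window": (4, 4)},               # layer 4
    {"type": "conv",    "name": "C3", "maps": 8,  "kernel": (6, 6)},   # layer 5
    {"type": "pool",    "name": "S3", "window": (4, 4)},               # layer 6
    {"type": "conv",    "name": "C4", "maps": 12, "kernel": (6, 6)},   # layer 7
    {"type": "flatten", "name": "hidden"},                             # layer 8: column vector of C4 features
    {"type": "output",  "units": "number of facial identities"},       # layer 9
]
```

The conv/pool alternation is the classic LeNet-style design: convolutions extract local patterns while down-sampling layers add tolerance to the pose and angle variation the claim targets.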
9. The facial recognition system according to claim 3, characterized in that the binocular-vision-based facial recognition module specifically comprises:
a color image construction submodule, configured to construct, according to the disparity value of each pixel in the disparity map of the facial image, a color image of the same size as the disparity map of the facial image, wherein the three primary color values of each pixel of the color image are related to the disparity value of the corresponding pixel in the disparity map of the facial image;
a three-dimensional distance calculation submodule, configured to divide the color image into several candidate regions and, combining the disparity values of the pixels in the disparity map corresponding to each candidate region, determine the three-dimensional spatial information of each candidate region;
a facial organ selection submodule, configured to determine whether each candidate region is a facial organ region according to the three-dimensional spatial information of each candidate region and preset three-dimensional spatial information thresholds of facial organs;
a facial recognition submodule, configured to recognize the identity information of the facial image according to the three-dimensional spatial information of the facial organs of the facial image.
10. The facial recognition system according to claim 9, characterized in that the three-dimensional distance calculation submodule specifically comprises:
an image segmentation unit, configured to divide the color image into several regions according to the different colors of its different regions;
a coordinate calculation unit, configured to calculate the three-dimensional spatial coordinates of each pixel according to the disparity value of each pixel in the disparity map of the facial image;
a region calculation unit, configured to determine the size and position of each candidate region according to the three-dimensional spatial coordinates of the pixels it contains.
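The coordinate calculation unit's job, recovering each pixel's three-dimensional coordinates from its disparity, follows the standard pinhole-stereo equations. The camera intrinsics below (focal length, principal point, baseline) are illustrative placeholders, not values from the patent.

```python
# Sketch of disparity -> 3D coordinates with the pinhole-stereo model:
#   Z = f * B / d          (depth along the optical axis)
#   X = (u - cx) * Z / f   (horizontal offset from the principal point)
#   Y = (v - cy) * Z / f   (vertical offset from the principal point)
# f, cx, cy, and B below are example values for a 640x480 sensor.

def pixel_to_3d(u, v, disparity, focal_px=700.0, cx=320.0, cy=240.0, baseline_m=0.06):
    """Return (X, Y, Z) in meters for image pixel (u, v) with the given disparity."""
    z = focal_px * baseline_m / disparity
    x = (u - cx) * z / focal_px
    y = (v - cy) * z / focal_px
    return (x, y, z)

# The principal point at disparity 42 px lies on the optical axis, ~1 m away.
point = pixel_to_3d(u=320.0, v=240.0, disparity=42.0)
```

Aggregating these per-pixel coordinates over a segmented region gives exactly the region size and position that the region calculation unit needs.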
CN201710667710.3A 2017-08-07 2017-08-07 Face recognition system Active CN107403168B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710667710.3A CN107403168B (en) 2017-08-07 2017-08-07 Face recognition system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710667710.3A CN107403168B (en) 2017-08-07 2017-08-07 Face recognition system

Publications (2)

Publication Number Publication Date
CN107403168A true CN107403168A (en) 2017-11-28
CN107403168B CN107403168B (en) 2020-08-11

Family

ID=60402054

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710667710.3A Active CN107403168B (en) 2017-08-07 2017-08-07 Face recognition system

Country Status (1)

Country Link
CN (1) CN107403168B (en)

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107978185A (en) * 2017-12-07 2018-05-01 何旭连 A children's learning machine with good teaching effectiveness
CN108062787A (en) * 2017-12-13 2018-05-22 北京小米移动软件有限公司 Three-dimensional face modeling method and device
CN108363990A (en) * 2018-03-14 2018-08-03 广州影子控股股份有限公司 A pig face recognition system and method
CN108921942A (en) * 2018-07-11 2018-11-30 北京聚力维度科技有限公司 Method and device for converting 2D images to 3D
CN109117753A (en) * 2018-07-24 2019-01-01 广州虎牙信息科技有限公司 Part identification method, device, terminal and storage medium
CN109492611A (en) * 2018-11-27 2019-03-19 电卫士智能电器(北京)有限公司 Electrical Safety method for early warning and device
CN109600546A (en) * 2018-11-26 2019-04-09 维沃移动通信(杭州)有限公司 An image recognition method and mobile terminal
CN110223338A (en) * 2019-06-11 2019-09-10 中科创达(重庆)汽车科技有限公司 Image-extraction-based depth information calculation method, device, and electronic equipment
WO2019179295A1 (en) * 2018-03-22 2019-09-26 腾讯科技(深圳)有限公司 Facial recognition method and device
CN110827394A (en) * 2018-08-10 2020-02-21 宏达国际电子股份有限公司 Facial expression construction method and device and non-transitory computer readable recording medium
WO2020134238A1 (en) * 2018-12-29 2020-07-02 北京市商汤科技开发有限公司 Living body detection method and apparatus, and storage medium
CN112365586A (en) * 2020-11-25 2021-02-12 厦门瑞为信息技术有限公司 3D face modeling and stereo judging method and binocular 3D face modeling and stereo judging method of embedded platform
CN113077265A (en) * 2020-12-08 2021-07-06 泰州市朗嘉馨网络科技有限公司 Live client credit management system
CN113837976A (en) * 2021-09-17 2021-12-24 重庆邮电大学 Multi-focus image fusion method based on joint multi-domain
CN114973727A (en) * 2022-08-02 2022-08-30 成都工业职业技术学院 Intelligent driving method based on passenger characteristics
CN115240276A (en) * 2022-07-22 2022-10-25 京东方科技集团股份有限公司 Facial orientation recognition method and device
CN116386120A (en) * 2023-05-24 2023-07-04 杭州企智互联科技有限公司 A sensorless monitoring and management system
WO2023159350A1 (en) * 2022-02-22 2023-08-31 Liu Kin Wing Recognition system detecting facial features
CN118194265A (en) * 2024-05-13 2024-06-14 湖南三湘银行股份有限公司 NFC-based method for rapidly identifying and collecting identity information
CN119650069A (en) * 2025-02-13 2025-03-18 四川互慧软件有限公司 A method, device, equipment and medium for risk assessment of stroke patients

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104268539A (en) * 2014-10-17 2015-01-07 中国科学技术大学 High-performance human face recognition method and system
CN104915656A (en) * 2015-06-12 2015-09-16 东北大学 Quick human face recognition method based on binocular vision measurement technology
WO2015161816A1 (en) * 2014-04-25 2015-10-29 Tencent Technology (Shenzhen) Company Limited Three-dimensional facial recognition method and system
CN105069400A (en) * 2015-07-16 2015-11-18 北京工业大学 Face image gender recognition system based on stack type sparse self-coding

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015161816A1 (en) * 2014-04-25 2015-10-29 Tencent Technology (Shenzhen) Company Limited Three-dimensional facial recognition method and system
CN104268539A (en) * 2014-10-17 2015-01-07 中国科学技术大学 High-performance human face recognition method and system
CN104915656A (en) * 2015-06-12 2015-09-16 东北大学 Quick human face recognition method based on binocular vision measurement technology
CN105069400A (en) * 2015-07-16 2015-11-18 北京工业大学 Face image gender recognition system based on stack type sparse self-coding

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ZHANG, LING 等: "Fatigue detection with 3D facial features based on binocular stereo vision", 《INTEGRATED COMPUTER-AIDED ENGINEERING》 *
赵明珠 (ZHAO Mingzhu) et al.: "Measurement of facial expression deformation based on the 3D digital image correlation method", Experimental Mechanics *
龙海强 (LONG Haiqiang) et al.: "Research on face recognition methods based on deep convolutional network algorithms", Computer Simulation *

Cited By (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107978185B (en) * 2017-12-07 2019-12-03 南京白下高新技术产业园区投资发展有限责任公司 A children's learning machine with good teaching effectiveness
CN107978185A (en) * 2017-12-07 2018-05-01 何旭连 A kind of good children learning machine of teaching efficiency
CN108062787A (en) * 2017-12-13 2018-05-22 北京小米移动软件有限公司 Three-dimensional face modeling method and device
CN108062787B (en) * 2017-12-13 2022-02-11 北京小米移动软件有限公司 3D face modeling method and device
CN108363990A (en) * 2018-03-14 2018-08-03 广州影子控股股份有限公司 A pig face recognition system and method
US11138412B2 (en) 2018-03-22 2021-10-05 Tencent Technology (Shenzhen) Company Limited Facial recognition method and apparatus
WO2019179295A1 (en) * 2018-03-22 2019-09-26 腾讯科技(深圳)有限公司 Facial recognition method and device
CN108921942A (en) * 2018-07-11 2018-11-30 北京聚力维度科技有限公司 Method and device for converting 2D images to 3D
CN109117753B (en) * 2018-07-24 2021-04-20 广州虎牙信息科技有限公司 Part recognition method, device, terminal and storage medium
CN109117753A (en) * 2018-07-24 2019-01-01 广州虎牙信息科技有限公司 Part identification method, device, terminal and storage medium
CN110827394B (en) * 2018-08-10 2024-04-02 宏达国际电子股份有限公司 Facial expression construction method, device and non-transitory computer readable recording medium
CN110827394A (en) * 2018-08-10 2020-02-21 宏达国际电子股份有限公司 Facial expression construction method and device and non-transitory computer readable recording medium
CN109600546A (en) * 2018-11-26 2019-04-09 维沃移动通信(杭州)有限公司 An image recognition method and mobile terminal
CN109492611A (en) * 2018-11-27 2019-03-19 电卫士智能电器(北京)有限公司 Electrical Safety method for early warning and device
CN111444744A (en) * 2018-12-29 2020-07-24 北京市商汤科技开发有限公司 Living body detection method, living body detection device, and storage medium
WO2020134238A1 (en) * 2018-12-29 2020-07-02 北京市商汤科技开发有限公司 Living body detection method and apparatus, and storage medium
US11393256B2 (en) 2018-12-29 2022-07-19 Beijing Sensetime Technology Development Co., Ltd. Method and device for liveness detection, and storage medium
CN110223338A (en) * 2019-06-11 2019-09-10 中科创达(重庆)汽车科技有限公司 Image-extraction-based depth information calculation method, device, and electronic equipment
CN112365586B (en) * 2020-11-25 2023-07-18 厦门瑞为信息技术有限公司 3D face modeling and stereo judgment method and binocular 3D face modeling and stereo judgment method of embedded platform
CN112365586A (en) * 2020-11-25 2021-02-12 厦门瑞为信息技术有限公司 3D face modeling and stereo judging method and binocular 3D face modeling and stereo judging method of embedded platform
CN113077265A (en) * 2020-12-08 2021-07-06 泰州市朗嘉馨网络科技有限公司 Live client credit management system
CN113837976B (en) * 2021-09-17 2024-03-19 重庆邮电大学 Multi-focus image fusion method based on joint multi-domain
CN113837976A (en) * 2021-09-17 2021-12-24 重庆邮电大学 Multi-focus image fusion method based on joint multi-domain
WO2023159350A1 (en) * 2022-02-22 2023-08-31 Liu Kin Wing Recognition system detecting facial features
CN115240276A (en) * 2022-07-22 2022-10-25 京东方科技集团股份有限公司 Facial orientation recognition method and device
CN114973727B (en) * 2022-08-02 2022-09-30 成都工业职业技术学院 Intelligent driving method based on passenger characteristics
CN114973727A (en) * 2022-08-02 2022-08-30 成都工业职业技术学院 Intelligent driving method based on passenger characteristics
CN116386120A (en) * 2023-05-24 2023-07-04 杭州企智互联科技有限公司 A sensorless monitoring and management system
CN116386120B (en) * 2023-05-24 2023-08-18 杭州企智互联科技有限公司 A sensorless monitoring and management system for smart campus dormitories
CN118194265A (en) * 2024-05-13 2024-06-14 湖南三湘银行股份有限公司 NFC-based method for rapidly identifying and collecting identity information
CN118194265B (en) * 2024-05-13 2024-10-15 湖南三湘银行股份有限公司 NFC-based method for rapidly identifying and collecting identity information
CN119650069A (en) * 2025-02-13 2025-03-18 四川互慧软件有限公司 A method, device, equipment and medium for risk assessment of stroke patients

Also Published As

Publication number Publication date
CN107403168B (en) 2020-08-11

Similar Documents

Publication Publication Date Title
CN107403168A (en) A facial recognition system
CN108197587B (en) Method for performing multi-mode face recognition through face depth prediction
CN105205480B (en) Human eye positioning method and system in complex scenes
CN111274916A (en) Face recognition method and face recognition device
CN110189294B (en) A saliency detection method for RGB-D images based on depth reliability analysis
CN106446851A (en) Visible light based human face optimal selection method and system
WO2019056988A1 (en) Face recognition method and apparatus, and computer device
CN109117797A (en) A face snapshot recognition method based on face quality evaluation
CN108629305A (en) A face recognition method
EP3905104B1 (en) Living body detection method and device
CN111178208A (en) Pedestrian detection method, device and medium based on deep learning
CN113591763B (en) Classification recognition method and device for face shapes, storage medium and computer equipment
CN101216882A (en) A method and device for locating and tracking eye and mouth corners in human faces
CN111611934A (en) Face detection model generation and face detection method, device and equipment
CN107194361A (en) Two-dimensional pose detection method and device
CN108520204A (en) A face recognition method
CN106570447B (en) Automatic sunglasses removal method for face photos based on gray-level histogram matching
CN109117755A (en) A face liveness detection method, system and device
CN108268814A (en) A face recognition method and device based on fuzzy fusion of global and local features
CN116798130A (en) Face anti-counterfeiting method, device and storage medium
CN106778660A (en) A facial pose correction method and device
CN111832464A (en) Living body detection method and device based on near-infrared camera
CN117623031B (en) Elevator sensorless control system and method
CN108256454A (en) A CNN-model-based training method, and a facial pose estimation method and device
CN111753781B (en) A real-time 3D face liveness judgment method based on binocular infrared

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant