CN105139007A - Positioning method and apparatus of face feature point - Google Patents
- Publication number: CN105139007A (application CN201510641854.2A)
- Authority: CN (China)
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
Abstract
The present disclosure provides a face feature point positioning method and apparatus. The method comprises: correcting initial feature point coordinates according to a first feature point correction model to obtain primary-corrected feature point coordinates; performing center feature point identification on the plurality of primary-corrected feature point coordinates to obtain at least one center feature point coordinate; and performing coordinate mapping on the plurality of primary-corrected feature point coordinates according to a feature point mapping function to obtain a plurality of secondary-corrected feature point coordinates. The feature point mapping function represents the mapping relation between the center feature point and the secondary-corrected feature point coordinates. With the method and apparatus, the accuracy of face feature point positioning can be improved markedly.
Description
Technical field
The present disclosure relates to the field of image processing, and in particular to a face feature point positioning method and apparatus.
Background
SDM (Supervised Descent Method) is a recently developed computer vision algorithm for accurate face feature point positioning. Because SDM is fast, robust, versatile, and extensible, its applications are increasingly widespread. Once the feature points of a face have been located by the SDM algorithm, a series of subsequent face-related operations, such as face beautification and face recognition, can be carried out conveniently. However, as these applications spread, users demand ever-higher positioning accuracy for face feature points, so improving the accuracy of SDM-based face feature point positioning is of growing importance.
Summary of the invention
To overcome the problems existing in the related art, the present disclosure provides a face feature point positioning method and apparatus.
According to a first aspect of embodiments of the present disclosure, a face feature point positioning method is provided, the method comprising:
correcting initial feature point coordinates according to a first feature point correction model to obtain primary-corrected feature point coordinates;
performing center feature point identification on the plurality of primary-corrected feature point coordinates to obtain at least one center feature point coordinate; and
performing coordinate mapping on the plurality of primary-corrected feature point coordinates according to a feature point mapping function to obtain a plurality of secondary-corrected feature point coordinates, wherein the feature point mapping function represents the mapping relation from the center feature point to the secondary-corrected feature point coordinates.
Optionally, the method further comprises:
correcting the plurality of secondary-corrected feature point coordinates according to a second feature point correction model to obtain a plurality of final corrected feature point coordinates.
Optionally, the method further comprises:
performing face region detection on a target image to obtain a face region; and
obtaining the plurality of initial feature point coordinates within the face region according to coordinate ratios of the plurality of initial feature points.
Optionally, the coordinate ratios of the initial feature points are obtained by labeling and measuring the face regions in a plurality of image samples.
Optionally, the center feature point coordinate is an eyeball center point coordinate.
Optionally, correcting the initial feature point coordinates according to the first feature point correction model to obtain the primary-corrected feature point coordinates comprises:
performing matrix multiplication on the plurality of initial feature point coordinates according to the first feature point correction model to obtain a plurality of first-round feature point coordinates;
performing matrix multiplication on the plurality of first-round feature point coordinates according to the first feature point correction model to obtain a plurality of second-round feature point coordinates;
...
performing matrix multiplication on the plurality of (N-1)-th-round feature point coordinates according to the first feature point correction model to obtain the plurality of primary-corrected feature point coordinates, where N is an integer greater than or equal to 2.
Optionally, correcting the plurality of secondary-corrected feature point coordinates according to the second feature point correction model to obtain the plurality of final corrected feature point coordinates comprises:
performing matrix multiplication on the plurality of secondary-corrected feature point coordinates according to the second feature point correction model to obtain first-round final feature point coordinates;
performing matrix multiplication on the first-round final feature point coordinates according to the second feature point correction model to obtain second-round final feature point coordinates;
...
performing matrix multiplication on the (M-1)-th-round final feature point coordinates according to the second feature point correction model to obtain the final corrected feature point coordinates, where M is an integer greater than or equal to 2.
Optionally, the first feature point correction model represents the mapping relation between the features and offsets of the plurality of initial feature points and the features and offsets of the plurality of primary-corrected feature points, and the first feature point correction model is a projection matrix model.
Optionally, the second feature point correction model represents the mapping relation between the features and offsets of the plurality of secondary-corrected feature points and the features and offsets of the plurality of final corrected feature points, and the second feature point correction model is a projection matrix model.
Optionally, the number of the initial feature points is 44 or 98.
According to a second aspect of embodiments of the present disclosure, a face feature point positioning apparatus is provided, the apparatus comprising:
a first correcting module configured to correct initial feature point coordinates according to a first feature point correction model to obtain primary-corrected feature point coordinates;
an identification module configured to perform center feature point identification on the plurality of primary-corrected feature point coordinates obtained by the first correcting module, to obtain at least one center feature point coordinate; and
a mapping module configured to perform coordinate mapping, according to a feature point mapping function, on the plurality of primary-corrected feature point coordinates obtained by the first correcting module, to obtain a plurality of secondary-corrected feature point coordinates, wherein the feature point mapping function represents the mapping relation from the center feature point identified by the identification module to the secondary-corrected feature point coordinates.
Optionally, the apparatus further comprises:
a second correcting module configured to correct, according to a second feature point correction model, the plurality of secondary-corrected feature point coordinates obtained by the mapping module, to obtain a plurality of final corrected feature point coordinates.
Optionally, the apparatus further comprises:
a detection module configured to perform face region detection on a target image to obtain a face region; and
an acquisition module configured to obtain the plurality of initial feature point coordinates within the face region according to coordinate ratios of the plurality of initial feature points.
Optionally, the coordinate ratios of the initial feature points are obtained by labeling and measuring the face regions in a plurality of image samples.
Optionally, the center feature point coordinate is an eyeball center point coordinate.
Optionally, the first correcting module comprises:
a first calculating submodule configured to perform matrix multiplication on the plurality of initial feature point coordinates according to the first feature point correction model to obtain a plurality of first-round feature point coordinates;
perform matrix multiplication on the plurality of first-round feature point coordinates according to the first feature point correction model to obtain a plurality of second-round feature point coordinates;
...
and perform matrix multiplication on the plurality of (N-1)-th-round feature point coordinates according to the first feature point correction model to obtain the plurality of primary-corrected feature point coordinates, where N is an integer greater than or equal to 2.
Optionally, the second correcting module comprises:
a second calculating submodule configured to perform matrix multiplication on the plurality of secondary-corrected feature point coordinates according to the second feature point correction model to obtain first-round final feature point coordinates;
perform matrix multiplication on the first-round final feature point coordinates calculated by the second calculating submodule according to the second feature point correction model to obtain second-round final feature point coordinates;
...
and perform matrix multiplication on the (M-1)-th-round final feature point coordinates according to the second feature point correction model to obtain the final corrected feature point coordinates, where M is an integer greater than or equal to 2.
Optionally, the first feature point correction model represents the mapping relation between the features and offsets of the plurality of initial feature points and the features and offsets of the plurality of primary-corrected feature points, and the first feature point correction model is a projection matrix model.
Optionally, the second feature point correction model represents the mapping relation between the features and offsets of the plurality of secondary-corrected feature points and the features and offsets of the plurality of final corrected feature points, and the second feature point correction model is a projection matrix model.
Optionally, the number of the initial feature points is 44 or 98.
According to a third aspect of embodiments of the present disclosure, a face feature point positioning apparatus is provided, comprising:
a processor; and
a memory for storing instructions executable by the processor;
wherein the processor is configured to:
correct initial feature point coordinates according to a first feature point correction model to obtain primary-corrected feature point coordinates;
perform center feature point identification on the plurality of primary-corrected feature point coordinates to obtain at least one center feature point coordinate; and
perform coordinate mapping on the plurality of primary-corrected feature point coordinates according to a feature point mapping function to obtain a plurality of secondary-corrected feature point coordinates, wherein the feature point mapping function represents the mapping relation from the center feature point to the secondary-corrected feature point coordinates.
In the above embodiments of the present disclosure, the initial feature point coordinates are corrected by the first feature point correction model to obtain primary-corrected feature point coordinates; center feature point identification is performed on the plurality of primary-corrected feature point coordinates to obtain at least one center feature point coordinate; and coordinate mapping is then performed on the plurality of primary-corrected feature point coordinates according to the feature point mapping function to obtain a plurality of secondary-corrected feature point coordinates. Because the feature point mapping function represents the mapping relation from the center feature point to the secondary-corrected feature point coordinates, and the center feature point is a more accurate feature point identified from the primary-corrected feature point coordinates, the positioning accuracy of face feature points can be improved.
In the above embodiments of the present disclosure, the plurality of secondary-corrected feature point coordinates are further corrected by the second feature point correction model to obtain a plurality of final corrected feature point coordinates. Because the secondary-corrected feature points are corrected again by the second feature point correction model, the positioning accuracy of face feature points can be improved further.
In the above embodiments of the present disclosure, face region detection is performed on the target image to obtain a face region, and the plurality of initial feature point coordinates within the face region are then obtained according to the coordinate ratios of the plurality of initial feature points. Because the coordinate ratios of the initial feature points are obtained by labeling and measuring the face regions in a plurality of images, initial feature points can be set for the target image quickly and accurately.
It should be understood that the foregoing general description and the following detailed description are merely exemplary and explanatory, and do not limit the present disclosure.
Brief description of the drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the present disclosure.
Fig. 1 is a flow chart of a face feature point positioning method according to an exemplary embodiment;
Fig. 2 is a flow chart of another face feature point positioning method according to an exemplary embodiment;
Fig. 3 is a schematic block diagram of a face feature point positioning apparatus according to an exemplary embodiment;
Fig. 4 is a schematic block diagram of another face feature point positioning apparatus according to an exemplary embodiment;
Fig. 5 is a schematic block diagram of another face feature point positioning apparatus according to an exemplary embodiment;
Fig. 6 is a schematic block diagram of another face feature point positioning apparatus according to an exemplary embodiment;
Fig. 7 is a schematic block diagram of another face feature point positioning apparatus according to an exemplary embodiment;
Fig. 8 is a schematic structural diagram of a face feature point positioning apparatus according to an exemplary embodiment.
Detailed description
Exemplary embodiments will now be described in detail, examples of which are illustrated in the accompanying drawings. Where the following description refers to the drawings, the same numerals in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure; rather, they are merely examples of apparatuses and methods consistent with some aspects of the present disclosure as recited in the appended claims.
The terminology used in the present disclosure is for the purpose of describing particular embodiments only and is not intended to limit the disclosure. The singular forms "a", "said", and "the" used in the present disclosure and the appended claims are also intended to include the plural forms, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in the present disclosure to describe various pieces of information, the information should not be limited by these terms. These terms are only used to distinguish information of the same type from one another. For example, without departing from the scope of the present disclosure, first information may also be called second information, and similarly, second information may also be called first information. Depending on the context, the word "if" as used herein may be interpreted as "when", "while", or "in response to determining".
The SDM (Supervised Descent Method) algorithm is an iterative algorithm that can be used for face feature point positioning. Its principle is as follows.

Given a group of initial feature points, the algorithm extracts an image feature vector for them, obtaining a vector Y_0, and uses Y_0 to predict the offset delta_X_0 from the current position X_0 of the initial feature points to the next target position. The offset is then added to the current position, and the next iteration begins. The whole iterative process can be expressed as:

X_{n+1} = X_n + delta_X_n

delta_X_n = f_n(Y_n), n = 0, 1, 2, ...

Here delta_X_n is a multi-dimensional vector, and the method of computing delta_X_n, i.e. the offset at each iteration, is the key to this iterative algorithm. When computing delta_X_n, the SDM algorithm usually adopts linear prediction: the offset delta_X_n at each iteration is a linear function f_n(Y_n) of the image feature vector Y_n, where:

f_n(Y_n) = A_n * Y_n

In this expression, A_n is the location prediction matrix used to predict the offset delta_X_n at each iteration. If a total of p feature points are to be located, then Y_n is a k*p-dimensional vector (a k-dimensional feature vector is extracted for each feature point, with the value of k chosen according to actual requirements), A_n is a 2p x kp matrix, and X_n is a 2p-dimensional vector (each feature point has 2-dimensional coordinates).
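The iteration above can be sketched in a few lines; `extract_features` and the per-round matrices `A[n]` are hypothetical placeholders for a trained SDM model, not part of the patent:

```python
import numpy as np

def sdm_iterate(image, X0, A, extract_features, n_iters=4):
    """Run the SDM update X_{n+1} = X_n + A_n * Y_n for n_iters rounds.

    X0: (2p,) initial feature point coordinates (x0, y0, x1, y1, ...).
    A:  list of (2p, k*p) location prediction matrices, one per iteration.
    extract_features: maps (image, X) -> (k*p,) descriptor vector Y_n.
    """
    X = X0.copy()
    for n in range(n_iters):
        Y = extract_features(image, X)   # image feature vector Y_n
        delta_X = A[n] @ Y               # offset delta_X_n = A_n * Y_n
        X = X + delta_X                  # X_{n+1} = X_n + delta_X_n
    return X
```

In practice the feature extractor and each A_n come from training; here they are injected so the update rule itself is visible.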
When iterating from the initial feature points, the common practice for the SDM algorithm is to use fast face detection to find the approximate location of the face and obtain an initial face frame, set the initial feature points inside the initial face frame, and then locate the face feature points by SDM iteration.
However, in such a scheme the positioning accuracy of SDM-based face feature point positioning depends heavily on the position of the initial frame. When the initial frame lies inside the actual face, the variation inside the face is small, so the positioning result after SDM iteration is relatively good; when the initial frame falls outside the actual face, the variation of the external background can be large, which leads to inaccurate SDM iteration results. In the related art, therefore, the accuracy of the SDM algorithm largely depends on the position of the initial frame, which often makes face feature point positioning inaccurate.
To solve this problem, the present disclosure proposes a face feature point positioning method: initial feature point coordinates are corrected by a first feature point correction model to obtain primary-corrected feature point coordinates; center feature point identification is performed on the plurality of primary-corrected feature point coordinates to obtain at least one center feature point coordinate; and coordinate mapping is then performed on the plurality of primary-corrected feature point coordinates according to a feature point mapping function to obtain a plurality of secondary-corrected feature point coordinates. Because the feature point mapping function represents the mapping relation from the center feature point to the secondary-corrected feature point coordinates, and the center feature point is a more accurate feature point identified from the primary-corrected feature point coordinates, the positioning accuracy of face feature points can be improved.
As shown in Fig. 1, Fig. 1 illustrates a face feature point positioning method according to an exemplary embodiment. The method is used on a server side and comprises the following steps:
In step 101, initial feature point coordinates are corrected according to a first feature point correction model to obtain primary-corrected feature point coordinates;
In step 102, center feature point identification is performed on the plurality of primary-corrected feature point coordinates to obtain at least one center feature point coordinate;
In step 103, coordinate mapping is performed on the plurality of primary-corrected feature point coordinates according to a feature point mapping function to obtain a plurality of secondary-corrected feature point coordinates, wherein the feature point mapping function represents the mapping relation from the center feature point to the secondary-corrected feature point coordinates.
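The three steps above can be sketched as a small pipeline; `first_model`, `find_centers`, and `mapping_fn` are hypothetical stand-ins for the trained correction model, the center-point identifier, and the feature point mapping function:

```python
import numpy as np

def position_feature_points(initial_pts, first_model, find_centers, mapping_fn):
    """Steps 101-103: coarse correction, center identification, mapping."""
    primary = first_model(initial_pts)        # step 101: primary-corrected coords
    centers = find_centers(primary)           # step 102: e.g. eyeball centers
    secondary = mapping_fn(centers, primary)  # step 103: coordinate mapping
    return secondary
```

The point of the sketch is only the data flow: the mapping step consumes both the center points and the primary-corrected coordinates.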
In this embodiment, the server side may comprise a server, a server cluster, or a cloud platform that provides a face feature point positioning service to users. The first feature point correction model may be a projection matrix model trained on the mapping between the image texture features and offsets of the initial feature points and the image texture features and offsets of the primary-corrected feature points; it can be used to correct the coordinates of the initial feature points and obtain the coordinates of the primary-corrected feature points.
For example, the first feature point correction model may be a projection matrix model based on the SDM algorithm. When training the model, training can start from initial feature points labeled in the face regions of a predetermined number of image samples.
The training process of the first feature point correction model is described below, taking an SDM-based projection matrix model as an example.
In the preparation stage of training, a predetermined number of image samples can be prepared. The face region is manually labeled on every sample; then, within each labeled face region, a number of uniformly distributed face feature points are manually labeled along the facial contours, using uniform labels at uniform positions. For example, when labeling the face feature points, fixed-position, uniformly distributed points can be used to outline the face shape, eyebrows, eyes, nose, and mouth in the face region of each sample, depicting all the facial features.
The more uniform and numerous the labeled feature points, the more accurate the finally trained model; however, labeling more points also increases the computational load of the system. In practice, the number of labeled points can therefore be set from engineering experience, or according to the actual computing capacity or requirements of the system. For example, 50,000 image samples can be prepared, and in each face region a group of uniformly distributed, fixed-position feature points, 44 or 98 in number, can be manually labeled along the facial contours. Supposing 98 points are labeled in every sample, they can be uniformly numbered 0 to 97, so that the relative position of the feature point with a given label is fixed within the face region of every sample.
Once the face feature points have been labeled on all image samples, these successfully labeled points can be used to train the first feature point correction model with the SDM-based matrix model training method.
When training the model, after all samples have been manually labeled with the required number of feature points, the labeled face region can be taken as the initial region, and a corresponding group of initial feature points can be set in the labeled face region of every sample according to the labeled points.
When setting the initial feature points in a face region, the setting can follow the coordinate ratios of the initial feature points within the face region, where the coordinate ratios can be obtained by labeling and measuring the face regions in the predetermined number of image samples.
For example, while manually labeling the feature points on all samples, the coordinate ratio of each feature point within the face region of its image can be measured. After all samples have been labeled, the measured coordinate-ratio data of each feature point across all samples can be analyzed to choose a suitable coordinate ratio for each point (for example, by averaging), which is then used as the coordinate ratio for setting the initial feature points.
The coordinate ratio measures the relative position of a feature point within the face region. The size of the face region differs between image samples, so even feature points at the same relative position can have different coordinates in different samples; measuring the relative position of a feature point by its coordinate ratio is therefore more accurate than using the raw coordinates. For an XY coordinate system, for example, the coordinate ratio can be characterized by the ratios of the feature point's X-axis and Y-axis coordinates relative to the face region.
Thus, when setting the initial feature points in a face region, the coordinates of the corresponding initial feature points can be obtained directly in all image samples from the coordinate ratios measured on the labeled face regions. The number of initial feature points set this way equals the number of manually labeled feature points.
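For illustration, mapping stored coordinate ratios into a detected face rectangle might look like the following sketch; the exact ratio convention (relative to the face frame's origin and size) is an assumption, since the patent does not fix one:

```python
import numpy as np

def init_points_from_ratios(face_rect, ratios):
    """Place initial feature points in a detected face region.

    face_rect: (x, y, w, h) of the detected face frame.
    ratios: (p, 2) array of per-point (rx, ry) coordinate ratios,
            each measured from labeled samples and averaged.
    """
    x, y, w, h = face_rect
    pts = np.asarray(ratios, dtype=float)
    # Scale each ratio by the frame size and shift by the frame origin.
    return np.column_stack((x + pts[:, 0] * w, y + pts[:, 1] * h))
```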
Of course, besides setting the initial feature points by their coordinate ratios within the face region as described above, other approaches are possible. In another implementation of this embodiment, the coordinate average of the labeled face feature points over the image samples can be computed, and corresponding initial feature points can then be set for each sample in the initial face region according to the computed averages.
For example, supposing 98 feature points have been labeled on all image samples, the average coordinates of each of the 98 points over all samples can be computed: the coordinate average of point No. 1, then that of point No. 2, and so on. With the 98 coordinate averages as estimates, the 98 feature points can then be estimated within the detected initial face region.
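Computing those per-label coordinate averages over the labeled samples is straightforward; a minimal sketch, assuming the labels are stored as a stacked array:

```python
import numpy as np

def mean_landmark_coords(labeled):
    """Average each numbered landmark over all labeled samples.

    labeled: (n_samples, p, 2) array of manually labeled coordinates,
             where landmark j carries the same label j in every sample.
    Returns the (p, 2) per-label coordinate averages used as estimates.
    """
    return np.asarray(labeled, dtype=float).mean(axis=0)
```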
It is worth noting that, during training of the first feature point correction model, a certain random perturbation can also be added to the initial feature points set for each image sample. Adding such a perturbation to the initial feature points can increase the accuracy of the model finally trained on them.
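Adding the random perturbation mentioned above can be as simple as jittering each initial point by a few pixels; the Gaussian noise model and its scale are illustrative assumptions:

```python
import numpy as np

def perturb_initial_points(points, scale=2.0, rng=None):
    """Add small Gaussian jitter to initial feature points for training."""
    rng = np.random.default_rng(rng)
    pts = np.asarray(points, dtype=float)
    return pts + rng.normal(0.0, scale, size=pts.shape)
```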
In this embodiment, once the initial feature points have been set, the first feature point correction model can be trained from them.
As mentioned above, since the offset delta_X_n at each SDM iteration is a linear function f_n(Y_n) of the image feature vector Y_n, and f_n(Y_n) = A_n * Y_n, once the initial feature points have been set, the image texture feature vectors Y_n corresponding to these initial feature points can be extracted and the location prediction matrices A_n computed.
On the one hand, for the initial feature points X_0 set in all image samples (X_0 denoting a group of set initial feature points), the corresponding image texture feature vector Y_0 can be extracted. When extracting the image feature vector, a feature descriptor can be used: a k-dimensional vector is extracted at each initial feature point, and the descriptors extracted from all initial feature points are concatenated into a k*p-dimensional vector Y_n (p being the number of feature points to locate).
There are many choices of feature descriptor. In general, a good descriptor should be low-dimensional, describe the image content at the feature point concisely, and be robust to illumination and geometric changes. In one implementation of this embodiment, a 3x3 HOG (Histogram of Oriented Gradients) and a 3x3 grayscale patch can be extracted as the feature descriptor.
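A simplified descriptor in that spirit, concatenating a 3x3 grayscale patch with a small gradient-orientation histogram per point, can be sketched as follows. This is a toy stand-in for a real HOG implementation (cell/block normalization omitted), and it assumes the point does not lie on the image border:

```python
import numpy as np

def point_descriptor(gray, x, y, n_bins=8):
    """Describe one feature point: 3x3 gray patch + orientation histogram."""
    patch = gray[y - 1:y + 2, x - 1:x + 2].astype(float)  # 3x3 neighborhood
    gy, gx = np.gradient(patch)
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % (2 * np.pi)
    hist, _ = np.histogram(ang, bins=n_bins, range=(0, 2 * np.pi), weights=mag)
    return np.concatenate((patch.ravel(), hist))  # k = 9 + n_bins dimensions

def extract_features(gray, points):
    """Concatenate per-point descriptors into one k*p-dimensional vector Y_n."""
    return np.concatenate([point_descriptor(gray, int(x), int(y))
                           for x, y in points])
```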
What deserves to be explained is that, in practical applications, an overly high descriptor dimension directly affects the size of the location prediction matrix A_n; therefore, in order to control the number of parameters to be learned, dimensionality reduction can also be applied to the feature descriptors extracted from the pictures according to a preset dimension-reduction algorithm. For example, all labeled face feature points can be collected and processed with the PCA (Principal Component Analysis) algorithm to obtain a dimensionality reduction matrix B (of m x k dimensions); this matrix is then used to reduce each descriptor in the Y_n vector, yielding a reduced image feature vector Z_n (an m*p-dimensional vector). Subsequently, when computing the offset delta_X_n, this image feature vector Z_n can be used in place of the above image feature vector Y_n, i.e. the linear function f_n(Y_n) can be expressed as f_n(Y_n) = A_n * Y_n = A_n * B(Y_n) = A_n * Z_n.
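A minimal sketch of the PCA reduction step under the notation above (the data here is random and purely illustrative; the real B would be learned from descriptors collected at labeled feature points):

```python
import numpy as np

rng = np.random.default_rng(0)
k, m, n_samples = 9, 3, 200
D = rng.standard_normal((n_samples, k))  # collected k-dim descriptors

# PCA via SVD of the centered data: rows of B are the top-m principal
# components, giving the m x k dimensionality reduction matrix.
Dc = D - D.mean(axis=0)
_, _, Vt = np.linalg.svd(Dc, full_matrices=False)
B = Vt[:m]                               # (m, k) reduction matrix

p = 4                                    # number of feature points
Y = rng.standard_normal(k * p)           # concatenated descriptors Y_n
# Apply B per descriptor, yielding the m*p-dimensional vector Z_n.
Z = np.concatenate([B @ Y[i * k:(i + 1) * k] for i in range(p)])
print(Z.shape)                           # (12,) = m * p
```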
On the other hand, for the positions of the initial feature points set in each photo sample, the offset delta_X_0 from the manually labeled face feature points in all photo samples can also be computed, where delta_X_0 = X_* - X_0; X_0 denotes the position coordinates of the initial feature points set in all photo samples, and X_* denotes the position coordinates of the manually labeled face feature points in all photo samples.
After extracting the image feature vector Y_0 corresponding to the initial feature points set in each picture, and computing the offset delta_X_0 between the initial feature points set in each photo sample and the manually labeled face feature points in all photo samples, the location prediction matrix A_0 can be learned by linear fitting based on the linear relationship between the offset delta_X_0 and the image feature vector Y_0. Here A_0 denotes the location prediction matrix adopted for the first iterative computation of the SDM algorithm.
For example, as previously mentioned, in the SDM algorithm the above linear relationship can be represented by the linear function delta_X_n = A_n * Y_n; accordingly, it is easy to derive delta_X_0 = A_0 * Y_0.
From the above linear function it is easy to see that the computed delta_X_0 and Y_0 impose a constraint on the A_0 in the function: when delta_X_0 and Y_0 are taken as the fitting data, A_0 can be understood as the constraint matrix of delta_X_0 and Y_0.
In view of this, when solving A_0 from the above linear function, the computed offset delta_X_0 and the extracted image feature vector Y_0 can be used as the fitting data, and A_0 solved by way of least-squares linear fitting.
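A minimal sketch of this least-squares fit over N training samples (the dimensions and the noise-free synthetic data are illustrative assumptions):

```python
import numpy as np

# Solve delta_X0 ≈ A0 @ Y0 by least squares; columns are samples.
rng = np.random.default_rng(1)
d_y, d_x, N = 6, 4, 50
A_true = rng.standard_normal((d_x, d_y))
Y0 = rng.standard_normal((d_y, N))       # feature vectors, one per sample
dX0 = A_true @ Y0                        # offsets X_* - X_0, one per sample

# lstsq solves Y0.T @ A0.T ≈ dX0.T, i.e. A0 = argmin ||A @ Y0 - dX0||.
A0 = np.linalg.lstsq(Y0.T, dX0.T, rcond=None)[0].T
print(np.allclose(A0, A_true))           # True on this noise-free toy data
```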
The process of solving A_0 by least-squares linear fitting is not described in detail in the present embodiment; those skilled in the art may refer to introductions in the prior art when implementing the above technical solution.
When A_0 has been solved by least-squares linear fitting, A_0 is the location prediction matrix adopted for the first iterative computation of the SDM algorithm. Once A_0 is obtained, the offset delta_X_0 of the first iteration can be computed from the above linear function, and adding this offset to the above X_0 yields a group of feature points X_1 for the next iteration. After the group of feature points X_1 of the next iteration has been computed, the above iterative process can be repeated until the SDM algorithm converges.
What deserves to be explained is that, in the process of continuous iteration, the displacement error between the group of located face feature points and the group of manually calibrated face feature points in the photo samples is constantly corrected; when the SDM algorithm has converged, this displacement error is minimal. Therefore, once the SDM algorithm converges, the training of the above first feature point correction model is complete, and the location prediction matrices computed after each iteration in the model can be used to perform face feature point positioning on a target picture provided by a user.
The number of iterations performed before the SDM algorithm converges when training the above first feature point correction model is not particularly limited in the disclosure. For example, based on engineering experience, face feature point positioning applications usually need 4 iterations; therefore the above first feature point correction model can provide 4 location prediction matrices A_0 ~ A_3.
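The training loop described above can be sketched as follows on toy data (extract() is an illustrative stand-in for the real texture-feature extraction, and the random shapes are assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
N, d = 100, 8                                    # samples, stacked coords
X_star = rng.standard_normal((d, N))             # manually labeled points
X = X_star + 0.5 * rng.standard_normal((d, N))   # initial points X_0
err0 = np.linalg.norm(X - X_star)

def extract(X):
    return X          # stand-in for the real image feature vector Y_n

A_list = []
for _ in range(4):                               # 4 iterations -> A_0..A_3
    Y = extract(X)
    dX = X_star - X                              # offsets to be predicted
    A = np.linalg.lstsq(Y.T, dX.T, rcond=None)[0].T
    A_list.append(A)
    X = X + A @ Y                                # apply predicted offset

print(len(A_list), np.linalg.norm(X - X_star) <= err0)  # 4 True
```

Each least-squares step can only reduce (or keep) the residual toward X_star, which is why the error is non-increasing even on this toy data.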
Described above is the detailed process of training the first feature point correction model.
The above first feature point correction model is trained with the face region as the initial region, based on a group of initial feature points uniformly distributed around the facial contour in the photo samples.
The trained first feature point correction model can be used to correct the coordinates of the initial feature points set on a target picture, obtaining the coordinates of the above first-correction feature points and thereby achieving precise positioning of face feature points. The above target picture is a photo on which the user needs to perform face feature point positioning.
When performing face feature point positioning on the target picture, a fast face detection technique (for example, a mature face detector such as AdaBoost) can be used to perform face region detection on the above target picture to obtain an initial face region, and the initial feature points are then set within this face region.
When setting the initial feature points in the face region detected in the target picture, they can still be set according to the coordinate ratios of the above initial feature points within the face region, or by computing the coordinate averages of the face feature points calibrated in each photo sample; the detailed process is not repeated here.
After the initial feature points have been set in the face region detected in the target picture, the corresponding image texture feature vector Y_0 can be extracted for this group of initial feature points X_0 set in the target picture; the extracted image feature vector Y_0 is then iterated with the location prediction matrices A_n provided by the trained first feature point correction model, so as to correct the above initial feature points X_0 in the target picture for the first time and obtain the first-correction feature point coordinates.
When iterating the above image texture feature vector Y_0 of the target picture with the location prediction matrices A_n provided by the trained first feature point correction model, suppose the first feature point correction model provides 4 location prediction matrices A_0 ~ A_3; then 4 matrix multiplications will be performed. First, a matrix multiplication is performed on the above image texture feature vector Y_0 according to A_0 as the first iteration, obtaining a first group of feature point coordinates; then a matrix multiplication is performed again according to A_1 on the coordinates just computed as the second iteration, obtaining a second group of feature point coordinates; then a matrix multiplication is performed again according to A_2 as the third iteration, obtaining a third group of feature point coordinates; and after the third iteration is complete, a matrix multiplication is performed according to A_3 on the third group of feature point coordinates, obtaining the above first-correction feature point coordinates, at which point the iteration is complete.
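The four prediction-time iterations can be sketched as below (the extraction function, the trivial zero matrices, and all names are illustrative assumptions used only to show the update rule):

```python
import numpy as np

def first_correction(X0, A_list, extract):
    # Apply the learned location prediction matrices in order:
    # X_{n+1} = X_n + A_n @ Y_n, where Y_n = extract(X_n).
    X = X0
    for A in A_list:            # A_0, A_1, A_2, A_3
        Y = extract(X)
        X = X + A @ Y
    return X

identity_extract = lambda X: X
A_list = [np.zeros((4, 4))] * 4           # trivial matrices: no movement
X0 = np.arange(4.0)
print(first_correction(X0, A_list, identity_extract))  # [0. 1. 2. 3.]
```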
In the present embodiment, when the first feature point correction model performs face feature point positioning on the target picture, the detected face frame serves as the initial region, and the positioning precision depends heavily on the position of this initial frame. When the initial frame lies inside the actual face, the variation inside the face is small and the positioning result after the SDM iterations can be relatively good; when the initial frame lies outside the actual face, the variation of the external background may be large, which will make the positioning result of the SDM iterations inaccurate. Therefore, in order to improve positioning precision, after the first feature point correction model has finished positioning on the target picture, a second-order correction can also be applied to the multiple first-correction feature points obtained from the matrix multiplications, yielding a predetermined number of second-order-correction feature point coordinates.
When performing the second-order correction on the above first-correction feature points, central feature point identification can be carried out on the multiple first-correction feature points to obtain at least one central feature point coordinate; then, according to the mapping relation between the central feature point coordinates and the above second-order-correction feature point coordinates, coordinate mapping is performed on the first-correction feature points to obtain the predetermined number of second-order-correction feature point coordinates.
In one implementation shown in the present embodiment, the above central feature point can be the eyeball center, and the above central feature point coordinates can then be the coordinates of the eyeball centers of the two eyes.
When the above central feature point is the eyeball center, and the eyeball centers are identified based on the above first-correction feature points, the rich texture features of the eyeball center point can be exploited: a preset iris location algorithm can take the coordinates of the first-correction feature points as auxiliary parameters and locate the eyeball centers by identifying the texture features of the eyeball center points. The preset iris location algorithm is not particularly limited in the present embodiment; those skilled in the art may refer to implementation processes in the related art.
When the coordinates of the two eyeball centers have been identified based on the coordinates of the above first-correction feature points, coordinate mapping can be performed on the first-correction feature points based on the mapping relation between the eyeball center coordinates and the above second-order-correction feature point coordinates, thereby applying the second-order correction to the first-correction feature points and obtaining the predetermined number of second-order-correction feature point coordinates.
The mapping relation between the eyeball center coordinates and the above second-order-correction feature point coordinates can be characterized by a preset feature point mapping function, which can be learned based on the relative distances between the eyeball centers and the manually labeled feature points in the photo samples of the above predetermined number.
For the photo samples of the above predetermined number, the size and scope of the face region differ from photo to photo, whereas the distances between the eyeball centers of the two eyes and the manually labeled feature points are relatively constant across photo samples. Therefore, when manually labeling feature points on the photo samples, the distances from the eyeball centers of the two eyes to each labeled feature point can be measured for each photo sample; the measured data then serve as fitting data, and the mapping relations between the eyeball center coordinates and the coordinates of the other labeled feature points are learned by way of linear fitting. The learned mapping relations are then applied to map the coordinates of the above first-correction feature points.
Since the distances between the eyeball centers of the two eyes and the manually labeled feature points are relatively constant, performing coordinate mapping on the coordinates of the first-correction feature points through the above feature point mapping function realizes the second-order correction of the first-correction feature points and yields the predetermined number of second-order-correction feature point coordinates, thereby improving the positioning precision of face feature points.
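One way to sketch this mapping idea: express each labeled point as a fixed offset from the midpoint of the two eyeball centers, normalized by the inter-ocular distance, average those offsets over the samples, and replay them on a new face. The normalization scheme and all names here are assumptions for illustration, not the patent's specified fitting procedure:

```python
import numpy as np

def learn_mapping(eye_pairs, points):
    # eye_pairs: list of (left, right) eyeball centers, one per sample.
    # points: list of (p, 2) arrays of labeled feature points per sample.
    offs = []
    for (l, r), P in zip(eye_pairs, points):
        c = (l + r) / 2.0
        s = np.linalg.norm(r - l)          # inter-ocular distance
        offs.append((P - c) / s)           # normalized relative offsets
    return np.mean(offs, axis=0)           # (p, 2) mean offsets

def apply_mapping(l, r, mean_offs):
    c, s = (l + r) / 2.0, np.linalg.norm(r - l)
    return c + s * mean_offs

l, r = np.array([0.0, 0.0]), np.array([2.0, 0.0])
P = np.array([[1.0, 1.0], [1.0, -1.0]])    # two labeled points
mean_offs = learn_mapping([(l, r)], [P])
print(apply_mapping(np.array([10.0, 10.0]), np.array([14.0, 10.0]), mean_offs))
# [[12. 12.]
#  [12.  8.]]
```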
In the present embodiment, after the second-order correction has been applied to the above first-correction feature points based on the above feature point mapping function, the obtained second-order-correction feature point coordinates can be corrected again based on a second feature point correction model to finally obtain the final-correction feature point coordinates.
The above second feature point correction model can be a projection matrix model trained based on the mapping association between the image texture features and offsets of the above second-order-correction feature points and the image texture features and offsets of the above final-correction feature points; it can be used to correct the coordinates of the second-order-correction feature points again, obtaining the coordinates of the final-correction feature points.
For example, the above second feature point correction model can still be a projection matrix model based on the SDM algorithm. As previously mentioned, the above first feature point correction model is trained with the face region as the initial region, based on the initial feature points calibrated in the face regions of the predetermined number of photo samples. Since the coordinates of the above second-order-correction feature points are obtained through correction by the mapping relation between the eyeball centers of the two eyes and the second-order-correction feature points, the above second feature point correction model can instead be trained based on initial feature points calibrated in a region formed from the eyeball centers of the two eyes in the predetermined number of photo samples, with the eyeball centers of the two eyes defining the initial region.
The training process of the above second feature point correction model, taking it as a projection matrix model based on the SDM algorithm, is described below.
When training the above second feature point correction model, the photo samples used when training the first feature point correction model can still be adopted. First, the eyeball centers of the two eyes in all photo samples are calibrated as central feature points; after the eyeball centers have been calibrated, a rectangular frame can be generated according to the calibrated eyeball centers, and with this rectangular frame as the initial region, a corresponding group of initial feature points is set in the initial region according to the feature points already calibrated in the photo samples.
When setting the initial feature points in this initial region, they can still be set according to the coordinate ratios of the above initial feature points within the initial region, where the coordinate ratios can likewise be obtained by performing calibration measurements on the above initial region in the predetermined number of photo samples. For example, in the process of manually calibrating the feature points on all photo samples, the coordinate ratio of each feature point within the above initial region can be measured; after all photo samples have been calibrated, the measured coordinate-ratio data of each feature point can be analyzed, a suitable coordinate ratio (for example, the average) set for each feature point in the initial region, and this coordinate ratio used as the coordinate ratio for setting the initial feature points.
After the initial feature points have been set, the image feature vector Y_n corresponding to these initial feature points can be extracted and the location prediction matrix A_n computed. On the one hand, for the initial feature points X_0 set in all photo samples (X_0 still denotes a group of set initial feature points), the corresponding image feature vector Y_0 can be extracted. On the other hand, for the positions of the initial feature points set in each photo sample, the offset delta_X_0 from the manually labeled face feature points in the above initial region in all photo samples can be computed, where delta_X_0 = X_* - X_0; X_0 denotes the position coordinates of the initial feature points set in all photo samples, and X_* denotes the position coordinates of the manually labeled face feature points in the above initial region in all photo samples. After extracting the image feature vector Y_0 corresponding to the initial feature points set in each picture, and computing the offset delta_X_0 between the initial feature points set in each photo sample and the manually labeled face feature points in the above initial region in all photo samples, the location prediction matrix A_0 can be learned by linear fitting based on the linear relationship between the offset delta_X_0 and the image feature vector Y_0.
When learning the location prediction matrix A_0 by linear fitting, least-squares linear fitting can still be used; the detailed process is not repeated, and those skilled in the art may refer to the training process of the first feature point correction model introduced above for an equivalent implementation.
Suppose that, in the process of training the second feature point correction model, the SDM algorithm converges after 4 iterations in total; then the above second feature point correction model can provide 4 location prediction matrices A_0 ~ A_3.
Described above is the training process of the second feature point correction model.
The above second feature point correction model takes the eyeball centers of the two eyes as the initial region and is trained based on a group of initial feature points within the above initial region in the photo samples. The trained second feature point correction model can be used to correct the coordinates of the above second-order-correction feature points again, finally obtaining the final-correction feature point coordinates and thereby improving the positioning precision of face feature points.
When correcting the coordinates of the above second-order-correction feature points according to the above second feature point correction model, for the group of second-order-correction feature points X_0 corrected through the above feature point mapping function, the corresponding image texture feature vector Y_0 can be extracted; the extracted image feature vector Y_0 is then iterated with the location prediction matrices A_n provided by the trained second feature point correction model, so as to correct the above second-order-correction feature points X_0 again and obtain the final-correction feature point coordinates.
When iterating the above image texture feature vector Y_0 of the second-order-correction feature points with the location prediction matrices A_n provided by the trained second feature point correction model, suppose the second feature point correction model still provides 4 location prediction matrices A_0 ~ A_3; then 4 matrix multiplications will be performed. First, a matrix multiplication is performed on the above image texture feature vector Y_0 according to A_0 as the first iteration, obtaining a first group of feature point coordinates; then a matrix multiplication is performed again according to A_1 on the coordinates just computed as the second iteration, obtaining a second group of feature point coordinates; then a matrix multiplication is performed again according to A_2 as the third iteration, obtaining a third group of feature point coordinates; and after the third iteration is complete, a matrix multiplication is performed according to A_3 on the third group of feature point coordinates, obtaining the above final-correction feature point coordinates, at which point the iteration is complete.
After the above image texture feature vector Y_0 of the second-order-correction feature points has been iterated with the location prediction matrices A_n provided by the trained second feature point correction model, the final-correction feature point coordinates obtained are the final result of face feature point positioning for the above target picture.
As can be seen from the above description, in the present embodiment the initial feature points set in the above target picture are corrected three times, through the first feature point correction model, the preset feature point mapping function, and the second feature point correction model; the positioning precision of face feature points can therefore be improved significantly.
In the above embodiment of the disclosure, the initial feature point coordinates are corrected by the first feature point correction model to obtain the first-correction feature point coordinates; central feature point identification is carried out on the multiple first-correction feature point coordinates to obtain at least one central feature point coordinate; and coordinate mapping is then performed on the multiple first-correction feature point coordinates according to the feature point mapping function, obtaining multiple second-order-correction feature point coordinates. Since the feature point mapping function captures the mapping relation from the central feature points to the second-order-correction feature point coordinates, and the central feature points are more accurate feature points obtained through identification based on the first-correction feature point coordinates, the positioning accuracy of face feature points can be improved.
In the above embodiment of the disclosure, the multiple second-order-correction feature point coordinates are corrected by the second feature point correction model to obtain multiple final-correction feature point coordinates; since the second-order-correction feature points are corrected again by the second feature point correction model, the positioning precision of face feature points can be further improved.
In the above embodiment of the disclosure, face region detection is performed on the target picture to obtain the face region, and the multiple initial feature point coordinates in the face region are then obtained according to the coordinate ratios of the multiple initial feature points; since the coordinate ratios of the initial feature points are obtained by performing calibration measurements on the face regions in multiple picture samples, the initial feature points can be set for the target picture quickly and accurately.
As shown in Fig. 2, Fig. 2 is a flowchart of a face feature point positioning method according to an exemplary embodiment, applied on a server side and comprising the following steps:
In step 201, face region detection is performed on the target picture to obtain a face region;
In step 202, the multiple initial feature point coordinates in the face region are obtained according to the coordinate ratios of the multiple initial feature points; the coordinate ratios of the initial feature points are obtained by performing calibration measurements on the face regions in multiple picture samples;
In step 203, the initial feature point coordinates are corrected according to the first feature point correction model to obtain the first-correction feature point coordinates;
In step 204, central feature point identification is carried out on the multiple first-correction feature point coordinates to obtain at least one central feature point coordinate;
In step 205, coordinate mapping is performed on the multiple first-correction feature point coordinates according to the feature point mapping function to obtain multiple second-order-correction feature point coordinates; the feature point mapping function characterizes the mapping relation between the central feature points and the second-order-correction feature point coordinates.
In the present embodiment, the server side can comprise a server, a server cluster, or a cloud platform that provides a face feature point positioning service to users. The above first feature point correction model can be a projection matrix model trained based on the mapping association between the image texture features and offsets of the initial feature points and the image texture features and offsets of the first-correction feature points; it can be used to correct the coordinates of the initial feature points, obtaining the coordinates of the first-correction feature points.
For example, the above first feature point correction model can be a projection matrix model based on the SDM algorithm; when training the above first feature point correction model, the training can be based on initial feature points calibrated in the face regions of a predetermined number of photo samples.
The training process of the above first feature point correction model, taking it as a projection matrix model based on the SDM algorithm, is described below.
In the preparation stage of training the first feature point correction model, a predetermined number of photo samples can be prepared. When training the above first feature point correction model, the face region can first be manually calibrated on every photo sample; then, within each calibrated face region, a certain number of uniformly distributed face feature points are manually calibrated along the facial contour, using uniform labels at uniform positions. For example, when calibrating face feature points, the face shape, eyebrows, eyes, nose, and mouth can be outlined in the face region of each photo sample with feature points of fixed position and uniform distribution, depicting all facial features of the face.
The more uniformly distributed and the more numerous the calibrated feature points are, the more accurate the finally trained model is; however, calibrating many feature points increases the computation load of the system. In practice, therefore, the number of calibrated feature points can be set from engineering experience, or according to the actual computing capability or requirements of the system. For example, 50,000 image samples can be prepared, and in each face region of all photo samples a group of 44 or 98 uniformly distributed, fixed-position feature points can be manually calibrated around the facial contour. For instance, suppose 98 points are manually calibrated in every picture sample; they can then be uniformly labeled with the numbers 0 ~ 97, and in every picture sample the relative position within the face region of the feature point with the same label is fixed.
After the face feature points have all been calibrated on all photo samples, these successfully calibrated face feature points can be used to train the above first feature point correction model with a matrix model training method based on the SDM algorithm.
When training the above first feature point correction model, after a certain number of feature points have been manually calibrated on all photo samples, the calibrated face region can be taken as the initial region, and a corresponding group of initial feature points set in the calibrated face region of every photo sample according to the already calibrated feature points.
When setting the initial feature points in the face region, they can be set according to the coordinate ratios of the above initial feature points within the face region, where the coordinate ratios can be obtained by performing calibration measurements on the face regions in the predetermined number of photo samples.
For example, in the process of manually calibrating the feature points on all photo samples, the coordinate ratio of each feature point within the photo's face region can be measured; after all photo samples have been calibrated, the measured coordinate-ratio data of each feature point across all photo samples can be analyzed, a suitable coordinate ratio (for example, the average) set for each feature point, and this coordinate ratio used as the coordinate ratio for setting the initial feature points.
The above coordinate ratio can be used to measure the relative position of each feature point within the face region. In different photo samples the size ranges of the face regions differ; even for a feature point at the same position, the corresponding coordinates may differ between photo samples. Measuring the relative position of a feature point within the face region by its coordinate ratio is therefore more accurate than measuring its position by raw feature point coordinates. For example, in an XY coordinate system, the above coordinate ratio can be characterized by the proportions of the feature point's X-axis and Y-axis coordinates within the face region.
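A minimal sketch of this coordinate-ratio idea: express a labeled point as a fraction of the face-region width and height, so the same ratios place an initial point in any detected face box. The (x, y, w, h) box convention here is an assumption:

```python
def to_ratio(point, box):
    # Convert an absolute (px, py) point to fractions of the box size.
    x, y, w, h = box
    return ((point[0] - x) / w, (point[1] - y) / h)

def from_ratio(ratio, box):
    # Place a point in another box using the same fractions.
    x, y, w, h = box
    return (x + ratio[0] * w, y + ratio[1] * h)

ratio = to_ratio((30, 40), (20, 20, 40, 40))   # point in a 40x40 face box
print(ratio)                                   # (0.25, 0.5)
print(from_ratio(ratio, (100, 100, 80, 80)))   # (120.0, 140.0)
```

In practice the ratio for each numbered point would be averaged over all calibrated photo samples before being replayed on a new face region.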
Therefore, when setting the initial feature points in the face region, the coordinates of the corresponding initial feature points can be obtained directly in all photo samples according to the coordinate ratios obtained from calibration measurements on the face regions in the predetermined number of photo samples. The number of initial feature points thus set is consistent with the number of feature points manually calibrated on the photo samples.
Of course, when setting the initial feature points in the face region, besides setting them by the coordinate ratios of the initial feature points within the face region as described above, other implementations are also possible. In another implementation shown in this embodiment, the coordinate averages of the calibrated face feature points on each photo sample can be computed, and the corresponding initial feature points can then be set for each photo sample in the above initial face region according to these computed coordinate averages.
For example, suppose all photo samples have 98 calibrated feature points each; then the average coordinate of each of these 98 feature points over all photo samples can be computed, for example the coordinate average of point No. 1 over all photo samples, then the coordinate average of point No. 2, and so on. After the average coordinates of these 98 feature points have been computed, the 98 coordinate averages can be used as estimated values to place 98 initial feature points in the detected initial face region.
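The per-point averaging just described is a single reduction over the sample axis; a minimal sketch, assuming the labels are stored as an (samples, points, 2) array (here with 2 samples and 2 points for brevity):

```python
import numpy as np

# Each of s photo samples carries the same p manually labeled points,
# stored as an (s, p, 2) array; the per-point mean across samples serves
# as the estimated initial coordinate for every sample.
labeled = np.array([
    [[10.0, 20.0], [30.0, 40.0]],
    [[14.0, 24.0], [34.0, 44.0]],
])
mean_init = labeled.mean(axis=0)   # (p, 2): one averaged coordinate per point
```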
It is worth noting that in the process of training the first feature point correction model, when setting the initial feature points for each photo sample, a certain random offset can also be added to the set initial feature points as a perturbation value. Adding perturbation to the initial feature points can improve the precision of the model that is eventually trained on them.
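A hedged sketch of such a perturbation step, assuming Gaussian pixel noise (the distribution and its scale are not specified by the patent and are chosen here only for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def perturb(points, scale=2.0):
    """Add small random offsets to the initial points so the trained model
    sees varied starting positions; `scale` is an assumed pixel std-dev."""
    return points + rng.normal(0.0, scale, size=points.shape)

init = np.zeros((98, 2))    # 98 initial points, matching the example above
noisy = perturb(init)
```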
In this embodiment, after the initial feature points have been set, the first feature point correction model can be trained based on these initial feature points.
As previously mentioned, in the SDM algorithm the offset delta_X_n of each iteration is a linear function f_n(Y_n) of the image feature vector Y_n, with f_n(Y_n) = A_n * Y_n; therefore, after the initial feature points have been set, the image texture feature vectors Y_n corresponding to these initial feature points can be extracted, and the location prediction matrices A_n can be computed.
On the one hand, for the initial feature points X_0 set in all photo samples (X_0 denoting one group of set initial feature points), the corresponding image texture feature vector Y_0 can be extracted. When extracting the image texture feature vector, a k-dimensional feature descriptor can be extracted at each initial feature point, and the descriptors extracted from all initial feature points can then be concatenated into a k*p-dimensional vector Y_n (p being the number of feature points to be located).
When extracting feature descriptors, multiple choices are available. In general, a feature descriptor should be of low dimension, describe the image content around a feature point concisely, and have good robustness to illumination changes and geometric changes. Therefore, in one implementation shown in this embodiment, the 3x3 HOG (Histogram of Oriented Gradients) and a 3x3 gray-level patch can be extracted as feature descriptors.
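A simplified per-point descriptor along these lines can be sketched with plain numpy: an orientation histogram over the 3x3 gradient patch plus the raw gray values, concatenated into one k-dimensional vector per point. The bin count, the unsigned-orientation choice, and the exact cell layout are assumptions — this illustrates the idea, not the patent's precise HOG configuration:

```python
import numpy as np

def patch_descriptor(img, x, y, bins=8):
    """Descriptor for one feature point: an 8-bin gradient-orientation
    histogram over the 3x3 neighborhood plus the raw 3x3 gray patch,
    concatenated (k = bins + 9 = 17 here)."""
    patch = img[y - 1:y + 2, x - 1:x + 2].astype(float)
    gy, gx = np.gradient(patch)                      # per-pixel gradients
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % np.pi                 # unsigned orientation
    hist, _ = np.histogram(ang, bins=bins, range=(0, np.pi), weights=mag)
    return np.concatenate([hist, patch.ravel()])

def feature_vector(img, points):
    """Concatenate one descriptor per feature point into a k*p vector Y."""
    return np.concatenate([patch_descriptor(img, x, y) for x, y in points])

img = np.arange(100, dtype=float).reshape(10, 10)    # toy gray image
Y = feature_vector(img, [(4, 4), (6, 6)])            # p = 2 points, k = 17
```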
It is worth noting that in practical applications an excessively high descriptor dimension directly affects the size of the location prediction matrix A_n; therefore, to control the number of parameters to be learned, the feature descriptors extracted from the picture can also be reduced in dimension according to a preset dimension-reduction algorithm. For example, the descriptors of all labeled face feature points can be collected and processed with the PCA (Principal Component Analysis) algorithm to obtain a dimension-reduction matrix B (of m x k dimensions); this matrix is then applied to each descriptor in the vector Y_n, yielding a reduced image feature vector Z_n (an m*p-dimensional vector). Subsequently, when computing the offset delta_X_n, Z_n can be used in place of Y_n, i.e. the linear function can be expressed as f_n(Y_n) = A_n * Z_n, with Z_n = B(Y_n).
On the other hand, for the positions of the initial feature points set in each photo sample, the offset delta_X_0 between them and the manually labeled face feature points in all photo samples can also be computed, where delta_X_0 = X_* - X_0; here X_0 denotes the position coordinates of the initial feature points set in all photo samples, and X_* denotes the position coordinates of the manually labeled face feature points in all photo samples.
After extracting the image feature vector Y_0 corresponding to the initial feature points set in each picture, and computing the offset delta_X_0 between the initial feature points set in each photo sample and the manually labeled face feature points in all photo samples, the location prediction matrix A_0 can be learned by linear fitting based on the linear relationship that exists between the offset delta_X_0 and the initial feature points X_0. Here A_0 denotes the location prediction matrix adopted for the first iterative computation of the SDM algorithm.
For example, as previously mentioned, in the SDM algorithm the above linear relationship can be represented by the linear function delta_X_n = A_n * Y_n; according to this linear relationship it readily follows that delta_X_0 = A_0 * Y_0.
It is easy to see from the above linear function that the computed delta_X_0 and Y_0 place a constraint on A_0: when delta_X_0 and Y_0 are taken as the fitting data, A_0 can be understood as the constraint matrix between delta_X_0 and Y_0.
In view of this, when solving for A_0 based on the above linear function, the computed offset delta_X_0 and the extracted image feature vector Y_0 can be used as the fitting data, and A_0 can be solved by least-squares linear fitting.
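The least-squares fit of A_0 amounts to one stacked linear regression: one feature vector per photo sample as a row of Y, the corresponding offset as a row of D. A sketch with toy dimensions (the sample count and feature/offset sizes are assumptions):

```python
import numpy as np

# Fit delta_X0 ≈ Y0 · A0 over all photo samples: stack one feature vector
# per photo as rows of Y (s, k*p) and the offsets as rows of D (s, 2p),
# then solve for A0 by least squares.
rng = np.random.default_rng(2)
Y = rng.normal(size=(50, 12))                  # 50 samples, feature dim 12
A_true = rng.normal(size=(12, 6))              # hypothetical ground truth
D = Y @ A_true                                 # offsets, target dim 6
A0, *_ = np.linalg.lstsq(Y, D, rcond=None)     # solves min ||Y·A - D||²
```

With noise-free synthetic data the fit recovers the generating matrix exactly, which is a convenient sanity check on the formulation.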
The process of solving for A_0 by least-squares linear fitting is not described in detail in this embodiment; those skilled in the art may refer to introductions in the prior art when putting the above technical scheme into practice.
A_0 solved by least-squares linear fitting is the location prediction matrix adopted for the first iterative computation of the SDM algorithm. Once A_0 has been computed, the offset delta_X_0 of the first iteration can be computed from the above linear function, and adding this offset to X_0 yields the group of feature points X_1 for the next iteration. After the group of feature points X_1 for the next iteration has been computed, the above iterative process can be repeated until the SDM algorithm converges.
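The training cascade just described — fit A_t by least squares against the remaining offset, advance the points, repeat — can be sketched as below. The `features` callback and the linearly solvable toy setup are assumptions for illustration, not the patent's actual descriptors:

```python
import numpy as np

def train_sdm(features, X_init, X_true, n_iter=4):
    """Cascade training sketch: at each stage fit A_t mapping the current
    features to the remaining offset, then advance the shapes.
    `features(X)` must return an (s, d) feature matrix for shapes X."""
    X = X_init.copy()
    stages = []
    for _ in range(n_iter):
        Y = features(X)
        D = X_true - X                          # remaining offset delta_X
        A, *_ = np.linalg.lstsq(Y, D, rcond=None)
        stages.append(A)
        X = X + Y @ A                           # X_{t+1} = X_t + delta_X_t
    return stages, X

# Toy check on a linearly solvable problem: with identity features the
# cascade should land exactly on the target shapes after one stage.
rng = np.random.default_rng(4)
X_init = rng.normal(size=(30, 4))
X_true = X_init @ (np.eye(4) + 0.1 * rng.normal(size=(4, 4)))
stages, X_final = train_sdm(lambda X: X, X_init, X_true)
```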
It is worth noting that in the course of the successive iterations, the position error between the group of located face feature points and the group of manually calibrated face feature points in the photo samples is corrected step by step; when the SDM algorithm has converged, this error is at its minimum. At that point the training of the first feature point correction model is complete, and the location prediction matrices computed after each iteration in the first feature point correction model can be used to perform face feature point positioning on a target picture provided by a user.
The number of iterations performed before SDM convergence when training the first feature point correction model is not particularly limited in this disclosure. For example, based on engineering experience, 4 iterations are usually needed in face feature point positioning applications, so the first feature point correction model can provide the 4 location prediction matrices A_0 through A_3.
The detailed process of training the first feature point correction model has been described above. This model is trained with the face region as the initial region, based on a group of initial feature points evenly distributed around the facial contour in the photo samples.
The trained first feature point correction model can be used to correct the coordinates of the initial feature points set on a target picture, obtaining the first-correction feature point coordinates and thus realizing precise positioning of the face feature points. The target picture is the photo on which the user needs to perform face feature point positioning.
When performing face feature point positioning on a target picture, a fast face detection technique (for example, a mature face detector such as AdaBoost) can be used to detect the face region of the target picture and obtain an initial face region, and the initial feature points are then set in this face region.
When setting the initial feature points in the face region detected in the target picture, they can still be set according to the coordinate ratios of the initial feature points within the face region, or according to the computed coordinate averages of the calibrated face feature points in each photo sample; the detailed process is not repeated here.
After the initial feature points have been set in the face region detected in the target picture, the corresponding image texture feature vector Y_0 can be extracted for this group of initial feature points X_0 set in the target picture, and the extracted image feature vector Y_0 can then be iterated with the location prediction matrices A_n provided by the trained first feature point correction model, so as to correct the initial feature points X_0 in the target picture for the first time and obtain the first-correction feature point coordinates.
When iterating the image texture feature vector Y_0 of the target picture with the location prediction matrices A_n provided by the trained first feature point correction model, suppose the model provides the 4 location prediction matrices A_0 through A_3; then 4 matrix multiplications will be performed. First, a matrix multiplication with A_0 is applied to the image texture feature vector Y_0 as the first iteration, yielding a group of first intermediate feature point coordinates; then a matrix multiplication with A_1 is applied to the computed first intermediate feature point coordinates as the second iteration, yielding a group of second intermediate feature point coordinates; next, a matrix multiplication with A_2 is applied to the computed second intermediate feature point coordinates as the third iteration, yielding the third intermediate feature point coordinates; finally, after the third iteration is complete, a matrix multiplication with A_3 is applied to the computed third intermediate feature point coordinates, yielding the first-correction feature point coordinates, at which point the iteration is complete.
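The four matrix multiplications described above amount to a small inference loop over the trained stage matrices. A minimal sketch, assuming a `features` callback and pre-trained stages (the identity-feature, single-stage toy below is purely illustrative):

```python
import numpy as np

def apply_sdm(features, X0, stages):
    """Inference sketch: each trained matrix A_t moves the current shape
    estimate by features(X) @ A_t, with features re-extracted per stage,
    mirroring the successive matrix multiplications described above."""
    X = X0.copy()
    for A in stages:
        X = X + features(X) @ A
    return X

# Toy run: with identity features and a single stage A = I,
# every coordinate simply doubles.
X0 = np.ones((2, 3))
out = apply_sdm(lambda X: X, X0, [np.eye(3)])
```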
In this embodiment, when the first feature point correction model performs face feature point positioning on a target picture, the detected face frame serves as the initial region, so the positioning precision depends heavily on the position of the initial frame. When the initial frame lies inside the actual face, the variation inside the face is small and the positioning result after the SDM iterations will be relatively good; when the initial frame lies outside the actual face, the variation of the external background may be large, which makes the SDM iteration result inaccurate. Therefore, to improve the positioning precision, after the first feature point correction model has finished positioning on the target picture, a second correction can also be performed on the multiple first-correction feature points obtained from the matrix multiplications, yielding a predetermined number of second-correction feature point coordinates.
When performing the second correction on the above first-correction feature points, central feature point identification can be performed on the multiple first-correction feature points to obtain at least one central feature point coordinate; coordinate mapping is then performed on the first-correction feature points according to the mapping relations between the central feature point coordinates and the second-correction feature point coordinates, yielding a predetermined number of second-correction feature point coordinates.
In one implementation shown in this embodiment, the central feature points can be the eyeball centers, and the central feature point coordinates can then be the coordinates of the eyeball centers of the two eyes.
When the central feature points are the eyeball centers, and the eyeball centers are identified based on the first-correction feature points, the eyeball centers can be located by a preset eyeball location algorithm that takes the coordinates of the first-correction feature points as auxiliary parameters and recognizes the comparatively rich texture features of the eyeball center points. The preset eyeball location algorithm is not particularly limited in this embodiment; those skilled in the art can refer to implementations in the related art.
After the coordinates of the two eyeball centers have been identified based on the coordinates of the first-correction feature points, coordinate mapping can be performed on the first-correction feature points based on the mapping relations between the eyeball center coordinates and the second-correction feature point coordinates, thereby applying the second correction to the first-correction feature points and obtaining the coordinates of the predetermined number of second-correction feature points.
The mapping relations between the eyeball center coordinates and the second-correction feature point coordinates can be characterized by a preset feature point mapping function, which can be learned based on the relative distances between the eyeball centers and each manually labeled feature point in the predetermined number of photo samples.
For the predetermined number of photo samples, the size and range of the face region differ from photo to photo, yet the distances between the eyeball centers of the two eyes and each manually labeled feature point remain relatively constant across photo samples. Therefore, when manually labeling feature points on the photo samples, the distances from the eyeball centers of the two eyes to each labeled feature point can be measured for each photo sample; the measured data are then used as fitting data, and the mapping relations between the coordinates of the eyeball centers and the coordinates of the other labeled feature points are learned by linear fitting. Coordinate mapping can then be applied to the coordinates of the first-correction feature points according to the learned mapping relations.
Since the distances between the eyeball centers of the two eyes and each manually labeled feature point are relatively constant, performing coordinate mapping on the coordinates of the first-correction feature points through the feature point mapping function realizes the second correction of the first-correction feature points and yields the coordinates of the predetermined number of second-correction feature points, so the positioning precision of the face feature points can be improved.
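A hedged sketch of how such a mapping function could be learned by linear fitting: landmark coordinates are regressed from the two eyeball-center coordinates via an affine least-squares fit over the labeled samples. The affine form, the bias column, and all dimensions are assumptions; the patent specifies only that the mapping is learned from relative distances by linear fitting:

```python
import numpy as np

# Fit: second-correction landmark coordinates regressed from the two
# eyeball centers, with a bias column so the fit is affine.
rng = np.random.default_rng(5)
centers = rng.normal(size=(40, 4))              # (xl, yl, xr, yr) per sample
W_true = rng.normal(size=(5, 6))                # assumed true map, 3 landmarks
C1 = np.hstack([centers, np.ones((40, 1))])     # append bias column
marks = C1 @ W_true                             # labeled landmark coordinates
W, *_ = np.linalg.lstsq(C1, marks, rcond=None)  # learned mapping function
mapped = C1 @ W                                 # second-correction coordinates
```

At positioning time the same `C1 @ W` product maps freshly identified eyeball centers to the corrected landmark positions.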
In this embodiment, after the second correction has been applied to the first-correction feature points based on the feature point mapping function, the resulting second-correction feature point coordinates can be corrected once more based on a second feature point correction model, yielding the final-correction feature point coordinates.
The second feature point correction model can be a projection matrix model trained on the mapping association between the image texture features and offsets of the second-correction feature points and the image texture features and offsets of the final-correction feature points; it can be used to correct the coordinates of the second-correction feature points again, obtaining the coordinates of the final-correction feature points.
For example, the second feature point correction model can still be a projection matrix model based on the SDM algorithm. As previously mentioned, the first feature point correction model is trained with the face region as the initial region, based on the initial feature points calibrated on the face regions in the predetermined number of photo samples. Since the coordinates of the second-correction feature points are obtained by correction through the mapping relations between the eyeball centers of the two eyes and the second-correction feature points, the second feature point correction model can be trained with the region formed by the eyeball centers of the two eyes as the initial region, based on the initial feature points calibrated within that region in the predetermined number of photo samples.
The following describes the training process of the second feature point correction model, taking the second feature point correction model as a projection matrix model based on the SDM algorithm.
When training the second feature point correction model, the same photo samples used for training the first feature point correction model can still be adopted. First, the eyeball centers of the two eyes in all photo samples are calibrated as the central feature points; after the eyeball centers have been calibrated, a rectangular frame can be generated according to the calibrated eyeball centers, and with this rectangular frame as the initial region, a group of corresponding initial feature points is set in this initial region according to the feature points already calibrated in the photo samples.
When setting the initial feature points in this initial region, they can still be set according to the coordinate ratios of the initial feature points within this initial region, where the coordinate ratios can likewise be obtained by performing calibration measurements on the above initial regions in the predetermined number of photo samples. For example, in the process of manually calibrating the feature points of all photo samples, the coordinate ratio of each feature point within the above initial region can be measured; after all photo samples have been calibrated, the measured coordinate ratio data of each feature point within the initial region across all photo samples can be analyzed, a suitable coordinate ratio can be determined for each feature point in this initial region (for example by taking the average), and this coordinate ratio can then be used as the coordinate ratio for setting the initial feature points.
After the initial feature points have been set, the image feature vectors Y_n corresponding to these initial feature points can be extracted, and the location prediction matrices A_n can be computed. On the one hand, for the initial feature points X_0 set in all photo samples (X_0 again denoting one group of set initial feature points), the corresponding image feature vector Y_0 can be extracted. On the other hand, for the positions of the initial feature points set in each photo sample, the offset delta_X_0 between them and the manually labeled face feature points within the above initial region in all photo samples can also be computed, where delta_X_0 = X_* - X_0; here X_0 denotes the position coordinates of the initial feature points set in all photo samples, and X_* denotes the position coordinates of the manually labeled face feature points within the above initial region in all photo samples. After extracting the image feature vector Y_0 corresponding to the initial feature points set in each picture, and computing the offset delta_X_0 between the initial feature points set in each photo sample and the manually labeled face feature points within the above initial region in all photo samples, the location prediction matrix A_0 can be learned by linear fitting based on the linear relationship that exists between the offset delta_X_0 and the initial feature points X_0.
When learning the location prediction matrix A_0 by linear fitting, least-squares linear fitting can still be adopted; the detailed process is not repeated here, and those skilled in the art can implement it equivalently with reference to the training process of the first feature point correction model introduced above.
Suppose that in the process of training the second feature point correction model there are 4 iterations in total before the SDM algorithm converges; then the second feature point correction model can provide the 4 location prediction matrices A_0 through A_3.
The training process of the second feature point correction model has been described above. This model takes the eyeball centers of the two eyes as the initial region and is trained on a group of initial feature points within that region in the photo samples. The trained second feature point correction model can be used to correct the coordinates of the second-correction feature points once more, obtaining the final-correction feature point coordinates and thus improving the positioning precision of the face feature points.
When correcting the coordinates of the second-correction feature points according to the second feature point correction model, the corresponding image texture feature vector Y_0 can be extracted for the group of second-correction feature points X_0 obtained through the feature point mapping function, and the extracted image feature vector Y_0 can then be iterated with the location prediction matrices A_n provided by the trained second feature point correction model, so as to correct the second-correction feature points X_0 once more and obtain the final-correction feature point coordinates.
When iterating the image texture feature vector Y_0 of the second-correction feature points with the location prediction matrices A_n provided by the trained second feature point correction model, suppose the model again provides the 4 location prediction matrices A_0 through A_3; then 4 matrix multiplications will be performed. First, a matrix multiplication with A_0 is applied to the image texture feature vector Y_0 as the first iteration, yielding a group of first intermediate feature point coordinates; then a matrix multiplication with A_1 is applied to the computed first intermediate feature point coordinates as the second iteration, yielding a group of second intermediate feature point coordinates; next, a matrix multiplication with A_2 is applied to the computed second intermediate feature point coordinates as the third iteration, yielding the third intermediate feature point coordinates; finally, after the third iteration is complete, a matrix multiplication with A_3 is applied to the computed third intermediate feature point coordinates, yielding the final-correction feature point coordinates, at which point the iteration is complete.
After the image texture feature vector Y_0 of the second-correction feature points has been iterated with the location prediction matrices A_n provided by the trained second feature point correction model, the final-correction feature point coordinates obtained are the final result of face feature point positioning on the target picture.
As the above description shows, in this embodiment the initial feature points set on the target picture are corrected three times, through the first feature point correction model, the preset feature point mapping function, and the second feature point correction model, so the positioning precision of the face feature points can be improved significantly.
In the above embodiment of the present disclosure, the initial feature point coordinates are corrected by the first feature point correction model to obtain the first-correction feature point coordinates; central feature point identification is performed on the multiple first-correction feature point coordinates to obtain at least one central feature point coordinate; and coordinate mapping is then performed on the multiple first-correction feature point coordinates according to the feature point mapping function to obtain multiple second-correction feature point coordinates. Since the feature point mapping function represents the mapping relation from the central feature points to the second-correction feature point coordinates, and the central feature points are more accurate feature points obtained by identification based on the first-correction feature point coordinates, the positioning precision of the face feature points can be improved.
In the above embodiment of the present disclosure, the multiple second-correction feature point coordinates are corrected by the second feature point correction model to obtain multiple final-correction feature point coordinates; since the second-correction feature points are corrected again by the second feature point correction model, the positioning precision of the face feature points can be improved further.
In the above embodiment of the present disclosure, face region detection is performed on the target picture to obtain the face region, and the multiple initial feature point coordinates within the face region are then obtained according to the coordinate ratios of the multiple initial feature points; since the coordinate ratios of the initial feature points are obtained by performing calibration measurements on the face regions in multiple pictures, the initial feature points can be set for the target picture quickly and accurately.
Corresponding to the foregoing embodiments of the face feature point positioning method, the present disclosure further provides embodiments of an apparatus.
Fig. 3 is a schematic block diagram of a face feature point positioning apparatus according to an exemplary embodiment.
As shown in Fig. 3, a face feature point positioning apparatus 300 according to an exemplary embodiment comprises a first correction module 301, an identification module 302, and a mapping module 303, wherein:
The first correction module 301 is configured to correct the initial feature point coordinates according to the first feature point correction model to obtain the first-correction feature point coordinates;
The identification module 302 is configured to perform central feature point identification on the multiple first-correction feature point coordinates obtained by the first correction module 301, to obtain at least one central feature point coordinate;
The mapping module 303 is configured to perform coordinate mapping, according to the feature point mapping function, on the multiple first-correction feature point coordinates obtained by the first correction module 301, to obtain multiple second-correction feature point coordinates, where the feature point mapping function represents the mapping relation from the central feature points identified by the identification module 302 to the second-correction feature point coordinates.
In the above embodiment, the initial feature point coordinates are corrected by the first feature point correction model to obtain the first-correction feature point coordinates; central feature point identification is performed on the multiple first-correction feature point coordinates to obtain at least one central feature point coordinate; and coordinate mapping is then performed on the multiple first-correction feature point coordinates according to the feature point mapping function to obtain multiple second-correction feature point coordinates. Since the feature point mapping function represents the mapping relation from the central feature points to the second-correction feature point coordinates, and the central feature points are more accurate feature points obtained by identification based on the first-correction feature point coordinates, the positioning precision of the face feature points can be improved.
It should be noted that in the above embodiment the first feature point correction model represents the mapping relations between the features and offsets of the multiple initial feature points and the features and offsets of the multiple first-correction feature points, and is a projection matrix model. The number of initial feature points is 44 or 98.
Referring to Fig. 4, which is a block diagram of another apparatus of the present disclosure according to an exemplary embodiment, this embodiment builds on the foregoing embodiment shown in Fig. 3, and the apparatus 300 can further comprise a second correction module 304, wherein:
The second correction module 304 is configured to correct, according to the second feature point correction model, the multiple second-correction feature point coordinates obtained through mapping by the mapping module 303, to obtain multiple final-correction feature point coordinates.
In the above embodiment, the multiple second-correction feature point coordinates are corrected by the second feature point correction model to obtain multiple final-correction feature point coordinates; since the second-correction feature points are corrected again by the second feature point correction model, the positioning precision of the face feature points can be improved further.
It should be noted that, in the embodiment above, described second feature point correction model is the feature of multiple second-order correction unique point, side-play amount and multiple final feature of correction unique point, the mapping relations of side-play amount, and described second feature point correction model is projection matrix model.
Referring to Fig. 5, Fig. 5 is a block diagram of another apparatus according to an exemplary embodiment of the disclosure. Based on the embodiment shown in Fig. 4, the apparatus 300 may further comprise a detection module 305 and an acquisition module 306, wherein:
The detection module 305 is configured to perform face region detection on a target picture to obtain a face region;
The acquisition module 306 is configured to obtain the multiple initial feature point coordinates in the face region according to the coordinate proportions of the multiple initial feature points.
In the embodiment above, face region detection is performed on the target picture to obtain a face region, and the multiple initial feature point coordinates in the face region are then obtained according to the coordinate proportions of the multiple initial feature points. Because the coordinate proportions of the initial feature points are obtained by performing calibration measurements on face regions in multiple pictures, the initial feature points can be set for the target picture quickly and accurately.
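Placing initial points from calibrated coordinate proportions can be sketched as follows. The face region and the proportion table are invented for illustration; the disclosure only states that the proportions come from calibration measurements on picture samples.

```python
def place_initial_points(region, proportions):
    # region: (left, top, width, height) of the detected face region.
    # proportions: per-point (px, py) fractions in [0, 1], measured in advance
    # on calibrated picture samples (values below are made up for illustration).
    left, top, w, h = region
    return [(left + px * w, top + py * h) for px, py in proportions]

face = (100, 50, 200, 240)          # hypothetical detected face region
props = [(0.30, 0.40),              # rough left-eye area proportion
         (0.70, 0.40),              # rough right-eye area proportion
         (0.50, 0.65)]              # rough nose-tip area proportion
points = place_initial_points(face, props)
```

Because the proportions are precomputed, initialization is a handful of multiplications per point, which is why the disclosure calls it fast.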
It should be noted that, in the embodiment above, the coordinate proportions of the initial feature points are obtained by performing calibration measurements on face regions in multiple picture samples. The central feature point coordinate is an eyeball center point coordinate. The structures of the detection module 305 and the acquisition module 306 shown in the apparatus embodiment of Fig. 5 may also be included in the apparatus embodiment of Fig. 3, which is not limited by this disclosure.
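Since the disclosure names the eyeball center as the central feature point, one plausible shape for a mapping function keyed on eyeball centers is to re-express every coordinate in a frame anchored between the two eyes. This is a hedged sketch: the inter-ocular normalization and all landmark values are invented for illustration, not taken from the patent.

```python
def map_relative_to_eyes(coords, left_eye, right_eye):
    # Anchor the frame at the midpoint of the two eyeball centers and
    # normalize by the inter-ocular distance, so the mapped coordinates are
    # invariant to face position and scale in the picture.
    mid_x = (left_eye[0] + right_eye[0]) / 2
    mid_y = (left_eye[1] + right_eye[1]) / 2
    inter_ocular = ((right_eye[0] - left_eye[0]) ** 2 +
                    (right_eye[1] - left_eye[1]) ** 2) ** 0.5
    return [((x - mid_x) / inter_ocular, (y - mid_y) / inter_ocular)
            for x, y in coords]

pts = [(120.0, 130.0), (180.0, 130.0), (150.0, 170.0)]
mapped = map_relative_to_eyes(pts, left_eye=(120.0, 130.0),
                              right_eye=(180.0, 130.0))
```

Anchoring on the eyeball centers is attractive because they are among the most reliably located points after the primary correction, which matches the patent's argument for improved accuracy.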
Referring to Fig. 6, Fig. 6 is a block diagram of another apparatus according to an exemplary embodiment of the disclosure. Based on the embodiment shown in Fig. 3, the first correction module 301 may comprise a first calculating submodule 301A, wherein:
The first calculating submodule 301A is configured to: perform matrix multiplication on the multiple initial feature point coordinates according to the first feature point correction model to obtain multiple first initial feature point coordinates;
perform matrix multiplication on the multiple first initial feature point coordinates according to the first feature point correction model to obtain multiple second initial feature point coordinates;
……
perform matrix multiplication on the multiple (N-1)-th initial feature point coordinates according to the first feature point correction model to obtain the multiple primary correction feature point coordinates, where N is an integer greater than or equal to 2.
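The N-round cascade described above can be sketched as a loop: each round applies the correction model to the output of the previous round. A fixed 2x2 matrix plus offset stands in for the patent's projection matrix model, and the values are illustrative only.

```python
def mat_apply(matrix, offset, point):
    # One matrix-multiplication correction step: p' = M*p + b.
    (a, b), (c, d) = matrix
    x, y = point
    return (a * x + b * y + offset[0], c * x + d * y + offset[1])

def cascade_correct(coords, matrix, offset, n_rounds=2):
    # N >= 2 rounds: round k refines the coordinates produced by round k-1,
    # mirroring the iterated matrix multiplication of the cascade above.
    for _ in range(n_rounds):
        coords = [mat_apply(matrix, offset, p) for p in coords]
    return coords

M = ((0.5, 0.0), (0.0, 0.5))   # illustrative: shrink toward origin each round
b = (10.0, 10.0)               # illustrative offset
out = cascade_correct([(40.0, 40.0)], M, b, n_rounds=2)
```

With this contractive choice of matrix, repeated rounds converge toward a fixed point, which is the intuition behind running the correction more than once.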
It should be noted that the structure of the first calculating submodule 301A shown in the apparatus embodiment of Fig. 6 may also be included in the apparatus embodiments of Figs. 4-5, which is not limited by this disclosure.
Referring to Fig. 7, Fig. 7 is a block diagram of another apparatus according to an exemplary embodiment of the disclosure. Based on the embodiment shown in Fig. 4, the second correction module 304 may comprise a second calculating submodule 304A, wherein:
The second calculating submodule 304A is configured to:
perform matrix multiplication on the multiple secondary correction feature point coordinates according to the second feature point correction model to obtain first final feature point coordinates;
perform matrix multiplication on the first final feature point coordinates calculated by the second calculating submodule according to the second feature point correction model to obtain second final feature point coordinates;
……
perform matrix multiplication on the (M-1)-th final feature point coordinates according to the second feature point correction model to obtain the final correction feature point coordinates, where M is an integer greater than or equal to 2.
It should be noted that the structure of the second calculating submodule 304A shown in the apparatus embodiment of Fig. 7 may also be included in the apparatus embodiments of Figs. 4-6, which is not limited by this disclosure. For the implementation of the functions and effects of the modules in the apparatus above, refer to the implementation of the corresponding steps in the method above; details are not repeated here.
As for the apparatus embodiments, since they substantially correspond to the method embodiments, reference may be made to the description of the method embodiments for the relevant parts. The apparatus embodiments described above are merely illustrative: the modules described as separate components may or may not be physically separate, and the components displayed as modules may or may not be physical modules, i.e. they may be located in one place or distributed over multiple network modules. Some or all of the modules may be selected according to actual needs to achieve the objectives of the disclosed solution, which those of ordinary skill in the art can understand and implement without creative effort.
Accordingly, the disclosure also provides a face feature point positioning apparatus, the apparatus comprising:
a processor; and
a memory for storing processor-executable instructions;
wherein the processor is configured to:
correct initial feature point coordinates according to a first feature point correction model to obtain primary correction feature point coordinates;
perform central feature point identification on the multiple primary correction feature point coordinates to obtain at least one central feature point coordinate; and
perform coordinate mapping on the multiple primary correction feature point coordinates according to a feature point mapping function to obtain multiple secondary correction feature point coordinates, wherein the feature point mapping function represents the mapping relation between the central feature point and the secondary correction feature point coordinates.
Accordingly, the disclosure also provides a server, the server comprising a memory and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by one or more processors, the one or more programs containing instructions for performing the following operations:
correcting initial feature point coordinates according to a first feature point correction model to obtain primary correction feature point coordinates;
performing central feature point identification on the multiple primary correction feature point coordinates to obtain at least one central feature point coordinate; and
performing coordinate mapping on the multiple primary correction feature point coordinates according to a feature point mapping function to obtain multiple secondary correction feature point coordinates, wherein the feature point mapping function represents the mapping relation between the central feature point and the secondary correction feature point coordinates.
Fig. 8 is a block diagram of an apparatus 8000 for face feature point positioning according to an exemplary embodiment. For example, the apparatus 8000 may be provided as a server. Referring to Fig. 8, the apparatus 8000 comprises a processing component 8022, which further comprises one or more processors, and memory resources represented by a memory 8032 for storing instructions executable by the processing component 8022, such as application programs. The application programs stored in the memory 8032 may comprise one or more modules, each corresponding to a set of instructions. In addition, the processing component 8022 is configured to execute the instructions so as to perform the face feature point positioning method described above.
The apparatus 8000 may further comprise a power component 8026 configured to manage the power of the apparatus 8000, a wired or wireless network interface 8050 configured to connect the apparatus 8000 to a network, and an input/output (I/O) interface 8058. The apparatus 8000 may operate an operating system stored in the memory 8032, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
Those skilled in the art will easily conceive of other embodiments of the disclosure after considering the specification and practicing the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure that follow its general principles and include common knowledge or customary technical means in the art not disclosed herein. The specification and embodiments are to be regarded as exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.
It should be understood that the disclosure is not limited to the precise structures described above and illustrated in the accompanying drawings, and that various modifications and changes may be made without departing from its scope. The scope of the disclosure is limited only by the appended claims.
Claims (21)
1. A face feature point positioning method, characterized in that the method comprises:
correcting initial feature point coordinates according to a first feature point correction model to obtain primary correction feature point coordinates;
performing central feature point identification on the multiple primary correction feature point coordinates to obtain at least one central feature point coordinate; and
performing coordinate mapping on the multiple primary correction feature point coordinates according to a feature point mapping function to obtain multiple secondary correction feature point coordinates, wherein the feature point mapping function represents the mapping relation between the central feature point and the secondary correction feature point coordinates.
2. The method according to claim 1, characterized in that the method further comprises:
correcting the multiple secondary correction feature point coordinates according to a second feature point correction model to obtain multiple final correction feature point coordinates.
3. The method according to claim 1, characterized in that the method further comprises:
performing face region detection on a target picture to obtain a face region; and
obtaining the multiple initial feature point coordinates in the face region according to the coordinate proportions of the multiple initial feature points.
4. The method according to claim 3, characterized in that the coordinate proportions of the initial feature points are obtained by performing calibration measurements on face regions in multiple picture samples.
5. The method according to claim 1, characterized in that the central feature point coordinate is an eyeball center point coordinate.
6. The method according to claim 1, characterized in that correcting the initial feature point coordinates according to the first feature point correction model to obtain the primary correction feature point coordinates comprises:
performing matrix multiplication on the multiple initial feature point coordinates according to the first feature point correction model to obtain multiple first initial feature point coordinates;
performing matrix multiplication on the multiple first initial feature point coordinates according to the first feature point correction model to obtain multiple second initial feature point coordinates;
……
performing matrix multiplication on the multiple (N-1)-th initial feature point coordinates according to the first feature point correction model to obtain the multiple primary correction feature point coordinates, where N is an integer greater than or equal to 2.
7. The method according to claim 2, characterized in that correcting the multiple secondary correction feature point coordinates according to the second feature point correction model to obtain the multiple final correction feature point coordinates comprises:
performing matrix multiplication on the multiple secondary correction feature point coordinates according to the second feature point correction model to obtain first final feature point coordinates;
performing matrix multiplication on the first final feature point coordinates according to the second feature point correction model to obtain second final feature point coordinates;
……
performing matrix multiplication on the (M-1)-th final feature point coordinates according to the second feature point correction model to obtain the final correction feature point coordinates, where M is an integer greater than or equal to 2.
8. The method according to claim 1, characterized in that the first feature point correction model represents the mapping relation between the features and offsets of the multiple initial feature points and the features and offsets of the multiple primary correction feature points, and the first feature point correction model is a projection matrix model.
9. The method according to claim 1, characterized in that the second feature point correction model represents the mapping relation between the features and offsets of the multiple secondary correction feature points and the features and offsets of the multiple final correction feature points, and the second feature point correction model is a projection matrix model.
10. The method according to claim 1, characterized in that the quantity of the initial feature points is 44 or 98.
11. A face feature point positioning apparatus, characterized in that the apparatus comprises:
a first correction module, configured to correct initial feature point coordinates according to a first feature point correction model to obtain primary correction feature point coordinates;
an identification module, configured to perform central feature point identification on the multiple primary correction feature point coordinates obtained by the first correction module to obtain at least one central feature point coordinate; and
a mapping module, configured to perform coordinate mapping, according to a feature point mapping function, on the multiple primary correction feature point coordinates obtained by the first correction module to obtain multiple secondary correction feature point coordinates, wherein the feature point mapping function represents the mapping relation between the central feature point identified by the identification module and the secondary correction feature point coordinates.
12. The apparatus according to claim 11, characterized in that the apparatus further comprises:
a second correction module, configured to correct, according to a second feature point correction model, the multiple secondary correction feature point coordinates obtained by the mapping module, so as to obtain multiple final correction feature point coordinates.
13. The apparatus according to claim 11, characterized in that the apparatus further comprises:
a detection module, configured to perform face region detection on a target picture to obtain a face region; and
an acquisition module, configured to obtain the multiple initial feature point coordinates in the face region according to the coordinate proportions of the multiple initial feature points.
14. The apparatus according to claim 13, characterized in that the coordinate proportions of the initial feature points are obtained by performing calibration measurements on face regions in multiple picture samples.
15. The apparatus according to claim 11, characterized in that the central feature point coordinate is an eyeball center point coordinate.
16. The apparatus according to claim 11, characterized in that the first correction module comprises:
a first calculating submodule, configured to: perform matrix multiplication on the multiple initial feature point coordinates according to the first feature point correction model to obtain multiple first initial feature point coordinates;
perform matrix multiplication on the multiple first initial feature point coordinates according to the first feature point correction model to obtain multiple second initial feature point coordinates;
……
perform matrix multiplication on the multiple (N-1)-th initial feature point coordinates according to the first feature point correction model to obtain the multiple primary correction feature point coordinates, where N is an integer greater than or equal to 2.
17. The apparatus according to claim 12, characterized in that the second correction module comprises:
a second calculating submodule, configured to: perform matrix multiplication on the multiple secondary correction feature point coordinates according to the second feature point correction model to obtain first final feature point coordinates;
perform matrix multiplication on the first final feature point coordinates calculated by the second calculating submodule according to the second feature point correction model to obtain second final feature point coordinates;
……
perform matrix multiplication on the (M-1)-th final feature point coordinates according to the second feature point correction model to obtain the final correction feature point coordinates, where M is an integer greater than or equal to 2.
18. The apparatus according to claim 11, characterized in that the first feature point correction model represents the mapping relation between the features and offsets of the multiple initial feature points and the features and offsets of the multiple primary correction feature points, and the first feature point correction model is a projection matrix model.
19. The apparatus according to claim 11, characterized in that the second feature point correction model represents the mapping relation between the features and offsets of the multiple secondary correction feature points and the features and offsets of the multiple final correction feature points, and the second feature point correction model is a projection matrix model.
20. The apparatus according to claim 11, characterized in that the quantity of the initial feature points is 44 or 98.
21. A face feature point positioning apparatus, characterized by comprising:
a processor; and
a memory for storing processor-executable instructions;
wherein the processor is configured to:
correct initial feature point coordinates according to a first feature point correction model to obtain primary correction feature point coordinates;
perform central feature point identification on the multiple primary correction feature point coordinates to obtain at least one central feature point coordinate; and
perform coordinate mapping on the multiple primary correction feature point coordinates according to a feature point mapping function to obtain multiple secondary correction feature point coordinates, wherein the feature point mapping function represents the mapping relation between the central feature point and the secondary correction feature point coordinates.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201510641854.2A CN105139007B (en) | 2015-09-30 | 2015-09-30 | Man face characteristic point positioning method and device |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201510641854.2A CN105139007B (en) | 2015-09-30 | 2015-09-30 | Man face characteristic point positioning method and device |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN105139007A true CN105139007A (en) | 2015-12-09 |
| CN105139007B CN105139007B (en) | 2019-04-16 |
Family
ID=54724350
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201510641854.2A Active CN105139007B (en) | 2015-09-30 | 2015-09-30 | Man face characteristic point positioning method and device |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN105139007B (en) |
Cited By (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN107194980A (en) * | 2017-05-18 | 2017-09-22 | 成都通甲优博科技有限责任公司 | Faceform's construction method, device and electronic equipment |
| CN108875646A (en) * | 2018-06-22 | 2018-11-23 | 苏州市启献智能科技有限公司 | A kind of real face image and identity card registration is dual compares authentication method and system |
| CN109903297A (en) * | 2019-03-08 | 2019-06-18 | 数坤(北京)网络科技有限公司 | Coronary artery dividing method and system based on disaggregated model |
| CN110782408A (en) * | 2019-10-18 | 2020-02-11 | 杭州趣维科技有限公司 | Intelligent beautifying method and system based on convolutional neural network |
| CN112629546A (en) * | 2019-10-08 | 2021-04-09 | 宁波吉利汽车研究开发有限公司 | Position adjustment parameter determining method and device, electronic equipment and storage medium |
| WO2021208767A1 (en) * | 2020-04-13 | 2021-10-21 | 百果园技术(新加坡)有限公司 | Facial contour correction method and apparatus, and device and storage medium |
| US11475708B2 (en) * | 2018-08-10 | 2022-10-18 | Zhejiang Uniview Technologies Co., Ltd. | Face feature point detection method and device, equipment and storage medium |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN101377814A (en) * | 2007-08-27 | 2009-03-04 | 索尼株式会社 | Face image processing apparatus, face image processing method, and computer program |
| US20130129141A1 (en) * | 2010-08-20 | 2013-05-23 | Jue Wang | Methods and Apparatus for Facial Feature Replacement |
| CN104077585A (en) * | 2014-05-30 | 2014-10-01 | 小米科技有限责任公司 | Image correction method and device and terminal |
| CN104182718A (en) * | 2013-05-21 | 2014-12-03 | 腾讯科技(深圳)有限公司 | Method and device for locating facial feature points |
-
2015
- 2015-09-30 CN CN201510641854.2A patent/CN105139007B/en active Active
Patent Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN101377814A (en) * | 2007-08-27 | 2009-03-04 | 索尼株式会社 | Face image processing apparatus, face image processing method, and computer program |
| US20130129141A1 (en) * | 2010-08-20 | 2013-05-23 | Jue Wang | Methods and Apparatus for Facial Feature Replacement |
| CN104182718A (en) * | 2013-05-21 | 2014-12-03 | 腾讯科技(深圳)有限公司 | Method and device for locating facial feature points |
| CN104077585A (en) * | 2014-05-30 | 2014-10-01 | 小米科技有限责任公司 | Image correction method and device and terminal |
Cited By (9)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN107194980A (en) * | 2017-05-18 | 2017-09-22 | 成都通甲优博科技有限责任公司 | Faceform's construction method, device and electronic equipment |
| CN108875646A (en) * | 2018-06-22 | 2018-11-23 | 苏州市启献智能科技有限公司 | A kind of real face image and identity card registration is dual compares authentication method and system |
| CN108875646B (en) * | 2018-06-22 | 2022-09-27 | 青岛民航凯亚系统集成有限公司 | Method and system for double comparison and authentication of real face image and identity card registration |
| US11475708B2 (en) * | 2018-08-10 | 2022-10-18 | Zhejiang Uniview Technologies Co., Ltd. | Face feature point detection method and device, equipment and storage medium |
| CN109903297A (en) * | 2019-03-08 | 2019-06-18 | 数坤(北京)网络科技有限公司 | Coronary artery dividing method and system based on disaggregated model |
| CN112629546A (en) * | 2019-10-08 | 2021-04-09 | 宁波吉利汽车研究开发有限公司 | Position adjustment parameter determining method and device, electronic equipment and storage medium |
| CN112629546B (en) * | 2019-10-08 | 2023-09-19 | 宁波吉利汽车研究开发有限公司 | Position adjustment parameter determining method and device, electronic equipment and storage medium |
| CN110782408A (en) * | 2019-10-18 | 2020-02-11 | 杭州趣维科技有限公司 | Intelligent beautifying method and system based on convolutional neural network |
| WO2021208767A1 (en) * | 2020-04-13 | 2021-10-21 | 百果园技术(新加坡)有限公司 | Facial contour correction method and apparatus, and device and storage medium |
Also Published As
| Publication number | Publication date |
|---|---|
| CN105139007B (en) | 2019-04-16 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN105139007A (en) | Positioning method and apparatus of face feature point | |
| CN110580723B (en) | Method for carrying out accurate positioning by utilizing deep learning and computer vision | |
| US11610373B2 (en) | Method of generating three-dimensional model data of object | |
| CN107798685B (en) | Pedestrian's height determines method, apparatus and system | |
| CN112989947B (en) | Method and device for estimating three-dimensional coordinates of key points of human body | |
| US20210082132A1 (en) | Laser sensor-based map generation | |
| US10964057B2 (en) | Information processing apparatus, method for controlling information processing apparatus, and storage medium | |
| CN107990899A (en) | A kind of localization method and system based on SLAM | |
| CN108871311B (en) | Pose determination method and device | |
| US12499579B2 (en) | Method and apparatus for registering devices of autonomous vehicle based on trajectory alignment | |
| US12249183B2 (en) | Apparatus and method for detecting facial pose, image processing system, and storage medium | |
| CN112683169A (en) | Object size measuring method, device, equipment and storage medium | |
| CN113361381A (en) | Human body key point detection model training method, detection method and device | |
| CN109909999A (en) | A kind of method and apparatus obtaining robot TCP coordinate | |
| CN106959105A (en) | Method for calibrating compass and device | |
| CN109389645A (en) | Camera method for self-calibrating, system, camera, robot and cloud server | |
| CN118435247A (en) | Image processing method, device, interactive device, electronic device and storage medium | |
| CN112907669B (en) | Camera pose measurement method and device based on coplanar feature points | |
| AU2016401548A1 (en) | Multi-measurement-mode three-dimensional measurement system and measurement method | |
| CN109858402B (en) | Image detection method, device, terminal and storage medium | |
| CN109213465A (en) | It is a kind of for educating the multi-display identification method and system of operating system | |
| CN109978043B (en) | Target detection method and device | |
| CN106709957B (en) | Method and system, the intelligent electronic device of polyphaser observed object | |
| CN109376409A (en) | A kind of pre-buried map generalization method, apparatus of three-dimensional water power, equipment and storage medium | |
| CN113902910A (en) | Vision measurement method and system |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| C06 | Publication | ||
| PB01 | Publication | ||
| C10 | Entry into substantive examination | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||
| GR01 | Patent grant |