
CN102799871A: Method for tracking and recognizing face

Info

Publication number: CN102799871A
Application number: CN2012102448511A
Authority: CN (China)
Original language: Chinese (zh)
Prior art keywords: face, facial image, temp, people, image
Inventors: 周龙沙, 邵诗强
Current assignee: TCL Corp (the listed assignees may be inaccurate; Google has not performed a legal analysis)
Original assignee: TCL Corp
Application filed by TCL Corp; priority to CN2012102448511A
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)

Landscapes

  • Image Analysis (AREA)

Abstract

The invention pertains to the technical field of pattern recognition and provides a method for tracking and recognizing a face. The method comprises the steps of: first performing face detection on the current video frame image to obtain a face image; computing the deflection angle of the face image; if the deflection angle is within a preset range, comparing the face image against the face feature description obtained in advance by combining Gabor wavelets with LBP and, when the current video frame is the first frame, outputting the comparison result; otherwise, determining the face image to be tracked according to the comparison result and tracking it with a tracking algorithm. The invention solves the problem of face recognition errors caused by a person's varied pose changes during face recognition and makes face recognition more robust under many environment and pose changes.

Description

A method for face tracking and recognition
Technical field
The invention belongs to the field of pattern recognition technology and relates in particular to a method for face tracking and recognition.
Background art
Face recognition refers specifically to computer techniques that identify a person by analyzing and comparing the visual feature information of the face. Most current face recognition technology performs face detection and recognition on single frame images only; it uses spatial information alone and establishes no link between successive frames, which reduces the face recognition rate to some extent. A face in motion is continuous in both time and space, and exploiting this continuity allows face features and recognition to be combined more effectively.
Existing face recognition techniques include Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), Eigenface and others. A method that extracts facial features well is the Gabor wavelet, which characterizes the face at different scales and orientations; this, however, is only a holistic characterization. For fine texture details, Local Binary Patterns (LBP) can better capture the detail features of the face.
The main problem with existing face tracking methods is the following:
The Gabor+LBP face recognition algorithm can recognize faces well to a certain extent. However, when the face pose changes within a certain range, faces can be misrecognized, and when the deflection angle of the face changes greatly, the face may not be detected at all, making recognition impossible.
In summary, present face recognition technology performs detection and recognition on single frame images only, uses spatial information alone, establishes no link between frames, and therefore produces face recognition errors under a person's varied pose changes.
Summary of the invention
The invention provides a method for face tracking and recognition, intended to solve the problem of misrecognizing faces when the face pose changes within a certain range.
In one aspect, a method for face tracking and recognition is provided, used to recognize a detected face image and to track face images undergoing multi-pose, multi-angle changes, the method comprising:
A. obtaining a suitable Gabor kernel window through the Gabor kernel energy; first convolving the Gabor kernel window with the detected face image to obtain filtered images; then applying the Local Binary Pattern (LBP) operator to the filtered images to obtain the face feature description of the face image, so as to build a face image sample library;
B. performing face detection on the current video frame image to obtain a face image;
C. taking the center point coordinate of the detected face image as the center, obtaining a large face image and a small face image, the small face image being the image containing only the central skin-colored block of the face and the large face image being the image that includes some of the environment surrounding the face;
D. computing the deflection angle of the face image from the large face image and the small face image;
E. if the deflection angle is within a preset range, comparing the face feature description obtained through step A with the face image; when the current video frame is the first frame, outputting the comparison result; otherwise, determining the face image to be tracked according to the comparison result and tracking it with a tracking algorithm;
F. if the deflection angle is not within the preset range, acquiring the next video frame image and executing steps B through E in order.
Further, step A specifically comprises:
choosing a suitable Gabor kernel window size;
convolving the face image with the chosen Gabor kernel window to obtain a plurality of multi-scale, multi-orientation filtered face images;
describing the texture features of the filtered images with the Local Binary Pattern operator to obtain a plurality of LBP code maps;
partitioning each LBP code map into blocks and extracting the histogram sequence of each block;
concatenating the histogram sequences of the blocks to obtain the histogram sequence of all LBP code maps.
Further, choosing a suitable Gabor kernel window size specifically comprises:
defining temp as the height of the Gabor kernel, the Gabor kernels being the multi-scale, multi-orientation kernels computed with the Gabor kernel function, the number of kernels being N, N = μ × v, where μ and v are respectively the number of orientations and scales of the Gabor kernels;
splitting the N Gabor kernels into N real-part matrices and N imaginary-part matrices;
for the i-th real-part matrix and the i-th imaginary-part matrix, taking the real-part and imaginary-part matrices of size temp × temp respectively, and summing the element values of each to obtain temp_r1 and temp_i1;
computing the sum of all element values in the i-th real-part matrix, denoted temp_r2;
computing the sum of all element values in the i-th imaginary-part matrix, denoted temp_i2;
computing the energy value p obtained at size temp × temp, p = (temp_r1/temp_r2) + (temp_i1/temp_i2);
if the value of p does not reach the preset Gabor energy threshold, setting temp = temp - 1 and returning to the step of taking, for the i-th real-part matrix and the i-th imaginary-part matrix, the temp × temp real-part and imaginary-part matrices and summing their element values to obtain temp_r1 and temp_i1;
if the value of p reaches the preset Gabor energy threshold, taking the current temp as the height of the most suitable filter window for the i-th Gabor kernel, and setting i = i + 1.
Further, step D specifically comprises:
computing the H component of the face color histogram of the small face image;
using the H component and the large face image, computing a back-projection image I by histogram back projection, the back-projection image I reflecting the skin-color distribution region of the face, where x, y denote coordinates in the back-projection image and I(x, y) is the pixel value of the back-projection image I at (x, y);
computing from I(x, y) the zeroth-order moment M00, the first-order moments M10 and M01, and the second-order moments M20 and M02 of the image;
computing the centroid coordinates of the face image from M00, M10 and M01;
computing the deflection angle of the face image from the centroid coordinates, the zeroth-order moment M00 and the second-order moments M20, M02 and M11.
Further, step E comprises:
recording the recognition status of the face image in the current video frame image;
reading the recognition status of the face image in the previous video frame image;
determining, according to the recognition status of the face image read from the previous video frame image, whether the face image detected in the current video frame is the face image recognized in the previous frame; if so, and the recognition status of the face image in the current video frame is unrecognized, tracking the face image.
Further, before step C, the method also comprises:
obtaining the center point coordinate and the size of the detected face image;
judging, according to the center point coordinate and size, whether the face image has moved to the border of the video frame image; if so, continuing to receive the next input video frame image.
Further, face detection is performed on the current video frame image through the Adaboost algorithm.
Further, the tracking algorithm includes the CAMShift algorithm and particle filter algorithms.
The invention uses the Gabor wavelet transform and the LBP algorithm to obtain the face feature description of the face image and to build the face image sample library; compared with prior-art descriptions of facial features, the combined Gabor+LBP algorithm expresses the features of the face better. When a video frame image is processed, the face feature description of the face image is likewise obtained through the Gabor+LBP algorithm, and the large and small face images are obtained at the same time so that the deflection angle of the face image can be computed. When the deflection angle is within the preset range, the face feature description obtained by Gabor+LBP is matched against the face images in the sample library; when no match is found, a tracking algorithm such as CAMShift can be used to track the detected face image, ensuring that faces are still recognized well under larger pose changes. This solves the face recognition errors caused by a person's varied pose changes during face recognition and guarantees stronger robustness of face recognition under many environment and pose changes.
Description of drawings
Fig. 1 is the flowchart of an embodiment of the face tracking and recognition method of the invention;
Fig. 2 is a schematic diagram of the small face image Face_small and the large face image Face_big (including some of the environment surrounding the face) provided by the method embodiment of the invention;
Fig. 3 is a schematic diagram of the back-projection image of the large face image of Fig. 2;
Fig. 4 is a schematic diagram of the process of determining the face image that needs to be tracked in the method embodiment of the invention;
Fig. 5 is a schematic diagram of the process of determining the deflection angle of the face image provided by the method embodiment of the invention.
Embodiments
To make the objects, technical solutions and advantages of the invention clearer, the invention is further described below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here only explain the invention and are not intended to limit it.
In the embodiments of the invention, face detection is performed on the current video frame image to obtain a face image; the deflection angle of the face image is then computed; if the deflection angle is within a preset range, the face feature description of the face image obtained in advance through Gabor+LBP is compared with the face image and, when the current video frame is the first frame, the comparison result is output; otherwise, the face image to be tracked is determined according to the comparison result and tracked with a tracking algorithm.
The realization of the invention is described in detail below with reference to specific embodiments:
Fig. 1 shows the flow of an embodiment of the face tracking and recognition method of the invention, used to recognize a detected face image and to track face images undergoing multi-pose, multi-angle changes; the details are as follows:
S100: obtaining a suitable Gabor kernel window through the Gabor kernel energy; first convolving the Gabor kernel window with the detected face image to obtain filtered images; then applying the Local Binary Pattern (LBP) operator to the filtered images to obtain the face feature description of the face image, so as to build a face image sample library.
The Gabor wavelet is one kind of wavelet transform. A multi-channel Gabor wavelet filter can extract local information about a target in both the spatial domain and the frequency domain. The Gabor wavelet is chosen because biological researchers have found that facial features are easily affected by illumination, pose and other geometric transformations; recognizing faces directly from the gray values of a grayscale image makes the desired recognition rate hard to reach, whereas the two-dimensional Gabor wavelet transform captures local structure information about spatial position, spatial frequency and orientation selectivity, properties that are all well suited to describing facial image features. The Gabor kernel function can rather accurately describe the receptive fields of simple cells in the mammalian visual cortex and exhibits good characteristics in the selection of local space and orientation.
In the present embodiment, the face image is convolved with the chosen Gabor kernel window to obtain filtered images embodying the facial features, where the Gabor kernel function is defined as follows:
$$\psi_{\mu,v}(z) = \frac{\|k_{\mu,v}\|^2}{\sigma^2}\, e^{-\|k_{\mu,v}\|^2 \|z\|^2 / 2\sigma^2} \left[ e^{i k_{\mu,v} z} - e^{-\sigma^2/2} \right] \qquad (2\text{-}1)$$

Here μ and v define the orientation and scale of the Gabor kernel, z = (x, y), ‖·‖ denotes the norm, and the wave vector is defined in the following form:

$$k_{\mu,v} = k_v e^{i \phi_\mu} \qquad (2\text{-}2)$$

with k_v = k_max / f^v and φ_μ = πμ/8, where k_max is the maximum frequency and f is the spacing factor between kernels in the frequency domain. The Gabor kernels in formula (2-1) are self-similar, since they can all be generated from one mother wavelet by varying the wave vector k_{μ,v} over scale and orientation. Each Gabor kernel is a complex plane wave modulated by a Gaussian envelope; the first term in the square brackets of formula (2-1) determines the oscillatory part of the kernel, and the second term compensates for the DC component. When σ, which expresses the ratio of the Gaussian window width to the wavelength, is large, the DC-compensation term can be neglected.
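As an illustration, the kernel of formula (2-1) can be evaluated numerically. The following sketch builds one (μ, v) kernel on a square grid; the default parameter values k_max = π/2, f = √2 and σ = 2π are conventional choices in the Gabor literature, not values taken from the patent:

```python
import math

def gabor_kernel(mu, v, size, k_max=math.pi / 2, f=math.sqrt(2), sigma=2 * math.pi):
    """Evaluate psi_{mu,v}(z) of Eq. (2-1) on a size x size grid centered at 0.

    k_v = k_max / f**v and phi_mu = pi*mu/8 follow Eq. (2-2); the default
    parameter values are conventional assumptions, not taken from the patent.
    """
    k_v = k_max / (f ** v)
    phi_mu = math.pi * mu / 8
    kx, ky = k_v * math.cos(phi_mu), k_v * math.sin(phi_mu)
    k_sq = k_v * k_v
    half = size // 2
    kernel = []
    for y in range(-half, size - half):
        row = []
        for x in range(-half, size - half):
            z_sq = x * x + y * y
            # Gaussian envelope of Eq. (2-1)
            envelope = (k_sq / sigma ** 2) * math.exp(-k_sq * z_sq / (2 * sigma ** 2))
            # complex plane wave minus the DC-compensation term
            wave = complex(math.cos(kx * x + ky * y), math.sin(kx * x + ky * y))
            dc = math.exp(-sigma ** 2 / 2)
            row.append(envelope * (wave - dc))
        kernel.append(row)
    return kernel
```

The real and imaginary parts of each such kernel are exactly the real-part and imaginary-part matrices used below in the window-size selection.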
Local Binary Patterns (LBP) is an effective texture feature extraction operator with characteristics such as rotation invariance and gray-scale monotonicity invariance. After Gabor filtering yields face images with edges and salient local features at different frequencies and orientations, texture analysis of the face images with the LBP operator characterizes and expresses the texture features of the face at the level of detail.
Specifically, step S100 comprises the following steps:
Step 1: choosing a suitable Gabor kernel window size;
Step 2: convolving the face image with the chosen Gabor kernel window to obtain a plurality of multi-scale, multi-orientation filtered face images;
Step 3: describing the texture features of the filtered images with the Local Binary Pattern operator to obtain a plurality of LBP code maps;
Step 4: partitioning each LBP code map into blocks and extracting the histogram sequence of each block;
Step 5: concatenating the histogram sequences of the blocks to obtain the histogram sequence of all LBP code maps.
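The LBP stage of steps 3 to 5 can be sketched as follows. This is a minimal illustration using the basic 8-neighbour 3×3 LBP operator and 4×4 blocks; the exact LBP variant and block size are assumptions, since the patent does not fix them:

```python
def lbp_code_map(img):
    """Basic 3x3 LBP (step 3): threshold each pixel's 8 neighbours against the
    centre and pack the comparison bits into an 8-bit code (borders skipped)."""
    h, w = len(img), len(img[0])
    # neighbour offsets (dy, dx), clockwise from top-left
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = [[0] * (w - 2) for _ in range(h - 2)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            c = img[y][x]
            code = 0
            for bit, (dy, dx) in enumerate(offs):
                if img[y + dy][x + dx] >= c:
                    code |= 1 << bit
            codes[y - 1][x - 1] = code
    return codes

def block_histogram_sequence(codes, block=4):
    """Steps 4-5: split the code map into block x block cells and concatenate
    the 256-bin histograms of all cells into one feature sequence."""
    feats = []
    for by in range(0, len(codes) - block + 1, block):
        for bx in range(0, len(codes[0]) - block + 1, block):
            hist = [0] * 256
            for y in range(by, by + block):
                for x in range(bx, bx + block):
                    hist[codes[y][x]] += 1
            feats.extend(hist)
    return feats
```

In the method itself this operator is applied to each Gabor-filtered image, and the concatenated histogram sequences form the face feature description stored in the sample library.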
Specifically, choosing a suitable Gabor kernel window size comprises:
Step 11: defining temp as the height of the Gabor kernel, the Gabor kernels being the multi-scale, multi-orientation kernels computed with the Gabor kernel function, the number of kernels being N, N = μ × v, where μ and v are respectively the number of orientations and scales of the Gabor kernels;
Step 12: splitting the N Gabor kernels into N real-part matrices and N imaginary-part matrices;
Step 13: for the i-th real-part matrix and the i-th imaginary-part matrix, taking the real-part and imaginary-part matrices of size temp × temp respectively, and summing the element values of each to obtain temp_r1 and temp_i1;
Step 14: computing the sum of all element values in the i-th real-part matrix, denoted temp_r2;
Step 15: computing the sum of all element values in the i-th imaginary-part matrix, denoted temp_i2;
Step 16: computing the energy value p obtained at size temp × temp, p = (temp_r1/temp_r2) + (temp_i1/temp_i2);
Step 17: if the value of p does not reach the preset Gabor energy threshold, setting temp = temp - 1 and repeating steps 13 and 16 (temp_r2 and temp_i2 do not depend on temp and need not be recomputed);
Step 18: if the value of p reaches the preset Gabor energy threshold, taking the current temp as the height of the most suitable filter window for the i-th Gabor kernel.
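Steps 11 to 18 can be sketched for a single kernel as follows. This is only one plausible reading: the text does not say where the temp × temp sub-matrix is taken from, so the sketch takes centred crops and returns the smallest temp whose energy ratio p still meets the threshold (i.e. the smallest window that retains enough of the kernel's energy):

```python
def choose_window_height(real_mat, imag_mat, threshold):
    """Energy-based window selection for one Gabor kernel (steps 13-18).

    Assumptions: the temp x temp crop is taken from the centre of the kernel,
    and the result is the smallest temp whose energy ratio
    p = temp_r1/temp_r2 + temp_i1/temp_i2 is still >= threshold.
    """
    n = len(real_mat)
    temp_r2 = sum(sum(row) for row in real_mat)   # step 14: full real-part sum
    temp_i2 = sum(sum(row) for row in imag_mat)   # step 15: full imaginary-part sum
    best = n
    for temp in range(n, 0, -1):                  # step 17: temp = temp - 1
        lo = (n - temp) // 2
        hi = lo + temp
        # step 13: sum the centred temp x temp sub-matrices
        temp_r1 = sum(real_mat[y][x] for y in range(lo, hi) for x in range(lo, hi))
        temp_i1 = sum(imag_mat[y][x] for y in range(lo, hi) for x in range(lo, hi))
        p = temp_r1 / temp_r2 + temp_i1 / temp_i2  # step 16: energy ratio
        if p >= threshold:
            best = temp        # window still retains enough energy: keep shrinking
        else:
            break              # step 18: the previous temp was the smallest valid one
    return best
```

In the full method this is run once per kernel (i = 1 ... N), giving each of the N Gabor kernels its own filter window height.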
S101: performing face detection on the current video frame image to obtain a face image.
In the present embodiment, when a video frame image is received as input, the corresponding face detection is carried out; if no face image is detected, the next video frame image is fetched; if a face image is detected, step S102 is executed. Specifically, in this implementation face detection is carried out through the iterative Adaboost algorithm; the size face_width, face_height of the face image is obtained through Haar features, and the center point coordinate of the face image is (x0 + face_width/2, y0 + face_height/2), where x0, y0 are the coordinates of the top-left vertex of the face image rectangle obtained by the Adaboost algorithm.
S102: taking the center point coordinate of the detected face image as the center, obtaining a large face image and a small face image, the small face image being the image containing only the central skin-colored block of the face and the large face image being the image that includes some of the environment surrounding the face.
In the present embodiment, the schematic diagram for determining Face_small and Face_big is shown in Fig. 2. Face_small is the small face image containing only the central skin-colored block of the face, shown by the solid line in the figure; Face_big is the large face image including some background around the face, shown by the dashed line in the figure. Face_small is the rectangle centered at the center point coordinate of the detected face image (x0 + face_width/2, y0 + face_height/2), with width face_width/2 and height face_height/2; Face_big is the rectangle centered at the same center point coordinate, with width face_width + face_width/2 and height face_height + face_height/2.
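The Face_small and Face_big rectangles of Fig. 2 can be computed as in the following sketch, a direct transcription of the formulas above; the (centre_x, centre_y, width, height) tuple format is an arbitrary representation chosen for the example:

```python
def face_windows(x0, y0, face_width, face_height):
    """Centre-based rectangles of Fig. 2: Face_small covers only the central
    skin block, Face_big adds some surrounding background. Each rectangle is
    returned as (centre_x, centre_y, width, height)."""
    cx = x0 + face_width / 2    # centre of the Adaboost detection box
    cy = y0 + face_height / 2
    face_small = (cx, cy, face_width / 2, face_height / 2)
    face_big = (cx, cy, face_width + face_width / 2, face_height + face_height / 2)
    return face_small, face_big
```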
S103: computing the deflection angle of the face image from the large face image and the small face image.
In the present embodiment, this specifically comprises the following steps:
Step 21: computing the H component of the face color histogram of the small face image;
Step 22: using the H component and the large face image, computing the back-projection image I of the large face image by histogram back projection (see the schematic in Fig. 3); the back-projection image I reflects the skin-color distribution region of the face, where x, y denote coordinates in the back-projection image and I(x, y) is the pixel value of the back-projection image I at (x, y). Back projection is a well-known image technique and is not elaborated here.
Step 23: computing from I(x, y) the zeroth-order moment M00, the first-order moments M10 and M01, and the second-order moments M20 and M02 of the back-projection image. Specifically, these moments can be computed through the formulas

$$M_{00} = \sum_x \sum_y I(x,y), \qquad M_{10} = \sum_x \sum_y x\,I(x,y), \qquad M_{01} = \sum_x \sum_y y\,I(x,y),$$

$$M_{20} = \sum_x \sum_y x^2\,I(x,y), \qquad M_{02} = \sum_x \sum_y y^2\,I(x,y).$$

Step 24: computing the centroid coordinates of the back-projection image from M00, M10 and M01. Specifically, the centroid coordinates (x_c, y_c) can be computed through

$$x_c = \frac{M_{10}}{M_{00}}, \qquad y_c = \frac{M_{01}}{M_{00}}.$$

Step 25: computing the deflection angle of the face in the back-projection image from the centroid coordinates, the zeroth-order moment M00 and the second-order moments. Using the standard second-order-moment orientation formula (which also requires the mixed moment M11 = Σ_x Σ_y x y I(x,y)), with

$$a = \frac{M_{20}}{M_{00}} - x_c^2, \qquad b = 2\left(\frac{M_{11}}{M_{00}} - x_c y_c\right), \qquad c = \frac{M_{02}}{M_{00}} - y_c^2,$$

the deflection angle θ of the face in the back-projection image is

$$\theta = \frac{1}{2}\arctan\!\left(\frac{b}{a - c}\right).$$
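Steps 23 to 25 can be sketched as follows. Note that the orientation formula used is the standard second-order-moment one from CAMShift-style trackers (the patent's own formula images are not legible in the source), and it additionally needs the mixed moment M11:

```python
import math

def deflection_angle(I):
    """Image moments, centroid and orientation of a back-projection image I
    given as a list of rows of pixel values (steps 23-25). Returns
    ((x_c, y_c), theta) with theta in radians."""
    m00 = m10 = m01 = m20 = m02 = m11 = 0.0
    for y, row in enumerate(I):
        for x, v in enumerate(row):
            m00 += v
            m10 += x * v
            m01 += y * v
            m20 += x * x * v
            m02 += y * y * v
            m11 += x * y * v          # mixed moment needed by the angle formula
    xc, yc = m10 / m00, m01 / m00     # step 24: centroid
    a = m20 / m00 - xc * xc           # central second-order moments
    b = 2.0 * (m11 / m00 - xc * yc)
    c = m02 / m00 - yc * yc
    theta = 0.5 * math.atan2(b, a - c)  # step 25: deflection angle
    return (xc, yc), theta
```

A diagonal blob yields θ = π/4 and a horizontal one θ = 0, matching the intuition that θ measures how far the skin-color region is tilted.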
S104: judging whether the deflection angle is within the preset range; if so, executing step S105; otherwise, executing step S109.
S105: if the deflection angle is within the preset range, comparing the face feature description obtained through step S100 with the face image sample library built in S100.
S106: judging whether the current video frame image is the first frame; if so, executing step S107; otherwise, executing step S108.
S107: when the current video frame image is the first frame, outputting the comparison result and returning to step S101. It should be noted that the first frame refers to the video frame in which a face is detected for the first time.
S108: outputting the comparison result for the face detected by the Adaboost algorithm in step S105, determining the face image to be tracked, and tracking the face image with a tracking algorithm.
Specifically, the face image that needs to be tracked can be determined according to the following steps, i.e. the "outputting the comparison result for the face detected by the Adaboost algorithm in step S105" described in step S108:
Step 31: recording the recognition status of the face image in the current video frame image;
Step 32: reading the recognition status of the face image in the previous video frame image;
Step 33: determining, according to the recognition status of the face image read from the previous video frame image, whether the face image detected in the current video frame is the face image recognized in the previous frame; if so, and the recognition status of the face image in the current video frame is unrecognized, tracking the face image.
Fig. 4 shows a schematic of one process for determining the face image that needs to be tracked; the details are as follows:
Two arrays, face_recog_table_track and face_recog_table_track_temp, are used to store respectively the recognition status of each face in the previous video frame and in the current video frame. In the first frame, face_recog_table_track and face_recog_table_track_temp are initialized so that the face recognition states in the two arrays are identical. Before recognition of the second frame, the face states of the previous frame are stored in face_recog_table_track and face_recog_table_track_temp is cleared. When face recognition is performed on the second frame, the values of face_recog_table_track and face_recog_table_track_temp are compared element by element: an unrecognized face is found where face_recog_table_track[2] - face_recog_table_track_temp[2] = 1, and at the same time a new face G has appeared where face_recog_table_track[7] - face_recog_table_track_temp[7] = -1. Face B is then tracked and the recognition status of face G is updated; before recognition of the third frame, the face states stored for the second frame in face_recog_table_track_temp are used to initialize the face states stored in face_recog_table_track. Proceeding in this way, the recognized face states can be tracked from frame to frame. It should be noted that the indices 1, 2, ..., 7 of the arrays face_recog_table_track and face_recog_table_track_temp correspond respectively to A, B, ..., G in Fig. 4, where A, B, ..., G denote faces.
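The element-wise comparison of the two state arrays can be sketched as follows; treating the array slots as 0/1 recognition flags (one slot per face A, B, ...) is an assumption about the array contents:

```python
def compare_states(prev, cur):
    """Element-wise diff of the per-frame recognition tables: a diff of +1
    marks a face recognized in the previous frame but not in the current one
    (it should be tracked), and -1 marks a newly appeared face (its status
    should be updated). Returns (lost_indices, new_indices)."""
    lost, new = [], []
    for i, (p, c) in enumerate(zip(prev, cur)):
        d = p - c
        if d == 1:
            lost.append(i)   # e.g. face B: recognized before, unrecognized now
        elif d == -1:
            new.append(i)    # e.g. face G: present now, absent before
    return lost, new
```

After acting on the result, the current table would be copied into the previous-frame table before the next frame is processed, mirroring the re-initialization described above.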
Fig. 5 shows a schematic of another process, for confirming a detected face image via its deflection; the details are as follows:
The solid line is the face image recognized in the previous frame and detected through the Adaboost algorithm; the center point coordinate of this recognized face image is computed and saved. For the second video frame image, if the face pose has changed by a certain angle so that the face can be detected but cannot be recognized, the face image at that position and size is obtained and its center point coordinate is computed; the distance between this center point coordinate and the face center point recognized in the previous frame is then judged against a specified threshold. If the distance is within the threshold, the face is judged to be the face recognized in the previous frame and is tracked; if not, it is considered a falsely detected face, the previous record is kept, and the next video frame image is loaded for another judgment. It should be noted that the purpose of the threshold is to ensure that, within a short time interval, the face center point coordinate stays within a reasonable spatial coordinate range relative to the previous frame; the threshold is set to 0 ~ (face_big_width/2), where face_big_width/2 denotes half the width of the large face image.
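The centre-distance judgment of Fig. 5 can be sketched as follows; using the Euclidean distance between centres is an assumption, since the patent does not name the distance measure:

```python
import math

def same_face(prev_center, cur_center, face_big_width):
    """Fig. 5 judgment: the detected-but-unrecognized face is taken to be the
    previously recognized face only if its centre has moved no more than
    face_big_width / 2 since the last frame; otherwise it is treated as a
    false detection."""
    dx = cur_center[0] - prev_center[0]
    dy = cur_center[1] - prev_center[1]
    return math.hypot(dx, dy) <= face_big_width / 2
```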
Specifically, the tracking algorithm includes the CAMShift algorithm and particle filter algorithms.
S109: acquiring the next video frame image and returning to step S101.
In the present embodiment, when the deflection angle of the face image is within the preset range, the face feature description obtained through step S100 is compared with the face image and, when the current video frame is the first frame, the comparison result is output; otherwise the face image to be tracked is determined according to the comparison result and tracked with a tracking algorithm. This solves the face recognition errors caused by a person's varied pose changes during face recognition and guarantees stronger robustness of face recognition under many environment and pose changes. In addition, the choice of kernel window size for the Gabor wavelet transform is realized by setting the Gabor kernel energy and obtaining several Gabor real-part and imaginary-part matrices from the Gabor function; the face image is then convolved with the chosen kernel window to obtain Gabor images that concentrate the facial features, which guarantees the accuracy of face recognition.
The above are merely preferred embodiments of the invention and are not intended to limit it; any modifications, equivalent replacements, improvements and the like made within the spirit and principles of the invention shall all be included within the scope of protection of the invention.

Claims (8)

1. A method for face tracking and recognition, used to recognize a detected face image and to track face images undergoing multi-pose, multi-angle changes, characterized in that the method comprises:
A. obtaining a suitable Gabor kernel window through the Gabor kernel energy; first convolving the Gabor kernel window with the detected face image to obtain filtered images; then applying the Local Binary Pattern (LBP) operator to the filtered images to obtain the face feature description of the face image, so as to build a face image sample library;
B. performing face detection on the current video frame image to obtain a face image;
C. taking the center point coordinate of the detected face image as the center, obtaining a large face image and a small face image, the small face image being the image containing only the central skin-colored block of the face and the large face image being the image that includes some of the environment surrounding the face;
D. computing the deflection angle of the face image from the large face image and the small face image;
E. if the deflection angle is within a preset range, comparing the face feature description obtained through step A with the face image; when the current video frame is the first frame, outputting the comparison result; otherwise, determining the face image to be tracked according to the comparison result and tracking it with a tracking algorithm;
F. if the deflection angle is not within the preset range, acquiring the next video frame image and executing steps B through E in order.
2. the method for claim 1 is characterized in that, said steps A specifically comprises:
Choose suitable Gabor nuclear window size;
The Gabor nuclear window size that utilization is chosen carries out the filtering convolution to said facial image, obtains a plurality of multiple dimensioned poly-directional human face filtering images;
Utilize local binary pattern operator to describe the texture features of said filtering image, obtain a plurality of local binary pattern code patterns;
To each local binary pattern code pattern piecemeal and extract the histogram sequence of each piecemeal;
With the histogram sequence cascade addition of each piecemeal, obtain the histogram sequence of all local binary pattern code patterns.
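The feature-description pipeline of claim 2 can be sketched with NumPy as follows. This is a minimal illustration, not the patented implementation: the Gabor parameters (sigma, wavelength, four orientations), the 9x9 window, the 64x64 stand-in image, and the 2x2 block grid are all assumptions for the example.

```python
import numpy as np

def gabor_kernel(size, theta, sigma=2.0, lambd=4.0, gamma=0.5):
    # Real part of a Gabor kernel of the given size and orientation.
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lambd)

def filter_image(img, kern):
    # Same-size filtering convolution with zero padding.
    kh, kw = kern.shape
    pad = np.pad(img, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.zeros_like(img, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += kern[i, j] * pad[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def lbp_code_map(img):
    # 8-neighbour LBP codes for the interior pixels of the filtered image.
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros(c.shape, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << np.uint8(bit)
    return code

def block_histograms(code, blocks=2):
    # Partition the code map into blocks x blocks cells; concatenate their histograms.
    h, w = code.shape
    hists = []
    for by in range(blocks):
        for bx in range(blocks):
            cell = code[by * h // blocks:(by + 1) * h // blocks,
                        bx * w // blocks:(bx + 1) * w // blocks]
            hists.append(np.bincount(cell.ravel(), minlength=256))
    return np.concatenate(hists)

# Stand-in 64x64 face image; a real input would come from the face detector.
face = (np.arange(64 * 64).reshape(64, 64) % 251).astype(float)
feature = np.concatenate([
    block_histograms(lbp_code_map(filter_image(face, gabor_kernel(9, theta))))
    for theta in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)
])  # face feature description: 4 orientations x 4 blocks x 256 bins
```

Concatenating per-block histograms rather than one global histogram preserves the spatial layout of the face, which is what makes the LBP description discriminative.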
3. The method of claim 2, characterized in that choosing a suitable Gabor kernel window size specifically comprises:
defining temp as the height of a Gabor kernel, said Gabor kernels being the multi-scale, multi-orientation kernels calculated with the Gabor kernel function, the number of kernels being N, N = μ × v, where μ and v are respectively the orientations and scales of the Gabor kernels;
dividing the N Gabor kernels into N real-part matrices and N imaginary-part matrices;
for the i-th real-part matrix and the i-th imaginary-part matrix, obtaining the real-part and imaginary-part sub-matrices of size temp × temp respectively, and summing the element values in said sub-matrices to obtain temp_r1 and temp_i1 respectively;
calculating the sum of all element values in the i-th real-part matrix, denoted temp_r2;
calculating the sum of all element values in the i-th imaginary-part matrix, denoted temp_i2;
calculating the energy value p obtained at size temp × temp, p = (temp_r1/temp_r2) + (temp_i1/temp_i2);
if said p value does not reach the predefined Gabor energy threshold, setting temp = temp - 1 and returning to the step of obtaining, for the i-th real-part matrix and the i-th imaginary-part matrix, the real-part and imaginary-part sub-matrices of size temp × temp and summing the element values in said sub-matrices to obtain temp_r1 and temp_i1 respectively;
if said p value reaches the predefined Gabor energy threshold, setting i = i + 1 and taking the currently obtained temp as the height of the most suitable filter window for the i-th Gabor kernel.
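The window-size search of claim 3 can be sketched numerically for a single kernel. Because a Gabor kernel oscillates in sign, its whole-matrix sum is small, so the centred sub-window's share of that sum can grow as the window shrinks. The 5x5 "kernel" below is a hand-made stand-in (positive main lobe, negative surround), and the threshold value is an assumption for the example.

```python
import numpy as np

def center_window_sum(mat, temp):
    # temp_r1 / temp_i1 of the claim: sum of the centred temp x temp sub-window.
    h, w = mat.shape
    top, left = (h - temp) // 2, (w - temp) // 2
    return mat[top:top + temp, left:left + temp].sum()

def suitable_window_height(real_k, imag_k, threshold):
    # Decrease temp from the full kernel height until the energy value
    # p = temp_r1/temp_r2 + temp_i1/temp_i2 reaches the Gabor energy threshold.
    temp = real_k.shape[0]
    temp_r2, temp_i2 = real_k.sum(), imag_k.sum()  # sums over the whole matrices
    while temp > 1:
        p = (center_window_sum(real_k, temp) / temp_r2
             + center_window_sum(imag_k, temp) / temp_i2)
        if p >= threshold:
            return temp
        temp -= 1
    return temp

# Toy stand-in kernel: main lobe +2, surround -1, so the whole-matrix sum is only +2.
real_k = np.full((5, 5), -1.0)
real_k[1:4, 1:4] = 2.0
imag_k = real_k.copy()
height = suitable_window_height(real_k, imag_k, threshold=3.0)
```

At temp = 5 the ratios are both 1 (p = 2, below the threshold); at temp = 4 the centred window sums to 11 against a total of 2, so p = 11 and the search stops with a window height of 4. In the claim this would be repeated per kernel (i = i + 1) to fix each kernel's filter window.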
4. the method for claim 1 is characterized in that, said step D specifically comprises:
Calculate the H component of people's face color histogram of the bashful figure of said people;
According to said H variable and the said people figure that is bold, utilize back projection to calculate reverse projection image I, reflected the colour of skin distributed areas of people's face among the said reverse projection image I; Wherein, X, y represent the coordinate in the said reverse projection image, I (x; Y) be that said reverse projection image I is in (x, y) place corresponding pixel value;
(x y) calculates 0 rank square M00 of said image, first moment M10, M01 and second moment M20, M02 according to said I;
According to said M00, M10 and M01, calculate the center-of-mass coordinate of said facial image respectively;
According to the center-of-mass coordinate of said facial image, 0 rank square M00, second moment M20, M02 and M01 calculate the deflection angle of said facial image.
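The back projection and moment computations of claim 4 can be sketched with NumPy. One caveat: the standard CAMShift-style orientation formula uses the mixed moment M11 alongside M20 and M02, so the sketch below follows that convention rather than the claim's literal wording; the hue histogram and the diagonal test blob are made up for the example.

```python
import numpy as np

def back_project(hue_hist, hue_img):
    # Back projection: every pixel receives the histogram value of its hue bin.
    return hue_hist[hue_img]

def deflection_angle(I):
    # Deflection angle of the skin-colour region from the moments of the
    # back-projection image I (CAMShift-style orientation estimate).
    h, w = I.shape
    y, x = np.mgrid[0:h, 0:w]
    M00 = I.sum()
    M10, M01 = (x * I).sum(), (y * I).sum()
    M20, M02 = (x * x * I).sum(), (y * y * I).sum()
    M11 = (x * y * I).sum()
    xc, yc = M10 / M00, M01 / M00          # centroid coordinates of the face
    a = M20 / M00 - xc * xc
    b = 2 * (M11 / M00 - xc * yc)
    c = M02 / M00 - yc * yc
    return 0.5 * np.arctan2(b, a - c)      # principal-axis orientation

# Face hue concentrated in bin 10; diagonal pixels of the "large face map"
# carry that hue, so the back-projected blob's principal axis lies at 45 degrees.
hue_hist = np.zeros(180)
hue_hist[10] = 1.0
hues = np.where(np.eye(20, dtype=bool), 10, 90)
I = back_project(hue_hist, hues)
angle = deflection_angle(I)
```

The H-component histogram computed from the small face map acts as the skin-colour model, and back-projecting it over the large face map highlights where that colour occurs, which is exactly what the moment computation then summarises.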
5. the method for claim 1 is characterized in that, said step e comprises:
In the record current video two field picture to the status recognition of said facial image;
Read in the video frame images status recognition to said facial image;
According in the last video frame images that reads the status recognition of said facial image being confirmed whether detected facial image is the facial image that previous frame recognizes in the current video two field picture; If; And the status recognition to said facial image in the current video two field picture is unidentified arriving, and then said facial image is followed the tracks of.
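The decision rule of claim 5 reduces to a small predicate. The state labels used here are assumptions for illustration; a real system would attach such a state to each tracked face record.

```python
def should_track(same_face, prev_state, curr_state):
    # Track only when the current detection is the face recognised in the
    # previous frame but has not yet been recognised in the current frame.
    return same_face and prev_state == "recognised" and curr_state == "unrecognised"
```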
6. the method for claim 1 is characterized in that, before said step C, said method also comprises:
Obtain the center point coordinate and the size of detected facial image;
According to said center point coordinate and size, judge whether said facial image moves to the frame of said video frame images, if then continue to receive next video frame images of importing.
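The border test of claim 6 can be sketched as follows; the face box is assumed square and centred on the detected centre point, which is an assumption of this sketch rather than something the claim specifies.

```python
def face_at_border(cx, cy, size, frame_w, frame_h):
    # True when the face box, centred at (cx, cy) with side length `size`,
    # touches or crosses the frame border, i.e. the face is leaving the frame.
    half = size / 2
    return (cx - half <= 0 or cy - half <= 0
            or cx + half >= frame_w or cy + half >= frame_h)
```

When the test is true, the large face map of step C could not be cut out cleanly, so the method simply waits for the next input frame.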
7. the method for claim 1 is characterized in that, through the Adaboost algorithm current video two field picture is carried out people's face and detects.
8. the method for claim 1 is characterized in that, said track algorithm comprises CAMShift algorithm, particle filter algorithm.
CN2012102448511A 2012-07-13 2012-07-13 Method for tracking and recognizing face Pending CN102799871A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2012102448511A CN102799871A (en) 2012-07-13 2012-07-13 Method for tracking and recognizing face


Publications (1)

Publication Number Publication Date
CN102799871A true CN102799871A (en) 2012-11-28

Family

ID=47198971

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2012102448511A Pending CN102799871A (en) 2012-07-13 2012-07-13 Method for tracking and recognizing face

Country Status (1)

Country Link
CN (1) CN102799871A (en)

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102982321A (en) * 2012-12-05 2013-03-20 深圳Tcl新技术有限公司 Acquisition method and device for face database
CN103793693A (en) * 2014-02-08 2014-05-14 厦门美图网科技有限公司 Method for detecting face turning and facial form optimizing method with method for detecting face turning
CN104680120A (en) * 2013-12-02 2015-06-03 华为技术有限公司 Method and device for generating strong classifier for face detection
WO2015089949A1 (en) * 2013-12-19 2015-06-25 成都品果科技有限公司 Human face clustering method merging lbp and gabor features
CN105868574A (en) * 2016-04-25 2016-08-17 南京大学 Human face tracking optimization method for camera and intelligent health monitoring system based on videos
CN107679504A (en) * 2017-10-13 2018-02-09 北京奇虎科技有限公司 Face identification method, device, equipment and storage medium based on camera scene
CN108388857A (en) * 2018-02-11 2018-08-10 广东欧珀移动通信有限公司 Face detection method and related equipment
CN108921201A (en) * 2018-06-12 2018-11-30 河海大学 Dam defect identification and classification method based on feature combination and CNN
CN109118513A (en) * 2018-08-10 2019-01-01 中国科学技术大学 Calculation method and system for a binary motion descriptor
CN109345566A (en) * 2018-09-28 2019-02-15 上海应用技术大学 Moving target tracking method and system
CN109635749A (en) * 2018-12-14 2019-04-16 网易(杭州)网络有限公司 Image processing method and device based on video flowing
CN110097021A (en) * 2019-05-10 2019-08-06 电子科技大学 Face pose estimation based on MTCNN
CN111444875A (en) * 2020-04-07 2020-07-24 珠海格力电器股份有限公司 Face tracking method, device, equipment and computer readable storage medium
CN111487245A (en) * 2020-04-03 2020-08-04 中国地质大学(武汉) A system for evaluating the evolution of biomass in coral reef-like waters
CN113177491A (en) * 2021-05-08 2021-07-27 重庆第二师范学院 Self-adaptive light source face recognition system and method
WO2021169257A1 (en) * 2020-02-24 2021-09-02 北京三快在线科技有限公司 Face recognition
CN113705422A (en) * 2021-08-25 2021-11-26 山东云缦智能科技有限公司 Method for acquiring character video clips through human faces
CN113810692A (en) * 2020-06-17 2021-12-17 佩克普股份公司 Method for framing changes and movements, image processing apparatus and program product
CN114387552A (en) * 2022-01-13 2022-04-22 电子科技大学 Rotor unmanned aerial vehicle infrared video tracking method based on biological vision mechanism
CN115546881A (en) * 2022-09-21 2022-12-30 中国银行股份有限公司 Identity recognition method, device, equipment and storage medium based on iris multi-feature

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1072014B1 (en) * 1998-04-13 2004-11-24 Nevengineering, Inc. Face recognition from video images
CN1790374A (en) * 2004-12-14 2006-06-21 中国科学院计算技术研究所 Face recognition method based on template matching
CN102306290A (en) * 2011-10-14 2012-01-04 刘伟华 Face tracking recognition technique based on video


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Wu Xuan: "Research on Face Detection and Tracking Algorithms Based on Video", China Master's Theses Full-text Database, Information Science and Technology, no. 3, 15 March 2011 (2011-03-15) *
Li Jianping: "Unconventional Wavelet Transforms and Military Biological Information Security", 30 November 2008 *
Ruan Shumin: "Face Deflection Correction in Images Based on Gray-Level Weighting and Principal Component Analysis", Technology and Life, vol. 2009, no. 3, 31 December 2009 (2009-12-31), page 37 *



Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20121128
