Summary of the invention
In view of this, the present invention provides an eyeball center positioning method, apparatus and system, to overcome the problem in the prior art that, in portrait beautification, the position of the eyeball center cannot be determined accurately, so that the cosmetic pupil (beauty-pupil) overlay is misaligned with the eyeball in the image.
To achieve the above object, the invention provides the following technical scheme:
An eyeball center positioning method, comprising:
obtaining coordinates of outer contour feature points of an eye in an image;
obtaining, according to the coordinates of the outer contour feature points, a target image containing the eye;
setting the pixel value of each pixel of the eye-interior image enclosed by the outer contour feature points in the target image to 255, and the pixel value of each pixel of the eye-exterior image in the target image to 0, to obtain a mask grayscale image;
calculating an x-direction gradient gX(i, j) and a y-direction gradient gY(i, j) of each pixel (i, j) in the target image, to obtain an x-direction gradient map composed of the x-direction gradients of the pixels and a y-direction gradient map composed of the y-direction gradients of the pixels;
normalizing each x-direction gradient in the x-direction gradient map to obtain a normalized x-direction gradient map;
normalizing each y-direction gradient in the y-direction gradient map to obtain a normalized y-direction gradient map;
inverting, according to the target image and the mask grayscale image, the color of each pixel in the target image to obtain an inverted target image;
performing the following operation for each pixel (i, j) with a non-zero pixel value in the inverted target image:
calculating the unit gradient vector G_{i,j} = (gX'(i, j), gY'(i, j)) of the pixel (i, j) and the sum of squared dot products of G_{i,j} with the unit location vector P_{I,J} of each pixel (I, J) in the gradient map, Sum_{i,j} = Σ_{(I,J)∈Ω} (P_{I,J}·G_{i,j})², where (I, J) ∈ Ω denotes traversal of every pixel in the gradient map, gX'(i, j) is the pixel value of the normalized x-direction gradient map at pixel (i, j), and gY'(i, j) is the pixel value of the normalized y-direction gradient map at pixel (i, j);
obtaining a result map whose pixel values are the Sum_{i,j} corresponding to each pixel of the inverted target image;
normalizing each pixel value in the result map to the range 0 to 255 to obtain a normalized result map;
inverting the color of each pixel in the normalized result map to obtain an inverted result map;
calculating, according to the pixel value of each pixel in the inverted result map and the coordinates of each pixel, the weighted average coordinate of the pixels in the inverted result map;
determining the eyeball center coordinate according to the weighted average coordinate.
An eyeball center positioning apparatus, comprising:
a first obtaining module, configured to obtain coordinates of outer contour feature points of an eye in an image;
a second obtaining module, configured to obtain, according to the coordinates of the outer contour feature points, a target image containing the eye;
a third obtaining module, configured to set the pixel value of each pixel of the eye-interior image enclosed by the outer contour feature points in the target image to 255, and the pixel value of each pixel of the eye-exterior image in the target image to 0, to obtain a mask grayscale image;
a fourth obtaining module, configured to calculate an x-direction gradient gX(i, j) and a y-direction gradient gY(i, j) of each pixel (i, j) in the target image, to obtain an x-direction gradient map composed of the x-direction gradients of the pixels and a y-direction gradient map composed of the y-direction gradients of the pixels;
a fifth obtaining module, configured to normalize each x-direction gradient in the x-direction gradient map to obtain a normalized x-direction gradient map;
a sixth obtaining module, configured to normalize each y-direction gradient in the y-direction gradient map to obtain a normalized y-direction gradient map;
a seventh obtaining module, configured to invert, according to the target image and the mask grayscale image, the color of each pixel in the target image to obtain an inverted target image;
a first computing module, configured to perform the following operation for each pixel (i, j) with a non-zero pixel value in the inverted target image:
calculating the unit gradient vector G_{i,j} = (gX'(i, j), gY'(i, j)) of the pixel (i, j) and the sum of squared dot products of G_{i,j} with the unit location vector P_{I,J} of each pixel (I, J) in the gradient map, Sum_{i,j} = Σ_{(I,J)∈Ω} (P_{I,J}·G_{i,j})², where (I, J) ∈ Ω denotes traversal of every pixel in the gradient map, gX'(i, j) is the pixel value of the normalized x-direction gradient map at pixel (i, j), and gY'(i, j) is the pixel value of the normalized y-direction gradient map at pixel (i, j);
an eighth obtaining module, configured to obtain a result map whose pixel values are the Sum_{i,j} corresponding to each pixel of the inverted target image;
a ninth obtaining module, configured to normalize each pixel value in the result map to the range 0 to 255 to obtain a normalized result map;
a tenth obtaining module, configured to invert the color of each pixel in the normalized result map to obtain an inverted result map;
a second computing module, configured to calculate, according to the pixel value of each pixel in the inverted result map and the coordinates of each pixel, the weighted average coordinate of the pixels in the inverted result map;
a determining module, configured to determine the eyeball center coordinate according to the weighted average coordinate.
An eyeball center positioning system, comprising:
a processor; and
a memory for storing instructions executable by the processor;
wherein the processor is configured to:
obtain coordinates of outer contour feature points of an eye in an image;
obtain, according to the coordinates of the outer contour feature points, a target image containing the eye;
set the pixel value of each pixel of the eye-interior image enclosed by the outer contour feature points in the target image to 255, and the pixel value of each pixel of the eye-exterior image in the target image to 0, to obtain a mask grayscale image;
calculate an x-direction gradient gX(i, j) and a y-direction gradient gY(i, j) of each pixel (i, j) in the target image, to obtain an x-direction gradient map composed of the x-direction gradients of the pixels and a y-direction gradient map composed of the y-direction gradients of the pixels;
normalize each x-direction gradient in the x-direction gradient map to obtain a normalized x-direction gradient map;
normalize each y-direction gradient in the y-direction gradient map to obtain a normalized y-direction gradient map;
invert, according to the target image and the mask grayscale image, the color of each pixel in the target image to obtain an inverted target image;
perform the following operation for each pixel (i, j) with a non-zero pixel value in the inverted target image:
calculating the unit gradient vector G_{i,j} = (gX'(i, j), gY'(i, j)) of the pixel (i, j) and the sum of squared dot products of G_{i,j} with the unit location vector P_{I,J} of each pixel (I, J) in the gradient map, Sum_{i,j} = Σ_{(I,J)∈Ω} (P_{I,J}·G_{i,j})², where (I, J) ∈ Ω denotes traversal of every pixel in the gradient map, gX'(i, j) is the pixel value of the normalized x-direction gradient map at pixel (i, j), and gY'(i, j) is the pixel value of the normalized y-direction gradient map at pixel (i, j);
obtain a result map whose pixel values are the Sum_{i,j} corresponding to each pixel of the inverted target image;
normalize each pixel value in the result map to the range 0 to 255 to obtain a normalized result map;
invert the color of each pixel in the normalized result map to obtain an inverted result map;
calculate, according to the pixel value of each pixel in the inverted result map and the coordinates of each pixel, the weighted average coordinate of the pixels in the inverted result map;
determine the eyeball center coordinate according to the weighted average coordinate.
It can be seen from the above technical scheme that, compared with the prior art, in the eyeball center positioning method provided by the embodiments of the present application, a target image containing the eye is obtained from the coordinates of the outer contour feature points of the eye, and a mask grayscale image of the target image is then obtained. Using the fact that the eyeball is dark, the color of each pixel in the target image is inverted according to the target image and the mask grayscale image, yielding an inverted target image. The x-direction gradient gX(i, j) and the y-direction gradient gY(i, j) of each pixel (i, j) in the target image are calculated to obtain an x-direction gradient map and a y-direction gradient map; each x-direction gradient in the x-direction gradient map is normalized to obtain a normalized x-direction gradient map, and each y-direction gradient in the y-direction gradient map is normalized to obtain a normalized y-direction gradient map. The unit gradient vector G_{i,j} = (gX'(i, j), gY'(i, j)) of each pixel (i, j) is then calculated, together with the sum of squared dot products Sum_{i,j} of G_{i,j} with the unit location vector of each pixel (I, J) in the gradient map, and a result map is obtained whose pixel values are the Sum_{i,j} corresponding to each pixel of the inverted target image. Each pixel value in the result map is normalized to the range 0 to 255 to obtain a normalized result map, the color of each pixel in the normalized result map is inverted to obtain an inverted result map, and the weighted average coordinate of the pixels in the inverted result map is calculated from the pixel value and coordinates of each pixel in the inverted result map. The eyeball center coordinate is then determined according to the weighted average coordinate. The eyeball center is thereby determined accurately.
Specific embodiments
The technical solutions in the embodiments of the present invention will now be described clearly and completely with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
Referring to Fig. 1, which is a schematic flowchart of an eyeball center positioning method provided by an embodiment of the present application, the method may include:
Step S101: obtaining coordinates of outer contour feature points of an eye in an image.
The outer contour feature points of the eye can be obtained in many ways, for example with the ASM (Active Shape Model) method or with a neural network method.
Fig. 2 is a schematic diagram of the outer contour feature points of an eye provided by an embodiment of the present application; the points at position 21 are the outer contour feature points of the eye.
Step S102: obtaining, according to the coordinates of the outer contour feature points, a target image containing the eye.
The target image is an image that roughly contains the eye; the image shown in Fig. 2 is a target image in one implementation of the embodiments of the present application.
Step S103: setting the pixel value of each pixel of the eye-interior image enclosed by the outer contour feature points in the target image to 255, and the pixel value of each pixel of the eye-exterior image in the target image to 0, to obtain a mask grayscale image.
The eye-interior image enclosed by the outer contour feature points may be obtained by Bézier-curve drawing.
Fig. 3 shows a mask grayscale image provided by an embodiment of the present application. As can be seen from Fig. 3, the mask grayscale image includes an eye-interior image 31 and an eye-exterior image 32.
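For illustration only, the following minimal Python sketch shows one way the mask grayscale image of step S103 could be produced. It assumes the contour feature points have already been translated into the coordinate system of the target image, and it uses a simple polygon fill as a stand-in for the Bézier-curve fitting mentioned above; the names cv2, numpy and build_eye_mask are illustrative assumptions, not part of the embodiment.

```python
import cv2
import numpy as np

def build_eye_mask(target_img, contour_pts):
    """Mask grayscale image: 255 inside the eye contour, 0 outside (step S103).

    `contour_pts` is assumed to be an (N, 2) array of outer contour feature
    points expressed in the target image's coordinates; a polygon fill stands
    in for the Bezier-curve fitting described in the text.
    """
    mask = np.zeros(target_img.shape[:2], dtype=np.uint8)
    cv2.fillPoly(mask, [np.asarray(contour_pts, dtype=np.int32)], 255)
    return mask
```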
Step S104: calculating the x-direction gradient gX(i, j) and the y-direction gradient gY(i, j) of each pixel (i, j) in the target image, to obtain an x-direction gradient map composed of the x-direction gradients of the pixels and a y-direction gradient map composed of the y-direction gradients of the pixels.
The x-direction gradient map and the y-direction gradient map can be calculated with a first-order differential operator such as Sobel or Prewitt. Taking the Sobel operator as an example, the gradient maps can be calculated by the following formulas:
gX(i, j) = [Src(i-1, j+1) + 2·Src(i, j+1) + Src(i+1, j+1)] - [Src(i-1, j-1) + 2·Src(i, j-1) + Src(i+1, j-1)];
gY(i, j) = [Src(i+1, j-1) + 2·Src(i+1, j) + Src(i+1, j+1)] - [Src(i-1, j-1) + 2·Src(i-1, j) + Src(i-1, j+1)];
where Src(i, j) denotes the pixel value of pixel (i, j) in the target image.
Fig. 4 shows an x-direction gradient map provided by an embodiment of the present application, Fig. 5 shows a y-direction gradient map provided by an embodiment of the present application, and Fig. 6 shows a gradient magnitude map provided by an embodiment of the present application.
The pixel value of each pixel in the magnitude map is calculated according to the following formula:
mag_{i,j} = sqrt(gX(i, j)² + gY(i, j)²);
where mag_{i,j} is the pixel value of pixel (i, j) in the magnitude map.
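As one possible illustration of step S104 and of the magnitude map, a short Python sketch using OpenCV's Sobel operator is given below; the function name sobel_gradients and the use of cv2/numpy are assumptions for illustration only.

```python
import cv2
import numpy as np

def sobel_gradients(target_gray):
    """x/y gradient maps (Sobel) and the gradient magnitude map (step S104)."""
    src = target_gray.astype(np.float32)
    gX = cv2.Sobel(src, cv2.CV_32F, 1, 0, ksize=3)   # x-direction gradient gX(i, j)
    gY = cv2.Sobel(src, cv2.CV_32F, 0, 1, ksize=3)   # y-direction gradient gY(i, j)
    mag = np.sqrt(gX ** 2 + gY ** 2)                 # mag_{i,j} = sqrt(gX^2 + gY^2)
    return gX, gY, mag
```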
Step S105: normalizing each x-direction gradient in the x-direction gradient map to obtain a normalized x-direction gradient map.
Step S106: normalizing each y-direction gradient in the y-direction gradient map to obtain a normalized y-direction gradient map.
In both steps the normalization may be min-max normalization, in which the maximum value maps to 1 and the minimum value maps to 0.
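A minimal sketch of the min-max normalization used in steps S105 and S106 (illustrative only; the constant-map branch is an added safeguard not discussed in the text):

```python
import numpy as np

def minmax_normalize(grad):
    """Min-max normalization: the largest value maps to 1, the smallest to 0."""
    g_min, g_max = float(grad.min()), float(grad.max())
    if g_max == g_min:                      # constant map: avoid division by zero
        return np.zeros_like(grad, dtype=np.float32)
    return ((grad - g_min) / (g_max - g_min)).astype(np.float32)
```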
Step S107: inverting, according to the target image and the mask grayscale image, the color of each pixel in the target image to obtain an inverted target image.
The inverted target image can be obtained according to the following formula:
Weight_{i,j} = (255 - Src_{i,j}) × Mask_{i,j} / 255;
where Weight_{i,j} is the pixel value of pixel (i, j) in the inverted target image, Src_{i,j} is the pixel value of pixel (i, j) in the target image, and Mask_{i,j} is the pixel value of pixel (i, j) in the mask grayscale image.
Since the eyeball is dark, its gray value is low; after the colors of the target image are inverted, the eyeball becomes white. Taking the inverted target image as a prior weight and fully combining it with the x-direction gradient map and the y-direction gradient map greatly improves the precision of the eyeball center positioning method.
Fig. 7 shows an inverted target image provided by an embodiment of the present application. Comparing Fig. 4, Fig. 5, Fig. 6 and Fig. 7, it can be seen that the eyeball changes from gray to white.
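The inversion of step S107 follows directly from the formula above; a minimal Python sketch (illustrative naming) is:

```python
import numpy as np

def invert_with_mask(target_gray, mask):
    """Inverted target image: Weight_{i,j} = (255 - Src_{i,j}) * Mask_{i,j} / 255.

    Pixels outside the eye (mask == 0) become 0; inside the eye, dark eyeball
    pixels receive a large weight because their color is inverted.
    """
    return (255.0 - target_gray.astype(np.float32)) * (mask.astype(np.float32) / 255.0)
```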
Step S108: performing the following operation for each pixel (i, j) with a non-zero pixel value in the inverted target image:
calculating the unit gradient vector G_{i,j} = (gX'(i, j), gY'(i, j)) of the pixel (i, j) and the sum of squared dot products of G_{i,j} with the unit location vector P_{I,J} of each pixel (I, J) in the gradient map, Sum_{i,j} = Σ_{(I,J)∈Ω} (P_{I,J}·G_{i,j})², where I is an integer greater than or equal to 0 and less than the total number of rows M of pixels in the gradient map, J is an integer greater than or equal to 0 and less than the total number of columns N of pixels in the gradient map, gX'(i, j) is the pixel value of the normalized x-direction gradient map at pixel (i, j), and gY'(i, j) is the pixel value of the normalized y-direction gradient map at pixel (i, j).
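The following sketch illustrates one direct transcription of step S108 in Python. It assumes that the unit location vector of pixel (I, J) is the normalized displacement from the candidate pixel (i, j) to (I, J), which is the usual choice in gradient-based eye-center localization; the patent's own definition, whose formula is not reproduced here, may differ. The double loop is deliberately literal and therefore slow for large images, which is one motivation for the area threshold discussed later.

```python
import numpy as np

def response_map(weight, gX_n, gY_n):
    """Sum_{i,j} for every pixel with a non-zero value in the inverted target image.

    Assumption: the unit location vector of pixel (I, J) is taken as the
    normalized displacement from the candidate pixel (i, j) to (I, J).
    """
    rows, cols = weight.shape
    II, JJ = np.mgrid[0:rows, 0:cols]               # coordinates of every pixel (I, J)
    result = np.zeros((rows, cols), dtype=np.float32)
    for i in range(rows):
        for j in range(cols):
            if weight[i, j] == 0:                   # only non-zero pixels of the inverted image
                continue
            dI, dJ = II - i, JJ - j                 # displacement to every (I, J)
            norm = np.sqrt(dI ** 2 + dJ ** 2)
            norm[i, j] = 1.0                        # avoid division by zero at (i, j) itself
            # dot product of the unit location vector with G_{i,j} = (gX'(i,j), gY'(i,j))
            dot = (dJ / norm) * gX_n[i, j] + (dI / norm) * gY_n[i, j]
            result[i, j] = np.sum(dot ** 2)         # Sum_{i,j}
    return result
```

Steps S109 to S111 then amount to min-max scaling this result map to the range 0 to 255 and inverting it, as formalized below.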
Step S109: obtaining a result map whose pixel values are the Sum_{i,j} corresponding to each pixel of the inverted target image.
Step S110: normalizing each pixel value in the result map to the range 0 to 255 to obtain a normalized result map.
Step S111: inverting the color of each pixel in the normalized result map to obtain an inverted result map.
The inverted result map can be obtained by the following formula:
Sum'_{i,j} = 255 - 255 × (Sum_{i,j} - minSum) / (maxSum - minSum);
where Sum'_{i,j} is the pixel value of pixel (i, j) in the inverted result map, Sum_{i,j} is the pixel value of pixel (i, j) in the result map, minSum is the smallest pixel value in the result map, and maxSum is the largest pixel value in the result map.
Fig. 8 shows an inverted result map provided by an embodiment of the present application. As can be seen from Fig. 8, the color at the eyeball center differs significantly from the color of the other regions.
Step S112: calculating, according to the pixel value of each pixel in the inverted result map and the coordinates of each pixel, the weighted average coordinate of the pixels in the inverted result map.
The weighted average coordinate can be obtained by the following formulas:
Vi = Σ_{i,j} i × f(Sum'_{i,j}) / Σ_{i,j} f(Sum'_{i,j});
Vj = Σ_{i,j} j × f(Sum'_{i,j}) / Σ_{i,j} f(Sum'_{i,j});
where Sum'_{i,j} is the pixel value of pixel (i, j) in the inverted result map, (Vi, Vj) is the weighted average coordinate, and the function f(Sum'_{i,j}) is a mapping function that maps the 0-to-255 pixel values of the inverted result map to a preset range, such that the smaller Sum'_{i,j} is, the larger f(Sum'_{i,j}) is.
Optionally, f(x) = e^(-0.01x).
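A minimal Python sketch of step S112, following the weighted-average formulas given above with the optional weighting f(x) = e^(-0.01x) (names are illustrative):

```python
import numpy as np

def weighted_average_coordinate(inv_result):
    """Weighted average coordinate (Vi, Vj) of the inverted result map (step S112)."""
    rows, cols = inv_result.shape
    II, JJ = np.mgrid[0:rows, 0:cols]
    w = np.exp(-0.01 * inv_result.astype(np.float32))   # f(Sum'_{i,j}); smaller value -> larger weight
    total = w.sum()
    Vi = float((II * w).sum() / total)                   # weighted average row coordinate
    Vj = float((JJ * w).sum() / total)                   # weighted average column coordinate
    return Vi, Vj
```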
The embodiments of the present application abandon the prior-art practice of selecting the extreme-value point as the optimum point and instead use a weighted average of the coordinate positions, which significantly improves the stability of the eyeball center positioning method.
Step S113: determining the eyeball center coordinate according to the weighted average coordinate.
In the embodiments of the present application, the target image, the mask grayscale image, the x-direction gradient map, the y-direction gradient map, the normalized x-direction gradient map, the normalized y-direction gradient map, the inverted target image, the result map, the normalized result map and the inverted result map all have the same total number of rows and total number of columns of pixels. Therefore, in the above embodiments i is an integer greater than or equal to 0 and less than M, and j is an integer greater than or equal to 0 and less than N.
In the eyeball center positioning method provided by the embodiments of the present application, a target image containing the eye is obtained from the coordinates of the outer contour feature points of the eye, and a mask grayscale image of the target image is then obtained. Using the fact that the eyeball is dark, the color of each pixel in the target image is inverted according to the target image and the mask grayscale image, yielding an inverted target image. The x-direction gradient gX(i, j) and the y-direction gradient gY(i, j) of each pixel (i, j) in the target image are calculated to obtain an x-direction gradient map and a y-direction gradient map; each x-direction gradient in the x-direction gradient map is normalized to obtain a normalized x-direction gradient map, and each y-direction gradient in the y-direction gradient map is normalized to obtain a normalized y-direction gradient map. The unit gradient vector G_{i,j} = (gX'(i, j), gY'(i, j)) of each pixel (i, j) is then calculated, together with the sum of squared dot products Sum_{i,j} of G_{i,j} with the unit location vector of each pixel (I, J) in the gradient map, and a result map is obtained whose pixel values are the Sum_{i,j} corresponding to each pixel of the inverted target image. Each pixel value in the result map is normalized to the range 0 to 255 to obtain a normalized result map, the color of each pixel in the normalized result map is inverted to obtain an inverted result map, and the weighted average coordinate of the pixels in the inverted result map is calculated from the pixel value and coordinates of each pixel in the inverted result map. The eyeball center coordinate is then determined according to the weighted average coordinate. The eyeball center is thereby determined accurately.
It can be understood that the larger the area of the target image and the mask grayscale image, the slower the computation. To improve computation speed, if the area a of the target image is greater than an area threshold A, the target image and the mask grayscale image are shrunk, with their aspect ratio unchanged, until their area equals A. Referring to Fig. 9, which is a schematic flowchart of one implementation of obtaining, according to the coordinates of the outer contour feature points, a target image containing the eye in the eyeball center positioning method provided by an embodiment of the present application, the method comprises:
Step S901: obtaining the bounding rectangle of the outer contour feature points.
Step S902: determining the region enclosed by the bounding rectangle as a quasi-target image.
The bounding rectangle of the outer contour feature points is used as the quasi-target image so that the eye region is contained completely.
Step S903: judging whether the image area of the quasi-target image is greater than an area threshold.
Step S904: when the image area a of the quasi-target image is greater than the area threshold A, reducing the length and width of the quasi-target image according to the zoom factor rate = sqrt(a/A), i.e. multiplying each of them by sqrt(A/a), to obtain the scaled target image.
Step S905: when the image area a of the quasi-target image is less than or equal to the area threshold A, determining the quasi-target image as the target image.
Correspondingly, determining the eyeball center coordinate according to the weighted average coordinate includes: when the target image is the quasi-target image, determining the weighted average coordinate as the eyeball center coordinate; when the target image is the image obtained by scaling the quasi-target image, taking the product of the weighted average coordinate and the zoom factor rate as the eyeball center coordinate.
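A sketch of the area-threshold logic of steps S901 to S905, under the assumption spelled out above that the zoom factor rate equals sqrt(a/A); the function name prepare_target is illustrative and cv2.resize is used for the actual shrinking:

```python
import math
import cv2

def prepare_target(quasi_target, mask, area_threshold):
    """Shrink the quasi-target image (and its mask) to `area_threshold` while
    keeping the aspect ratio (steps S901-S905).

    Returns the (possibly scaled) target image and mask together with the
    zoom factor `rate`; the eyeball center in the quasi-target image is then
    the weighted average coordinate multiplied by `rate`.
    """
    h, w = quasi_target.shape[:2]
    a = h * w
    if a <= area_threshold:                 # step S905: use the quasi-target image directly
        return quasi_target, mask, 1.0
    scale = math.sqrt(area_threshold / a)   # linear shrink factor sqrt(A / a) < 1
    new_size = (max(1, int(round(w * scale))), max(1, int(round(h * scale))))
    target = cv2.resize(quasi_target, new_size, interpolation=cv2.INTER_AREA)
    mask_s = cv2.resize(mask, new_size, interpolation=cv2.INTER_NEAREST)
    return target, mask_s, 1.0 / scale      # rate = sqrt(a / A) > 1
```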
It can be understood that the above eyeball center positioning method can be iterated. The larger the number of iterations, the higher the precision of the eyeball center positioning may be, although the precision may also decrease, so finding a suitable number of iterations is important.
Before step S104, the above eyeball center positioning method further includes: setting a maximum number of iterations and setting the current iteration count to 0. After step S111, the above eyeball center positioning method further includes:
adding 1 to the current iteration count; judging whether the current iteration count is greater than or equal to the maximum number of iterations; when the current iteration count is greater than or equal to the maximum number of iterations, performing step S112; and when the current iteration count is less than the maximum number of iterations, taking the inverted result map as the target image and returning to step S104.
The maximum number of iterations may be a positive integer greater than or equal to 1 and less than or equal to 4, and may of course also be another positive integer, such as 5 or 6.
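The iteration scheme can be summarized, purely as a control-flow sketch that reuses the earlier illustrative functions (sobel_gradients, minmax_normalize, invert_with_mask, response_map, weighted_average_coordinate) and assumes max_iterations >= 1:

```python
import numpy as np

def locate_eye_center(target_gray, mask, max_iterations=3):
    """Iterated pipeline: steps S104-S111 are repeated max_iterations times,
    feeding the inverted result map back in as the target image, before the
    weighted average coordinate is computed (steps S112-S113)."""
    current = target_gray.astype(np.float32)
    for _ in range(max_iterations):
        gX, gY, _ = sobel_gradients(current)                     # step S104
        gX_n, gY_n = minmax_normalize(gX), minmax_normalize(gY)  # steps S105-S106
        weight = invert_with_mask(current, mask)                 # step S107
        result = response_map(weight, gX_n, gY_n)                # steps S108-S109
        norm_result = 255.0 * minmax_normalize(result)           # step S110
        inv_result = 255.0 - norm_result                         # step S111
        current = inv_result                                     # feed back as the target image
    return weighted_average_coordinate(inv_result)               # steps S112-S113
```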
To give those skilled in the art a better sense of the accuracy of the eyeball center positioning method provided by the embodiments of the present application, the applicant also ran tests.
36,000 face images were extracted and tested with the eyeball center positioning method. With the maximum number of iterations NMax = 3 and the area threshold A = 1000, the mean error was Meanerror = 0.1217*r and the standard deviation was Stderror = 0.1086*r, where r is the eyeball radius.
Measured on a Macbook Pro (Retina, 15-inch, Mid 2015) with OS X 10.11 and XCode 7.3, the average computation time per image was only 2.57 ms.
In practical applications, to satisfy the requirements of beauty-pupil applications, the mean error Meanerror of eyeball center positioning is usually required to be less than 0.15*r and the standard deviation Stderror less than 0.15*r; the eyeball center positioning method provided by the embodiments of the present application fully meets these requirements.
Referring to Fig. 10, which is a schematic structural diagram of an eyeball center positioning apparatus provided by an embodiment of the present application, the eyeball center positioning apparatus includes: a first obtaining module 1001, a second obtaining module 1002, a third obtaining module 1003, a fourth obtaining module 1004, a fifth obtaining module 1005, a sixth obtaining module 1006, a seventh obtaining module 1007, a first computing module 1008, an eighth obtaining module 1009, a ninth obtaining module 1010, a tenth obtaining module 1011, a second computing module 1012 and a determining module 1013, in which:
the first obtaining module 1001 is configured to obtain coordinates of outer contour feature points of an eye in an image.
The outer contour feature points of the eye can be obtained in many ways, for example with the ASM (Active Shape Model) method or with a neural network method. Reference may be made to the description of Fig. 2, which is not repeated here.
The second obtaining module 1002 is configured to obtain, according to the coordinates of the outer contour feature points, a target image containing the eye.
The third obtaining module 1003 is configured to set the pixel value of each pixel of the eye-interior image enclosed by the outer contour feature points in the target image to 255, and the pixel value of each pixel of the eye-exterior image in the target image to 0, to obtain a mask grayscale image.
The eye-interior image enclosed by the outer contour feature points may be obtained by Bézier-curve drawing.
The fourth obtaining module 1004 is configured to calculate the x-direction gradient gX(i, j) and the y-direction gradient gY(i, j) of each pixel (i, j) in the target image, to obtain an x-direction gradient map composed of the x-direction gradients of the pixels and a y-direction gradient map composed of the y-direction gradients of the pixels.
The x-direction gradient map and the y-direction gradient map can be calculated with a first-order differential operator such as Sobel or Prewitt. Taking the Sobel operator as an example, the fourth obtaining module 1004 may include a first obtaining unit configured to calculate the x-direction gradient map and the y-direction gradient map by the following formulas:
gX(i, j) = [Src(i-1, j+1) + 2·Src(i, j+1) + Src(i+1, j+1)] - [Src(i-1, j-1) + 2·Src(i, j-1) + Src(i+1, j-1)];
gY(i, j) = [Src(i+1, j-1) + 2·Src(i+1, j) + Src(i+1, j+1)] - [Src(i-1, j-1) + 2·Src(i-1, j) + Src(i-1, j+1)];
where Src(i, j) denotes the pixel value of pixel (i, j) in the target image.
The fifth obtaining module 1005 is configured to normalize each x-direction gradient in the x-direction gradient map to obtain a normalized x-direction gradient map.
The sixth obtaining module 1006 is configured to normalize each y-direction gradient in the y-direction gradient map to obtain a normalized y-direction gradient map.
The seventh obtaining module 1007 is configured to invert, according to the target image and the mask grayscale image, the color of each pixel in the target image to obtain an inverted target image.
The seventh obtaining module 1007 may include a second obtaining unit configured to obtain the inverted target image according to the following formula:
Weight_{i,j} = (255 - Src_{i,j}) × Mask_{i,j} / 255;
where Weight_{i,j} is the pixel value of pixel (i, j) in the inverted target image, Src_{i,j} is the pixel value of pixel (i, j) in the target image, and Mask_{i,j} is the pixel value of pixel (i, j) in the mask grayscale image.
The first computing module 1008 is configured to perform the following operation for each pixel (i, j) with a non-zero pixel value in the inverted target image:
calculating the unit gradient vector G_{i,j} = (gX'(i, j), gY'(i, j)) of the pixel (i, j) and the sum of squared dot products of G_{i,j} with the unit location vector P_{I,J} of each pixel (I, J) in the gradient map, Sum_{i,j} = Σ_{(I,J)∈Ω} (P_{I,J}·G_{i,j})², where I is an integer greater than or equal to 0 and less than the total number of rows M of pixels in the gradient map, J is an integer greater than or equal to 0 and less than the total number of columns N of pixels in the gradient map, gX'(i, j) is the pixel value of the normalized x-direction gradient map at pixel (i, j), and gY'(i, j) is the pixel value of the normalized y-direction gradient map at pixel (i, j).
The eighth obtaining module 1009 is configured to obtain a result map whose pixel values are the Sum_{i,j} corresponding to each pixel of the inverted target image.
The ninth obtaining module 1010 is configured to normalize each pixel value in the result map to the range 0 to 255 to obtain a normalized result map.
The tenth obtaining module 1011 is configured to invert the color of each pixel in the normalized result map to obtain an inverted result map.
The tenth obtaining module 1011 may include a third obtaining unit configured to obtain the inverted result map by the following formula:
Sum'_{i,j} = 255 - 255 × (Sum_{i,j} - minSum) / (maxSum - minSum);
where Sum'_{i,j} is the pixel value of pixel (i, j) in the inverted result map, Sum_{i,j} is the pixel value of pixel (i, j) in the result map, minSum is the smallest pixel value in the result map, and maxSum is the largest pixel value in the result map.
The second computing module 1012 is configured to calculate, according to the pixel value of each pixel in the inverted result map and the coordinates of each pixel, the weighted average coordinate of the pixels in the inverted result map.
The second computing module 1012 may include a fourth obtaining unit configured to obtain the weighted average coordinate by the following formulas:
Vi = Σ_{i,j} i × f(Sum'_{i,j}) / Σ_{i,j} f(Sum'_{i,j});
Vj = Σ_{i,j} j × f(Sum'_{i,j}) / Σ_{i,j} f(Sum'_{i,j});
where Sum'_{i,j} is the pixel value of pixel (i, j) in the inverted result map, (Vi, Vj) is the weighted average coordinate, and the function f(Sum'_{i,j}) is a mapping function that maps the 0-to-255 pixel values of the inverted result map to a preset range, such that the smaller Sum'_{i,j} is, the larger f(Sum'_{i,j}) is.
The determining module 1013 is configured to determine the eyeball center coordinate according to the weighted average coordinate.
In the eyeball center positioning apparatus provided by the embodiments of the present application, the second obtaining module 1002 obtains, from the coordinates of the outer contour feature points of the eye, a target image containing the eye, and the third obtaining module 1003 then obtains the mask grayscale image of the target image. Using the fact that the eyeball is dark, the seventh obtaining module 1007 inverts the color of each pixel in the target image according to the target image and the mask grayscale image to obtain an inverted target image. The fourth obtaining module 1004 calculates the x-direction gradient gX(i, j) and the y-direction gradient gY(i, j) of each pixel (i, j) in the target image to obtain an x-direction gradient map and a y-direction gradient map; the fifth obtaining module 1005 then normalizes each x-direction gradient in the x-direction gradient map to obtain a normalized x-direction gradient map, and the sixth obtaining module 1006 normalizes each y-direction gradient in the y-direction gradient map to obtain a normalized y-direction gradient map. The first computing module 1008 calculates the unit gradient vector G_{i,j} = (gX'(i, j), gY'(i, j)) of each pixel (i, j) and the sum of squared dot products Sum_{i,j} of G_{i,j} with the unit location vector of each pixel (I, J) in the gradient map; the eighth obtaining module 1009 obtains a result map whose pixel values are the Sum_{i,j} corresponding to each pixel of the inverted target image; the ninth obtaining module 1010 normalizes each pixel value in the result map to the range 0 to 255 to obtain a normalized result map; the tenth obtaining module 1011 inverts the color of each pixel in the normalized result map to obtain an inverted result map; the second computing module 1012 calculates the weighted average coordinate of the pixels in the inverted result map from the pixel value and coordinates of each pixel in the inverted result map; and the determining module 1013 determines the eyeball center coordinate according to the weighted average coordinate. The eyeball center is thereby determined accurately.
Referring to Fig. 11, which is a schematic structural diagram of one implementation of the second obtaining module in an eyeball center positioning apparatus provided by an embodiment of the present application, the second obtaining module includes: a fifth obtaining unit 1101, a first determination unit 1102, a judging unit 1103, a scaling unit 1104 and a second determination unit 1105, in which:
the fifth obtaining unit 1101 is configured to obtain the bounding rectangle of the outer contour feature points;
the first determination unit 1102 is configured to determine the region enclosed by the bounding rectangle as a quasi-target image;
the judging unit 1103 is configured to judge whether the image area of the quasi-target image is greater than an area threshold;
the scaling unit 1104 is configured to, when the image area a of the quasi-target image is greater than the area threshold A, reduce the length and width of the quasi-target image according to the zoom factor rate = sqrt(a/A), i.e. multiply each of them by sqrt(A/a), to obtain the scaled target image;
the second determination unit 1105 is configured to, when the image area a of the quasi-target image is less than or equal to the area threshold A, determine the quasi-target image as the target image.
Correspondingly, the determining module 1013 includes: a third determination unit configured to determine the weighted average coordinate as the eyeball center coordinate when the target image is the quasi-target image; and a fourth determination unit configured to take the product of the weighted average coordinate and the zoom factor rate as the eyeball center coordinate when the target image is the image obtained by scaling the quasi-target image.
It can be understood that the above eyeball center positioning method can be iterated. The larger the number of iterations, the higher the precision of the eyeball center positioning may be, although the precision may also decrease, so finding a suitable number of iterations is important.
The eyeball center positioning apparatus may further include: a setup module configured to set a maximum number of iterations and set the current iteration count to 0; an addition module configured to add 1 to the current iteration count; a judgment module configured to judge whether the current iteration count is greater than or equal to the maximum number of iterations; a first trigger module configured to trigger the second computing module 1012 when the current iteration count is greater than or equal to the maximum number of iterations; and a second trigger module configured to take the inverted result map as the target image and trigger the fourth obtaining module 1004 when the current iteration count is less than the maximum number of iterations.
An embodiment of the present application also provides an eyeball center positioning system, which includes a processor and a memory, in which:
the memory is configured to store instructions executable by the processor;
the processor is configured to:
obtain coordinates of outer contour feature points of an eye in an image;
obtain, according to the coordinates of the outer contour feature points, a target image containing the eye;
set the pixel value of each pixel of the eye-interior image enclosed by the outer contour feature points in the target image to 255, and the pixel value of each pixel of the eye-exterior image in the target image to 0, to obtain a mask grayscale image;
calculate the x-direction gradient gX(i, j) and the y-direction gradient gY(i, j) of each pixel (i, j) in the target image, to obtain an x-direction gradient map composed of the x-direction gradients of the pixels and a y-direction gradient map composed of the y-direction gradients of the pixels;
normalize each x-direction gradient in the x-direction gradient map to obtain a normalized x-direction gradient map;
normalize each y-direction gradient in the y-direction gradient map to obtain a normalized y-direction gradient map;
invert, according to the target image and the mask grayscale image, the color of each pixel in the target image to obtain an inverted target image;
perform the following operation for each pixel (i, j) with a non-zero pixel value in the inverted target image:
calculating the unit gradient vector G_{i,j} = (gX'(i, j), gY'(i, j)) of the pixel (i, j) and the sum of squared dot products of G_{i,j} with the unit location vector P_{I,J} of each pixel (I, J) in the gradient map, Sum_{i,j} = Σ_{(I,J)∈Ω} (P_{I,J}·G_{i,j})², where I is an integer greater than or equal to 0 and less than the total number of rows M of pixels in the gradient map, J is an integer greater than or equal to 0 and less than the total number of columns N of pixels in the gradient map, gX'(i, j) is the pixel value of the normalized x-direction gradient map at pixel (i, j), and gY'(i, j) is the pixel value of the normalized y-direction gradient map at pixel (i, j);
obtain a result map whose pixel values are the Sum_{i,j} corresponding to each pixel of the inverted target image;
normalize each pixel value in the result map to the range 0 to 255 to obtain a normalized result map;
invert the color of each pixel in the normalized result map to obtain an inverted result map;
calculate, according to the pixel value of each pixel in the inverted result map and the coordinates of each pixel, the weighted average coordinate of the pixels in the inverted result map;
determine the eyeball center coordinate according to the weighted average coordinate.
It should be noted that the embodiments in this specification are described in a progressive manner: each embodiment focuses on what distinguishes it from the other embodiments, and the same or similar parts of the embodiments may be referred to one another.
The foregoing description of the disclosed embodiments enables those skilled in the art to implement or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the general principles defined herein can be implemented in other embodiments without departing from the spirit or scope of the present invention. Therefore, the present invention is not intended to be limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.