Summary of the invention
The method and apparatus for obtaining a face point cloud in robot spatial registration provided by the embodiments of the present application obtain the face point cloud from feature points extracted from facial images, without requiring auxiliary equipment such as structured light, thereby reducing the complexity of the acquisition steps and of the apparatus used. Moreover, in the embodiments of the present application, an extraction region is first determined from a facial region located in a gray-level image, and the facial image is then obtained from that extraction region, which shortens the time needed to determine the facial image from the gray-level image, shortens the overall acquisition time of the face point cloud, and thus improves the acquisition efficiency of the face point cloud.
To achieve the above objectives, the embodiments of the present invention adopt the following technical schemes.
In one aspect, an embodiment of the present application provides a method for obtaining a face point cloud in robot spatial registration, including: determining, according to two gray-level images captured from different angles, a first facial region and a second facial region corresponding respectively to the two gray-level images; determining, according to the first facial region and the second facial region, a first extraction region corresponding to the first facial region and a second extraction region corresponding to the second facial region; obtaining a first facial image and a second facial image according to the first extraction region and the second extraction region; and matching feature points obtained from the first facial image with feature points obtained from the second facial image to obtain the face point cloud.
Optionally, determining the first extraction region corresponding to the first facial region and the second extraction region corresponding to the second facial region according to the first facial region and the second facial region includes the following step: with the first facial region and the second facial region as centers, enlarging the first facial region and the second facial region by a set multiple, respectively, to obtain the first extraction region corresponding to the first facial region and the second extraction region corresponding to the second facial region.
Optionally, the value range of the set multiple is 2.25 to 9.
Optionally, the value of the set multiple is 6.25.
Optionally, the step of matching the feature points obtained from the first facial image with the feature points obtained from the second facial image to obtain the face point cloud specifically includes: obtaining the feature points in the first facial image and the feature points in the second facial image according to the first facial image and the second facial image; matching feature points obtained from the first facial image with feature points obtained from the second facial image that have identical pixel values; and obtaining the face point cloud according to the matched point pairs.
Optionally, the step of obtaining the feature points in the first facial image and the feature points in the second facial image according to the first facial image and the second facial image includes: performing Gaussian blur processing on the first facial image to obtain three Gaussian images of different blur levels corresponding to the first facial image; among the first facial image and the three Gaussian images of different blur levels, subtracting each pair of adjacent images in order of clarity from high to low, to obtain three difference images; determining the feature points in the first facial image according to the comparison of each pixel of the middle difference image with the pixels at corresponding positions in the two adjacent difference images; performing Gaussian blur processing on the second facial image to obtain three Gaussian images of different blur levels corresponding to the second facial image; among the second facial image and its three Gaussian images of different blur levels, subtracting each pair of adjacent images in order of clarity from high to low, to obtain three difference images; and determining the feature points in the second facial image according to the comparison of each pixel of the middle difference image with the pixels at corresponding positions in the two adjacent difference images.
Optionally, the specific step of determining the feature points in the first facial image according to the comparison of each pixel of the middle difference image with the pixels at corresponding positions in the two adjacent difference images includes: comparing a pixel in the middle difference image with its 8 neighboring pixels and, in each of the two adjacent difference images, with the pixel at the corresponding position and the pixels at the 8 neighboring positions; and, if the pixel value of the pixel is the maximum or minimum value among them, determining that the point in the first facial image corresponding to the pixel is a feature point in the first facial image.
Optionally, the specific step of determining the feature points in the second facial image according to the comparison of each pixel of the middle difference image with the pixels at corresponding positions in the two adjacent difference images includes: comparing a pixel in the middle difference image with its 8 neighboring pixels and, in each of the two adjacent difference images, with the pixel at the corresponding position and the pixels at the 8 neighboring positions; and, if the pixel value of the pixel is the maximum or minimum value among them, determining that the point in the second facial image corresponding to the pixel is a feature point in the second facial image.
Optionally, the step of obtaining the face point cloud according to the matched point pairs includes: searching the matched feature-point pairs for the point pairs that correspond to the same actual point on the face; and obtaining the face point cloud according to the point pairs found.
Optionally, before determining the first facial region and the second facial region corresponding respectively to the two gray-level images, the method further includes: performing non-linear bilateral filtering on the two gray-level images captured from different angles.
In another aspect, an embodiment of the present application further provides an apparatus for obtaining a face point cloud in robot spatial registration, including: a facial region determining module, configured to determine, according to two gray-level images captured from different angles, a first facial region and a second facial region corresponding respectively to the two gray-level images; an extraction region determining module, configured to determine, according to the first facial region and the second facial region obtained by the facial region determining module, a first extraction region corresponding to the first facial region and a second extraction region corresponding to the second facial region; a facial image extraction module, configured to obtain a first facial image and a second facial image according to the first extraction region and the second extraction region; and a face point cloud obtaining module, configured to match the feature points in the first facial image and the second facial image determined by the facial image extraction module, so as to obtain the face point cloud.
In summary, in the method and apparatus for obtaining a face point cloud in robot spatial registration provided by the embodiments of the present invention, a facial region is first determined from a gray-level image containing a frontal view of the face, an extraction region for extracting the facial image is then determined from the facial region, and feature points are selected from the extracted facial image; the face point cloud data is obtained by combining each feature point with its three-dimensional coordinate values on the face. Because the embodiments of the present application do not rely on structured light when obtaining the facial images, the obtained facial images are not affected by structured light and are very close to the true face, so more accurate feature points are obtained. Furthermore, the embodiments of the present application do not need to identify and mark particular facial parts such as the nose, eyes, or canthi in order to obtain the feature points in the facial image, which effectively reduces the computation required in the face point cloud acquisition process, shortens the time needed to obtain the face point cloud, and improves the acquisition efficiency of the point cloud.
Specific embodiment
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
As shown in Figure 1, the method for obtaining a face point cloud in robot spatial registration provided by the embodiments of the present application includes:
S100: determining, according to two gray-level images captured from different angles, a first facial region and a second facial region corresponding respectively to the two gray-level images.
In step S100, the gray-level images of two different perspectives can be acquired by a binocular camera. For the subsequent steps to succeed, each of the two gray-level images should contain an image of the face from its own perspective. In actual acquisition, to improve the quality of the two gray-level images of different perspectives, the distance between the center of the binocular camera and the tip of the nose is preferably set between 600 mm and 1200 mm, and more preferably to 1000 mm.
After the two gray-level images of different angles are obtained, facial features are detected in each gray-level image, and the first facial region and the second facial region corresponding respectively to the two gray-level images are determined according to the detected facial features. In a specific implementation, Haar classifiers may be used to detect the facial features, combined with the AdaBoost algorithm, to determine the respective facial region in each of the two gray-level images.
Further, in order to eliminate noise in the acquired gray-level images without blurring their edges, the embodiments of the present application may also perform non-linear bilateral filtering on the two gray-level images captured from different angles, before determining the first facial region and the second facial region corresponding respectively to the two gray-level images.
S200: determining, according to the first facial region and the second facial region, the first extraction region corresponding to the first facial region and the second extraction region corresponding to the second facial region.
Specifically, with the first facial region and the second facial region as centers, the first facial region and the second facial region are each enlarged by the set multiple, to obtain the first extraction region corresponding to the first facial region and the second extraction region corresponding to the second facial region.
In the present embodiment, the first extraction region and the second extraction region corresponding respectively to the first facial region and the second facial region are determined. The size relation between the gray-level image P, the extraction region B, and the facial region F is shown in Fig. 2, where the ellipse schematically represents the face.
It should be noted that the step of determining the first extraction region according to the first facial region and the step of determining the second extraction region according to the second facial region are essentially identical; the determination of the first extraction region corresponding to the first facial region according to the first facial region is described here as an example.
As shown in Fig. 2, assuming that the size of the first facial region is M, the region is enlarged about its center; that is, with the first facial region as the center, the first facial region is enlarged by the set multiple K to obtain the first extraction region of size T, so that T = K * M.
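As a sketch of this enlargement, treating the facial region as an axis-aligned rectangle: reading the multiple K as an area multiple, so that each side scales by sqrt(K), is an assumption on our part (suggested by 2.25, 9, and 6.25 being the squares of 1.5, 3, and 2.5), since the text only states T = K * M for the sizes. The clipping helper reflects that the enlarged region may extend past the captured gray-level image.

```python
import math

def extraction_region(face_rect, k=6.25):
    """Enlarge a face rectangle (x, y, w, h) about its center by multiple k.

    Assumption: k is an area multiple, so each side is scaled by sqrt(k);
    the source only states T = K * M for the region sizes.
    """
    x, y, w, h = face_rect
    s = math.sqrt(k)
    cx, cy = x + w / 2.0, y + h / 2.0      # center of the facial region
    new_w, new_h = w * s, h * s            # enlarged sides
    return (cx - new_w / 2.0, cy - new_h / 2.0, new_w, new_h)

def clamp_to_image(rect, img_w, img_h):
    """Clip the extraction rectangle to the image bounds, since the enlarged
    region may extend past the captured gray-level image."""
    x, y, w, h = rect
    x0, y0 = max(0.0, x), max(0.0, y)
    x1, y1 = min(float(img_w), x + w), min(float(img_h), y + h)
    return (x0, y0, x1 - x0, y1 - y0)
```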
It should be noted that, in order to obtain a complete facial image from the extraction region while keeping the time for obtaining the facial image as short as possible, the area of the extraction region in the embodiments of the present application is larger than that of the facial region. In a specific implementation, the value range of the multiple K may be from 2.25 to 9, inclusive of both boundary values.
Further, the value of K may be 6.25.
It should also be noted that, in an actually acquired image, the area occupied by the face is limited. As is known to those skilled in the art, the acquisition range of the camera usually does not change, and during subsequent spatial matching the image region outside the face is also needed, for example when a mechanical arm moves within the acquisition range of the image and corresponding images are acquired. If the face occupies too large a proportion of the image, the subsequent spatial registration procedure cannot be carried out; and if the computed extraction region turns out to be larger than the whole image, those skilled in the art will also envisage extracting the facial image directly from the entire acquired gray-level image.
Through the above steps, after the first extraction region corresponding to the first facial region is determined, step S300 can be executed.
S300: obtaining the first facial image and the second facial image according to the first extraction region and the second extraction region.
In actual operation, an existing algorithm, for example the GrabCut algorithm, can be used to obtain the facial image from the extraction region, which is not described here again.
After the facial images are obtained, step S400 is executed.
S400: matching the feature points obtained from the first facial image with the feature points obtained from the second facial image to obtain the face point cloud.
In actual operation, as shown in Fig. 3, step S400 can be executed in three steps. Specifically, it can be divided into step S410: obtaining the feature points in the first facial image and the feature points in the second facial image according to the first facial image and the second facial image, that is, selecting feature points from the first facial image and the second facial image respectively; step S420: matching feature points obtained from the first facial image with feature points obtained from the second facial image that have identical pixel values; and step S430: obtaining the face point cloud according to the point pairs obtained after matching.
This step first requires obtaining feature points from the first facial image and the second facial image, that is, executing step S410; an existing method may be used to extract the feature points of the face from the images.
A preferred technical solution is provided here: Gaussian blur processing is performed on the first facial image to obtain three Gaussian images of different blur levels corresponding to the first facial image; among the first facial image and the three Gaussian images of different blur levels, each pair of adjacent images is subtracted in order of clarity from high to low, to obtain three difference images; and the feature points in the first facial image are determined according to the comparison of each pixel of the middle difference image with the pixels at corresponding positions in the two adjacent difference images. Likewise, Gaussian blur processing is performed on the second facial image to obtain three Gaussian images of different blur levels corresponding to the second facial image; among the second facial image and its three Gaussian images of different blur levels, each pair of adjacent images is subtracted in order of clarity from high to low, to obtain three difference images; and the feature points in the second facial image are determined according to the comparison of each pixel of the middle difference image with the pixels at corresponding positions in the two adjacent difference images.
Taking the first facial image as an example, in actual operation the above scheme is specifically as follows: Gaussian blur processing is performed on the obtained first facial image to create three Gaussian blur images of different blur levels corresponding to the first facial image. Denote the first facial image as m1 and the three Gaussian blur images as m2, m3, and m4. That is, after Gaussian blur processing of the first facial image, four gray-level images are obtained: m1, m2, m3, and m4, where m4 is blurrier than m3, m3 is blurrier than m2, and m2 is blurrier than m1. In other words, ordered by clarity, m1 > m2 > m3 > m4.
After the four gray-level images m1, m2, m3, and m4 are obtained, the values of the pixels at the same positions in image m1 and image m2 are subtracted to obtain image j1, those of image m2 and image m3 are subtracted to obtain image j2, and those of image m3 and image m4 are subtracted to obtain image j3. Image j2 here is the middle image mentioned above, and j1 and j3 are the two images adjacent to the middle image j2.
It should be noted that, during the above image subtraction, if the difference of the pixel values is less than 0, the pixel value at that position in the resulting image is set to 0. For example, if the difference between the value of a pixel at a certain position in image m1 and that of the pixel at the corresponding position in image m2 is -1, then the pixel value at that position in the image j1 obtained by the subtraction is set to 0.
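The subtraction with the set-to-zero rule can be sketched directly; this mirrors the clamping behavior described above (for example, a difference of -1 becomes 0) and is a sketch, not the embodiment's own code.

```python
import numpy as np

def clamped_diff(a, b):
    """Per-pixel a - b, with any negative difference set to 0."""
    a = np.asarray(a, dtype=np.float64)
    b = np.asarray(b, dtype=np.float64)
    return np.maximum(a - b, 0.0)

def difference_images(m1, m2, m3, m4):
    """j1 = m1 - m2, j2 = m2 - m3, j3 = m3 - m4, each clamped at 0;
    j2 is the middle image, j1 and j3 its two adjacent images."""
    return clamped_diff(m1, m2), clamped_diff(m2, m3), clamped_diff(m3, m4)
```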
After images j1, j2, and j3 are obtained, each pixel in image j2 is compared with its 8 neighboring pixels in j2, with the pixel at the corresponding position in image j1 together with the pixels at the 8 neighboring positions there, and with the pixel at the corresponding position in image j3 together with the pixels at the 8 neighboring positions there. If the pixel value of the pixel in image j2 is the maximum or minimum value among them, the point in the first facial image corresponding to that pixel is determined to be a feature point of the first facial image.
In a specific implementation, as shown in Fig. 4, the pixel at position X in image j2 is compared with a total of 26 pixels: its 8 neighboring pixels in this image, the pixel at position X in image j1 together with the pixels corresponding to the above 8 neighbors, and the pixel at position X in image j3 together with the pixels corresponding to the above 8 neighbors. If the comparison result is that the pixel value at position X is the maximum or minimum value, the pixel at position X is determined to be a feature point of the first facial image. By comparing every pixel in image j2 with the pixels at the corresponding positions in images j1 and j3 in turn, all the feature points of the first facial image can be obtained. The edge pixels of the image cannot complete the above comparison and are therefore not extracted as feature points.
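The 26-value comparison of Fig. 4 can be sketched as follows. This is a minimal implementation that assumes strict extremality (the text does not say how ties are handled) and skips edge pixels, as stated above.

```python
import numpy as np

def local_extrema(j1, j2, j3):
    """Return (row, col) positions in j2 whose value is the strict maximum or
    minimum of the 26 surrounding values: its 8 neighbours in j2 plus the 3x3
    neighbourhoods at the same position in j1 and j3. Edge pixels are skipped,
    as they lack a full neighbourhood."""
    h, w = j2.shape
    points = []
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            v = j2[i, j]
            # Stack the three 3x3 neighbourhoods (27 values incl. v itself).
            cube = np.stack([lay[i - 1:i + 2, j - 1:j + 2]
                             for lay in (j1, j2, j3)])
            others = np.delete(cube.ravel(), 13)  # drop v (centre of j2 layer)
            if v > others.max() or v < others.min():
                points.append((i, j))
    return points
```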
The feature points of the second facial image can be obtained in the same way. Next, step S420 is executed to match the feature points in the first facial image and the second facial image, that is, to match feature points selected from the first facial image with feature points selected from the second facial image that have identical pixel values.
Fig. 5 is a schematic diagram of the process of matching the first facial image with the second facial image provided by the embodiments of the present application. Assume the facial image on the left in Fig. 5 is image A and the facial image on the right is image B. It should be noted that when feature points with identical pixel values in image A and image B are matched, some feature points in image A may have no feature point with the same pixel value in image B. The embodiments of the present application connect the feature points with identical pixel values in image A and image B with lines; a feature point without a line therefore means that no pixel with the same pixel value can be found in the other facial image. Once the pixels with identical pixel values in image A and image B have been connected, the matching of the two facial images is completed; Fig. 6 shows the matched facial images at the two angles. Step S430 can then combine the matched point pairs with the relative coordinates of each point obtained by the binocular or multi-view vision system under its coordinate system, so as to obtain the coordinates of the spatially registered points and complete the acquisition of the face point cloud.
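The value-based matching of step S420 can be sketched as a lookup. This is a toy sketch assuming each feature point carries its position and pixel value, and that at most one candidate per value is kept; the embodiment does not specify how several points with the same value are resolved.

```python
def match_by_value(points_a, points_b):
    """points_a/points_b: lists of ((row, col), value) feature points from
    images A and B. Returns ((row, col) in A, (row, col) in B) pairs whose
    pixel values are identical; points without a same-valued partner are
    simply left unmatched, like the feature points without lines in Fig. 5."""
    by_value = {}
    for pos, val in points_b:
        by_value.setdefault(val, pos)  # keep the first point seen per value
    pairs = []
    for pos, val in points_a:
        if val in by_value:
            pairs.append((pos, by_value[val]))
    return pairs
```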
Preferably, after the matching of the two facial images at different angles is completed, it may happen that feature points with identical pixel values in the facial images at different angles do not lie at the same position on the actual face; such feature points are mismatched feature points. The mismatched feature points therefore need to be removed after image matching, so that the face point cloud is obtained from the feature points remaining after the mismatched points are removed. In a specific implementation, the point pairs that correspond to the same actual point on the face can be searched among the matched feature-point pairs, the qualifying point pairs are taken as the matched point pairs, and the face point cloud is obtained according to the matched point pairs found.
It should be noted that the RANSAC algorithm can be used to remove the mismatched feature points.
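As a toy illustration of the RANSAC principle mentioned above: the sketch below uses a simple 2-D translation model between the two point sets (a real stereo pipeline would fit an epipolar model), and, because this model needs only one pair, it deterministically tries every pair as the minimal sample instead of sampling at random. The model choice and the threshold are our assumptions, not the embodiment's.

```python
def ransac_filter(pairs, threshold=2.0):
    """pairs: list of ((xa, ya), (xb, yb)) matched feature points.
    Hypothesise the translation implied by each single pair and keep the
    largest set of pairs consistent with it; the rest are mismatches."""
    best = []
    for (xa, ya), (xb, yb) in pairs:       # every pair is a minimal sample
        dx, dy = xb - xa, yb - ya          # hypothesised translation model
        inliers = [(pa, pb) for pa, pb in pairs
                   if abs((pb[0] - pa[0]) - dx) <= threshold
                   and abs((pb[1] - pa[1]) - dy) <= threshold]
        if len(inliers) > len(best):
            best = inliers
    return best
```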
After the mismatched points are removed, the remaining feature points in the facial images at different angles correspond one to one, and each pair of corresponding feature points reflects the same point on the actual face seen from different angles. Moreover, at the same time as the binocular vision system acquires the gray-level images at the two viewing angles, the coordinates of each pixel in those gray-level images can be obtained. In actual operation, the coordinates of the face point cloud for spatial registration can then be obtained from the feature points remaining after mismatch removal by the triangulation principle, thereby obtaining the face point cloud.
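The triangulation principle for a rectified binocular rig can be sketched as follows. The focal length (in pixels) and baseline values are illustrative; a real system would use the calibrated parameters of its own cameras, and the formulas assume image coordinates already centred on the principal point.

```python
def triangulate(x_left, x_right, y, focal_px, baseline_mm):
    """Depth of a matched point pair in a rectified stereo pair: disparity
    d = x_left - x_right gives Z = f * B / d, and X, Y follow by similar
    triangles. Returns (X, Y, Z) in the left-camera frame, in mm."""
    d = x_left - x_right
    if d <= 0:
        raise ValueError("non-positive disparity: point at or beyond infinity")
    z = focal_px * baseline_mm / d
    x = x_left * z / focal_px
    y3 = y * z / focal_px
    return (x, y3, z)

def face_point_cloud(pairs, focal_px, baseline_mm):
    """Triangulate every matched pair ((x_l, y_l), (x_r, y_r)) into a 3-D
    point; y_l and y_r coincide for a rectified pair."""
    return [triangulate(xl, xr, yl, focal_px, baseline_mm)
            for (xl, yl), (xr, yr) in pairs]
```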
Based on the same inventive concept, as shown in Fig. 7, an embodiment of the present application further provides an apparatus for obtaining a face point cloud in robot spatial registration, including:
a facial region determining module 701, configured to determine, according to two gray-level images captured from different angles, a first facial region and a second facial region corresponding respectively to the two gray-level images;
an extraction region determining module 702, configured to determine, according to the first facial region and the second facial region obtained by the facial region determining module 701, a first extraction region corresponding to the first facial region and a second extraction region corresponding to the second facial region;
a facial image extraction module 703, configured to obtain a first facial image and a second facial image according to the first extraction region and the second extraction region obtained by the extraction region determining module 702;
a face point cloud obtaining module 704, configured to match the feature points in the first facial image and the second facial image determined by the facial image extraction module 703, so as to obtain the face point cloud.
In the present embodiment, the facial region determining module 701, the extraction region determining module 702, the facial image extraction module 703, and the face point cloud obtaining module 704 can execute the corresponding steps in the above method embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. For example, the apparatus embodiments described above are merely exemplary; the division into units is only a division by logical function, and there may be other division manners in actual implementation, for example multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be implemented through some interfaces, and the indirect couplings or communication connections between apparatuses or units may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, each unit may exist physically on its own, or two or more units may be integrated into one unit. The above integrated unit may be implemented in the form of hardware, or in the form of hardware plus a software functional unit.
The above integrated unit implemented in the form of a software functional unit may be stored in a computer-readable storage medium. The software functional unit is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute part of the steps of the methods of the embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash disk, a removable hard disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that the above embodiments are merely intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some of the technical features may be equivalently replaced, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.