
CN109409169A - Method and apparatus for obtaining a face point cloud in robot spatial registration - Google Patents

Method and apparatus for obtaining a face point cloud in robot spatial registration

Info

Publication number
CN109409169A
CN109409169A (application CN201710703134.3A)
Authority
CN
China
Prior art keywords
facial
image
region
facial image
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710703134.3A
Other languages
Chinese (zh)
Other versions
CN109409169B (en)
Inventor
张赛
刘达
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baihui Weikang Technology Co Ltd
Original Assignee
Beijing Baihui Weikang Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baihui Weikang Technology Co Ltd filed Critical Beijing Baihui Weikang Technology Co Ltd
Priority to CN201710703134.3A priority Critical patent/CN109409169B/en
Publication of CN109409169A publication Critical patent/CN109409169A/en
Application granted granted Critical
Publication of CN109409169B publication Critical patent/CN109409169B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

Embodiments of the present invention provide a method and apparatus for obtaining a face point cloud in robot spatial registration. The method includes: determining, from two grayscale images captured at different angles, a first face region and a second face region corresponding to the two grayscale images respectively; determining, from the first face region and the second face region, a first extraction region corresponding to the first face region and a second extraction region corresponding to the second face region; extracting a first face image and a second face image from the first extraction region and the second extraction region; and matching feature points obtained from the first face image against feature points obtained from the second face image to obtain the face point cloud. Embodiments of the present application can obtain a face point cloud without needing to identify specific facial sites, which reduces the preparation required before acquiring the point cloud and the amount of computation during acquisition, thereby shortening the time needed to obtain the face point cloud.

Description

Method and apparatus for obtaining a face point cloud in robot spatial registration
Technical field
The present invention relates to the field of image processing, and in particular to a method and apparatus for obtaining a face point cloud in robot spatial registration.
Background technique
Spatial registration is one of the key technologies in robot localization, and obtaining a face point cloud, as the front-end step of spatial registration, plays a very important role in the registration process.
Existing methods for obtaining a face point cloud in spatial registration normally need to extract identification points and feature points from an original face image. This not only requires identifying and calibrating specific facial sites such as the nose, eyes and eye corners, but also requires extracting texture features of the face surface. Because the texture of the face surface is not rich enough, the feature points in a face image captured by a camera are not distinct, so the prior art generally uses active projection to cast structured light onto the face surface in order to enrich its surface features.
In conclusion other than needing optical camera, being also additionally required knot in the extraction process of current face point cloud Structure light device is assisted, but also needs to identify the privileged sites of face, therefore, obtains face point cloud the step of ratio Cumbersome, operand is also grown so as to cause the time for obtaining face point cloud greatly.
Summary of the invention
The method and apparatus for obtaining a face point cloud in robot spatial registration provided by the embodiments of the present application obtain the face point cloud from feature points of the face images, without requiring auxiliary equipment such as structured light, which reduces the complexity of the procedure and of the required devices. Moreover, the embodiments of the present application first determine an extraction region from the face region found in the grayscale image, and then obtain the face image from that extraction region, which shortens the time needed to locate the face image in the grayscale image, reduces the overall acquisition time of the face point cloud, and improves acquisition efficiency.
In order to achieve the above objectives, the embodiments of the present invention adopt the following technical solutions.
In one aspect, an embodiment of the present application provides a method for obtaining a face point cloud in robot spatial registration, including: determining, according to two grayscale images captured at different angles, a first face region and a second face region corresponding to the two grayscale images respectively; determining, according to the first face region and the second face region, a first extraction region corresponding to the first face region and a second extraction region corresponding to the second face region; obtaining a first face image and a second face image according to the first extraction region and the second extraction region; and matching feature points obtained from the first face image against feature points obtained from the second face image to obtain the face point cloud.
Optionally, determining the first extraction region corresponding to the first face region and the second extraction region corresponding to the second face region according to the first face region and the second face region includes: enlarging the first face region and the second face region by a set multiple, centered on the first face region and the second face region respectively, to obtain the first extraction region corresponding to the first face region and the second extraction region corresponding to the second face region.
Optionally, the set multiple ranges from 2.25 to 9.
Optionally, the set multiple is 6.25.
Optionally, matching the feature points obtained from the first face image against the feature points obtained from the second face image to obtain the face point cloud specifically includes: obtaining the feature points in the first face image and the feature points in the second face image according to the first face image and the second face image; matching feature points obtained from the first face image against feature points obtained from the second face image that have the same pixel value; and obtaining the face point cloud according to the matched point pairs.
Optionally, obtaining the feature points in the first face image and the feature points in the second face image according to the first face image and the second face image includes: performing Gaussian blur processing on the first face image to obtain three Gaussian images of different blur levels corresponding to the first face image; among the first face image and the three Gaussian images of different blur levels, performing image subtraction between each pair of adjacent images in order of sharpness from high to low, to obtain three difference images; determining the feature points in the first face image according to comparisons between the pixels of the middle difference image and the pixels at corresponding positions in the two adjacent difference images; and performing Gaussian blur processing on the second face image to obtain three Gaussian images of different blur levels corresponding to the second face image; among the second face image and the three Gaussian images of different blur levels, performing image subtraction between each pair of adjacent images in order of sharpness from high to low, to obtain three difference images; and determining the feature points in the second face image according to comparisons between the pixels of the middle difference image and the pixels at corresponding positions in the two adjacent difference images.
Optionally, determining the feature points in the first face image according to comparisons between the pixels of the middle difference image and the pixels at corresponding positions in the two adjacent difference images specifically includes: comparing a pixel in the middle difference image with its 8 neighboring pixels, and with the points at the corresponding position and the 8 corresponding neighboring positions in the two adjacent difference images; and if the pixel value of the pixel is the maximum or minimum among them, determining that the point in the first face image corresponding to the pixel is a feature point of the first face image.
Optionally, determining the feature points in the second face image according to comparisons between the pixels of the middle difference image and the pixels at corresponding positions in the two adjacent difference images specifically includes: comparing a pixel in the middle difference image with its 8 neighboring pixels, and with the points at the corresponding position and the 8 corresponding neighboring positions in the two adjacent difference images; and if the pixel value of the pixel is the maximum or minimum among them, determining that the point in the second face image corresponding to the pixel is a feature point of the second face image.
Optionally, obtaining the face point cloud according to the matched point pairs includes: searching among the matched feature point pairs for the pairs that correspond to the same physical point on the actual face; and obtaining the face point cloud according to the point pairs found.
Optionally, before determining the first face region and the second face region corresponding to the two grayscale images respectively, the method further includes: performing nonlinear bilateral filtering on the two grayscale images captured at different angles.
In another aspect, an embodiment of the present application further provides an apparatus for obtaining a face point cloud in robot spatial registration, including: a face region determining module, configured to determine, according to two grayscale images captured at different angles, a first face region and a second face region corresponding to the two grayscale images respectively; an extraction region determining module, configured to determine, according to the first face region and the second face region obtained by the face region determining module, a first extraction region corresponding to the first face region and a second extraction region corresponding to the second face region; a face image extraction module, configured to obtain a first face image and a second face image according to the first extraction region and the second extraction region; and a face point cloud obtaining module, configured to match the feature points in the first face image and the second face image determined by the face image extraction module, so as to obtain the face point cloud.
In summary, the method and apparatus for obtaining a face point cloud in robot spatial registration provided by the embodiments of the present invention first determine a face region in a grayscale image containing a frontal view of the face, then determine from the face region an extraction region used for extracting the face image, and then select feature points from the extracted face image, combining each feature point with its three-dimensional coordinate values on the face to obtain the face point cloud data. Because the embodiments of the present application do not rely on structured light while obtaining the face image, the obtained face image is not affected by structured light and is therefore very close to the true face, yielding more accurate feature points. In addition, the embodiments of the present application do not need to identify and mark specific facial sites such as the nose, eyes or eye corners in order to obtain the feature points in the face image, which effectively reduces the amount of computation needed in the point cloud acquisition process, shortens the time needed to obtain the face point cloud, and improves acquisition efficiency.
Detailed description of the invention
In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the accompanying drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of a method for obtaining a face point cloud in robot spatial registration according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of the relationship between the grayscale image, the face region and the extraction region according to an embodiment of the present invention;
Fig. 3 is a flowchart of a specific implementation of step S400 of the flowchart shown in Fig. 1 according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of determining feature points in a face image by comparing pixel values according to an embodiment of the present invention;
Fig. 5 is a schematic diagram of matching the face images captured at different angles according to an embodiment of the present invention;
Fig. 6 is a schematic diagram of the face images at different angles after matching according to an embodiment of the present invention;
Fig. 7 is a schematic diagram of an apparatus for obtaining a face point cloud in robot spatial registration according to an embodiment of the present invention.
Specific embodiment
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the protection scope of the present invention.
As shown in Fig. 1, the method for obtaining a face point cloud in robot spatial registration provided by the embodiments of the present application includes:
S100: determining, according to two grayscale images captured at different angles, a first face region and a second face region corresponding to the two grayscale images respectively.
In step S100, the grayscale images of two different perspectives can be captured by a binocular camera. For the subsequent steps to succeed, the two grayscale images should each contain an image of the face from its own perspective. In practice, in order to improve the capture quality of the two grayscale images, the distance from the center of the binocular camera to the nose tip is preferably set between 600 mm and 1200 mm, and more preferably to 1000 mm.
After the two grayscale images at different angles are obtained, facial features are detected in each grayscale image, and the first face region and the second face region corresponding to the two grayscale images are determined from the detected facial features. In a specific implementation, a Haar classifier can be used to detect the facial features, combined with the AdaBoost algorithm, to determine the respective face region in each of the two grayscale images.
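The Haar-cascade-plus-AdaBoost detector mentioned above is normally used through a library such as OpenCV (`cv2.CascadeClassifier`). The boosting principle behind it can be sketched in a few lines. The toy below boosts threshold stumps on a single scalar feature, whereas the real detector boosts thousands of Haar-like rectangle features over image windows; it illustrates only the weighting mechanism, not the face detector itself, and all names and data are illustrative assumptions:

```python
import math

def train_adaboost_stumps(xs, ys, rounds=3):
    """Minimal AdaBoost with threshold stumps on a 1-D feature.
    xs: feature values, ys: labels in {-1, +1}. Returns a list of
    (threshold, polarity, alpha) weak classifiers."""
    n = len(xs)
    w = [1.0 / n] * n
    stumps = []
    for _ in range(rounds):
        best = None
        for t in sorted(set(xs)):
            for pol in (1, -1):
                err = sum(wi for xi, yi, wi in zip(xs, ys, w)
                          if (1 if pol * xi < pol * t else -1) != yi)
                if best is None or err < best[0]:
                    best = (err, t, pol)
        err, t, pol = best
        err = max(min(err, 1 - 1e-10), 1e-10)   # avoid log(0)
        alpha = 0.5 * math.log((1 - err) / err)
        stumps.append((t, pol, alpha))
        # re-weight: increase the weight of examples this stump got wrong
        w = [wi * math.exp(-alpha * yi * (1 if pol * xi < pol * t else -1))
             for xi, yi, wi in zip(xs, ys, w)]
        s = sum(w)
        w = [wi / s for wi in w]
    return stumps

def predict(stumps, x):
    """Sign of the alpha-weighted vote of all weak classifiers."""
    score = sum(a * (1 if p * x < p * t else -1) for t, p, a in stumps)
    return 1 if score >= 0 else -1
```

In the real detector, each weak classifier thresholds one Haar-like feature computed over an integral image, and the boosted classifiers are arranged in a rejection cascade for speed.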
Further, in order to remove noise from the captured grayscale images without blurring their edges, the embodiments of the present application may also perform nonlinear bilateral filtering on the two grayscale images captured at different angles, before determining the first face region and the second face region corresponding to the two grayscale images.
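The bilateral filter referred to here weights each neighbour by both spatial distance and intensity difference, which is what lets it smooth noise while preserving edges. A minimal pure-Python sketch of that weighting follows; a production pipeline would use an optimized routine such as OpenCV's `cv2.bilateralFilter`, and the parameter values below are illustrative assumptions:

```python
import math

def bilateral_filter(img, sigma_s=1.0, sigma_r=25.0, radius=2):
    """Edge-preserving smoothing of a 2-D grayscale image (list of lists,
    values 0-255). Each output pixel is a weighted mean of its neighbours,
    where the weight falls off with both spatial distance and intensity
    difference, so sharp edges are not averaged away."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            num = den = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        # spatial closeness * intensity similarity
                        ws = math.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2))
                        wr = math.exp(-((img[ny][nx] - img[y][x]) ** 2)
                                      / (2 * sigma_r ** 2))
                        num += ws * wr * img[ny][nx]
                        den += ws * wr
            out[y][x] = num / den
    return out
```

On a step edge, pixels on the dark side give near-zero weight to pixels on the bright side (their intensity difference dominates), so the edge survives the smoothing.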
S200: determining, according to the first face region and the second face region, a first extraction region corresponding to the first face region and a second extraction region corresponding to the second face region.
Specifically, centered on the first face region and the second face region respectively, the first face region and the second face region are each enlarged by a set multiple, to obtain the first extraction region corresponding to the first face region and the second extraction region corresponding to the second face region.
In this embodiment, the first extraction region and the second extraction region corresponding to the first face region and the second face region are determined. The size relationship between the grayscale image P, the extraction region B and the face region F is shown in Fig. 2, where the ellipse schematically represents the face.
It should be noted that the step of determining the first extraction region from the first face region is essentially the same as the step of determining the second extraction region from the second face region; the following description uses the determination of the first extraction region corresponding to the first face region as an example.
As shown in Fig. 2, assume that the size of the first face region is M. The region is expanded around the center of the first face region, i.e., with the first face region as the center, the first face region is enlarged by the set multiple K to obtain the first extraction region of size T, so that T = K * M.
It should be noted that, in order to obtain the complete face image from the extraction region while keeping the time needed to obtain the face image as short as possible, the area of the extraction region in the embodiments of the present application is larger than the face region. In a specific implementation, the value of the variable K can range from 2.25 to 9, inclusive of both boundary values.
Further, the value of K may be 6.25.
It should also be noted that in an actually captured image, the area occupied by the face is limited. As is known to those skilled in the art, the field of view of the camera usually does not change, and during subsequent spatial matching the parts of the image outside the face are also needed, for example when a robotic arm moves within the capture range of the image and corresponding images are acquired. If the face occupies too large a proportion of the image, the subsequent spatial registration procedure cannot be carried out; and if the computed extraction region is larger than the whole image, those skilled in the art will appreciate that the entire captured grayscale image can be used directly for extracting the face image.
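The relation T = K * M above, together with the clamping case just discussed (an enlarged region that would exceed the whole image), can be sketched as a small helper. Because the preferred values 2.25, 6.25 and 9 are perfect squares, this sketch assumes K is an area ratio, so each side of the face rectangle is scaled by sqrt(K); the patent text does not state this explicitly, so it is an assumption:

```python
import math

def extraction_region(face_box, k, img_w, img_h):
    """Given a detected face rectangle (x, y, w, h), return a region
    whose area is K times the face area, centred on the face and
    clamped to the image bounds (T = K * M, K assumed an area ratio)."""
    x, y, w, h = face_box
    s = math.sqrt(k)                      # per-side scale factor
    new_w, new_h = w * s, h * s
    cx, cy = x + w / 2, y + h / 2         # face centre
    x0 = max(0, cx - new_w / 2)
    y0 = max(0, cy - new_h / 2)
    x1 = min(img_w, cx + new_w / 2)
    y1 = min(img_h, cy + new_h / 2)
    return (x0, y0, x1 - x0, y1 - y0)
```

With K = 6.25, a 20x20 face box centred in a 200x200 image yields a 50x50 extraction region (each side scaled by 2.5); a face box at the image corner is simply clipped, matching the whole-image fallback described above.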
Through the above steps, after the first extraction region corresponding to the first face region is determined, step S300 can be executed.
S300: obtaining a first face image and a second face image according to the first extraction region and the second extraction region.
In practice, an existing algorithm such as the GrabCut algorithm can be used to obtain the face images from the extraction regions, which is not described in detail here.
After the face images are obtained, step S400 is executed.
S400: matching the feature points obtained in the first face image against the feature points obtained in the second face image, to obtain the face point cloud.
In practice, as shown in Fig. 3, step S400 can be executed in three steps. Step S410: obtaining the feature points in the first face image and the feature points in the second face image according to the first face image and the second face image, i.e., selecting feature points from the first face image and the second face image respectively. Step S420: matching feature points obtained from the first face image against feature points obtained from the second face image that have the same pixel value. Step S430: obtaining the face point cloud according to the point pairs obtained after matching.
This step first obtains the feature points from the first face image and the second face image, i.e., executes step S410; an existing method can be used to extract the feature points of the face from the images.
A preferred technical solution is given here: performing Gaussian blur processing on the first face image to obtain three Gaussian images of different blur levels corresponding to the first face image; among the first face image and the three Gaussian images of different blur levels, performing image subtraction between each pair of adjacent images in order of sharpness from high to low, to obtain three difference images; determining the feature points in the first face image according to comparisons between the pixels of the middle difference image and the pixels at corresponding positions in the two adjacent difference images; and performing Gaussian blur processing on the second face image to obtain three Gaussian images of different blur levels corresponding to the second face image, performing image subtraction between each pair of adjacent images in order of sharpness from high to low to obtain three difference images, and determining the feature points in the second face image according to comparisons between the pixels of the middle difference image and the pixels at corresponding positions in the two adjacent difference images.
Taking the first face image as an example, in practice the above scheme proceeds as follows. Gaussian blur processing is performed on the obtained first face image to create three Gaussian blur images of different blur levels corresponding to the first face image. Denote the first face image as m1 and the three Gaussian blur images as m2, m3 and m4. That is, after the first face image is processed with Gaussian blur, four grayscale images m1, m2, m3 and m4 are obtained, where m4 is more blurred than m3, m3 is more blurred than m2, and m2 is more blurred than m1. In other words, in order of sharpness, m1 > m2 > m3 > m4.
After the four grayscale images m1, m2, m3 and m4 are obtained, the pixel values at the same positions in m1 and m2 are subtracted to obtain image j1, the pixel values at the same positions in m2 and m3 are subtracted to obtain image j2, and the pixel values at the same positions in m3 and m4 are subtracted to obtain image j3. Here, image j2 is the middle difference image mentioned above, and j1 and j3 are the two difference images adjacent to j2.
It should be noted that during the image subtraction, if the difference of the pixel values is less than 0, the pixel value at that position in the resulting image is set to 0. For example, if subtracting the pixel value at a position in m2 from the pixel value at the corresponding position in m1 gives a difference of -1, then in the difference image j1 the pixel value at that position is set to 0.
After obtaining images j1, j2 and j3, each pixel in j2 is compared with its 8 neighboring pixels in j2, with the pixel at the corresponding position in j1 and the pixels at the 8 corresponding neighboring positions, and with the pixel at the corresponding position in j3 and the pixels at the 8 corresponding neighboring positions. If the pixel value of the pixel in j2 is the maximum or minimum among them, the point in the first face image corresponding to that pixel is determined to be a feature point of the first face image.
In a specific implementation, as shown in Fig. 4, the pixel at position X in image j2 is compared with its 8 neighboring pixels in j2, with the pixel at position X in image j1 and the pixels corresponding to the 8 neighbors, and with the pixel at position X in image j3 and the pixels corresponding to the 8 neighbors, 26 pixels in total. If the comparison shows that the pixel value at position X is the maximum or minimum, the pixel at position X is determined to be a feature point of the first face image. By comparing each pixel in j2 in turn with the pixels at the corresponding positions in j1 and j3, all feature points of the first face image can be obtained. The edge pixels of the image cannot complete this comparison and are therefore not extracted as feature points.
With the same method, the feature points of the second face image can be obtained. Next, step S420 is executed, matching the feature points of the first face image and the second face image, i.e., matching feature points selected from the first face image against feature points selected from the second face image that have the same pixel value.
Fig. 5 illustrates the process of matching the first face image against the second face image provided by the embodiments of the present application. Suppose the face image on the left of Fig. 5 is image A and the face image on the right is image B. It should be noted that when matching the feature points with identical pixel values in images A and B, some feature points in image A may have no feature point of the same pixel value in image B. The embodiments of the present application connect the feature points with identical pixel values in images A and B with lines; a feature point without a connecting line means that no pixel with the same pixel value can be found in the other face image. Once the pixels with identical pixel values in images A and B are connected, the matching of the two face images is complete; Fig. 6 shows the face images at the two angles after matching. Step S430 can then combine the matched point pairs with the coordinates of each point obtained by the binocular or multi-view vision system in its coordinate system, to obtain the coordinates of the spatially registered points and complete the acquisition of the face point cloud.
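The equal-pixel-value matching of step S420 can be sketched as follows, with feature points as (x, y, value) triples; points with no equal-valued partner in the other image remain unconnected, as in Fig. 5. The first-come pairing used when several points share a value is an assumption, since the text does not specify a tie-breaking rule:

```python
from collections import defaultdict

def match_by_value(pts_a, pts_b):
    """Pair feature points of image A with feature points of image B
    that carry the same pixel value; unmatched points are dropped.
    Each point is an (x, y, value) triple."""
    by_value = defaultdict(list)
    for p in pts_b:
        by_value[p[2]].append(p)
    pairs = []
    for p in pts_a:
        candidates = by_value[p[2]]
        if candidates:
            pairs.append((p, candidates.pop(0)))  # first-come pairing
    return pairs
```

Because equal intensity at two viewpoints does not guarantee the same physical point, this step is expected to produce some mismatches, which is exactly why the outlier-removal step described next is needed.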
Preferably, after the matching of the two face images at different angles is completed, it may still happen that feature points with identical pixel values in the images at different angles do not lie at the same position on the actual face; such feature points are mismatched. After the image matching, the mismatched feature points therefore need to be removed, so that the face point cloud is obtained from the feature points remaining after the mismatches are removed. In a specific implementation, the point pairs corresponding to the same physical point on the actual face can be searched among the matched feature point pairs, the qualifying pairs are taken as the matched point pairs, and the face point cloud is obtained from the point pairs found.
It should be noted that the RANSAC algorithm can be used to remove the mismatched feature points.
After the mismatches are removed, the remaining feature points in the face images at different angles are in one-to-one correspondence, and each corresponding pair of feature points reflects the same physical point on the actual face seen from different angles. Since the binocular vision system obtains the coordinates of each pixel in the two grayscale images while capturing them, in practice the coordinates of the face point cloud used for spatial registration can be obtained from the feature points remaining after mismatch removal by the principle of triangulation, thereby obtaining the face point cloud.
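For a rectified binocular rig, the triangulation step reduces to the classic relation Z = f * B / d, where d is the horizontal disparity between a matched pair, f the focal length in pixels and B the baseline. The text only states that the triangulation principle is used; the rectified-stereo form below, with the principal point taken as the image origin, is an illustrative assumption:

```python
def triangulate(x_left, x_right, y, focal_px, baseline_mm):
    """Recover the 3-D coordinates of one matched point pair from a
    rectified binocular rig: depth Z = f * B / d with disparity
    d = x_left - x_right, then back-project x and y through the
    pinhole model. Image coordinates are assumed centred on the
    principal point."""
    d = x_left - x_right
    if d <= 0:
        raise ValueError("non-positive disparity: bad match or point at infinity")
    z = focal_px * baseline_mm / d        # depth, same unit as baseline
    x = x_left * z / focal_px
    y3 = y * z / focal_px
    return (x, y3, z)
```

Applying this to every surviving matched pair yields the set of 3-D points that constitutes the face point cloud handed to the spatial registration stage.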
Based on the same inventive concept, as shown in Fig. 7, an embodiment of the present application further provides an apparatus for obtaining a face point cloud in robot spatial registration, including:
a face region determining module 701, configured to determine, according to two grayscale images captured at different angles, a first face region and a second face region corresponding to the two grayscale images respectively;
an extraction region determining module 702, configured to determine, according to the first face region and the second face region obtained by the face region determining module 701, a first extraction region corresponding to the first face region and a second extraction region corresponding to the second face region;
a face image extraction module 703, configured to obtain a first face image and a second face image according to the first extraction region and the second extraction region obtained by the extraction region determining module 702;
a face point cloud obtaining module 704, configured to match the feature points in the first face image and the second face image determined by the face image extraction module 703, so as to obtain the face point cloud.
In the present embodiment, facial area determining module 701 extracts area determination module 702, facial image extraction module 703, face point cloud, which obtains module 704, can execute step corresponding in above method embodiment.
In the several embodiments provided in this application, it should be understood that the disclosed system, device and method may be implemented in other ways. For example, the device embodiments described above are merely illustrative: the division into units is only a division by logical function, and other divisions are possible in actual implementation; multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. Furthermore, the mutual couplings, direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, devices or units, and may be electrical, mechanical or of other forms.
Units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist physically on its own, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or in the form of hardware plus a software functional unit.
The integrated unit implemented in the form of a software functional unit may be stored in a computer-readable storage medium. The software functional unit is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute part of the steps of the methods of the embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc.
Finally, it should be noted that the above embodiments are merely intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that they may still modify the technical solutions described in the foregoing embodiments, or make equivalent replacements of some of the technical features; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A method for obtaining a face point cloud in robot spatial registration, characterized by comprising the following steps:
determining, from two gray-level images captured from different angles, a first facial region and a second facial region corresponding to the two gray-level images respectively;
determining, from the first facial region and the second facial region, a first extraction region corresponding to the first facial region and a second extraction region corresponding to the second facial region;
obtaining a first facial image and a second facial image from the first extraction region and the second extraction region;
matching the feature points obtained from the first facial image with the feature points obtained from the second facial image, to obtain the face point cloud.
2. The method for obtaining a face point cloud in robot spatial registration according to claim 1, characterized in that determining, from the first facial region and the second facial region, a first extraction region corresponding to the first facial region and a second extraction region corresponding to the second facial region comprises the following step:
enlarging the first facial region and the second facial region by a set multiple about their respective centers, to obtain the first extraction region corresponding to the first facial region and the second extraction region corresponding to the second facial region.
3. The method for obtaining a face point cloud in robot spatial registration according to claim 2, characterized in that the set multiple ranges from 2.25 to 9.
4. The method for obtaining a face point cloud in robot spatial registration according to claim 2, characterized in that the set multiple is 6.25.
5. The method for obtaining a face point cloud in robot spatial registration according to claim 1, characterized in that the step of matching the feature points obtained from the first facial image with the feature points obtained from the second facial image to obtain the face point cloud specifically comprises:
obtaining, from the first facial image and the second facial image, the feature points in the first facial image and the feature points in the second facial image;
matching those feature points obtained from the first facial image and feature points obtained from the second facial image that have identical pixel values;
obtaining the face point cloud from the matching point pairs obtained by the matching.
6. The method for obtaining a face point cloud in robot spatial registration according to claim 5, characterized in that the step of obtaining, from the first facial image and the second facial image, the feature points in the first facial image and the feature points in the second facial image comprises:
performing Gaussian blur on the first facial image to obtain three Gaussian images of different blur levels corresponding to the first facial image; ordering the first facial image and the three Gaussian images of different blur levels from high to low sharpness and successively performing image subtraction between each two adjacent images, to obtain three difference images; determining the feature points in the first facial image according to the result of comparing the pixels of the middle one of the three difference images with the pixels at corresponding positions in its two adjacent images; and performing Gaussian blur on the second facial image to obtain three Gaussian images of different blur levels corresponding to the second facial image; ordering the second facial image and the three Gaussian images of different blur levels from high to low sharpness and successively performing image subtraction between each two adjacent images, to obtain three difference images; and determining the feature points in the second facial image according to the result of comparing the pixels of the middle one of the three difference images with the pixels at corresponding positions in its two adjacent images.
7. The method for obtaining a face point cloud in robot spatial registration according to claim 6, characterized in that the specific step of determining the feature points in the first facial image according to the result of comparing the pixels of the middle one of the three difference images with the pixels at corresponding positions in its two adjacent images comprises:
comparing each pixel with its 8 neighboring points in the middle image, and with the points at the corresponding position and at the 8 neighboring positions in each of the two adjacent images; and if the pixel value of the pixel is the maximum or the minimum among them, determining that the point in the first facial image corresponding to the pixel is a feature point of the first facial image.
8. The method for obtaining a face point cloud in robot spatial registration according to claim 6, characterized in that the specific step of determining the feature points in the second facial image according to the result of comparing the pixels of the middle one of the three difference images with the pixels at corresponding positions in its two adjacent images comprises:
comparing each pixel with its 8 neighboring points in the middle image, and with the points at the corresponding position and at the 8 neighboring positions in each of the two adjacent images; and if the pixel value of the pixel is the maximum or the minimum among them, determining that the point in the second facial image corresponding to the pixel is a feature point of the second facial image.
9. The method for obtaining a face point cloud in robot spatial registration according to claim 5, characterized in that the step of obtaining the face point cloud from the matching point pairs obtained by the matching comprises: searching, among the feature-point pairs obtained by the matching, for the matching point pairs corresponding to the same point on the actual face;
obtaining the face point cloud from the matching point pairs found.
10. A device for obtaining a face point cloud in robot spatial registration, characterized by comprising:
a facial region determining module, configured to determine, from two gray-level images captured from different angles, a first facial region and a second facial region corresponding to the two gray-level images respectively;
an extraction region determining module, configured to determine, from the first facial region and the second facial region obtained by the facial region determining module, a first extraction region corresponding to the first facial region and a second extraction region corresponding to the second facial region;
a facial image extraction module, configured to obtain a first facial image and a second facial image from the first extraction region and the second extraction region;
a face point cloud obtaining module, configured to match the feature points in the first facial image and the second facial image determined by the facial image extraction module, so as to obtain the face point cloud.
CN201710703134.3A 2017-08-16 2017-08-16 Method and device for acquiring human face point cloud in robot space registration Active CN109409169B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710703134.3A CN109409169B (en) 2017-08-16 2017-08-16 Method and device for acquiring human face point cloud in robot space registration

Publications (2)

Publication Number Publication Date
CN109409169A true CN109409169A (en) 2019-03-01
CN109409169B CN109409169B (en) 2021-02-02

Family

ID=65454624

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710703134.3A Active CN109409169B (en) 2017-08-16 2017-08-16 Method and device for acquiring human face point cloud in robot space registration

Country Status (1)

Country Link
CN (1) CN109409169B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020154820A1 (en) * 2001-03-06 2002-10-24 Toshimitsu Kaneko Template matching method and image processing device
CN101866497A (en) * 2010-06-18 2010-10-20 北京交通大学 Intelligent 3D face reconstruction method and system based on binocular stereo vision
CN104143080A (en) * 2014-05-21 2014-11-12 深圳市唯特视科技有限公司 Three-dimensional face recognition device and method based on three-dimensional point cloud

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
隋巧燕 (Sui Qiaoyan) et al.: "双目下点云的三维人脸重建" (3-D face reconstruction from binocular point clouds) *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CB03 Change of inventor or designer information

Inventor after: Zhang Sai; Zhao Guoguang; Liu Da
Inventor before: Zhang Sai; Liu Da

CP03 Change of name, title or address

Address after: 100191 Room 501, floor 5, building 9, No. 35 Huayuan North Road, Haidian District, Beijing
Patentee after: Beijing Baihui Weikang Technology Co.,Ltd.
Address before: Room 502, Building No. 3, Garden East Road, Haidian District, Beijing, 100191
Patentee before: Beijing Baihui Wei Kang Technology Co.,Ltd.