
CN108537126B - Face image processing method - Google Patents


Info

Publication number
CN108537126B
CN108537126B
Authority
CN
China
Prior art keywords
image
face
customer
image processing
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810205659.9A
Other languages
Chinese (zh)
Other versions
CN108537126A (en)
Inventor
陈东岳
陈秋生
贾同
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northeastern University China
Original Assignee
Northeastern University China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northeastern University China
Priority to CN201810205659.9A
Publication of CN108537126A
Application granted
Publication of CN108537126B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES
    • G06Q30/00 Commerce
    • G06Q30/06 Buying, selling or leasing transactions
    • G06Q30/0601 Electronic shopping [e-shopping]
    • G06Q30/0641 Electronic shopping utilising user interfaces specially adapted for shopping
    • G06Q30/0643 Electronic shopping utilising user interfaces graphically representing goods, e.g. 3D product representation
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/40 Filling a planar surface by adding surface attributes, e.g. colour or texture
    • G06T11/60 Editing figures and text; Combining figures or text

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Finance (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Accounting & Taxation (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Development Economics (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Image Processing (AREA)

Abstract



The invention discloses a face image processing system and method. The system comprises: a model image storage module for storing model images; a face image acquisition module for collecting face images; an image transmission module for transmitting the face images to the image processing module; an image processing module for synthesizing face images; and an image display module for displaying the collected face image, the recommended model images, and the final composite image. The method mainly comprises the steps of detecting the face areas in the input image and the reference image, extracting face feature points, and triangulating the two face images according to the same rule. The system and method separate the image acquisition module from the image processing module, which makes it easy to choose the best acquisition location, and let the customer see the acquired face image in real time and select a suitable facial expression and the best photographing position.


Description

Face image processing method
Technical Field
The present invention relates to image processing systems and methods, and particularly to a system and method for processing a face image.
Background
Image processing techniques are used in many fields, including medicine, the military, and manufacturing. Applying image processing to face image recognition lets people acquire relevant information more conveniently in many fields and thus make more accurate judgments.
In many situations, such as hair salons, a customer may want an intuitive preview of whether a hairstyle will suit them before their hair is cut, so that a better-fitting choice can be made. In addition, face image synthesis methods have broad application prospects in privacy protection, virtual fitting, entertainment, and leisure.
The main disadvantages of the prior art are:
First, if the brightness, contrast and tone of the images to be synthesized are inconsistent, the composite image looks unrealistic, and no useful information can be obtained from it.
Second, when applied to the barbershop scenario, hair can occlude the facial features, so the customer's facial features cannot be seamlessly fitted to the model's hairstyle; the resulting image looks unnatural and the customer experience is poor.
Third, the prior art cannot change the face contour of the composite image. Put simply, if the provided model has a round face but the customer has a square face, existing synthesis techniques can only produce a composite image that still has a round face.
Some published papers and deployed applications can transfer facial features onto the reference image, so that the composite image shows the customer's facial features, but the three problems above remain.
Lin Y, Wang S, Lin Q et al., "Face Swapping under Large Pose Variations: a 3D Model Based Approach" (IEEE International Conference on Multimedia and Expo, 2012), adopt a 3D model based method (3D morphable models, 3DMM). Because the technique is immature, only part of the facial features can be generated and the final synthesis effect is not realistic; moreover, the 3D-model-based method takes a large amount of time and cannot satisfy most application scenarios.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a face image processing system and method that make the synthesized face image more realistic, giving customers a more intuitive and accurate simulation of the result.
The technical scheme of the invention is realized as follows:
a face image processing system comprising: the model image storage module is used for storing a model image; the face image acquisition module is used for acquiring a face image; the image transmission module is used for transmitting the face image to the image processing module; the image processing module is used for carrying out face image synthesis; and the image display module is used for displaying the acquired face image, the recommended model image and the final composite image.
Preferably, the system further comprises a data analysis unit for analyzing the customer's data; the analyzed data mainly comprise the customer's age, sex, and the hairstyles that suit them, preparing the data needed for the customer to choose a suitable hairstyle.
Preferably, the system further comprises a visual user interface including buttons for taking pictures and selecting images, allowing the user to take pictures and select pictures of the customer at the appropriate time.
Preferably, the visual user interface further comprises an option button for recommending a hair style according to data obtained by analyzing the appearance of the customer.
Preferably, the visual user interface further comprises an option button that lets the customer browse the hairstyle models stored in the database. When a customer wishes to try other hairstyles after seeing the recommendation, this option allows them to manually pick a favorite hairstyle model from the database and use the composition button to generate an image of themselves with the same hairstyle as the model.
Preferably, the image transmission module comprises a network unit for wireless data transmission, and the network unit allows a user to remotely control the camera in the same local area network in a wireless manner.
Preferably, the image processing module includes an image preprocessing unit for preprocessing the collected customer image, where the preprocessing includes image graying, histogram equalization, and filtering.
Preferably, the image processing module further comprises a feature extraction unit, configured to detect the customer's face region and extract face feature points. The face feature points are a series of predefined points that reflect facial characteristics, distributed mainly along the contours of the facial features (eyebrows, eyes, nose, mouth) and the face outline.
Preferably, the image processing module further comprises an image synthesis unit, used to synthesize the collected customer image with a hairstyle model image, so that the facial features and face shape of the composite image match the customer while the hairstyle matches the model's.
A face image processing method, applicable to any system in the above technical scheme, comprises the following steps:
S1, detecting the face areas in the input image and the reference image respectively, and extracting face feature points;
S2, triangulating the two face images according to the same rule, based on the face feature points extracted in S1;
S3, calculating the affine transformation between corresponding triangles obtained by triangulation, from the input image to the reference image, and filling the triangles in the reference image with color to obtain an intermediate image;
S4, extracting the face region of interest (ROI) from the intermediate image of S3;
S5, making a mask image of the face ROI, used to address the unnatural appearance of the composite image caused by color differences between the input image and the reference image;
and S6, completing the color correction of the composite image through the face mask image, so that its colors transition smoothly and look more realistic.
The invention has the beneficial effects that:
1. The system framework separates the image acquisition module from the image processing module, which makes it easy to choose the best acquisition position, and lets the customer see the acquired face image in real time and select a suitable facial expression and the best photographing position.
2. The remote wireless transmission mode greatly increases spatial freedom. The image acquisition device is small and can be used as a handheld device, so the customer can hold it; as long as the acquisition device and the image processing module are on the same local area network, the system can transmit pictures between them in real time, letting the photographer and the customer view the picture simultaneously and adjust the shooting angle in real time.
3. The visual interface with the function options can enable a customer to manually select a favorite hairstyle model in the database, and generate an image of the same hairstyle as the model by using the synthesis button, so that the operability of the user is greatly improved, and more diversified choices are provided for the customer.
Drawings
FIG. 1 is a schematic diagram of the system of the present invention;
FIG. 2 is a diagram of a user terminal application scenario for the system of the present invention;
FIG. 3 is a schematic diagram of face detection and face feature point extraction according to an embodiment of the present invention;
FIG. 4 is a first schematic diagram of image triangulation in an embodiment of the invention;
FIG. 5 is a second schematic diagram of image triangulation in an embodiment of the invention;
FIG. 6 is a face mask image produced in an embodiment of the present invention;
FIG. 7 is a flow chart of an image synthesis algorithm according to the present invention;
FIG. 8 is a flow chart of an embodiment of an image synthesis algorithm according to the present invention.
Detailed Description
The invention is explained in more detail below with reference to the figures and examples:
as shown in fig. 1, a face image processing system includes:
the model image storage module is used for storing a model image;
the face image acquisition module is used for acquiring a face image;
the image transmission module is used for transmitting the face image to the image processing module;
the image processing module is used for carrying out face image synthesis;
and the image display module is used for displaying the acquired face image, the recommended model image and the final composite image.
Further, the system also comprises a data analysis unit for analyzing the data information of the customers, wherein the mainly analyzed data comprise the age, the sex and the hair style suitable for the customers, and the data preparation is made for the customers to select the hair style suitable for the customers.
Further, the system also comprises a visual user interface. It includes buttons for taking pictures and selecting images, letting the user photograph the customer and pick pictures at the appropriate time; an option button for recommending a hairstyle based on the data obtained by analyzing the customer's appearance; and an option button that lets the customer browse the hairstyle models stored in the database. When the customer wants to try other hairstyles after seeing the recommendation, this option allows them to manually pick a favorite hairstyle model from the database and generate an image with the same hairstyle as the model using the composition button.
Further, the image transmission module comprises a network unit for wireless data transmission, and the network unit allows a user to remotely control the camera in the same local area network in a wireless mode.
Further, the image processing module comprises an image preprocessing unit for preprocessing the collected customer image, wherein the preprocessing comprises image graying, histogram equalization and filtering operation.
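The preprocessing operations named above (graying and histogram equalization) can be sketched in plain NumPy; this is an illustrative implementation under the usual luma-weight and CDF-remapping definitions, not the patent's code:

```python
import numpy as np

def to_gray(rgb):
    """Convert an H x W x 3 RGB image (uint8) to grayscale via luma weights."""
    return (rgb @ np.array([0.299, 0.587, 0.114])).astype(np.uint8)

def equalize_hist(gray):
    """Histogram equalization: remap intensities through the normalized CDF."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]               # first non-zero CDF value
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[gray]
```

In practice the equivalent OpenCV calls (`cv2.cvtColor`, `cv2.equalizeHist`) would be used, followed by a smoothing filter.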
Furthermore, the image processing module also comprises a feature extraction unit for detecting the customer's face area and extracting the face feature points; these are a series of predefined points that reflect facial characteristics, distributed mainly along the contours of the facial features and the face outline.
Further, the image processing module also includes an image synthesis unit, which synthesizes the collected customer image with the hairstyle model image; the final effect is that the facial features and face shape of the composite image match the customer while the hairstyle matches the model's. This is a core task of the system.
In this embodiment, as shown in figs. 1 and 2, the face image acquisition module captures an image when a customer enters the store and acquires a face image accurate enough for hairstyle selection. The image transmission module carries images wirelessly between the acquisition module and the image processing module. The image processing module performs several core system functions, including customer appearance analysis, model hairstyle recommendation, and image synthesis. The image display module displays the captured user face image, the synthesis progress, the composite image, the user interface, and so on. Specifically, the acquisition module and the processing module communicate by remote wireless transmission; the processing module occupies a fixed amount of space, remains stable in the absence of external force, and is suited to a fixed location, while the processing module and the display module are connected by wire and are therefore placed in the same spatial position. Because the processing module is too large to serve as mobile equipment, separating the acquisition module from it makes it easy to choose the best acquisition position, and lets the customer see the acquired face image in real time and pick a suitable facial expression and the best photographing position. Since the customer cannot simultaneously take the photo and hold the pose, the actual shutter action is assigned to the image processing module and performed by another person, and the live view is displayed on the processing side in real time.
The remote wireless transmission mode greatly increases spatial freedom: the acquisition device is small enough to be handheld, the customer can hold it, and as long as the acquisition device and the processing module are on the same local area network, pictures are transmitted between them in real time, so the photographer and the customer can view the picture simultaneously and adjust the shooting angle on the fly. When a customer image is captured by the acquisition module and transmitted wirelessly to the processing module, the processing module first analyzes the customer's personal information to obtain accurate data from their basic information and recommend the most suitable hairstyles; the display module then shows the recommended hairstyles, while the image synthesis unit composites the hairstyle model images with the transmitted customer image and shows the results on the display module. On the one hand this saves the customer time, because this part runs fully automatically and requires no manual input; on the other hand, the customer can intuitively judge how well each hairstyle suits them, improving the consumption experience.
As shown in fig. 2, the customer can use a handheld device (a smartphone is recommended in the present invention) to capture their image with the front camera; the shot is displayed on the handheld device's screen in real time, and is simultaneously received over wireless transmission on the display screen of the image display module. The shutter is controlled by the operator: when a suitable picture appears, pressing the Esc key on the keyboard takes the shot. The captured customer images are received wirelessly by the image processing module, which first analyzes the customer's appearance features and matches them against all model images in the storage module. In this embodiment, the 4 best-fitting hairstyles are selected and the composite results are shown on the display screen; the customer can also manually select any hairstyle model in the library for image synthesis.
The image processing module in this embodiment includes a feature extraction unit for detecting the customer's face area and extracting the face feature points. As shown in figs. 3, 4 and 5, the face feature points are a series of predefined points that reflect facial characteristics, distributed mainly along the contours of the facial features and the face outline. The feature extraction unit provides the exact position of the customer's face region and the distribution of the feature points; both are important inputs for face data analysis and face image synthesis.
As shown in fig. 7, a face image processing method, applicable to any system described in the above embodiments, comprises the following steps: S1, detecting the face areas in the input image and the reference image respectively, and extracting face feature points; S2, triangulating the two face images according to the same rule, based on the feature points extracted in S1; S3, calculating the affine transformation between corresponding triangles from the input image to the reference image, and filling the triangles in the reference image with color to obtain an intermediate image; S4, extracting the face region of interest (ROI) from the intermediate image of S3; S5, making a mask image of the face ROI, used to address the unnatural appearance of the composite image caused by color differences between the input image and the reference image; and S6, completing the color correction of the composite image through the face mask image, so that its colors transition smoothly and look more realistic.
More specific method embodiments, as shown in FIG. 8:
step 101 is the acquisition of a face image of a customer, and the acquired image is called an input image. The input image provides the five sense organs and facial contour information for the final composite image.
Step 102 is a preprocessing operation of the input image, mainly including various filtering processes, aiming at improving the image quality.
Step 103 is face detection. The face detection algorithm uses the Haar-like feature cascade classifier provided by the existing computer vision library OpenCV; its return value is a rectangular area containing the face region, expressed mathematically as the coordinates of its top-left corner plus the rectangle's width and height.
Step 104 is face feature point detection, which locates the key points of the face to determine the positions of the facial features. Using the face alignment method proposed by Kazemi, Vahid and Josephine Sullivan in the paper "One Millisecond Face Alignment with an Ensemble of Regression Trees" (IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014), 68 face feature points are obtained, outlining the eyebrows, eyes, nose, mouth, and face contour. Fig. 3 shows the face detection and feature point detection results.
Step 105 aligns the face of the input image with that of the reference image, using scaling and rotation, so that the faces in the two images have consistent size and angle. The vector between the 40th and 43rd feature points is selected as the base rotation vector; the angle between this vector in the two images is computed, and the input image is rotated so the face angles match. The vector between the 1st and 17th feature points is then selected as the base scaling vector to scale the input image. After alignment, the faces of the input and reference images have consistent size and angle; note that rotating the input image changes its face feature points. This is important for the subsequent work.
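The alignment parameters described above (rotation from the 40th-to-43rd landmark vector, scale from the 1st-to-17th) can be sketched as follows; the helper function and its indexing convention are assumptions for illustration:

```python
import numpy as np

def alignment_params(pts_in, pts_ref):
    """Rotation angle (radians) and scale factor that align the input face
    to the reference face. pts_* are (68, 2) landmark arrays; indices follow
    the 1-based numbering used in the description."""
    def vec(pts, i, j):
        # Vector from 1-based landmark i to 1-based landmark j.
        return pts[j - 1] - pts[i - 1]

    # Rotation: angle between the landmark-40 -> landmark-43 vectors.
    v_in, v_ref = vec(pts_in, 40, 43), vec(pts_ref, 40, 43)
    angle = np.arctan2(v_ref[1], v_ref[0]) - np.arctan2(v_in[1], v_in[0])

    # Scale: ratio of the landmark-1 -> landmark-17 vector lengths (face width).
    s_in, s_ref = vec(pts_in, 1, 17), vec(pts_ref, 1, 17)
    scale = np.linalg.norm(s_ref) / np.linalg.norm(s_in)
    return angle, scale
```

The input image would then be rotated by `angle` and scaled by `scale` about the face center, after which its landmarks must be recomputed, as the description notes.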
Step 106 is triangulation of the input image.
The triangulation needs to be respectively carried out on an input image and a reference image, a plurality of feature points are added firstly, for the input image, 3 feature points are added, and the input image is expressed by the following formula mathematically:
Figure GDA0001665354890000071
Figure GDA0001665354890000072
Figure GDA0001665354890000073
for the reference image, 7 feature points are added:
Figure GDA0001665354890000074
Figure GDA0001665354890000075
Figure GDA0001665354890000076
there are also 4 feature points that are the coordinates of the four corners of the reference image.
Wherein
Figure GDA0001665354890000077
Representing an input imageThe coordinates of the 69 th feature point of (2), which includes two values, represent desired x, y coordinate values of the feature point.
Figure GDA0001665354890000078
The coordinates of the ith feature point of the reference image are expressed, and α is a coefficient, which can be selected by itself, and is 1.2. The three characteristic points are respectively positioned in the middle of the eyes and above the two eyebrows of the human face visually, and in addition, in order to keep the shape of the face after face changing consistent with the input image, the human face characteristic points of the reference image are changed and are expressed mathematically as follows:
Figure 1
,for i ∈(2,3,…,17) (7)
The reference image is triangulated first, as shown in fig. 5; triangulation is performed twice, on the reference image before and after the feature point change. After triangulation, a series of mutually corresponding triangular patches is obtained on the two images.
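One way to obtain mutually corresponding triangle patches is to triangulate one point set and reuse the same vertex indices on the other. A sketch under the assumption that SciPy is available (OpenCV's `cv2.Subdiv2D` would be an alternative):

```python
import numpy as np
from scipy.spatial import Delaunay

def corresponding_triangles(pts_a, pts_b):
    """Delaunay-triangulate pts_a and index pts_b with the same simplices,
    so each triangle in A has a corresponding triangle in B (pts_a and
    pts_b are (n, 2) arrays of matched points)."""
    tri = Delaunay(pts_a)
    return pts_a[tri.simplices], pts_b[tri.simplices], tri.simplices
```

Reusing the simplices, rather than triangulating each set independently, is what guarantees the one-to-one correspondence the description relies on.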
The affine transformation is calculated such that each triangle vertex of the image before the feature point change is mapped onto a corresponding triangle vertex of the image after the feature point change.
The affine transformation includes:
a) rotation (linear transformation)
b) translation (vector addition)
c) scaling (linear transformation)
Three points in the image determine an affine transformation, which is usually represented by a 2 × 3 matrix M built from a 2 × 2 matrix A and a 2 × 1 vector B:

A = [ a00  a01 ; a10  a11 ],   B = [ b00 ; b10 ],   M = [ A  B ]   (8)

The matrices A and B transform a two-dimensional vector X = [x, y]^T, so the transformation can also be expressed in the form

T = A · [x, y]^T + B   (9)

or

T = M · [x, y, 1]^T   (10)
T is the vector obtained by applying the affine transformation M to the vector X. In step 112, the vectors X and T are known and the transformation matrix M is solved for.
According to T = M · [x, y, 1]^T, selecting 3 corresponding points from each of the two images yields six equations, from which all values in the matrix M can be solved.
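The six-equation solve can be sketched directly in NumPy (illustrative helpers, not the patent's code; OpenCV's `cv2.getAffineTransform` does the same):

```python
import numpy as np

def solve_affine(src, dst):
    """Solve the 2x3 affine matrix M with dst^T = M . [x, y, 1]^T for three
    corresponding points (src and dst are 3x2 arrays of points)."""
    X = np.hstack([src, np.ones((3, 1))])   # each row is [x, y, 1]
    # dst = X . M^T, so M^T = X^{-1} . dst  (six equations, six unknowns).
    return np.linalg.solve(X, dst).T

def apply_affine(M, pts):
    """Apply the 2x3 matrix M to an (n, 2) array of points."""
    return np.hstack([pts, np.ones((len(pts), 1))]) @ M.T
```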
Then, using the computed affine transformations, all pixels inside each triangle of the reference image with unchanged feature points are warped into the reference image with changed feature points. This step yields an image whose facial features are unchanged but whose face shape matches the input image; it is called the second-level reference image. The feature points of the second-level reference image are the changed feature points of the reference image.
The second step is to triangulate the input image; the triangulation rule is shown in fig. 4. Since only the facial features of the input image are needed, the whole input image need not be triangulated. Figs. 4 and 5 show that the face triangulations of the input image and the second-level reference image are consistent, so their triangles also correspond one to one. The affine transformation computation above is repeated, this time mapping the three vertices of each triangle in the input image into the second-level reference image. After this step, an image is obtained whose face shape and facial features match the input image while its other parts match the reference image; it is called the third-level reference image, and its feature points coincide with those of the second-level reference image.
Steps 107-111 are for reference images, and the specific implementation is consistent with the above steps 101-105, and step 112 has been described in detail above.
After step 112, an intermediate image corresponding to the third-level reference image can be obtained.
To implement all functions automatically, step 113 extracts the face ROI based on the detected feature points. In this embodiment, a convex hull consisting of 7 feature points is selected as the ROI; the seven points are given by equations (11)-(17), which appear only as images in the source. In them, one symbol denotes the first feature-point coordinate of the ROI, and β is a manually selectable parameter, set here to 0.05. The ROI is extracted from the third-level reference image and contains most of the features of the face. Color correction is very important for the algorithm: the realism of the face-swap result is closely tied to how well the ROI's colors fit the reference image. The ROI is then fitted into the second-level reference image; to make the color transition between the ROI and the second-level reference image seamless, a function provided by the OpenCV library is used to blend the face colors seamlessly, so the image looks more vivid and natural.
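The OpenCV function alluded to for seamless color blending is `cv2.seamlessClone`. As a simplified stand-in that only illustrates the idea of smoothing the seam, here is a feathered alpha blend (not Poisson blending) assuming SciPy is available:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def feathered_blend(roi, background, mask, feather=1):
    """Alpha-blend roi into background. The binary mask is box-blurred so
    the colors transition smoothly across the seam (cf. step S6); a crude
    stand-in for cv2.seamlessClone's Poisson blending."""
    alpha = uniform_filter(mask.astype(float), size=2 * feather + 1)
    alpha = alpha[..., None]                      # broadcast over channels
    blended = alpha * roi + (1 - alpha) * background
    return np.clip(np.round(blended), 0, 255).astype(np.uint8)
```

A larger `feather` widens the transition band at the cost of bleeding more background color into the ROI border.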
In addition to computing the face ROI from the feature points, the invention also provides a method for producing a face mask image.
As shown in fig. 6, the mask image is another way of extracting the face ROI. It is a binary image containing only black and white, in which the white portion corresponds to the face ROI and the black portion marks the information to be discarded. For each hairstyle model stored in the library, only one mask image needs to be made, and it can be reused in subsequent runs of the program.
The method of making the mask image is simple and easy to understand: points on the boundary of the white portion of the mask image are selected with the left mouse button, so that the selected points enclose the model's facial features without destroying the hairstyle information. The number and position of the selected points can be freely controlled.
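The clicked boundary points ultimately have to be rasterized into the binary mask. A self-contained sketch of that final step using an even-odd-rule polygon fill (a numpy stand-in for `cv2.fillPoly`; the function name is hypothetical and the interactive mouse handling is omitted):

```python
import numpy as np

def polygon_mask(h, w, pts):
    """Rasterize a polygon (list of (x, y) vertices, e.g. collected from
    mouse clicks) into an h-by-w binary mask: 255 inside, 0 outside.
    Uses the even-odd (ray casting) rule, like cv2.fillPoly would."""
    pts = np.asarray(pts, dtype=float)
    ys, xs = np.mgrid[0:h, 0:w]
    inside = np.zeros((h, w), dtype=bool)
    n = len(pts)
    # Suppress the harmless divide-by-zero from horizontal edges;
    # those edges never satisfy the crossing condition anyway.
    with np.errstate(divide="ignore", invalid="ignore"):
        for i in range(n):
            x0, y0 = pts[i]
            x1, y1 = pts[(i + 1) % n]
            # Toggle "inside" for every edge the +x ray crosses
            cross = ((y0 <= ys) != (y1 <= ys)) & \
                    (xs < (x1 - x0) * (ys - y0) / (y1 - y0) + x0)
            inside ^= cross
    return np.where(inside, 255, 0).astype(np.uint8)
```

With an OpenCV-based GUI, the clicks would come from `cv2.setMouseCallback` and the fill from `cv2.fillPoly`; the rasterization result is the same binary mask.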
Step 114 is the facial-feature exchange, which fits the extracted ROI into the second-level reference image. One issue in this step must be explained: the position at which the ROI is attached. Because the geometry of every face is different, if the ROI position is chosen inaccurately during face changing, the facial features will be displaced, and manually selecting a suitable position for every face-changing operation would waste a large amount of time. Therefore, a minimal rectangle surrounding the ROI is first constructed and the coordinates of its center point, denoted C1, are obtained. Then, in the second-level reference image, the feature points matching the ROI are found, a minimal rectangle surrounding those feature points is constructed, and the position of its center point, denoted C2, is computed. When placing the ROI, C1 is made to coincide with C2, which gives a better result.
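The C1/C2 alignment above amounts to shifting the ROI by the difference between two bounding-rectangle centers. A minimal sketch (function names are hypothetical):

```python
import numpy as np

def alignment_offset(roi_pts, ref_pts):
    """Return the (dx, dy) shift that moves the center C1 of the minimal
    rectangle bounding roi_pts onto the center C2 of the minimal rectangle
    bounding the matching feature points ref_pts."""
    def rect_center(pts):
        pts = np.asarray(pts, dtype=float)
        x0, y0 = pts.min(axis=0)   # top-left of the bounding rectangle
        x1, y1 = pts.max(axis=0)   # bottom-right of the bounding rectangle
        return np.array([(x0 + x1) / 2.0, (y0 + y1) / 2.0])

    c1 = rect_center(roi_pts)
    c2 = rect_center(ref_pts)
    return c2 - c1
```

Translating every ROI pixel by this offset before pasting makes the two rectangle centers coincide, as the step describes.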
The above description is only a preferred embodiment of the present invention, but the scope of the present invention is not limited thereto. Any substitution or change that a person skilled in the art can readily conceive within the technical scope disclosed herein, following the design concept of the described system and method for simulating and matching a customer's hairstyle by processing a face image, shall be covered by the protection scope of the present invention.

Claims (6)

1. A face image processing method, characterized in that the method is applied to a face image processing system, the system comprising: a model image storage module for storing model images; a face image acquisition module for acquiring face images; an image transmission module for transmitting face images to an image processing module; an image processing module for performing face image synthesis; and an image display module for displaying the acquired face image, the recommended model image, and the final composite image; the system further comprises a visual user interface, the visual user interface comprising buttons for photographing and image selection, allowing the user to photograph the customer at a suitable moment and to select among the photographs; the visual user interface further comprises an option button for recommending hairstyles based on data obtained by analyzing the customer's appearance; the visual user interface further comprises an option button for the customer to browse the hairstyle models stored in the database, so that when the customer wishes to try other hairstyles after seeing the recommended one, this option allows the user to manually select a preferred hairstyle model in the database and use a synthesis button to generate an image of himself or herself with the same hairstyle as the model;
the face image processing method comprising the following steps:
S1, detecting the face regions in the input image and the reference image respectively, and extracting face feature points, obtaining 68 face feature points of the input image;
S2, based on the face feature points extracted in S1, triangulating the two face images respectively according to the same rule;
S3, computing the affine transformations between the corresponding triangles obtained by triangulating the input image and the reference image, and color-filling the triangles in the reference image to obtain an intermediate image;
S4, extracting a face region of interest (ROI) from the intermediate image of S3;
S5, producing a face ROI mask image, the mask image being used to handle the unnatural appearance of the composite image caused by color differences between the input image and the reference image;
S6, completing the color correction of the composite image through the face mask image, so that the colors of the composite image transition smoothly and its realism is improved;
wherein computing the affine transformations between the corresponding triangles obtained by triangulating the input image and the reference image, and color-filling the triangles in the reference image to obtain an intermediate image, comprises:
triangulating the input image and the reference image, and adding 3 feature points to the 68 face feature points of the input image, expressed mathematically by the following formulas:
[The three formulas defining the added feature points are rendered as images (FDA0002866363540000011-0000023) in the original and are not reproduced here.]
where the symbol in the formulas (rendered as an image) denotes the coordinates of the 69th feature point of the input image, comprising two values, the x and y coordinate values of the feature point; α = 1.2.
2. The face image processing method according to claim 1, characterized in that the system further comprises a data analysis unit for analyzing customer data; the data analyzed mainly comprise the customer's age, gender, and the hairstyles suited to the customer, preparing the data needed for the customer to choose a suitable hairstyle.
3. The face image processing method according to claim 1, characterized in that the image transmission module comprises a network unit for wireless data transmission, which allows the user to remotely control the camera wirelessly within the same local area network.
4. The face image processing method according to claim 1, characterized in that the image processing module comprises an image preprocessing unit for preprocessing the acquired customer images, the preprocessing comprising grayscale conversion, histogram equalization, and filtering operations.
5. The face image processing method according to claim 1, characterized in that the image processing module further comprises a feature extraction unit for detecting the customer's face region and extracting face feature points; the face feature points are a predefined series of points that reflect the characteristics of the human face, distributed mainly along the contours of the facial features.
6. The face image processing method according to claim 1, characterized in that the image processing module further comprises an image synthesis unit for synthesizing the acquired customer image with the hairstyle model image, so that the facial features and face shape of the composite image are consistent with the customer while the hairstyle is consistent with that of the model.
CN201810205659.9A 2018-03-13 2018-03-13 Face image processing method Active CN108537126B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810205659.9A CN108537126B (en) 2018-03-13 2018-03-13 Face image processing method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810205659.9A CN108537126B (en) 2018-03-13 2018-03-13 Face image processing method

Publications (2)

Publication Number Publication Date
CN108537126A CN108537126A (en) 2018-09-14
CN108537126B true CN108537126B (en) 2021-03-23

Family

ID=63484557

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810205659.9A Active CN108537126B (en) 2018-03-13 2018-03-13 Face image processing method

Country Status (1)

Country Link
CN (1) CN108537126B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11074733B2 (en) 2019-03-15 2021-07-27 Neocortext, Inc. Face-swapping apparatus and method
CN112102146B (en) * 2019-06-18 2023-11-03 北京陌陌信息技术有限公司 Face image processing method, device, equipment and computer storage medium
CN110555812A (en) * 2019-07-24 2019-12-10 广州视源电子科技股份有限公司 image adjusting method and device and computer equipment
CN110543826A (en) * 2019-08-06 2019-12-06 尚尚珍宝(北京)网络科技有限公司 Image processing method and device for virtual wearing of wearable product
CN110503599B (en) * 2019-08-16 2022-12-13 郑州阿帕斯科技有限公司 Image processing method and device
CN110610456A (en) * 2019-09-27 2019-12-24 上海依图网络科技有限公司 Imaging system and video processing method
CN112769937B (en) * 2021-01-12 2021-09-03 济源职业技术学院 Medical treatment solid waste supervisory systems
CN113807313A (en) * 2021-10-08 2021-12-17 合肥安达创展科技股份有限公司 An AI platform analysis system based on Dlib face recognition
CN116228763B (en) * 2023-05-08 2023-07-21 成都睿瞳科技有限责任公司 Image processing method and system for eyeglass printing

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7450740B2 (en) * 2005-09-28 2008-11-11 Facedouble, Inc. Image classification and information retrieval over wireless digital networks and the internet
JP4706849B2 (en) * 2006-03-23 2011-06-22 花王株式会社 Method for forming hairstyle simulation image
CN103646416A (en) * 2013-12-18 2014-03-19 中国科学院计算技术研究所 Three-dimensional cartoon face texture generation method and device
CN105045968B (en) * 2015-06-30 2019-02-12 青岛理工大学 Hair styling method and system
CN105117445A (en) * 2015-08-13 2015-12-02 北京建新宏业科技有限公司 Automatic hairstyle matching method, device and system
CN105354411A (en) * 2015-10-19 2016-02-24 百度在线网络技术(北京)有限公司 Information processing method and apparatus
CN107784134A (en) * 2016-08-24 2018-03-09 南京乐朋电子科技有限公司 A kind of virtual hair style simulation system
CN107741974A (en) * 2017-10-09 2018-02-27 武汉轻工大学 Auxiliary hairdressing method

Also Published As

Publication number Publication date
CN108537126A (en) 2018-09-14

Similar Documents

Publication Publication Date Title
CN108537126B (en) Face image processing method
US11625878B2 (en) Method, apparatus, and system generating 3D avatar from 2D image
CN105404392B (en) Virtual method of wearing and system based on monocular cam
CN109690617B (en) System and method for digital cosmetic mirror
US9959453B2 (en) Methods and systems for three-dimensional rendering of a virtual augmented replica of a product image merged with a model image of a human-body feature
CN113610612B (en) 3D virtual fitting method, system and storage medium
CN108885794A (en) Virtually try on clothes on a real mannequin of the user
US20090115777A1 (en) Method of Generating and Using a Virtual Fitting Room and Corresponding System
CN108460398B (en) Image processing method and device and cloud processing equipment
JP2010507854A (en) Method and apparatus for virtual simulation of video image sequence
CN101493930B (en) Loading exchanging method and transmission exchanging method
JP2004094917A (en) Virtual makeup apparatus and method
WO2021143282A1 (en) Three-dimensional facial model generation method and apparatus, computer device and storage medium
CN110189202A (en) A kind of three-dimensional virtual fitting method and system
WO2014081394A1 (en) Method, apparatus and system for virtual clothes modelling
WO2020104990A1 (en) Virtually trying cloths & accessories on body model
CN110291560A (en) Method for creating a three-dimensional virtual representation of a person
CN107680166A (en) A kind of method and apparatus of intelligent creation
CN113298956A (en) Image processing method, nail beautifying method and device, and terminal equipment
CN108664884A (en) A kind of virtually examination cosmetic method and device
Danieau et al. Automatic generation and stylization of 3d facial rigs
JPH10240908A (en) Video composition method
CN111028318A (en) Virtual face synthesis method, system, device and storage medium
JP2018195996A (en) Image projection apparatus, image projection method, and image projection program
CN115019401B (en) Prop generation method and system based on image matching

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant