
WO2018214115A1 - Method and device for evaluating face makeup - Google Patents

Method and device for evaluating face makeup

Info

Publication number
WO2018214115A1
Authority
WO
WIPO (PCT)
Prior art keywords
makeup
evaluation result
user
evaluation
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/CN2017/085980
Other languages
English (en)
Chinese (zh)
Inventor
闫洁
宋风龙
黄永兵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to PCT/CN2017/085980 priority Critical patent/WO2018214115A1/fr
Priority to CN201780091213.1A priority patent/CN110663063B/zh
Publication of WO2018214115A1 publication Critical patent/WO2018214115A1/fr
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation

Definitions

  • Embodiments of the present invention relate to the field of image processing technologies, and in particular, to a method and apparatus for evaluating face makeup.
  • Although the above implementation can ensure that the user obtains the desired effect picture, the generation of the effect picture depends mainly on image processing software, so the improvement of the person image has no real effect on the user's actual makeup process. Moreover, for the user, the obtained effect picture may have been processed not only for light intensity but also by face slimming or pupil enlargement. It can be seen that such adjustment is only performed on the existing person image to give the user a better visual effect; based on the rendered effect picture, it is difficult for the user to adjust his or her actual face makeup in a targeted manner.
  • the embodiment of the invention provides a method and a device for evaluating face makeup, which can solve the problem that the user cannot apply the processing result presented by the effect diagram output by the terminal to the actual makeup.
  • the embodiment of the present invention adopts the following technical solutions:
  • an embodiment of the present invention provides a method for evaluating a face makeup.
  • The method specifically includes: collecting a person image frame; evaluating the avatar area in the person image frame according to a specified model to obtain an evaluation result of the avatar area; and then displaying the person image frame together with the evaluation result and/or a makeup suggestion corresponding to the evaluation result.
  • The model parameters of the specified model are personalized model parameters for performing face makeup evaluation for the current user, obtained by adjusting initial model parameters. It can be seen that the terminal does not retouch the collected person image itself; instead, after internal processing, the terminal obtains the corresponding evaluation result and then presents the original person image frame to the user together with the evaluation result and/or the makeup suggestion corresponding to the evaluation result.
  • In the case where the evaluation result is presented, the user can determine a plan for adjusting the face makeup according to the evaluation result presented by the terminal and the person image frame reflecting the current actual face makeup, and implement it; in the case where makeup suggestions are presented, the user can adjust the face makeup directly according to the presented suggestions. This ensures that the adjustment plan based on the evaluation result, or the presented makeup suggestion, is effectively reflected in the user's actual face makeup.
  • In this way, the problem that the user cannot apply the processing result presented by the effect picture output by the terminal to actual makeup is solved.
  • the avatar area includes n partial regions, where n is an integer greater than or equal to one.
  • the specified model is a deep neural network model
  • In this case, evaluating the avatar area in the person image frame according to the specified model to obtain the evaluation result of the avatar area can be implemented as follows: evaluate the avatar area according to the deep neural network model, or according to the deep neural network model and a preset rule, to obtain at least one of an evaluation result of the avatar area as a whole, evaluation results of the n partial regions, and an evaluation result indicating an association relationship between at least two of the n partial regions.
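  • Purely as an illustration (the patent prescribes no data layout), the three kinds of evaluation results described above might be held in a structure like the following sketch; all names and field types are assumptions.

```python
from dataclasses import dataclass, field
from typing import Dict, Tuple

@dataclass
class MakeupEvaluation:
    """Holds the three kinds of evaluation results: the avatar area as a
    whole, the n partial regions, and associations between region pairs."""
    overall: float                                                         # e.g. 7.5 on a 10-point scale
    regions: Dict[str, float] = field(default_factory=dict)                # {"eyes": 8.0, ...}
    relations: Dict[Tuple[str, str], float] = field(default_factory=dict)  # {("eyes", "eyebrows"): 7.0}
```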
  • The preset rule is a judgment rule, based on the facial features of the user, for determining the evaluation results of the n local regions and the evaluation result of the association relationship.
  • The preset rule can be regarded as an evaluation rule configured in advance by the user. For example, if there is a mole near the corner of the user's eye, there will always be a black spot near the eye corner in the person image collected by the camera. If only the deep neural network model is used to evaluate the face makeup, that black spot is likely to be treated as a stain, lowering the evaluation result of the user's face makeup. To reduce the probability of this happening, the user can configure a rule in advance, for example, that the makeup effect of the area around the eye corner is not considered, so that the "stain" recognized by the terminal does not affect the evaluation result of the face makeup.
  • Of course, the user can also exclude part of the area so that it does not participate in the evaluation, or lower the weight of that part, so that, for example, a detected stain does not reduce the face makeup score. In this way, evaluation results obtained by comprehensively considering the deep neural network model and the preset rules reflect the user's actual face makeup more accurately.
  • Optionally, before displaying the makeup suggestion corresponding to the evaluation result, the method further includes: traversing a database to find the makeup suggestion corresponding to the evaluation result.
  • the database is used to store the matching relationship between each evaluation result and the makeup suggestion.
  • The evaluation result can effectively assess the user's current face makeup. If the user is skilled in makeup techniques, the user can adjust the current face makeup directly according to the content of the evaluation result, achieving the desired visual effect. In practice, however, only users engaged in professional makeup-related work have good command of the various makeup techniques; for ordinary users, it is difficult to find the best solution for overcoming the current facial defects.
  • Therefore, the various evaluation results can be matched with corresponding makeup suggestions by means of a preset database. If the makeup suggestions are presented together with the evaluation result, the user can select the needed adjustment more quickly, speeding up makeup and improving the makeup effect.
  • a prediction effect map after applying the makeup suggestion to the person image frame may also be displayed.
  • For the user, what matters is the real effect obtained after adjusting the face makeup according to the makeup suggestion. Displaying only the makeup suggestion can help the user adjust the face makeup, but a user unfamiliar with makeup techniques may be unable to predict whether the suggestion will achieve the desired visual effect.
  • Therefore, in the embodiment of the present invention, presenting the prediction effect map gives the user a more intuitive sense of whether the provided makeup suggestions are suitable, and the flexibility to select the desired suggestions.
  • Optionally, the method further includes: displaying a static person image, and acquiring, as input by the current user, an evaluation result of at least one partial region in the static person image, and/or an evaluation result of the avatar area as a whole, and/or an evaluation result of an association relationship.
  • The specified user and the current user meet a specified condition, where the specified condition includes at least one of the following: the similarity between the facial features of the specified user and the facial features of the current user is greater than a first threshold, and the similarity between the evaluation results given by the specified user and the current user for the same person image is greater than a second threshold.
  • the specified model is trained, and the initial model parameters in the specified model are adjusted to the personalized model parameters.
  • In the actual operation process, the terminal may further train the specified model based on the obtained personalized annotation data until the specified model converges, and the model parameters applied by the converged specified model are used as the personalized model parameters obtained for the current user.
  • the avatar area includes a face type, or includes a face type and a hairstyle
  • the partial area in the avatar area includes at least one of the facial features.
  • the image frame of the person is collected, which can be specifically implemented as: acquiring an image stream, and sampling from the image stream to obtain a character image frame.
  • In the actual operation process, the terminal may sample the image stream collected by the camera to obtain a certain number of person image frames, and process these frames separately.
  • The collected image stream is what is actually output to the user, but in the presented stream, person image frames with the evaluation result and the makeup suggestion superimposed are included.
  • The currently displayed evaluation result may be directly replaced when a new evaluation result appears, thereby ensuring that an evaluation result is always present in the image stream presented to the user.
  • Makeup suggestions can use the same output method. In other words, a person image frame output at a given moment may not itself have been evaluated or matched with suggestions; when displayed, it still carries the evaluation result and the makeup suggestion contained in a previous frame.
  • an embodiment of the present invention provides an apparatus for evaluating a face makeup.
  • the device can implement the functions implemented in the foregoing method embodiments, and the functions can be implemented by using hardware or by executing corresponding software by hardware.
  • the hardware or software includes one or more modules corresponding to the above functions.
  • the apparatus includes a processor and a transceiver configured to support the apparatus to perform the corresponding functions of the above methods.
  • the transceiver is used to support communication between the device and other devices.
  • the apparatus can also include a memory for coupling with the processor that retains the program instructions and data necessary for the apparatus.
  • an embodiment of the present invention provides a computer storage medium for storing computer software instructions for implementing the above functions, including a program designed to perform the above aspects.
  • FIG. 1 is a schematic structural diagram of a terminal according to an embodiment of the present disclosure
  • FIG. 2 is a schematic diagram of a process for evaluating a face makeup according to an embodiment of the present invention
  • FIG. 3 is a schematic diagram of a workflow implemented by a facial makeup evaluation and recommendation system 202 according to an embodiment of the present invention
  • FIG. 4 is a flowchart of a method for evaluating a face makeup according to an embodiment of the present invention
  • FIG. 5 is a schematic diagram of partitioning a specific area in a certain frame of a person image according to an embodiment of the present invention
  • FIG. 6 is a schematic diagram of a presentation evaluation result according to an embodiment of the present invention.
  • FIG. 7 is a flowchart of another method for evaluating a face makeup according to an embodiment of the present invention.
  • FIG. 8 is a schematic structural diagram of a face makeup evaluation and suggestion system 202 according to an embodiment of the present invention.
  • FIG. 9 is a schematic diagram of an intelligent evaluation module 402 implementing corresponding functions according to an embodiment of the present invention.
  • FIG. 10 is a schematic flowchart of evaluating an association between a user's eyes and an eyebrow according to an embodiment of the present invention
  • FIG. 11 and FIG. 12 are flowcharts of another method for evaluating face makeup according to an embodiment of the present invention.
  • FIG. 13 is a schematic diagram of a process for providing intelligent suggestions for facial makeup according to an embodiment of the present invention.
  • FIG. 14 is a flowchart of a method for providing intelligent suggestions for a face makeup according to an embodiment of the present invention.
  • FIG. 15 is a schematic diagram of an operation flow for extracting facial features, textures, retrieving corresponding makeup templates, and applying a makeup template to generate a predicted effect map according to an embodiment of the present invention
  • FIG. 16 is a flowchart of another method for evaluating a face makeup according to an embodiment of the present invention.
  • FIG. 17 is a schematic diagram of a facial photo evaluation module 404 for implementing corresponding functions according to an embodiment of the present invention.
  • FIG. 18 is a schematic diagram of a process for specifying model training according to an embodiment of the present invention.
  • FIG. 19 is a flowchart of another method for evaluating a face makeup according to an embodiment of the present invention.
  • FIG. 20 is a schematic structural diagram of an apparatus for evaluating a face makeup according to an embodiment of the present invention.
  • FIG. 21 is a schematic structural diagram of another apparatus for evaluating a face makeup according to an embodiment of the present invention.
  • the embodiment of the present invention can be used in a terminal, and the terminal can include a notebook computer, a smart phone, and the like.
  • the terminal is provided with at least a camera, a display screen, an input device and a processor.
  • The terminal 100 includes components such as a processor 101, a memory 102, a camera 103, an RF circuit 104, an audio circuit 105, a speaker 106, a microphone 107, an input device 108, other input devices 109, a display screen 110, a touch panel 111, a display panel 112, an output device 113, and a power source 114.
  • The display screen 110 comprises at least the touch panel 111 as an input device and the display panel 112 as an output device. It should be noted that the terminal structure shown in FIG. 1 does not constitute a limitation on the terminal, which may include more or fewer components than illustrated, combine some components, split some components, or use a different arrangement of components; this is not limited herein.
  • the components of the terminal 100 will be specifically described below with reference to FIG. 1 :
  • The RF circuit 104 can be used to receive and send signals during information transmission and reception or during a call. For example, if the terminal 100 is a mobile phone, the terminal 100 can receive downlink information sent by a base station through the RF circuit 104 and deliver it to the processor 101 for processing, and can also send uplink data to the base station.
  • RF circuits include, but are not limited to, an antenna, at least one amplifier, a transceiver, a coupler, an LNA, a duplexer, and the like.
  • RF circuitry 104 can also communicate with the network and other devices via wireless communication. The wireless communication can use any communication standard or protocol including, but not limited to, GSM, GPRS, CDMA, WCDMA, LTE, email, SMS, and the like.
  • The memory 102 can be used to store software programs and modules, and the processor 101 executes the various functional applications and data processing of the terminal 100 by running the software programs and modules stored in the memory 102.
  • The memory 102 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system and the applications required for at least one function (for example, a sound playing function, an image playing function, etc.), and the data storage area may store data (such as audio data, video data, etc.) created according to the use of the terminal 100.
  • The memory 102 may include a high-speed random access memory, and may also include a nonvolatile memory, such as at least one magnetic disk storage device or flash memory device, or other volatile solid-state storage device.
  • Other input devices 109 can be used to receive input numeric or character information, as well as to generate key signal inputs related to user settings and function control of terminal 100.
  • Specifically, the other input devices 109 may include, but are not limited to, one or more of a physical keyboard, function keys (e.g., volume control buttons, switch buttons, etc.), trackballs, mice, joysticks, and light mice (a light mouse is a touch-sensitive surface that does not display visual output, or an extension of a touch-sensitive surface formed by a touch screen). The other input devices 109 may also include sensors built into the terminal 100, such as gravity sensors and acceleration sensors, and the terminal 100 may also use parameters detected by the sensors as input data.
  • the display screen 110 can be used to display information input by the user or information provided to the user as well as various menus of the terminal 100, and can also accept user input.
  • Optionally, the display panel 112 may be configured in the form of an LCD, an OLED, or the like.
  • The touch panel 111, also referred to as a touch screen or touch-sensitive screen, may collect contact or non-contact operations on or near it (e.g., operations performed by the user on or near the touch panel 111 using a finger, a stylus, or any other suitable object or accessory, which may also include somatosensory operations; the operations include single-point control operations, multi-point control operations, and the like).
  • the touch panel 111 may further include two parts: a touch detection device and a touch controller.
  • Generally, the touch detection device detects the user's touch position and gesture, detects the signal produced by the touch operation, and transmits the signal to the touch controller.
  • The touch controller receives the touch information from the touch detection device, converts it into information that the processor 101 can process, and transmits it to the processor 101; it can also receive and execute commands sent by the processor 101.
  • In addition, the touch panel 111 can be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave, or in any technology developed in the future.
  • The touch panel 111 can cover the display panel 112, and the user can operate on or near the touch panel 111 according to the content displayed by the display panel 112 (the displayed content includes, but is not limited to, a soft keyboard, a virtual mouse, virtual buttons, and icons). After detecting an operation on or near it, the touch panel 111 transmits the operation to the processor 101 to determine the user input, and the processor 101 then provides a corresponding visual output on the display panel 112 according to the user input.
  • Although in FIG. 1 the touch panel 111 and the display panel 112 are two independent components implementing the input and output functions of the terminal 100, in some embodiments the touch panel 111 may be integrated with the display panel 112 to implement the input and output functions of the terminal 100.
  • The audio circuit 105, the speaker 106, and the microphone 107 provide an audio interface between the user and the terminal 100.
  • Specifically, the audio circuit 105 can transmit, to the speaker 106, an electrical signal converted from received audio data, and the speaker 106 converts it into a sound signal for output.
  • Conversely, the microphone 107 converts the collected sound signal into an electrical signal, which the audio circuit 105 receives and converts into audio data; the audio data is then output to the RF circuit 104 to be sent to, for example, another terminal, or output to the memory 102 so that the processor 101 can perform further processing in combination with the content stored in the memory 102.
  • the camera 103 can acquire image frames in real time and transmit them to the processor 101 for processing, and store the processed results to the memory 102 and/or present the processed results to the user via the display panel 112.
  • The processor 101 is the control center of the terminal 100. It connects the various parts of the entire terminal 100 using various interfaces and lines, and performs the various functions of the terminal 100 and processes data by running or executing the software programs and/or modules stored in the memory 102 and invoking the data stored in the memory 102, thereby monitoring the terminal 100 as a whole.
  • Optionally, the processor 101 may include one or more processing units. The processor 101 may further integrate an application processor and a modem processor, where the application processor mainly handles the operating system, the UI, applications, and the like, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may alternatively not be integrated into the processor 101.
  • The terminal 100 may further include a power source 114 (for example, a battery) for supplying power to the components.
  • Optionally, the power source 114 may be logically connected to the processor 101 through a power management system, so that functions such as charging, discharging, and power-consumption management are implemented through the power management system.
  • the terminal 100 may further include a Bluetooth module and the like, and details are not described herein again.
  • Embodiments of the present invention provide a method for evaluating face makeup, which may be performed by the terminal 100 shown in FIG. 1.
  • FIG. 2 is a schematic diagram of the implementation of the method: the camera 201 of the terminal 200 collects person image frames in real time and transmits them to the facial makeup evaluation and suggestion system 202; the system 202 performs the face makeup evaluation and generates suggestions, and the evaluation results and suggestions are then rendered onto the collected person image frames and presented to the user through the display 203.
  • In addition, the terminal 200 is further provided with an input device 204.
  • In the process of establishing the specified model, the user can input subjective evaluations of different person images through the input device 204; the definition and establishment process of the specified model are described later and are not repeated here.
  • FIG. 3 it is a schematic diagram of the workflow implemented by the facial makeup evaluation and suggestion system 202.
  • In general, the facial makeup evaluation and suggestion system 202 mainly involves three phases: the data labeling phase (phase A shown in FIG. 3), the offline training phase (phase B shown in FIG. 3), and the online analysis phase (phase C shown in FIG. 3).
  • the pre-training model parameters, the personalized model parameters, and the database may be part of the facial makeup evaluation and suggestion system 202, wherein the pre-training model parameters and database may be independent of the facial makeup evaluation and suggestion system.
  • the terminal 100 can obtain pre-training model parameters and data recorded in the database from the network side or other devices through a communication method such as the Internet.
  • Among them, phase A is mainly used to capture the user's aesthetic preferences, i.e., to learn which face makeup the user likes and/or dislikes, and to generate personalized annotation data for model training. Model training is then completed in phase B: based on the pre-training model parameters, adjusted personalized model parameters are obtained, i.e., the trained specified model, which better fits the user's aesthetic.
  • In actual use, the camera collects the image stream in real time; sampling, recognition, evaluation, and suggestion are then performed in phase C, and finally the rendered image stream is formed and presented on the display.
  • During the analysis, the face makeup evaluation and makeup suggestion parts use the personalized model parameters applied in the specified model. After the evaluation result of the face makeup is determined, the makeup suggestions corresponding to the evaluation result are found through the mapping relationships stored in the database and presented to the user, thereby providing the user with adjustments that overcome the current face makeup defects or make the face makeup look better.
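  • As a rough, non-authoritative sketch of one pass through phase C (sample → recognize → evaluate → suggest → render), the flow might look as follows; every function body, score, and name here is a stand-in, since the patent specifies no implementation.

```python
# Illustrative sketch of one pass through the online analysis phase.
def recognize_face(frame):
    # Stand-in for face detection and key-point location.
    return {"face_box": (40, 30, 120, 160)}

def evaluate(frame, face_info):
    # Stand-in for evaluation with the personalized specified model.
    return {"overall": 7.2, "eyebrows": 6.0}

def lookup_suggestions(evaluation, suggestion_db):
    # Look up a stored makeup suggestion for each low-scoring item.
    return [suggestion_db[k] for k, score in evaluation.items()
            if score < 7.0 and k in suggestion_db]

def process_frame(frame, suggestion_db):
    face_info = recognize_face(frame)
    evaluation = evaluate(frame, face_info)
    suggestions = lookup_suggestions(evaluation, suggestion_db)
    return frame, evaluation, suggestions  # the renderer overlays the last two
```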
  • the specific implementation steps of the foregoing method may include:
  • Step 301 Collect a character image frame.
  • Specifically, the face recognition may include face detection and the locating and identification of key points of facial parts, after which person images carrying the identification information are output frame by frame for the facial makeup evaluation and suggestion system to evaluate and make suggestions on.
  • It should be noted that the person image with identification information output above is not presented to the user; the image stream presented directly on the display screen is the image stream actually collected by the camera.
  • The output referred to here is the person image with identification information that serves as the input for the facial makeup evaluation and suggestion; this input/output process takes place only inside the facial makeup evaluation and suggestion system and is not externally visible.
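  • For illustration only, a minimal stand-in for this recognition step is sketched below using an OpenCV Haar cascade; the patent does not specify a detector, and the annotated result would be consumed internally rather than shown to the user.

```python
import cv2

# Hypothetical detector choice; any face detector could play this role.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def annotate_frame(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    # The frame plus identification information is passed on internally only.
    return {"frame": frame_bgr, "faces": [tuple(f) for f in faces]}
```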
  • Step 302 According to the specified model, the avatar area in the image frame of the person is evaluated, and the evaluation result of the avatar area is obtained.
  • the model parameter of the specified model is a personalized model parameter for performing facial makeup evaluation for the current user obtained by adjusting the initial model parameters.
  • the avatar area includes a face type, or includes a face type and a hairstyle, and the partial area in the avatar area includes at least one of the facial features.
  • FIG. 5 is a schematic diagram of dividing a certain frame of a person image into specific regions.
  • the user's head contour and face portion are marked with a broken line frame, and four rectangular regions are roughly divided in the head contour for overall evaluation of the face makeup evaluation and suggestion system;
  • The embodiment of the present invention mainly evaluates and makes suggestions on the user's facial features; therefore, the ears do not necessarily need to be considered.
  • Of course, the face makeup evaluation and suggestion system can also take the ears into consideration, but in general the user does not apply makeup to the ears, at most adding accessories to the ears to modify the face. Therefore, in the examples of the embodiments of the present invention, the ears are not considered.
  • Step 303 Display a person image frame, and an evaluation result and/or a makeup suggestion corresponding to the evaluation result.
  • The makeup suggestion can point out the problems of the current face makeup and, where necessary, give corresponding adjustment plans for those problems.
  • FIG. 6 it is a schematic diagram showing the evaluation results.
  • a person image frame acquired by the camera is presented, and various evaluations are displayed in a blank area of the character image frame.
  • the blank area can be understood as an area on the display screen that does not hinder the user from viewing the imaging effect.
  • Optionally, a menu bar may be displayed in a blank area of the display screen to prompt the user to view hidden evaluation results by clicking, sliding, and the like. It should be noted that the above two ways of presenting the evaluation result are only two of many possible presentation modes, and do not limit the presentation of evaluation results in the embodiment of the present invention.
  • It can be seen that the terminal does not retouch the collected person image itself; after internal processing, the corresponding evaluation result is obtained, and the original person image frame is then presented to the user together with the evaluation result. Of course, the original person image frame may instead be presented together with the makeup suggestion corresponding to the evaluation result, or together with both the evaluation result and the corresponding makeup suggestion.
  • the makeup suggestion presented herein may be a makeup suggestion corresponding to the partial evaluation result, or a makeup suggestion corresponding to the overall evaluation result, and the specific presentation content may be selected by the user or preset, and is not limited herein.
  • In this way, the user can determine a plan for adjusting the face makeup according to the evaluation result presented by the terminal and the person image frame reflecting the current actual face makeup, and implement it, ensuring that the adjustment plan derived from the evaluation result is effectively reflected in the user's actual face makeup; alternatively, the user can adjust the face makeup directly according to the makeup suggestions presented with the evaluation result. This solves the problem that the user cannot apply the processing result presented by the effect picture output by the terminal to actual makeup.
  • It should be noted that the specified model may be a processing model already applied in various retouching software.
  • In the embodiment of the present invention, the specified model being a deep neural network model is taken as an example to describe the implementation process of evaluating the person image frames collected by the camera according to the specified model and obtaining the evaluation results. Therefore, on the basis of the implementation shown in FIG. 4, an implementation as shown in FIG. 7 is also possible.
  • Specifically, if the avatar area includes n local areas, where n is an integer greater than or equal to 1, and the specified model is a deep neural network model, then step 302, in which the avatar area in the person image frame is evaluated according to the specified model to obtain the evaluation result of the avatar area, can be specifically implemented as step 3021.
  • Step 3021: According to the deep neural network model, or according to the deep neural network model and the preset rule, evaluate the avatar area to obtain at least one of the evaluation result of the avatar area as a whole, the evaluation results of the n local areas, and the evaluation result of the association relationship between at least two of the n local areas.
  • the preset rule is a judging rule for determining an evaluation result of the n local regions and an evaluation result of the association relationship according to the facial features of the current user.
  • The preset rule can be regarded as an evaluation rule configured in advance by the user. For example, if there is a mole near the corner of the user's eye, there will always be a black spot near the eye corner in the person image collected by the camera. If only the deep neural network model is used to evaluate the face makeup, that black spot is likely to be treated as a stain, lowering the evaluation of the user's face makeup. To reduce the probability of this happening, the user can configure a rule in advance, for example, that the makeup effect of the area around the eye corner is not considered, so that the "stain" recognized by the terminal does not affect the evaluation result of the face makeup. Of course, the user can also exclude part of the area so that it does not participate in the evaluation, or lower its weight, for example so that a detected stain does not affect the evaluation result of the face makeup.
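  • The following minimal sketch (all region names invented) shows one way such a user-configured rule could filter the model's per-region scores before they enter the final evaluation.

```python
# Illustrative only: apply the user's preset rules to per-region scores.
# A rule excludes the region around the eye corner (e.g. because of a mole)
# so a detected "stain" there cannot lower the evaluation.
EXCLUDED_REGIONS = {"left_eye_corner"}   # user-configured, hypothetical names

def apply_preset_rules(region_scores):
    return {region: score
            for region, score in region_scores.items()
            if region not in EXCLUDED_REGIONS}

scores = {"left_eye_corner": 3.0, "lips": 8.5, "eyebrows": 7.0}
print(apply_preset_rules(scores))   # {'lips': 8.5, 'eyebrows': 7.0}
```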
  • Optionally, the facial makeup evaluation and suggestion system 202 can also be structured as shown in FIG. 8.
  • the facial makeup evaluation and suggestion system 202 mainly includes an online analysis subsystem, an offline training subsystem, and an image rendering module 406 and an information processing and control module 407.
  • the online analysis subsystem includes a face recognition module 401, an intelligent evaluation module 402, and an intelligent recommendation module 403.
  • The functions of these modules are mainly used to implement the functions in phase C shown in FIG. 3 other than image rendering;
  • the offline training subsystem includes a face photo evaluation module 404 and a model offline training module 405.
  • The functions of these modules are mainly used to implement the functions implemented in phase A and phase B shown in FIG. 3.
  • image rendering module 406 is mainly used to implement the function of image rendering in phase C;
  • The information processing and control module 407 is mainly used to combine the output results of the online analysis subsystem and the offline training subsystem with the user input collected by the input device, analyze and process the corresponding data and information, and send them to the image rendering module 406. The image rendering module 406 then combines the original person image frame collected by the camera with the content provided by the information processing and control module 407, renders the original person image frame, and presents the rendered frame to the user through the display.
  • It should be noted that the modules shown in FIG. 8 can be considered to work in parallel; that is, within the same subsystem, the modules work in a pipelined manner along the direction of data flow, with different pipeline stages operating in parallel.
  • For example, after the face recognition module 401 finishes processing the first person image frame and that frame is being processed by the intelligent evaluation module 402, the now-idle face recognition module 401 can continue to process the second person image frame.
  • The second person image frame is the next frame to be processed, adjacent to the first person image frame.
  • This pipelined processing can be used between every two modules that exchange data.
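  • Purely as an illustration of this pipelining (the patent prescribes no threading scheme), two stages can be chained with queues so that different frames occupy different stages at the same time:

```python
import queue
import threading

def run_stage(worker, inbox, outbox):
    """Run one module as a pipeline stage: while the evaluation stage handles
    frame 1, the idle recognition stage can already take frame 2."""
    def loop():
        while True:
            item = inbox.get()
            if item is None:          # shutdown marker, passed downstream
                outbox.put(None)
                return
            outbox.put(worker(item))
    threading.Thread(target=loop, daemon=True).start()

frames, recognized, evaluated = queue.Queue(), queue.Queue(), queue.Queue()
run_stage(lambda f: ("recognized", f), frames, recognized)    # face recognition 401
run_stage(lambda r: ("evaluated", r), recognized, evaluated)  # intelligent evaluation 402

for i in range(3):
    frames.put(f"frame-{i}")
frames.put(None)
print([evaluated.get() for _ in range(4)])  # three results plus the marker
```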
  • the intelligent evaluation module 402 shown in FIG. 8 can be used to implement the operation indicated by step 3021, and the specific operation flow is as shown in FIG. 9.
  • The deep-neural-network-based evaluation model shown in FIG. 9 is the deep neural network model described above. As the figure shows, corresponding evaluation results can be obtained whether the deep-neural-network-based evaluation model or the rule-based evaluation model is used.
  • The deep-neural-network-based evaluation model takes the original person image, i.e., the person image frame collected by the camera, as input, and its output can include at least three types of evaluation results: the evaluation result of the overall avatar area, the evaluation results of the local areas, and the evaluation result of the association relationship. The evaluation result of the association relationship indicates the evaluation of the relationship between at least two local areas, for example, between local area A and local area B, or among local area A, local area B, and local area C. The rule-based evaluation model takes the original person image and the face recognition information as input, and its output generally includes only the evaluation results of the local areas obtained after the region division indicated by the face recognition information.
  • The above local area may be regarded as a rectangular area indicated by the face recognition information, for example the facial features, the chin, the forehead, and the like.
  • In addition, the determination of the local areas may be preset by the user according to the requirements for facial makeup evaluation and suggestion; the local areas are not limited to those shown in FIG. 9 and may include only some of them, which is not limited here.
  • FIG. 10 is a flowchart for evaluating the association between the user's eyes and eyebrows.
  • As shown in the figure, the original person image is used as the input of the deep neural network model; after multiple convolution and sub-sampling operations, a fully connected layer produces the output evaluation result. It should be noted that only two convolution and sub-sampling stages are shown in the figure; in actual operation, the number of times the deep neural network model processes the original person image can be preset, and in general, more processing stages give more accurate results.
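  • The following toy PyTorch model mirrors the FIG. 10 flow (two convolution and sub-sampling stages, then a fully connected output); every size and layer choice is an assumption for illustration, not taken from the patent.

```python
import torch
import torch.nn as nn

class AssociationNet(nn.Module):
    """Toy version of the FIG. 10 flow: two convolution + sub-sampling
    stages followed by a fully connected layer outputting one score."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # first sub-sampling
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # second sub-sampling
        )
        self.head = nn.Linear(32 * 16 * 16, 1)    # fully connected output

    def forward(self, x):                         # x: (N, 3, 64, 64)
        return self.head(self.features(x).flatten(1))

score = AssociationNet()(torch.randn(1, 3, 64, 64))  # one association score
```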
  • The training, generation, and subsequent adjustment of deep neural network models are mature in the prior art; therefore, the embodiment of the present invention only describes the achieved effect by way of example, and the implementation process and principles of the deep neural network model are not described in detail. For the specific content, refer to the prior art.
  • Because the deep neural network model processes the entire person image, the background color, lighting, and the like in the image are taken into account during processing; that is, the influence of the user's surroundings on the face makeup is considered, so reasonable evaluation results can be provided more accurately, together with suggestions for adjusting the current face makeup (i.e., makeup suggestions) if the user needs them. It should be noted that the manner of providing makeup suggestions is described later and not repeated here.
  • the deep neural network model can be used for evaluation.
  • the terminal can also selectively give the user a makeup suggestion according to the obtained evaluation result, so that the user can adjust his face makeup.
  • FIG. 4 can also be implemented as an implementation manner as shown in FIG. 11 .
  • the step 303 is to display the person image frame, and the evaluation result and/or the makeup suggestion corresponding to the evaluation result, which may be implemented as step 3031 and step 3032 and/or step 3033; before step 3033 is performed, step 501 may also be performed:
  • Step 3031 Display a character image frame.
  • Step 3032 Display the evaluation result.
  • Step 3033 Display a makeup suggestion corresponding to the evaluation result.
  • Step 501 Traversing the database to find a makeup suggestion corresponding to the evaluation result.
  • the database is used to store the matching relationship between each evaluation result and the makeup suggestion.
  • the database can be pre-configured by the user or the staff member.
  • For example, a scoring system is used to measure the quality of the evaluation result, and the upper limit of the score for each local area can be set to 10; a local area at one score corresponds to one set of makeup suggestions, while the same local area at a score of 8 corresponds to another set of makeup suggestions.
  • The contents of these two sets of makeup suggestions may intersect. That is, the evaluation results obtained for the same local area at different scores are likely to differ, so the makeup suggestions corresponding to those evaluation results may be partially identical or completely different, and of course, in some cases, identical.
  • the above-mentioned makeup suggestions are in units of groups, and are only one possible case proposed by the embodiment of the present invention.
  • Each set of makeup suggestions may include one or more makeup suggestions, which are not limited herein.
  • For example, when the local area where the eyebrows are located receives a low score, the eyebrow makeup may be incomplete, and the suggestion at this time may be to fill in the eyebrows; when the overall person image is scored 7, the bangs may be considered too long, and the suggestion may be to pin the bangs back for a more mature look. It can be seen that the content presented to the user may include not only the makeup suggestions corresponding to the evaluation results but also the effects that can be achieved by applying them.
  • the specific presentation form may adopt an image or a text, which is convenient for the user to understand, and is not limited herein.
  • the user can adjust the face makeup more specifically in combination with the actual evaluation results and the corresponding makeup suggestions.
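  • A hypothetical fragment of such a database is sketched below; all entries are invented and only illustrate that different scores for the same area can map to overlapping suggestion sets, as in the eyebrow and bangs examples above.

```python
# Hypothetical database fragment: the same local area maps to different,
# possibly overlapping suggestion sets depending on its score.
SUGGESTION_DB = {
    ("eyebrows", 6): {"fill in the sparse brow tail", "even out the arch"},
    ("eyebrows", 8): {"even out the arch"},           # intersects the score-6 set
    ("overall", 7):  {"pin the bangs back"},
}

def suggestions_for(item, score):
    return SUGGESTION_DB.get((item, round(score)), set())

print(suggestions_for("eyebrows", 6.2))
# {'fill in the sparse brow tail', 'even out the arch'}
```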
  • Even a user who is not good at makeup can refer to the makeup suggestions presented by the terminal, so that when the user has no adjustment ideas of his own, the suggestions serve as tutorial-like makeup guidance, i.e., the user adjusts the makeup according to the suggestions.
  • the terminal may present the predicted effect map after applying the makeup suggestion to the user for the user to determine whether the makeup suggestion needs to be adopted. Therefore, based on the implementations shown in FIG. 4, FIG. 7, and FIG. 11, the implementation shown in FIG. 12 can also be implemented by taking FIG. 4 as an example.
  • That is, after the avatar area in the person image frame is evaluated according to the specified model and the evaluation result of the avatar area is obtained, step 502 may be performed:
  • Step 502 Display a prediction effect diagram after applying the makeup suggestion to the character image frame.
  • In the actual operation process, the terminal can analyze the evaluation results in advance to find a certain number of low-scoring items among all the evaluation results, analyze the facial features corresponding to the low-scoring items, and retrieve from the database the corresponding makeup suggestions and the makeup templates corresponding to those suggestions. Using the makeup templates, the face makeup modified by the templates is composited onto the original person image to form a prediction effect map adjusted by the makeup suggestions, and the prediction effect map is then evaluated. It should be noted that the model used for this evaluation is the same as the model that initially evaluated the original person image.
  • If the evaluation result reaches the standard, the above makeup suggestions are output, or the one or more makeup suggestions with the greatest influence on the prediction effect map are output; if the evaluation result does not reach the standard, makeup templates continue to be retrieved and the image is adjusted again until the evaluation result reaches the target prediction effect, after which all the makeup suggestions, or those with greater influence, are presented to the user.
  • the makeup template corresponding to each makeup suggestion can also be stored in the database.
  • The makeup template may be a template of common features extracted, part by part, from person images with high evaluation results.
  • FIG. 15 illustrates the operation flow of extracting facial features and textures, retrieving the corresponding makeup templates, and applying the templates to generate a prediction effect map.
  • That is, according to the features and textures extracted from the original person image, the corresponding makeup template is retrieved and then, during composition, superimposed on the corresponding position in the original person image to obtain the prediction effect map.
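  • As a non-authoritative sketch of the retrieve-apply-re-evaluate loop described above (the function arguments stand in for the evaluation model and template store, and the target score and round limit are invented):

```python
def build_prediction(image, evaluate, templates, target=8.0, max_rounds=5):
    """Sketch of the FIG. 13/15 loop: apply makeup templates to the lowest
    scoring items and re-evaluate with the SAME model until the target is
    reached. `evaluate(image)` returns (overall_score, worst_region);
    `templates[region](image)` composites that region's template."""
    applied = []
    for _ in range(max_rounds):
        score, worst = evaluate(image)
        if score >= target or worst not in templates:
            break
        image = templates[worst](image)   # superimpose template on the region
        applied.append(worst)
    return image, applied                 # prediction effect map + suggestions used
```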
  • the specified model can be a deep neural network model.
  • In the embodiment of the present invention, the initial model parameters of an existing specified model may also be adjusted to obtain personalized model parameters that are applied to the specified model, i.e., the training of the specified model is completed for different users. Therefore, based on any of the implementations shown in FIG. 4, FIG. 7, FIG. 11, and FIG. 12, and taking FIG. 4 as an example, an implementation as shown in FIG. 16 is also possible.
  • steps 601 to 604 may be performed:
  • Step 601 Display a static character image.
  • the static character image may be a person image pre-existing in the photo database, and may also be a character image stored in a terminal local or network-side remote database or a character image temporarily acquired by the camera.
  • the source of the static character image is not limited here.
  • the terminal can sequentially display the static character images to the user through the display screen.
  • Optionally, the static person images can also be displayed in batches; for example, the display screen is divided into a nine-square grid, and a static person image is displayed in each cell. In this way, the user can evaluate the person images presented at the same time more fairly, for example by comparing them.
  • the manner of displaying the static character image is not limited to the above two examples, and may be other ways to ensure that the user views the static character image, which is not limited herein.
  • Step 602 Acquire an evaluation result input by the current user.
  • the evaluation result input by the current user may include an evaluation result of the current user for at least one partial region in each static character image, and/or an evaluation result of evaluating the overall image of the person, and/or for at least two The evaluation result of the relationship between local regions.
  • The person image input to the face photo evaluation module 404 may be an offline person image stored in the photo database, or a person image frame collected in real time by the camera of the terminal; the terminal displays it to the current user through the display screen.
  • Step 603: Acquire the static person images for which a specified user has completed the face makeup evaluation, and the corresponding evaluation results.
  • The specified user and the current user meet a specified condition, where the specified condition includes at least one of the following: the similarity between the facial features of the specified user and the facial features of the current user is greater than a first threshold, and the similarity between the evaluation results given by the specified user and the current user for the same person image is greater than a second threshold.
  • the setting manner and the value of the first threshold and the second threshold are not limited herein, and may be set by the current user or the staff in combination with historical experience values.
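  • For illustration only, the condition might be checked as below; the feature representation, the 10-point score scale, and the threshold values T1 and T2 are all assumptions rather than values from the patent.

```python
import numpy as np

T1, T2 = 0.8, 0.9   # placeholder first and second thresholds

def is_specified_user(feat_a, feat_b, scores_a, scores_b):
    # Facial-feature similarity: cosine similarity of feature vectors.
    cos = np.dot(feat_a, feat_b) / (np.linalg.norm(feat_a) * np.linalg.norm(feat_b))
    # Evaluation similarity: agreement of the two users' scores on the
    # same images, mapped from a 10-point scale into [0, 1].
    score_sim = 1.0 - np.mean(np.abs(np.asarray(scores_a) - np.asarray(scores_b))) / 10.0
    return cos > T1 or score_sim > T2   # "at least one of" the two conditions
```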
  • the terminal may preferentially push a similar person image as a character image of the training model according to the facial features of the user, such as the face shape of the user.
  • A similar person image here means a person image highly correlated with the current user's facial features, for example one with the same face shape as the current user, or one in which some facial features are the same as the current user's and the remaining facial features differ little from the current user's.
  • the image of the similar person is not limited to the above-mentioned several possible situations, and may be selected according to the user's requirements for facial makeup evaluation and suggestion, which is not limited herein.
  • the current user can sequentially or batchly evaluate the static person image through the input device.
  • For example, if the terminal is a mobile phone, the user can evaluate the person images through the touch screen, a stylus, and the virtual keyboard presented on the display screen; if the terminal is an electronic device, such as a notebook, connected to external input devices, the user can evaluate the person images through an external input device such as a mouse or a keyboard.
  • In addition, the terminal may save the evaluation results given by the current user locally on the terminal or, following the principle of saving the terminal's storage resources, store them on the network side or in the database of another device. Since the amount of data in the evaluation results manually input by a user is much smaller than the amount of data required for training the specified model, storing the evaluation results of different users in a unified way can provide richer training resources for the specified models applied to different users.
  • Regarding the evaluation results input by the specified user: to make evaluation results not input by the user himself more useful for training the specified model for the current user, in actual operation the evaluation results input by a specified user whose preferred makeup style is similar to the current user's can be used as part of the training resources; alternatively, for the same person image, the specified user's input can be compared with the current user's evaluation of that image, and the person images evaluated consistently by the specified user are used as part of the training resources. It can be seen that the embodiment of the present invention does not limit how training resources are obtained: they may be person images evaluated by the same user, person images evaluated by several different users, and of course person images in which multiple users evaluate different person images.
  • Step 604: Train the specified model according to the static person images for which the specified user and the current user have completed the face makeup evaluation and the corresponding evaluation results, and adjust the initial model parameters in the specified model to the personalized model parameters.
  • the implementation process of face recognition, face makeup evaluation, makeup suggestion, etc. is involved, and the above implementation process can be performed by using a machine learning model.
  • the model used in the face recognition may include an ASM algorithm and/or an AAM algorithm.
  • a general face recognition technology may be used, and details are not described herein.
  • the terminal may further train the specified model based on the obtained personalized annotation data until the specified model converges, and the model parameters applied by the specified model after convergence are used as the personalized model parameters obtained for the current user.
  • For example, the current user scores the overall evaluation of a specified person image as 8, while the specified model applying the initial model parameters scores it 7. This shows that the evaluation result obtained by the current specified model deviates from the user's aesthetic, and the specified model needs to be trained.
  • When, after training, the deviation falls within the allowable range, the current specified model can be determined as the trained specified model, and the model parameters it applies are used as the personalized model parameters for the current user.
  • For example, a floating range of ±0.5 around the overall evaluation score can be regarded as the allowable error range of the specified model during application. This parameter can be determined by the user according to his own requirements for face makeup evaluation, or set by the staff according to historical experience values; the setting method and the specific value of the error range are not limited herein.
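  • A minimal fine-tuning sketch under these assumptions (10-point scores, the ±0.5 tolerance from the example above, PyTorch chosen arbitrarily) might look like this:

```python
import torch

def personalize(model, optimizer, images, user_scores, tolerance=0.5, epochs=100):
    """Sketch of the offline training phase: start from the pre-trained
    parameters and fine-tune until the model's overall score falls within
    the allowable error range of the user's own annotations."""
    loss_fn = torch.nn.L1Loss()
    for _ in range(epochs):
        pred = model(images).squeeze(-1)
        if (pred - user_scores).abs().max() <= tolerance:  # converged for this user
            break
        loss = loss_fn(pred, user_scores)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return model.state_dict()   # the personalized model parameters
```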
  • the model offline training module 405 collects pre-training model parameters, personalized annotation data, trains and adjusts the specified model, and finally outputs personalized model parameters.
  • the step 301 is to collect the image frame of the person, which may be specifically implemented as step 3011:
  • Step 3011 Acquire an image stream, and perform sampling from the image stream to obtain a character image frame.
  • In the actual operation process, the terminal may sample the image stream collected by the camera to obtain a certain number of person image frames, and process these frames separately.
  • The collected image stream is what is actually output to the user, but in the presented stream, person image frames with the evaluation result and the makeup suggestion superimposed are included.
  • The currently displayed evaluation result may be directly replaced when a new evaluation result appears, thereby ensuring that an evaluation result is always present in the image stream presented to the user.
  • Makeup suggestions can use the same output method. In other words, a person image frame output at a given moment may not itself have been evaluated or matched with suggestions; when displayed, it still carries the evaluation result and the corresponding makeup suggestion contained in a previous frame.
  • the process of processing the image of the person image obtained by the subsequent sampling may refer to the image of the person image obtained by the previous sampling adjacent to the image frame of the character obtained by the subsequent sampling.
  • the face recognition result of the person image frame obtained by the previous sampling may be applied to the character image frame obtained by the subsequent sampling. It should be noted that, in a character image frame obtained by sampling a few times in a normal manner, the facial features located on the central axis of the face do not change greatly. Therefore, the method of directly using the previous processing result is often used. The accuracy of the process can be guaranteed.
  • Alternatively, the previous processing result may be fine-tuned to obtain a result better suited to the current frame, reducing resource consumption while keeping the results accurate; a hedged sketch of this follows.
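  • The sketch below caches the previous face-detection result and re-runs full detection only periodically; the Haar-cascade detector and the re-detection period are assumptions for illustration, not the patent's prescribed method.

```python
# Sketch: reuse the face-recognition result of the previously sampled
# frame for adjacent frames, re-running full detection only every
# REDETECT_EVERY samples. This trades a little accuracy for lower
# resource consumption; the Haar cascade and the period are assumptions.
import cv2

REDETECT_EVERY = 10

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cached_faces = None


def faces_for_frame(frame, sample_index):
    """Return face boxes, reusing the cached result between detections."""
    global cached_faces
    if cached_faces is None or sample_index % REDETECT_EVERY == 0:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        cached_faces = detector.detectMultiScale(gray, 1.1, 5)
    return cached_faces
```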
  • Furthermore, the facial makeup evaluation and suggestion functions implemented by the embodiments of the present invention can be extended. If the above functions are implemented as software in the terminal, the software can serve as a mobile Internet traffic portal, connecting to remote servers to provide various additional services, such as personalized data analysis based on the user's facial features and personal preferences.
  • For example, a specific cosmetic product may be recommended to the user for the makeup defect reflected in the current evaluation result, helping the user overcome that defect; a toy lookup of this kind is sketched below.
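  • For instance, a simple defect-to-product lookup could back such a recommendation service; every defect label and product entry below is a hypothetical placeholder.

```python
# Toy sketch of the extension service: map a makeup defect named in an
# evaluation result to candidate cosmetic products. Every defect label
# and product below is a hypothetical placeholder, not data from the
# patent or any real catalog.
DEFECT_TO_PRODUCTS = {
    "uneven foundation": ["hydrating primer", "blending sponge"],
    "faded lip color": ["long-wear lipstick"],
    "smudged eyeliner": ["waterproof eyeliner"],
}


def recommend_products(evaluation_defects):
    """Return product suggestions for the defects in an evaluation result."""
    suggestions = []
    for defect in evaluation_defects:
        suggestions.extend(DEFECT_TO_PRODUCTS.get(defect, []))
    return suggestions


print(recommend_products(["uneven foundation", "smudged eyeliner"]))
```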
  • Correspondingly, a device for evaluating face makeup may be provided; in order to realize the above functions, the device includes hardware structures and/or software modules corresponding to the respective functions.
  • The present invention can be implemented in hardware, or in a combination of hardware and computer software, in conjunction with the units and algorithm steps of the examples described in the embodiments disclosed herein. Whether a function is performed by hardware or by computer software driving hardware depends on the specific application and the design constraints of the solution. A person skilled in the art may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of the present invention.
  • The embodiment of the present invention may divide the device for evaluating face makeup into function modules according to the above method examples.
  • For example, each function module may be defined to correspond to a single function, or two or more functions may be integrated into one processing module.
  • The above integrated module can be implemented in the form of hardware or in the form of a software function module. It should be noted that the division of modules in the embodiment of the present invention is schematic and is merely a logical function division; an actual implementation may use another division manner.
  • The device 10 for evaluating face makeup includes an acquisition module 11, an evaluation module 12, a display module 13, a search module 14, an obtaining module 15, a training module 16, and an adjustment module 17.
  • The acquisition module 11 is configured to support the device 10 for evaluating face makeup in performing step 301 in FIG. 4, FIG. 7, FIG. 11, FIG. 12, and FIG. 16, and step 3011 in FIG. 19; the evaluation module 12 is configured to support the device 10 in performing step 302 in FIG. 4, FIG. 11, FIG. 12, FIG. 16, and FIG. 19, and step 3021 in FIG. 7; the display module 13 is configured to support the device 10 in performing step 303 in FIG. 4, FIG. 7, and FIG. 12; the search module 14 is configured to support the device 10 in performing step 501 in FIG. 11; the obtaining module 15 is configured to support the device 10 in performing step 602 and step 603 in FIG. 16; the training module 16 is configured to support the device 10 in performing the training process in step 604 of FIG. 16; and the adjustment module 17 is configured to support the device 10 in performing the adjustment process in step 604 of FIG. 16.
  • The evaluation module 12, the search module 14, the training module 16, and the adjustment module 17 may be integrated into the processing module 20; in that case, the functions that the evaluation module 12, the search module 14, the training module 16, and the adjustment module 17 can implement are implemented by the processing module 20.
  • Besides being deployed separately, the acquisition module 11, the display module 13, and the obtaining module 15 may be integrated into the communication module 21.
  • In that case, the communication module 21 implements the functions that the acquisition module 11, the display module 13, and the obtaining module 15 can implement.
  • the communication module 21 is also used to support communication between the terminal and other devices.
  • the device 10 for evaluating facial makeup may further be provided with a storage module 18 for storing program codes and data of the terminal.
  • The processing module 20 can be implemented as a processor or a controller, such as a CPU, a general-purpose processor, a DSP, an ASIC, an FPGA or another programmable logic device, a transistor logic device, a hardware component, or any combination thereof, and can implement or execute the various illustrative logical blocks, modules, and circuits described in connection with the present disclosure.
  • The processor may also be a combination implementing computing functions, for example, a combination of one or more microprocessors, or a combination of a DSP and a microprocessor.
  • the communication module 21 can be implemented as a transceiver, a transceiver circuit, a communication interface, or the like.
  • the storage module 18 can be implemented as a memory.
  • The apparatus 30 for evaluating facial makeup includes a processor 31, a transceiver 32, a memory 33, and a bus 34.
  • The processor 31, the transceiver 32, and the memory 33 are connected to one another by the bus 34; the bus 34 may be a PCI bus, an EISA bus, or the like.
  • the bus can be divided into an address bus, a data bus, a control bus, and the like. For ease of representation, only one thick line is shown in FIG. 21, but it does not mean that there is only one bus or one type of bus.
  • The steps of a method or algorithm described in connection with the present disclosure may be implemented in hardware, or by a processor executing software instructions.
  • The software instructions may consist of corresponding software modules, which may be stored in a RAM, a flash memory, a ROM, an EPROM, an EEPROM, a register, a hard disk, a removable hard disk, a CD-ROM, or any other form of storage medium known in the art.
  • An exemplary storage medium is coupled to the processor to enable the processor to read information from, and write information to, the storage medium.
  • the storage medium can also be an integral part of the processor.
  • the processor and the storage medium may be deployed in the same device, or the processor and the storage medium may be deployed as separate components in different devices.
  • the functions described in the embodiments of the present invention may be implemented in hardware, software, firmware, or any combination thereof.
  • the functions may be stored in a computer readable medium or transmitted as one or more instructions or code on a computer readable medium.
  • Computer readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one location to another.
  • a storage medium may be any available media that can be accessed by a general purpose or special purpose computer.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The present invention relates to the technical field of image processing, and in particular to a method and a device for evaluating facial makeup, designed to solve the problem that a user cannot apply, to an actual makeup process, a processing result presented in an effect image output by a terminal. The method comprises the steps of: collecting a character image frame (301); evaluating, according to a specified model, an avatar area in the character image frame to obtain an evaluation result of the avatar area (302); and displaying the character image frame, and the evaluation result and/or a makeup suggestion corresponding to the evaluation result (303), wherein a model parameter of the specified model is a personalized model parameter obtained for a current user by adjusting an initial model parameter. The method and the device for evaluating facial makeup are applicable to a terminal.
PCT/CN2017/085980 2017-05-25 2017-05-25 Procédé et dispositif d'évaluation de maquillage de visage Ceased WO2018214115A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2017/085980 WO2018214115A1 (fr) 2017-05-25 2017-05-25 Procédé et dispositif d'évaluation de maquillage de visage
CN201780091213.1A CN110663063B (zh) 2017-05-25 2017-05-25 一种评价脸妆的方法及装置

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/085980 WO2018214115A1 (fr) 2017-05-25 2017-05-25 Procédé et dispositif d'évaluation de maquillage de visage

Publications (1)

Publication Number Publication Date
WO2018214115A1 true WO2018214115A1 (fr) 2018-11-29

Family

ID=64395175

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/085980 Ceased WO2018214115A1 (fr) 2017-05-25 2017-05-25 Procédé et dispositif d'évaluation de maquillage de visage

Country Status (2)

Country Link
CN (1) CN110663063B (fr)
WO (1) WO2018214115A1 (fr)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111369559A (zh) * 2020-04-16 2020-07-03 福州海豚世纪科技有限公司 Makeup evaluation method, apparatus, makeup mirror, and storage medium
CN113837020B (zh) * 2021-08-31 2024-02-02 北京新氧科技有限公司 Makeup progress detection method, apparatus, device, and storage medium

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6128309B2 (ja) * 2013-02-01 2017-05-17 パナソニックIpマネジメント株式会社 Makeup assistance device, makeup assistance method, and makeup assistance program
US10321747B2 (en) * 2013-02-01 2019-06-18 Panasonic Intellectual Property Management Co., Ltd. Makeup assistance device, makeup assistance system, makeup assistance method, and makeup assistance program
CN103995911A (zh) * 2013-02-15 2014-08-20 北京银万特科技有限公司 Beauty matching method and system based on an intelligent information terminal
CN103246878A (zh) * 2013-05-13 2013-08-14 苏州福丰科技有限公司 Makeup try-on system based on face recognition and makeup try-on method thereof
CN104951770B (zh) * 2015-07-02 2018-09-04 广东欧珀移动通信有限公司 Method for constructing a face image database, application method, and corresponding apparatus
CN106339658A (zh) * 2015-07-09 2017-01-18 阿里巴巴集团控股有限公司 Data processing method and apparatus
CN106709411A (zh) * 2015-11-17 2017-05-24 腾讯科技(深圳)有限公司 Method and apparatus for obtaining a facial attractiveness score
CN106407423A (zh) * 2016-09-26 2017-02-15 珠海格力电器股份有限公司 Makeup guidance method and apparatus based on a terminal device, and terminal device

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101350102A (zh) * 2008-08-29 2009-01-21 北京中星微电子有限公司 Makeup assistance method and system
CN202588699U (zh) * 2012-04-27 2012-12-12 上海申视汽车新技术有限公司 Intelligent cosmetic case
US20140016823A1 (en) * 2012-07-12 2014-01-16 Cywee Group Limited Method of virtual makeup achieved by facial tracking
CN106293362A (zh) * 2015-05-20 2017-01-04 福建省辉锐电子技术有限公司 Guided makeup device
CN104834800A (zh) * 2015-06-03 2015-08-12 上海斐讯数据通信技术有限公司 Beauty makeup method, system, and device

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11253045B2 (en) 2019-07-18 2022-02-22 Perfect Mobile Corp. Systems and methods for recommendation of makeup effects based on makeup trends and facial analysis
CN110428368A (zh) * 2019-07-31 2019-11-08 北京金山云网络技术有限公司 Algorithm evaluation method and apparatus, electronic device, and readable storage medium
CN111539882A (zh) * 2020-04-17 2020-08-14 华为技术有限公司 Interaction method for assisting makeup, terminal, and computer storage medium
CN112381928A (zh) * 2020-11-19 2021-02-19 北京百度网讯科技有限公司 Image display method, apparatus, device, and storage medium
CN113269719A (zh) * 2021-04-16 2021-08-17 北京百度网讯科技有限公司 Model training and image processing method, apparatus, device, and storage medium
CN113269719B (zh) * 2021-04-16 2024-11-05 北京百度网讯科技有限公司 Model training and image processing method, apparatus, device, and storage medium

Also Published As

Publication number Publication date
CN110663063B (zh) 2022-04-12
CN110663063A (zh) 2020-01-07

Similar Documents

Publication Publication Date Title
WO2018214115A1 Method and device for evaluating facial makeup
CN108229415B Information recommendation method and apparatus, electronic device, and computer-readable storage medium
CN108009521B Face image matching method and apparatus, terminal, and storage medium
CN110443769B Image processing method, image processing apparatus, and terminal device
CN110544488B Method and apparatus for separating multi-speaker speech
CN110110118B Makeup recommendation method and apparatus, storage medium, and mobile terminal
US20200412975A1 Content capture with audio input feedback
CN111985265A Image processing method and apparatus
WO2021203118A1 Identification of physical products for augmented reality experiences in a messaging system
US20140062861A1 Gesture recognition apparatus, control method thereof, display instrument, and computer readable medium
WO2019223421A1 Method and device for generating a cartoon face image, and computer storage medium
CN108985220B Face image processing method and apparatus, and storage medium
CN108198130B Image processing method and apparatus, storage medium, and electronic device
CN105303149B Method and apparatus for displaying character images
CN108875594B Face image processing method and apparatus, and storage medium
CN108550117A Image processing method and apparatus, and terminal device
CN111047511A Image processing method and electronic device
CN108712603A Image processing method and mobile terminal
CN109272473B Image processing method and mobile terminal
CN111553854A Image processing method and electronic device
CN109819167B Image processing method and apparatus, and mobile terminal
CN108021905A Picture processing method and apparatus, terminal device, and storage medium
US20250039537A1 Screenshot processing method, electronic device, and computer readable medium
CN107948503A Photographing method, photographing apparatus, and mobile terminal
CN108681398A Virtual-human-based visual interaction method and system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17910535

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17910535

Country of ref document: EP

Kind code of ref document: A1