HK40083062A - Data processing method, device, and readable storage medium
Description
Technical Field
The present application relates to the field of computer technologies, and in particular, to a data processing method, device, and readable storage medium.
Background
In general, a user needs to perform decoration (e.g., dressing, makeup, accessory matching, etc.) according to his/her preference before going out.
However, not all users have sufficient knowledge of decoration principles (such as dressing principles or cosmetic application principles) and decoration effects, and some users lack detailed knowledge of decoration manners, decoration skills, and decoration principles, so the effect obtained after decoration processing is often not ideal (for example, it may not suit the user's own proportions or the environment). To obtain a better decoration effect, a user typically searches a large number of decoration modes and decoration tutorials manually by keyword. However, among the decoration modes found this way, some suit the user and others do not, so the user must spend considerable time and energy screening them to determine a suitable decoration mode, which is very inefficient. Meanwhile, a decoration mode recommended on the basis of keywords alone may not match the user's requirements, the resulting decoration effect may fall short of the user's expectations, and the recommendation precision of such decoration modes is therefore very low.
Disclosure of Invention
The embodiments of the present application provide a data processing method, a data processing apparatus, a computer device, and a readable storage medium, which can improve the recommendation efficiency and recommendation precision of decoration modes.
An embodiment of the present application provides a data processing method, including:
acquiring decoration configuration information associated with a target object;
displaying, in a decoration application, N virtual decoration objects matched with the target object, where the target object includes a key part, each virtual decoration object is obtained by virtually decorating the key part based on a virtual decoration mode, and the virtual decoration mode adopted by each virtual decoration object is associated with the decoration configuration information;
in response to an object selection operation on the N virtual decoration objects, displaying a decoration flow corresponding to a target virtual decoration object selected from the N virtual decoration objects, where the decoration flow is used to guide the target object in real decoration processing.
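For ease of understanding, the following is a minimal, self-contained Python sketch of these three steps. Every identifier and data shape in it (acquire_decoration_config, recommend_virtual_objects, and so on) is an illustrative assumption rather than something defined by the present application:

```python
from dataclasses import dataclass

@dataclass
class VirtualDecorationObject:
    mode_id: str   # the virtual decoration mode applied to the key part
    preview: str   # stand-in for the rendered key-part image

def acquire_decoration_config(target: str) -> dict:
    # Step 1: decoration configuration information associated with the target object.
    return {"scene": "outdoor, work", "required_type": "light", "key_part": target}

def recommend_virtual_objects(config: dict, n: int = 3) -> list[VirtualDecorationObject]:
    # Step 2: N virtual decoration objects, each produced by a virtual decoration
    # mode associated with the configuration information.
    return [VirtualDecorationObject(f"mode-{i}", f"{config['key_part']}+mode-{i}")
            for i in range(n)]

def decoration_flow(obj: VirtualDecorationObject) -> list[str]:
    # Step 3: the flow that guides the real decoration processing.
    return [f"step 1 of {obj.mode_id}", f"step 2 of {obj.mode_id}"]

config = acquire_decoration_config("face")
candidates = recommend_virtual_objects(config)
selected = candidates[0]   # stands in for the object selection operation
print("\n".join(decoration_flow(selected)))
```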
An embodiment of the present application provides a data processing apparatus, including:
an information acquisition module, configured to acquire decoration configuration information associated with a target object;
a decoration object display module, configured to display, in a decoration application, N virtual decoration objects matched with the target object, where the target object includes a key part, each virtual decoration object is obtained by virtually decorating the key part based on a virtual decoration mode, and the virtual decoration mode adopted by each virtual decoration object is associated with the decoration configuration information;
a flow display module, configured to display, in response to an object selection operation on the N virtual decoration objects, a decoration flow corresponding to a target virtual decoration object selected from the N virtual decoration objects, where the decoration flow is used to guide the target object in real decoration processing.
In one embodiment, the decoration configuration information includes activity scenario information;
the information acquisition module includes:
a position information display unit, configured to display position information associated with the target object in response to a trigger operation on an application start control of the decoration application;
and a scene information display unit, configured to display activity scene information associated with the target object in response to a trigger operation on the position information.
In one embodiment, the scene information display unit includes:
a scene interface display subunit, configured to display a scene type selection interface in response to a trigger operation on the position information;
a target type display subunit, configured to display, in response to a type selection operation on the scene type selection interface, the target scene type selected by the type selection operation;
and a scene information display subunit, configured to display, in response to a trigger operation on the target scene type, the activity scene information associated with the target scene type and the position information.
In one embodiment, the decoration configuration information includes a type of decoration required;
the information acquisition module includes:
a decoration type interface unit, configured to display a decoration type selection interface in response to a trigger operation on an application start control of the decoration application;
and a requirement type display unit, configured to display, in response to a trigger operation on the decoration type selection interface, the required decoration type associated with the target object.
In one embodiment, the decoration type selection interface includes historical decoration types;
and the requirement type display unit is further specifically configured to determine, in response to a trigger operation on the historical decoration type, the historical decoration type as the required decoration type, and display the required decoration type.
In an embodiment, the requirement type display unit is further specifically configured to display the requirement decoration type selected by the selection operation in response to the selection operation for the decoration type in the decoration type selection interface.
In one embodiment, the decoration object display module is further specifically configured to display, in response to the confirmation operation for the decoration configuration information, N virtual decoration objects matching the target object in the decoration application; or, displaying the decoration configuration information, and displaying N virtual decoration objects matched with the target object in the decoration application when the display duration of the decoration configuration information meets the display condition.
In one embodiment, the decoration object display module includes:
the candidate mode acquisition unit is used for acquiring N candidate virtual decoration modes matched with the decoration configuration information;
the virtual decoration processing unit is used for respectively carrying out virtual decoration processing on the key parts according to the virtual decoration parameters of each candidate virtual decoration mode in the N candidate virtual decoration modes to obtain N virtual decoration objects;
and the object display unit is used for displaying the N virtual decoration objects in the decoration application.
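For ease of understanding, the following is a toy Python sketch of this per-mode processing: the same key part is processed once per candidate virtual decoration mode, yielding one virtual decoration object per mode. The parameter names and the pixel-blending "decoration" are illustrative assumptions only:

```python
def apply_mode(key_part_pixels: list[float], params: dict) -> list[float]:
    # Toy "virtual decoration": blend a tone value into the key-part pixels
    # according to this mode's virtual decoration parameters.
    tone, opacity = params["tone"], params["opacity"]
    return [p * (1 - opacity) + tone * opacity for p in key_part_pixels]

key_part = [0.60, 0.62, 0.58]                         # stand-in for key-part pixel data
candidate_modes = [{"tone": 0.80, "opacity": 0.20},   # N candidate virtual modes,
                   {"tone": 0.70, "opacity": 0.35}]   # each with its own parameters

# One virtual decoration object per candidate virtual decoration mode.
virtual_objects = [apply_mode(key_part, mode) for mode in candidate_modes]
print(virtual_objects)
```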
In one embodiment, the decoration configuration information includes activity scene information, a required decoration type, and key part attribute information of the key part; the required decoration type includes a required hierarchy type;
a candidate mode acquisition unit comprising:
a virtual parameter acquisition subunit, configured to acquire a virtual hierarchy type and a virtual isolation efficacy level adapted to the activity scene information;
a mode determining subunit, configured to determine, when the virtual hierarchy type matches the required hierarchy type, N candidate virtual decoration modes according to the virtual hierarchy type, the virtual isolation efficacy level, the remaining required decoration types, and the key part attribute information, where the remaining required decoration types are the decoration types in the required decoration types other than the required hierarchy type;
and the mode determining subunit is further configured to, when the virtual hierarchy type does not match the required hierarchy type, display hierarchy selection prompt information for the virtual hierarchy type and the required hierarchy type, and determine the N candidate virtual decoration modes according to the hierarchy selection result of the hierarchy selection prompt information, the virtual isolation efficacy level, the remaining required decoration types, and the key part attribute information.
In one embodiment, the mode determination subunit is further specifically configured to obtain a decoration database; the decoration database comprises M virtual decoration modes and virtual decoration parameters corresponding to the M virtual decoration modes respectively; m is a positive integer greater than N;
the mode determination subunit is further specifically configured to determine, among the M virtual decoration parameters, the virtual decoration parameters whose hierarchy type is the virtual hierarchy type and whose isolation efficacy level is the virtual isolation efficacy level as first candidate decoration parameters;
the mode determination subunit is further specifically configured to obtain a second candidate decoration parameter matched with the remaining required decoration types from the first candidate decoration parameters;
the mode determination subunit is further specifically configured to acquire a third candidate decoration parameter matched with the key part attribute information from the second candidate decoration parameters;
and the mode determining subunit is further specifically configured to determine the virtual decoration mode corresponding to the third candidate decoration parameter as N candidate virtual decoration modes.
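For ease of understanding, the following is a minimal Python sketch of this three-stage narrowing from the decoration database to the N candidate virtual decoration modes. The field names and example records are illustrative assumptions:

```python
# A toy decoration database of M virtual decoration modes and their parameters.
database = [
    {"mode": "m1", "hierarchy": "light", "isolation": 2, "types": {"sweet"}, "fits": {"round_face"}},
    {"mode": "m2", "hierarchy": "light", "isolation": 2, "types": {"cool"},  "fits": {"round_face"}},
    {"mode": "m3", "hierarchy": "thick", "isolation": 1, "types": {"sweet"}, "fits": {"oval_face"}},
]

virtual_hierarchy, virtual_isolation = "light", 2
remaining_required = {"sweet"}      # remaining required decoration types
key_part_attr = "round_face"        # key part attribute information

# First candidates: hierarchy type and isolation efficacy level both match.
first = [p for p in database
         if p["hierarchy"] == virtual_hierarchy and p["isolation"] == virtual_isolation]
# Second candidates: also match the remaining required decoration types.
second = [p for p in first if remaining_required <= p["types"]]
# Third candidates: also match the key part attribute information.
third = [p for p in second if key_part_attr in p["fits"]]

candidate_modes = [p["mode"] for p in third]   # the N candidate virtual decoration modes
print(candidate_modes)                         # -> ['m1']
```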
In one embodiment, the key part attribute information includes sub-part feature information and a sub-part proportion, where the sub-part feature information is characteristic information of a sub-part in the key part, and the sub-part proportion is the proportion of the sub-part within the key part; the second candidate decoration parameters include a second candidate decoration parameter S_k, where k is a positive integer;
the mode determination subunit is further specifically configured to determine a first adaptation rate between the second candidate decoration parameter S_k and the sub-part proportion, and a second adaptation rate between the second candidate decoration parameter S_k and the sub-part feature information;
the mode determination subunit is further specifically configured to determine, according to the first adaptation rate and the second adaptation rate, a total adaptation rate between the second candidate decoration parameter S_k and the key part attribute information;
and the mode determination subunit is further specifically configured to determine the second candidate decoration parameter S_k as a third candidate decoration parameter if the total adaptation rate is greater than an adaptation threshold.
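The combination of the first and second adaptation rates into a total adaptation rate is not prescribed above; the Python sketch below assumes a simple weighted average and an assumed threshold, for illustration only:

```python
def total_adaptation(first_rate: float, second_rate: float,
                     w1: float = 0.5, w2: float = 0.5) -> float:
    # One simple way to combine the two rates: a weighted average (assumption).
    return w1 * first_rate + w2 * second_rate

# Each second candidate decoration parameter S_k, scored against the sub-part
# proportion (first adaptation rate) and the sub-part feature information
# (second adaptation rate).
second_candidates = [
    {"name": "S_1", "proportion_fit": 0.92, "feature_fit": 0.85},
    {"name": "S_2", "proportion_fit": 0.40, "feature_fit": 0.70},
]

ADAPT_THRESHOLD = 0.8   # assumed adaptation threshold
third_candidates = [
    s["name"] for s in second_candidates
    if total_adaptation(s["proportion_fit"], s["feature_fit"]) > ADAPT_THRESHOLD
]
print(third_candidates)   # -> ['S_1']
```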
In one embodiment, the mode determination subunit is further specifically configured to determine, if the hierarchy selection result is the virtual hierarchy type, N candidate virtual decoration modes according to the virtual hierarchy type, the virtual isolation efficacy level, the remaining required decoration types, and the key part attribute information;
and the mode determining subunit is further specifically configured to determine, if the hierarchy selection result is the required hierarchy type, N candidate virtual decoration modes according to the virtual isolation efficacy level, the required decoration type, and the key part attribute information.
In one embodiment, the flow display module comprises:
a guide interface display unit, configured to display a decoration flow guide interface in response to an object selection operation on the N virtual decoration objects;
an interface area display unit, configured to display, in an object display area of the decoration flow guide interface, the target virtual decoration object selected from the N virtual decoration objects and the key part;
and the interface area display unit is also used for displaying the decoration process corresponding to the target virtual decoration object in the process display area of the decoration process guide interface.
In one embodiment, the object display area includes a decoration object display area and a key part display area;
the interface area display unit is specifically used for displaying the target virtual decoration object in the decoration object display area;
and the interface area display unit is also specifically used for displaying the key part in the key part display area.
In one embodiment, the decoration flow includes decoration audio data corresponding to the decoration text flow;
the interface area display unit is also specifically used for displaying a decoration text flow in the flow display area of the decoration flow guide interface;
and the interface area display unit is also specifically used for synchronously outputting decoration audio data while displaying the decoration text flow.
In one embodiment, the decoration text flow includes a decoration text step T_a, where a is a positive integer;
the interface area display unit is further specifically configured to obtain a text timestamp of the decoration text step T_a;
the interface area display unit is further specifically configured to traverse an audio timestamp set corresponding to the decoration audio data, where the audio timestamp set includes one or more audio timestamps, and each audio timestamp corresponds to one piece of sub-audio data in the decoration audio data;
the interface area display unit is further specifically configured to determine, in the audio timestamp set, an audio timestamp having a time alignment relationship with the text timestamp as a target audio timestamp;
and the interface area display unit is further specifically configured to determine the sub-audio data corresponding to the target audio timestamp as target sub-audio data, and synchronously output the target sub-audio data while displaying the decoration text step T_a.
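For ease of understanding, the following Python sketch shows one way the time alignment relationship could be evaluated; the tolerance value and data layout are illustrative assumptions:

```python
def find_target_audio(text_ts: float, audio_timestamps: list[float],
                      tolerance: float = 0.05) -> float | None:
    # Traverse the audio timestamp set for one with a time alignment
    # relationship (here: within a tolerance) to the text timestamp.
    for audio_ts in audio_timestamps:
        if abs(audio_ts - text_ts) <= tolerance:
            return audio_ts
    return None

audio_set = [0.0, 3.2, 7.5, 12.1]   # one timestamp per piece of sub-audio data
text_step_ts = 7.49                 # text timestamp of decoration text step T_a

target = find_target_audio(text_step_ts, audio_set)
print(f"display step T_a and synchronously play the sub-audio at {target}s")
```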
In one embodiment, the data processing apparatus further comprises:
a difference area determining module, configured to acquire, during the real decoration processing of the target object, a real decoration object obtained by the real decoration processing;
the difference area determining module is also used for comparing the real decorative object with the target virtual decorative object to determine a difference decorative area;
the correction information display module is used for generating correction decoration prompt information according to the difference decoration area and displaying the correction decoration prompt information; the correction decoration prompting information is used for prompting the target object to perform correction decoration processing in the difference decoration area.
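For ease of understanding, the following Python sketch compares the two objects region by region and emits a correction prompt wherever the difference exceeds a threshold; the region names and the difference metric are illustrative assumptions:

```python
def diff_regions(real: dict[str, float], target: dict[str, float],
                 threshold: float = 0.15) -> list[str]:
    # Regions where the real effect deviates from the target virtual effect.
    return [region for region in target
            if abs(real.get(region, 0.0) - target[region]) > threshold]

target_effect = {"lips": 0.80, "cheeks": 0.50, "eyes": 0.60}  # per-region intensity
real_effect = {"lips": 0.78, "cheeks": 0.20, "eyes": 0.58}

for region in diff_regions(real_effect, target_effect):
    # Correction decoration prompt information for each difference region.
    print(f"Correction prompt: touch up the {region} area")   # -> cheeks
```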
An embodiment of the present application provides a computer device, including: a processor and a memory;
the memory stores a computer program that, when executed by the processor, causes the processor to perform the method in the embodiments of the present application.
An aspect of the embodiments of the present application provides a computer-readable storage medium, in which a computer program is stored, where the computer program includes program instructions, and the program instructions, when executed by a processor, perform the method in the embodiments of the present application.
In one aspect of the application, a computer program product or computer program is provided, the computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to perform the method provided by one aspect of the embodiments of the present application.
In the embodiments of the present application, a decoration application is provided. Decoration configuration information associated with a target object may be obtained, and N virtual decoration objects (obtained by virtually decorating a key part in different virtual decoration manners) matching the target object are displayed in the decoration application. The target object may select among the N virtual decoration objects; after a target virtual decoration object is selected, the decoration application may display the decoration flow corresponding to it, and the target object may then perform real decoration processing with the target virtual decoration object as a reference and under the guidance of the corresponding decoration flow. It should be understood that, by obtaining the decoration configuration information, each of the N virtual decoration objects determined by the decoration application can be adapted to that configuration information (i.e., matched to the target object). The target object may select any virtual decoration object as the target virtual decoration object and, with the corresponding decoration flow displayed, can perform real decoration processing more accurately and in more detail, having both a reference effect and a guiding flow. That is to say, according to the present application, different virtual decoration objects that match the target object and are virtually decorated in different virtual decoration modes can be automatically recommended to the target object according to the decoration configuration information, improving both recommendation precision and recommendation efficiency.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and that those skilled in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a diagram of a network architecture provided in an embodiment of the present application;
Fig. 2a is a schematic view of a scene for acquiring decoration configuration information according to an embodiment of the present application;
Fig. 2b is a schematic view of another scene for acquiring decoration configuration information according to an embodiment of the present application;
Fig. 3a is a schematic view of a scene displaying a virtual decoration object according to an embodiment of the present application;
Fig. 3b is a schematic view of a scene displaying a decoration flow according to an embodiment of the present application;
Fig. 4 is a schematic flowchart of a data processing method according to an embodiment of the present application;
Fig. 5a is a schematic view of a scene displaying a decoration flow according to an embodiment of the present application;
Fig. 5b is a schematic view of a scene displaying a real decoration effect according to an embodiment of the present application;
Fig. 6 is a schematic flowchart of a data processing method according to an embodiment of the present application;
Fig. 7 is a schematic flowchart of a data processing method according to an embodiment of the present application;
Fig. 8 is a schematic view of a scene for determining a virtual decoration object according to an embodiment of the present application;
Fig. 9 is a schematic structural diagram of a data processing apparatus according to an embodiment of the present application;
Fig. 10 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the protection scope of the present application.
Referring to Fig. 1, Fig. 1 is a diagram of a network architecture according to an embodiment of the present application. As shown in Fig. 1, the network architecture may include a service server 1000 and a terminal device cluster, and the terminal device cluster may include one or more terminal devices; the number of terminal devices is not limited here. As shown in Fig. 1, the plurality of terminal devices may include a terminal device 100a, a terminal device 100b, a terminal device 100c, …, and a terminal device 100n, each of which may establish a network connection with the service server 1000, so that each terminal device can exchange data with the service server 1000 through the network connection.
It is understood that each terminal device shown in Fig. 1 may be installed with a target application, and when the target application runs in a terminal device, it may exchange data with the service server 1000 shown in Fig. 1, so that the service server 1000 can receive service data from each terminal device. The target application may include an application with a function of displaying data information such as text, images, audio, and video. For example, the application may be a decoration application with real-time three-dimensional (3D) modeling capabilities (e.g., a smart makeup application or a smart clothing-matching application); it may also be an application for promoting decoration products (such as an application promoting clothing or beauty products); it may also be an application for image processing (e.g., a beauty/retouching application); and so on, without exhaustive illustration here. The service server 1000 in the present application may obtain service data through these applications; for example, the service data may be decoration configuration information obtained through a user's bound account (a bound account refers to an account that the user binds in these applications; the user may log in to the applications, upload data, and obtain data through the corresponding bound account, and the service server may likewise obtain the user's login state, receive uploaded data, and send data to the user through the bound account).
The decoration configuration information may refer to configuration information corresponding to decoration, and may include the user's activity scene information, the user's required decoration type, the user's key part attribute information, and the like. The activity scene information may refer to information about the user's activity at the destination location (e.g., destination location information; whether the activity takes place indoors or outdoors; and, for an outdoor activity, the outdoor temperature, humidity, rainfall, and ultraviolet intensity, or, for an indoor activity, the indoor temperature, humidity, and the like). The required decoration type may refer to the user's preference information for the decoration; for example, when the decoration is makeup, the preference information may include a required makeup hierarchy type (light and thin, or thick), a makeup style type (e.g., a sweet type, a cool type, a ladylike type), and so on. The key part attribute information may refer to attribute information of the user's key part, where a key part may be a body part of the user (e.g., the face, an arm, a leg, or the head); for example, when the key part is the user's face, the key part attribute information may include the facial proportions of the five sense organs, skin attributes (e.g., roughness, pore size, degree of speckling), and skin color attributes (e.g., dull or fair).
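For ease of understanding, one possible shape for this configuration information is sketched below in Python; all field names and example values are illustrative assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class ActivityScene:
    destination: str                  # destination location information
    activity_type: str                # e.g. work, dating, sports
    state: str                        # "indoor" or "outdoor"
    outdoor_env: dict = field(default_factory=dict)  # temperature, humidity, UV, ...

@dataclass
class DecorationConfig:
    scene: ActivityScene
    required_type: dict               # preference info, e.g. hierarchy and effect types
    key_part_attrs: dict              # e.g. facial proportions, skin attributes

config = DecorationConfig(
    scene=ActivityScene("aa area of city a", "work", "outdoor",
                        {"temp_c": 25, "humidity": 0.76, "uv_index": 7}),
    required_type={"hierarchy": "light", "effect": "sweet"},
    key_part_attrs={"face": "round", "skin_tone": "fair"},
)
print(config.scene.state)   # -> outdoor
```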
The key portion attribute information may be determined according to a key portion image of the user, that is, the terminal device may acquire a key portion (e.g., face) image of the user (which may be data of the key portion of the user acquired through the camera module (or mirror)), the terminal device may send the key portion image to the service server 1000, and the service server 1000 may determine the key portion attribute information according to the key portion image. Further, the service server 1000 may determine, according to the obtained decoration configuration information, N (N may be a positive integer) virtual decoration objects (each virtual decoration object is obtained by virtually decorating a key part based on a virtual decoration manner, and if the decoration is makeup, the virtual decoration object may be an object obtained by performing a memorable virtual makeup processing on the key part in a virtual makeup manner); the service server 1000 may return the N virtual decoration objects to the terminal device corresponding to the user, where the terminal device may display the N virtual decoration objects, the user may select the N virtual decoration objects, and the user may select the N virtual decoration objects by a trigger operation on a display interface of the terminal device, where the trigger operation may include a contact operation such as a click or a long press, and may also include a non-contact operation such as a voice or a gesture, and the trigger operation is not limited here. It should be understood that the triggering operation may be referred to as an object selection operation, and the terminal device may obtain the target virtual decoration object selected by the user in response to the object selection operation by the user, and may send the target virtual decoration object to the service server 1000. For a specific implementation manner of obtaining and displaying N virtual decoration objects, reference may be made to the description in the embodiment corresponding to fig. 4 in the following.
Further, the service server 1000 may obtain the decoration flow corresponding to the target virtual decoration object (for example, when the decoration is makeup, the decoration flow may be a makeup flow) and return it to the terminal device, so that the terminal device can display the decoration flow corresponding to the target virtual decoration object. The user can view the decoration flow on the display interface of the terminal device and perform real decoration processing (such as real makeup processing) with the target virtual decoration object as a reference and under the guidance of the decoration flow.
It should be understood that, by acquiring the user's activity scene information, required decoration type, and key part attribute information, virtual decoration objects that are adapted to the activity scene information, meet the required decoration type, and match the key part attribute information can be obtained for the user to select from. These virtual decoration objects can in fact be understood as the different decoration effects obtained after the target object (the mirror-image data corresponding to the user) is virtually decorated in different virtual decoration modes. After the user makes a selection, the target virtual decoration object and the decoration flow can be displayed synchronously, so that the user has both a reference object and a guiding flow during real decoration processing, and the user's real decoration effect can be closer to expectations and of higher quality.
In the embodiment of the present application, one terminal device may be selected from the plurality of terminal devices as a target terminal device. The terminal device may include, but is not limited to, an intelligent terminal carrying a multimedia data processing function (e.g., a video data playing function or a music data playing function), such as a smart cosmetic mirror, a smartphone, a tablet computer, a laptop computer, a desktop computer, a smart television, a smart speaker, a smart watch, or a smart in-vehicle terminal. For example, the terminal device 100a shown in Fig. 1 may serve as the target terminal device, and the target terminal device may be integrated with the target application; in this case, the target terminal device may exchange data with the service server 1000 through the target application.
It is understood that the method provided by the embodiment of the present application may be executed by a computer device, which includes but is not limited to a terminal device or a service server. The service server may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud service, a cloud database, cloud computing, cloud functions, cloud storage, network service, cloud communication, middleware service, domain name service, security service, CDN, big data and artificial intelligence platforms.
The terminal device and the service server may be directly or indirectly connected through wired or wireless communication, and the present application is not limited herein.
For ease of understanding, please refer to Fig. 2a, which is a schematic view of a scene for acquiring decoration configuration information according to an embodiment of the present application. This scene takes, as an example, the case where the decoration configuration information includes activity scene information; that is, it is an example scene for acquiring activity scene information. The terminal device 100a shown in Fig. 2a may be the terminal device 100a in the terminal device cluster of the embodiment corresponding to Fig. 1. The decoration scene shown in Fig. 2a is a makeup scene, and the decoration application may be a makeup application.
As shown in Fig. 2a, an application start control 20a for the decoration application (a makeup application) may be displayed in the display interface of the terminal device 100a, and the user a may open the makeup application by a trigger operation on the application start control 20a, for example, by clicking it. After the user a clicks the application start control 20a, the terminal device 100a may respond to the click operation and display a position confirmation interface. As shown in Fig. 2a, current position prompt information may be displayed in the position confirmation interface (to prompt the user a about his or her current position); for example, the current position prompt information may read "the system detects that your current location is area ab of city a, which will be used as your destination location information". Meanwhile, the position confirmation interface may further include a confirmation control and a position modification control for the current position prompt information. The user a can confirm the current position information as the destination location information through the confirmation control, or input (or select) destination location information through the position modification control. It can be understood that a trigger operation on either the confirmation control or the position modification control is a trigger operation on the current position prompt information.
As shown in Fig. 2a, after the user a clicks the position modification control, the terminal device 100a may display a position input interface, in which the user a may input the destination location information "aa area of city a". Then, after the user a clicks the confirmation control in the position input interface (which can be understood as the user a generating a trigger operation on the destination location information), the terminal device 100a may display a control selection interface in response, in which a makeup preference selection control and an activity scene selection control may be displayed for the user to choose from. The user a may input makeup preference information through a trigger operation on the makeup preference selection control, or input information about the target activity scene through a trigger operation on the activity scene selection control; for example, by clicking the respective control.
For example, as shown in Fig. 2a, after the user a clicks the activity scene selection control, the terminal device 100a may display a scene type selection interface in response to this trigger operation on the control selection interface. In the scene type selection interface, the user a may input the type of activity to be performed at the destination arrival location (the destination location information, i.e., aa area of city a); the activity type may refer to the kind of activity to be carried out at the destination arrival location, and may include work, dating, sports, dinner parties, recording, and so on. For example, if the user a goes to the aa area of city a on a business trip, the activity at the destination arrival location is work and the activity type is work; if the user a goes there for a date, the activity type is dating. In other words, the destination arrival location can be understood as where the user's activity takes place, and the activity type as the purpose of that activity. In this scene, the activity type is input by way of selection as an example; the specific input manner is not limited to this.
As shown in Fig. 2a, in the scene type selection interface, the user a can also input the scene state type (or simply state type) of the activity at the destination arrival location. The state types may include an outdoor state type and an indoor state type; that is, the user a may indicate whether the activity will take place outdoors or indoors. The input manner for the activity scene state may include typing, voice, selection, progress bar dragging, and the like. For example, as shown in Fig. 2a, the terminal device 100a may display a progress bar for the activity scene state (spanning indoor and outdoor), and the user a may select the outdoor state by dragging the progress bar. It should be noted that, for the activity scene state progress bar shown in Fig. 2a, the center position may be regarded as an undefined scene state (or automatically treated as the outdoor state type); if the drag stop position of the user a is closer to the indoor end, the activity scene state may be considered the indoor state type, and if it is closer to the outdoor end, the outdoor state type.
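The slider semantics just described can be captured in a few lines; in the Python sketch below, the 0-to-1 scale and the default at the exact center are illustrative assumptions:

```python
def scene_state_type(position: float) -> str:
    # position 0.0 = the indoor end of the bar, 1.0 = the outdoor end.
    if position == 0.5:
        return "outdoor"   # exact center: undefined, defaulted to outdoor here
    return "indoor" if position < 0.5 else "outdoor"

print(scene_state_type(0.9))   # dragged close to the outdoor end -> "outdoor"
```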
In the scene type selection interface, the user a may also input the total length of time of the activity at the destination arrival location (i.e., the total activity duration) and the estimated sweating degree. The input manners for these may likewise include typing, voice, selection, progress bar dragging, and the like; for example, in Fig. 2a the total activity duration is input by selection and the estimated sweating degree by progress bar dragging. Further, the information composed of the activity type (e.g., work), the scene state type (the outdoor state type), the total activity duration (8 hours), and the estimated sweating degree type (e.g., a higher sweating degree type) selected by the user a may collectively serve as the target scene type of the user a. After finishing the selection, the user a may confirm the target scene type. The confirmation may proceed as follows: the user a may confirm the target scene type through a trigger operation on the text corresponding to the target scene type, or through a trigger operation on a blank area (an area without text) in the scene type selection interface; for example, after selecting the last item, the estimated sweating degree, the user a may confirm by clicking or long-pressing a blank area. Of course, the terminal device 100a may also display a confirmation control in the scene type selection interface (an integrated control whose triggering confirms the entire target scene type), so that the user a can confirm by triggering that control after the selection is complete. For example, as shown in Fig. 2a, the scene type selection interface further includes a completion control (which may be understood as a confirmation control), a refresh control, and a cancel control. Through a trigger operation on the completion control, the user a can confirm that the input (or selection) of the activity scene information is correct and proceed to the next step; through the refresh control, the user a can reset the currently input target scene type and input a new one; through the cancel control, the user a can cancel the input of the target scene type and return to the previous step or exit the makeup application. As shown in Fig. 2a, after the user a clicks the completion control, the terminal device 100a may respond to the trigger operation and acquire the target scene type input by the user a.
Further, the terminal device 100a may determine that the destination activity location of the user a belongs to the outdoor area based on the outdoor state type in the target scene type, and then the terminal device 100a may obtain the outdoor environment information (e.g., outdoor temperature, outdoor humidity, outdoor rainfall, outdoor ultraviolet intensity, etc.) corresponding to the destination location information (aa area of city a), and the terminal device 100a may determine the target scene type and the outdoor environment information together as the activity scene information associated with the user a.
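For ease of understanding, this conditional lookup-and-merge step is sketched below in Python; build_activity_scene and get_outdoor_env are hypothetical helpers standing in for an environment data service, not real APIs:

```python
def get_outdoor_env(destination: str) -> dict:
    # Hypothetical stand-in for a weather/environment service query.
    return {"temp_c": 25, "humidity": 0.76, "rain_mm": 0, "uv_index": 7}

def build_activity_scene(target_scene: dict, destination: str) -> dict:
    # Merge the user-selected target scene type with the destination's
    # outdoor environment information when the state type is outdoor.
    scene = dict(target_scene)
    if scene["state"] == "outdoor":
        scene["environment"] = get_outdoor_env(destination)
    return scene

print(build_activity_scene({"state": "outdoor", "activity": "work"},
                           "aa area of city a"))
```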
It should be noted that the triggering operation for each control in the present application may include a contact operation such as a click or a long press, or may also include a non-contact operation such as a voice or a gesture, and this is not limited here.
Further, please refer to Fig. 2b, which is a schematic view of another scene for acquiring decoration configuration information according to an embodiment of the present application. This scene takes, as an example, the case where the decoration configuration information includes the required decoration type; that is, it is an example scene for acquiring the required decoration type. As shown in Fig. 2b, an application start control 20a for the decoration application (a makeup application) may be displayed in the display interface of the terminal device 100a, and the user a may open the makeup application by a trigger operation on the application start control 20a, for example, by clicking it. After the user a clicks the application start control 20a, the terminal device 100a may respond to the click operation and display a control selection interface, in which a makeup preference selection control and an activity scene selection control may be displayed for the user to choose from. As shown in Fig. 2b, after the user a clicks the makeup preference selection control, the terminal device 100a may respond to this trigger operation and display a decoration type selection interface (which may be understood as a makeup preference selection interface), in which the user a may input (or select) the desired makeup preference (which may be referred to as the required decoration type).
As shown in Fig. 2b, in the decoration type selection interface, the user a may input a base finish type (which may be referred to as a makeup hierarchy type); for example, as shown in Fig. 2b, the terminal device 100a may display a progress bar for the base finish type (ranging from a light and breathable type to a long-lasting type), and the user a may input the base finish type by dragging the progress bar. It should be noted that, for the base finish progress bar shown in Fig. 2b, the center position may be regarded as a medium degree of base finish. If the drag stop position of the user a is closer to the light and breathable end, fewer foundation layers are applied and the base finish is lighter and thinner, and the required hierarchy type of the user a may be considered the light and breathable type (if the drag stop position is at the start of the progress bar, the base finish degree is 0%, i.e., a completely light and breathable type); conversely, if the drag stop position of the user a is closer to the long-lasting end, more foundation layers are applied and the base finish is thicker and heavier, and the required hierarchy type of the user a may be considered the long-lasting type (if the drag stop position is at the end of the progress bar, the base finish degree is 100%, i.e., a completely long-lasting type).
In the decoration type selection interface, the user a may also input a makeup thickness type (which may be referred to as a makeup level type or a makeup concentration type); for example, as shown in Fig. 2b, the terminal device 100a may display a progress bar for makeup thickness (ranging from a light makeup type to a thick makeup type), and the user a may input the makeup thickness by dragging the progress bar. It should be noted that, for the makeup thickness progress bar shown in Fig. 2b, the center position may be regarded as a medium thickness; if the drag stop position of the user a is closer to the light makeup end, fewer makeup steps and fewer layers are required, and the required makeup level type of the user a may be considered the light makeup type, whereas if it is closer to the thick makeup end, more steps and more layers are involved, and the required makeup level type may be considered the thick makeup type. In the decoration type selection interface, the user a may also input a makeup effect type, which may include a vitality type, a sweet type, an age-reducing type, a Western type, and so on; the user a may choose among the makeup effect types displayed in the decoration type selection interface, and the chosen type may be regarded as the desired makeup effect type input by the user a. Taking as an example a base finish of the light and breathable type, a makeup thickness of the light makeup type, and a makeup effect of the sweet type, the makeup type information input by the user a can serve as the required makeup type of the user a. After the user a clicks the completion control (confirmation control) in the decoration type selection interface, the terminal device may acquire the required makeup type (i.e., the required decoration type) input by the user a.
It should be noted that Figs. 2a and 2b above illustrate, by way of example, two scenes for acquiring the decoration configuration information, taking the case where it includes the activity scene information and the user's required decoration type; in practice, the specific implementation of acquiring the activity scene information and the required decoration type is not limited to the scenes shown in Figs. 2a and 2b. For example, after the activity scene information is acquired, that is, after the user a inputs the activity scene information and clicks the completion control (confirmation control), the terminal device 100a may display the control selection interface, and the user a may select the makeup preference selection control there and continue to input the makeup preference information (that is, the required makeup type). The acquisition of the activity scene information needs to be based on the destination location information, but the present application does not limit when the activity scene information and the required decoration type are acquired; they may be acquired sequentially or simultaneously.
Further, please refer to Fig. 3a, which is a schematic view of a scene displaying a virtual decoration object according to an embodiment of the present application. The service server 1000 shown in Fig. 3a may be the service server 1000 in the embodiment corresponding to Fig. 1.
As shown in Fig. 3a, after the terminal device 100a obtains the required makeup type and the activity scene information of the user a, it may display an information confirmation interface, and display the required makeup type and the activity scene information in that interface (sequentially or simultaneously) for the user a to confirm. After the user a confirms that both the activity scene information and the required makeup type are correct, the terminal device 100a may acquire a face image of the user (that is, of the key part of the user a) through the camera assembly; the face image may also be acquired when the makeup application is started, that is, before the activity scene information and the required makeup type are acquired. Then, the terminal device 100a may acquire N virtual makeup objects matching the activity scene information and the required makeup type (that is, virtual decoration objects: face images with virtual makeup effects, obtained after virtual makeup processing is performed on the face image in different virtual makeup manners (also referred to as makeup manners, i.e., virtual decoration modes)), and the terminal device 100a may display the N virtual makeup objects. It should be understood that, after the required makeup type and the activity scene information are acquired, if the face image of the user a has already been collected, the terminal device 100a may also directly acquire and display the N virtual makeup objects without displaying the information again for reconfirmation by the user a.
For example, as shown in Fig. 3a, the terminal device 100a may display the required makeup type and the activity scene again for the user a to confirm. As shown in Fig. 3a, the terminal device 100a may display the required makeup type (including a base finish of the light and breathable type (base finish degree 20%), a makeup thickness of the light makeup type (makeup thickness degree 25%), and a makeup effect of the sweet type). A determination control and a reset control can be displayed in the information confirmation interface; the user a can complete the confirmation of the required makeup type through a trigger operation on the determination control and proceed to the subsequent steps, or reset (i.e., re-input or re-select) the required makeup type through a trigger operation on the reset control. As shown in Fig. 3a, after the user a clicks the determination control, the terminal device 100a may continue, in response to the trigger operation, to display the activity scene information of the user a, which includes the target scene type and the outdoor environment information described above for Fig. 2a; the two may be displayed simultaneously or sequentially. Taking the case of displaying the target scene type first, the terminal device 100a may display the target scene type in the information confirmation interface (including that the destination arrival location is the aa area of city a, the activity type is work, the scene state type is outdoor, the activity duration is 8 hours, and the estimated sweating type is one with a high sweating degree (degree 70%)). The user a can likewise complete the confirmation of the target scene type through a trigger operation on the determination control and proceed to the subsequent steps, or reset (i.e., re-input or re-select) the target scene type through a trigger operation on the reset control.
As shown in Fig. 3a, after the user a clicks the determination control, the terminal device 100a may display the outdoor environment information in the information confirmation interface (here, for example, an outdoor temperature of 25 degrees Celsius, an outdoor humidity of 76%, an outdoor rainfall of 0 mm, and an outdoor ultraviolet intensity of 7/10). After the user a clicks the determination control again, the user a has confirmed both the required decoration type (the required makeup type) and the activity scene information. Assuming that the terminal device 100a has already acquired the face image of the user a, the terminal device 100a may respond to the trigger operation and acquire, according to the required decoration type and the activity scene information, N virtual makeup objects matching the user a; as shown in Fig. 3a, the N virtual makeup objects may include a virtual makeup object 300a', a virtual makeup object 300b', and a virtual makeup object 300c'. The terminal device 100a may display the virtual makeup object 300a', the virtual makeup object 300b', and the virtual makeup object 300c' and simultaneously display the object selection prompt information "Below are makeup effects well suited to you; please select one", and the user a may, based on this prompt information, select among them a target virtual makeup object, which can serve as a reference when the user a applies real makeup. For a specific implementation of how the terminal device 100a obtains the N virtual makeup objects, reference may be made to the description in the embodiment corresponding to Fig. 4.
Further, referring to Fig. 3b, Fig. 3b is a schematic view of a scene displaying a decoration flow according to an embodiment of the present application. As shown in Fig. 3b, taking the target virtual makeup object selected by the user a to be the virtual makeup object 300a', after the user a clicks the determination control, the terminal device 100a may respond to the trigger operation and acquire the target virtual makeup object (i.e., the virtual makeup object 300a'). Further, the terminal device 100a may respond to the trigger operation and acquire the makeup flow corresponding to the target virtual makeup object 300a', and may display the target virtual makeup object and the makeup flow synchronously in the terminal display interface. For example, as shown in Fig. 3b, the terminal device may display the target virtual makeup object 300a' and simultaneously display its corresponding makeup flow: apply the base lipstick to the lips, then apply the lip glaze to color the lips. The terminal device may also display a determination control and a refresh control at the same time; through the refresh control, the target makeup object 300a' and its makeup flow can be reloaded and displayed.
It should be understood that, based on the user's activity scene information and required decoration type, the present application can automatically and quickly match candidate virtual decoration objects to the user and display them, which can improve the recommendation efficiency and precision of decoration objects. The user can then select any candidate virtual decoration object as the target virtual decoration object and, with the target virtual decoration object as a reference and the decoration flow as guidance, carry out real decoration processing accurately and in detail.
Optionally, in a feasible embodiment, while the user a applies real makeup with reference to the target virtual makeup object and the makeup flow, the terminal device 100a may detect in real time the user a's actual makeup effect for each step in the makeup flow. When it detects that, for a certain makeup step, the makeup effect of the target virtual makeup object has not been achieved, it may display makeup prompt information in real time to prompt the user a, thereby helping the user a achieve a higher-quality real makeup effect.
Alternatively, it is understood that the terminal device 100a in the embodiment of the present application may be an intelligent cosmetic mirror. The intelligent cosmetic mirror may include a front 3D camera module with a time-of-flight (ToF) face 3D model, an operation unit, a storage unit, a 5G data communication unit, a Wi-Fi module, a Bluetooth module, a USB data interface, a charging interface, a battery, and the like. The outermost layer of the intelligent cosmetic mirror may be a transparent, self-luminous organic light-emitting diode (OLED) display screen with a touch function; the OLED screen may have no backlight layer, so that when it is not lit, the mirror glass of the inner layer can be seen through the screen. The upper portion of the intelligent cosmetic mirror may carry the front 3D camera module and a light-emitting diode (LED) fill-light bar, and the two sides may include a data charging interface, a volume switch key, and other buttons. The whole intelligent cosmetic mirror may be a long, narrow flat panel with a metal shell, and the back of the panel has a hook that, with an accessory, can be stably fixed to the mirror of an automobile.
In order to facilitate understanding of the intelligent cosmetic mirror, the interaction mode, communication mode and data algorithm of the intelligent cosmetic mirror are specifically described in the following 3 points.
1. Description of the hardware interaction mode of the intelligent cosmetic mirror: the intelligent cosmetic mirror can be operated much like a tablet computer; a user can operate on the screen through contact operations such as finger touch, or through non-contact operations such as voice control and gesture control. The user can also operate via hover gestures captured by the front-facing 3D camera module; gesture operation supports sliding left and right, zooming in and out, clicking, and the like. The user can also cast a smartphone screen to the cosmetic mirror (for example, through wireless screen casting), with the mobile phone acting as a touch pad for screen operation. In addition, the intelligent cosmetic mirror can support traditional data interaction modes for connecting to mobile phone equipment, such as Universal Serial Bus (USB), Bluetooth, and a data hotspot.
2. Description of the data communication mode of the intelligent cosmetic mirror: the data that the intelligent cosmetic mirror needs to obtain are: weather information of the current position (after the intelligent cosmetic mirror is turned on, it can preferentially display the weather information of the current position, such as temperature, humidity, rainfall, sunshine intensity, and the like), and, after the user inputs a destination, the outdoor or indoor weather forecast of the destination (which may be called the environment information of the destination, and may include humidity, rainfall, illumination intensity, ultraviolet intensity, PM2.5 index, air quality level, sweating degree, and the like). The intelligent cosmetic mirror can acquire the information of the current position and the weather forecast information of the destination in real time through a communication module (such as a mobile hotspot, wifi, 5G communication, and the like). The intelligent cosmetic mirror can also be connected through USB to an external data interface of the automobile's sensors, and can acquire data from devices such as the vehicle humidity sensor, rainfall sensor, illumination sensor, and air quality sensor in real time. The data obtained in these two ways can be recorded in real time in the storage unit of the intelligent cosmetic mirror, and data values for makeup and beauty reference can then be obtained through an algorithm.
3. Description of the data algorithm of the intelligent cosmetic mirror: the environment information of the destination acquired in real time can be subjected to Kalman filtering and weighted confidence processing, so that a relatively stable and accurate environment data set with small fluctuation can be obtained. The environment data set of the available environment may be stored in a database with a timestamp and a Global Positioning System (GPS) location as tags. When a user needs to access an environment data set of an available environment, the data in the database is read and combined with the user-defined makeup type to finally obtain the parameters available to the makeup and beauty algorithm (namely, the user's required makeup type and activity scene information). The required makeup type and the activity scene information are uploaded to a business server, which can determine virtual makeup objects; the intelligent cosmetic mirror can display the virtual makeup objects for the user to select, and display the target virtual makeup object selected by the user together with its makeup flow, thereby guiding the user to perform real makeup processing.
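For ease of understanding, the following is a minimal sketch of the Kalman filtering and weighted confidence processing described above, written in Python; the noise values, confidence weights, and field names are illustrative assumptions rather than the claimed algorithm.

from dataclasses import dataclass
import time

@dataclass
class Kalman1D:
    x: float = 0.0   # state estimate
    p: float = 1.0   # estimate variance
    q: float = 0.01  # process noise (assumed)
    r: float = 0.5   # measurement noise (assumed)

    def update(self, z: float) -> float:
        self.p += self.q                  # predict step
        k = self.p / (self.p + self.r)    # Kalman gain
        self.x += k * (z - self.x)        # correct with measurement z
        self.p *= (1.0 - k)
        return self.x

def merge_with_confidence(net_value, sensor_value, net_conf=0.4, sensor_conf=0.6):
    """Confidence-weighted merge of the network-weather and in-car sensor channels."""
    total = net_conf + sensor_conf
    return (net_value * net_conf + sensor_value * sensor_conf) / total

# Example: smooth a noisy humidity stream, then tag the record for the database.
f = Kalman1D()
smoothed = [f.update(z) for z in (62.0, 65.0, 61.5, 90.0, 63.0)]  # 90.0 is a spike
record = {"timestamp": time.time(), "gps": (22.28, 114.16),
          "humidity": merge_with_confidence(smoothed[-1], 62.5)}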
Further, please refer to fig. 4, which is a schematic flowchart of a data processing method according to an embodiment of the present application. The method may be executed by a terminal device (for example, any terminal device in the terminal device cluster shown in fig. 1) or a service server (for example, the service server 1000 shown in fig. 1), or may be executed jointly by the terminal device and the service server. For ease of understanding, this embodiment is described taking execution by the terminal device as an example. The data processing method includes at least the following steps S101-S103:
step S101, decoration configuration information associated with the target object is acquired.
In the present application, decoration may include makeup, clothing matching, jewelry matching, hair style matching, and the like, and the decoration application may be an independent application, or a sub-application embedded in a social application, video application, or entertainment application (such as a game application), which is not limited herein. When a decoration requirement exists, a user (hereinafter referred to as a target object) can input decoration configuration information in the decoration application through the account bound in the decoration application. The decoration configuration information may refer to the activity scene information corresponding to the target object performing an activity at a destination location, may refer to the decoration preference information required by the target object (which may be called the demand decoration type), or may refer to the key part attribute information of a key part that affects the decoration effect. The decoration configuration information in the present application can thus be composed of activity scene information, a demand decoration type, and key part attribute information.
The activity scene information may be understood as the activity environment information of the user, and mainly includes the object arrival position (object position information) of the user, a target scene type, and environment information. The target scene type may include an activity state type (an indoor state type or an outdoor state type), an activity type (which may be understood as the purpose of the user's activity; for example, the activity type may include a work type, an appointment type, a dinner type, a match type, and the like), the total activity duration of performing the activity at the object arrival position (i.e., the total work duration, total appointment duration, total match duration, and the like), an estimated sweat level type, and so on. The environment information may be determined based on the activity state type in the target scene type: when the activity state type is the outdoor state type, the environment information may include outdoor temperature, outdoor humidity, outdoor rainfall, outdoor ultraviolet intensity, outdoor visibility, and the like; when the activity state type is the indoor state type, it may include indoor temperature, indoor humidity, indoor light intensity, and the like.
The demand decoration type may be the decoration preference information required by the user, and may include a demand hierarchy type, a demand effect type, a demand material type (the type of material used for decoration), and the like. For example, when the decoration is makeup, the user may have requirements on the parameters of the makeup, such as the base makeup concentration type (also referred to as the base makeup hierarchy type, e.g., light and breathable, or heavy and long-lasting), the overall makeup concentration type (e.g., light or heavy makeup), the makeup effect type (e.g., sweet, vibrant, European, cool, etc.), and the brand type of the cosmetics. Here, the demand hierarchy type may be understood as the base makeup hierarchy type or the makeup concentration type: when the base makeup is light and breathable or the overall makeup is light, the makeup has fewer layers; when the base makeup is heavy and long-lasting or the overall makeup is heavy, the makeup has more layers. The demand effect type may be the makeup effect type, and the demand material type may be the brand type of the cosmetics. As another example, when the decoration is clothing matching, the user may have requirements on the parameters of the clothing, such as the color level of the clothing, the cloth level of the clothing (light and thin, or heavy), the number of garments (i.e., the number of pieces), and the style type of the clothing (e.g., classical style, loose style, casual style, gentlewoman style, etc.). The demand hierarchy type may then be the color level, the cloth level, the number of garments, and the like; the demand effect type may be the style type of the clothing; and the demand material type may be the material of the clothing fabric. The above explains the demand decoration type only with the examples of makeup and clothing matching; a specific demand decoration type may include other parameters, and is not limited to the demand hierarchy type, demand effect type, and demand material type.
The key part attribute information may refer to the attribute information of a key part of the target object. The key part may be a body part of the target object and may differ depending on the decoration: when the decoration is makeup, the key part may be the face; when the decoration is clothing matching, the key part may be the limbs and torso other than the head and neck (of course, the key part may also be the complete target object including the head, neck, and limbs); when the decoration is a hairstyle, the key part may be the head. The attribute information may include part proportions, part sign information, and the like. For example, when the key part is the face, the proportions of the facial features, the sign information of the facial features (which can be understood as the characteristics of each facial feature, such as eyebrow thickness, eye size, nose height, nose width, lip thickness, lip size, ear size, and the like), the skin properties (such as roughness, pore size, degree of spots, and the like), and the skin color properties (such as dark or fair) can be identified, and the identified proportions of the facial features, sign information of the facial features, skin properties, and skin color properties can be referred to as the key part attribute information.
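For ease of understanding, the three components of the decoration configuration information described above may be pictured as the following minimal Python sketch; every field name here is an illustrative assumption rather than a prescribed schema.

from dataclasses import dataclass, field

@dataclass
class ActivitySceneInfo:
    destination: str                  # object arrival position
    state_type: str                   # "indoor" or "outdoor"
    activity_type: str                # e.g. "work", "appointment", "dinner"
    total_duration_h: float           # total activity duration in hours
    environment: dict = field(default_factory=dict)  # temperature, humidity, UV, ...

@dataclass
class DemandDecorationType:
    hierarchy_type: str               # e.g. "light and breathable" base makeup
    effect_type: str                  # e.g. "sweet", "cool"
    material_type: str                # e.g. cosmetic brand or fabric material

@dataclass
class KeyPartAttributes:
    part: str                         # e.g. "face"
    proportions: dict = field(default_factory=dict)  # e.g. facial-feature ratios
    sign_info: dict = field(default_factory=dict)    # e.g. eyebrow thickness, lip size

@dataclass
class DecorationConfig:
    scene: ActivitySceneInfo
    demand: DemandDecorationType
    key_part: KeyPartAttributes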
Taking the case where the decoration configuration information includes the activity scene information as an example, one implementation of acquiring the activity scene information may be as follows. The target object starts the decoration application by clicking the application start control of the decoration application; the terminal device responds to the trigger operation on the application start control and displays the position information associated with the target object. The target object can then confirm the position information through a trigger operation (for example, through a trigger operation on the text corresponding to the position information; the terminal device may also display a confirmation control alongside the position information, so that the target object can confirm through a trigger operation on that confirmation control). When the position information receives the trigger operation, the terminal device displays the activity scene information associated with the target object. One implementation of this may be: the terminal device responds to the trigger operation on the position information and first displays a scene type selection interface; the target object inputs or selects a target scene type in the scene type selection interface, and the terminal device displays the target scene type selected by the type selection operation. Further, the target object may confirm the target scene type through a trigger operation (for example, a trigger operation on the text corresponding to the target scene type, on a blank area of the scene type selection interface outside the target scene type, or on a confirmation control displayed alongside the target scene type), and the terminal device may respond to the trigger operation on the target scene type by displaying the activity scene information associated with the target scene type and the position information.
It should be understood that the position information here may refer to the destination position information of the target object. When the target object starts the decoration application, the terminal device may automatically obtain the current position information of the target object and display it (optionally together with the corresponding outdoor environment information) for the target object to confirm whether the current position information can be used as the destination position information (that is, the activity place of the target object). If the target object confirms this, the terminal device can use the current position information as the destination position information and display it. If the destination of the target object is not the current position, the terminal device can display a position input interface for the target object to input the destination position information, which the terminal device then acquires and displays. Optionally, after the target object starts the decoration application, the terminal device may also directly display a position input interface for the target object to input the destination position information, which the terminal device can then obtain.
Further, after the target object confirms that the destination position information is correct, the terminal device may display a scene type selection interface, and the target object may input or select a target scene type (activity type, scene state type, total activity duration, sweating type, and the like) there. After the target object confirms the selection, the terminal device may acquire the outdoor or indoor environment information corresponding to the target object according to the scene state type, and may determine the target scene type and the environment information together as the activity scene information. For an exemplary scenario for acquiring activity scene information, reference may be made to the scenario embodiment corresponding to fig. 2a described above.
Taking the case where the decoration configuration information includes the demand decoration type as an example, one way to obtain the demand decoration type may be: the target object starts the decoration application by clicking its application start control; the terminal device responds to the trigger operation on the application start control and displays a decoration type selection interface; the target object inputs or selects a demand decoration type through the input control or selection control in the decoration type selection interface, and the terminal device responds to the trigger operation on the decoration type selection interface by displaying the demand decoration type associated with the target object.
In a feasible embodiment, if a historical decoration type of the target object is stored in the terminal device, the terminal device may display the historical decoration type in the decoration type selection interface for confirmation by the target object (confirmation may be through a trigger operation on the text corresponding to the historical decoration type, or on a confirmation control corresponding to it). After confirmation, the historical decoration type may be used as the demand decoration type; specifically: in response to the trigger operation on the historical decoration type, the historical decoration type is determined as the demand decoration type and displayed. In another feasible embodiment, if the terminal device does not store a historical decoration type of the target object, the terminal device may display a decoration type selection interface, and the target object may input or select the corresponding demand decoration type there; specifically: in response to the selection operation on a decoration type in the decoration type selection interface, the demand decoration type selected by the selection operation is displayed. For an exemplary scenario of obtaining the demand decoration type, refer to the scenario embodiment corresponding to fig. 2b.
Taking the case where the decoration configuration information includes the key part attribute information as an example, one implementation of acquiring the key part attribute information may be: after the target object starts the decoration application, the terminal device can acquire a key part image of the target object through the camera assembly (or acquire mirror image data of the target object through a mirror), and then identify the corresponding attribute information through an artificial intelligence algorithm or a dedicated component.
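For ease of understanding, the following is a minimal sketch of deriving simple proportion values of the key part from detected landmark points; the landmark names and coordinates are illustrative assumptions, and the upstream detection component is assumed to exist.

def facial_ratios(landmarks: dict) -> dict:
    """landmarks maps point names to (x, y) pixel coordinates from an assumed
    upstream face-landmark detector; two example ratios are computed."""
    face_w = landmarks["face_right"][0] - landmarks["face_left"][0]
    eye_w = landmarks["eye_outer"][0] - landmarks["eye_inner"][0]
    nose_w = landmarks["nose_right"][0] - landmarks["nose_left"][0]
    return {
        "eye_to_face_ratio": eye_w / face_w,
        "nose_to_face_ratio": nose_w / face_w,
    }

ratios = facial_ratios({
    "face_left": (0, 0), "face_right": (160, 0),
    "eye_inner": (60, 40), "eye_outer": (92, 40),
    "nose_left": (70, 80), "nose_right": (98, 80),
})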
It should be noted that the present application does not limit the order of acquiring the activity scene information, the demand decoration type, and the key part attribute information; they can be acquired simultaneously, or sequentially and independently.
Step S102, displaying N virtual decoration objects matched with the target object in the decoration application; each virtual decoration object is obtained by virtually decorating the key part based on a virtual decoration mode, and the virtual decoration mode adopted by each virtual decoration object is associated with the decoration configuration information; the target object comprises the key part.
In the present application, after the terminal device acquires the decoration configuration information of the target object, it can display the decoration configuration information so that the target object can confirm whether it is correct. After the target object confirms the decoration configuration information, the terminal device may acquire N virtual decoration objects (which may be understood as images with different virtual decoration effects, obtained after virtual decoration processing is performed on the key part image in different virtual decoration manners). Optionally, in a feasible embodiment, after obtaining the decoration configuration information of the target object, the terminal device may also directly obtain and display the virtual decoration objects without displaying the decoration configuration information (or after displaying it for a period of time) and without confirmation by the target object. Specifically: displaying the N virtual decoration objects matched with the target object in the decoration application in response to the confirmation operation on the decoration configuration information; or displaying the decoration configuration information, and displaying the N virtual decoration objects matched with the target object in the decoration application when the display duration of the decoration configuration information meets the display condition.
For a specific implementation manner of obtaining and displaying N virtual decoration objects matched with the target object, reference may be made to the following description in an embodiment corresponding to fig. 6.
Step S103, responding to the object selection operation aiming at the N virtual decoration objects, and displaying the decoration flow corresponding to the target virtual decoration object selected from the N virtual decoration objects; the decoration flow is used for guiding the target object to carry out real decoration processing.
In the present application, the target object may select among the N virtual decoration objects; the terminal device responds to the object selection operation to obtain the target virtual decoration object selected by the target object, obtains the decoration flow corresponding to the target virtual decoration object, and displays that decoration flow. Specifically: a decoration flow guidance interface can be displayed in response to the object selection operation on the N virtual decoration objects. The terminal device may divide the decoration flow guidance interface into multiple regions for displaying different contents; for example, an object display region may be divided out to display the target virtual decoration object and the key part of the target object, while a flow display region is divided out to display the decoration flow corresponding to the target virtual decoration object. The object display region may itself be divided into a decoration object display region, used for displaying the target virtual decoration object, and a key part display region, used for displaying the key part. The key part displayed in the key part display region can be the key part to be decorated, which may be a body part of the target object such as the face, the limbs, or the head; what is displayed there may be mirror image data collected in real time by a camera assembly (or mirror). That is to say, the terminal device may synchronously display, in the same display interface, the target virtual decoration object, the decoration flow, and the real-time decoration state of the target object's key part acquired in real time, and the target object may perform real decoration processing by comparing against the target virtual decoration object and the real-time key part state, under the guidance of the decoration flow.
Optionally, in a feasible embodiment, the decoration flow may include a decoration text flow and decoration audio data corresponding to the decoration text flow, and the decoration audio data may be output while the decoration text flow is displayed. Specifically: the terminal device can display the decoration text flow in the flow display region of the decoration flow guidance interface, and synchronously output the decoration audio data while displaying the decoration text flow.
Taking the case where the decoration text flow comprises a decoration text step T_a (a is a positive integer) as an example, a specific method for synchronously outputting the decoration audio data while displaying the decoration text flow may be: obtain the text timestamp of the decoration text step T_a; then traverse the audio timestamp set corresponding to the decoration audio data, where the audio timestamp set comprises one or more audio timestamps, and each audio timestamp is the timestamp corresponding to one piece of sub-audio data in the decoration audio data; then determine the audio timestamp in the set that has a time alignment relation with the text timestamp as the target audio timestamp; determine the sub-audio data corresponding to the target audio timestamp as the target sub-audio data, and synchronously output the target sub-audio data while displaying the decoration text step T_a.
It should be understood that the decoration flow may include a plurality of decoration text steps, and one piece of decoration audio (sub-audio data) may be paired with each decoration text step through its timestamp. When a certain decoration text step is displayed, the decoration audio corresponding to that step can be retrieved through the timestamp and played together with it, making the decoration flow clearer, so that the target object can more smoothly perform real decoration processing according to the target virtual decoration object and the decoration flow, improving the user experience.
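For ease of understanding, the following is a minimal sketch of the timestamp alignment described above for the decoration text step T_a; the tolerance value and the data layout are illustrative assumptions.

def find_target_sub_audio(text_ts: float, audio_timestamps: list[float],
                          sub_audio: dict, tolerance: float = 0.05):
    """Traverse the audio timestamp set and return the sub-audio data whose
    timestamp has a time-alignment relation with the text timestamp."""
    for ts in audio_timestamps:
        if abs(ts - text_ts) <= tolerance:  # time alignment relation
            return sub_audio[ts]            # target sub-audio data
    return None

clips = {0.0: "audio_step1.wav", 12.5: "audio_step2.wav"}
clip = find_target_sub_audio(12.5, sorted(clips), clips)  # -> "audio_step2.wav"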
For easy understanding, please refer to fig. 5a, a schematic view of a scene showing a decoration flow according to an embodiment of the present application. As shown in fig. 5a, taking makeup as the decoration, the virtual decoration object may be understood as the virtual cosmetic object in the embodiment corresponding to fig. 3a. After the terminal device 100a displays the virtual cosmetic object 300a', the virtual cosmetic object 300b', and the virtual cosmetic object 300c', and taking the virtual cosmetic object 300a' as the target virtual cosmetic object selected by the user a, once the user a clicks the determination control, the terminal device 100a may respond to the trigger operation of the user a to obtain the target virtual cosmetic object (i.e., the virtual cosmetic object 300a').
Further, as shown in fig. 5a, the terminal device 100a may display the target virtual makeup object again, together with its expected makeup duration (e.g., 25 minutes), and may synchronously display a determination control and a cancel control. The user a can confirm the target virtual makeup object 300a' through a trigger operation on the determination control, or reselect a new target virtual makeup object by triggering the cancel control. The determination control may also be understood as a decoration start control: after the user a clicks it, as shown in fig. 5a, the terminal device 100a may respond to the trigger operation to obtain the target virtual makeup object 300a' and its corresponding makeup flow, and synchronously display them in the terminal display interface; the user a can then start the real makeup process (i.e., the real decoration processing). As shown in fig. 5a, the terminal device 100a may display a decoration flow guidance interface divided into a region Z1, a region Z2, a region M1, and a region M2, where the region Z1 may be the flow display region, the region M1 the decoration object display region, the region M2 the key part (i.e., face) display region, and the region Z2 a control display region. The regions M1 and M2 together constitute the object display region.
The terminal device may display the target virtual makeup object in the region M1 (either the complete target virtual makeup object or a part of it; here, for example, half of it is displayed), and may display the current face image of the user a, acquired in real time, in the region M2. Meanwhile, it should be understood that the makeup flow may include a plurality of makeup steps, each with a timestamp, so the terminal device 100a may display the makeup steps (decoration text steps) item by item in chronological order of the timestamps. The face image presented in the region M2 serves as mirror image data; the user a can view it, refer to the makeup effect in the region M1 (a part of the target makeup object), and apply makeup under the guidance of the makeup flow. When a certain makeup step is displayed, the user a performs the corresponding makeup operation; the terminal device 100a may detect the makeup process of the user a in real time, check whether the step has been completed, and, once it has, continue to display the next makeup step.
For example, as shown in fig. 5a, after the user a completes the current makeup step, the terminal device 100a may display the next makeup step with a prompt such as "Next: apply the priming lipstick to the lips, then apply the lip glaze to color the lips", so that the user a can apply lip makeup according to this step. As shown in fig. 5a, the terminal device 100a may also synchronously display the makeup completion degree of the user a in the region Z1 (which may be displayed in the form of a progress bar), and may synchronously display the determination control, the refresh control, and the switch control in the region M2. The user a can enter the subsequent steps through the determination control (for example, confirming that makeup of the current partial face image (half face) is finished and proceeding to the makeup flow of the remaining area (the other half face)); the user a can reload the current interface content through the refresh control. As for the switch control: if the region M2 currently displays the mirror image data of the user a, then after the user a clicks the switch control, the terminal device 100a may switch the content displayed in the region M2 to the remaining part of the target virtual makeup object (i.e., the half complementary to the content displayed in the region M1). Through this switching, the user a can more clearly recognize the difference between the current real makeup effect and the reference makeup effect of the target virtual makeup object, and correct it in time.
Further, as shown in fig. 5a, when the user a finishes making up the current part of the face (i.e., the face corresponding to the mirror image presented in the region M2), the makeup progress bar displayed by the terminal device 100a stays at the middle position, and a makeup step prompt message is displayed at the same time: "Next: please finish the makeup of the other half face". After the user a clicks the determination control, the terminal device 100a may respond to the trigger operation by switching the regions, that is, using the region M2 as the decoration object display region and the region M1 as the key part display region, displaying the other part of the target virtual makeup object in the region M2 (the remaining half outside the region M1), and displaying the remaining face image of the user a in the region M1 (the mirror image corresponding to the remaining half face of the user a). Meanwhile, the terminal device 100a may display the remaining makeup steps in the makeup flow item by item in chronological order of the timestamps. The face image presented in the region M1 is then the mirror image data; the user a can view it, refer to the makeup effect in the region M2, and make up the remaining face under the guidance of the makeup flow. For example, as shown in fig. 5a, the makeup step currently displayed by the terminal device 100a is "please use the eyebrow pencil to draw the eyebrows, paying attention to the hair strokes", and the user a may draw the eyebrows under its guidance. The specific process may refer to the scene flow in which the user a made up by referring to the makeup effect in the region M1, which will not be described again here.
Optionally, after the user a completes the whole makeup flow, the terminal device may display the real makeup effect and the target makeup effect of the user a; please refer to fig. 5b, a scene schematic diagram for displaying a real decoration effect provided in an embodiment of the present application. As shown in fig. 5b, when the user a finishes the makeup of the current part of the face (i.e., the face corresponding to the mirror image presented in the region M1), the terminal device 100a may display the makeup progress bar and a completion prompt message "makeup is finished"; the user a can click the determination control to enter the subsequent step. As shown in fig. 5b, after the user a clicks the determination control, the terminal device 100a may respond to the trigger operation by simultaneously displaying the target virtual makeup object 300a' (i.e., the target makeup effect, which may also be called the virtual makeup effect) and the real mirror image of the face of the user a after finishing makeup (i.e., the real makeup effect). By displaying them side by side, the user a can judge whether a gap exists between the real makeup effect and the virtual makeup effect: when a gap exists and the user a wishes to touch up, the user a may continue real makeup via the modification control shown in fig. 5b; if a gap exists but the user a has no touch-up requirement (or the gap is absent or small), the user a can enter the subsequent step through the determination control shown in fig. 5b.
For example, as shown in fig. 5b, after the user a clicks the determination control, the terminal device 100a may detect the difference between the virtual makeup effect and the real makeup effect. If a difference is detected, or the difference is large, the terminal device 100a may display touch-up prompt information for the specific difference; as shown in fig. 5b, if the specific difference is at the nose, the touch-up prompt information may be "there is a difference between the current nose contour and the target makeup; it is recommended to add some shadow on both sides of the nose wings and highlight on the nose bridge. Modify?". The touch-up prompt information may include a modification control and a direct-completion control: if the user a wishes to touch up, the user a can click the modification control to touch up the nose makeup; if the user a has no touch-up requirement, the user a can click the direct-completion control to enter the subsequent step. For example, as shown in fig. 5b, after the user a clicks the direct-completion control, the terminal device 100a may respond to this trigger operation by displaying the final real makeup effect of the user a, together with a completion prompt message such as "Congratulations, your makeup is complete".
Optionally, in a feasible embodiment, the terminal device may record the historical virtual decoration objects selected by the target object within a historical time period, so as to statistically determine the historical decoration preference of the target object (such as a historically preferred makeup look). If the target object has a decoration requirement in a target time period (later than the historical time period), the target object does not need to input a demand decoration type: the terminal device can directly take the historical decoration preference of the target object as its demand decoration type, directly determine a target virtual decoration object (which may be called the adapted virtual decoration object) for the target object in combination with the weather conditions of the current trip (namely, the activity scene information), and display the adapted decoration flow corresponding to the adapted virtual decoration object. For example, when the decoration is makeup, the terminal device may directly determine a suitable makeup look for the target object in combination with the activity scene information, the historical makeup preference parameters (i.e., the adapted decoration type), and the face attribute information of the target object (which may be recorded in the terminal device), and display the makeup effect (i.e., display the adapted virtual decoration object), so that the target object can perform real makeup processing under the guidance of the adapted virtual decoration object and its flow. For ease of understanding, the specific method may be: acquire the historically selected decoration objects of the target object, and acquire the historical decoration types corresponding to them; determine the adapted decoration type for the target object jointly from the decoration parameters corresponding to the target virtual decoration object and the historical decoration parameters; when new activity scene information of the target object is acquired in the target time period, display the adapted virtual decoration object matched with the new activity scene information and the adapted decoration type, together with the adapted decoration flow corresponding to it. The adapted decoration flow is used for guiding the target object to perform real decoration processing in the target time period.
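For ease of understanding, the following is a minimal sketch of deriving an adapted decoration type from historically selected decoration objects; the most-frequent-value strategy shown here is an illustrative assumption, not the claimed determination method.

from collections import Counter

def adapted_decoration_type(history: list[dict]) -> dict:
    """history holds the decoration parameters of previously selected objects;
    the most frequent value of each parameter field becomes the adapted value."""
    fields = {k for record in history for k in record}
    return {f: Counter(r[f] for r in history if f in r).most_common(1)[0][0]
            for f in fields}

history = [{"effect": "sweet", "base": "light"},
           {"effect": "sweet", "base": "heavy"},
           {"effect": "cool",  "base": "light"}]
print(adapted_decoration_type(history))  # e.g. {'effect': 'sweet', 'base': 'light'}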
Optionally, in a feasible embodiment, during the real decoration processing of the target object, the terminal device may detect in real time whether the real decoration effect of the target object reaches the target decoration effect (that is, the decoration effect corresponding to the target virtual decoration object); when it does not, the terminal device may prompt the target object, which can then perform supplementary or corrective decoration based on the prompt. Specifically: acquire the real decoration object obtained from the real decoration processing of the target object; compare the real decoration object with the target virtual decoration object to determine the difference decoration area; then generate correction decoration prompt information according to the difference decoration area and display it. The correction decoration prompt information is used for prompting the target object to perform corrective decoration processing in the difference decoration area.
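For ease of understanding, the following is a minimal sketch of the difference decoration area detection and correction prompt described above; the region names, effect scores, and threshold are illustrative assumptions.

def difference_regions(real: dict, target: dict, threshold: float = 0.15):
    """real/target map region names (e.g. 'nose', 'lips') to effect scores;
    regions whose score gap exceeds the threshold form the difference area."""
    return [region for region, t_score in target.items()
            if t_score - real.get(region, 0.0) > threshold]

def correction_prompt(regions):
    if not regions:
        return None
    return "Difference detected at: " + ", ".join(regions) + "; please touch up."

print(correction_prompt(difference_regions(
    {"nose": 0.55, "lips": 0.92}, {"nose": 0.90, "lips": 0.95})))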
In the embodiment of the present application, a decoration application is provided. Decoration configuration information associated with a target object can be obtained, and, in the decoration application, N virtual decoration objects matching the target object (obtained by virtually decorating the key part in different virtual decoration manners) are displayed. The target object can select among the N virtual decoration objects; after the selection, the decoration application can display the decoration flow corresponding to the target virtual decoration object, and the target object can perform real decoration processing with the target virtual decoration object as a reference and under the guidance of the corresponding decoration flow. It should be understood that, by obtaining the decoration configuration information, each of the N virtual decoration objects determined by the decoration application can be adapted to the decoration configuration information (i.e., matched to the target object); the target object can select any virtual decoration object as the target virtual decoration object, and then, through the display of the corresponding decoration flow, perform real decoration processing more accurately and in detail with both a reference effect and a guiding flow. That is to say, according to the decoration configuration information, the method and the device can automatically recommend to the target object different virtual decoration objects that match the target object and are virtually decorated in respective virtual decoration manners, improving both recommendation precision and recommendation efficiency.
Further, please refer to fig. 6, a schematic flowchart of a data processing method according to an embodiment of the present application. The flow may correspond to the process of acquiring and displaying the N virtual decoration objects matched with the target object in the embodiment corresponding to fig. 4. As shown in fig. 6, the flow may include the following steps S601-S603:
s601, obtaining N candidate virtual decoration modes matched with the decoration configuration information.
Specifically, the content of the decoration configuration information will not be repeated here. The terminal device can send the decoration configuration information to the service server, and the service server can obtain the N candidate virtual decoration modes matched with the decoration configuration information based on an artificial intelligence algorithm. For a specific implementation of obtaining the N candidate virtual decoration modes, reference may be made to the description in the embodiment corresponding to fig. 7.
Step S602, according to the virtual decoration parameter of each candidate virtual decoration mode in the N candidate virtual decoration modes, respectively carrying out virtual decoration processing on the key part to obtain N virtual decoration objects.
Specifically, for each candidate virtual decoration mode, virtual decoration processing can be performed on the key part based on its virtual decoration parameters, so that the virtual decoration object corresponding to that candidate virtual decoration mode is obtained. Taking the case where the N candidate virtual decoration modes include a candidate virtual decoration mode M_i as an example, a specific method of performing virtual decoration processing on the key part according to the decoration parameters of each of the N candidate virtual decoration modes to obtain the N virtual decoration objects may be: determine the region to be processed in the key part according to the candidate virtual decoration mode M_i; perform virtual decoration processing on the region to be processed with the decoration parameters corresponding to the candidate virtual decoration mode M_i, obtaining the intermediate virtual decoration object corresponding to the mode M_i and the key part; when the intermediate virtual decoration objects corresponding to all N candidate virtual decoration modes have been determined, perform beautification processing (for example, skin smoothing, face slimming, eye enlarging, body slimming, hair lengthening, and the like) on the N intermediate virtual decoration objects respectively to obtain the N virtual decoration objects. Taking makeup as the decoration, the region to be processed can be understood as the region where makeup is to be applied, such as the eye region, the nose region, or the lip region. For a scene example in which a candidate virtual decoration mode is used to perform virtual decoration processing on a key part to obtain a virtual decoration object, reference may be made to the scene embodiment corresponding to fig. 8 below.
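For ease of understanding, the following is a minimal sketch of step S602 as described above; the rendering and beautification helpers are assumed to be provided by upstream components and are passed in as parameters, and the dictionary keys are illustrative assumptions.

def virtual_decoration_objects(key_part_image, candidate_manners,
                               apply_params, beautify):
    """For each candidate manner M_i, locate its region to be processed,
    apply the manner's decoration parameters to get the intermediate virtual
    decoration object, then beautify (e.g. skin smoothing) the result."""
    objects = []
    for manner in candidate_manners:
        region = manner["region"]                    # region to be processed
        intermediate = apply_params(key_part_image,  # intermediate object
                                    region, manner["params"])
        objects.append(beautify(intermediate))
    return objects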
Step S603, displaying N virtual decoration objects in the decoration application.
Specifically, the terminal device may display the N virtual decoration objects.
In the embodiment of the present application, a decoration application is provided. Decoration configuration information associated with a target object can be obtained, and, in the decoration application, N virtual decoration objects matching the target object (obtained by virtually decorating the key part in different virtual decoration manners) are displayed. The target object can select among the N virtual decoration objects; after the selection, the decoration application can display the decoration flow corresponding to the target virtual decoration object, and the target object can perform real decoration processing with the target virtual decoration object as a reference and under the guidance of the corresponding decoration flow. It should be understood that, by obtaining the decoration configuration information, each of the N virtual decoration objects determined by the decoration application can be adapted to the decoration configuration information (i.e., matched to the target object); the target object can select any virtual decoration object as the target virtual decoration object, and then, through the display of the corresponding decoration flow, perform real decoration processing more accurately and in detail with both a reference effect and a guiding flow. That is to say, according to the decoration configuration information, different virtual decoration objects that match the target object and are virtually decorated in respective virtual decoration manners can be automatically recommended to the target object, improving both recommendation precision and recommendation efficiency.
Further, please refer to fig. 7, a schematic flowchart of a data processing method according to an embodiment of the present application. The flow may correspond to the process of acquiring the N candidate virtual decoration modes in the embodiment corresponding to fig. 6. The flow shown in fig. 7 is described taking the example in which the decoration configuration information includes activity scene information, a demand decoration type, and the key part attribute information of the key part, and the demand decoration type includes a demand hierarchy type. As shown in fig. 7, the flow may include the following steps S801-S804:
Step S801, acquiring the virtual hierarchy type and the virtual isolation efficacy level adapted to the activity scene information.
Specifically, the terminal device may determine information such as the virtual hierarchy type and the virtual isolation efficacy level adapted to the activity scene information. Taking makeup as the decoration, the virtual hierarchy type may be a makeup hierarchy type (or makeup concentration type); for example, when the temperature is high and the ultraviolet light is strong, the makeup should be more concentrated to resist the high temperature and strong ultraviolet light, so the makeup should have more layers. The virtual isolation efficacy level may be the makeup isolation level adapted to temperature, humidity, ultraviolet intensity, and water and sweat resistance; when the temperature is high, the ultraviolet intensity is strong, and the rainfall is heavy, the sun-protection (sun-isolation) and waterproof (water-isolation) levels of the makeup should be higher.
Optionally, as described above, the activity scene information includes a target scene type and environment information (outdoor or indoor). After the activity scene information is obtained, the terminal device may display all of it for the target object to select from; for example, among outdoor temperature, outdoor humidity, outdoor rainfall, and outdoor ultraviolet intensity, if the target object selects only the outdoor ultraviolet intensity, the terminal device may use the outdoor ultraviolet intensity as the matching parameter for the candidate virtual decoration modes, and the terminal device or the service server may determine only the virtual hierarchy type and the virtual isolation efficacy level adapted to the outdoor ultraviolet intensity, without considering the remaining information in the activity scene information.
Step S802, determining whether the virtual hierarchy type is matched with the demand hierarchy type.
Specifically, the virtual hierarchy type determined according to the activity scene information may be matched against the demand hierarchy type required by the target object, to determine whether they match. When the virtual hierarchy type matches the demand hierarchy type, the subsequent step S803 may be performed; when it does not match, the subsequent step S804 may be performed.
Step S803, when the virtual hierarchy type matches the demand hierarchy type, the N candidate virtual decoration modes can be determined according to the virtual hierarchy type, the virtual isolation efficacy level, the remaining demand decoration types, and the key part attribute information; the remaining demand decoration types are the decoration types other than the demand hierarchy type among the demand decoration types.
Specifically, when the virtual hierarchy type matches the demand hierarchy type, the N candidate virtual decoration modes can be determined according to the virtual hierarchy type, the virtual isolation efficacy level, the remaining demand decoration types, and the key part attribute information, as follows: a decoration database can be obtained, which comprises M virtual decoration modes and the decoration parameters corresponding to each of them, where M is a positive integer greater than N. Among the M sets of decoration parameters, those whose decoration hierarchy is the virtual hierarchy type and whose isolation efficacy level is the virtual isolation efficacy level are determined as the first candidate decoration parameters. Then, the second candidate decoration parameters matched with the remaining demand decoration types can be obtained from the first candidate decoration parameters; next, the third candidate decoration parameters matched with the key part attribute information can be obtained from the second candidate decoration parameters; and the virtual decoration modes corresponding to the third candidate decoration parameters may be determined as the N candidate virtual decoration modes.
The key part attribute information comprises sub-part sign information and a sub-part proportion, where the sub-part sign information is the characteristic information of a sub-part within the key part, and the sub-part proportion is the proportion that the sub-part occupies in the key part. Taking the case where the second candidate decoration parameters include a second candidate decoration parameter S_k as an example, the specific method for obtaining the third candidate decoration parameters matched with the key part attribute information from the second candidate decoration parameters may be: determine a first adaptation rate between the second candidate decoration parameter S_k and the sub-part proportion, and a second adaptation rate between the second candidate decoration parameter S_k and the sub-part sign information; then determine the total adaptation rate between the second candidate decoration parameter S_k and the key part attribute information according to the first adaptation rate and the second adaptation rate; if the total adaptation rate is greater than the adaptation threshold, the second candidate decoration parameter S_k may be determined as a third candidate decoration parameter.
It should be understood that the decoration parameters matching the virtual hierarchy type and the virtual isolation efficacy level are obtained from the decoration database as the first candidate decoration parameters; among these, the decoration parameters matched with the remaining demand decoration types (i.e., meeting the decoration type requirements of the target object) are taken as the second candidate decoration parameters; among the second candidate decoration parameters, the adaptation rate between each parameter and the key part attribute information is determined, and any parameter whose adaptation rate is greater than the adaptation threshold becomes a third candidate decoration parameter, the third candidate decoration parameters yielding the N candidate virtual decoration modes. Taking makeup as the decoration, being adapted to the key part attribute information means fitting the proportions and signs of the facial features (for example, if the nose is flatter, a decoration parameter with a stronger highlight can be selected; if the eyes are smaller, a decoration parameter that enlarges the eyes more can be selected).
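For ease of understanding, the following is a minimal sketch of the three-stage candidate filtering and the adaptation-rate test described in step S803; the parameter keys, rate formulas, equal weighting, and threshold are illustrative assumptions.

def total_adaptation_rate(params: dict, attrs: dict) -> float:
    """Combine the first adaptation rate (sub-part proportion) and the second
    adaptation rate (sub-part sign information) with assumed equal weights."""
    r1 = 1.0 - abs(params["ratio_target"] - attrs["sub_part_ratio"])      # first rate
    r2 = 1.0 if params["sign_target"] == attrs["sub_part_sign"] else 0.5  # second rate
    return 0.5 * r1 + 0.5 * r2

def candidate_manners(db, virt_level, iso_level, remaining_types, attrs,
                      adapt_threshold=0.7):
    first = [p for p in db if p["level"] == virt_level
             and p["isolation"] == iso_level]                    # first candidates
    second = [p for p in first if p["type"] in remaining_types]  # second candidates
    return [p for p in second                                    # third candidates
            if total_adaptation_rate(p, attrs) > adapt_threshold]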
Step S804, when the virtual hierarchy type does not match the demand hierarchy type, displaying hierarchy selection prompt information for the virtual hierarchy type and the demand hierarchy type, and determining the N candidate virtual decoration modes according to the hierarchy selection result of the hierarchy selection prompt information, the virtual isolation efficacy level, the remaining demand decoration types, and the key part attribute information.
Specifically, after the virtual hierarchy type determined according to the environment information is matched against the demand hierarchy type required by the target object, when they do not match (that is, the virtual hierarchy type differs from the demand hierarchy type required by the target object), the terminal device may generate and display hierarchy selection prompt information; the target object may make a hierarchy selection (choosing either the virtual hierarchy type or the demand hierarchy type) according to this prompt, and the terminal device determines the N candidate virtual decoration modes based on the hierarchy selection result of the target object. Specifically: if the hierarchy selection result is the virtual hierarchy type, the N candidate virtual decoration modes are determined according to the virtual hierarchy type, the virtual isolation efficacy level, the remaining demand decoration types, and the key part attribute information; if the hierarchy selection result is the demand hierarchy type, the N candidate virtual decoration modes are determined according to the virtual isolation efficacy level, the demand decoration type, and the key part attribute information.
For the specific implementation of determining the N candidate virtual decoration modes according to the virtual hierarchy type, the virtual isolation efficacy level, the remaining demand decoration types, and the key part attribute information, reference may be made to the description in step S803. The specific implementation of determining the N candidate virtual decoration modes according to the virtual isolation efficacy level, the demand decoration type (including the demand hierarchy type and the remaining demand decoration types), and the key part attribute information is the same, and may likewise refer to the description in step S803.
Optionally, in a possible embodiment, the terminal device may upload the activity scene information, the required decoration type, and the key part attribute information of the target object together to the service server. The service server may analyze each item of data in the key part attribute information through an artificial intelligence algorithm and match, in the decoration database, a plurality of candidate decoration modes that both fit the key part and meet the personalized decoration requirement of the target object for selection. In addition, the service server may score the candidate decoration modes according to the total matching degree and select the candidate decoration mode with the highest score as the optimal candidate decoration mode; when the plurality of candidate virtual decoration modes are displayed, the optimal candidate virtual decoration mode may be marked as a preferred recommendation for the target object to select. When the target object selects one or more candidate virtual decoration modes through single selection or multi-selection, virtual decoration processing is performed on the key part using the selected candidate virtual decoration modes, so as to obtain and display the virtual decoration objects.
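The scoring pass can be sketched along the same lines; the score callable standing in for the total matching degree is an assumption, as is the output format.

```python
def rank_candidates(candidate_modes, score):
    # Sort by total matching degree, highest first; the first entry is the
    # optimal candidate and is flagged as the preferred recommendation.
    ranked = sorted(candidate_modes, key=score, reverse=True)
    return [{"mode": mode, "preferred": index == 0}
            for index, mode in enumerate(ranked)]
```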
For ease of understanding, please refer to fig. 8, which is a schematic view of a scene for determining a virtual decoration object according to an embodiment of the present application. As shown in fig. 8, and as described with respect to fig. 2a and fig. 2b, the terminal device 100a may obtain the activity scene information, the key part attribute information, and the required decoration type, and send them to the service server 1000. After receiving the face attribute information (key part attribute information) of user A, the required decoration type (makeup preference information), and the activity scene information sent by the terminal device 100a, the service server 1000 may determine, in the makeup database, virtual makeup manners matching user A according to the face attribute information, the required decoration type, and the expected activity scene information (a virtual makeup manner may be understood as a makeup manner stored in the makeup database). Here, matching user A may be understood as matching the face image A', adapting to the activity scene information, and meeting the required decoration type of user A. For example, as shown in fig. 3a, assuming that the virtual makeup manners determined by the service server 1000 and matching user A include a virtual makeup manner 300a, a virtual makeup manner 300b, and a virtual makeup manner 300c, the service server 1000 may, after determining these virtual makeup manners, perform virtual makeup trial processing on the face image A' (that is, apply makeup to the face image A') using each of them.
As shown in fig. 3b, after virtual makeup trial processing is performed on the face image A' using the virtual makeup manner 300a, the face image A' has the virtual makeup effect corresponding to the virtual makeup manner 300a; because the face image A' is 3D imaging data of user A, this can also be understood as the face of user A having that virtual makeup effect, and the face image A' with the virtual makeup effect corresponding to the virtual makeup manner 300a may be referred to as a virtual makeup object 300a'. Similarly, performing virtual makeup trial processing on the face image A' using the virtual makeup manner 300b yields a virtual makeup object 300b', and using the virtual makeup manner 300c yields a virtual makeup object 300c'. For a specific implementation manner in which the service server 1000 determines the virtual makeup manners according to the required decoration type, the activity scene information, and the face attribute information, reference may be made to the description in the embodiment corresponding to fig. 8 above. Further, the service server 1000 may return the virtual makeup object 300a', the virtual makeup object 300b', and the virtual makeup object 300c' to the terminal device 100a.
In the embodiment of the application, a decoration application is provided in which a target object can input expected activity scene information and a required decoration type, and the decoration application can display N virtual decoration objects with different virtual decoration effects matched with the target object according to that input. The target object can select among the N virtual decoration objects; after the selection, the decoration application can synchronously display the selected target virtual decoration object and its corresponding decoration flow, so the target object can use the target virtual decoration object as a reference and perform real decoration processing under the guidance of the decoration flow. It should be understood that, through the input of the expected activity scene information and the required decoration type, each of the N virtual decoration objects determined by the decoration application is adapted to the activity scene, conforms to the expected decoration effect of the target object, and is matched with the target object. The target object can select any virtual decoration object as the target virtual decoration object, and then, with the target virtual decoration object and the decoration flow displayed synchronously, perform real decoration processing more accurately and in detail with both a reference effect and a guiding flow. That is to say, the application can automatically recommend to the target object, according to the expected activity scene information and the required decoration type, virtual decoration objects that adapt to the activity scene, meet the requirement of the target object, and match the target object, which improves both the recommendation precision and the recommendation efficiency.
Further, please refer to fig. 9, which is a schematic structural diagram of a data processing apparatus according to an embodiment of the present application. The data processing apparatus may be a computer program (comprising program code) running on a computer device, for example application software, and may be configured to perform the method illustrated in fig. 4. As shown in fig. 9, the data processing apparatus 1 may include: an information acquisition module 11, a decoration object display module 12, and a flow display module 13.
An information acquisition module 11, configured to obtain decoration configuration information associated with a target object;
a decoration object display module 12 for displaying N virtual decoration objects matched with the target object in the decoration application; each virtual decoration object is obtained by virtually decorating the key part based on a virtual decoration mode, and the virtual decoration mode adopted by each virtual decoration object is associated with the decoration configuration information; the target object comprises a key part;
a flow display module 13, configured to respond to an object selection operation for the N virtual decoration objects, and display a decoration flow corresponding to a target virtual decoration object selected from the N virtual decoration objects; the decoration flow is used for guiding the target object to carry out real decoration processing.
For specific implementation of the information acquisition module 11, the decoration object display module 12, and the flow display module 13, reference may be made to the description of step S101 to step S103 in the embodiment corresponding to fig. 4, which will not be described herein again.
In one embodiment, the decoration configuration information includes activity scene information;
the information acquisition module 11 may include: a position information display unit 111 and a scene information display unit 112.
The position information display unit 111 is used for responding to the triggering operation of the application starting control aiming at the decoration application and displaying the position information associated with the target object;
a scene information display unit 112, configured to display the activity scene information associated with the target object when the position information receives the trigger operation.
For specific implementation of the position information display unit 111 and the scene information display unit 112, reference may be made to the description of step S101 in the embodiment corresponding to fig. 4, which will not be repeated herein.
In one embodiment, the scene information display unit 112 may include: a scene interface display sub-unit 1121, an object type display sub-unit 1122, and a scene information display sub-unit 1123.
A scene interface display subunit 1121, configured to respond to a trigger operation for the location information and display a scene type selection interface;
a target type display subunit 1122, configured to display, according to the type selection operation on the scene type selection interface, the target scene type selected by the type selection operation;
and a scene information display subunit 1123, configured to display, in response to a trigger operation of the confirmation control for the target scene type, the active scene information associated with the target scene type and the location information.
For specific implementation manners of the scene interface display subunit 1121, the target type display subunit 1122, and the scene information display subunit 1123, reference may be made to the description of step S101 in the embodiment corresponding to fig. 4, which will not be repeated herein.
In one embodiment, the decoration configuration information includes a required decoration type;
the information acquisition module 11 may include: a decoration type interface unit 113 and a requirement type display unit 114.
A decoration type interface unit 113, configured to respond to a trigger operation of an application start control for a decoration application, and display a decoration type selection interface;
and a requirement type display unit 114, configured to display a requirement decoration type associated with the target object in response to a trigger operation for the decoration type selection interface.
For specific implementation of the decoration type interface unit 113 and the requirement type display unit 114, reference may be made to the description of step S101 in the embodiment corresponding to fig. 4, which will not be described herein again.
In one embodiment, the decoration type selection interface includes historical decoration types;
The requirement type display unit 114 is further specifically configured to determine the historical decoration type as the required decoration type in response to a trigger operation of the confirmation control for the historical decoration type, and display the required decoration type.
In one embodiment, the requirement type display unit 114 is further specifically configured to display the required decoration type selected by the selection operation in response to a selection operation for a decoration type in the decoration type selection interface.
In one embodiment, the decoration object display module 12 is further specifically configured to display, in response to the confirmation operation for the decoration configuration information, N virtual decoration objects matching the target object in the decoration application; or displaying the decoration configuration information, and displaying the N virtual decoration objects matched with the target object in the decoration application when the display duration of the decoration configuration information meets the display condition.
In one embodiment, the decoration object display module 12 may include: a candidate manner acquisition unit 121, a virtual decoration processing unit 122, and an object display unit 123.
A candidate mode acquiring unit 121 configured to acquire N candidate virtual decoration modes matching the decoration configuration information;
a virtual decoration processing unit 122, configured to perform virtual decoration processing on the key part according to the virtual decoration parameter of each candidate virtual decoration mode in the N candidate virtual decoration modes, to obtain N virtual decoration objects;
an object display unit 123 for displaying the N virtual decoration objects in the decoration application.
For specific implementation manners of the candidate mode obtaining unit 121, the virtual decoration processing unit 122, and the object display unit 123, reference may be made to the description of step S102 in the embodiment corresponding to fig. 4, which will not be described herein again.
In one embodiment, the decoration configuration information includes activity scene information, a required decoration type, and key part attribute information of the key part; the required decoration type includes a required hierarchy type;
the candidate mode acquiring unit 121 may include: a virtual parameter acquisition subunit 1211 and a mode determination subunit 1212.
A virtual parameter obtaining subunit 1211, configured to obtain a virtual hierarchy type and a virtual isolation efficacy level adapted to the activity scene information;
a mode determining subunit 1212, configured to determine N candidate virtual decoration modes according to the virtual hierarchy type, the virtual isolation efficacy level, the remaining required decoration type, and the key part attribute information when the virtual hierarchy type matches the required hierarchy type; the remaining required decoration type is a decoration type other than the required hierarchy type in the required decoration type;
the manner determining subunit 1212 is further configured to, when the virtual hierarchy type does not match the required hierarchy type, display hierarchy selection prompt information for the virtual hierarchy type and the required hierarchy type, and determine N candidate virtual decoration manners according to a hierarchy selection result of the hierarchy selection prompt information, the virtual isolation efficacy level, the remaining required decoration types, and the key part attribute information.
For a specific implementation of the virtual parameter obtaining subunit 1211 and the mode determining subunit 1212, reference may be made to the description of step S102 in the embodiment corresponding to fig. 4, which will not be described herein again.
In one embodiment, the mode determination subunit 1212 is further specifically configured to obtain a decoration database; the decoration database comprises M virtual decoration modes and virtual decoration parameters corresponding to the M virtual decoration modes respectively; m is a positive integer greater than N;
the mode determining subunit 1212 is further specifically configured to determine, as a first candidate decoration parameter, a virtual decoration parameter whose level type is a virtual level type and whose isolation efficacy level is a virtual isolation efficacy level, among the M virtual decoration parameters;
the mode determination subunit 1212 is further specifically configured to obtain, from the first candidate decoration parameters, a second candidate decoration parameter that matches the remaining required decoration types;
the mode determining subunit 1212 is further specifically configured to obtain, from the second candidate decoration parameters, a third candidate decoration parameter that matches the key part attribute information;
the mode determining subunit 1212 is further specifically configured to determine the virtual decoration mode corresponding to the third candidate decoration parameter as N candidate virtual decoration modes.
In one embodiment, the key part attribute information includes sub-part sign information and a sub-part proportion, the sub-part sign information is characteristic information of sub-parts in the key part, and the sub-part proportion is the proportion of the sub-parts in the key part; the second candidate decoration parameters include a second candidate decoration parameter S_k, where k is a positive integer;
the mode determining subunit 1212 is further configured to determine a first adaptation rate between the second candidate decoration parameter S_k and the sub-part proportion, and a second adaptation rate between the second candidate decoration parameter S_k and the sub-part sign information;
the mode determining subunit 1212 is further specifically configured to determine, according to the first adaptation rate and the second adaptation rate, a total adaptation rate between the second candidate decoration parameter S_k and the key part attribute information;
the mode determining subunit 1212 is further specifically configured to determine the second candidate decoration parameter S_k as a third candidate decoration parameter if the total adaptation rate is greater than the adaptation threshold.
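One plausible concrete form of this computation, assumed here purely for illustration, is a weighted mean of the two partial adaptation rates; the similarity measure, the field names, and the weights are not prescribed by the application.

```python
def similarity(a, b):
    # Toy similarity for equal-length numeric vectors with values in
    # [0, 1]: 1.0 means identical, 0.0 means maximally different.
    return 1.0 - sum(abs(x - y) for x, y in zip(a, b)) / max(len(a), 1)

def total_adaptation_rate(param, part_attrs, w_ratio=0.5, w_sign=0.5):
    # First adaptation rate: parameter vs. the sub-part proportion.
    first = similarity(param["ratio_profile"], part_attrs["sub_part_ratio"])
    # Second adaptation rate: parameter vs. the sub-part sign information.
    second = similarity(param["sign_profile"], part_attrs["sub_part_signs"])
    # Total adaptation rate as a weighted mean of the two partial rates.
    return w_ratio * first + w_sign * second

def is_third_candidate(param, part_attrs, threshold=0.8):
    # S_k becomes a third candidate decoration parameter when its total
    # adaptation rate exceeds the adaptation threshold.
    return total_adaptation_rate(param, part_attrs) > threshold
```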
In an embodiment, the mode determining subunit 1212 is further specifically configured to determine, if the hierarchy selection result is a virtual hierarchy type, N candidate virtual decoration modes according to the virtual hierarchy type, the virtual isolation efficacy level, the remaining required decoration type, and the key part attribute information;
the manner determining subunit 1212 is further specifically configured to determine, if the hierarchy selection result is the required hierarchy type, N candidate virtual decoration modes according to the virtual isolation efficacy level, the required decoration type, and the key part attribute information.
In one embodiment, the flow display module 13 may include: a guide interface display unit 131 and an interface area display unit 132.
A guidance interface display unit 131 for displaying a decoration flow guidance interface in response to an object selection operation for the N virtual decoration objects;
an interface area display unit 132 configured to display a target virtual decoration object and a key part selected from the N virtual decoration objects in an object display area of the decoration flow guidance interface;
the interface area display unit 132 is further configured to display a decoration flow corresponding to the target virtual decoration object in the flow display area of the decoration flow guidance interface.
For specific implementation of the guidance interface display unit 131 and the interface area display unit 132, reference may be made to the description of step S103 in the embodiment corresponding to fig. 4, which will not be described herein again.
In one embodiment, the object display area includes a decoration object display area and a key part display area;
an interface area display unit 132, further specifically configured to display the target virtual decoration object in the decoration object display area;
the interface area display unit 132 is further specifically configured to display the key part in the key part display area.
In one embodiment, the decoration flow includes decoration audio data corresponding to the decoration text flow;
the interface area display unit 132 is further specifically configured to display a decoration text flow in the flow display area of the decoration flow guidance interface;
the interface area display unit 132 is further specifically configured to synchronously output decoration audio data while displaying the decoration text flow.
In one embodiment, the decoration text flow includes a decoration text step T_a, where a is a positive integer;
the interface area display unit 132 is further specifically configured to obtain a text timestamp of the decoration text step T_a;
the interface area display unit 132 is further specifically configured to traverse an audio timestamp set corresponding to the decoration audio data; the audio time stamp set comprises one or more audio time stamps, and one audio time stamp is a time stamp corresponding to one sub-audio data in the decorative audio data;
the interface area display unit 132 is further specifically configured to determine, as a target audio timestamp, an audio timestamp in the audio timestamp set that has a time alignment relationship with the text timestamp;
the interface area display unit 132 is further specifically configured to determine the sub-audio data corresponding to the target audio timestamp as the target sub-audio data, and synchronously output the target sub-audio data while displaying the decoration text step T_a.
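The alignment step amounts to a lookup from the text timestamp into the audio timestamp set. In the sketch below, exact equality stands in for the time alignment relation (a tolerance window would serve equally well), and the data shapes are assumptions.

```python
def sub_audio_for_step(text_timestamp, audio_clips):
    """audio_clips is assumed to map each audio timestamp to its
    sub-audio data."""
    for audio_timestamp, clip in audio_clips.items():
        # The time alignment relation is modelled as exact equality here.
        if audio_timestamp == text_timestamp:
            return clip  # target sub-audio data for this text step
    return None  # no aligned sub-audio for this decoration text step
```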
In one embodiment, the data processing apparatus 1 may further include: a difference region determining module 14 and a correction information display module 15.
A difference region determining module 14, configured to obtain, in the process of performing real decoration processing on the target object, the real decoration object obtained by the real decoration processing;
a difference region determining module 14, configured to compare the real decoration object with the target virtual decoration object, and determine a difference decoration region;
the correction information display module 15 is configured to generate correction decoration prompt information according to the difference decoration area, and display the correction decoration prompt information; the correction decoration prompting information is used for prompting the target object to perform correction decoration processing in the difference decoration area.
For specific implementation manners of the difference region determining module 14 and the correction information display module 15, reference may be made to the description of step S103 in the embodiment corresponding to fig. 4, which will not be described herein again.
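The comparison-and-prompt loop of these two modules can be sketched as follows, assuming the key part is divided into named regions with pixel masks and that a per-region difference metric is available; the region names, the metric, and the tolerance are all assumptions.

```python
def difference_regions(real_img, virtual_img, regions, metric,
                       tolerance=0.1):
    """regions maps a region name to a pixel mask (e.g. a boolean NumPy
    array); metric returns a difference score for two image crops."""
    return [name for name, mask in regions.items()
            if metric(real_img[mask], virtual_img[mask]) > tolerance]

def correction_prompts(region_names):
    # One correction decoration prompt per differing region.
    return [f"Touch up the {name} area to match the reference effect."
            for name in region_names]
```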
In the embodiment of the present application, a decoration application is provided, in which decoration configuration information associated with a target object may be obtained and N virtual decoration objects matched with the target object (obtained by virtually decorating the key part in different virtual decoration manners) are displayed. The target object can select among the N virtual decoration objects; after the selection, the decoration application can display the decoration flow corresponding to the selected target virtual decoration object, and the target object can use the target virtual decoration object as a reference and perform real decoration processing under the guidance of that flow. It should be understood that, by obtaining the decoration configuration information, each of the N virtual decoration objects determined by the decoration application is adapted to the decoration configuration information (i.e., matched with the target object). The target object can select any virtual decoration object as the target virtual decoration object, and then, with the corresponding decoration flow displayed, perform real decoration processing more accurately and in detail with both a reference effect and a guiding flow. That is to say, the application can automatically recommend to the target object, according to the decoration configuration information, different virtual decoration objects that match the target object, which improves both the recommendation precision and the recommendation efficiency.
Further, please refer to fig. 10, which is a schematic structural diagram of a computer device according to an embodiment of the present application. As shown in fig. 10, the data processing apparatus 1 in the embodiment corresponding to fig. 9 may be applied to the computer device 8000, and the computer device 8000 may include: a processor 8001, a network interface 8004, and a memory 8005; further, the computer device 8000 may include a user interface 8003 and at least one communication bus 8002. The communication bus 8002 is used for connection and communication between these components. The user interface 8003 may include a display and a keyboard; optionally, the user interface 8003 may further include a standard wired interface and a wireless interface. The network interface 8004 may optionally include a standard wired interface and a wireless interface (e.g., a WI-FI interface). The memory 8005 may be a high-speed RAM memory or a non-volatile memory, such as at least one disk memory; optionally, the memory 8005 may also be at least one storage device located remotely from the aforementioned processor 8001. As shown in fig. 10, the memory 8005, as a computer-readable storage medium, may include an operating system, a network communication module, a user interface module, and a device control application program.
In the computer device 8000 shown in fig. 10, the network interface 8004 may provide network communication functions, the user interface 8003 is primarily configured to provide an input interface for a user, and the processor 8001 may be used to invoke the device control application stored in the memory 8005 to implement:
acquiring decoration configuration information associated with a target object;
displaying N virtual decoration objects matched with the target object in the decoration application; each virtual decoration object is obtained by virtually decorating the key part based on a virtual decoration mode, and the virtual decoration mode adopted by each virtual decoration object is associated with the decoration configuration information; the target object comprises a key part;
responding to the object selection operation aiming at the N virtual decoration objects, and displaying the decoration flow corresponding to the target virtual decoration object selected from the N virtual decoration objects; the decoration flow is used for guiding the target object to carry out real decoration processing.
It should be understood that the computer device 8000 described in this embodiment may perform the data processing method described in the embodiments corresponding to fig. 4 to fig. 7, and may also implement the data processing apparatus 1 described in the embodiment corresponding to fig. 9, which will not be repeated here. In addition, the description of the beneficial effects of the same method is also omitted.
Further, it should be noted that an embodiment of the present application also provides a computer-readable storage medium, in which the computer program executed by the aforementioned computer device 8000 is stored, the computer program including program instructions; when a processor executes the program instructions, the data processing method described in the embodiments corresponding to fig. 4 to fig. 7 can be performed, so details are not repeated here. In addition, the description of the beneficial effects of the same method is also omitted. For technical details not disclosed in the embodiments of the computer-readable storage medium referred to in the present application, reference is made to the description of the method embodiments of the present application.
The computer-readable storage medium may be an internal storage unit of the data processing apparatus or of the computer device provided in any of the foregoing embodiments, for example, a hard disk or a memory of the computer device. The computer-readable storage medium may also be an external storage device of the computer device, such as a plug-in hard disk, a Smart Memory Card (SMC), a Secure Digital (SD) card, or a flash card provided on the computer device. Further, the computer-readable storage medium may include both an internal storage unit and an external storage device of the computer device. The computer-readable storage medium is used to store the computer program and other programs and data required by the computer device, and may also be used to temporarily store data that has been output or is to be output.
In one aspect of the application, a computer program product or computer program is provided, the computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to perform the method provided by one aspect of the embodiments of the present application.
The terms "first", "second", and the like in the description, claims, and drawings of the embodiments of the present application are used for distinguishing between different objects, not for describing a particular order. Furthermore, the term "comprises" and any variations thereof are intended to cover non-exclusive inclusions. For example, a process, method, apparatus, product, or device that comprises a list of steps or elements is not limited to the listed steps or elements, but may alternatively include other steps or elements not listed or inherent to such process, method, product, or device.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein may be implemented in electronic hardware, computer software, or a combination of both; to clearly illustrate the interchangeability of hardware and software, the composition and steps of the examples have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends on the particular application and the design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The method and the related apparatus provided by the embodiments of the present application are described with reference to the flowchart and/or the structural diagram of the method provided by the embodiments of the present application, and each flow and/or block of the flowchart and/or the structural diagram of the method, and the combination of the flow and/or block in the flowchart and/or the block diagram can be specifically implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block or blocks of the block diagram. These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block or blocks of the block diagram. These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block or blocks.
The above disclosure describes only preferred embodiments of the present application and is not to be construed as limiting its scope; equivalent variations and modifications made in accordance with the present application therefore remain within its scope.
Claims (21)
1. A method of data processing, comprising:
acquiring decoration configuration information associated with a target object;
displaying N virtual decoration objects matched with the target object in a decoration application; each virtual decoration object is an object obtained by virtually decorating a key part based on a virtual decoration mode, and the virtual decoration mode adopted by each virtual decoration object is associated with the decoration configuration information; the target object contains the key part;
responding to the object selection operation aiming at the N virtual decoration objects, and displaying the decoration flow corresponding to the target virtual decoration object selected from the N virtual decoration objects; the decoration process is used for guiding the target object to carry out real decoration processing.
2. The method of claim 1, wherein the decoration configuration information includes activity scene information;
the obtaining decoration configuration information associated with the target object comprises:
responding to the triggering operation of an application starting control aiming at the decoration application, and displaying the position information associated with the target object;
displaying the activity scene information associated with the target object when the position information receives a trigger operation.
3. The method according to claim 2, wherein the displaying the activity scene information associated with the target object when the position information receives a trigger operation comprises:
responding to the trigger operation aiming at the position information, and displaying a scene type selection interface;
displaying the target scene type selected by the type selection operation according to the type selection operation on the scene type selection interface;
displaying the activity scene information associated with the target scene type and the location information in response to a trigger operation for the target scene type.
4. The method of claim 1, wherein the decoration configuration information includes a required decoration type;
the acquiring decoration configuration information associated with the target object comprises:
responding to the trigger operation of the application starting control aiming at the decoration application, and displaying a decoration type selection interface;
and responding to the trigger operation aiming at the decoration type selection interface, and displaying the required decoration type associated with the target object.
5. The method of claim 4, wherein the decoration type selection interface includes a historical decoration type;
the response is to the trigger operation of the decoration type selection interface, and the display of the requirement decoration type associated with the target object comprises the following steps:
and responding to the trigger operation aiming at the historical decoration type, determining the historical decoration type as the required decoration type, and displaying the required decoration type.
6. The method of claim 4, wherein displaying the required decoration type associated with the target object in response to the triggering operation for the decoration type selection interface comprises:
responding to the selection operation of the decoration type in the decoration type selection interface, and displaying the required decoration type selected by the selection operation.
7. The method of claim 1, wherein displaying N virtual decoration objects matching the target object in a decoration application comprises:
responding to the confirmation operation aiming at the decoration configuration information, and displaying N virtual decoration objects matched with the target object in a decoration application; or,
displaying the decoration configuration information, and displaying the N virtual decoration objects matched with the target object in the decoration application when the display duration of the decoration configuration information meets a display condition.
8. The method of claim 1, wherein displaying N virtual decoration objects matching the target object in a decoration application comprises:
acquiring N candidate virtual decoration modes matched with the decoration configuration information;
respectively carrying out virtual decoration processing on the key part according to the virtual decoration parameter of each candidate virtual decoration mode in the N candidate virtual decoration modes to obtain the N virtual decoration objects;
displaying the N virtual decoration objects in the decoration application.
9. The method of claim 8, wherein the decoration configuration information comprises activity scene information, a required decoration type, and key part attribute information of the key part; the required decoration type comprises a required hierarchy type;
the obtaining of the N candidate virtual decoration modes matched with the decoration configuration information includes:
acquiring a virtual hierarchy type and a virtual isolation efficacy level adapted to the activity scene information;
when the virtual hierarchy type matches the required hierarchy type, determining the N candidate virtual decoration modes according to the virtual hierarchy type, the virtual isolation efficacy level, the remaining required decoration type, and the key part attribute information; the remaining required decoration type is a decoration type other than the required hierarchy type in the required decoration type;
when the virtual hierarchy type does not match the required hierarchy type, displaying hierarchy selection prompt information for the virtual hierarchy type and the required hierarchy type, and determining the N candidate virtual decoration modes according to a hierarchy selection result of the hierarchy selection prompt information, the virtual isolation efficacy level, the remaining required decoration type, and the key part attribute information.
10. The method according to claim 9, wherein the determining the N candidate virtual decoration modes according to the virtual hierarchy type, the virtual isolation efficacy level, the remaining required decoration type, and the key part attribute information comprises:
acquiring a decoration database; the decoration database comprises M virtual decoration modes and virtual decoration parameters corresponding to the M virtual decoration modes respectively; m is a positive integer greater than N;
determining a virtual decoration parameter of which the hierarchy type is the virtual hierarchy type and the isolation efficacy level is the virtual isolation efficacy level in the M virtual decoration parameters as a first candidate decoration parameter;
obtaining a second candidate decoration parameter matched with the remaining required decoration types from the first candidate decoration parameters;
acquiring a third candidate decoration parameter matched with the key part attribute information from the second candidate decoration parameters;
and determining the virtual decoration mode corresponding to the third candidate decoration parameter as the N candidate virtual decoration modes.
11. The method according to claim 10, wherein the key part attribute information includes sub-part sign information and a sub-part proportion, the sub-part sign information is characteristic information of sub-parts in the key part, and the sub-part proportion is the proportion of the sub-parts in the key part; the second candidate decoration parameters comprise a second candidate decoration parameter S_k, where k is a positive integer;
the obtaining of the third candidate decoration parameter matched with the key part attribute information from the second candidate decoration parameters comprises:
determining a first adaptation rate between the second candidate decoration parameter S_k and the sub-part proportion, and a second adaptation rate between the second candidate decoration parameter S_k and the sub-part sign information;
determining, according to the first adaptation rate and the second adaptation rate, a total adaptation rate between the second candidate decoration parameter S_k and the key part attribute information;
determining the second candidate decoration parameter S_k as the third candidate decoration parameter if the total adaptation rate is greater than an adaptation threshold.
12. The method according to claim 9, wherein the determining the N candidate virtual decoration modes according to the hierarchy selection result of the hierarchy selection prompt information, the virtual isolation efficacy level, the remaining required decoration type, and the key part attribute information includes:
if the hierarchy selection result is the virtual hierarchy type, determining the N candidate virtual decoration modes according to the virtual hierarchy type, the virtual isolation efficacy level, the remaining required decoration type, and the key part attribute information;
and if the hierarchy selection result is the required hierarchy type, determining the N candidate virtual decoration modes according to the virtual isolation efficacy level, the required decoration type, and the key part attribute information.
13. The method according to claim 1, wherein the displaying, in response to the object selection operation for the N virtual decoration objects, the decoration flow corresponding to the target virtual decoration object selected from the N virtual decoration objects comprises:
responding to the object selection operation aiming at the N virtual decoration objects, and displaying a decoration flow guide interface;
displaying a target virtual decoration object selected from the N virtual decoration objects and the key part in an object display area of the decoration process guide interface;
and displaying the decoration process corresponding to the target virtual decoration object in the process display area of the decoration process guide interface.
14. The method of claim 13, wherein the object display area comprises a decoration object display area and a key part display area;
the displaying, in an object display area of the decoration flow guidance interface, a target virtual decoration object selected from the N virtual decoration objects and the key part, includes:
displaying the target virtual decoration object in the decoration object display area;
displaying the key part in the key part display area.
15. The method of claim 13, wherein the decoration flow comprises decoration audio data corresponding to a decoration text flow;
the displaying, in the flow display area of the decoration flow guidance interface, the decoration flow corresponding to the target virtual decoration object includes:
displaying the decoration text flow in a flow display area of the decoration flow guide interface;
and synchronously outputting the decoration audio data while displaying the decoration text flow.
16. The method of claim 15, wherein the decoration text flow comprises a decoration text step T_a, where a is a positive integer;
the synchronously outputting the decoration audio data while displaying the decoration text flow comprises:
obtaining a text timestamp of the decoration text step T_a;
traversing an audio time stamp set corresponding to the decoration audio data; the audio time stamp set comprises one or more audio time stamps, and one audio time stamp is a time stamp corresponding to one sub audio data in the decorative audio data;
determining an audio time stamp having a time alignment relation with the text time stamp in the audio time stamp set as a target audio time stamp;
determining the sub-audio data corresponding to the target audio timestamp as target sub-audio data, and synchronously outputting the target sub-audio data while displaying the decoration text step T_a.
17. The method of claim 1, further comprising:
acquiring a real decoration object obtained by real decoration processing of the target object in the process of real decoration processing of the target object;
comparing the real decorative object with the target virtual decorative object to determine a differential decorative area;
generating correction decoration prompt information according to the difference decoration area, and displaying the correction decoration prompt information; and the correction decoration prompt information is used for prompting the target object to perform correction decoration processing in the difference decoration area.
18. A data processing apparatus, comprising:
the information acquisition module is used for acquiring decoration configuration information associated with the target object;
the decoration object display module is used for displaying N virtual decoration objects matched with the target object in a decoration application; each virtual decoration object is obtained by virtually decorating key parts based on a virtual decoration mode, and the virtual decoration mode adopted by each virtual decoration object is associated with the decoration configuration information; the target object contains the key part;
a flow display module, configured to display, in response to an object selection operation for the N virtual decoration objects, a decoration flow corresponding to a target virtual decoration object selected from the N virtual decoration objects; the decoration flow is used for guiding the target object to perform real decoration processing.
19. A computer device, comprising: a processor, a memory, and a network interface;
the processor is coupled to the memory and the network interface, wherein the network interface is configured to provide network communication functionality, the memory is configured to store program code, and the processor is configured to invoke the program code to cause the computer device to perform the method of any of claims 1-17.
20. A computer-readable storage medium, in which a computer program is stored which is adapted to be loaded by a processor and to carry out the method of any one of claims 1 to 17.
21. A computer program product or computer program, characterized in that it comprises computer instructions stored in a computer-readable storage medium, said computer instructions being adapted to be read and executed by a processor, to cause a computer device having said processor to perform the method of any of claims 1-17.