Detailed Description
To enable those skilled in the art to better understand the technical solutions in the embodiments of the present invention, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the drawings in the embodiments of the present invention. It is apparent that the described embodiments are only a part of the embodiments of the present invention, rather than all of them. All other embodiments obtained by a person skilled in the art based on the embodiments of the present invention shall fall within the protection scope of the embodiments of the present invention.
The following further describes specific implementations of the embodiments of the present invention with reference to the drawings.
Example one
Referring to fig. 1a, a flowchart illustrating steps of a data processing method according to a first embodiment of the present invention is shown.
The data processing method of the embodiment comprises the following steps:
Step S102: obtaining human body feedback data of an audience while watching a performance work, and feedback time information corresponding to the human body feedback data.
The human body feedback data is used to indicate the emotional feedback given by the audience to the episodes in the performance work while viewing it. For example, the emotional feedback when a certain episode is watched is happy (e.g., laughing, dancing, etc.), or the emotional feedback when a certain episode is watched is sad (e.g., crying, covering the face, etc.), and so on. The human body feedback data includes, but is not limited to, at least one of facial expression data, posture data, vital sign data, arm movement data, and sound data.
The human body feedback data may indicate the viewer's feedback at a certain moment (e.g., an expression at a certain moment), or may indicate a set of the viewer's feedback within a certain time period (e.g., expressions over several seconds or several minutes), which is not limited by this embodiment.
A person skilled in the art may obtain the human body feedback data in any suitable manner, with the user's authorization, which is not limited in this embodiment. For example, images of the audience are acquired and posture recognition or expression recognition is performed on the images to obtain the human body feedback data. Alternatively, facial muscle changes, limb muscle changes, etc. of the audience are captured through contact or non-contact sensors to obtain the human body feedback data, and so on.
In addition, the audience may be an audience directly watching the performance work in a theater or other venue, or an audience watching a live or recorded performance work through a terminal device, which is not limited by this embodiment. That is, the scheme of the embodiment of the present invention is suitable not only for on-site performance scenarios but also for off-site performance scenarios, provided that human body feedback data of the audience can be obtained.
The feedback time information corresponding to the human body feedback data is used for indicating the time when the human body feedback data is generated by the audience, and the time information can comprise one or more time periods or one or more time moments.
The feedback time information may be a relative time (e.g., a time interval relative to the start time of the performance), or may be the time at which the human body feedback data is generated (e.g., "2020-1-1 12:22" or the like).
Two specific examples of human body feedback data and corresponding feedback time information are listed below:
For example, the human body feedback data includes posture data A, and the corresponding feedback time information indicates that the time at which the audience generated posture data A is 5 minutes from the start time of the performance work, that is, the audience generated posture data A at the 5th minute after the performance work started.
For example, the human body feedback data comprises expression data A and sound data A, and the corresponding feedback time information comprises time period 1 (such as "2020-1-3 12:21:34" to "2020-1-3 12:21:52") and time period 2 (such as "2020-1-3 12:25:00" to "2020-1-3 12:26:02"), wherein time period 1 indicates that the audience generated expression data A within time period 1, and time period 2 indicates that the audience generated sound data A within time period 2.
Of course, the expression data A and the sound data A (and possibly posture data as well) may also be generated at the same time, and the embodiment of the present invention is not limited herein. In other embodiments, the human body feedback data and the feedback time information may take any other suitable form, and the embodiments of the present invention are not limited thereto.
Step S104: performing emotion recognition on the human body feedback data to acquire corresponding emotion information, and determining emotion time information corresponding to the emotion information according to the feedback time information.
In this embodiment, the emotion information includes at least one of the following: emotion information of a happy emotion, emotion information of a sad emotion, and emotion information of a calm emotion, but is not limited thereto.
As needed, those skilled in the art may further subdivide the different emotion information to obtain more precise emotion information. For example, the emotion information of a happy emotion may be subdivided into three detailed emotions: smiling, laughing, and laughing out loud.
For the various types of data in the human body feedback data, those skilled in the art may perform emotion recognition in any appropriate manner, and emotion recognition may be performed on different types of data in the same or different manners, which is not limited in this embodiment.
Taking the posture data as an example, one possible emotion recognition approach is: posture data 0 indicating a happy emotion is configured in advance, the posture data B contained in the human body feedback data is matched against posture data 0, and if they match, the emotion information B corresponding to posture data B is determined to be emotion information of a happy emotion.
Another possible emotion recognition approach is: using a trained neural network model A with an emotion recognition function, the posture data B is input into neural network model A, and the recognized emotion information B (e.g., emotion information of a happy emotion) is output from it.
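As a rough illustration of the matching-based approach above, the following Python sketch compares incoming posture data against preconfigured reference postures and maps it to an emotion label. All names and numeric values (reference_postures, recognize_emotion, the feature vectors, the threshold) are hypothetical placeholders, not part of the claimed method.

```python
import numpy as np

# Hypothetical reference postures configured in advance: each emotion label
# maps to a representative posture feature vector (e.g., joint angles).
reference_postures = {
    "happy": np.array([0.9, 0.8, 0.1]),
    "sad":   np.array([0.1, 0.2, 0.9]),
    "calm":  np.array([0.5, 0.5, 0.5]),
}

def recognize_emotion(posture_vector, threshold=0.3):
    """Return the emotion whose reference posture is closest to the input,
    or None if no reference is within the matching threshold."""
    best_label, best_dist = None, float("inf")
    for label, ref in reference_postures.items():
        dist = np.linalg.norm(posture_vector - ref)
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label if best_dist <= threshold else None

# Posture data B extracted from the human body feedback data.
posture_b = np.array([0.85, 0.75, 0.15])
print(recognize_emotion(posture_b))  # -> "happy" under these illustrative values
```

The neural-network variant would simply replace recognize_emotion with a call to a trained classifier.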
After the emotion information is obtained, the corresponding emotion time information can be obtained according to the feedback time information of the corresponding human body feedback data. For example, for emotion information B, the corresponding human body feedback data is posture data B, and the feedback time information corresponding to posture data B can be used as the emotion time information of emotion information B.
Step S106: determining the correspondence between the emotion information of the audience and the episodes in the performance work according to the emotion time information, and determining the feedback result of the audience for the episodes according to the correspondence.
In a specific implementation, matching the plot with the emotional time information may be implemented in any suitable manner, which is not limited by the embodiment. For example, a corresponding mark is set for each episode, each emotional time information also has a corresponding mark, and the episodes and the emotional time information matched with each other have the same mark, so that the corresponding relationship between the episodes and the emotional time information can be determined according to the marks. Or, a corresponding timestamp may be set for each episode, and the timestamp of the episode is matched with the emotional time information, thereby obtaining a correspondence.
The timestamp of an episode can be determined in any suitable manner, for example, by manually labeling the timestamp corresponding to each episode in advance, or by analyzing the audio/video data of the performance by means of semantic analysis or the like and determining the timestamp corresponding to the episode according to the analysis result, which is not limited in this embodiment. In one possible approach, if the emotion time information indicates a time interval between the time at which the viewer generated the corresponding emotion information and a reference time point (e.g., the start time of the performance), then, according to the timestamp of each episode, an episode whose timestamp is the same as the emotion time information, or differs from it by less than a preset range (the preset range may be determined as needed by those skilled in the art, such as ±1 minute, ±20 seconds, etc.), is found, and the correspondence between the emotion time information and the episode is thereby determined.
In another possible manner, if the emotion time information indicates the time at which the viewer generated the corresponding emotion information (e.g., "2020-1-2 13:21:32"), the time interval between the emotion time information and the performance start time may be calculated from the start time of the performance and the time at which the corresponding emotion information was generated, and the correspondence between the emotion time information and the episode may then be determined from the timestamps of the episodes.
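A minimal sketch of the two time-matching manners described above, assuming episode timestamps are stored as offsets (in seconds) from the performance start and the preset range is ±1 minute; the data values and function names are illustrative only.

```python
from datetime import datetime

# Hypothetical episode timestamps, expressed as offsets (in seconds) from the
# start of the performance.
episode_timestamps = {"episode_A": 300, "episode_B": 780}

def to_offset(emotion_time, performance_start):
    """Convert an absolute emotion time (e.g. '2020-1-2 13:21:32') into an
    offset from the performance start time, in seconds."""
    return (emotion_time - performance_start).total_seconds()

def match_episode(emotion_offset, tolerance=60):
    """Return the episode whose timestamp differs from the emotion time by
    less than the preset range (here +/- 1 minute), or None."""
    for episode, ts in episode_timestamps.items():
        if abs(emotion_offset - ts) <= tolerance:
            return episode
    return None

start = datetime(2020, 1, 2, 13, 16, 0)
emotion_time = datetime(2020, 1, 2, 13, 21, 32)
offset = to_offset(emotion_time, start)   # 332 seconds after the start
print(match_episode(offset))              # -> "episode_A" (within +/- 1 minute of 300 s)
```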
Because the emotion time information corresponds to the emotion information, once the correspondence between the emotion time information and the episodes is determined, the emotion information can be mapped to the episodes, and the feedback result of the audience for each corresponding episode can be determined accordingly. For example, for episode A, the audience's feedback result may be that 5% smiled, 50% laughed, 20% laughed out loud, and 25% showed no reaction, etc.
In this way, the emotion information of the audience can be determined from the human body feedback data generated while watching the performance work, the emotion time information corresponding to the emotion information can be determined in combination with the feedback time information at which the human body feedback data was generated, and the feedback result of the audience for the episodes can then be determined through the correspondence between the emotion time information and the episodes of the performance work.
Compared with manual observation, which can only yield general results based on an observer's subjective impression (such as "good feedback" or "poor feedback"), the feedback result in this embodiment provides subdivided feedback for each episode in the performance, so that it can be seen intuitively from the feedback result which episodes received audience feedback meeting the expected standard and which did not, which facilitates subsequent targeted optimization of the episodes that fall short of expectations.

It should be noted that an episode in this embodiment may be understood as an episode corresponding to a period of time in the performance work, or as an episode corresponding to a certain moment (for example, the climax point of a certain scene); a person skilled in the art may mark different episodes in the performance work as needed, which is not limited in this embodiment.
In addition, the feedback result can take the form of data, so that the feedback results can be stored and accumulated, facilitating subsequent comparative analysis and the like.
As shown in fig. 1b and 1c, the data processing method is described below with reference to a specific usage scenario:
In this usage scenario, the process of obtaining the audience's feedback result for the episodes is described, taking as an example that the method is deployed on a server (which may be a cloud server or a conventional server) used to obtain the feedback of an audience directly watching a performance work (such as a comedy-type drama) in a theater.
Of course, in other usage scenarios, the method may be deployed on an appropriate terminal device or other device having computing capabilities.
The laugh points in the script of the complete performance work are determined in advance, and each laugh point is taken as an episode (each episode is provided with a timestamp indicating its position in the complete performance work).
While the audience watches the performance work, human body feedback data of the audience and the corresponding feedback time information are acquired.
For example, after user authorization, image data and/or sound data of the audience can be collected through a camera and/or a microphone, and human body feedback data of the audience is obtained from the image data and/or the sound data. For example, if image data 1 is acquired at time t1 and human body feedback data 1 is obtained from image data 1, the corresponding feedback time information can be determined from the acquisition time of image data 1 (i.e., time t1).
Alternatively, the human body feedback data of the audience is obtained through a sensor arranged on the seat or a wearable sensor worn by the audience, and the feedback time information of the human body feedback data can be determined from the time at which the data was collected. For example, if human body feedback data 2 is obtained through the wearable sensor at time t2, the corresponding feedback time information can be determined from time t2.
Emotion recognition is performed on the human body feedback data to obtain emotion information, and emotion time information corresponding to the emotion information is determined according to the feedback time information.
For example, emotion recognition is performed on the human body feedback data through a neural network model with an emotion recognition function to obtain the corresponding emotion information. The emotion information includes at least one of the following: emotion information of a happy emotion, emotion information of a calm emotion, emotion information of a sad emotion, and the like.
For each emotion included in the emotion information, the corresponding emotion time information may be determined according to the feedback time information. For example, the emotion information includes emotion A (corresponding to posture data A), emotion B (corresponding to expression data B), and emotion C (corresponding to sound data C), indicating a happy emotion, a calm emotion, and a happy emotion, respectively. According to the feedback time information, the acquisition time of posture data A corresponding to emotion A is determined to be time 1, and the times corresponding to emotion B and emotion C are determined to be time 2 and time 3, respectively; the emotion time information corresponding to the emotion information is thereby obtained.
The correspondence between the emotion time information and the episodes is determined, and the feedback result corresponding to each episode is determined according to the correspondence.
For example, it is determined that episode 1 corresponds to emotion A based on the timestamp of episode 1 (e.g., time 1) and the emotion time information. In this way, the feedback emotions of the audience corresponding to all the episodes can be determined, and the feedback result is determined from these feedback emotions.
An exemplary feedback result is, for example: at the Mth moment after the performance work starts, the emotion information fed back by the audience is emotion information of a happy emotion; at the Nth moment after the performance work starts, the emotion information fed back by the audience is emotion information of a calm emotion, and so on.
As needed, the episodes corresponding to the Mth moment and the Nth moment are determined in combination with the timestamps of the episodes, thereby determining the audience's feedback for those episodes.
Another exemplary feedback result is, for example: at the Mth moment after the performance work starts, 20% of the emotion information fed back by the audience is emotion information of a happy emotion and 80% is emotion information of a calm emotion; at the Nth moment after the performance work starts, 40% of the emotion information fed back by the audience is emotion information of a calm emotion and 60% is emotion information of a sad emotion, and so on.
The specific form of the feedback result can be determined by those skilled in the art as needed, as long as it embodies the audience's emotional feedback on the episodes.
Through this embodiment, emotion recognition is performed on the human body feedback data obtained while the audience watches the performance work to obtain the corresponding emotion information, the emotion time information corresponding to the emotion information is determined in combination with the feedback time information, and the emotion information fed back by the audience for the episodes is then determined according to the correspondence between the emotion time information and the episodes in the performance work, thereby determining the feedback result. This realizes analysis and observation of audience feedback subdivided down to individual episodes, ensures stable analysis quality, requires no excessive manpower investment, and helps reduce observation costs.
The data processing method of the present embodiment may be performed by any suitable electronic device having data processing capabilities, including but not limited to: servers, mobile terminals (such as tablet computers, mobile phones and the like), PCs and the like.
Example two
Referring to fig. 2a, a flow chart of steps of a data processing method according to a second embodiment of the invention is shown.
The data processing method of the present embodiment includes steps S102 to S106 of the first embodiment.
Wherein, before step S102, the method further comprises:
Step S100: obtaining at least one of an image, audio, vital sign data, and arm movement data of the audience.
In a specific implementation, for example, when the audience watches the performance work in a theater or other venue, images and/or audio of the audience can be collected through a camera, a microphone, or other devices installed in the venue, and vital sign data and/or arm movement data of the audience can be collected through a wearable device worn by the audience, such as a bracelet. The vital signs of the audience, such as heart rate, can be detected in real time through the bracelet. In addition, the bracelet can also detect arm movement data of the wearer, so that emotion information of the audience can be determined from the arm movement data.
Collecting the vital sign data and/or the arm movement data of the audience through the wearable device makes the collected data more authentic and reliable, so that the emotion information of the audience can be determined more accurately.
In another specific implementation, for example, when the audience watches recorded and broadcasted performance works through the terminal device, the image and/or audio of the audience can be collected through a camera, a microphone and other devices on the terminal device, so as to obtain the human body feedback data of the audience in the subsequent steps.
In this embodiment, the images and/or audio of the audience may be collected in real time during the performance and processed in real time (for example, step S102 to step S106 are performed), or the images and/or audio of the audience may be collected periodically and then processed for the images and/or audio in the collection time period.
In order to determine the feedback time corresponding to the human body feedback data in the subsequent steps, a corresponding time stamp is included in each image and/or each piece of audio.
For the acquired image and/or audio, step S102 may be performed to acquire human feedback data and corresponding feedback time information.
For example, in one possible manner, step S102 includes at least one of the following modes:
mode A: and carrying out facial expression recognition on the collected image containing the human body feedback data to obtain the expression data and corresponding feedback time information.
For example, the image at time t is input to a trained neural network model having an expression recognition function, and expression data of the viewer included in the image is obtained using the neural network model.
The feedback time information (e.g. time t) corresponding to the expression data can be determined according to the timestamp corresponding to the image.
Of course, in other embodiments, the expression data may be obtained from the image in other manners, which is not limited in this embodiment.
Mode B: performing human body posture recognition on the collected image containing the human body feedback data to obtain posture data indicating the body posture of the audience and the corresponding feedback time information.
for example, the image at time t is input to a trained neural network model having a gesture recognition function, and the neural network model is used to obtain the pose data of the viewer included in the image.
The feedback time information (e.g., time t) corresponding to the pose data can be determined according to the time stamp corresponding to the image.
Of course, in other embodiments, obtaining the pose data from the image may be performed in other manners, which is not limited by the embodiment.
Mode C: performing sound recognition on the collected audio containing the human body feedback data to obtain sound data indicating the audience's feedback on the performance work and the corresponding feedback time information.
For example, the audio is processed through a neural network model with a voice recognition function or an existing voice recognition algorithm to obtain the sound data of the audience, and the feedback time information corresponding to the sound data is determined according to the timestamp of the audio.
Mode D: obtaining vital sign data of the audience and the corresponding feedback time information, wherein the vital sign data is collected through a wearable device worn by the audience.
For example, vital sign data such as the heart rate of the audience is collected as human body feedback data by a bracelet worn by the audience. The feedback time information of each vital sign data can be determined according to the acquisition time of the bracelet.
Mode E: obtaining arm movement data of the audience and the corresponding feedback time information, wherein the arm movement data is collected through a wearable device worn by the audience.
For example, arm position data or acceleration data of the viewer is collected as the arm movement data of the viewer through a bracelet worn by the viewer. The feedback time information of each piece of arm movement data can be determined according to the acquisition time of the bracelet.
Besides collecting vital sign data and arm movement data of the audience through the wearable equipment, the wearable equipment can also collect the ambient sound decibels around the audience, so that the feedback of the audience is determined according to the ambient sound decibels in the subsequent process.
It should be noted that the data collected by the wearable device may be stored in the wearable device, and the wearable device sends the collected data to the server at set intervals. Alternatively, the wearable device may send the collected data to the server in real time.
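The sketch below illustrates, under assumed placeholder recognizers, how data collected by the different modes might be routed to a corresponding extraction step and paired with its feedback time information; it is not the claimed implementation, and the function and field names are hypothetical.

```python
from dataclasses import dataclass
from typing import Any, Callable, Dict, List, Tuple

@dataclass
class FeedbackRecord:
    data_type: str      # "expression", "posture", "sound", "vital_sign", "arm_movement"
    data: Any           # the extracted human body feedback data
    timestamp: float    # feedback time information (seconds from performance start)

# Placeholder extractors standing in for the recognition models of modes A-C;
# real implementations would wrap expression/posture/voice recognition models.
def extract_expression(image): return {"expression": "smile"}
def extract_posture(image):    return {"posture": "leaning_forward"}
def extract_sound(audio):      return {"sound": "laughter"}

EXTRACTORS: Dict[str, Callable[[Any], Any]] = {
    "expression": extract_expression,   # mode A: facial expression recognition
    "posture": extract_posture,         # mode B: human body posture recognition
    "sound": extract_sound,             # mode C: sound recognition
}

def collect_feedback(samples: List[Tuple[str, Any, float]]) -> List[FeedbackRecord]:
    """samples: (data_type, raw_data, timestamp). Wearable readings (modes D and E)
    already arrive as structured data, so they are passed through unchanged."""
    records = []
    for data_type, raw, ts in samples:
        extractor = EXTRACTORS.get(data_type, lambda x: x)
        records.append(FeedbackRecord(data_type, extractor(raw), ts))
    return records

print(collect_feedback([("expression", "frame_0421.jpg", 312.0),
                        ("vital_sign", {"heart_rate": 96}, 312.5)]))
```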
In summary, through step S102, the human body feedback data of the audience at each moment in the performance period can be determined from the different types of collected data. This improves adaptability and enriches the types of human body feedback data, so that the audience's emotions can subsequently be determined more accurately and the quality of the feedback result is ensured.
In the present embodiment, the human body feedback data includes at least one of expression data, posture data, vital sign data, arm movement data, and sound data, but is not limited thereto.
According to the acquired human body feedback data, step S104 is performed to acquire the emotion information of the audience represented by the human body feedback data and the corresponding emotion time information. For example, in a specific implementation, performing emotion recognition on the human body feedback data in step S104 to acquire the corresponding emotion information may be implemented as: performing emotion recognition on the human body feedback data to acquire emotion information meeting a preset condition.
The emotion information meeting the preset condition includes at least one of the following: emotion information of a happy emotion, emotion information of a sad emotion, and emotion information of a calm emotion.
In one feasible manner, the posture data in the human body feedback data is matched against one or more pieces of preset posture data, the matching preset posture data is determined, and the emotion information corresponding to the posture data (denoted as emotion information A) is determined according to the emotion information corresponding to the preset posture data. It is then determined whether emotion information A meets the preset condition, and if so, emotion information A is taken as emotion information meeting the preset condition.
If the performance work is of a comedy type, the preset condition can be a happy emotion. If the emotion indicated by emotion information A is happy, emotion information A meets the preset condition and can be used as emotion information meeting the preset condition.
The process of determining the corresponding emotion information according to the expression data and the sound data is similar to the process of the posture data, and therefore, the process is not repeated.
In another feasible manner, the posture data is input into a neural network model for emotion recognition, and the emotion indicated by the posture data is determined through the neural network model, thereby obtaining the corresponding emotion information (denoted as emotion information B). Similarly to emotion information A, it may be determined whether emotion information B meets the preset condition, and if so, it is determined to be emotion information meeting the preset condition.
It should be noted that, if neural network models are used both when acquiring the human body feedback data in step S102 and when performing emotion recognition in step S104, the two neural network models may be trained separately or jointly. If the two neural network models are trained jointly, they can be regarded as one model, and the human body feedback data acquired in step S102 can be regarded as an intermediate result of that model. For example, the collected image data is input into the neural network model, the feature data of the image data extracted by a certain hidden layer of the model is used as the human body feedback data, and the output produced by the subsequent hidden layers and output layer from this human body feedback data is used as the emotion information.
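As a toy illustration of the jointly trained case, the sketch below uses a tiny feed-forward network in which an intermediate hidden-layer activation plays the role of the human body feedback data and the output layer yields the emotion information. The weights are random placeholders rather than a trained model, and all names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy weights standing in for a jointly trained network; in practice these
# would come from training, not random initialization.
W_hidden = rng.normal(size=(64, 16))   # image features -> hidden layer
W_out = rng.normal(size=(16, 3))       # hidden layer -> emotion scores
EMOTIONS = ["happy", "calm", "sad"]

def forward(image_features: np.ndarray):
    """The hidden activation is treated as the 'human body feedback data'
    (an intermediate result); the output is the emotion information."""
    hidden = np.maximum(image_features @ W_hidden, 0.0)   # intermediate features
    scores = hidden @ W_out
    return hidden, EMOTIONS[int(np.argmax(scores))]

features = rng.normal(size=(64,))       # e.g. a flattened image descriptor
hidden_feedback, emotion = forward(features)
print(hidden_feedback.shape, emotion)   # (16,) and one of the emotion labels
```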
Alternatively, the emotion of the audience may be determined from the vital sign data collected by the wearable device. For example, the vital sign data is matched against preset vital sign data for different emotions, so that the emotion of the audience at different times is determined to be happy, calm, sad, and so on. In addition, the emotion of the viewer can be determined from the collected arm movement data: for example, if the arm movement data (specifically, the movement data at the wrist) shows a clapping action during a certain period, the emotion of the viewer is determined to be a happy emotion; if the arm movement data shows a tear-wiping action during a certain period, the emotion of the viewer is determined to be a sad emotion; and so on.

After the emotion information of the viewer is determined, determining the emotion time information corresponding to the emotion information in step S104 may be implemented as: determining the feedback time information of the human body feedback data corresponding to the acquired emotion information as the emotion time information of that emotion information.
After the emotion information is acquired, the feedback time information of the human body feedback data corresponding to the emotion information can be used as the corresponding emotion time information.
Further, based on the emotional time information, in step S106, a feedback result is determined. For example, step S106 includes the following sub-steps:
Sub-step S1061: determining the correspondence between the episodes and the emotion information according to the episode time information corresponding to a plurality of episodes preset in the performance work and the emotion time information.
In a specific implementation, the corresponding time stamps may be marked for the plots in the performance composition in advance as the plot time information, or the plot time information may be a set of time stamps of all the plots, which is not limited in this embodiment.
Taking a comedy-type drama as an example, the drama includes N laugh points (i.e., episodes), and the timestamp corresponding to each laugh point is marked as its episode time information; alternatively, the set of the N timestamps is taken as the episode time information.
Taking the example that each plot has one corresponding plot time information, for each plot, matching is performed according to the corresponding plot time information and emotion time information, so as to determine the corresponding relationship between the plots and the emotion information.
For example, if the episode time information of episode A indicates time t1, it is determined that episode A corresponds to the emotion information whose emotion time information is t1.
Alternatively, in another specific implementation, the sub-step S1061 may be implemented as: aggregating the emotion information according to the sequence of the corresponding emotion time information; matching the aggregated emotional information with the plot of the performance works according to time, and determining the corresponding relation between the plot and the emotional information.
In this way, the emotional information can be aggregated onto the time axis to better and more conveniently match the plot time axis to determine the viewer's emotional feedback for each plot.
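A minimal sketch of this aggregate-then-match manner, assuming emotion times and episode timestamps are both expressed in seconds from the performance start; the bucket width, tolerance, and all values are illustrative only.

```python
from collections import defaultdict

# Emotion information records: (emotion_time_in_seconds, emotion_label),
# possibly from many audience members; values are illustrative.
emotion_records = [(298, "happy"), (305, "happy"), (310, "calm"),
                   (781, "sad"), (790, "sad")]

# Episode time information: episode -> timestamp (seconds from start).
episode_times = {"episode_A": 300, "episode_B": 785}

def aggregate_by_time(records, bucket=10):
    """Aggregate emotion information in order of its emotion time information,
    grouping records into fixed-width buckets along the time axis."""
    timeline = defaultdict(list)
    for t, emotion in sorted(records):
        timeline[t // bucket * bucket].append(emotion)
    return dict(timeline)

def match_to_episodes(timeline, tolerance=15):
    """Match the aggregated emotion information to episodes by time."""
    matches = defaultdict(list)
    for bucket_start, emotions in timeline.items():
        for episode, ts in episode_times.items():
            if abs(bucket_start - ts) <= tolerance:
                matches[episode].extend(emotions)
    return dict(matches)

timeline = aggregate_by_time(emotion_records)
print(match_to_episodes(timeline))
# e.g. {'episode_A': ['happy', 'happy', 'calm'], 'episode_B': ['sad', 'sad']}
```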
Sub-step S1062: determining the feedback result of the audience for the episodes according to the correspondence.
In a specific implementation, for episode a, the emotion information fed back to episode a by the viewer is determined according to the corresponding relationship, so as to determine the feedback result.
For example, for episode A, it is determined from the corresponding emotion information that 30% of viewers fed back a smile, 50% fed back a laugh, and 20% fed back laughing out loud, which is taken as the feedback result.
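A small sketch of turning the per-viewer emotions for one episode into such a percentage-style feedback result; the emotion labels and counts are illustrative only.

```python
from collections import Counter

# Emotions fed back by individual audience members for episode A, obtained
# from the correspondence between episodes and emotion information
# (illustrative values only).
episode_a_feedback = ["smile"] * 3 + ["laugh"] * 5 + ["laugh_out_loud"] * 2

def feedback_result(emotions):
    """Express the feedback for one episode as the proportion of the
    audience showing each emotion."""
    counts = Counter(emotions)
    total = len(emotions)
    return {emotion: round(100 * n / total) for emotion, n in counts.items()}

print(feedback_result(episode_a_feedback))
# -> {'smile': 30, 'laugh': 50, 'laugh_out_loud': 20}
```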
Alternatively, in another specific implementation, sub-step S1062 may be implemented as: determining, according to the correspondence, the difference between the emotion information fed back by the audience for the episodes of the performance work and the expected feedback emotion as the feedback result.
For the episodes of the performance work, expected feedback emotions can be preset. For example, for episode A, the expected feedback emotion may be preset as 80% or more of the audience laughing. After the actual feedback emotion information of the audience is obtained, it is compared with the expected feedback emotion, and the difference between the two can be determined and used as the feedback result. Such a feedback result reflects more intuitively which episodes did not meet expectations, so that targeted improvements can be made.

In summary, through the above steps, the audience's feedback on the episodes while watching the performance work is determined based on facial expression, voice recognition, and posture recognition technologies, thereby realizing subdivided feedback analysis based on time points and plot points. Episode-level analysis is achieved by matching the audience's emotion information with the episodes on the time axis, which improves analysis quality and facilitates subsequent optimization of the episodes.

Further, optionally, for the case in which the host has adjusted the plot of the performance, the method of this embodiment further includes step S108 and step S110.
Step S108: determining whether a historical version of the performance work exists according to the version information corresponding to the performance work.
The host may configure one or more versions of the script for each performance work. The script includes at least the episode time information corresponding to the episodes, and each script has at least corresponding version information. If necessary, a corresponding history flag may be set for a script to indicate whether that version of the script is a historical version. Thus, by comparing the version information of the scripts, it can be determined whether a historical version of a given script exists.
If a historical version exists, step S110 may be executed when the host needs to compare the audience feedback results for different versions of the script; otherwise, if no historical version exists, when the host requests such a comparison, an error message such as "no earlier version" can be returned directly.
Step S110: if a historical version exists, obtaining audience feedback comparison information according to the historical feedback result corresponding to the historical version and the feedback result corresponding to the performance work.
If a historical version exists, the historical feedback results corresponding to the historical versions of a given performance work can be found among the pre-stored feedback results according to the version information of the historical version and the identification information of the script, and the feedback comparison information is determined in combination with the feedback result obtained for the current version.
Through the feedback comparison information, it can be intuitively determined whether the audience's feedback on the updated episodes meets expectations, so that the host can better optimize and adjust the content of the performance work according to the feedback results and produce drama that audiences enjoy more.
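A minimal sketch of this version comparison, assuming pre-stored feedback results keyed by script identifier and version; the storage layout, values, and names are hypothetical.

```python
# Pre-stored feedback results, keyed by (script_id, version); values give the
# share of the audience laughing at episode A (illustrative structure only).
stored_results = {
    ("drama_01", "v1"): {"episode_A": {"laugh": 35}},
    ("drama_01", "v2"): {"episode_A": {"laugh": 55}},
}

def compare_versions(script_id, current_version, historical_version):
    """Return audience feedback comparison information between two versions,
    or an error message if the requested historical version does not exist."""
    hist = stored_results.get((script_id, historical_version))
    if hist is None:
        return "no earlier version"
    curr = stored_results[(script_id, current_version)]
    comparison = {}
    for episode, emotions in curr.items():
        comparison[episode] = {
            emotion: share - hist.get(episode, {}).get(emotion, 0)
            for emotion, share in emotions.items()
        }
    return comparison

print(compare_versions("drama_01", "v2", "v1"))   # {'episode_A': {'laugh': 20}}
print(compare_versions("drama_01", "v2", "v0"))   # "no earlier version"
```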
Optionally, in order to better satisfy the viewer requirement, the method may further include steps S112 to S114.
It should be noted that, according to different requirements, the method of the present embodiment may include all of steps S108 to S114, or only a part of steps S108 to S114.
Step S112: obtaining a user identifier corresponding to the user in a seat.
After the user sits in a seat, the identifier of the seat can be used as the user identifier to refer to the user. Alternatively, the identifier of the terminal device carried by the user may be used as the user identifier. The terminal device may be an intelligent terminal such as a mobile phone or a tablet, or a wearable device such as a smart bracelet, a smart watch, or smart glasses.
Step S114: storing the information of the performance work and the feedback result into user portrait data corresponding to the user identifier.
In a particular embodiment, user portrait data may be created and information of the performance work watched by the user and feedback results of the performance work watched by the user may be saved to the user portrait data corresponding to the user identification, subject to user authorization.
For example, when user A sits in seat No. 3 of row 2 and the user identifier of user A is determined to be "ID1" from the mobile phone carried by user A, the information of the performance work watched by user A (such as the name of the performance work) and the feedback result for that performance work are stored in the user portrait data corresponding to that user identifier.
The user portrait data may include historical behavior data of the user (such as historical ticket-purchasing data), and in this embodiment it may further include the user's feedback results for performance works. The user portrait data can thus describe the personalized characteristics of the user, and storing the feedback results of the performance works watched by the user in the user portrait data solves the current problem that on-site feedback data of audiences watching performance works cannot be obtained effectively, filling this gap. Based on rich and accurate user portrait data, works that the user is likely to enjoy can be recommended to the user.
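A minimal sketch of saving the performance-work information and feedback result under the user identifier, using an in-memory dictionary in place of whatever storage the system actually uses; all names and values are hypothetical.

```python
user_portraits = {}   # in practice this would be a database keyed by user ID

def save_to_portrait(user_id, work_info, feedback_result, ticket_info=None):
    """Store the performance-work information, feedback result and (optionally)
    ticket-purchasing information under the user identifier."""
    portrait = user_portraits.setdefault(user_id, {"watched_works": []})
    entry = {"work": work_info, "feedback": feedback_result}
    if ticket_info is not None:
        entry["ticket"] = ticket_info
    portrait["watched_works"].append(entry)

# User A in row 2, seat 3, identified as "ID1" via a carried terminal device.
save_to_portrait("ID1",
                 {"title": "comedy drama", "seat": "row 2, seat 3"},
                 {"episode_A": {"laugh": 50, "smile": 30}},
                 ticket_info={"purchase_time": "2020-01-01", "channel": "app"})
print(user_portraits["ID1"])
```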
Optionally, in order to further improve the quality of the user portrait data, the method further includes step S116.
Step S116: acquiring ticket-purchasing information corresponding to the user identifier, and storing the ticket-purchasing information into the user portrait data corresponding to the user identifier.
Storing the ticket-purchasing information in the user portrait data makes the user portrait data richer and more three-dimensional. By combining the user's ticket-purchasing information, the user's interests, purchasing power, and the like can be comprehensively analyzed, so that better recommendations can be made for the user and data support for planning and analysis can be provided to the organizer of the performance works.
Optionally, the method of this embodiment further includes step S118 to step S120.
Step S118: acquiring initial emotion information of the audience before watching the performance work and departure emotion information after finishing watching the performance work.
For example, when the audience enters the venue, the initial emotion information of the arriving audience (such as a calm emotion) is determined by means of sound, images, or vital sign data. Alternatively, the initial emotion information of the audience upon entry can be obtained by having the audience fill in their emotions.
After the performance of the performance work ends, the departure emotion information of the audience when leaving the venue (such as a happy emotion) is determined again by means of sound, images, vital sign data, or the like.
Step S120: determining the overall emotion information of the audience for the complete performance work according to the initial emotion information and the departure emotion information.
For example, in one specific implementation, if the performance work watched by the audience is a comedy, the initial emotion information of all audience members at the time of entry is a calm emotion, and the departure emotion information of 80% of the audience is a happy emotion, then the overall emotion information of the audience for the complete performance work is determined to be a happy emotion, indicating that the audience is satisfied with the performance work.
In this way, the emotion of the audience upon entering the venue, which is not yet influenced by the plot, can be compared with the emotion after being immersed in the plot, so that the degree to which the audience is moved by the overall plot of the performance work can be accurately determined, and the episodes in the script can be better optimized subsequently.
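A small sketch of deriving the overall emotion from the entry and departure emotions, assuming a simple majority rule like the 80% example above; the rule, thresholds, and names are illustrative and not prescribed by the method.

```python
from collections import Counter

def overall_emotion(initial_emotions, departure_emotions, majority=0.8):
    """Compare the audience's entry emotions with their departure emotions and
    derive an overall emotion for the complete performance work."""
    entry_mood, _ = Counter(initial_emotions).most_common(1)[0]
    exit_mood, exit_count = Counter(departure_emotions).most_common(1)[0]
    if exit_mood != entry_mood and exit_count / len(departure_emotions) >= majority:
        return exit_mood        # e.g. "happy": the work shifted the audience's mood
    return entry_mood           # otherwise the dominant mood did not change

initial = ["calm"] * 10
departure = ["happy"] * 8 + ["calm"] * 2
print(overall_emotion(initial, departure))   # -> "happy"
```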
As shown in fig. 2b, the implementation process of the data processing method is described as follows in conjunction with a specific usage scenario:
Taking as an example an audience watching the performance work on site in a venue, the process of obtaining the audience's feedback result and enriching the audience's user portrait data according to the feedback result is explained.
While the audience watches the performance work, audience data acquisition is carried out through a camera, a microphone, and the like arranged in the venue, that is, images and/or audio in the performance period are collected.
For the collected images, posture data can be obtained through computer vision techniques, and expression data can be obtained through expression recognition in face recognition technology. In addition, position recognition and IoT (Internet of Things) technology can be combined to establish user identifiers for the users in the seats.
Specifically, for example, for the collected images, corresponding images are selected from the images of the performance period according to the timestamps of the episodes marked in advance in the performance work (which may be obtained from a script management system), and facial expression recognition and/or human body posture recognition is performed on each selected image, so as to obtain expression data and the corresponding feedback time information, and/or posture data and the corresponding feedback time information.
It should be noted that, because a person's expression or posture changes gradually, in order to determine the audience feedback more accurately, for the timestamp of a given episode, the images within the time range corresponding to that timestamp may be selected for subsequent processing. For example, if episode A corresponds to a timestamp indicating time t1, the images corresponding to the time range [t1-N, t1+N] may be selected, where N can be determined as needed.
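A minimal sketch of this window-based selection, assuming image capture times are recorded as seconds from the performance start; the file names and the window width N are illustrative.

```python
# Collected images, each with a capture timestamp (seconds from the start of
# the performance); values are illustrative.
images = [("frame_0001.jpg", 295.0), ("frame_0002.jpg", 299.5),
          ("frame_0003.jpg", 304.0), ("frame_0004.jpg", 420.0)]

def select_images(episode_timestamp, window=5.0):
    """Select the images within [t1 - N, t1 + N] around an episode timestamp,
    since expressions and postures change gradually rather than instantaneously."""
    return [name for name, t in images
            if episode_timestamp - window <= t <= episode_timestamp + window]

print(select_images(300.0))   # -> ['frame_0001.jpg', 'frame_0002.jpg', 'frame_0003.jpg']
```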
In addition, an image may contain one or more faces, and each face in the image may be extracted to obtain separate expression data and/or posture data for determining emotion information.
For the collected audio, a corresponding audio clip is selected from the audio of the performance period according to the timestamps of the episodes marked in advance in the performance work, and sound recognition is performed on the audio in that clip to obtain sound data.
The obtained expression data, the posture data and the sound data can be used as human body feedback data, and the feedback time corresponding to the expression data, the posture data and the sound data can be used as feedback time information corresponding to the human body feedback data.
The human body feedback data and the timestamp of the plot can be sent to the analysis unit, emotion recognition is carried out on the human body feedback data by the analysis unit, and corresponding emotion information and emotion time information corresponding to the emotion information are obtained.
For example, for human body feedback data A, emotion recognition yields emotion information A (indicating a happy emotion), and the corresponding emotion time information indicates time t1. For human body feedback data B, emotion recognition yields emotion information B (indicating a calm emotion), and the corresponding emotion time information indicates time t2, and so on.
The correspondence is determined according to the emotion time information and the timestamps of the episodes, and the feedback result is determined according to the correspondence.
For example, it is determined from the timestamps of the episodes that episode A corresponds to the emotion time information indicating time t1, and therefore episode A corresponds to emotion information A; episode B corresponds to the emotion time information indicating time t2, and therefore episode B corresponds to emotion information B. In this way, the audience's feedback emotion for each episode is determined, and a feedback result can then be generated and sent to an audience analysis system for display, for the host to view.
Obtaining the audience's feedback result for the performance work in this manner solves the problems of manual observation: unstable analysis quality that depends heavily on the practitioner's experience and personal state; high labor cost and limited human resources, making it difficult to cover every project and every show; and coarse analysis granularity, with results remaining at the level of subjective description, no data accumulation, and comparisons that can only rely on manual memory and are difficult to quantify.
By combining the position recognition result obtained from the venue images with IoT technology, the seat in which each audience member sits can be obtained, and the user identifier of the audience member can then be determined from the seat. Based on the user identifier, the analysis unit stores the feedback result and the ticket purchaser information obtained from the ticketing system in the user portrait data, so that user ticket-purchasing behavior analysis can subsequently be performed based on the user portrait data.
For example, according to the user identification, the feedback result, the information of the performance work, and the like can be saved in the user portrait data corresponding to the user identification to enrich the user portrait data. Furthermore, ticket purchasing information (such as ticket purchasing time and ticket purchasing mode) can be stored in the user portrait data, so that the user portrait data is richer.
Based on the user portrait data, the data such as the ages, the locations and the interest preference information of the audiences corresponding to the performance works can be analyzed, so that audience groups corresponding to different performance works are analyzed, and the content of the performance works is optimized better.
In addition, the host can update the performance work through the analysis unit, so that multiple different versions of the performance work are formed. Audience feedback comparison information can be obtained from the feedback results corresponding to each version, so that whether the content optimization meets expectations can be checked more intuitively.
In addition, by comparing the feedback results of different performance works, the works that better meet audience requirements can be identified.
In summary, the method of this embodiment analyzes the on-site feedback on performance works through expression analysis, voiceprint analysis, posture analysis, and other technologies, solving the problem in the prior art that audience on-site feedback cannot be obtained accurately through manual observation. The feedback results can be stored in the user portrait data, filling the gap of on-site feedback results in user portrait data. Furthermore, the feedback results can be matched with the time track of the episodes, realizing feedback results subdivided down to individual episodes with finer analysis granularity, which helps improve the content quality of performance works and audience viewing satisfaction.
Through the above method, on the one hand, feedback results indicating the episodes and the audience's emotional feedback can be obtained, and before-and-after comparison of feedback results for episodes modified by the host can be realized, facilitating content updating and optimization; on the other hand, the association between seats, audience members, and their viewing feedback is established and stored in the user portrait data, forming richer and more three-dimensional user portrait data.
Through this embodiment, emotion recognition is performed on the human body feedback data obtained while the audience watches the performance work to obtain the corresponding emotion information, the emotion time information corresponding to the emotion information is determined in combination with the feedback time information, and the emotion information fed back by the audience for the episodes is then determined according to the correspondence between the emotion time information and the episodes in the performance work, thereby determining the feedback result. This realizes analysis and observation of audience feedback subdivided down to individual episodes, ensures stable analysis quality, requires no excessive manpower investment, and helps reduce observation costs.
The data processing method of the present embodiment may be performed by any suitable electronic device having data processing capabilities, including but not limited to: servers, mobile terminals (such as tablet computers, mobile phones and the like), PCs and the like.
Example three
Referring to fig. 3, a block diagram of a data processing apparatus according to a third embodiment of the present invention is shown.
The data processing apparatus of the present embodiment includes: a first obtaining module 302, configured to obtain human feedback data of a viewer watching a performance work, and feedback time information corresponding to the human feedback data; a second obtaining module 304, configured to perform emotion recognition on the human body feedback data, obtain corresponding emotion information, and determine emotion time information corresponding to the emotion information according to the feedback time information; a first determining module 306, configured to determine, according to the emotion time information, a correspondence between emotion information of a viewer and an episode in the performance work, and determine, according to the correspondence, a feedback result of the viewer on the episode.
Optionally, the human body feedback data includes at least one of expression data, posture data, vital sign data, arm movement data, and sound data.
Optionally, the first obtaining module 302 is configured to perform facial expression recognition on the acquired image including the human body feedback data, so as to obtain the expression data and corresponding feedback time information; and/or carrying out human body posture recognition on the collected image containing the human body feedback data to obtain posture data used for indicating the body posture of the audience and corresponding feedback time information; and/or carrying out voice recognition on the collected audio frequency containing the human body feedback data to obtain feedback voice data for indicating audiences to the performance works and corresponding feedback time information; and/or acquiring vital sign data of the audience and corresponding feedback time information, wherein the vital sign data is acquired through wearable equipment worn by the audience; and/or acquiring arm movement data of the audience and corresponding feedback time information, wherein the arm movement data are acquired through wearable equipment worn by the audience.
Optionally, the second obtaining module 304 is configured to, when performing emotion recognition on the human body feedback data and obtaining corresponding emotion information, perform emotion recognition on the human body feedback data and obtain emotion information meeting a preset condition, where the emotion information meeting the preset condition includes at least one of the following: emotion information of a happy emotion, emotion information of a sad emotion, and emotion information of a calm emotion; and to determine the feedback time information of the human body feedback data corresponding to the acquired emotion information as the emotion time information of the emotion information.
Optionally, the first determining module 306 includes: a second determining module 3061, configured to determine a correspondence between the episodes and the emotional information according to episode time information and the emotional time information corresponding to a plurality of episodes preset in the performance composition; a third determining module 3062, configured to determine a feedback result of the audience to the episode according to the correspondence.
Optionally, the second determining module 3061 is configured to aggregate the emotion information in an order of corresponding emotion time information; matching the aggregated emotional information with the plot of the performance works according to time, and determining the corresponding relation between the plot and the emotional information.
Optionally, the third determination module 3062 is configured to determine a difference between the emotion information of the audience feedback to the episode of the performance work and the expected feedback emotion according to the correspondence as the feedback result.
Optionally, the apparatus further comprises:
a fifth obtaining module 318, configured to obtain initial emotion information of the audience before watching the performance work and departure emotion information after finishing watching the performance work;
a fourth determining module 320, configured to determine, according to the initial emotion information and the departure emotion information, overall emotion information of the audience on the complete performance work.
Optionally, the apparatus further comprises: a history version determining module 308, configured to determine whether a history version exists in the performance work according to version information corresponding to the performance work; and the comparison module 310 is configured to, if the historical version exists, obtain audience feedback comparison information according to the historical feedback result corresponding to the historical version and the feedback result corresponding to the performance work.
Optionally, the apparatus further comprises: a third obtaining module 312, configured to obtain a user identifier corresponding to a user in a seat; and the storage module 314 is configured to store the information of the performance work and the feedback result in user portrait data corresponding to the user identifier.
Optionally, the apparatus further comprises: a fourth obtaining module 316, configured to obtain ticket purchasing information corresponding to the user identifier, and store the ticket purchasing information in the user portrait data corresponding to the user identifier.
The data processing apparatus of this embodiment is configured to implement the corresponding data processing method in the foregoing multiple method embodiments, and has the beneficial effects of the corresponding method embodiment, which are not described herein again. In addition, the functional implementation of each module in the data processing apparatus of this embodiment can refer to the description of the corresponding part in the foregoing method embodiment, and is not repeated here.
Example four
Referring to fig. 4, a schematic structural diagram of an electronic device according to a fourth embodiment of the present invention is shown, and the specific embodiment of the present invention does not limit the specific implementation of the electronic device.
As shown in fig. 4, the electronic device may include: a processor (processor)402, a Communications Interface 404, a memory 406, and a Communications bus 408.
Wherein:
the processor 402, communication interface 404, and memory 406 communicate with each other via a communication bus 408.
A communication interface 404 for communicating with other electronic devices such as a terminal device or a server.
The processor 402 is configured to execute the program 410, and may specifically perform relevant steps in the above-described data processing method embodiment.
In particular, the program 410 may include program code comprising computer operating instructions.
The processor 402 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present invention. The electronic device comprises one or more processors, which may be processors of the same type, such as one or more CPUs, or processors of different types, such as one or more CPUs and one or more ASICs.
The memory 406 is configured to store the program 410. The memory 406 may comprise a high-speed RAM memory, and may also include a non-volatile memory, such as at least one disk memory.
The program 410 may specifically be configured to cause the processor 402 to perform the following operations: acquiring human body feedback data of an audience watching a performance work and feedback time information corresponding to the human body feedback data; performing emotion recognition on the human body feedback data to obtain corresponding emotion information, and determining emotion time information corresponding to the emotion information according to the feedback time information; and determining the correspondence between the emotion information of the audience and the episodes in the performance work according to the emotion time information, and determining the feedback result of the audience on the episodes according to the correspondence.
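By way of illustration only, the following Python sketch walks through these three operations on toy data. The FeedbackSample structure, the stubbed recognize_emotion classifier, and the episode time spans are assumptions introduced for readability; they are not part of the described method, in which emotion recognition would be performed by a trained model on real sensor or image data.

```python
# Minimal sketch of the three operations performed by program 410 on toy data.
from dataclasses import dataclass

@dataclass
class FeedbackSample:
    data: str        # stand-in for raw feedback data, e.g. "laugh", "cry"
    time_s: float    # feedback time, in seconds from the start of the performance

def recognize_emotion(sample: FeedbackSample) -> tuple[str, float]:
    """Stub emotion recognition: returns (emotion label, emotion time)."""
    label = {"laugh": "happy", "cry": "sad"}.get(sample.data, "calm")
    return label, sample.time_s

def match_to_episodes(emotions, episodes):
    """Assign each (label, time) pair to the episode whose time span contains it."""
    result = {name: [] for name, _, _ in episodes}
    for label, t in emotions:
        for name, start, end in episodes:
            if start <= t < end:
                result[name].append(label)
    return result

samples = [FeedbackSample("laugh", 120.0), FeedbackSample("cry", 900.0)]
episodes = [("opening", 0, 600), ("climax", 600, 1200)]   # (name, start_s, end_s)
emotions = [recognize_emotion(s) for s in samples]
print(match_to_episodes(emotions, episodes))
# {'opening': ['happy'], 'climax': ['sad']}
```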
In an alternative embodiment, the human feedback data comprises at least one of expression data, posture data, vital sign data, arm movement data and sound data.
In an optional implementation manner, the program 410 is further configured to cause the processor 402, when acquiring the human body feedback data of the audience watching the performance work and the feedback time information corresponding to the human body feedback data, to: perform facial expression recognition on a collected image containing the human body feedback data to obtain expression data and corresponding feedback time information; and/or perform human body posture recognition on a collected image containing the human body feedback data to obtain posture data indicating the body posture of the audience and corresponding feedback time information; and/or perform voice recognition on collected audio containing the human body feedback data to obtain sound data indicating the audience's feedback on the performance work and corresponding feedback time information; and/or acquire vital sign data of the audience and corresponding feedback time information, where the vital sign data is collected through a wearable device worn by the audience; and/or acquire arm movement data of the audience and corresponding feedback time information, where the arm movement data is collected through a wearable device worn by the audience.
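The sketch below illustrates, under assumed helper functions, how several such modalities could each yield a value stamped with its feedback time. capture_expression, capture_posture, and capture_vital_signs are hypothetical stand-ins for the camera-, microphone-, and wearable-based acquisition described above.

```python
# Sketch of collecting per-modality feedback data together with feedback time
# information; each capture_* function is a hypothetical stand-in for real
# image, audio, or wearable-device processing and simply returns a timestamped value.
import time

def capture_expression():   # would run facial expression recognition on a frame
    return {"modality": "expression", "value": "smile", "time": time.time()}

def capture_posture():      # would run human body posture recognition on a frame
    return {"modality": "posture", "value": "leaning_forward", "time": time.time()}

def capture_vital_signs():  # would read heart rate etc. from a wearable device
    return {"modality": "vital_signs", "value": {"heart_rate": 88}, "time": time.time()}

def collect_feedback():
    """Gather whichever modalities are available; any subset may be used."""
    return [capture_expression(), capture_posture(), capture_vital_signs()]

for record in collect_feedback():
    print(record["modality"], record["value"], record["time"])
```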
In an optional implementation manner, the program 410 is further configured to cause the processor 402, when performing emotion recognition on the human body feedback data to obtain corresponding emotion information and determining the emotion time information corresponding to the emotion information, to: perform emotion recognition on the human body feedback data to obtain emotion information meeting a preset condition, where the emotion information meeting the preset condition includes at least one of emotion information of a happy emotion, emotion information of a sad emotion, and emotion information of a calm emotion; and determine, according to the feedback time information of the corresponding human body feedback data, the emotion time information of the obtained emotion information.
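A minimal sketch of this filtering step follows, assuming the (label, feedback time) pair format used in the earlier sketch; the preset label set and the discarded "neutral" label are illustrative assumptions.

```python
# Sketch of keeping only emotion information that meets a preset condition
# (here: happy, sad, or calm) and carrying over, as its emotion time, the
# feedback time of the human body feedback data it was recognized from.
PRESET_EMOTIONS = {"happy", "sad", "calm"}

def filter_emotions(recognized):
    """recognized: iterable of (emotion_label, feedback_time) pairs."""
    return [(label, t) for label, t in recognized if label in PRESET_EMOTIONS]

recognized = [("happy", 125.0), ("neutral", 130.0), ("sad", 905.0)]
print(filter_emotions(recognized))   # [('happy', 125.0), ('sad', 905.0)]
```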
In an optional implementation manner, the program 410 is further configured to cause the processor 402, when determining the correspondence between the emotion information of the audience and the episodes in the performance work according to the emotion time information and determining the feedback result of the audience on the episodes according to the correspondence, to: determine the correspondence between the episodes and the emotion information according to episode time information corresponding to a plurality of episodes preset in the performance work and the emotion time information; and determine the feedback result of the audience on each episode according to the correspondence.
In an optional implementation manner, the program 410 is further configured to cause the processor 402, when determining the correspondence between the episodes and the emotion information according to the episode time information corresponding to the plurality of episodes preset in the performance work and the emotion time information, to: aggregate the emotion information in the order of the corresponding emotion time information; and match the aggregated emotion information with the episodes of the performance work by time to determine the correspondence between the episodes and the emotion information.
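The following sketch illustrates one possible reading of this aggregation step: records are ordered by emotion time, consecutive records with the same label are merged into a time span, and each span is matched to the episodes whose time ranges it overlaps. The span representation and the overlap test are assumptions made for the example.

```python
# Sketch of aggregating emotion information in time order and matching the
# aggregated spans against preset episode time information.
from itertools import groupby

def aggregate(emotions):
    """emotions: list of (label, time); returns (label, first_time, last_time) spans."""
    ordered = sorted(emotions, key=lambda e: e[1])
    spans = []
    for label, group in groupby(ordered, key=lambda e: e[0]):
        times = [t for _, t in group]
        spans.append((label, times[0], times[-1]))
    return spans

def match(spans, episodes):
    """episodes: list of (name, start, end); returns {episode: [labels]}."""
    out = {name: [] for name, _, _ in episodes}
    for label, first, last in spans:
        for name, start, end in episodes:
            if first < end and last >= start:   # span overlaps the episode's time range
                out[name].append(label)
    return out

emotions = [("happy", 130.0), ("happy", 128.0), ("sad", 910.0)]
episodes = [("opening", 0, 600), ("climax", 600, 1200)]
print(match(aggregate(emotions), episodes))
# {'opening': ['happy'], 'climax': ['sad']}
```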
In an alternative embodiment, the program 410 is further configured to cause the processor 402, when determining the feedback result of the audience on the episodes according to the correspondence, to determine, according to the correspondence, a difference between the emotion information fed back by the audience to an episode of the performance work and an expected feedback emotion, and to take the difference as the feedback result.
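One way to illustrate such a difference is to map each emotion label to a numeric score and subtract the expected score from the actual one; the score table and the per-episode dictionaries below are purely assumptions for the example.

```python
# Sketch of producing a feedback result as the difference between the emotion
# the audience actually fed back to an episode and the expected feedback emotion.
EMOTION_SCORE = {"sad": -1.0, "calm": 0.0, "happy": 1.0}   # assumed scoring

def feedback_result(actual_by_episode, expected_by_episode):
    """Both arguments map episode name -> emotion label."""
    result = {}
    for episode, expected in expected_by_episode.items():
        actual = actual_by_episode.get(episode, "calm")
        result[episode] = EMOTION_SCORE[actual] - EMOTION_SCORE[expected]
    return result

actual = {"opening": "happy", "climax": "calm"}
expected = {"opening": "happy", "climax": "sad"}
print(feedback_result(actual, expected))   # {'opening': 0.0, 'climax': 1.0}
```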
In an alternative embodiment, the program 410 is further configured to cause the processor 402 to obtain initial emotion information of the audience before watching the performance work and departure emotion information of the audience after finishing watching the performance work; and to determine, according to the initial emotion information and the departure emotion information, the overall emotion information of the audience for the complete performance work.
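A short sketch of this step, reusing the assumed numeric emotion scores from above and interpreting the overall emotion information as the shift from the initial emotion to the departure emotion; the labels returned are illustrative only.

```python
# Sketch of deriving overall emotion information for the complete performance
# work from the initial emotion and the departure emotion of the audience.
EMOTION_SCORE = {"sad": -1.0, "calm": 0.0, "happy": 1.0}   # assumed scoring

def overall_emotion(initial, departure):
    shift = EMOTION_SCORE[departure] - EMOTION_SCORE[initial]
    if shift > 0:
        return "uplifted"
    if shift < 0:
        return "dampened"
    return "unchanged"

print(overall_emotion(initial="calm", departure="happy"))   # uplifted
```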
In an alternative embodiment, the program 410 is further configured to cause the processor 402 to determine whether a historical version of the performance work exists according to version information corresponding to the performance work; and, if the historical version exists, to obtain audience feedback comparison information according to the historical feedback result corresponding to the historical version and the feedback result corresponding to the performance work.
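The sketch below assumes that historical feedback results are stored keyed by the work and its version information, and reuses the per-episode score format from the earlier examples; both choices are illustrative rather than prescribed by the embodiment.

```python
# Sketch of comparing the current feedback result with the stored feedback
# result of a historical version, looked up by (work, version information).
HISTORY = {("Play A", "v1"): {"opening": 0.0, "climax": -0.5}}   # stored historical results

def compare_with_history(work, version, current_result):
    previous = HISTORY.get((work, version))
    if previous is None:                       # no historical version exists
        return None
    return {ep: current_result[ep] - previous.get(ep, 0.0) for ep in current_result}

print(compare_with_history("Play A", "v1", {"opening": 0.2, "climax": 0.5}))
# {'opening': 0.2, 'climax': 1.0}
```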
In an alternative embodiment, the program 410 is further configured to cause the processor 402 to obtain a user identifier corresponding to a user in a seat; and to store the information of the performance work and the feedback result in user portrait data corresponding to the user identifier.
In an alternative embodiment, the program 410 is further configured to cause the processor 402 to obtain ticket purchasing information corresponding to the user identifier and save the ticket purchasing information in the user portrait data corresponding to the user identifier.
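The following sketch ties the last two paragraphs together, assuming in-memory dictionaries for the seat-to-user mapping and the user portrait data; a real system would use whatever persistent storage it already has, and the field names are hypothetical.

```python
# Sketch of attaching performance information, the feedback result, and ticket
# purchasing information to the user portrait data looked up via the user
# identifier of the audience member in a given seat.
SEAT_TO_USER = {"A-12": "user_001"}                    # seat -> user identifier
USER_PORTRAITS = {"user_001": {"history": []}}         # user identifier -> portrait data

def save_feedback(seat, work_info, feedback, ticket_info):
    user_id = SEAT_TO_USER[seat]
    USER_PORTRAITS[user_id]["history"].append(
        {"work": work_info, "feedback": feedback, "ticket": ticket_info}
    )

save_feedback("A-12", {"title": "Play A", "version": "v2"},
              {"opening": 0.2}, {"price": 180, "channel": "online"})
print(USER_PORTRAITS["user_001"])
```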
For the specific implementation of each step in the program 410, reference may be made to the corresponding steps and descriptions in the foregoing data processing method embodiments, which are not described herein again. It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described devices and modules may refer to the corresponding process descriptions in the foregoing method embodiments, and are not described herein again.
It should be noted that, according to the implementation requirement, each component/step described in the embodiment of the present invention may be divided into more components/steps, and two or more components/steps or partial operations of the components/steps may also be combined into a new component/step to achieve the purpose of the embodiment of the present invention.
The above-described method according to an embodiment of the present invention may be implemented in hardware or firmware, or as software or computer code storable in a recording medium such as a CD-ROM, a RAM, a floppy disk, a hard disk, or a magneto-optical disk, or as computer code originally stored in a remote recording medium or a non-transitory machine-readable medium and downloaded through a network to be stored in a local recording medium, so that the method described herein can be rendered via such software stored on the recording medium and executed by a general-purpose computer, a dedicated processor, or programmable or dedicated hardware such as an ASIC or FPGA. It will be appreciated that the computer, processor, microprocessor controller, or programmable hardware includes memory components (e.g., RAM, ROM, flash memory, etc.) that can store or receive software or computer code that, when accessed and executed by the computer, processor, or hardware, implements the data processing method described herein. Further, when a general-purpose computer accesses code for implementing the data processing method shown herein, execution of the code converts the general-purpose computer into a special-purpose computer for executing the data processing method shown herein.
Those of ordinary skill in the art will appreciate that the various illustrative elements and method steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present embodiments.
The above embodiments are only for illustrating the embodiments of the present invention and not for limiting the embodiments of the present invention, and those skilled in the art can make various changes and modifications without departing from the spirit and scope of the embodiments of the present invention, so that all equivalent technical solutions also belong to the scope of the embodiments of the present invention, and the scope of patent protection of the embodiments of the present invention should be defined by the claims.