
CN112995772A - Video playing method, device, terminal and storage medium - Google Patents

Info

Publication number
CN112995772A
CN112995772A
Authority
CN
China
Prior art keywords
interactive
video
real-time
matching result
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911279555.3A
Other languages
Chinese (zh)
Other versions
CN112995772B (en)
Inventor
房秀强
章兢
徐昊
陈翌
朱艺
袁小伟
张仁伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Youku Culture Technology Beijing Co ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd
Priority to CN201911279555.3A
Publication of CN112995772A
Application granted
Publication of CN112995772B
Legal status: Active
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/4788 Supplemental services communicating with other users, e.g. chatting
    • H04N 21/4316 Generation of visual interfaces for displaying supplemental content in a region of the screen, e.g. an advertisement in a separate window
    • H04N 21/44218 Monitoring of end-user related data: detecting physical presence or behaviour of the user, e.g. using sensors to detect if the user is leaving the room or changes his face expression during a TV program
    • H04N 21/472 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N 21/4882 Data services for displaying messages, e.g. warnings, reminders

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Social Psychology (AREA)
  • Databases & Information Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Business, Economics & Management (AREA)
  • Human Computer Interaction (AREA)
  • Marketing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Engineering & Computer Science (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract


Figure 201911279555

Embodiments of the present invention provide a video playback method, device, terminal, and storage medium. The method includes: if a video to be played has an interactive tag, acquiring an interactive script corresponding to the video; when the video plays to an interactive node of the video recorded in the interactive script, displaying interactive prompt information corresponding to that node; acquiring a real-time facial image of the user and, based on the real-time facial image, detecting whether the user's real-time facial action matches the target facial action required by the interactive prompt information; and, according to the matching result, playing the branch video content of the video corresponding to that result. Embodiments of the present invention can support the realization of interactive video, allowing users to participate while an interactive video plays and improving user engagement.


Description

Video playing method, device, terminal and storage medium
Technical Field
The embodiment of the invention relates to the technical field of data processing, in particular to a video playing method, a video playing device, a video playing terminal and a storage medium.
Background
With the popularization of intelligent terminals and the development of network technologies, users can watch videos through video applications or video websites on various types of terminals. At present, a user watching a video can only passively receive the video content as a spectator, so user participation is low; interactive video was created to improve user participation in video watching.
Interactive video allows users to interact with the video content during playback, either advancing the plot or choosing among different plot directions. How to provide a technical solution that can support the realization of interactive video has long been a problem for interactive video producers.
Disclosure of Invention
In view of this, embodiments of the present invention provide a video playing method, an apparatus, a terminal, and a storage medium to support implementation of interactive video, so that a user can participate during playback of an interactive video, improving user participation.
In order to achieve the above purpose, the embodiments of the present invention provide the following technical solutions:
a video playback method, comprising:
if the video to be played has the interactive label, acquiring an interactive script corresponding to the video;
when the video is played to the interactive node of the video recorded by the interactive script, displaying interactive prompt information corresponding to the interactive node;
acquiring a real-time facial image of a user, and detecting whether the real-time facial action of the user is matched with the target facial action required by the interaction prompt information based on the real-time facial image;
and playing the branch video content of the video corresponding to the matching result according to the matching result.
An embodiment of the present invention further provides a video playing device, including:
the script acquisition module is used for acquiring an interactive script corresponding to the video if the video to be played has an interactive label;
the interactive prompt display module is used for displaying interactive prompt information corresponding to the interactive node when the video is played to the interactive node of the video recorded by the interactive script;
the detection module is used for acquiring a real-time facial image of the user and detecting whether the real-time facial action of the user is matched with the target facial action required by the interaction prompt information based on the real-time facial image;
and the branch content playing module is used for playing the branch video content of the video corresponding to the matching result according to the matching result.
An embodiment of the present invention further provides a terminal, including at least one memory and at least one processor, where the memory stores one or more computer-executable instructions, and the processor calls the one or more computer-executable instructions to execute the above video playing method.
An embodiment of the present invention further provides a storage medium, where the storage medium stores one or more computer-executable instructions, and the one or more computer-executable instructions are used to execute the video playing method.
According to the video playing method provided by the embodiments of the present invention, the user's facial action serves as the user's interactive operation. For a video with an interactive tag, when playback reaches an interactive node recorded in the interactive script, the interactive prompt information corresponding to that node is displayed, prompting the user with at least the target facial action to perform. The embodiments then acquire a real-time facial image of the user and, based on it, detect whether the user's real-time facial action matches the target facial action required by the prompt, i.e. whether the user performed the target facial action as requested. Once the matching result is obtained, the branch video content corresponding to that result is played, so that the user's facial action either advances the video to its branch content or selects among branch contents with different plot directions. The method can therefore be offered to interactive video producers as a technical solution supporting the realization of interactive video: when playback reaches an interactive node, the user determines the subsequent content through a facial action. Because the interactive operation is embodied in the user's own facial action, involvement is deeper, and user participation during interactive video playback is improved.
Drawings
To more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only embodiments of the present application; for those skilled in the art, other drawings can be obtained from the provided drawings without creative effort.
Fig. 1 is a flowchart of a video playing method according to an embodiment of the present invention;
FIG. 2 is a diagram illustrating an example of an interactive prompt message according to an embodiment of the present invention;
FIG. 3 is an exemplary diagram of a face model image provided by an embodiment of the invention;
fig. 4 is another flowchart of a video playing method according to an embodiment of the present invention;
FIG. 5 is a flowchart of detecting whether a real-time facial action matches a target facial action according to an embodiment of the present invention;
FIG. 6 is an exemplary illustration of an icon display provided by an embodiment of the present invention;
FIG. 7 is a flowchart of a video playing method according to an embodiment of the present invention;
fig. 8 is a block diagram of a video playing apparatus according to an embodiment of the present invention;
fig. 9 is a block diagram of a terminal.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. All other embodiments derived by a person skilled in the art from these embodiments without creative effort fall within the protection scope of the present invention.
When an interactive video reaches an interactive node, the user needs to advance the plot, or choose among different plot directions, through an interactive operation. Taking the case where different branch scenarios follow the interactive node: when the interactive video reaches the node, the user must be prompted, via prompt information displayed on the terminal screen, to complete the interactive operation, so that branch-scenario selection is achieved through human-computer interaction.
At present, when an interactive video reaches an interactive node, the user is prompted to select a branch scenario mainly through options displayed on the terminal screen, and the branch corresponding to the option the user clicks is taken as the selected branch. Because the user is merely asked to click among displayed options, participation remains low and no immersive video experience is achieved.
Based on this, embodiments of the present invention treat the user's facial action as the user's interactive operation. When an interactive video reaches an interactive node, interactive prompt information indicating at least a target facial action is displayed to prompt the user to perform that action; a real-time facial image of the user is collected, and whether the real-time facial action corresponding to that image matches the target facial action is detected, thereby checking whether the user's interactive operation is correct. The subsequent playing content of the interactive video is then determined based on the user's facial action, improving user participation during interactive video playback.
As an optional implementation, fig. 1 shows an optional flow of a video playing method provided in an embodiment of the present invention, where the method may be executed by a terminal, and the terminal may be an electronic device such as a smart phone, a tablet computer, a PC (personal computer), or a smart television that plays a video; referring to fig. 1, the process may include:
and S100, if the video to be played has the interactive label, acquiring an interactive script corresponding to the video.
The interactive label is label information for distinguishing the interactive video from the non-interactive video; the video with the interactive label is an interactive video, and a user is supported to participate in interaction in the video playing process. Optionally, an interactive tag attribute may be added to the video in the embodiment of the present invention, and a value setting of the interactive tag attribute may indicate whether the video has an interactive tag, for example, if the value of the interactive tag attribute of the video is 1, the video has the interactive tag, and if the value of the interactive tag attribute of the video is 0, the video does not have the interactive tag.
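Purely as an illustrative sketch (not from the patent text — `interactive_tag`, `video_id`, and `fetch_interaction_script` are invented names), the tag-attribute check and script acquisition described above might look like:

```python
def get_interaction_script(video_meta, fetch_interaction_script):
    """Return the interaction script for an interactive video, else None.

    video_meta: dict with an 'interactive_tag' attribute (1 = interactive,
    0 = not), mirroring the tag-attribute convention described above.
    fetch_interaction_script: callable that loads the script for a video id.
    """
    if video_meta.get("interactive_tag", 0) == 1:
        return fetch_interaction_script(video_meta["video_id"])
    return None

# Example: a stub loader standing in for a real script service.
stub_loader = lambda vid: {"video_id": vid, "nodes": []}
script = get_interaction_script({"video_id": "v1", "interactive_tag": 1}, stub_loader)
```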
Aiming at a video to be played by a terminal, if the video is provided with an interactive label, the embodiment of the invention can determine that the video is an interactive video and support a user to participate in interaction in the playing process of the video; in order to realize user interaction, the embodiment of the invention can acquire the interaction script corresponding to the video, and the interaction script can set interaction nodes, interaction modes, interaction requirements and the like of the user participating in the interaction.
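The interaction script just described — interactive nodes, interaction mode, and interaction requirements — could be represented as plain data. This is only a hypothetical shape; none of the field names below come from the patent:

```python
# Hypothetical interaction script: one node requiring a facial action,
# with branch identifiers for the match / no-match outcomes.
interaction_script = {
    "video_id": "v1",
    "nodes": [
        {
            "time_sec": 325.0,            # playback position of the interactive node
            "mode": "facial_action",      # interaction mode
            "target_action": "turn_left", # required (target) facial action
            "interaction_window_sec": 8,  # time allowed for the user to respond
            "branch_on_match": "branch_a",
            "branch_on_no_match": "branch_b",
        },
    ],
}
```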
And S110, when the video is played to the interactive node of the video recorded by the interactive script, displaying the interactive prompt information corresponding to the interactive node.
The interactive script can record interactive nodes of the video, the video can be provided with one or more interactive nodes, and when the video is played to any interactive node, a user can carry out interactive operation, so that the plot development of the video is promoted or different plot development directions are selected.
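As a minimal sketch of the trigger condition above (function and field names are assumptions), reaching an interactive node reduces to comparing the current playback position against the node times recorded in the script, firing each node at most once:

```python
def due_interactive_node(script, position_sec, fired):
    """Return the first not-yet-fired node whose time has been reached, else None.

    script: interaction script with a 'nodes' list of {'time_sec': ...} entries.
    position_sec: current playback position in seconds.
    fired: set of node indices already handled, so each node triggers once.
    """
    for i, node in enumerate(script["nodes"]):
        if i not in fired and position_sec >= node["time_sec"]:
            fired.add(i)
            return node
    return None
```

A player would call this from its playback-progress callback and, on a non-None result, display the node's interactive prompt information.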
In the process of playing the video at the terminal, when the video is played to the interactive node, the embodiment of the invention can display the interactive prompt information on the screen of the terminal, so that a user can carry out interactive operation based on the prompt of the interactive prompt information. Because the embodiment of the invention takes the facial action of the user as the interactive operation of the user, in order to ensure that the user has the reference standard when executing the facial action, the interactive prompt information can at least indicate the target facial action to be executed by the user; the target facial action may be a facial expression action (such as smiling, frown, etc.), or may be a facial rotation action, and any facial action that can be captured by the facial image may be supported by the embodiments of the present invention, and the embodiments of the present invention are not limited thereto.
Optionally, the embodiments of the present invention do not limit the display form of the interactive prompt information, which includes but is not limited to: a small window, a scrolling bullet screen, a message pop-up window, an overlay on the video interface, and the like. The display position of the interactive prompt information is likewise not limited; for example, it may be displayed at any position of the video interface, although positions that do not block the video interface, or block it less, may be preferred. In another optional implementation, the interaction prompt information may also be displayed in a region of the terminal screen other than the video interface; for example, when the video is not played full screen, the prompt may be shown in the non-video region of the screen.
It should be noted that the embodiments of the present invention do not limit the order of playing the video and obtaining its corresponding interactive script; the two may be executed in parallel or sequentially. For example, the interactive script may be obtained first and the video played afterwards; alternatively, the video may be played first and the script obtained as early as possible during playback, e.g. immediately after playback starts.
And step S120, acquiring a real-time face image of the user, and detecting whether the real-time face action of the user is matched with the target face action required by the interaction prompt information based on the real-time face image.
The embodiments of the present invention may activate the terminal's camera to acquire a real-time facial image of the user. Optionally, if the terminal detects that the camera has not captured the user's face, a prompt that no facial image is detected may be shown on the terminal interface, so that the user moves their face into the camera's viewing range. Optionally, the camera may be activated when the video reaches the interactive node, although keeping the camera always active is not excluded.
Optionally, when the terminal plays the video through a video application, the application should have permission to invoke the terminal's camera, so that the camera can be activated and the real-time facial image it captures can be obtained.
Once the real-time facial image is acquired through the terminal's camera, the embodiments of the present invention can recognize the real-time facial action corresponding to that image, and detect whether the user's real-time facial action matches the target facial action required by the interactive prompt information, thereby judging whether the user performed the target facial action as the prompt requires.
For example, the form of the real-time facial action recognized from the real-time facial image varies with the form of the target facial action: if the target facial action is turning the face to the left, the recognized real-time facial action should be a face rotation angle (carrying both direction information and a specific angle value); if the target facial action is a frown, the recognized real-time facial action should be a movement of the eyebrow region. For target facial actions with different requirements, the embodiments adapt the form of the recognized real-time facial action accordingly; the specifics depend on the actual interaction setting and are not limited by the embodiments of the present invention.
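A hedged sketch of the type-dependent matching just described — the action names, measurement fields, and thresholds below are all invented for illustration, not taken from the patent:

```python
def action_matches(target_action, observed):
    """Compare an observed facial measurement against a target facial action.

    observed: for rotation targets, a dict {'yaw_deg': ...} where negative yaw
    means a left turn; for expression targets, a dict {'brow_lowering': 0..1}.
    The thresholds are illustrative only.
    """
    if target_action == "turn_left":
        return observed.get("yaw_deg", 0.0) <= -20.0   # turned at least 20 degrees left
    if target_action == "turn_right":
        return observed.get("yaw_deg", 0.0) >= 20.0
    if target_action == "frown":
        return observed.get("brow_lowering", 0.0) >= 0.6
    return False
```

In practice the `observed` measurements would come from a face-analysis library run on each camera frame; here they are plain dicts so the dispatch logic stands alone.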
And step S130, according to the matching result, playing the branch video content of the video corresponding to the matching result.
It can be understood that the matching result falls into two cases: the real-time facial action matches the target facial action, or it does not. In an optional implementation, an interaction time may be preset for the user to perform the interactive operation, and whether the real-time facial action matches the target facial action is detected within this preset interaction time to obtain the matching result. In a more specific optional implementation, if the real-time facial action matches the target facial action at least a required number of times within the interaction time (the number may be set according to the actual situation, e.g. once or several times), a match result can be returned immediately, meaning the user has performed the target facial action as required by the interaction prompt information. If the real-time facial action never matches the target facial action within the interaction time, then when the interaction time ends, a no-match result is obtained, meaning the user did not perform the target facial action as required within the interaction time.
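The timed matching described above can be sketched as follows. This is a simplification with invented names; the camera is stood in for by an iterable of per-frame measurements, and the clock is injectable so the loop can be tested without real delays:

```python
import time

def detect_within_window(frames, matcher, window_sec, required_matches=1,
                         clock=time.monotonic):
    """Return True as soon as `matcher(frame)` succeeds `required_matches`
    times within `window_sec` seconds; return False if the window ends first.

    frames: iterable of per-frame facial measurements (camera stand-in).
    matcher: predicate over one frame, e.g. the action_matches check.
    """
    deadline = clock() + window_sec
    hits = 0
    for frame in frames:
        if clock() >= deadline:
            break                 # interaction time over: no-match result
        if matcher(frame):
            hits += 1
            if hits >= required_matches:
                return True       # matched early: stop without waiting out the window
    return False
```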
Based on the obtained matching result, the embodiments of the present invention play the branch video content of the video corresponding to that result. Optionally, an interactive node of the video may correspond to several branch video contents with different plot directions, and different results of the user's interactive operation select different branches, pushing the plot in different directions. For example, a first branch video content may be associated with the target facial action and a second branch video content with a non-target facial action, the two branches having different plot directions. Then, when the matching result is that the real-time facial action matches the target facial action, the first branch video content is played; when it does not match, the second branch video content is played.
In another optional implementation, a branch video content with a single plot direction may follow an interactive node, and the user pushes the video to that branch by completing the required interactive operation. For example, the video may proceed to the next branch video content only when the user's real-time facial action matches the target facial action: on a match, the next branch is played; on no match, the subsequent content is withheld, and the user is prompted in a loop to perform the target facial action, with the next branch video content played only once a match is finally obtained.
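Both branching policies above can be collapsed into one small selector (a sketch only; the node fields are the hypothetical ones used in the earlier script example):

```python
def select_branch(node, matched):
    """Pick the next branch id for an interactive node given the matching result.

    If the node defines only 'branch_on_match' (single-plot case), a no-match
    returns None, signalling the player to re-prompt the user in a loop until
    the target facial action is performed.
    """
    if matched:
        return node["branch_on_match"]
    return node.get("branch_on_no_match")  # None => loop the prompt
```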
According to the video playing method provided by the embodiments of the present invention, the user's facial action serves as the user's interactive operation. For a video with an interactive tag, when playback reaches an interactive node recorded in the interactive script, the interactive prompt information corresponding to that node is displayed, prompting the user with at least the target facial action to perform. The embodiments then acquire a real-time facial image of the user through the terminal's camera, recognize the corresponding real-time facial action using image recognition technology, and detect whether it matches the target facial action, i.e. whether the user performed the target facial action as required by the prompt. Once the matching result is obtained, the branch video content corresponding to that result is played, so that the user's facial action either advances the video to its branch content or selects among branches with different plot directions.
The video playing method can therefore be offered to interactive video producers as a technical solution supporting the realization of interactive video: when playback reaches an interactive node, the user determines the subsequent content through a facial action. Because the interactive operation is embodied in the user's own facial action, involvement is deeper, and user participation during interactive video playback is improved.
In an optional implementation of step S110, when the video is played to the interactive node, the embodiment of the present invention may display the interactive prompt information by displaying the interactive prompt image and/or the interactive prompt text. The interactive prompt image may be considered as an image-form interactive prompt information for displaying the target facial action in an image manner, and the interactive prompt image may be a dynamic image, and may visually prompt the user about the target facial action to be executed by displaying the dynamic image of the target facial action, and certainly, the interactive prompt image may also be a static image, which is not limited in the embodiment of the present invention. The interactive prompt text may be considered as text-form interactive prompt information for describing the target facial action in a text manner, and optionally, the interactive prompt text may be further combined with the video content of the interactive node, for example, the interactive prompt text may describe an influence on a video scenario after the user performs the target facial action.
For example, taking the target facial action as turning the face to the left and the video content of the interactive node as a video character hiding a scar on the face as an example, as shown in fig. 2, the interactive prompt image may be set as a dynamic image of a face turning to the left, and the interactive prompt text may be set as "turn your face to the left to avoid exposing the scar".
Optionally, in the embodiment of the present invention, either the interactive prompt image or the interactive prompt text may be displayed alone as the interactive prompt information, or the interactive prompt image and the interactive prompt text may be displayed simultaneously as the interactive prompt information.
In an optional implementation, in order to perform real-time feedback on the facial action performed by the user from the display aspect, the embodiment of the invention may further display the interactive feedback information based on the real-time facial action of the user, so that the user can clearly understand the real-time facial action performed by the user through the displayed interactive feedback information. Optionally, the interactive feedback information may at least include a face model image, and the facial motion of the face model image is adjusted in real time along with the identified real-time facial motion; for example, as shown in fig. 3, a video interface may display a face model image of a human face shape, where the face model may be a 3D model or a 2D model, and based on a real-time face motion recognized by an embodiment of the present invention, the face motion of the face model image may be adjusted in real time, so that the face motion of the face model image is consistent with the real-time face motion, and if the face of a user rotates left, the face model image also rotates left, and if the face of the user rotates right, the face model image also rotates right.
By reflecting the real-time facial action of the user as the facial action of the face model image, the embodiment of the invention can feed back the facial action of the user in real time from the display aspect, so that the user can correct the facial action in time through the prompt of the face model image in the process of executing the facial action, the invalid interaction is reduced, and the success rate of the facial action of the user is improved.
Optionally, the interactive feedback information may further include interactive feedback text, where the interactive feedback text may indicate a deviation between the real-time facial motion and the target facial motion, so as to remind the user how to correct the facial motion in a text manner; for example, taking the target facial motion as a leftward turning face and the turning angle value reaching a certain value as an example, when the real-time facial motion of the user is detected as the leftward turning face, the interactive feedback text may prompt to continue turning the face leftward, and when the real-time facial motion of the user is detected as the rightward turning face, the interactive feedback text may prompt that the face turning direction is wrong, so as to prompt the user to correct the facial motion in a text manner.
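The deviation-based feedback text described above can be sketched minimally in Python. This is an illustrative sketch only; the function name, direction strings, and feedback wording are hypothetical and not part of the patent:

```python
def feedback_text(realtime_direction, realtime_angle,
                  target_direction="left", angle_lower_limit=30):
    """Return an interactive feedback string describing the deviation
    between the real-time facial action and the target facial action."""
    if realtime_direction != target_direction:
        # Wrong turning direction: remind the user which way to turn.
        return "wrong direction, turn your face to the " + target_direction
    if realtime_angle <= angle_lower_limit:
        # Right direction but not far enough yet: ask the user to continue.
        return "keep turning your face to the " + target_direction
    return "target facial action reached"

print(feedback_text("right", 10))  # wrong direction
print(feedback_text("left", 10))   # keep turning
print(feedback_text("left", 40))   # reached
```

In an actual player, such a string would be rendered as the interactive feedback text alongside the face model image.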
In an optional implementation, in the stage of displaying the interactive prompt information, the embodiment of the invention prompts the user of the target facial action to be executed through the interactive prompt image and the interactive prompt text; in the stage of displaying the interactive feedback information, the embodiment of the invention can feed back the real-time facial action of the user in real time through the facial model image and the interactive feedback characters; the jumping of the two stages can be controlled by whether a camera of the terminal collects a face image of the user or not; optionally, fig. 4 shows another flow of the video playing method provided by the embodiment of the present invention, and as shown in fig. 4, the flow may include:
and S200, if the video to be played has the interactive label, acquiring an interactive script corresponding to the video.
And S210, displaying an interactive prompt image and interactive prompt characters when the video is played to the interactive node of the video recorded by the interactive script.
Optionally, the interactive prompt image shows the target facial action in an image mode, and the interactive prompt text describes the target facial action in a text mode.
And S220, judging whether a real-time face image of the user is acquired, if not, returning to step S220, and if so, executing step S230.
Optionally, under the condition that the authority of the terminal camera is authorized to be invoked, if the image acquired by the terminal camera does not include the face of the user, that is, the real-time face image of the user is not acquired at this time, the embodiment of the present invention may continuously execute step S220 until the real-time face image of the user is determined to be acquired, and in this process, the embodiment of the present invention may maintain the display of the interaction prompt image and the interaction prompt text.
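The polling loop of step S220 (keep waiting until a frame containing the user's face arrives, keeping the prompt on screen meanwhile) can be sketched as below. The frame representation and the face-detection predicate are placeholders for whatever camera API and recognition technology the terminal actually uses:

```python
def wait_for_face(frames, contains_face):
    """Return the first camera frame that contains the user's face
    (the loop of step S220); until then the interaction prompt
    would remain displayed."""
    for frame in frames:
        if contains_face(frame):
            return frame  # first real-time facial image
    return None  # stream ended without a face being captured

frames = ["empty", "empty", "face#1", "face#2"]
print(wait_for_face(frames, lambda f: f.startswith("face")))  # face#1
```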
Step S230, identifying a real-time facial action corresponding to the real-time facial image, displaying interaction feedback information according to the real-time facial action, and detecting whether the real-time facial action is matched with a target facial action required by the interaction prompt information.
Optionally, in a case where the real-time facial image of the user is obtained, the embodiment of the present invention may identify a real-time facial action corresponding to the real-time facial image, and perform the following processes in step S230 at the same time:
firstly, displaying interactive feedback information according to the real-time facial action, and thus carrying out real-time feedback on the facial action of a user from a display level; the interactive feedback information at least comprises a face model image and interactive feedback characters, the face action of the face model image is adjusted in real time along with the real-time face action, and the interactive feedback characters indicate the deviation between the real-time face action and the target face action;
and secondly, detecting whether the real-time facial action is matched with the target facial action required by the interaction prompt information.
Optionally, further, under the condition that the real-time face image of the user is obtained, the display of the interactive prompt image and the interactive prompt text may be cancelled, and the interactive feedback information may be displayed instead, so that the real-time facial action of the user is fed back, in both image and text form, through the face model image and the interactive feedback text.
And step S240, according to the matching result, playing the branch video content of the video corresponding to the matching result.
Optionally, the embodiment of the present invention may also set whether to execute the step of displaying the interactive prompt information; for example, an interactive prompt display time may be preset, for example, by a video producer, a video publisher, or a user. When the video is played to the interactive node, the embodiment of the invention can detect whether the preset interactive prompt display time is greater than 0. If it is greater than 0, this indicates that the interactive prompt information is not to be skipped, so the embodiment of the invention can enter the step of displaying the interactive prompt information, for example, displaying an interactive prompt image and/or interactive prompt text; if it is not greater than 0, this indicates that the interactive prompt information is set to be skippable, so the embodiment of the present invention may skip the step of displaying the interactive prompt information and continue the process, for example, after acquiring the real-time facial image of the user, execute step S230 shown in fig. 4.
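This gating of the prompt-display step on the preset display time can be sketched as follows; the function name and callback parameters are illustrative assumptions, not API from the patent:

```python
def handle_interactive_node(prompt_display_time, show_prompt, proceed):
    """Gate the prompt-display step on the preset display time:
    show the prompt only when the time is greater than 0, then
    continue to face capture either way."""
    if prompt_display_time > 0:
        show_prompt()   # prompt step is not skippable
    return proceed()    # go on to acquiring the real-time facial image

calls = []
handle_interactive_node(3, lambda: calls.append("prompt"), lambda: "capture")
print(calls)  # ['prompt']
```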
In an alternative implementation, for convenience of describing the process of detecting whether the real-time facial motion matches the target facial motion according to the embodiment of the present invention, an alternative implementation of step S120 shown in fig. 1 is described below by taking a face rotation motion as an example, and optionally, fig. 5 shows an alternative flow for detecting whether the real-time facial motion matches the target facial motion, and as shown in fig. 5, the flow may include:
and S300, identifying a real-time face rotation angle corresponding to the real-time face image.
In an embodiment of the invention, the real-time facial motion of the user is represented by a real-time facial rotation angle of the user.
Step S310, detecting whether the real-time face rotation angle is within a preset target face rotation angle range.
In the embodiment of the present invention, the target face action is represented by a preset target face rotation angle range.
In an alternative implementation of the flow shown in fig. 5, the face rotation angle may refer to a specific angle value without indicating the face rotation direction.
In another alternative implementation of the flow shown in fig. 5, the face rotation angle indicates the face rotation direction in addition to the specific angle value; for example, the face rotation angle may be an angle with positive and negative values, where the sign of the face rotation angle represents the face rotation direction and the specific angle value (i.e. the absolute value of the angle) represents the rotated angle value in that direction. Illustratively, if the position of the initially acquired user face image is taken as the origin, leftward rotation is the positive direction, and rightward rotation is the negative direction, then a face rotation angle of 30° means that the user's face has rotated 30° to the left from the initially acquired state, and a face rotation angle of -30° means that the user's face has rotated 30° to the right from the initially acquired state. Of course, the positive and negative directions of the face rotation angle may also be exchanged, and may be set according to actual situations, which is not limited in the embodiments of the present invention.
Under the condition that the face rotation angle has a positive value and a negative value, the embodiment of the invention can identify the real-time face rotation direction corresponding to the real-time face image and the rotated angle value of the real-time face rotation direction as an optional implementation mode for identifying the real-time face rotation angle of the user; for example, the real-time face rotation direction is determined by the positive and negative values of the real-time face rotation angle, and the rotated angle value of the real-time face rotation direction is determined by the specific angle value of the real-time face rotation angle.
Optionally, under the condition that the face rotation angle has a positive value and a negative value, the embodiment of the invention can preset the target face rotation angle range to define the target face action; for example, if the left rotation is a positive direction, the right rotation is a negative direction, and the target face movement is a left rotation greater than 30 °, the target face rotation angle range is greater than 30 °, or if the target face movement is a right rotation greater than 30 °, the target face rotation angle range is less than-30 °, for example, -40 ° is less than-30 °.
In an optional implementation, the manner of defining the target face rotation angle range in the embodiment of the present invention includes: a target face rotation direction and an angle lower limit value corresponding to the target face rotation direction; for example, if the target facial action is rotating the face to the left by more than 30°, the target face rotation direction is leftward and the angle lower limit value is 30°. Once the real-time face rotation direction of the user is consistent with the target face rotation direction and the rotated angle value in the real-time face rotation direction is greater than the angle lower limit value, the embodiment of the present invention can determine that the real-time face rotation angle of the user is within the target face rotation angle range, so that the real-time facial action of the user matches the target facial action. Based on this, the embodiment of the present invention may detect whether the real-time face rotation direction is consistent with the target face rotation direction, and, when they are consistent, detect whether the rotated angle value in the real-time face rotation direction is greater than the angle lower limit value, so as to detect whether the real-time face rotation angle is within the target face rotation angle range.
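The two-step check above (direction first, then the angle lower limit) can be sketched in Python; the function and parameter names are hypothetical:

```python
def is_within_target_range(realtime_direction, realtime_angle,
                           target_direction, angle_lower_limit):
    """Return True when the real-time face rotation is within the target
    range, defined as a target rotation direction plus an angle lower
    limit, as described above."""
    if realtime_direction != target_direction:
        return False                          # direction check fails first
    return realtime_angle > angle_lower_limit  # strictly greater than limit

# Target action: turn left by more than 30 degrees.
print(is_within_target_range("left", 35, "left", 30))   # True
print(is_within_target_range("right", 35, "left", 30))  # False
print(is_within_target_range("left", 20, "left", 30))   # False
```

Note the strict comparison: rotating by exactly the lower limit value does not count as a match, matching the "greater than the angle lower limit value" wording.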
In a further optional implementation, in the process of the user performing the face rotation, an extreme error may occur in the real-time face rotation direction of the user; for example, the real-time face rotation direction of the user is inconsistent with the target face rotation direction, while the rotated angle value in the real-time face rotation direction is greater than the angle lower limit value. At this time, the user has rotated, in the direction opposite to the target face rotation direction, by an angle greater than the angle lower limit value, which seriously deviates from the target facial action required by the interactive prompt information; in order to provide a striking error prompt for the user, a warning icon may be displayed in the embodiment of the present invention. For example, assuming that the target facial action is rotating the face to the left by more than 30°, if it is detected that the real-time face rotation direction of the user is rightward and the rotated angle value is greater than 30°, the embodiment of the present invention may display the warning icon. Meanwhile, when it is detected that the real-time face rotation direction is consistent with the target face rotation direction and the rotated angle value in the real-time face rotation direction is greater than the angle lower limit value, a correct icon may be displayed so as to prompt the user that the interaction is correct. In cases other than the above, the embodiment of the present invention may display no icon;
for example, assuming that leftward face rotation is the positive direction, rightward face rotation is the negative direction, and the target facial action is rotating the face to the left by more than 30°, fig. 6 shows an example of icon display, which can be referred to. As shown in fig. 6, when the real-time face rotation angle is greater than 30°, the correct icon is displayed; when the real-time face rotation angle is less than -30°, the warning icon is displayed (i.e., at this time the real-time face rotation direction of the user is rightward, the angle is negative, and the rightward rotated angle value is greater than 30°); when the real-time face rotation angle is between -30° and 30°, no icon is displayed. The check mark in fig. 6 may represent the correct icon, and the exclamation mark may represent the warning icon.
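Under the signed-angle convention of this example (left positive, right negative, target "left by more than 30°"), the icon choice reduces to a comparison on the signed angle. The function name and the string/None return values are illustrative assumptions:

```python
def select_icon(realtime_angle, angle_lower_limit=30):
    """Map a signed real-time face rotation angle to an icon.

    Assumed convention from the example: leftward rotation is positive,
    rightward rotation is negative, and the target facial action is
    turning left by more than `angle_lower_limit` degrees.
    """
    if realtime_angle > angle_lower_limit:
        return "correct"   # target direction, beyond the lower limit
    if realtime_angle < -angle_lower_limit:
        return "warning"   # opposite direction, beyond the lower limit
    return None            # between the limits: no icon is displayed

print(select_icon(35))   # correct
print(select_icon(-35))  # warning
print(select_icon(10))   # None
```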
The above-described warning icon and correct icon may be used in conjunction with interactive feedback information (facial model images and/or interactive feedback text) to provide feedback information in a variety of ways during the course of the user's facial movements.
Optionally, after the matching result is obtained, the embodiment of the present invention may further display interaction result prompt information according to the matching result, and this step may be performed before the branch video content of the video corresponding to the matching result is played. It can be understood that the interaction result prompt information may include: prompt information of an interaction success result when the matching result is that the real-time facial action matches the target facial action, and prompt information of an interaction failure result when the matching result is that the real-time facial action does not match the target facial action.
Based on the preset setting, the embodiment of the present invention may also skip the step of displaying the prompt information of the interaction result, for example, a video producer, a video publisher, or a user may set the display time of the prompt information of the interaction result, if the display time is greater than 0, the step of displaying the prompt information of the interaction result according to the matching result is performed, otherwise, the step of displaying the prompt information of the interaction result according to the matching result is skipped.
Optionally, after determining the matching result, the embodiment of the present invention may immediately play the branch video content of the video corresponding to the matching result according to the matching result, for example, immediately play the first branch video content of the video when determining that the matching result is that the real-time facial action matches the target facial action, and immediately play the second branch video content of the video when determining that the matching result is that the real-time facial action does not match the target facial action; of course, after the matching result is determined, the embodiment of the present invention may also set that a step of playing the branch video content of the video corresponding to the matching result according to the matching result is performed after the video content corresponding to the interactive node is played;
in a more specific implementation, a video content skip switch may be preset, and the turning on or off of the video content skip switch may be set by a video producer, a video publisher, or a user, so that after a matching result is determined, if the video content skip switch is detected to be turned on, the embodiment of the present invention may proceed to a step of playing the branch video content of the video corresponding to the matching result according to the matching result, thereby immediately playing the corresponding branch video content; when it is detected that the video content skip switch is not turned on, the step of playing the branch video content of the video corresponding to the matching result according to the matching result is only started when the playing of the video content corresponding to the interactive node is finished.
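The timing decision above (jump to the branch immediately when the skip switch is on, otherwise wait for the interactive node's own content to finish) can be sketched as follows; the function name and boolean flags are illustrative, not terminology from the patent:

```python
def branch_play_allowed(skip_switch_on, node_content_finished):
    """Return True when the step of playing the branch video content
    corresponding to the matching result may start."""
    if skip_switch_on:
        return True                  # jump to the branch immediately
    return node_content_finished     # otherwise wait for the node to end

print(branch_play_allowed(True, False))   # True
print(branch_play_allowed(False, False))  # False
print(branch_play_allowed(False, True))   # True
```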
Optionally, fig. 7 shows another flowchart of a video playing method provided in an embodiment of the present invention, and as shown in fig. 7, the flowchart may include:
and S400, if the video to be played has the interactive label, acquiring an interactive script corresponding to the video.
Step S410, when the video is played to the interactive node of the video recorded by the interactive script, detecting whether preset interactive prompt display time is greater than 0, if so, executing step S420, and if not, executing step S430.
When the interactive prompt display time is greater than 0, the embodiment of the present invention may execute step S420 to display the interactive prompt image and the interactive prompt text, and then proceed to step S430; when the interactive prompt display time is not greater than 0, the embodiment of the present invention may skip step S420 and directly proceed to step S430.
And step S420, displaying the interactive prompt image and the interactive prompt characters.
And step S430, judging whether a real-time face image of the user is acquired, if not, returning to the step S430, and if so, executing the step S440.
And S440, identifying real-time facial actions corresponding to the real-time facial images, and displaying interaction feedback information according to the real-time facial actions.
Optionally, further, embodiments of the present invention may also incorporate displaying a warning icon and/or a correct icon.
Step S450, detecting whether the real-time facial motion matches the target facial motion required by the interaction prompt information, if not, executing step S460, and if so, executing step S470.
It should be noted that step S440 and step S450 may be executed synchronously, and for convenience of description, step S440 and step S450 are separately described, but this does not mean that step S440 and step S450 are executed sequentially.
Step S460, determining whether the preset interaction time is reached, if not, returning to step S440, and if so, executing step S470.
When the detection result of step S450 is yes, the embodiment of the present invention may determine a matching result that the real-time facial action matches the target facial action, and proceed to step S470; if, when the preset interaction time is reached, the detection result of step S450 is still no, a matching result that the real-time facial action does not match the target facial action may be determined, and the process likewise proceeds to step S470.
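The detect-until-timeout loop of steps S450 and S460 can be sketched as a deadline-based polling loop; the function name, the `detect_match` callback, and the timing values are illustrative assumptions:

```python
import time

def run_interaction(detect_match, interaction_time, poll_interval=0.05):
    """Repeat the detection of step S450 until it succeeds or the
    preset interaction time (step S460) is reached."""
    deadline = time.monotonic() + interaction_time
    while time.monotonic() < deadline:
        if detect_match():
            return True   # matching result: real-time action matches target
        time.sleep(poll_interval)
    return False          # interaction time reached without a match
```

A monotonic clock is used for the deadline so the loop is unaffected by wall-clock adjustments while the user is performing the facial action.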
Step S470, detecting whether the display time of the preset interaction result prompt message is greater than 0, if so, executing step S480, and if not, executing step S490.
And S480, displaying the prompt information of the interaction result according to the matching result.
Correspondingly, if the detection result of step S450 is yes and the detection result of step S470 is yes, the embodiment of the present invention may display, in step S480, interaction result prompt information indicating that the interaction is successful; if the detection result of step S450 is still no when the preset interaction time is reached, then, when the display time of the interaction result prompt information is detected to be greater than 0, the embodiment of the present invention may display, in step S480, interaction result prompt information indicating that the interaction fails.
After step S480 is executed, the embodiment of the present invention proceeds to step S490.
Step S490, detecting whether a preset video content skip switch is turned on, if not, performing step S500, and if so, performing step S510.
Step S500, judging whether the playing of the video content corresponding to the interactive node is finished, if not, returning to step S500, and if so, executing step S510.
And step S510, according to the matching result, playing the branch video content of the video corresponding to the matching result.
Optionally, based on a matching result that the real-time facial action matches the target facial action, the embodiment of the present invention may play the first branch video content of the video; based on the matching result that the real-time facial action is not matched with the target facial action, the embodiment of the invention can play the second branch video content of the video.
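The mapping from matching result to branch content in step S510 is a simple selection; the file names below are placeholders, not identifiers from the patent:

```python
def select_branch(matched,
                  first_branch="branch_success.mp4",
                  second_branch="branch_failure.mp4"):
    """Pick the branch video content corresponding to the matching result:
    the first branch on a match, the second branch otherwise."""
    return first_branch if matched else second_branch

print(select_branch(True))   # branch_success.mp4
print(select_branch(False))  # branch_failure.mp4
```

In an interactive script with more than two scenario trends, this could generalize to a lookup table keyed by the interactive node and the matching result.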
Optionally, further, when the real-time facial action matches the target facial action, a video object (e.g., a video character or item) in the video may perform feedback corresponding to the target facial action being executed, and when the real-time facial action does not match the target facial action, the video object may perform feedback corresponding to the target facial action not being executed. For example, taking the case where the user turns the face more than 30° to the left to hide the scar on the face of a video character: if the user turns the face more than 30° to the left within the preset interaction time, the scar on the face of the video character is successfully hidden and is not found by the other video characters; if the user does not turn the face more than 30° to the left within the preset interaction time, the hiding of the scar fails and the scar is found by the other video characters.
The video playing method provided by the embodiment of the invention can be provided for an interactive video producer as a technical solution for supporting interactive video realization, and when the video is played to an interactive node, a user can determine the subsequent playing content of the video through facial actions, namely the interactive operation of the user is embodied by the facial actions of the user, the participation degree of the user is higher, and the participation degree of the user in the interactive video playing process can be improved.
While various embodiments of the present invention have been described above, the various alternatives described in the embodiments can be combined and cross-referenced with each other without conflict, so as to extend the variety of possible embodiments that can be considered as disclosed by the embodiments of the present invention.
In the following, the video playing apparatus provided in the embodiment of the present invention is introduced. The video playing apparatus described below may be regarded as the functional modules that the terminal needs in order to implement the video playing method provided in the embodiment of the present invention. The contents of the video playing apparatus described below may be referred to in correspondence with the contents of the video playing method described above.
In an alternative implementation, fig. 8 shows a block diagram of a video playing apparatus provided in an embodiment of the present invention, and referring to fig. 8, the video playing apparatus may include:
the script obtaining module 100 is configured to obtain an interactive script corresponding to a video to be played if the video has an interactive tag;
the interactive prompt display module 110 is configured to display interactive prompt information corresponding to the interactive node when the video is played to the interactive node of the video recorded by the interactive script;
the detection module 120 is configured to obtain a real-time facial image of the user, and detect whether a real-time facial action of the user matches a target facial action required by the interaction prompt information based on the real-time facial image;
and the branch content playing module 130 is configured to play the branch video content of the video corresponding to the matching result according to the matching result.
Optionally, the interactive prompt display module 110 is configured to display interactive prompt information corresponding to the interactive node, and includes:
displaying an interactive prompt image and/or interactive prompt characters; the interactive prompt image displays the target facial action in an image mode, and the interactive prompt text describes the target facial action in a text mode.
Optionally, before the interactive prompt display module 110 displays the interactive prompt information corresponding to the interactive node, the video playing device provided in the embodiment of the present invention may further be configured to:
detecting whether preset interactive prompt display time is greater than 0; if the interactive prompt display time is greater than 0, the interactive prompt display module 110 enters the step of displaying the interactive prompt information corresponding to the interactive node.
Optionally, based on the obtained real-time facial image, the video playing apparatus provided in the embodiment of the present invention may further be configured to:
displaying interactive feedback information according to the real-time facial action, wherein the interactive feedback information at least comprises a facial model image; and the facial action of the facial model image is adjusted in real time along with the real-time facial action.
Optionally, the detecting module 120 is configured to detect whether the real-time facial motion of the user matches the target facial motion required by the interaction prompt information based on the real-time facial image, and includes:
identifying a real-time face rotation angle corresponding to the real-time face image;
detecting whether the real-time face rotation angle is within a preset target face rotation angle range.
Optionally, the target face rotation angle range includes: a target face rotating direction and an angle lower limit value corresponding to the target face rotating direction; the detecting module 120 is configured to identify a real-time face rotation angle corresponding to the real-time face image, and includes:
identifying a real-time face rotation direction corresponding to the real-time face image and a rotated angle value of the real-time face rotation direction;
the detecting module 120 is configured to detect whether the real-time face rotation angle is within a preset target face rotation angle range, and includes:
detecting whether the real-time face rotation direction is consistent with the target face rotation direction, and detecting whether a rotated angle value of the real-time face rotation direction is greater than the angle lower limit value when the real-time face rotation direction is consistent with the target face rotation direction.
Optionally, further, the video playing apparatus provided in the embodiment of the present invention may be further configured to:
if the real-time face rotating direction is detected to be inconsistent with the target face rotating direction, and the rotated angle value of the real-time face rotating direction is larger than the angle lower limit value, displaying a warning icon;
and if the real-time face rotating direction is detected to be consistent with the target face rotating direction, and the rotated angle value of the real-time face rotating direction is larger than the angle lower limit value, displaying a correct icon.
Optionally, the branch content playing module 130 is configured to play the branch video content of the video corresponding to the matching result according to the matching result, and includes:
if the matching result is that the real-time facial action is matched with the target facial action, playing first branch video content of the video;
and if the matching result is that the real-time face action is not matched with the target face action, playing second branch video content of the video.
Optionally, if it is detected within a preset interaction time that the real-time facial action matches the target facial action, the matching result is determined to be that the real-time facial action matches the target facial action; if no match between the real-time facial action and the target facial action is detected within the preset interaction time, then after the interaction time the matching result is that the real-time facial action does not match the target facial action.
Optionally, before the branch content playing module 130 plays the branch video content of the video corresponding to the matching result according to the matching result, the video playing apparatus provided in the embodiment of the present invention may further be configured to:
displaying interaction result prompt information according to the matching result; wherein the interaction result prompt information includes: prompt information of an interaction success result when the matching result is that the real-time facial action matches the target facial action, and prompt information of an interaction failure result when the matching result is that the real-time facial action does not match the target facial action.
Optionally, before the branch content playing module 130 plays the branch video content of the video corresponding to the matching result, the video playing apparatus provided in the embodiment of the present invention may be further configured to:
detect whether a preset display time of the interaction result prompt information is greater than 0, and if the display time of the interaction result prompt information is greater than 0, enter the step of displaying the interaction result prompt information according to the matching result.
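A minimal sketch of this display-time gate; the function name `maybe_show_result_prompt`, the `show` callback, and the prompt strings are illustrative assumptions rather than the disclosed implementation:

```python
def maybe_show_result_prompt(matched, display_time_s, show):
    """Show the success/failure prompt only when its preset display
    time is greater than 0; otherwise skip the prompt entirely.

    show: callable(text, duration_s) that renders the prompt (assumed).
    Returns the prompt text shown, or None when the prompt is skipped.
    """
    if display_time_s <= 0:
        return None              # display time is 0: do not show the prompt
    text = "Interaction succeeded" if matched else "Interaction failed"
    show(text, display_time_s)
    return text
```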
Optionally, before the branch content playing module 130 plays the branch video content of the video corresponding to the matching result, the video playing apparatus provided in the embodiment of the present invention may be further configured to:
detect, after the matching result is determined, whether a preset video content skip switch is turned on;
if the video content skip switch is turned on, the branch content playing module 130 enters the step of playing the branch video content of the video corresponding to the matching result according to the matching result;
if the video content skip switch is not turned on, the branch content playing module 130 enters that step when the playing of the video content corresponding to the interactive node ends.
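Under the simplifying assumption that playback positions are modeled as timestamps, the skip-switch decision above can be sketched as a pure scheduling function; the name `branch_start_time` and the timestamp interface are illustrative, not part of the disclosure:

```python
def branch_start_time(skip_switch_on, result_time_s, node_end_time_s):
    """Return when branch playback should begin.

    skip_switch_on:  whether the preset video content skip switch is on.
    result_time_s:   playback time at which the matching result is determined.
    node_end_time_s: playback time at which the interactive node's own
                     video content finishes.
    """
    # Switch on: jump to the branch as soon as the result is known.
    # Switch off: wait until the interactive node's content finishes.
    return result_time_s if skip_switch_on else node_end_time_s
```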
The embodiment of the present invention further provides a terminal, which can implement the video playing method provided in the embodiment of the present invention by loading the above apparatus in the form of computer-executable instructions (such as a program). Optionally, fig. 9 shows a block diagram of a terminal provided in the embodiment of the present invention; as shown in fig. 9, the terminal may include: at least one processor 1, at least one communication interface 2, at least one memory 3, and at least one communication bus 4;
in the embodiment of the present invention, there is at least one of each of the processor 1, the communication interface 2, the memory 3, and the communication bus 4, and the processor 1, the communication interface 2, and the memory 3 communicate with one another through the communication bus 4; clearly, the illustrated communication connections among the processor 1, the communication interface 2, the memory 3, and the communication bus 4 are only optional;
optionally, the communication interface 2 may be an interface of a communication module for performing network communication;
the processor 1 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement an embodiment of the present invention.
The memory 3 may include a high-speed RAM memory and may also include a non-volatile memory, such as at least one disk storage.
The memory 3 stores one or more computer-executable instructions, and the processor 1 calls the one or more computer-executable instructions to execute the video playing method provided by the embodiment of the present invention.
Embodiments of the present invention may also provide a storage medium, where the storage medium may store one or more computer-executable instructions, and the one or more computer-executable instructions may be configured to execute the video playing method provided in the embodiments of the present invention.
For the details of the video playing method, reference may be made to the description of the corresponding parts, and details are not repeated here.
Although the embodiments of the present invention have been disclosed, the present invention is not limited thereto. Various changes and modifications may be effected therein by one skilled in the art without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (15)

1. A video playing method, comprising: if a video to be played has an interactive tag, obtaining an interactive script corresponding to the video; when the video is played to an interactive node of the video recorded in the interactive script, displaying interactive prompt information corresponding to the interactive node; obtaining a real-time facial image of a user, and detecting, based on the real-time facial image, whether a real-time facial action of the user matches a target facial action required by the interactive prompt information; and according to a matching result, playing branch video content of the video corresponding to the matching result.

2. The video playing method according to claim 1, wherein displaying the interactive prompt information corresponding to the interactive node comprises: displaying an interactive prompt image and/or interactive prompt text, wherein the interactive prompt image shows the target facial action as an image, and the interactive prompt text describes the target facial action in text.

3. The video playing method according to claim 1 or 2, wherein before displaying the interactive prompt information corresponding to the interactive node, the method further comprises: detecting whether a preset interactive prompt display time is greater than 0; and if the interactive prompt display time is greater than 0, entering the step of displaying the interactive prompt information corresponding to the interactive node.

4. The video playing method according to claim 1, further comprising: displaying interactive feedback information according to the real-time facial action, the interactive feedback information comprising at least a face model image, wherein a facial action of the face model image is adjusted in real time according to the real-time facial action.

5. The video playing method according to claim 1 or 4, wherein detecting, based on the real-time facial image, whether the real-time facial action of the user matches the target facial action required by the interactive prompt information comprises: identifying a real-time face rotation angle corresponding to the real-time facial image; and detecting whether the real-time face rotation angle is within a preset target face rotation angle range.

6. The video playing method according to claim 5, wherein the target face rotation angle range comprises a target face rotation direction and a lower angle limit value corresponding to the target face rotation direction; identifying the real-time face rotation angle corresponding to the real-time facial image comprises: identifying a real-time face rotation direction corresponding to the real-time facial image, and a rotated angle value in the real-time face rotation direction; and detecting whether the real-time face rotation angle is within the preset target face rotation angle range comprises: detecting whether the real-time face rotation direction is consistent with the target face rotation direction, and when the real-time face rotation direction is consistent with the target face rotation direction, detecting whether the rotated angle value in the real-time face rotation direction is greater than the lower angle limit value.

7. The video playing method according to claim 6, further comprising: if it is detected that the real-time face rotation direction is inconsistent with the target face rotation direction and the rotated angle value in the real-time face rotation direction is greater than the lower angle limit value, displaying a warning icon; and if it is detected that the real-time face rotation direction is consistent with the target face rotation direction and the rotated angle value in the real-time face rotation direction is greater than the lower angle limit value, displaying a correct icon.

8. The video playing method according to claim 1, wherein playing, according to the matching result, the branch video content of the video corresponding to the matching result comprises: if the matching result is that the real-time facial action matches the target facial action, playing first branch video content of the video; and if the matching result is that the real-time facial action does not match the target facial action, playing second branch video content of the video.

9. The video playing method according to claim 1 or 8, wherein if it is detected within a preset interaction time that the real-time facial action matches the target facial action, the matching result is that the real-time facial action matches the target facial action; and if it is not detected within the preset interaction time that the real-time facial action matches the target facial action, then after the interaction time, the matching result is that the real-time facial action does not match the target facial action.

10. The video playing method according to claim 1, wherein before playing, according to the matching result, the branch video content of the video corresponding to the matching result, the method further comprises: displaying interaction result prompt information according to the matching result, wherein the interaction result prompt information comprises: interaction success result prompt information for when the matching result is that the real-time facial action matches the target facial action, and interaction failure result prompt information for when the matching result is that the real-time facial action does not match the target facial action.

11. The video playing method according to claim 10, wherein before displaying the interaction result prompt information according to the matching result, the method further comprises: detecting whether a preset display time of the interaction result prompt information is greater than 0; and if the display time of the interaction result prompt information is greater than 0, entering the step of displaying the interaction result prompt information according to the matching result.

12. The video playing method according to claim 1, wherein before playing, according to the matching result, the branch video content of the video corresponding to the matching result, the method further comprises: after the matching result is determined, detecting whether a preset video content skip switch is turned on; if the video content skip switch is turned on, entering the step of playing, according to the matching result, the branch video content of the video corresponding to the matching result; and if the video content skip switch is not turned on, entering that step when the playing of the video content corresponding to the interactive node ends.

13. A video playing apparatus, comprising: a script obtaining module, configured to obtain an interactive script corresponding to a video to be played if the video has an interactive tag; an interactive prompt display module, configured to display, when the video is played to an interactive node of the video recorded in the interactive script, interactive prompt information corresponding to the interactive node; a detection module, configured to obtain a real-time facial image of a user and detect, based on the real-time facial image, whether a real-time facial action of the user matches a target facial action required by the interactive prompt information; and a branch content playing module, configured to play, according to a matching result, branch video content of the video corresponding to the matching result.

14. A terminal, comprising: at least one memory and at least one processor, wherein the memory stores one or more computer-executable instructions, and the processor invokes the one or more computer-executable instructions to execute the video playing method according to any one of claims 1-12.

15. A storage medium, wherein the storage medium stores one or more computer-executable instructions, and the one or more computer-executable instructions are used to execute the video playing method according to any one of claims 1-12.
CN201911279555.3A 2019-12-13 2019-12-13 Video playback method, device, terminal and storage medium Active CN112995772B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911279555.3A CN112995772B (en) 2019-12-13 2019-12-13 Video playback method, device, terminal and storage medium

Publications (2)

Publication Number Publication Date
CN112995772A true CN112995772A (en) 2021-06-18
CN112995772B CN112995772B (en) 2024-11-19

Family

ID=76332255

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911279555.3A Active CN112995772B (en) 2019-12-13 2019-12-13 Video playback method, device, terminal and storage medium

Country Status (1)

Country Link
CN (1) CN112995772B (en)

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050059488A1 (en) * 2003-09-15 2005-03-17 Sony Computer Entertainment Inc. Method and apparatus for adjusting a view of a scene being displayed according to tracked head motion
CN102163081A (en) * 2011-05-13 2011-08-24 北京新岸线软件科技有限公司 Method, electronic equipment and device for screen interaction
CN104801039A (en) * 2015-04-30 2015-07-29 浙江工商大学 Virtual reality gaming device and scene realization method
US9367196B1 (en) * 2012-09-26 2016-06-14 Audible, Inc. Conveying branched content
CN106383575A (en) * 2016-09-07 2017-02-08 北京奇虎科技有限公司 VR video interactive control method and apparatus
CN106774936A (en) * 2017-01-10 2017-05-31 上海木爷机器人技术有限公司 Man-machine interaction method and system
CN106851407A (en) * 2017-01-24 2017-06-13 维沃移动通信有限公司 A method and terminal for controlling video playback progress
CN107197135A (en) * 2016-03-21 2017-09-22 成都理想境界科技有限公司 A kind of video generation method, player method and video-generating device, playing device
CN107948751A (en) * 2017-11-24 2018-04-20 互影科技(北京)有限公司 The playback method and device of branching storyline video
CN108040284A (en) * 2017-12-21 2018-05-15 广东欧珀移动通信有限公司 Radio station control method for playing back, device, terminal device and storage medium
CN108156523A (en) * 2017-11-24 2018-06-12 互影科技(北京)有限公司 The interactive approach and device that interactive video plays
CN109151540A (en) * 2017-06-28 2019-01-04 武汉斗鱼网络科技有限公司 The interaction processing method and device of video image
CN109224432A (en) * 2018-08-30 2019-01-18 Oppo广东移动通信有限公司 Control method, device, storage medium and the wearable device of entertainment applications
US20190297376A1 (en) * 2018-03-23 2019-09-26 Rovi Guides, Inc. Systems and methods for obscuring presentation of media objects during playback of video based on interactions with other media objects

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHUO Li; ZHAO; ZHANG Jing; ZHOU Zhenli: "User-driven interactive stereoscopic video streaming system", Journal of Beijing University of Technology, no. 06, 10 June 2013 (2013-06-10) *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20240619

Address after: Room 201, No. 9 Fengxiang East Street, Yangsong Town, Huairou District, Beijing
Applicant after: Youku Culture Technology (Beijing) Co.,Ltd.
Country or region after: China

Address before: Fourth floor, mailbox 847, Capital Building, Grand Cayman, Cayman Islands
Applicant before: ALIBABA GROUP HOLDING Ltd.
Country or region before: Cayman Islands

GR01 Patent grant