CN111131876A - Control method, device and terminal for live video and computer readable storage medium - Google Patents
- Publication number
- CN111131876A (application number CN201911284534A)
- Authority
- CN
- China
- Prior art keywords
- playing
- slide
- video
- image
- video content
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/431—Generation of visual interfaces for content selection or interaction; Content or additional data rendering
- H04N21/4312—Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
- H04N21/4316—Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations for displaying supplemental content in a region of the screen, e.g. an advertisement in a separate window
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B5/00—Electrically-operated educational appliances
- G09B5/06—Electrically-operated educational appliances with both visual and audible presentation of the material to be studied
- G09B5/065—Combinations of audio and video presentations, e.g. videotapes, videodiscs, television systems
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/442—Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
- H04N21/44213—Monitoring of end-user related data
- H04N21/44218—Detecting physical presence or behaviour of the user, e.g. using sensors to detect if the user is leaving the room or changes his face expression during a TV program
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/472—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
- H04N21/47217—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for controlling playback functions for recorded or on-demand content, e.g. using progress bars, mode or play-point indicators or bookmarks
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/485—End-user interface for client configuration
- H04N21/4858—End-user interface for client configuration for modifying screen layout parameters, e.g. fonts, size of the windows
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/488—Data services, e.g. news ticker
- H04N21/4884—Data services, e.g. news ticker for displaying subtitles
Abstract
The embodiment of the application discloses a control method, apparatus, terminal and computer-readable storage medium for live video, wherein the method comprises the following steps: acquiring live video data; playing and displaying the video content corresponding to the live video data in a first playing display area of a display interface; receiving a first control instruction for the video content, the first control instruction being used to indicate on-demand review of the video content; and dividing the display interface into a second playing display area and a third playing display area according to the first control instruction. The second playing display area is used to continue playing and displaying the video content corresponding to the acquired live video data; the third playing display area is used to play back the video content received before the first control instruction. By implementing the embodiments of the application, the technical problem in the prior art that a user cannot trigger a review during a live video broadcast is solved.
Description
Technical Field
The present application relates to the field of internet technologies, and in particular, to a method, an apparatus, a terminal, and a computer-readable storage medium for controlling live video.
Background
With the rapid development of computer network technology, and of the mobile internet in particular, the mobile internet is gradually permeating every field of people's life and work. For example, people are increasingly accustomed to watching live video over the mobile internet, such as live game streams and live educational broadcasts.
In the prior art, when a user watching a live video wants to review content that has just been played, the requirement is often difficult to meet: whether to replay is typically decided subjectively by the director, and the replay is inserted at some break in the live broadcast. The user cannot trigger a review on his or her own during the live broadcast.
For example, PowerPoint (PPT) is widely used in settings such as conferences, training sessions and lectures. A teacher may teach with video files, PPT courseware or picture files; in PPT teaching, the teacher delivers the course with a PPT document prepared before class. PPT helps teachers explain problems and principles more clearly and vividly, and PPT teaching is currently a popular trend in online education. When a user watching an online PPT lecture is confused by, or did not clearly hear, the content just spoken and wants to replay or review it, how to switch between on-demand review and the current live broadcast, so that the user can better absorb the lecture content and enjoy higher viewing quality, is a hot research problem.
Disclosure of Invention
The embodiments of the application provide a control method, apparatus and terminal for live video and a computer-readable storage medium, which solve the technical problem in the prior art that a user cannot trigger a review during a live video broadcast, realize switching between on-demand review and the current live broadcast, better meet the user's need to watch live video content, and improve video playing quality.
In a first aspect, an embodiment of the present application provides a method for controlling live video, where the method includes:
acquiring video live broadcast data;
playing and displaying video content corresponding to the video live broadcast data in a first playing and displaying area of a display interface;
receiving a first control instruction aiming at the video content; the first control instruction is used for indicating on-demand review of the video content;
according to the first control instruction, a second playing display area and a third playing display area are divided on the display interface; the second playing and displaying area is used for continuously playing and displaying video content corresponding to the obtained video live broadcast data; the third playing display area is used for playing back the video content before the first control instruction is received.
By implementing this embodiment, when it is learned that the user needs to play back video during a live broadcast, the display interface is divided into a second playing display area and a third playing display area, so that earlier video content can be played back while the video content corresponding to the incoming live video data continues to play and display. This solves the technical problem in the prior art that the user cannot trigger playback during a live broadcast, realizes switching between on-demand playback and the current live broadcast, better meets the user's need to watch live video content, and improves viewing quality.
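The claimed division of the display interface can be illustrated with a minimal Python sketch. All class and attribute names here are illustrative assumptions; the patent does not prescribe any particular implementation.

```python
from dataclasses import dataclass, field

@dataclass
class DisplayArea:
    name: str
    mode: str  # "live" or "playback"

@dataclass
class LivePlayerController:
    # Initially the whole interface is one live area (the first playing display area).
    areas: list = field(default_factory=lambda: [DisplayArea("first", "live")])

    def on_review_instruction(self):
        # On the first control instruction, replace the single live area with
        # a live area (second) plus a playback area (third), as claimed.
        self.areas = [DisplayArea("second", "live"),
                      DisplayArea("third", "playback")]
        return self.areas

ctrl = LivePlayerController()
areas = ctrl.on_review_instruction()
print([(a.name, a.mode) for a in areas])  # [('second', 'live'), ('third', 'playback')]
```

The live stream keeps feeding the "second" area while the "third" area seeks backward, which is the core of the claimed switching.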
In one possible implementation, the video content before receiving the first control instruction includes a PPT video file; the PPT video file comprises an image corresponding to each slide in the PPT document and a playing time stamp of each slide;
after the display interface is divided into a second playing display area and a third playing display area, the method further comprises the following steps:
displaying a first slide image in the third playing display area; the first slide image is the slide image being played by the video content at the moment the first control instruction is received;
receiving a page turning instruction input by a user in the third playing display area, and turning the slide images according to the page turning instruction;
receiving a page turning determination instruction input by a user aiming at the second slide image;
and adjusting, according to the page turning determination instruction, the playing progress of the PPT video file to the playing timestamp corresponding to the second slide image, and playing from that point.
By implementing this embodiment, the terminal can obtain the playing timestamp corresponding to the second slide image from the mapping between slide images and playing timestamps, and then adjust the playing progress of the PPT video file to that timestamp for playing. The playing progress bar of the PPT video file can thus be moved precisely to the node that needs to be played, meeting the user's need to adjust the playing progress of the PPT video file quickly and accurately.
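The slide-to-timestamp lookup described above can be sketched as follows. The concrete timestamps and the function name are illustrative; the embodiment only requires that each slide image carry a playing timestamp measured from the start of the PPT video file.

```python
# Hypothetical mapping from slide index to its playing timestamp, in seconds
# from the start of the PPT video file.
slide_timestamps = {1: 0.0, 2: 42.5, 3: 118.0, 4: 260.0}

def seek_to_slide(slide_index, timestamps):
    """Return the playback position for the slide image the user confirmed."""
    if slide_index not in timestamps:
        raise KeyError(f"no timestamp recorded for slide {slide_index}")
    return timestamps[slide_index]

# The user issues a page turning determination instruction on slide 3:
print(seek_to_slide(3, slide_timestamps))  # 118.0
```

The player then sets its playback position to the returned value, so the progress bar lands exactly on the node to be replayed rather than requiring the user to scrub manually.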
In one possible implementation manner, the PPT document includes a first slide, the first slide includes animation play elements, an image corresponding to the first slide is a first dynamic image, and the first dynamic image is generated according to a play sequence of the animation play elements.
In one possible implementation, the method further includes:
when the PPT video file is displayed on a recommendation page, displaying a display motion picture on the page; the display motion picture is generated from key slide images contained in the PPT document.
In one possible implementation, the page turning instruction includes a first fast page turning instruction, and the turning of the slide images according to the page turning instruction includes:
if the number of slide images contained in the PPT document is smaller than a first threshold, turning from the currently displayed slide image to a third slide image according to the first fast page turning instruction; the third slide image is N images after or before the currently displayed slide image;
if the number of slide images contained in the PPT document is greater than or equal to the first threshold, turning from the currently displayed slide image to a fourth slide image; the fourth slide image is M images after or before the currently displayed slide image;
where N and M are positive integers and N is smaller than M.
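The adaptive step size can be sketched in a few lines of Python. The threshold and the values of N and M are illustrative assumptions; the patent only requires that N < M and that the larger step apply to longer documents.

```python
def page_turn_step(total_slides, threshold=30, n=3, m=10):
    """Fast page-turn step: N pages for short decks, M pages for long ones."""
    assert n < m  # required by the claim
    return n if total_slides < threshold else m

def fast_turn(current, total, direction, **kw):
    """Apply one fast page-turn; direction is +1 (forward) or -1 (backward)."""
    step = page_turn_step(total, **kw)
    target = current + direction * step
    return max(1, min(total, target))  # clamp to the deck boundaries

print(fast_turn(5, 20, +1))  # short deck (< 30 slides): jumps 3 pages -> 8
print(fast_turn(5, 80, +1))  # long deck (>= 30 slides): jumps 10 pages -> 15
```

Scaling the step with deck size means a long document can still be traversed in a few gestures while a short one is not overshot.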
In one possible implementation, the page turning instruction includes a second fast page turning instruction, and the turning of the slide images according to the page turning instruction includes:
according to the second fast page turning instruction, turning from the image set containing the currently displayed slide image up to an image in the previous image set, or down to an image in the next image set;
where, before the turning, the method further comprises: determining a plurality of image sets according to the relevance between the slide presentation contents in the PPT document.
By implementing this embodiment, the slide images are clustered according to the pairwise relevance of the slide presentation contents in the PPT document, so that a single page-turn operation skips over several slide images; the user can then input a page turning determination instruction on one of the slide images in the target image set. This reduces the number of page-turn operations and, in practice, improves the user's page-turning experience.
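One simple way to realize the set-based turning is a greedy grouping over consecutive-slide relevance scores, then moving set by set. The scoring, threshold and grouping strategy below are assumptions for illustration; the patent leaves the relevance measure open (it could, for instance, come from a trained model).

```python
def cluster_slides(similarities, threshold=0.6):
    """Start a new image set whenever the relevance between consecutive slides
    drops below the threshold. similarities[i] is a hypothetical relevance
    score between slide i and slide i+1."""
    sets, current = [], [0]
    for i, sim in enumerate(similarities):
        if sim >= threshold:
            current.append(i + 1)
        else:
            sets.append(current)
            current = [i + 1]
    sets.append(current)
    return sets

def turn_set(sets, current_slide, direction):
    """Second fast page turn: jump to the first slide of the adjacent set."""
    idx = next(i for i, s in enumerate(sets) if current_slide in s)
    idx = max(0, min(len(sets) - 1, idx + direction))
    return sets[idx][0]

# Five slides; slides 0-2 cover one topic, slides 3-4 another.
sets = cluster_slides([0.9, 0.8, 0.2, 0.7])
print(sets)                    # [[0, 1, 2], [3, 4]]
print(turn_set(sets, 1, +1))   # from the first topic, jump to slide 3
```

A single gesture thus crosses a whole topic group instead of one slide at a time.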
In a possible implementation manner, in the process of playing and displaying the video content corresponding to the live video data in the first playing and displaying area of the display interface, the video content includes barrage information; the method further comprises the following steps:
under the condition that the bullet screen information is in an activated state, displaying the bullet screen information in a first sub-area, and playing the video content in a second sub-area; the first sub-area and the second sub-area are both located in the first playing display area, and the first sub-area and the second sub-area are not overlapped.
By implementing this embodiment, when the PPT video file is in a full-screen playing state and the bullet screen information is in an activated state, the bullet screen information and the playing window of the PPT video file are displayed in separate sub-areas, which prevents the bullet screen information from blocking the content of the PPT document.
In a possible implementation manner, in the process of playing and displaying the video content corresponding to the live video data in the first playing and displaying area of the display interface, the video content includes barrage information; the method further comprises the following steps:
under the condition that the bullet screen information is in an activated state, displaying the bullet screen information in a first sub-area, and playing the video content in a second sub-area; the first sub area and the second sub area form the first playing display area, and the first sub area and the second sub area are not overlapped.
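The non-overlapping partition of the first playing display area can be sketched as a simple layout computation. The split ratio and the placement of the bullet screen strip at the top are illustrative choices, not requirements of the claims.

```python
def split_play_area(width, height, barrage_active, barrage_ratio=0.25):
    """Partition the first playing display area into a bullet-screen sub-area
    and a video sub-area with no overlap. Rectangles are (x, y, w, h)."""
    if not barrage_active:
        return {"video": (0, 0, width, height)}
    barrage_h = int(height * barrage_ratio)
    return {
        "barrage": (0, 0, width, barrage_h),                  # first sub-area
        "video":   (0, barrage_h, width, height - barrage_h), # second sub-area
    }

print(split_play_area(1920, 1080, True))
```

Because the two rectangles tile the area without intersecting, the bullet screen can never occlude the PPT content, matching both variants above (sub-areas inside, or composing, the first playing display area).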
In a possible implementation manner, after the dividing the display interface into the second play display area and the third play display area, the method further includes:
detecting video content played in the second playing display area;
when detecting that the played video content contains target characteristic information, outputting switching reminding information; and the switching reminding information is used for reminding whether to switch back to play and display the video content corresponding to the obtained video live broadcast data in the first playing and displaying area.
By implementing this embodiment, the video content played in the second playing display area can be analyzed by a pre-trained detection model or feature extraction model. When the played video content contains target feature information, which indicates that the current content includes important information or information of interest to the current user, switching reminding information is output asking whether to switch back to playing and displaying the video content corresponding to the live video data in the first playing display area. The user is thus intelligently reminded to switch back to the live broadcast, can learn of important or interesting content in time, and video playing quality is further improved.
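The reminder logic downstream of the detection model can be sketched as a set comparison. The feature names and the overlap rule are assumptions for illustration; in the embodiment the features would come from a trained detection or feature extraction model applied to the live frames.

```python
def check_switch_reminder(frame_features, target_features, overlap=1):
    """Emit switching reminding information when the features detected in the
    currently played content share at least `overlap` items with the target
    feature set (important content, or content the user is interested in)."""
    hits = set(frame_features) & set(target_features)
    if len(hits) >= overlap:
        return f"Reminder: live stream now shows {sorted(hits)} - switch back?"
    return None

targets = {"exam_point", "new_chapter"}
print(check_switch_reminder({"whiteboard", "exam_point"}, targets))
print(check_switch_reminder({"whiteboard"}, targets))  # None: no reminder
```

Returning `None` when nothing matches means the user reviewing in the third area is only interrupted when the live stream actually warrants it.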
In a second aspect, an embodiment of the present application provides a control apparatus for live video, where the apparatus includes a unit configured to perform the method of the first aspect. Specifically, the apparatus may include:
the acquisition unit is used for acquiring video live broadcast data;
the first display unit is used for displaying the video content corresponding to the video live broadcast data in a first playing display area of a display interface;
a first instruction receiving unit configured to receive a first control instruction for the video content; the first control instruction is used for indicating on-demand review of the video content;
the display area dividing unit is used for dividing a second playing display area and a third playing display area on the display interface according to the first control instruction; the second playing and displaying area is used for continuously playing and displaying video content corresponding to the obtained video live broadcast data; the third playing display area is used for playing back the video content before the first control instruction is received.
In one possible implementation, the video content before receiving the first control instruction includes a PPT video file; the PPT video file comprises an image corresponding to each slide in the PPT document and a playing time stamp of each slide;
the device further comprises:
a second display unit, configured to display the first slide image in the third playing display area after the display area dividing unit divides the display interface into the second playing display area and the third playing display area; the first slide image is the slide image being played by the video content at the moment the first control instruction is received;
the second instruction receiving unit is used for receiving a page turning instruction input by a user in the third playing display area;
the page turning unit is used for turning the slide images according to the page turning instruction;
a third instruction receiving unit, configured to receive a page turning determination instruction input by a user for the second slide image;
and the adjusting unit is used for adjusting the playing progress of the PPT video file to the playing time stamp corresponding to the second slide image according to the page turning determining instruction and playing.
In one possible implementation manner, the PPT document includes a first slide, the first slide includes animation play elements, an image corresponding to the first slide is a first dynamic image, and the first dynamic image is generated according to a play sequence of the animation play elements.
In one possible implementation, the apparatus further includes:
the first processing unit is used for displaying a display motion picture on the page when the page recommendation display is carried out on the PPT video file; wherein the display motion picture is generated according to a key slide image contained in the PPT document.
In one possible implementation manner, the page turning instruction includes a first fast page turning instruction; the page turning unit is specifically configured to:
if the number of slide images contained in the PPT document is smaller than a first threshold, turning from the currently displayed slide image to a third slide image according to the first fast page turning instruction; the third slide image is N images after or before the currently displayed slide image;
if the number of slide images contained in the PPT document is greater than or equal to the first threshold, turning from the currently displayed slide image to a fourth slide image; the fourth slide image is M images after or before the currently displayed slide image;
where N and M are positive integers and N is smaller than M.
In one possible implementation manner, the page turning instruction includes a second fast page turning instruction; the page turning unit is specifically configured to:
according to the second fast page turning instruction, turning from the image set containing the currently displayed slide image up to an image in the previous image set, or down to an image in the next image set;
where, before the turning, a plurality of image sets are determined according to the relevance between the slide presentation contents in the PPT document.
In a possible implementation manner, in the process of playing and displaying the video content corresponding to the live video data in the first playing and displaying area of the display interface, the video content includes barrage information; the device further comprises:
the second processing unit is used for displaying the bullet screen information in the first sub-area and playing the video content in the second sub-area under the condition that the bullet screen information is in an activated state; the first sub-area and the second sub-area are both located in the first playing display area, and the first sub-area and the second sub-area are not overlapped.
In a possible implementation manner, in the process of playing and displaying the video content corresponding to the live video data in the first playing and displaying area of the display interface, the video content includes barrage information; the device further comprises:
the third processing unit is used for displaying the bullet screen information in the first sub-area and playing the video content in the second sub-area under the condition that the bullet screen information is in an activated state; the first sub area and the second sub area form the first playing display area, and the first sub area and the second sub area are not overlapped.
In one possible implementation, the apparatus further includes:
the detection unit is used for detecting the video content played in the second playing display area after the display area dividing unit divides a second playing display area and a third playing display area in the display interface;
the reminding unit is used for outputting switching reminding information when the played video content is detected to contain the target characteristic information; and the switching reminding information is used for reminding whether to switch back to play and display the video content corresponding to the obtained video live broadcast data in the first playing and displaying area.
In a third aspect, an embodiment of the present application provides a terminal, including a processor and a memory, where the processor and the memory are connected to each other, where the memory is used to store a computer program that supports the terminal to execute the method described above, and the computer program includes program instructions, and the processor is configured to call the program instructions to execute the method described above in the first aspect.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium storing a computer program, the computer program comprising program instructions that, when executed by a processor, cause the processor to perform the method of the first aspect.
In a fifth aspect, embodiments of the present application further provide a computer program, where the computer program includes program instructions, and the program instructions, when executed by a processor, cause the processor to execute the method of the first aspect.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings used in the description of the embodiments will be briefly introduced below.
Fig. 1A is a schematic flowchart of a control method for live video provided in an embodiment of the present application;
fig. 1B is a schematic diagram of an architecture of a live video provided in an embodiment of the present application;
FIG. 2 is a schematic view of an interface display of a live video provided by an embodiment of the present application;
fig. 3A is a schematic flowchart of a control method for video playback according to an embodiment of the present application;
fig. 3B is a schematic interface diagram of a video playback control provided in an embodiment of the present application;
fig. 3C is a schematic diagram illustrating a partitioned display of bullet screen information and a playing of a PPT video file according to an embodiment of the present application;
fig. 3D is a schematic diagram of another partition display bullet screen information and playing of a PPT video file according to an embodiment of the present application;
fig. 4 is a schematic block diagram of a control device for live video provided in an embodiment of the present application;
fig. 5 is a schematic block diagram of a terminal according to an embodiment of the present application.
Detailed Description
Technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the present application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in the specification of the present application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon" or "in response to a determination" or "in response to a detection". Similarly, the phrase "if it is determined" or "if a [ described condition or event ] is detected" may be interpreted contextually to mean "upon determining" or "in response to determining" or "upon detecting [ described condition or event ]" or "in response to detecting [ described condition or event ]".
In the embodiment of the application, for each slide of the PPT document, a corresponding slide image can be generated. In the case that a slide includes a plurality of animation playing elements, the generated slide image includes a plurality of sub-images; that is, each animation playing element has a corresponding sub-image.
In the embodiment of the present application, the playing time of the slide image may be calculated by using the time when the video starts playing as the starting time.
In this embodiment of the application, the video format of the PPT video file may include WMV (Windows Media Video, a streaming media format from Microsoft), MPEG-1 (Moving Picture Experts Group, the VCD format), AVI (Audio Video Interleave), SWF (Shockwave Flash, a special format of the animation design software Flash, an animation file format supporting vector and bitmap graphics), and the like; the embodiment of the application is not specifically limited in this respect.
Fig. 1A is a schematic flowchart of a method for controlling live video provided in an embodiment of the present application, and as shown in fig. 1A, the method includes, but is not limited to, the following steps:
Step S100, acquiring video live broadcast data;
Specifically, the terminal on the viewer side may acquire the live video data through the network. Fig. 1B is a schematic diagram of a live video architecture provided by the embodiment of the present application; the architecture includes one or more terminals 101 on the viewer side, a server 102, and a live device 103 on the anchor side. After acquiring the live video data, the live device 103 sends the live video data to the server 102 through the network, and the server 102 then forwards the live video data to the terminal 101, so that the terminal 101 can acquire the live video data.
Step S102, playing and displaying video content corresponding to the video live broadcast data in a first playing and displaying area of a display interface;
Specifically, the first playing display area in the embodiment of the present application may be a full-screen area, that is, the entire display interface; as shown in fig. 2, an interface display diagram of live video provided in this embodiment of the present application, the first playing display area 201 may also be a non-full-screen area, and the present application is not limited in this respect. The acquired video live broadcast data is continuously played in the first playing display area.
Step S104, receiving a first control instruction aiming at the video content; the first control instruction is used for indicating on-demand review of the video content;
specifically, in the process of watching the live video, if there is a review requirement for the content that has just been watched, the user may input a first control instruction to instruct that the video content is to be reviewed on demand. The terminal receives a first control instruction for the video content.
The input of the first control instruction may be triggered according to a first operation input by a user, where the first operation may be an operation of clicking a virtual button or a control, an operation of pressing a display area, a preset sliding operation, and the like, and the application is not limited.
Step S106, dividing a second playing display area and a third playing display area on the display interface according to the first control instruction.
Specifically, as shown in fig. 2, in response to the first control instruction, the terminal divides the display interface into a second playing display area 202 and a third playing display area 203, which is equivalent to a split-screen display. The second playing display area 202 is used for continuously playing and displaying the video content corresponding to the acquired video live broadcast data; the third playing display area 203 is used for playing back the video content from before the first control instruction was received. The user can control the playing of the video content in the third playing display area 203 to review it.
In the following, the embodiments of the present application will be described by taking the presentation of a PPT document on the anchor side as an example: the video content in the embodiment of the present application may be a PPT video file.
The PPT video file in the embodiment of the present application may contain an image corresponding to each slide in the PPT document and a playing time stamp of each slide. Specifically, after video content uploaded by live broadcast equipment on the anchor side is acquired, the server may acquire an image corresponding to each slide in the PPT document and a playing time stamp of each slide by analyzing the video content, and then generate a PPT video file. The PPT video file also includes audio data for the anchor presentation.
In a possible implementation manner, the server may obtain in advance a PPT document uploaded by the live device on the anchor side, then play the PPT document according to a received playing control command from the anchor, and generate a corresponding slide image or picture for each slide in the PPT document; the server acquires the playing time stamp of each slide in the playing process and establishes a mapping relationship between the generated images and the time stamps, thereby generating the PPT video file. The mapping relationship facilitates the subsequent operation of adjusting the playing progress.
In a possible implementation manner, the server may obtain in advance a PPT document uploaded by the live device on the anchor side, analyze the degree of change of the live video content through an intelligent image analysis algorithm, extract the image frames whose degree of change exceeds a certain threshold to form slides, and record each change time as the playing time stamp of the corresponding slide. The server then establishes a mapping relationship between the generated images and the time stamps, thereby generating the PPT video file. The mapping relationship facilitates the subsequent operation of adjusting the playing progress.
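The change-degree extraction described above can be sketched as follows. This is only an illustrative sketch, not the patent's algorithm: frames are modeled as flat grayscale pixel lists, and the 0.2 threshold and function names are assumed example values.

```python
def change_degree(prev, curr):
    """Mean absolute pixel difference, normalized to [0, 1] (pixels in 0..255)."""
    assert len(prev) == len(curr)
    return sum(abs(a - b) for a, b in zip(prev, curr)) / (255.0 * len(prev))

def extract_slides(frames, timestamps, threshold=0.2):
    """Return (image, timestamp) pairs where the change degree exceeds threshold.

    frames: list of flat grayscale pixel lists; timestamps: seconds from the
    start of playing. The first frame always starts a slide; each recorded
    timestamp becomes the playing time stamp of the extracted slide.
    """
    slides = [(frames[0], timestamps[0])]
    for i in range(1, len(frames)):
        if change_degree(frames[i - 1], frames[i]) > threshold:
            slides.append((frames[i], timestamps[i]))
    return slides
```

In a production system the differencing would run on decoded video frames (e.g. via a computer-vision library); the pure-Python version above only shows the thresholding logic.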
In the embodiment of the present application, the image corresponding to a slide is specifically an image containing part or all of the content in the slide. The playing time stamp of each slide image includes: the starting time point and the playing duration of each slide image. For example, as shown in table 1, the PPT document includes 10 slides, and for each slide a corresponding slide image is generated, namely image 1, image 2, ..., image 10. Taking image 1 as an example, the starting time point at which the image starts playing is 0s, and the duration for which the image is played is 2min20s.
TABLE 1 mapping relationship between images and timestamps
| Slide image | Numbering | Starting time point | Playing duration |
| --- | --- | --- | --- |
| Image 1 | 1 | 0s | 2min20s |
| Image 2 | 2 | 2min20s | 2min25s |
| Image 3 | 3 | 4min45s | 2min35s |
| ...... | ...... | ...... | ...... |
| Image 10 | 10 | 45min20s | 4min00s |
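The mapping in Table 1 can be represented programmatically. A minimal sketch follows, assuming the `XminYs` time-stamp notation used in the table; the `to_seconds` helper and `TABLE_1` names are illustrative, not from the patent.

```python
import re

def to_seconds(stamp):
    """Parse a '45min20s' / '0s' style time stamp into whole seconds."""
    m = re.fullmatch(r"(?:(\d+)min)?(\d+)s", stamp)
    return int(m.group(1) or 0) * 60 + int(m.group(2))

# Mapping from Table 1: slide image -> (starting time point, playing duration).
TABLE_1 = {
    "Image 1": (to_seconds("0s"), to_seconds("2min20s")),
    "Image 2": (to_seconds("2min20s"), to_seconds("2min25s")),
    "Image 3": (to_seconds("4min45s"), to_seconds("2min35s")),
}
```

Note that each starting time point equals the previous image's starting time point plus its playing duration (e.g. 2min20s + 2min25s = 4min45s), which is what makes the mapping usable for progress adjustment.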
In a possible implementation manner, when the server sends the video live broadcast data to the terminal on the viewer side, the server may simultaneously send to the terminal the image corresponding to each slide in the PPT document and the playing time stamp of each slide. Alternatively, when the user inputs the first control instruction, the image corresponding to each slide in the PPT document and the playing time stamp of each slide are sent to the terminal.
With reference to fig. 3A, which is a schematic flowchart of a video playback control method provided in the embodiment of the present application, the following describes how the video content is played and displayed in the third playing display area after the second playing display area and the third playing display area are divided from the display interface:
Step S300, after the second playing display area and the third playing display area are divided on the display interface, displaying a first slide image in the third playing display area; the first slide image is the slide image being played in the video content at the moment the first control instruction is received;
In this embodiment of the application, as shown in fig. 3B, touch buttons may be displayed in the third playing display area of the terminal; specifically, the touch buttons may include a play button and page-turning buttons. The play button is used for controlling the playing of the video layer (playing may include starting and pausing playback); the page-turning buttons are used for controlling the turning of the slide images. In an embodiment of the present application, the page-turning buttons may include a page-up button, a page-down button, a fast page-up button, and a fast page-down button. The page-up button is used to turn from the currently displayed slide image (i.e., the first slide image) to the previous slide image; the page-down button is used to turn the currently displayed slide image to the next slide image; the fast page-up button is used to turn the currently displayed slide image to the slide image N positions earlier, where N is a positive integer greater than 1; the fast page-down button is used to turn the currently displayed slide image to the slide image N positions later, where N is a positive integer greater than 1. It should be noted that the display of the play button and the page-turning buttons provided in the embodiment of the present application is only an example and is not limiting; for example, the page-turning buttons may be displayed on the left and right sides of the terminal.
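The button behavior above can be sketched as a small index-stepping function. This is an assumed model: the `turn_page` name, the action strings, and clamping to the valid range are illustrative and not specified by the patent.

```python
def turn_page(current, total, action, n=3):
    """Return the new slide index (0-based) for a page-turning button press.

    'up'/'down' turn one slide; 'fast_up'/'fast_down' turn n slides (n > 1).
    The result is clamped to the valid range [0, total - 1].
    """
    step = {"up": -1, "down": 1, "fast_up": -n, "fast_down": n}[action]
    return max(0, min(total - 1, current + step))
```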
In this embodiment of the present application, in a case that a slide includes an animation playback element, a slide image corresponding to a current playback time may be a sub-image corresponding to a certain animation playback element, or may be an image in which all animation playback elements complete animation effect display.
In some implementations, if the PPT document contains a first slide that contains animation playing elements, the animation playing elements may include entrance or exit effects for text, pictures, charts, video, and the like. In this case, the terminal may obtain the playing sequence of the animation playing elements contained in the first slide and generate a first dynamic image of the first slide according to that playing sequence. Then, after the terminal receives a playing control instruction initiated by the user, if the slide image corresponding to the current playing time contains animation playing elements, the dynamic image is played and displayed according to the playing sequence of those animation playing elements; that is, the PPT video file in the playing state is covered by the dynamic image of the slide. The dynamic images of the slides can convey the richness of the slide presentation effects, so that the user can better understand the content of the PPT document, which can improve the user experience.
In some implementations, the implementation process of the terminal generating the dynamic image according to the playing sequence of the animation playing elements may include: the terminal obtains the starting playing time and the ending playing time of each animation playing element in the PPT video file, and then the PPT video file can be cut according to the starting playing time and the ending playing time of each animation playing element to obtain the animation playing video corresponding to each animation playing element. Taking the first animation playing video corresponding to the first animation playing element as an example, in this case, the terminal decodes the first animation playing video to obtain a video frame sequence corresponding to the first animation playing video, and then extracts a preset number of video frames in the video frame sequence according to a preset extraction rule, so as to generate a dynamic image corresponding to the first animation playing element according to the extracted video frames. Here, the preset decimation rule may include decimation at a fixed video frame interval, for example, decimation by one video frame every two frames, decimation by one video frame every three frames, and so on. In this implementation, it can be ensured that the generated dynamic image completely contains the characteristics of the animation playing element.
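The cutting and fixed-interval extraction steps can be sketched as follows, assuming the animation playing video is already decoded into a frame list at a known frame rate; the function names are illustrative.

```python
def clip_frames(frames, fps, start_s, end_s):
    """Cut the decoded frame sequence to the [start_s, end_s) playing window
    of one animation playing element."""
    return frames[int(start_s * fps):int(end_s * fps)]

def decimate(frames, every=2):
    """Fixed-interval extraction rule: keep one video frame out of every
    `every` frames of the clipped sequence."""
    return frames[::every]
```

The dynamic image of the element would then be assembled from `decimate(clip_frames(...))`, so the extracted frames still span the whole animation and retain its characteristics.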
In this embodiment of the application, the terminal may determine whether the video frame of the PPT video file includes an animation playing element by detecting whether the video frame of the PPT video file changes, and acquire the animation playing video corresponding to the animation playing element when detecting that the video frame of the PPT video file changes.
In the embodiment of the application, the terminal may detect whether a video frame of the PPT video file has changed by using an image recognition detection algorithm, for example, the image recognition detection algorithm may include a convolutional neural network, and may also include a deep learning neural network, and the like.
In some implementations, the process by which the terminal generates the dynamic image according to the playing sequence of the animation playing elements may include: the terminal acquires the time point at which each animation playing element completes its animation effect display, acquires one video frame image according to each such time point, and then generates the dynamic image from the video frame images corresponding to the animation playing elements. For example, slide image 1 contains 3 animation playing elements, where animation playing element 1 completes its animation effect display at 0.00 seconds, animation playing element 2 at 1.20 seconds, and animation playing element 3 at 3.40 seconds; the terminal obtains the video frame images corresponding to these 3 time points and then generates a dynamic image from the 3 video frame images. In practical applications, considering that the duration for displaying the dynamic image is a set duration (e.g., 3 seconds), a 1-second video frame image sequence may instead be obtained at each of the above 3 time points (i.e., an averaged acquisition manner), and the dynamic image is then generated from the 3 video frame image sequences.
In some implementations, taking slide image 1 as an example, the playing duration of slide image 1 is 4 seconds, where animation playing element 1 completes its animation effect display at 0.00 seconds, animation playing element 2 at 1.20 seconds, and animation playing element 3 at 3.40 seconds. The process by which the terminal generates the dynamic image according to the playing sequence of the animation playing elements may include: the terminal determines the display duration of each animation playing element according to the playing duration of slide image 1 and the time point at which each animation playing element completes its animation effect display, and then determines the ratio between each element's display duration and the playing duration. Considering that the duration for displaying the dynamic image is a set duration (for example, 3 seconds), a corresponding video frame image sequence is obtained for each animation playing element according to the ratio between its display duration and the playing duration, and the dynamic image is then generated from the obtained video frame image sequences.
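The ratio-based allocation can be sketched as follows. This sketch assumes a fixed frame budget for the dynamic image and that each element is displayed from its completion time until the next element completes (the last one until the end of the slide); the function name and defaults are illustrative.

```python
def allocate_gif_frames(completion_times, slide_duration, gif_frames=30):
    """Split a fixed dynamic-image frame budget across animation elements.

    completion_times: time point at which each element finishes its effect.
    Each element's display duration runs from its completion time to the next
    element's completion time; frames are allocated in proportion to
    display_duration / slide_duration.
    """
    bounds = completion_times + [slide_duration]
    durations = [bounds[i + 1] - bounds[i] for i in range(len(completion_times))]
    return [round(gif_frames * d / slide_duration) for d in durations]
```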
The above-mentioned process of generating the moving image may be completed by the server, and then the server sends the moving image to the terminal for displaying.
Step S302, receiving a page turning instruction input by a user in the third playing display area, and turning the slide images according to the page turning instruction;
In this embodiment of the application, the page-turning instruction may be a click operation, a press operation, a slide operation, an operation on the page-turning buttons shown in fig. 3B, or the like. For example, the user performs a press operation on a page-turning button displayed in the third playing display area. After receiving the page-turning instruction input by the user in the third playing display area, the terminal may perform a page-up or page-down operation on the plurality of slide images to turn the slide images.
If, after turning from the current slide image to the next slide image, the next slide image contains animation playing elements, the next slide image is displayed according to the playing sequence of the animation playing elements it contains.
In this embodiment, the page turning instruction may include a normal page turning instruction and a fast page turning instruction.
Taking an ordinary page-turning instruction as an example, in some implementations, after receiving a page-turning instruction input by the user, the terminal turns 1 slide image per page-turning operation. For example, there are 10 slide images, namely image 1, image 2, ..., image 10; the user performs a page-turning operation on image 1, and image 2 is then displayed on the display screen of the terminal, that is, the currently displayed image 1 is turned to image 2. When the number of slide images is small, this makes it convenient for the user to input a page-turning determination instruction.
Taking the fast page-turning instruction being a first fast page-turning instruction as an example: if the number of slide images contained in the PPT document is smaller than a first threshold, the currently displayed slide image is turned to a third slide image according to the first fast page-turning instruction, where the third slide image is the image N positions after or N positions before the currently displayed slide image. If the number of slide images contained in the PPT document is greater than or equal to the first threshold, the currently displayed slide image is turned to a fourth slide image, where the fourth slide image is the image M positions after or M positions before the currently displayed slide image. N and M are positive integers, and N is smaller than M.
In some implementations, the number of slide images turned by one page-turning operation may be set according to how many slide images there are. For example, when the number of slide images is smaller than a set first threshold (for example, 30), the number of slides turned each time under the first fast page-turning instruction is N, e.g., 3 slide images per page-turning operation; when the number of slide images is greater than or equal to the set first threshold, the number turned each time is M, e.g., 5 slide images per page-turning operation. Alternatively, the number of slide images turned each time may be N when the number of slide images is less than or equal to the set first threshold, and M when the number of slide images is greater than the set first threshold.
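The threshold-dependent step size can be sketched as follows; the defaults N=3, M=5, and first threshold 30 mirror the examples above, while the function name and clamping behavior are assumptions.

```python
def fast_turn(current, num_slides, direction, first_threshold=30, n=3, m=5):
    """First fast page-turning instruction: turn n slide images when the
    document has fewer than first_threshold slide images, otherwise m (n < m).
    The resulting 0-based index is clamped to the valid range."""
    step = n if num_slides < first_threshold else m
    target = current + step if direction == "down" else current - step
    return max(0, min(num_slides - 1, target))
```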
Taking the example in which the fast page-turning instruction comprises a second fast page-turning instruction: according to the second fast page-turning instruction, the display is turned from the image set containing the currently displayed slide image to an image in the previous image set or to an image in the next image set. Before such turning, the method further comprises: determining a plurality of image sets according to the relevance between the slide presentation contents in the PPT document.
The image in the flipped image set may be the first image or the key image in the image set, or the last image. The key image can be an image which can represent the content of the image set after being analyzed by a related content analysis algorithm.
In some implementation manners, from the perspective of user experience, when a user learns with a PPT video file, in order to better understand a related knowledge point in the PPT document (two adjacent slides in a PPT document often have a high degree of association between their presentation contents) and achieve the best learning effect, the user often has a high demand for adjusting the playing progress of the PPT video file. In this case, the terminal may obtain the association degree between two slide presentations in the PPT document, determine whether that association degree is greater than a preset threshold (for example, 0.6 or 0.8), and cluster the two slides together when it is, thereby obtaining M image sets. Here, M is a positive integer greater than 0.
For example, taking a PPT document titled "yearly summary" that contains 10 slides: slide 1 forms slide image set A "summary outline"; slides 2, 3, 4, 5, and 6 form slide image set B "work summary"; slides 7 and 8 form slide image set C "problems"; and slides 9 and 10 form slide image set D "new plan". That is, the PPT document can be divided by content into 4 slide image sets. In this case, one page-turning operation turns one slide image set, and the user can then input a page-turning determination instruction among the slide images contained in that set, reducing the complexity of the page-turning operation.
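The clustering of adjacent slides by association degree can be sketched as follows. This is a minimal sketch assuming the association degree is given for each adjacent slide pair; the representation and function name are illustrative, not the patent's algorithm.

```python
def cluster_slides(association, threshold=0.6):
    """Group consecutive slides into image sets.

    association[i] is the association degree between slide i and slide i + 1
    (so len(association) == num_slides - 1). A new image set starts whenever
    the degree drops to the threshold or below. Returns lists of 0-based
    slide indices.
    """
    sets, current = [], [0]
    for i, degree in enumerate(association):
        if degree > threshold:
            current.append(i + 1)
        else:
            sets.append(current)
            current = [i + 1]
    sets.append(current)
    return sets
```

With association degrees matching the "yearly summary" example, this yields the four sets {1}, {2..6}, {7, 8}, {9, 10}.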
It should be noted that, after the slides are initially displayed in the third playing display area, the pages generally cannot be turned forward, because the current slide is by default the newest slide in the reviewed video content; page turning can only be performed backward.
Step S304, receiving a page turning determination instruction input by a user aiming at the second slide image;
in this embodiment of the application, the page turning determination instruction may be triggered according to a third operation input by the user, where the third operation may be a click operation, a press operation, a slide operation, or the like. Here, by inputting a page-turning determination instruction to the second slide image, an adjusted playback node of the PPT video file may be determined, and then playback may be performed according to the adjusted playback node. The first operation, the second operation, and the third operation may be the same or different, and the present application is not particularly limited.
Step S306, adjusting the playing progress of the PPT video file to the playing time stamp corresponding to the second slide image according to the page-turning determination instruction, and playing.
In the embodiment of the application, after the terminal receives a page turning determination instruction input by a user for the second slide image, the terminal obtains the play timestamp corresponding to the second slide image from the mapping relationship between the slide images and the play timestamps, and then the terminal starts to play the PPT video file from the play timestamp corresponding to the second slide image in the third play display area, that is, the play progress of the PPT video file is adjusted to the play timestamp corresponding to the second slide image for playing, so that the adjustment of the play progress can be completed.
In practical applications, after the terminal receives a page turning determination instruction input by the user for the second slide image, the terminal starts to play the PPT video file in the third play display area from the play timestamp corresponding to the second slide image, and at the same time, the terminal may hide the second slide image, which may be understood as overlaying the PPT video file in the play state on the second slide image.
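The progress adjustment reduces to a lookup in the image-to-timestamp mapping. A minimal sketch: the mapping shape follows Tables 1 and 2, and the function name is illustrative.

```python
def seek_to_slide(mapping, slide_id):
    """Return the playback position (seconds) for a confirmed slide image.

    mapping: slide id -> (start_seconds, duration_seconds), as in Table 1/2.
    The player then resumes the PPT video file from this time stamp, and the
    displayed slide image is hidden (covered by the video in the playing state).
    """
    start, _duration = mapping[slide_id]
    return start
```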
In order to facilitate better understanding of the technical solutions described in the present application, the following description is made with reference to specific examples. For example, taking the PPT video file as "yearly summary" as an example, the PPT video file contains 10 slide images, and specifically, the mapping relationship between the 10 slide images and the playing time stamp can be shown in table 2:
TABLE 2 mapping relationship between images and timestamps
In practical application, the terminal plays the PPT video file "yearly summary", and the slide image played at the current time is image 5. The user inputs a playing control instruction for the PPT video file "yearly summary" in the playing state. After receiving the playing control instruction, the terminal displays in the third playing display area the slide image corresponding to the current playing time (i.e., image 5). The terminal then receives a page-turning operation input by the user in the third playing display area and turns the slide images. Next, the terminal receives a page-turning determination instruction input by the user for the second slide image (image 2), obtains the playing time stamp (0min25s) corresponding to the second slide image from the mapping relationship between slide images and playing time stamps (e.g., table 2), and starts playing the PPT video file in the third playing display area from that playing time stamp (0min25s), thereby completing the adjustment of the playing progress.
By implementing the embodiment of the application, the terminal can acquire the playing time stamp corresponding to the second slide image from the mapping relation between the slide images and the playing time stamps, then, the playing progress of the PPT video file is adjusted to the playing time stamp corresponding to the second slide image to be played, the playing progress bar of the PPT video file can be accurately adjusted to the node needing to be played, and the requirement of a user for quickly and accurately adjusting the playing progress of the PPT video file can be met.
In some implementations, in order to attract many users to watch PPT video files, a page of the video playing platform may display high-quality PPT video files in a page recommendation list, where the page may be the front page of the application platform. In the prior art, a PPT video file is often presented in the page recommendation list in a static manner, which cannot fully reflect the characteristics of the PPT video file and therefore cannot attract the user's attention to the greatest extent. Based on this, in the embodiment of the application, when performing page recommendation display for a PPT video file, a key slide image in the PPT document is acquired, and a display motion picture is generated from the key slide image and displayed on the page, where the display motion picture may be used to describe the characteristics of the PPT video file. Therefore, when selecting a PPT video file of interest, the user can quickly learn from the display motion picture which key contents the PPT video file covers, which can improve the click-through efficiency of the PPT video file.
In this embodiment of the application, the terminal may select one or more representative slide images from the generated slide images as key slide images, and then generate a display moving picture according to the key slide images.
In the embodiment of the application, the slide images can be screened according to the key content in the PPT document, so that one or more representative slide images can be screened and obtained as the key slide images. Specifically, after being analyzed by a related content analysis algorithm, one or more key slide images which can represent the content of the image set are obtained.
It should be noted that, in the embodiment of the present application, the execution timing of generating the display motion picture from the key slide image is not limited. For example, in some implementations, when the terminal performs page recommendation display for a PPT video file, the slide images may first be screened to obtain the key slide image, and the display motion picture is then generated from it; in other implementations, the display motion picture may also be generated from the key slide image before the page recommendation display is performed. The implementation manner and execution sequence of generating the display motion picture from the key slide image described here are only exemplary and not exhaustive; those skilled in the art may make other modifications or changes based on the technical solutions of the present application while understanding their spirit, and as long as the implemented functions and achieved technical effects are similar to those of the present application, they shall fall within the protection scope of the present application.
In some implementations, the PPT video file may include bullet screen information, which may be a note clip generated when a user previously viewed the PPT video file, or comment information generated while users viewed it; for example, the comment information may include, but is not limited to: "Awesome!", "Wonderful!", and the like.
For example, as shown in fig. 3C, in the process of playing and displaying, in the first playing display area of the display interface, the video content corresponding to the video live broadcast data, when the bullet screen information is in an activated state, the first playing display area may be divided into a first sub-area and a second sub-area, where the two sub-areas are both located in the first playing display area and do not overlap each other. Specifically, the bullet screen information is displayed in the first sub-area, and the PPT video file is played in the second sub-area: the area of the original PPT video file is scaled into the second sub-area for playing. In this way, the bullet screen information and the PPT video file can be displayed in different areas, preventing the bullet screen information from blocking the PPT document content. Alternatively, in fig. 3C the blank areas can also be understood as belonging to the second sub-area; that is, the first sub-area and the second sub-area together constitute the first playing display area, but the PPT video file is not displayed across the whole second sub-area. The area of the original PPT video file is scaled into the second sub-area for playing, and in order to preserve the aspect ratio of the original PPT video file, the left and right edges of the second sub-area are blank areas (areas without content display, which may be fully white or fully black areas, etc.).
For another example, as shown in fig. 3D, in the process of playing and displaying, in the first play display region of the display interface, the video content corresponding to the live video data, when the bullet screen information is in an activated state the first play display region may be divided into a first sub-region and a second sub-region, where the two sub-regions together constitute the first play display region and do not overlap each other. Specifically, the bullet screen information is displayed in the first sub-region and the PPT video file is played in the second sub-region, so that the bullet screen information and the PPT video file are displayed in different regions and the bullet screen information is prevented from blocking the document content.
In this embodiment of the present application, the bullet screen information being in the activated state may be triggered by the user turning on a "bullet screen switch button".
In some implementations, the size of the second sub-region may be obtained by scaling the display region of the PPT video file according to a preset ratio, or by taking such a scaled region and extending it with the blank margins produced between its edges and the edges of the terminal display region after scaling; this is not limited in the embodiments of the present application.
In some implementation manners, the terminal can also detect a blank area in the PPT video file in a playing state through an intelligent algorithm, and display the bullet screen information in the blank area of the PPT video file.
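One simple way to realize such a blank-area detector is to look for low-variance bands in the current frame; the variance threshold and function below are illustrative assumptions standing in for whatever "intelligent algorithm" is deployed:

```python
import numpy as np

def find_blank_rows(frame, var_threshold=20.0):
    """Return indices of pixel rows whose intensity variance is low,
    i.e. candidate blank bands where bullet screen text could be drawn."""
    gray = frame.mean(axis=2) if frame.ndim == 3 else frame
    row_var = gray.var(axis=1)
    return [i for i, v in enumerate(row_var) if v < var_threshold]

# Synthetic 100x200 frame: top half uniform (blank), bottom half noisy.
rng = np.random.default_rng(0)
frame = np.full((100, 200), 255.0)
frame[50:] = rng.uniform(0, 255, size=(50, 200))
blank = find_blank_rows(frame)
print(len(blank))  # -> 50 (the uniform top half)
```

A production system would additionally require the blank band to be tall enough for the bullet screen font and to remain stable across several frames.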
It is noted that while for simplicity of explanation, the foregoing method embodiments have been described as a series of acts or combination of acts, it will be appreciated by those skilled in the art that the present disclosure is not limited by the order of acts, as some steps may, in accordance with the present disclosure, occur in other orders and concurrently. Further, those skilled in the art will also appreciate that the embodiments described in the specification are exemplary embodiments and that acts and modules referred to are not necessarily required by the disclosure.
It is further noted that, although the steps in the flowchart of fig. 1A are shown in sequence as indicated by the arrows, they are not necessarily performed in that sequence. Unless explicitly stated otherwise herein, the execution of these steps is not strictly limited in order, and they may be performed in other orders. Moreover, at least some of the steps in fig. 1A may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments, and whose execution order is not necessarily sequential: they may be performed in turn or alternately with other steps, or with at least some of the sub-steps or stages of other steps.
In some implementations, the method for controlling live video according to the present application may, after the display interface is divided into the second play display area and the third play display area, further include:
detecting video content played in the second playing display area;
when it is detected that the played video content contains target feature information, outputting switching reminder information; the switching reminder information is used to remind the user whether to switch back to playing and displaying, in the first play display area, the video content corresponding to the acquired live video data.
Specifically, the terminal or the server may collect a large number of training samples of video content in advance and train one or more detection models or feature extraction models on them; different types of video content may correspond to different detection models or feature extraction models. The detection model or feature extraction model is used to analyze the video content and detect whether it contains information that the user follows, is interested in, or should pay attention to.
For example, for educational video content, keywords such as "key point", "core", "attention" and "examination point" may be obtained through training as the target feature information; whether the target feature information is contained can then be detected from the video images or the audio data, and if so, the switching reminder information is output.
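As a minimal illustration of this keyword-based variant, the sketch below matches a keyword list against transcript text; the keyword list and the transcript source (recognized speech or OCR'd frame text) are assumptions, as a deployed system would run a trained model instead:

```python
TARGET_KEYWORDS = ("key point", "core", "attention", "examination point")

def contains_target_feature(transcript, keywords=TARGET_KEYWORDS):
    """Return the first matched keyword, or None if no target
    feature information is present in the transcript text."""
    text = transcript.lower()
    for kw in keywords:
        if kw in text:
            return kw
    return None

hit = contains_target_feature("Please pay attention, this is an examination point.")
print(hit)  # -> "attention"
```

When `contains_target_feature` returns a keyword, the terminal would output the switching reminder information described above.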
In some implementations, the output form of the switching reminder information includes, but is not limited to, highlighting target information in the second play display area (for example, flashing a target button in a certain area several times) or playing a switching reminder voice to the user.
By implementing this embodiment of the application, the video content played in the second play display area can be analyzed by a pre-trained detection model or feature extraction model; when the played video content contains target feature information (which indicates that the current video content contains important content or content the current user is interested in), switching reminder information is output to remind the user whether to switch back to playing and displaying, in the first play display area, the video content corresponding to the live video data. This intelligently reminds the user to switch back to the live broadcast, so that the user learns of important or interesting content in time, further improving the video playing quality.
In order to better implement the method of the embodiment of the present application, the embodiment of the present application further describes a schematic structural diagram of a control device for live video broadcast, which belongs to the same application concept as the embodiment of the method described in fig. 1A. The following detailed description is made with reference to the accompanying drawings:
as shown in fig. 4, the control device 40 for live video may include:
the acquiring unit 400 is used for acquiring video live broadcast data;
the first display unit 402 is configured to play and display video content corresponding to the live video data in a first play display area of a display interface;
the first instruction receiving unit 404 is configured to receive a first control instruction for the video content; the first control instruction is used for indicating on-demand review of the video content;
the display area dividing unit 406 is configured to divide a second play display area and a third play display area on the display interface according to the first control instruction; the second playing and displaying area is used for continuously playing and displaying video content corresponding to the obtained video live broadcast data; the third playing display area is used for playing back the video content before the first control instruction is received.
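The division performed by unit 406 can be pictured as a simple split of the display interface into the two play display areas; the 50/50 split and the rectangle tuples are illustrative assumptions, not a layout specified by the patent:

```python
def divide_display(width, height, vertical=True, ratio=0.5):
    """Split the display interface into a second play display area
    (live stream continues) and a third play display area (playback).
    Rectangles are (x, y, w, h)."""
    if vertical:
        w2 = int(width * ratio)
        second = (0, 0, w2, height)
        third = (w2, 0, width - w2, height)
    else:
        h2 = int(height * ratio)
        second = (0, 0, width, h2)
        third = (0, h2, width, height - h2)
    return second, third

print(divide_display(1920, 1080))  # -> ((0, 0, 960, 1080), (960, 0, 960, 1080))
```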
In one possible implementation, the video content before receiving the first control instruction includes a PPT video file; the PPT video file comprises an image corresponding to each slide in the PPT document and a playing time stamp of each slide;
the control device 40 for live video may further include:
a second display unit configured to display the first slide image in the third play display area after the display area dividing unit divides the second play display area and the third play display area in the display interface; the first slide image is the slide image being played by the video content at the moment the first control instruction is received;
the second instruction receiving unit is used for receiving a page turning instruction input by a user in the third playing display area;
the page turning unit is used for turning the slide images according to the page turning instruction;
a third instruction receiving unit, configured to receive a page turning determination instruction input by a user for the second slide image;
and the adjusting unit is used for adjusting the playing progress of the PPT video file to the playing time stamp corresponding to the second slide image according to the page turning determining instruction and playing.
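The review flow realized by these units (show the current slide, page-turn, confirm, then seek to the confirmed slide's play timestamp) can be sketched as follows; the data structure and names are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class Slide:
    image: str        # path or id of the slide image
    timestamp: float  # play timestamp of this slide, in seconds

class PptReview:
    def __init__(self, slides, current_index):
        self.slides = slides
        self.index = current_index  # slide shown when review started

    def turn(self, delta):
        """Page-turn by delta slides, clamped to the document bounds."""
        self.index = max(0, min(len(self.slides) - 1, self.index + delta))
        return self.slides[self.index]

    def confirm(self):
        """Return the timestamp the player should seek to."""
        return self.slides[self.index].timestamp

slides = [Slide(f"slide{i}.png", 30.0 * i) for i in range(10)]
review = PptReview(slides, current_index=4)
review.turn(-2)          # user pages back two slides
print(review.confirm())  # -> 60.0
```

The value returned by `confirm` corresponds to the play timestamp to which the adjusting unit moves the playing progress of the PPT video file.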
In one possible implementation manner, the PPT document includes a first slide, the first slide includes animation play elements, an image corresponding to the first slide is a first dynamic image, and the first dynamic image is generated according to a play sequence of the animation play elements.
In a possible implementation manner, the control apparatus 40 for live video may further include:
the first processing unit is used for displaying a display motion picture on the page when the page recommendation display is carried out on the PPT video file; wherein the display motion picture is generated according to a key slide image contained in the PPT document.
In one possible implementation manner, the page turning instruction includes a first fast page turning instruction; the page turning unit is specifically configured to:
if the number of slide images contained in the PPT document is smaller than a first threshold, turning from the currently displayed slide image to a third slide image according to the first fast page turning instruction; the third slide image is the image N positions after or N positions before the currently displayed slide image;
if the number of slide images contained in the PPT document is greater than or equal to the first threshold, turning from the currently displayed slide image to a fourth slide image; the fourth slide image is the image M positions after or M positions before the currently displayed slide image;
where N and M are both positive integers, and N is smaller than M.
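A minimal sketch of this threshold-based fast page turn; the threshold and the step values N and M below are illustrative, since the patent does not fix them:

```python
def fast_turn(current, total_slides, direction, threshold=50, n=3, m=10):
    """Jump n slides for short decks, m slides for long ones (n < m).
    direction is +1 (forward) or -1 (backward); result is clamped."""
    step = n if total_slides < threshold else m
    return max(0, min(total_slides - 1, current + direction * step))

print(fast_turn(20, 30, +1))   # short deck: 20 + 3 -> 23
print(fast_turn(20, 200, +1))  # long deck: 20 + 10 -> 30
```

The design intent is that a single fast-turn gesture covers proportionally more ground in a long document, keeping the number of gestures needed to reach any slide roughly constant.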
In one possible implementation manner, the page turning instruction includes a second fast page turning instruction; the page turning unit is specifically configured to:
according to the second fast page turning instruction, turning from the image set containing the currently displayed slide image up to an image in the previous image set or down to an image in the next image set;
wherein, before the turning from the image set containing the currently displayed slide image to an image in the previous image set or to an image in the next image set, the method further comprises: determining a plurality of image sets according to the relevance between the slide presentation contents in the PPT document.
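One way to form such relevance-based image sets is to group consecutive slides whose contents are related; the similarity measure below (word overlap between slide titles) is an illustrative stand-in for whatever relevance analysis is actually used:

```python
def group_slides(titles, min_overlap=1):
    """Group consecutive slides into sets; start a new set when a slide's
    title shares fewer than `min_overlap` words with the previous title."""
    sets, current = [], [0]
    for i in range(1, len(titles)):
        prev = set(titles[i - 1].lower().split())
        cur = set(titles[i].lower().split())
        if len(prev & cur) >= min_overlap:
            current.append(i)
        else:
            sets.append(current)
            current = [i]
    sets.append(current)
    return sets

titles = ["Intro", "Intro Goals", "Method Overview", "Method Details", "Results"]
print(group_slides(titles))  # -> [[0, 1], [2, 3], [4]]
```

The second fast page turning instruction then jumps between these sets rather than between individual slides.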
In a possible implementation manner, in the process of playing and displaying the video content corresponding to the live video data in the first playing and displaying area of the display interface, the video content includes barrage information; the control device 40 for live video may further include:
the second processing unit is used for displaying the bullet screen information in the first sub-area and playing the video content in the second sub-area under the condition that the bullet screen information is in an activated state; the first sub-area and the second sub-area are both located in the first playing display area, and the first sub-area and the second sub-area are not overlapped.
In a possible implementation manner, in the process of playing and displaying the video content corresponding to the live video data in the first playing and displaying area of the display interface, the video content includes barrage information; the control device 40 for live video may further include:
the third processing unit is used for displaying the bullet screen information in the first sub-area and playing the video content in the second sub-area under the condition that the bullet screen information is in an activated state; the first sub area and the second sub area form the first playing display area, and the first sub area and the second sub area are not overlapped.
In a possible implementation manner, the control apparatus 40 for live video may further include:
the detection unit is used for detecting the video content played in the second playing display area after the display area dividing unit divides a second playing display area and a third playing display area in the display interface;
the reminding unit is used for outputting switching reminding information when the played video content is detected to contain the target characteristic information; and the switching reminding information is used for reminding whether to switch back to play and display the video content corresponding to the obtained video live broadcast data in the first playing and displaying area.
In order to better implement the above scheme of the embodiment of the present application, the present application further provides another schematic structural diagram of the terminal, and the following detailed description is provided with reference to the accompanying drawings:
as shown in fig. 5, which is a schematic structural diagram of another terminal provided in the embodiment of the present application, the terminal 500 may include at least one processor 501, a communication bus 502, a memory 503, and at least one communication interface 504.
The processor 501 may be a general-purpose Central Processing Unit (CPU), a microprocessor, an Application-Specific Integrated Circuit (ASIC), or one or more integrated circuits for controlling the execution of the programs of the present application.
The communication bus 502 may include a path that conveys information between the aforementioned components. The communication interface 504 may be any transceiver-like device used to communicate with other devices or communication networks, such as Ethernet, a Radio Access Network (RAN), or a Wireless Local Area Network (WLAN), and is used for acquiring the live video data.
The memory 503 may be a Read-Only Memory (ROM) or another type of static storage device that can store static information and instructions, a Random Access Memory (RAM) or another type of dynamic storage device that can store information and instructions, an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Compact Disc Read-Only Memory (CD-ROM) or other optical disc storage (including compact discs, laser discs, digital versatile discs, Blu-ray discs, and the like), a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto. The memory may be self-contained and coupled to the processor via the bus, or may be integrated with the processor.
The memory 503 is used for storing program codes for executing the scheme of the present application, and is controlled by the processor 501 to execute. The processor 501 is configured to execute the program code stored in the memory 503, and perform the following steps:
acquiring video live broadcast data through the communication interface 504;
playing and displaying video content corresponding to the video live broadcast data in a first playing and displaying area of a display interface through the output device 505;
receiving a first control instruction for the video content through an input device 506; the first control instruction is used for indicating on-demand review of the video content;
according to the first control instruction, a second playing display area and a third playing display area are divided on the display interface; the second playing and displaying area is used for continuously playing and displaying video content corresponding to the obtained video live broadcast data; the third playing display area is used for playing back the video content before the first control instruction is received.
Wherein the video content prior to receiving the first control instruction comprises a PPT video file; the PPT video file comprises an image corresponding to each slide in the PPT document and a playing time stamp of each slide;
After the display interface is divided into the second play display area and the third play display area, the processor 501 may further perform:
displaying the first slide image in the third play display area through the output device 505; the first slide image is the slide image being played by the video content at the moment the first control instruction is received;
receiving a page turning instruction input by a user in the third playing display area through an input device 506, and turning the slide images according to the page turning instruction;
receiving a page turning determination instruction input by the user for the second slide image through the input device 506;
and adjusting the playing progress of the PPT video file to the playing time stamp corresponding to the second slide image according to the page turning determining instruction, and playing.
In a possible implementation manner, after the processor 501 divides the display interface into the second play display area and the third play display area, the following may be further performed:
detecting video content played in the second playing display area;
when detecting that the played video content contains the target characteristic information, outputting switching reminding information through the output equipment 505; and the switching reminding information is used for reminding whether to switch back to play and display the video content corresponding to the obtained video live broadcast data in the first playing and displaying area.
The PPT document comprises a first slide, the first slide comprises animation playing elements, an image corresponding to the first slide is a first dynamic image, and the first dynamic image is generated according to the playing sequence of the animation playing elements.
Wherein the processor 501 is further configured to:
when the PPT video file is subjected to page recommendation display, displaying a display motion picture on the page; wherein the display motion picture is generated according to a key slide image contained in the PPT document.
The page turning instruction comprises a first quick page turning instruction; the turning of the slide images by the processor 501 according to the page turning instruction may include:
if the number of slide images contained in the PPT document is smaller than a first threshold, turning from the currently displayed slide image to a third slide image according to the first quick page turning instruction; the third slide image is the image N positions after or N positions before the currently displayed slide image;
if the number of slide images contained in the PPT document is greater than or equal to the first threshold, turning from the currently displayed slide image to a fourth slide image; the fourth slide image is the image M positions after or M positions before the currently displayed slide image;
where N and M are both positive integers, and N is smaller than M.
The page turning instruction comprises a second quick page turning instruction; the processor 501 turns the slide images according to the page turning instruction, including:
according to the second quick page turning instruction, turning from the image set containing the currently displayed slide image up to an image in the previous image set or down to an image in the next image set;
wherein, before the turning from the image set containing the currently displayed slide image to an image in the previous image set or to an image in the next image set, the method further comprises: determining a plurality of image sets according to the relevance between the slide presentation contents in the PPT document.
The PPT video file also comprises bullet screen information; the processor 501 may be further configured to:
in the process of displaying the video content corresponding to the video live broadcast data in a first playing display area of a display interface, under the condition that the bullet screen information is in an activated state, displaying the bullet screen information in a first subarea, and playing the video content in a second subarea; the first sub area and the second sub area form the first playing display area, and the first sub area and the second sub area are not overlapped.
The PPT video file also comprises bullet screen information; the processor 501 may be further configured to:
in the process of playing and displaying video content corresponding to the video live broadcast data in a first playing and displaying area of a display interface, under the condition that bullet screen information is in an activated state, the bullet screen information is displayed in a first sub-area, and the PPT video file is played in a second sub-area; the first sub area and the second sub area form the first playing display area, and the first sub area and the second sub area are not overlapped.
In a specific implementation, as an alternative embodiment, the processor 501 may include one or more CPUs, such as CPU0 and CPU1 in fig. 5.
In an alternative embodiment, terminal 500 may include multiple processors, such as processor 501 and processor 508 of fig. 5. Each of these processors may be a single-core (single-CPU) processor or a multi-core (multi-CPU) processor. A processor herein may refer to one or more devices, circuits, and/or processing cores for processing data (e.g., computer program instructions).
In this implementation, as an alternative embodiment, the terminal 500 may further include an output device 505 and an input device 506. An output device 505, which is in communication with the processor 501, may display information in a variety of ways. For example, the output device 505 may be a Liquid Crystal Display (LCD), a Light Emitting Diode (LED) Display device, a Cathode Ray Tube (CRT) Display device, a projector (projector), or the like, and is used for playing and displaying video content, including video content corresponding to the acquired live video data and also video content played back. The input device 506 is in communication with the processor 501 and can accept user input in a variety of ways. For example, the input device 506 may be a mouse, a keyboard, a touch screen device, or a sensing device, among others.
In a specific implementation, the terminal 500 may include a Mobile phone, a tablet computer, a Personal Digital Assistant (PDA), a Mobile Internet Device (MID), an intelligent wearable Device (such as a smart watch and a smart bracelet), and other terminals that can be used by various users, and the embodiment of the present application is not limited in particular.
The embodiments of the present application also provide a computer storage medium having instructions stored therein which, when executed on a computer or a processor, cause the computer or the processor to perform one or more steps of the method of any of the above embodiments. If the constituent modules of the above apparatus are implemented in the form of software functional units and sold or used as independent products, they may be stored in the computer-readable storage medium; based on this understanding, the technical solutions of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product stored in the computer-readable storage medium.
The computer readable storage medium may be an internal storage unit of the device according to the foregoing embodiment, such as a hard disk or a memory. The computer readable storage medium may be an external storage device of the above-described apparatus, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like. Further, the computer-readable storage medium may include both an internal storage unit and an external storage device of the device. The computer-readable storage medium is used for storing the computer program and other programs and data required by the apparatus. The above-described computer-readable storage medium may also be used to temporarily store data that has been output or is to be output.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments may be implemented by a computer program, which can be stored in a computer-readable storage medium, and can include the processes of the above embodiments of the methods when the computer program is executed. And the aforementioned storage medium includes: various media that can store program codes, such as ROM, RAM, magnetic or optical disks.
The steps in the method of the embodiment of the application can be sequentially adjusted, combined and deleted according to actual needs.
The modules in the device can be merged, divided and deleted according to actual needs.
Those of ordinary skill in the art will recognize that the units and algorithm steps of the various examples described in connection with the embodiments disclosed herein can be implemented as electronic hardware or as combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
Those of skill would appreciate that the functions described in connection with the various illustrative logical blocks, modules, and algorithm steps disclosed in the various embodiments disclosed herein may be implemented as hardware, software, firmware, or any combination thereof. If implemented in software, the functions described in the various illustrative logical blocks, modules, and steps may be stored on or transmitted over as one or more instructions or code on a computer-readable medium and executed by a hardware-based processing unit. The computer-readable medium may include a computer-readable storage medium, which corresponds to a tangible medium, such as a data storage medium, or any communication medium including a medium that facilitates transfer of a computer program from one place to another (e.g., according to a communication protocol). In this manner, a computer-readable medium may generally correspond to (1) a non-transitory tangible computer-readable storage medium, or (2) a communication medium, such as a signal or carrier wave. A data storage medium may be any available medium that can be accessed by one or more computers or one or more processors to retrieve instructions, code and/or data structures for implementing the techniques described herein. The computer program product may include a computer-readable medium.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Claims (10)
1. A control method for live video is characterized by comprising the following steps:
acquiring video live broadcast data;
playing and displaying video content corresponding to the video live broadcast data in a first playing and displaying area of a display interface;
receiving a first control instruction aiming at the video content; the first control instruction is used for indicating on-demand review of the video content;
according to the first control instruction, a second playing display area and a third playing display area are divided on the display interface; the second playing and displaying area is used for continuously playing and displaying video content corresponding to the obtained video live broadcast data; the third playing display area is used for playing back the video content before the first control instruction is received.
2. The method of claim 1, wherein the video content prior to receiving the first control instruction comprises a PPT video file; the PPT video file comprises an image corresponding to each slide in the PPT document and a playing time stamp of each slide;
after the display interface is divided into a second playing display area and a third playing display area, the method further comprises the following steps:
displaying the first slide image in the third play display area; the first slide image is the slide image being played by the video content at the moment the first control instruction is received;
receiving a page turning instruction input by a user in the third playing display area, and turning the slide images according to the page turning instruction;
receiving a page turning determination instruction input by the user for a second slide image;
and adjusting, according to the page turning determination instruction, the playing progress of the PPT video file to the playing time stamp corresponding to the second slide image, and playing from that point.
3. The method according to claim 2, wherein after the dividing of the display interface into the second playing display area and the third playing display area, the method further comprises:
detecting video content played in the second playing display area;
when it is detected that the played video content contains target characteristic information, outputting switching reminding information; the switching reminding information is used for prompting whether to switch back to playing and displaying, in the first playing display area, the video content corresponding to the acquired video live broadcast data.
4. The method according to claim 2 or 3, wherein the PPT document comprises a first slide, the first slide comprises animation playback elements, the image corresponding to the first slide is a first dynamic image, and the first dynamic image is generated according to a playback order of the animation playback elements.
5. The method according to claim 2 or 3, characterized in that the method further comprises:
when the PPT video file is displayed on a recommendation page, displaying a presentation animation on the page; wherein the presentation animation is generated according to key slide images contained in the PPT document.
6. The method of claim 2 or 3, wherein the page flip instruction comprises a second fast page flip instruction; the turning of the slide images according to the page turning instruction comprises the following steps:
according to the second quick page turning instruction, flipping from the image set containing the currently displayed slide image to an image in the previous image set or to an image in the next image set;
wherein, before the flipping from the image set containing the currently displayed slide image to an image in the previous image set or to an image in the next image set, the method further comprises: determining a plurality of image sets according to the relevance between slide presentation contents in the PPT document.
7. The method according to claim 2 or 3, wherein, in the process of playing and displaying the video content corresponding to the video live broadcast data in the first playing display area of the display interface, the video content comprises bullet screen information; the method further comprises:
in a case that the bullet screen information is in an activated state, displaying the bullet screen information in a first sub-area, and playing the video content in a second sub-area; the first sub-area and the second sub-area form the first playing display area, and the first sub-area and the second sub-area do not overlap.
8. A control apparatus for live video, comprising:
the acquisition unit is used for acquiring video live broadcast data;
the first display unit is used for playing and displaying the video content corresponding to the video live broadcast data in a first playing display area of a display interface;
a first instruction receiving unit configured to receive a first control instruction for the video content; the first control instruction is used for indicating on-demand review of the video content;
the display area dividing unit is used for dividing, according to the first control instruction, the display interface into a second playing display area and a third playing display area; the second playing display area is used for continuing to play and display the video content corresponding to the acquired video live broadcast data; the third playing display area is used for playing back the video content from before the first control instruction is received.
9. A terminal, characterized in that the terminal comprises a processor and a memory which are interconnected, wherein the memory is used for storing a computer program comprising program instructions, and the processor is configured to invoke the program instructions to perform the method according to any one of claims 1-7.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program comprising program instructions that, when executed by a processor, cause the processor to carry out the method according to any one of claims 1-7.
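Claims 2, 5 and 6 rest on one data structure: a PPT video file that carries, for each slide in the PPT document, a rendered slide image and a playing time stamp, so that confirming a page turn can seek playback to the chosen slide while the live stream continues in the second playing display area. The sketch below is a hypothetical illustration of that mapping only; the class and method names are the author's, not part of the claimed implementation.

```python
from bisect import bisect_right
from dataclasses import dataclass


@dataclass
class SlideEntry:
    """One slide of the PPT video file: its rendered image and playing time stamp."""
    image: str        # identifier of the slide image (hypothetical)
    timestamp: float  # seconds from the start of the PPT video file


class PptReviewController:
    """Hypothetical controller for the third playing display area."""

    def __init__(self, slides, instruction_time):
        # Slides must be sorted by timestamp. Start on the slide that was
        # playing at the moment the first control instruction was received.
        self.slides = slides
        times = [s.timestamp for s in slides]
        self.current = max(bisect_right(times, instruction_time) - 1, 0)

    def turn_page(self, delta):
        """Page turning instruction: step through slide images, clamped at the ends."""
        self.current = min(max(self.current + delta, 0), len(self.slides) - 1)
        return self.slides[self.current]

    def confirm(self):
        """Page turning determination instruction: return the time stamp to seek to."""
        return self.slides[self.current].timestamp
```

On confirmation, a player following claim 2 would set the playing progress of the PPT video file to the returned time stamp and resume playing from there.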
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201911284534.0A CN111131876B (en) | 2019-12-13 | 2019-12-13 | Control method, device and terminal for live video and computer readable storage medium |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN111131876A (en) | 2020-05-08 |
| CN111131876B CN111131876B (en) | 2022-06-24 |
Family
ID=70498807
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201911284534.0A Active CN111131876B (en) | 2019-12-13 | 2019-12-13 | Control method, device and terminal for live video and computer readable storage medium |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN111131876B (en) |
Citations (14)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20050041872A1 (en) * | 2003-08-20 | 2005-02-24 | Wai Yim | Method for converting PowerPoint presentation files into compressed image files |
| US20100303444A1 (en) * | 2009-05-27 | 2010-12-02 | Taiji Sasaki | Recording medium, playback device, encoding device, integrated circuit, and playback output device |
| CN103384346A (en) * | 2012-12-28 | 2013-11-06 | 深圳海联讯科技股份有限公司 | PPT file processing method and device |
| CN103607657A (en) * | 2013-11-20 | 2014-02-26 | 乐视网信息技术(北京)股份有限公司 | Method and device for realizing picture-in-picture playing function |
| US20140298179A1 (en) * | 2013-01-29 | 2014-10-02 | Tencent Technology (Shenzhen) Company Limited | Method and device for playback of presentation file |
| US20150071460A1 (en) * | 2013-09-06 | 2015-03-12 | Nathan K. Stiles | 2-Way Enhanced Live Recording Splicing (ELRS) |
| CN104572686A (en) * | 2013-10-17 | 2015-04-29 | 北大方正集团有限公司 | Method and device for processing PPT (power point) files |
| US20160065992A1 (en) * | 2014-08-27 | 2016-03-03 | Microsoft Technology Licensing, Llc | Exporting animations from a presentation system |
| CN105915985A (en) * | 2015-12-15 | 2016-08-31 | 乐视致新电子科技(天津)有限公司 | Method for performing watch-back in live broadcasting and device thereof |
| CN106792219A (en) * | 2016-12-20 | 2017-05-31 | 天脉聚源(北京)传媒科技有限公司 | A kind of live method and device reviewed |
| CN106816055A (en) * | 2017-04-05 | 2017-06-09 | 杭州恒生数字设备科技有限公司 | A kind of low-power consumption live teaching broadcast recording and broadcasting system for interacting and method |
| CN107920270A (en) * | 2017-10-27 | 2018-04-17 | 努比亚技术有限公司 | Video separated screen control method for playing back, terminal and computer-readable recording medium |
| US20180183849A1 (en) * | 2016-12-22 | 2018-06-28 | Hanwha Techwin Co., Ltd. | Method and device for media streaming between server and client using rtp/rtsp standard protocol |
| CN109981711A (en) * | 2017-12-28 | 2019-07-05 | 腾讯科技(深圳)有限公司 | Document dynamic playback method, device, system and computer-readable storage medium |
Non-Patent Citations (2)
| Title |
|---|
| ZHANG, Zhe: "Analysis of New Software for Micro-course Production", Modern Vocational Education * |
| ZHAO, Wei: "Design and Implementation of a Time-Shift TV and TV Review Management System", China Masters' Theses Full-text Database * |
Cited By (16)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN113301413B (en) * | 2020-05-11 | 2023-09-29 | 阿里巴巴(中国)网络技术有限公司 | Information display method and device |
| CN113301413A (en) * | 2020-05-11 | 2021-08-24 | 阿里巴巴集团控股有限公司 | Information display method and device |
| CN113992876A (en) * | 2020-07-27 | 2022-01-28 | 北京金山办公软件股份有限公司 | Method for recording document and playing video, storage medium and terminal |
| CN112954380A (en) * | 2021-02-10 | 2021-06-11 | 北京达佳互联信息技术有限公司 | Video playing processing method and device |
| CN112954380B (en) * | 2021-02-10 | 2023-03-21 | 北京达佳互联信息技术有限公司 | Video playing processing method and device |
| CN113038151B (en) * | 2021-02-25 | 2022-11-18 | 北京达佳互联信息技术有限公司 | Video editing method and video editing device |
| CN113038151A (en) * | 2021-02-25 | 2021-06-25 | 北京达佳互联信息技术有限公司 | Video editing method and video editing device |
| CN113490010B (en) * | 2021-07-06 | 2022-08-09 | 腾讯科技(深圳)有限公司 | Interaction method, device and equipment based on live video and storage medium |
| CN113490010A (en) * | 2021-07-06 | 2021-10-08 | 腾讯科技(深圳)有限公司 | Interaction method, device and equipment based on live video and storage medium |
| KR20230144582A (en) * | 2021-07-06 | 2023-10-16 | 텐센트 테크놀로지(센젠) 컴퍼니 리미티드 | Live streaming video-based interaction method and apparatus, device and storage medium |
| JP2024510998A (en) * | 2021-07-06 | 2024-03-12 | ▲騰▼▲訊▼科技(深▲セン▼)有限公司 | Live streaming video interaction methods, devices, equipment and computer programs |
| KR102758490B1 (en) | 2021-07-06 | 2025-01-21 | 텐센트 테크놀로지(센젠) 컴퍼니 리미티드 | Live streaming video-based interaction method and device, and device and storage medium |
| JP7640180B2 (en) | 2021-07-06 | 2025-03-05 | ▲騰▼▲訊▼科技(深▲セン▼)有限公司 | Method, device, equipment and computer program for live video interaction |
| CN114003574A (en) * | 2021-11-04 | 2022-02-01 | 中国银行股份有限公司 | Method and device for real-time synchronization and separation of remote documents |
| CN114051150A (en) * | 2021-11-11 | 2022-02-15 | 北京轨道交通路网管理有限公司 | Live broadcast method and device, electronic equipment and computer readable storage medium |
| WO2024193543A1 (en) * | 2023-03-21 | 2024-09-26 | 华为技术有限公司 | Screen splitting method and related apparatus |
Also Published As
| Publication number | Publication date |
|---|---|
| CN111131876B (en) | 2022-06-24 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN111131876B (en) | Control method, device and terminal for live video and computer readable storage medium | |
| CN111078070B (en) | PPT video barrage play control method, device, terminal and medium | |
| US20220392365A1 (en) | Interactive Oral Presentation Display System | |
| US9049482B2 (en) | System and method for combining computer-based educational content recording and video-based educational content recording | |
| CN116137662B (en) | Page display method and device, electronic device, storage medium and program product | |
| CA2873308C (en) | Rotatable object system for visual communication and analysis | |
| CN111078078B (en) | Video playing control method, device, terminal and computer readable storage medium | |
| CN113032626B (en) | Search result processing method, device, electronic equipment and storage medium | |
| JPWO2019130492A1 (en) | Cartoon data display system, method and program | |
| CN114846808A (en) | Content distribution system, content distribution method, and content distribution program | |
| EP4676067A1 (en) | Virtual gift generation method and apparatus, device, and medium | |
| CN112988008A (en) | Information display method and device, computer equipment and storage medium | |
| US12244551B2 (en) | System, method, and program for specifying character-string-based comment art | |
| CN112989112B (en) | Online classroom content collection method and device | |
| CN115580696A (en) | Layout switching method and system based on video communication desktop content | |
| JP2017199058A (en) | Recognition device, image content presentation system, program | |
| CN119402727B (en) | A video generation method, apparatus, device, storage medium, and program product. | |
| CN112492381A (en) | Information display method and device and electronic equipment | |
| CN113626585B (en) | Abstract generation method, device, electronic device and storage medium | |
| CN107609018B (en) | Search result presenting method and device and terminal equipment | |
| CN116980631A (en) | File processing methods, devices, program products, computer equipment and media | |
| CN117372397A (en) | Interaction method and device for drawing, electronic equipment and storage medium | |
| CN115379254A (en) | Live broadcasting method, live broadcasting device and electronic equipment | |
| CN112799751A (en) | A text display method, device, terminal device and storage medium |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | | |
| SE01 | Entry into force of request for substantive examination | | |
| GR01 | Patent grant | | |