
HK1238040A1 - Interactive applications implemented in video streams - Google Patents


Info

Publication number
HK1238040A1
HK1238040A1 (application HK17111412.8A)
Authority
HK
Hong Kong
Prior art keywords
video
script
interactive video
user
interactive
Prior art date
Application number
HK17111412.8A
Other languages
Chinese (zh)
Inventor
郭荣昌
杨昇龙
Original Assignee
英属开曼群岛商优比特思有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 英属开曼群岛商优比特思有限公司
Publication of HK1238040A1 publication Critical patent/HK1238040A1/en

Description

Interactive applications implemented in video streams
Technical Field
The present invention relates to methods for implementing interactive applications.
Background
Interactive applications, such as games, may be computationally intensive. For certain kinds of interactive applications, such as interactive multimedia applications, the main component of this high computational load is the need to generate video or audio in response to user input. Furthermore, the load may grow with the number of users, as the same imagery and sound may need to be generated separately for each of multiple users of a given application.
When such applications are hosted on servers, such as cloud-based servers, a large number of servers may consequently be required, and these servers are expensive to acquire, update, and maintain.
Better solutions are needed for computationally intensive interactive applications such as games.
Disclosure of Invention
Embodiments of the present invention convert the output of a multimedia computer program into a series of streaming video clips that can be distributed globally over a video streaming infrastructure consisting of network data centers (IDCs) and a Content Delivery Network (CDN).
Further, in some embodiments, each video clip is tagged with metadata, which may include, for example, an identifier and trigger information. The identifier may be unique to each video clip, and the trigger information may specify the identifier of the next clip to play, possibly as a function of current user input or other conditions.
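The per-clip metadata described above can be sketched as a small record. This is an illustrative assumption about the data layout, not the patent's actual storage format; the names `ClipMetadata`, `length_s`, and `triggers` are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class ClipMetadata:
    clip_id: str    # unique identifier for the clip
    length_s: float # clip length in seconds
    # trigger information: condition -> identifier of the next clip to play
    triggers: dict = field(default_factory=dict)

meta = ClipMetadata(clip_id="A", length_s=12.5,
                    triggers={"end_of_clip": "A'"})

def next_clip(meta, condition):
    """Return the ID of the next clip for a condition, or None if no match."""
    return meta.triggers.get(condition)

print(next_clip(meta, "end_of_clip"))  # A'
```

Because the trigger table maps conditions to clip identifiers rather than to clip data, the same lightweight metadata can drive playback regardless of where the clip bytes are stored.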
Generally, embodiments of the present invention include a video clip generation process and an interactive playback program.
During the production process, a user (or, in some variations, a simulated, robotic user) interacts with a conventional interactive computer program. In response to the user interaction, the computer program generates raw video and audio data. The production process stores the specific video and audio data generated as a result of user input or other event triggers and converts the video and audio data associated with each trigger condition into a streaming video clip. The clip is tagged with metadata including, for example, an ID, a trigger condition or play event, and a length. In some embodiments, the clip is then transmitted via a content delivery network to selected network data centers to support one or more interactive applications.
During playback, in some embodiments, such as those supporting interactive game play, a first video clip is played. At the end of the first video clip (or, in some embodiments, at any time during its play), the metadata is consulted to identify the condition or conditions that will trigger the next video clip. Upon detecting a trigger condition (e.g., the user pressing a button), the next video clip is played. Playback continues in this manner until the last video clip is played in response to the last trigger condition.
In some embodiments, playback occurs on a server, such as a cloud-based streaming server, and the content is streamed from the server to the user. In other embodiments, the content is streamed to the user via the CDN and the IDCs during playback.
The invention is described in detail below with reference to the drawings and specific examples, but the invention is not limited thereto.
Drawings
FIG. 1 is a block diagram of a distributed master-slave computer system supporting interactive real-time multimedia applications, according to an embodiment of the present invention;
FIG. 2 is a block diagram of a video streaming infrastructure, including a Content Delivery Network (CDN) and a plurality of network data centers (IDCs), used by embodiments of the present invention to distribute video clips;
FIG. 3 is a diagram depicting an interactive video clip generation and playback system, in accordance with an embodiment of the present invention;
FIG. 4 is a flow diagram of a video clip generation and playback procedure according to an embodiment of the present invention;
FIG. 5 is a diagram depicting a graph structure of a group of video clips, according to an embodiment of the invention;
FIG. 6 depicts a system for linking from linear video to interactive video scripts in accordance with an embodiment of the present invention;
FIG. 7 depicts a method for linking from linear video to interactive video scripts in accordance with an embodiment of the present invention.
Detailed Description
Embodiments of the present invention provide for the generation and playback of multimedia information, such as streaming video clips for interactive real-time media applications.
FIG. 1 is a block diagram of a distributed master-slave computer system 1000 supporting interactive real-time multimedia applications, according to an embodiment of the present invention. The computer system 1000 includes one or more server computers 101 and one or more user devices 103 configured by a computer program product 131. The computer program product 131 may be provided on a transitory or non-transitory computer readable medium; in certain embodiments, it is provided on a non-transitory computer readable medium, such as persistent (e.g., non-volatile) storage, volatile memory (e.g., random access memory), or various other known non-transitory computer readable media.
The user device 103 includes a Central Processing Unit (CPU) 120, memory 122, and storage space 121. The user device 103 also includes input and output (I/O) subsystems (not separately shown), including, for example, a display or touch display, a keyboard, a d-pad, a trackball, a touchpad, a joystick, a microphone, and/or other user interface devices and associated controller circuitry and/or software. User devices 103 may include any type of electronic device that can present media content. Some examples include desktop computers and portable electronic devices such as mobile phones, smart phones, multimedia players, e-readers, tablets/touch pads, notebook or laptop PCs, smart televisions, smart watches, head mounted displays, and other communication devices.
The server computer 101 includes a Central Processing Unit (CPU) 110, a storage space 111, and a memory 112 (and may include an I/O subsystem, not separately shown). Server computer 101 may be any computer device capable of hosting the computer program product 131 for communicating with one or more client computers, e.g., user device 103, via a network, e.g., network 102 (e.g., the web). The server computer 101 communicates with one or more client computers via the network and may employ protocols such as the Internet protocol suite (TCP/IP), Hypertext Transfer Protocol (HTTP) or HTTPS, a real-time protocol, or other protocols.
Memories 112 and 122 may comprise any known computer memory devices. The storage spaces 111 and 121 may comprise any known storage space device.
Although not shown, the memories 112 and 122 and/or the storage spaces 111 and 121 may also include any data storage device accessible by the server computer 101 and the user device 103, such as any memory (e.g., flash memory or an external hard disk) that is removable or portable, or any data storage space hosted by a third party (e.g., a cloud-side storage space), and is not limited thereto.
The user device 103 and the server computer 101 are accessed and communicate via the network 102. Network 102 includes wired and wireless connections including Wide Area Networks (WANs) and cellular networks or any other type of computer network used for inter-device communication.
In the illustrated embodiment, the computer program product 131 represents a computer program product, or a composition of computer program product parts, for execution on the respective server 101 and user device 103. The computer program product 131, partially loaded into the memory 112, configures the server 101 for recording and playing interactive streaming video clips as further described herein. The streaming video clips are played, for example, to a user device 103 capable of receiving streaming video, for example via a browser with HTML5 functionality.
FIG. 2 shows an example of a video streaming infrastructure used by embodiments of the present invention to distribute video clips. As shown, the video streaming infrastructure 2000 includes a Content Delivery Network (CDN) 200 and network data centers (IDCs) 210-260.
The media file 201 is initially stored in the file storage space 202 and is then distributed to the IDCs 210-260 via the CDN 200. After file distribution, each individual IDC has a local copy of the distributed media file, stored as a respective media file copy 211-261. Each IDC 210-260 then serves streaming media, such as video, to users in its geographic vicinity in response to user requests. The media file copies 211-261 may be updated periodically.
In some embodiments of the invention, video clips generated by the inventive processes disclosed herein are distributed using the video streaming infrastructure 2000. That is, for example, the video clips of the present invention are stored as media files 201 in the file storage space 202 and then distributed via the CDN 200 to the IDCs 210-260, where they can be played as streaming video to users.
In other embodiments, the inventive video clips are distributed directly from, for example, one or more servers, such as cloud-based servers, without using the video streaming infrastructure 2000.
Fig. 3 is a high-level block diagram of a system 3000 for generating and storing interactive video clips tagged with metadata and for distributing interactive video to user devices, in accordance with an embodiment of the present invention. System 3000 may be implemented as a hardware module or a software module, or a combination of hardware and software modules. In some embodiments, at least a portion of system 3000 includes software running on a server, such as server 101.
In the illustrated embodiment, system 3000 performs additional related functions in addition to generating and storing interactive video clips tagged with metadata. For example, in this embodiment system 3000 is also capable of playing back pre-stored video clips and of streaming video to a user in response to user interaction without first storing the video as a video clip. In alternative embodiments, these one or more functions may be provided by separate or multiple systems.
In fig. 3, the computer program 310 may be, for example, an interactive multimedia application. For example, the computer program 310 may be a game application. The computer program 310 generates an output program 320 in response to the input program 330.
In some embodiments, the output program 320 includes raw video and audio outputs, and in some embodiments, the output program 320 includes video rendering results.
In some embodiments, the input process 330 includes control messages indicating user input interactions, such as a user pressing a button, selecting an item from a list, or typing a command. Such user input interactions may originate from an input interface device 350, which may be an interface device associated with a user device, such as user device 103. Interface devices associated with a particular user device may include a joystick, mouse, touch screen, etc. In some embodiments, the input interface device 350 may be collocated with the remote user device 103 and communicate with other system components via a network. Although labeled an "interface device," those skilled in the art will appreciate that input devices such as interface device 350 may, in particular embodiments, be input components built into, i.e., part of, user device 103 (e.g., a touch screen or buttons), rather than separate devices plugged into user device 103.
In some embodiments, the input interface device 350 is a "robotic" entity that generates a series of input sequences that simulate real user behavior. Such a robotic entity may be used to "train" the system and cause it to generate many (or even all) possible instances of the output process 320. The purpose of "training" system 3000 in this manner may be, for example, to cause it to generate and store at least one copy of each video clip associated with output program 320.
The application interaction container 340 provides a runtime environment to run the computer program 310. In an embodiment of the present invention, the application interaction container 340 detects and intercepts user input generated through the input interface device 350 and passes the intercepted user input to the computer program 310 in the form of an input program 330.
The application interaction container 340 also intercepts the raw video and audio generated as the output program 320, converts the raw video and audio into a streaming video format using the services of the computer program video processing platform 360, and then stores the converted video and audio as one or more video segments or clips 370 in the database 390. Each clip represents an audio and video output program (or play event) responsive to a particular trigger condition, where the set of possible trigger conditions includes, for example, particular items of the input program 330. In some embodiments, the raw video and audio are converted to a multimedia packaging format; in some embodiments, they are converted to the format known as MPEG-2 transport stream (MPEG2-TS).
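The container's capture path can be sketched as follows: intercept the raw output produced for one trigger condition, hand it to a packaging step, and emit a clip record ready to be tagged and stored. All function and field names here are illustrative assumptions, and the packaging step is a stand-in for the real video processing platform.

```python
def package_to_stream(raw_frames, fmt="MPEG2-TS"):
    """Stand-in for the video processing platform 360: wrap raw output
    in a streaming container format (MPEG2-TS in some embodiments)."""
    return {"format": fmt, "payload": b"".join(raw_frames)}

def capture_clip(trigger, raw_frames):
    """Intercept raw output for one trigger condition and build a clip record."""
    clip = package_to_stream(raw_frames)
    clip["trigger"] = trigger          # the play event that produced this output
    clip["length"] = len(raw_frames)   # stand-in for duration metadata
    return clip

clip = capture_clip("press_X", [b"frame1", b"frame2"])
print(clip["format"], clip["length"])  # MPEG2-TS 2
```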
As video clips 370 are generated, they are also tagged with a set of attributes 380 (also referred to herein as "metadata"), including, for example, a clip ID, a play event, and a length. The attributes in metadata 380 and the associated video clips 370 are stored in database 390. A stored clip 370 may be used for future playback, and a stored, tagged video clip 370 may be reused by the same or a different user. Potentially, a given clip 370 can be reused by thousands of users interacting with the computer program 310 on a shared server or group of servers.
For example, the next time a given play event occurs (from the same user or a different user, based on, for example, detection of input from a particular user), the stored video clip 370 marked with that event can be played, thereby avoiding the need to recreate the corresponding original video and audio. For some applications, this may result in significant savings in computer processing power. See the following description of the playback process for further details.
As described above, in the illustrated embodiment, system 3000 can also play back pre-stored video clips. For example, based on user interaction via the input interface device 350 and the input program 330, the computer program 310 can determine that a particular pre-stored clip 370, having metadata 380 corresponding to the user interaction, is a valid and appropriate response to the user interaction. The matching clip 370 may then be retrieved from storage and streamed, e.g., according to a multimedia packaging format such as MPEG2-TS, to the user device 103.
As described above, in the illustrated embodiment, system 3000 may also stream video to a user in response to a user interaction even though the video is not yet stored as a streaming video clip 370. For example, based on the user interacting via input interface device 350 and the input program 330, the computer program 310 may determine that a particular video output is the appropriate response to the user interaction, but that no corresponding clip 370 is available. The desired video may then be produced by the computer program 310 as raw output video 320. The application interaction container 340 then intercepts the output program 320 and, using the services of the computer program video processing platform 360, converts the raw video into a streaming format (e.g., a multimedia packaging format such as MPEG2-TS) and sends the streaming video to the user device 103. Advantageously, the streaming video may simultaneously be recorded, packaged as a video clip 370, and stored with appropriate metadata 380 for future use.
FIG. 4 shows a process 4000 for producing, storing, and playing interactive video clips and associated metadata, according to an embodiment of the invention. In some embodiments, the process 4000 also supports other related functions, such as, for example, streaming video to a user without first storing the video as a video clip.
At step 410, the computer program is started at a server, such as server 101. The server may be, for example, a cloud-based server. The server may be, for example, a game server. The computer program may be, for example, an interactive multimedia application, such as, for example, a gaming application.
At step 420, the process monitors for user input.
At decision block 430, if no user input is detected, the process returns to step 420 and continues to monitor for user input. If user input is detected, control transfers to decision block 440.
At decision block 440, if a pre-stored video clip with matching metadata exists (i.e., the metadata corresponds to user input), control transfers to step 450 where the pre-stored video clip is streamed to the user. Control then returns to step 420 and the process continues to monitor for user input.
If, at decision block 440, no pre-stored clip with matching metadata is found, control transfers to step 460. At step 460, the video segments from the output program responsive to the user input are streamed to the user. At the same time, the video segments are recorded in preparation for the creation of the corresponding video clips. At step 470, the recorded video is packaged into video clips in streaming form. For example, the stream format may be a multimedia packaging format such as MPEG 2-TS.
In step 480, metadata associated with the video clip (e.g., clip ID, play event or trigger, length) is generated.
At step 490, the video clip and associated metadata are stored for future use; for example, the video clip may be used by a playback program in the future when a triggering event is encountered that matches the metadata stored with the clip. By using the stored video clip, the playback program avoids the need for the computer program to regenerate the corresponding video segment.
The video clip may continue to be recorded, packaged into a streaming clip, and stored with associated metadata until, for example, the game is over.
Note that process 4000 may be run on a server, such as a cloud-based server, which may serve multiple users, possibly many users, at the same time. In such a case, it is entirely possible that a given video segment has already been recorded, packaged, and stored as a video clip 370 with corresponding metadata 380 during a previous user's interaction with process 4000. In that case, the corresponding segment need not be recorded again; instead, the video clip may be retrieved from the previously stored clips, e.g., based on a unique ID in the metadata.
FIG. 5 shows an example group 5000 of graph structures of video clips and associated metadata for use in a playback procedure according to an embodiment of the present invention. These clips may be, for example, video clips 370 and associated metadata 380 generated by the system 3000 of FIG. 3 and/or by the process 4000 of FIG. 4. During the playback procedure, video clips 370 are streamed from a server, such as server computer 101 or a server associated with a network data center, such as IDC 210. The video clips 370 are received and viewed at a user device, such as user device 103, which has appropriate functionality, such as a browser supporting HTML5.
Each interactive multimedia application, or portion of an application, may be associated with a playable group of video clips, also referred to as a metadata playlist, in a form similar to video clip group 5000; e.g., each level of a multi-level game may have its own metadata playlist. As described above, the metadata for each video clip 370 is learned as the application executes in response to real or "robotic" user input; at the same time, the metadata playlist is also learned, because the metadata playlist is a collection of video clips 370, connected according to metadata 380, for a particular application or portion of an application.
In the example of FIG. 5, the video clips are represented by circles, each having an ID. For example, video clip 510 is tagged with ID = A. An arrow indicates a "play event" or trigger condition that causes playback to proceed in the direction of the arrow. For example, if video clip 520 is playing and button X is pressed, the playing of video clip 520 stops and video clip 530 starts. If, on the other hand, the user selects "item 2" while video clip 520 is playing, the program changes to video clip 540 instead. If video clip 530 is playing and button Y is pressed, the program switches and plays video clip 550. If video clip 540 is playing and the user slides to "target Z", the program transitions and plays video clip 560. If either video clip 560 or 550 is playing and an audio command "submit" is received from a microphone ("MIC"), the program switches and begins playing video clip 570. Illustrating a slightly different kind of trigger, when video clip 510 finishes playing, the program automatically advances to the video clip labeled A', i.e., video clip 520.
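The FIG. 5 playlist can be encoded as a small transition table. Clip IDs A, A', and B and the trigger events follow the description above; the IDs B', C, C', and D for clips 540-570 and the dictionary encoding itself are illustrative assumptions, since the figure text does not name them all.

```python
PLAYLIST_5000 = {
    "A":  {"end_of_clip": "A'"},      # clip 510 auto-advances to 520
    "A'": {"button_X": "B",           # clip 520 -> 530 on button X
           "select_item_2": "B'"},    # clip 520 -> 540 on "item 2"
    "B":  {"button_Y": "C"},          # clip 530 -> 550 on button Y
    "B'": {"slide_to_Z": "C'"},       # clip 540 -> 560 on slide to "target Z"
    "C":  {"mic_submit": "D"},        # clip 550 -> 570 on audio "submit"
    "C'": {"mic_submit": "D"},        # clip 560 -> 570 on audio "submit"
    "D":  {},                         # clip 570: final clip, no outgoing triggers
}

def trace(start, events):
    """Follow the playlist from `start`, applying each event in order."""
    path = [start]
    for e in events:
        path.append(PLAYLIST_5000[path[-1]][e])
    return path

print(trace("A", ["end_of_clip", "select_item_2", "slide_to_Z", "mic_submit"]))
```

One run of `trace` corresponds to one user's path through the game: the same table serves every user, while each user's input sequence selects a different path.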
Optionally, a caching mechanism may be employed to facilitate smooth playback of the video clip.
In some embodiments of the invention, the video transmitted from the server to the user device is a mix of pre-computed video (video clips stored and replayed) and a video stream generated in real time (video not already stored as video clips with metadata).
In the above description, reference is made to streaming multimedia packaging formats, such as MPEG 2-TS. It should be understood that embodiments of the present invention are not limited to MPEG2-TS, but may employ any of a variety of stream packaging formats, including, but not limited to, 3GP, ASF, AVI, DVR-MS, Flash Video (FLV, F4V), IFF, Matroska (MKV), MJ2, QuickTime file format, MPEG program streams, MP4, Ogg, and RM (RealMedia packaging). Operation of the embodiments without a standardized packaging format is also contemplated.
Linking from linear video to interactive video scripts
A metadata playlist, such as metadata playlist 5000, may also be considered to represent an "interactive video script". That is, the metadata playlist 5000 determines the direction the video "script" takes as a function of user input. Thus, in the example of FIG. 5, if item 2 is selected while clip 520 is playing, the video script continues with the content of video clip ID B', and if button X is pressed, the video script continues with the content of video clip ID B. This is in contrast to conventional "linear" video (or "linear play" video). "Linear video" or "linear play video" is defined herein as traditional video that simply plays from beginning to end, possibly under the control of typical media player user functions, such as fast forward or rewind, that move around within the same linear video.
It may sometimes be desirable to utilize a combination of linear play video and interactive video scripts. For example, it may be desirable to link from linear video to interactive video script, or to switch between (and possibly switch back and forth between) linear play and interactive video script. Furthermore, in some cases, it may be desirable to link from the linear video to a particular location in the interactive video script, for example, in order to play a particular portion of the interactive video script.
As an example, consider an advertisement for a set of games served by a cloud game server. The playing of the advertisement may be initiated by a user clicking on a video advertisement link on a website. The video advertisement then begins playing as a conventional linear play video. However, at one or more points in the linear video, the user is invited to experience playing the actual game, and the user then performs a triggering action to initiate play of the game via the interactive video script. Depending on, for example, the triggering action, playback may be initiated from the beginning of the interactive video script or from some other location in the interactive video script. Referring again to FIG. 5, playback may begin at video clip 510 (ID A), but may alternatively begin at video clip 530 (ID B). The interactive script terminates at the end of the game (or upon further specific user input), and optionally the linear video playback may resume after termination of the video script.
In the above-described situation, one important challenge is to identify the particular game (or portion of a game, or location within a game) to be initiated based on the user's triggering action. One solution to this problem is to send a timestamp of the current playback along with the user interaction trigger. The timestamp identifies the portion of the linear video being viewed at the moment of user interaction. The desired game is then identified as the game featured at that point in the linear video playback.
FIG. 6 shows an example system for linking from linear video to interactive video scripts according to some embodiments of the present invention. In the system of FIG. 6, for example, a user operating user device 606 may initiate the playing of a linear video, for example by clicking on a link on screen 607, which displays a web page. The user device 606 may be, for example, a laptop computer, tablet, or smart phone. In some embodiments, the linear video may be provided via the video streaming infrastructure 602. The video streaming infrastructure 602 may correspond to, for example, the video streaming infrastructure 2000, including the CDN 200 and the network data centers 210-260, as shown in FIG. 2.
At any time during the linear video playback, the user may trigger the playing of the interactive video script. The playing of the interactive video script may be initiated from anywhere within the script. Any of a variety of user actions may be used as the trigger mechanism, including, for example, selecting a menu item, clicking a link, pressing a physical button, or speaking a voice command. As one example, a user may trigger the playing of an interactive video script by touching screen location 608 (labeled "T", for trigger, in the figure). Touching the screen location 608 during linear video playback triggers the sending of a script request. In some embodiments, the script request includes a timestamp.
The purpose of the timestamp is to identify the particular video script being requested, based on the amount of time that has elapsed since the linear video began playing. For example, in the advertising scenario described above, a request at T1 seconds may identify a script corresponding to the multimedia interactive game G1, since the linear advertisement features that game at time T1. In other embodiments, the particular video script is identified by another mechanism: in some embodiments by clicking a menu item, and in some embodiments by pressing a physical button or via a voice command. In some embodiments, a timestamp or other mechanism is used to identify not only a particular video script but also a particular starting location within the video script.
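The timestamp lookup can be sketched as an interval search over the segments of the linear advertisement. The segment boundaries and script names below are illustrative assumptions, not values from the patent.

```python
import bisect

# Hypothetical mapping from elapsed time in the linear advertisement to the
# interactive video script featured at that moment.
SEGMENT_STARTS = [0.0, 30.0, 75.0]               # seconds into the linear video
SEGMENT_SCRIPTS = ["game_G1", "game_G2", "game_G3"]

def script_for_timestamp(t):
    """Return the script whose advertisement segment contains elapsed time t."""
    i = bisect.bisect_right(SEGMENT_STARTS, t) - 1
    return SEGMENT_SCRIPTS[max(i, 0)]

print(script_for_timestamp(12.0))  # game_G1
print(script_for_timestamp(40.0))  # game_G2
```

A request at 12 seconds falls in the first segment and selects the first game's script; the same table could carry a per-segment starting clip ID to support the starting-location variant described above.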
Once a particular script is identified from the timestamp, game play corresponding to the selected script may begin. Playback proceeds according to the procedure described above, for example in connection with FIG. 5. That is, the video clips for the corresponding game are played in the order determined by the stored metadata and the current user interactions. Although game examples have been described, it should be understood that other forms of interactive content may be delivered in a similar manner.
In FIG. 6, the interactive video script is shown as being transmitted through the interactive video distribution infrastructure 604. As discussed above in connection with FIG. 2, it is possible and desirable to deliver video clips through an existing video streaming infrastructure 2000. In this case, both the video streaming infrastructure 602 and the interactive video distribution infrastructure 604 correspond to the existing video streaming infrastructure 2000. In other embodiments, the video clips corresponding to the selected interactive video scripts are streamed directly to the user device 606 over a network, such as the Internet, without using a video streaming infrastructure or an interactive video distribution infrastructure.
FIG. 7 is a flow diagram depicting an example process for linking from a linear video to an interactive video script, such as a video advertisement.
At step 710, the user clicks a link on a web page corresponding to the linear video advertisement. At step 720, the linear video advertisement is played via the video streaming infrastructure. At decision block 730, if the linear video advertisement has ended, control transfers to block 790 and processing ends. If not, the system monitors for a trigger at decision block 740. If no trigger is detected, linear play continues.
If a trigger is detected, the video playback time of the triggering action is calculated at step 750 from the timestamp transmitted with the trigger. At step 760, a particular interactive video script is selected based on the calculated play time. As described above, other mechanisms may alternatively be employed to select a particular interactive video script. Further, as described above, a timestamp or other mechanism may optionally be used to identify a particular start location within the selected interactive video script.
The selected interactive video script is played at step 770, generally, for example, in the manner of the metadata playlist embodiment discussed above in connection with FIG. 5. At decision block 780, if the script has not ended, playback continues. If the script has ended, control transfers to block 790 and the process ends.
Although, in the embodiment of FIG. 7, the process terminates when the interactive video script ends, in other embodiments the linear video may continue to be viewed after the interactive video script ends. Other embodiments allow the user to switch viewing between linear video and interactive video any number of times, or to switch between any of a plurality of linear videos and any of a second plurality of interactive video scripts.
Although a few exemplary embodiments have been described above, those skilled in the art will appreciate that many modifications and variations are possible without departing from the spirit and scope of the invention. Accordingly, all such modifications and variations are intended to be included herein within the scope of this disclosure.

Claims (22)

1. A method for playing an interactive video script, comprising:
in response to a first user request, beginning playing of a linear play video;
receiving a second user request comprising a timestamp associated with a temporal location in the linear play video;
selecting, based on the timestamp, a particular location in a particular interactive video script, the particular interactive video script comprising a set of pre-recorded streaming video clips and correspondingly stored metadata; and
playing the particular interactive video script from the particular location, including responding to any further user input based on the metadata.
2. The method of claim 1, further comprising returning to the playing of the linear play video.
3. The method of claim 2, wherein returning to play of the linear play video is triggered by an end of the interactive video script.
4. The method of claim 2, wherein the return to the playing of the linear play video is triggered by a third user request.
5. The method of claim 1, wherein the streaming video clip is formatted according to a multimedia packaging format.
6. The method of claim 1, wherein the linear play video is transmitted via a video streaming infrastructure.
7. The method of claim 1, wherein the interactive video script is transmitted via a video streaming infrastructure.
8. The method of claim 1, wherein the particular interactive video script is selected from a plurality of interactive video scripts.
9. The method of claim 1, wherein the particular location is a starting point of the particular interactive video script.
10. A method for playing an interactive video script, comprising:
in response to a first user request, beginning playing of a linear play video;
receiving a second user request comprising a user interaction with the linear play video;
selecting a particular location in a particular interactive video script using a characteristic of the user interaction;
playing the particular interactive video script from the particular location, including responding to any further user input based on pre-stored metadata; and
returning to the linear play video at the end of the interactive video script.
11. The method of claim 10, wherein the second user request comprises clicking on a link for selecting the particular location in the interactive video script.
12. The method of claim 10, wherein the second user request comprises a voice command for selecting the particular location in the interactive video script.
13. The method of claim 10, wherein the second user request comprises a physical button press for selecting the particular location in the interactive video script.
14. The method of claim 10, wherein the second user request includes a timestamp for selecting the particular location in the interactive video script based on an amount of time that has elapsed when the linear play video began playing.
15. The method of claim 10, wherein the first user request comprises clicking on a link representing a video advertisement.
16. A system for alternately playing one or more linear videos and one or more interactive video scripts, comprising a user device, a video streaming infrastructure, and an interactive video script player,
the user device being configured to receive and display the playing of a linear video, receive a user request to play an interactive video script, and transmit the user request along with a corresponding timestamp to the interactive video script player;
the interactive video script player being configured to select a particular interactive video script based on the timestamp and play it from a particular location in the particular interactive video script; and
the video streaming infrastructure is configured to transmit at least one of the linear video and the interactive video script to the user device.
17. The system of claim 16, wherein the user device is configured to resume playing the linear video when the interactive video script ends.
18. The system of claim 17, wherein the user device is configured to accept a second user request for playback of a second interactive video script and to transmit the second user request along with a second timestamp to the interactive video script player.
19. The system of claim 18, wherein the interactive video script player is configured to select and play a second particular interactive video script based on the second timestamp.
20. The system of claim 16, wherein the interactive video script player comprises a set of pre-stored streaming video clips stored in a database along with corresponding metadata.
21. The system of claim 16, wherein the user device is a smart phone comprising a touch screen.
22. A computer program product on a non-transitory computer readable medium, comprising instructions executable by a computer processor to:
responding to a first user request, and starting to play a linear playing video;
receiving a second user request comprising a timestamp;
selecting, based on the timestamp, a particular location in a particular interactive video script; and
playing the particular interactive video script from the particular location, including responding to any further user requests based on pre-stored metadata.
HK17111412.8A 2015-11-04 2017-11-07 Interactive applications implemented in video streams HK1238040A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US14/932,252 2015-11-04
US15/095,987 2016-04-11

Publications (1)

Publication Number Publication Date
HK1238040A1 true HK1238040A1 (en) 2018-04-20

Similar Documents

Publication Publication Date Title
CN106657257B (en) Method and apparatus for generating audio and video for interactive multimedia applications
US9635073B1 (en) Interactive applications implemented in video streams
US11736749B2 (en) Interactive service processing method and system, device, and storage medium
CN108066986B (en) A kind of streaming media determination method and device and storage medium
US9271015B2 (en) Systems and methods for loading more than one video content at a time
US20130263182A1 (en) Customizing additional content provided with video advertisements
CN102298947A (en) Method for carrying out playing switching among multimedia players and equipment
US20160366228A1 (en) Sending application input commands over a network
US12149756B2 (en) Method, device, and apparatus for pausing media content
US20150106841A1 (en) Dynamic Advertisement During Live Streaming
US20130226962A1 (en) Media Player Feedback
US9367125B2 (en) Terminal apparatus for shooting and distributing video data and video-data distribution method
US9578395B1 (en) Embedded manifests for content streaming
US20160119661A1 (en) On-Demand Metadata Insertion into Single-Stream Content
US9319455B2 (en) Method and system for seamless navigation of content across different devices
CN105916030A (en) A method, device and system for recording video-on-demand breakpoint information
CN105793841A (en) Client behavior control in adaptive streaming file
CN113424553A (en) Techniques for facilitating playback of interactive media items in response to user selections
CN108574877A (en) Live broadcast method, anchor terminal, audience terminal, equipment, system and storage medium
WO2015143854A1 (en) Data acquisition and interaction method, set top box, server and multimedia system
US20260012667A1 (en) Smart automatic skip mode
CN113556568A (en) Cloud application program operation method, system, device and storage medium
JP6063952B2 (en) Method for displaying multimedia assets, associated system, media client, and associated media server
HK1238040A1 (en) Interactive applications implemented in video streams
US11870830B1 (en) Embedded streaming content management