CN111726701A - Information implantation method, video playing method, device and computer equipment - Google Patents
Information implantation method, video playing method, device and computer equipment
- Publication number
- CN111726701A (application number CN202010615896.XA)
- Authority
- CN
- China
- Prior art keywords
- video
- media information
- information
- implantable
- target
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/83—Generation or processing of protective or descriptive data associated with content; Content structuring
- H04N21/845—Structuring of content, e.g. decomposing content into time segments
- H04N21/8456—Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/472—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
- H04N21/47202—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for requesting content on demand, e.g. video on demand
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/488—Data services, e.g. news ticker
- H04N21/4882—Data services, e.g. news ticker for displaying messages, e.g. warnings, reminders
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/85—Assembly of content; Generation of multimedia applications
- H04N21/854—Content authoring
- H04N21/8543—Content authoring using a description language, e.g. Multimedia and Hypermedia information coding Expert Group [MHEG], eXtensible Markup Language [XML]
Abstract
The application relates to an information implantation method, a video playing method, an apparatus, and computer equipment. The information implantation method comprises the following steps: playing a target video; outputting prompt information when a video clip of the implantable media information is played; acquiring the media information to be implanted when an acquisition instruction in response to the prompt information is obtained; and implanting the media information into the video clip of the implantable media information. With this method, the video being watched can be processed, improving both the convenience and the efficiency of video processing.
Description
Technical Field
The present application relates to the field of video processing technologies, and in particular, to an information embedding method, a video playing method, an information embedding device, and a video playing device.
Background
With the continuous development of video processing and cloud storage technologies, a great number of users can conveniently watch various videos through intelligent terminals. A video producer composes video frames or video segments shot of different objects into a complete video for users to watch. An ordinary user, however, can only passively watch the content presented in the video and cannot process the video being watched, which reduces the convenience and efficiency of video processing.
Disclosure of Invention
In view of the foregoing, it is desirable to provide an information embedding method, a video playing method, an apparatus, and a computer device capable of processing a viewed video to improve convenience of video processing and video processing efficiency.
An information implanting method, the method comprising:
playing the target video;
when playing the video clip of the implantable media information, outputting prompt information;
when an acquisition instruction in response to the prompt information is obtained, acquiring the media information to be implanted;
the media information is embedded into a video clip of the implantable media information.
An information implanting apparatus, the apparatus comprising:
the playing module is used for playing the target video;
the prompting module is used for outputting prompting information when playing a video clip of the implantable media information;
the second acquisition module is used for acquiring the media information to be implanted when acquiring an acquisition instruction responding to the prompt information;
and the implantation module is used for implanting the media information into the video clip of the implantable media information.
A computer device comprising a memory and a processor, the memory storing a computer program, the processor implementing the following steps when executing the computer program:
playing the target video;
when playing the video clip of the implantable media information, outputting prompt information;
when an acquisition instruction in response to the prompt information is obtained, acquiring the media information to be implanted;
the media information is embedded into a video clip of the implantable media information.
A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the steps of:
playing the target video;
when playing the video clip of the implantable media information, outputting prompt information;
when an acquisition instruction in response to the prompt information is obtained, acquiring the media information to be implanted;
the media information is embedded into a video clip of the implantable media information.
According to the information implantation method, the information implantation apparatus, the computer equipment, and the storage medium, while the target video is played, it is determined whether a video clip of the implantable media information is being played; when such a clip is played, prompt information is output to remind the user that media information can be implanted into the clip. The user can then acquire the media information to be implanted according to the prompt information and implant it into the video clip of the implantable media information. In this way the user can process the target video without modifying the target video itself, which improves the convenience and efficiency of video processing and encourages secondary creation based on the target video.
A video playback method, the method comprising:
determining that the target video corresponds to the video clip implanted with the media information according to the implantation record;
sequentially downloading the video clips of the target video and the video clips implanted with the media information according to a playing sequence;
and playing the video clips of the target video and the video clips implanted with the media information according to the playing sequence so as to display the media information in the playing process.
In one embodiment, the sequentially downloading the video segments of the target video in the playing order includes:
sequentially acquiring the segment description information of the video segments in the target video according to the playing sequence;
and after the segment description information is acquired each time, downloading the corresponding video segment according to the acquired segment description information.
A video playback device, the device comprising:
the determining module is used for determining that the target video corresponds to the video segment implanted with the media information according to the implantation record;
the downloading module is used for sequentially downloading the video clip of the target video and the video clip implanted with the media information according to a playing sequence;
and the playing module is used for playing the video clips of the target video and the video clips implanted with the media information according to the playing sequence so as to display the media information in the playing process.
A computer device comprising a memory and a processor, the memory storing a computer program, the processor implementing the following steps when executing the computer program:
determining that the target video corresponds to the video clip implanted with the media information according to the implantation record;
sequentially downloading the video clips of the target video and the video clips implanted with the media information according to a playing sequence;
and playing the video clips of the target video and the video clips implanted with the media information according to the playing sequence so as to display the media information in the playing process.
A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the steps of:
determining that the target video corresponds to the video clip implanted with the media information according to the implantation record;
sequentially downloading the video clips of the target video and the video clips implanted with the media information according to a playing sequence;
and playing the video clips of the target video and the video clips implanted with the media information according to the playing sequence so as to display the media information in the playing process.
According to the video playing method, the video playing apparatus, the computer equipment, and the storage medium, the implantation record of the target video is queried during playback, so that the video segment with the implanted media information corresponding to the target video can be determined. That segment is then downloaded, and the implanted media information is displayed while the segment plays, so that a viewer can see the media information embedded by the implanting party. This enriches the content of the video and helps improve the interaction between the user and the target video.
Drawings
- FIG. 1 is a diagram of an application environment of an information implantation method and a video playing method in one embodiment;
FIG. 2 is a flow diagram illustrating a method for information embedding according to one embodiment;
FIG. 3 is a flow diagram illustrating downloading and playing of a video clip according to one embodiment;
FIG. 4 is a diagram of a play page in one embodiment;
- FIG. 5 is a schematic diagram of an interface for prompting a user who uses the information implantation function for the first time, in one embodiment;
- FIG. 6a is a schematic diagram of an interface for prompting a user who has used the information implantation function before, in one embodiment;
FIG. 6b is a schematic diagram of an interface for embedding media information in one embodiment;
FIG. 7 is a schematic flowchart illustrating the steps of reviewing media information embedded video clips, sharing media information embedded target videos with target contacts, and saving embedded records according to one embodiment;
FIG. 8 is a flow diagram illustrating the steps of embedding an image or video in a video segment in one embodiment;
FIG. 9 is a flowchart illustrating the step of determining a media information placement region in one embodiment;
FIG. 10 is a flowchart illustrating a video playback method according to an embodiment;
FIG. 11 is a logic diagram illustrating distributed storage of target videos in one embodiment;
FIG. 12 is a diagram illustrating the structure of an image embedded in an original video according to an embodiment;
FIG. 13 is a flow diagram illustrating playing of a video with an embedded image according to one embodiment;
FIG. 14 is a schematic interface diagram for embedding an image in an implantable region of a video frame in one embodiment;
FIG. 15 is a block diagram showing the construction of an information implanting apparatus according to an embodiment;
FIG. 16 is a block diagram showing the construction of an information implanting apparatus according to another embodiment;
- FIG. 17 is a block diagram showing the construction of a video playback apparatus according to an embodiment;
- FIG. 18 is a block diagram showing the construction of a video playback apparatus according to another embodiment;
FIG. 19 is a diagram showing an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
A distributed cloud storage system (hereinafter referred to as a storage system) is a storage system that uses functions such as cluster applications, grid technology, and a distributed storage file system to integrate, through application software or application interfaces, a large number of storage devices of different types (also referred to as storage nodes) in a network so that they work cooperatively, and that externally provides data storage and service access functions.
At present, the storage method of a storage system is as follows. Logical volumes are created, and when a logical volume is created it is allocated physical storage space, which may be composed of the disks of one storage device or of several storage devices. A client stores data on a certain logical volume, that is, the data is stored on a file system. The file system divides the data into a plurality of parts, each part being an object; an object contains not only the data but also additional information such as a data identifier (ID). The file system writes each object into the physical storage space of the logical volume and records the storage location of each object, so that when the client requests access to the data, the file system can let the client access it according to the recorded storage locations.
The process by which the storage system allocates physical storage space to a logical volume is specifically as follows: the physical storage space is divided in advance into stripes according to a set of capacity estimates for the objects to be stored in the logical volume (the estimates often leave a large margin over the capacity of the actual objects) and according to the Redundant Array of Independent Disks (RAID) configuration; one logical volume can then be understood as one stripe, and physical storage space is thereby allocated to the logical volume.
The information implantation method and the video playing method provided by the application can be applied to the application environment shown in fig. 1. The terminal 102, the first server 104, the second server 106, and the terminal 108 communicate with one another through a network. The terminal 102 may act as the information implanting party, implanting media information into a target video and then sending a video sharing message to the terminal 108. Specifically, the terminal 102 acquires video description information from the first server 104 and acquires the target video corresponding to the video description information from the second server 106; plays the target video; outputs prompt information when it determines, from the video description information, that a video clip of the implantable media information is being played; acquires the media information to be implanted when an acquisition instruction in response to the prompt information is obtained; implants the media information into the video clip of the implantable media information; and then sends the video sharing message to the terminal 108, so that the terminal 108 plays the target video with the implanted media information.
The terminal 108 may be used as a video player, and when receiving the video sharing message sent by the terminal 102, downloads the target video embedded with the media information and plays the target video. Specifically, the terminal 108 obtains the segment description information of the video segment in the target video according to the playing sequence; downloading a corresponding video clip according to the acquired clip description information and playing the video clip when the clip description information is acquired each time; in the process of playing the downloaded video clip, inquiring the implantation record of the target video; determining the video clip implanted with the media information according to the implantation record and preloading the video clip; and playing the preloaded video clips implanted with the media information according to the playing sequence.
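The playback-side flow just described (query the implantation record, then substitute and preload the segments that carry embedded media information) can be pictured as a playlist rewrite. The sketch below is purely illustrative; the function name `build_playlist` and the record layout are assumptions, not part of the patent:

```python
def build_playlist(original_segments, implantation_record):
    """Build the play order for the sharing recipient: wherever the
    implantation record maps an original segment to a segment with
    embedded media information, the embedded version is played instead."""
    return [implantation_record.get(seg_id, seg_id)
            for seg_id in original_segments]
```

For instance, `build_playlist(["seg1", "seg2", "seg3"], {"seg2": "seg2_embedded"})` yields `["seg1", "seg2_embedded", "seg3"]`: only segment 2 was implanted, so only it is replaced, which matches the patent's premise that the original target video is never modified.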
The terminals 102 and 108 are user terminals, which may be, but are not limited to, personal computers, notebook computers, smart phones, tablet computers, smart televisions, and portable wearable devices. Clients, which may be video clients, instant messaging clients, browser clients, education clients, and the like, are installed on the terminals 102 and 108. The number of terminals 102 and of terminals 108 may each be one or more and is not limited. For example, one or more terminals 102 implant information and then send a video sharing message, that is, a message about the video into which the information was implanted, to one or more terminals 108.
The server 104 and the server 106 may be integrated together, may be independent physical servers, or may be a server cluster or a distributed system or a distributed cloud storage system formed by a plurality of physical servers. The number of the servers 104 and 106 is not limited.
As shown in fig. 2, an information implantation method is provided in an embodiment. The method may be executed by a terminal, or jointly by a terminal and a server; it is described below as applied to the terminal 102 in fig. 1, and includes the following steps:
s202, playing the target video.
The target video may be a short video or a long video, which is not limited in this application. The target video may be saved in segments, which include at least two video segments, each of which is saved as a separate video file. For example, each video segment of the target video is distributed and stored in a CDN (Content Delivery Network) server.
In the target video there are a plurality of video frames into which media information can be embedded. These frames may be still frames, or frames in which the objects are still while a person is moving. A still frame is not necessarily a frame in which everything is motionless: the objects in it (items, persons, and the like) may be still or moving, as long as they are still relative to the camera that captures the image.
In one embodiment, before S202, the terminal may acquire the target video and the corresponding video description information.
The video description information refers to information describing the target video, such as its encoding mode, frame rate, video length, number of video frames, and the video clips of the implantable media information. Since the target video includes at least two video segments, each of which may be distributed and stored in the CDN server, the video description information of the target video may correspondingly include segment description information for each of the at least two video segments. The segment description information describes a video segment and includes, but is not limited to: the encoding mode, length, and frame count of the video segment, and the network address at which the video segment is stored. For a video clip of the implantable media information, its segment description information may also contain the implantable video frame numbers and the implantable coordinate locations.
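The segment description fields listed above can be pictured as a small record type. The following Python sketch is a minimal illustration; the patent does not define a concrete schema, so every field name here is an assumption:

```python
from dataclasses import dataclass, field
from typing import Optional, Tuple

@dataclass
class SegmentDescription:
    encoding: str                  # e.g. "h264"
    duration_s: float              # length of the video segment
    frame_count: int               # number of frames in the segment
    url: str                       # network address storing the segment
    # Frame numbers into which media information can be implanted.
    implantable_frames: list = field(default_factory=list)
    # Implantable coordinate location, e.g. (x, y, width, height).
    implantable_region: Optional[Tuple[int, int, int, int]] = None

    @property
    def is_implantable(self) -> bool:
        # A segment is a "video clip of the implantable media
        # information" when it lists at least one implantable frame.
        return bool(self.implantable_frames)
```

A segment with no listed implantable frames is an ordinary clip and carries only the playback metadata (encoding, length, frame count, address).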
For example, in video frames 1 to n of the target video, the target object is a running car A. When shooting car A, the camera is carried by another car B running at the same speed and in the same direction, and video frames 1 to n are thus obtained. Ignoring the background, video frames 1 to n are then still frames, because car A is still relative to the camera. Here n may be a positive integer greater than 1.
In one embodiment, before starting playing the target video, the terminal acquires a network address for storing the target video, downloads the target video according to the network address, and then plays the target video. The target video can be played through a client (such as a video client) or a web page on the terminal.
In one embodiment, the terminal sequentially acquires the corresponding segment description information according to the video playing order; each time segment description information is obtained, the corresponding video segment is downloaded from the CDN server (i.e., the second server) according to it. Since the segment description information includes the network address of the video segment, the corresponding video segment can be downloaded from the CDN server accordingly. For example, the segment description information 1 of video segment 1 is obtained according to the playing order, and video segment 1 is downloaded from the CDN server; before video segment 1 finishes playing, the segment description information 2 of video segment 2 is obtained, and video segment 2 is downloaded from the CDN server, and so on.
Specifically, before starting playing the target video, the terminal first acquires the segment description information of the first video segment from the first server, then extracts the network address of the first video segment from the segment description information, then downloads the corresponding first video segment from the second server according to the network address, and then executes S202. In the process of executing S202, the terminal may sequentially acquire subsequent segment description information from the first server according to the playing sequence, and extract a network address of a corresponding video segment from the acquired segment description information each time the segment description information is acquired, and then download the corresponding video segment from the second server (i.e., CDN server) according to the network address.
The above playing sequence may refer to playing in sequence according to time in the target video, for example, the video segments 1 to n are video segments of 1 st to nth seconds, the video segment 1 is played first, then the video segment 2 is played, and so on until the video segment n is played.
For example, as shown in fig. 3, when the terminal is ready to play through the client, the clip description information of the video clip 1 is pulled from the first server, and then the video clip 1 is downloaded from the second server and played. Then, in the process of playing the video segment 1, the terminal reads the segment description information of the video segment 2 from the first server in advance, then preloads the video segment 2 from the second server, and plays the video segment 2 after playing the video segment 1, and so on.
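The pull-ahead pipeline in the paragraph above (fetch the next segment's description and preload the segment while the current one plays) can be sketched roughly as below. In a real player the preload would run concurrently with playback; this sequential sketch, whose callback names are hypothetical, only demonstrates the ordering:

```python
def play_with_prefetch(fetch_description, download, play, n_segments):
    """Play segments 0..n-1 in order. While segment i is (conceptually)
    playing, the description of segment i+1 is read and the segment is
    preloaded. All three callbacks are caller-supplied."""
    if n_segments <= 0:
        return
    # Pull the first segment's description and download it up front.
    next_segment = download(fetch_description(0))
    for i in range(n_segments):
        current = next_segment
        if i + 1 < n_segments:
            # Read the next description and preload the next segment
            # before the current one finishes playing.
            next_segment = download(fetch_description(i + 1))
        play(current)
```

Note that `fetch_description` stands in for the first server (segment metadata) and `download` for the second server (the CDN), mirroring the two-server split in fig. 3.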
In another embodiment, the terminal may also obtain the segment description information of each video segment at a time from the first server, and parse the segment description information to obtain the coding mode, the length of the video segment, the video frame number, the network address, and the like of the video segment. And then, downloading the corresponding video clips from the second server corresponding to the network address in sequence according to the playing sequence.
In one embodiment, the terminal may play the target video through a client or a web page on the terminal. The client can be a video client or a social application with a video playing function. The web page may be constructed with HTML (Hyper Text Markup Language) version 5, that is, HTML5, or with a later version of HTML. The web page may run in a stand-alone browser, such as a browser on a computer, a mobile phone, or a tablet computer.
In one embodiment, the terminal may play the video segments in sequence in the video playing order and at the normal playing speed, or play the video segments in sequence at m times speed.
The normal play speed is the original speed; for example, for a video A with a frame rate of 30 fps, normal play means playing at 30 frames per second. Here m may be any number greater than 0, such as 0.5x or 2x. Note that when m is smaller than 1, playback is slowed; when m equals 1, playback is at the original speed; and when m is larger than 1, playback is accelerated, so the user can reach the video clips of the implantable media information more quickly, saving viewing time.
In another embodiment, the terminal may play the video segments in a skip mode according to the video playing order. Specifically, when playing a video clip, the terminal determines from the clip's segment description information whether the clip is one into which media information can be embedded, and if not, jumps to the next video clip. This skip mode lets the user reach the video clips of the implantable media information quickly, saving viewing time.
In one embodiment, the terminal may also determine the video segments of the implantable media information from the segment description information and then jump directly to such a segment for playback. This jump mode likewise brings the user to the video segments of the implantable media information quickly, saving viewing time.
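The two jump behaviours just described (skip past non-implantable clips, or jump straight to the next implantable one) reduce to the same lookup over the segment descriptions. A minimal sketch, assuming each description is a dict with a hypothetical `implantable` flag:

```python
def next_playable(segment_descriptions, current_index, skip_mode=True):
    """Return the index of the next segment to play, or None when
    playback is finished. In skip mode, segments whose description
    does not mark them as implantable are passed over."""
    for i in range(current_index + 1, len(segment_descriptions)):
        if not skip_mode or segment_descriptions[i].get("implantable"):
            return i
    return None
```

With `skip_mode` off this is ordinary sequential playback; with it on, the player advances directly to the next clip of the implantable media information.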
In one embodiment, when the target video is played on the client, a pause button, a preview button, a first play button for jumping to a video clip of the previous implantable media information, a second play button for jumping to a video clip of the next implantable media information, and an information embedding button are arranged on the play page of the client. In the process of playing the target video, the terminal can skip to play a video clip of the previous implantable media information by triggering the first playing button; the next video segment of the implantable media information can be jumped to play by triggering the second play button, so that the video segment of the previous or next implantable media information can be directly viewed.
As shown in fig. 4, a plurality of function buttons are arranged on the playing page of the client, from left to right: a first play button for jumping to the video clip of the previous implantable media information, a pause button, a preview button, a second play button for jumping to the video clip of the next implantable media information, and an information implant button. During playback, when the information implant button is clicked for the first time, a media information editing area pops up on the right side of the playing page, where the media information to be implanted is selected and edited. After the media information and the implantable coordinate location are selected, clicking the information implant button (1) uploads the video clip to the first server for review and (2) calculates the implantation risk of the media information with an AI algorithm; (3) if the implantation is risky, the user is prompted that the implanted media information carries a risk and the implantation fails. The information implant button may take various forms, such as text or an image; the "embedding" button in fig. 4 is one such form.
And S204, outputting prompt information when playing the video clip of the implantable media information.
In one embodiment, the terminal outputs the prompt information when it determines, according to the video description information, that a video clip of the implantable media information is being played.
Here, determining, according to the video description information, that a video clip of the implantable media information is being played may mean: determining from the video description information that the currently played video frame is a video frame of the implantable media information, the video clip to which that video frame belongs being the video clip of the implantable media information.
In one embodiment, during playing, the terminal detects whether a video clip of the implantable media information is being played; specifically, it detects this according to the clip description information of the video clip. If so, the prompt information is output (that is, displayed on the play page of the terminal); if not, playing and detecting continue.
In one embodiment, the segment description information corresponding to the video segment of the implantable media information contains the implantable video frame number. Correspondingly, when the terminal plays the video frame corresponding to the implantable video frame number in the process of playing the target video, the terminal determines to play the video clip of the implantable media information.
In one embodiment, all video frames of the video clip of the implantable media information are video frames of the implantable media information; alternatively, the video clip of the implantable media information includes video sub-segments of the implantable media information and video sub-segments of the non-implantable media information.
In an embodiment, when the video segment of the implantable media information includes a video sub-segment of the implantable media information and a video sub-segment of the non-implantable media information, the step of determining the video segment played to the implantable media information according to the video description information may specifically include: and in the process of playing the target video, the terminal judges the video sub-segments played to the implantable media information according to the implantable video frame number.
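The per-frame check described in the embodiments above can be sketched as follows. This is an illustrative sketch only; the function names and the dictionary layout of the clip description information are assumptions, not structures defined by the patent.

```python
# Illustrative sketch: the clip description information is modelled as a
# dict whose "implantable_frames" entry lists the implantable video frame
# numbers. These names are assumptions for illustration.

def is_implantable_frame(current_frame_no, implantable_frame_nos):
    """True when the currently played frame number appears among the
    implantable video frame numbers of the clip description information."""
    return current_frame_no in set(implantable_frame_nos)

def playing_implantable_subsegment(current_frame_no, clip_description):
    """A clip may mix implantable and non-implantable sub-segments; the
    terminal decides per frame using the implantable frame numbers stored
    in the clip description information."""
    return is_implantable_frame(
        current_frame_no, clip_description.get("implantable_frames", ()))
```

During playback the terminal would call `playing_implantable_subsegment` for each rendered frame and output the prompt information on the first frame for which it returns true.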
For S204, depending on whether the user has historically used the information implant function, two scenarios arise:
Scenario 1, the user has historically used the information implant function.
In one embodiment, when the information implant function of the terminal has been used, the terminal outputs the prompt information through the information implant button.
Specifically, when the user has historically implanted information using an information implant button on the client or a web page, the terminal outputs the prompt information through the information implant button. The prompt may be presented by making the information implant button glow and flash, by repeatedly enlarging and shrinking the button, or by displaying prompt text or a prompt animation near the button.
As shown in fig. 5, on the premise that the user has historically used the information implant function, when a video frame of the implantable media information is played, the "implant" button in the figure, which represents the information implant button, flashes; or prompt text is displayed near the "implant" button; or a prompt animation is displayed to prompt the user that media information can be implanted into the video clip; or the resident implant function start button on the right side of the video play interface glows and flashes (not shown). Here, the "implant" button and the resident implant function start button are both forms of the information implant button.
Scenario 2, the user has not historically used the information implant function.
In one embodiment, when the information implantation function of the terminal is not used, the terminal acquires an implantable coordinate position of a currently played video frame and outputs prompt information corresponding to the implantable coordinate position.
Specifically, when the user has historically not performed information embedding by using an information embedding button on the client or the webpage, the terminal acquires an implantable coordinate position of the currently played video frame and outputs prompt information corresponding to the implantable coordinate position.
The prompt information may be presented as a flashing icon; specifically, an icon generated at the implantable coordinate position that glows and flashes while displayed. The prompt information may also be presented as prompt text or a prompt animation displayed at the implantable coordinate position, and the media information implantation region may be marked while the text or animation is displayed, to prompt the user that media information can be implanted there. The implantable coordinate position refers to the coordinate position of the media information implantation region in a video frame, and the media information implantation region is the region of the video frame into which media information can be implanted. The information implant button is used to trigger the information implant function; by triggering it, suitable media information can be selected and implanted into the corresponding video clip.
As shown in fig. 6a, when the user has not historically used the information implant function, i.e. the user is using it for the first time, and it is determined from the clip description information that a video clip of the implantable media information is being played, the media information implantation region in the currently played video frame is prompted in the form of a flashing small icon (e.g., an icon that flashes one or more times), such as the arrow and the dotted ellipse shown in fig. 6a. While the small icon flashes, the prompt text "material that you like can be implanted here" is displayed near it (not shown in fig. 6a). The material is the media information that the user wants to implant.
In addition, a corresponding prompt is given whether or not the information implant function has been used before. Therefore, S204 may specifically include: the terminal outputs the prompt information through the information implant button; or, the terminal acquires the implantable coordinate position corresponding to the currently played video frame from the clip description information of the video clip and, when the video clip of the implantable media information is played, outputs the prompt information according to that implantable coordinate position. The presentation forms of the prompt information include: displaying prompt text, displaying a prompt animation, flashing an icon, or flashing the information implant button.
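The two scenarios can be sketched as a small selection routine. The return values (`"flash_button"`, `"flash_icon"`) and the function name are illustrative assumptions; the patent only specifies the two behaviours, not an API.

```python
# Illustrative sketch of the scenario split in S204: the prompt form
# depends on whether the user has historically used the implant function.
# All names here are assumptions for illustration.

def choose_prompt(has_used_implant_function, implantable_coordinate=None):
    """Scenario 1: the user has used the information implant function
    before, so the prompt is attached to the information implant button.
    Scenario 2: the user has not, so the prompt is rendered at the
    implantable coordinate position of the currently played frame."""
    if has_used_implant_function:
        return {"form": "flash_button", "target": "implant_button"}
    return {"form": "flash_icon", "target": implantable_coordinate}
```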
For example, as shown in FIG. 5, when a video frame of the implantable media information is played, the "implant" button in the figure, which represents the information implant button, flashes; or prompt text is displayed near the "implant" button; or a prompt animation is displayed to prompt the user that media information can be implanted into the video clip; or the resident implant function start button on the right side of the video play interface glows and flashes (not shown). Here, the "implant" button and the resident implant function start button are both forms of the information implant button.
For another example, as shown in fig. 6a, when it is determined from the clip description information that a video clip of the implantable media information is being played, the media information implantation region in the currently played video frame is prompted by a flashing small icon (e.g., an icon that flashes one or more times), such as the arrow and the dotted oval shown in fig. 6a. While the small icon flashes, the prompt text "material that you like can be implanted here" is displayed near it (not shown in fig. 6a). The material is the media information that the user wants to implant.
S206, when an acquisition instruction responding to the prompt information is obtained, acquire the media information to be implanted.
The media information may be text, images (including still images and moving images), small videos, and audios, among others.
In one embodiment, after outputting the prompt information, the terminal detects in real time for an instruction responding to the prompt information, and when an acquisition instruction responding to the prompt information is obtained, acquires from the media information base the media information to be implanted that the acquisition instruction specifies.
In one embodiment, the media information may also be edited in an editing area when the media information is retrieved.
For example, if the media information is text, the text can be edited, such as modifying the font, adjusting the font size, or adjusting the font color; if the media information is an image, the image may be processed, such as cropping it, adjusting the contrast, or adjusting the brightness, as shown in fig. 6b; if the media information is a video, the video can be trimmed in length, transcoded, or compressed to reduce its data size; if the media information is audio, the audio can be trimmed in length.
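The per-type editing options listed above can be summarised as a dispatch table. The operation names are placeholders standing in for the concrete edits the embodiment describes; they are not operations defined by the patent.

```python
# Illustrative dispatch of editing operations by media type. The operation
# names mirror the edits listed in the embodiment (font changes for text,
# cropping for images, trimming/transcoding for video, trimming for audio)
# and are assumptions for illustration only.

def editing_operations(media_type):
    """Return the editing operations available for a given media type."""
    operations = {
        "text":  ["modify_font", "adjust_font_size", "adjust_font_color"],
        "image": ["crop", "adjust_contrast", "adjust_brightness"],
        "video": ["trim_length", "transcode", "compress"],
        "audio": ["trim_length"],
    }
    return operations.get(media_type, [])
```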
And S208, implanting the media information into the video clip of the implantable media information.
In one embodiment, the terminal may select a media information embedding area in the currently played video frame of the implantable media information according to the location selection instruction, then embed the media information into the selected media information embedding area, and then synchronously embed the media information into other video frames of the video segment.
In another embodiment, the terminal implants the media information into the video clip of the implantable media information according to the implantable coordinate position contained in the clip description information.
In one embodiment, when all video frames of the video clip of the implantable media information are video frames of the implantable media information, the terminal acquires the media information implantation region of each video frame in the video clip and then implants the media information into the media information implantation region.
For example, when the media information is text or image, the text or image is embedded in the media information embedding area. When the media information is video or audio, implanting the video or audio into the media information implantation area; when the target video with the implanted video or audio is played, if the video segment with the implanted video or audio is played, the implanted video is played in the media information implantation area, or the playing page of the implanted audio and the corresponding audio information are displayed.
In one embodiment, when the video clip of the implantable media information comprises video sub-segments of the implantable media information and video sub-segments of the non-implantable media information, the terminal implants the media information into the video sub-segments of the implantable media information; no media information is implanted into the video sub-segments of the non-implantable media information.
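The implantation step S208 can be sketched as below, with a frame modelled as a dictionary. The frame model, the `implantable` flag, and the function name are illustrative assumptions; real implantation would composite pixels into the region, which is omitted here.

```python
# Illustrative sketch of S208: write the media information into the
# implantation region of every implantable frame of the clip, leaving the
# frames of non-implantable sub-segments untouched. Frames are modelled as
# dicts purely for illustration.

def implant_media(video_clip, media, coordinate):
    """Attach the media at the implantable coordinate position to each
    implantable frame; non-implantable frames are left unchanged."""
    for frame in video_clip:
        if frame.get("implantable"):
            frame["implanted"] = {"media": media, "position": coordinate}
    return video_clip
```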
In the above embodiment, when the target video is played, it may be determined whether a video clip of the implantable media information is played, and when the video clip of the implantable media information is played, the prompt information is output, so that the user may be prompted that the video clip is the implantable media information. In addition, the user can acquire the media information to be embedded according to the prompt information and then embed the media information into the video clip of the implantable media information, so that the processing of the target video by the user is realized on the premise of ensuring that the target video is not modified, the convenience and the video processing efficiency of the video processing are improved, and the secondary creation of the target video by the user is promoted.
In one embodiment, as shown in fig. 7, after S208, the method may further include:
S702, the video clip implanted with the media information is sent to a first server for auditing.
In one embodiment, after the media information is implanted into the video clip, the terminal may perform a preliminary audit on the implanted media information, such as auditing the data size and data format of the media information.
In one embodiment, after pre-auditing the implanted media information, the terminal sends the video segment of the implanted media information to the first server for content auditing, such as auditing whether the implanted media information contains objectionable content.
And S704, uploading the video clip embedded with the media information to a second server when the audit passing information fed back by the first server is received.
Wherein the second server is a server of a content distribution network.
In one embodiment, S704 may specifically include: the terminal acquires a network address designated by a first server; and uploading the video segments implanted with the media information to a server of the content distribution network according to the network address.
Wherein, for the video segment embedded with the media information, the first server can specify a network address, and the network address is a certain server of the content distribution network or a specific storage path of the certain server.
And S706, generating a video sharing message.
The video sharing message may include a playing address of the target video, a name of the target video, and introduction information of the embedded media information. For example, the video sharing message may be: "I planted a good-looking animation in XX small video to catch up with the bar, network address www.xyzw.com".
And S708, selecting the target contact from the contact list.
In one embodiment, a web page for playing a target video or a playing page of a client is provided with an entry of a social application, and when the target video embedded with media information needs to be shared, the entry of the social application can be triggered, so that the social application is opened, and a target contact is selected from a contact list of the social application. The target contact may be a single contact or a social group (i.e., a member of the social group is the target contact).
And S710, sending a video sharing message to a terminal corresponding to the target contact person, so that the terminal plays the target video implanted with the media information when obtaining the playing operation of the target contact person.
In one embodiment, when the user terminal of the target contact receives the video sharing message, the video sharing message is displayed on the corresponding session page. And when the playing operation generated by clicking the playing address of the target video by the target contact person is detected, entering a corresponding playing page to play the target video implanted with the media information.
And S712, generating implantation record of the target video.
The implantation record includes a segment number, a network address, an identifier of an implanter and a sharing text of the video segment in which the media information is implanted, and the specific presentation form may be [ the segment number, the implanter, the network address and the sharing text of the video segment in which the media information is implanted ]. An embedder may refer to a user who embeds media information in a target video.
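The implantation record's presentation form can be sketched as a small data structure. The class and field names are illustrative assumptions; the patent only specifies the four fields and their bracketed presentation form.

```python
# Illustrative sketch of the implantation record
# [segment number, implanter, network address, sharing text].
from dataclasses import dataclass

@dataclass
class ImplantRecord:
    """One record per video clip implanted with media information."""
    segment_no: int       # number of the clip implanted with media information
    implanter: str        # identifier of the user who implanted the media
    network_address: str  # CDN address of the implanted clip
    sharing_text: str     # text of the video sharing message

    def as_tuple(self):
        """Render the record in the bracketed presentation form."""
        return (self.segment_no, self.implanter,
                self.network_address, self.sharing_text)
```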
And S714, sending the implantation record to the first server to instruct the first server to store the implantation record to the information base corresponding to the target contact person.
In one embodiment, the first server stores the implant record to an information base corresponding to the target contact person, so that the terminal of the target contact person can query the implant record from the information base, and the terminals of other contact persons have no authority to query the implant record from the information base.
As an example, as shown in fig. 8, when the terminal is ready to play through the client, the clip description information of the video clip 1 is pulled from the first server, and then the video clip 1 is downloaded from the second server and played. Then, in the process of playing the video clip 1, the terminal pre-reads the clip description information of the video clip 2 from the first server, and then pre-loads the video clip 2 from the second server. If the implantable coordinate position exists in the segment description information of the video segment 2, it is determined that an image (or a video) can be implanted in the video segment 2, and when the video segment 2 is played, the image (or the video) is implanted into the video segment 2, so as to obtain the video segment 2 in which the image (or the video) is implanted (hereinafter referred to as an implanted video segment 2). And then, the implanted video segment 2 is sent to the first server for auditing, and when the auditing is passed, the implanted video segment 2 is uploaded to a second server (namely, a server of a content distribution network) for storage. Then, the terminal generates a video sharing message, sends the video sharing message to the friend B, and further generates an implantation record of the target video [ the segment number of the video segment in which the media information is implanted, the implanter, the network address, and the sharing text ], sends the implantation record to the first server for storage, so that the first server can record a sharing relationship, that is, the target video in which the image (or the video) is implanted is shared by the terminal a to the friend B.
In the above embodiment, by sending the video clip implanted with the media information to the first server for auditing, it can be ensured that the implanted media information is lawful and valid. Sending the video sharing message of the target video implanted with the media information to the target contact increases the access frequency of the target video, stimulates the user's desire to share, and improves user stickiness. Storing the implantation record of the target video in the information base corresponding to the target contact allows the terminal, when the user plays the target video, to determine that media information is implanted in the video clip, so that the video clip implanted with the media information is played instead of the original video clip.
In one embodiment, as shown in fig. 9, before S202, the method further includes:
S902, selecting a target video frame in the target video.
The target video frame may refer to a video frame for which the optical flow value between it and the immediately preceding video frame, or a video frame at least one frame earlier, reaches the optical flow threshold. For example, if the optical flow value a between the i-th video frame and the (i-1)-th video frame of the target video is large enough to reach the optical flow threshold b, then the i-th video frame is a target video frame. For another example, if the optical flow value c between the i-th video frame and the (i-2)-th video frame reaches the optical flow threshold b, the i-th video frame is likewise a target video frame.
Optical flow may refer to the apparent motion of a target object or scene between two consecutive video frames, caused by movement of the object or of the camera. Correspondingly, the optical flow value is the magnitude of the optical flow and represents the amount of displacement of the target object between the frames. The target object may include a person, an animal, a building, or other objects.
In one embodiment, S902 may specifically include: the terminal calculates first optical flow values between video frames in the target video; and when a first optical flow value reaches the optical flow threshold, takes the corresponding video frame as a target video frame.
In an embodiment, the step of calculating the first optical flow value between video frames in the target video may specifically include: the method comprises the steps that a terminal detects key points of a first video frame in a target video; and carrying out optical flow tracking on the detected key points in the target video to obtain a first optical flow value between video frames in the target video. The key points may be corner points of the target object, such as four corner points of a computer or a mobile phone.
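The target-frame selection of S902 can be sketched in pure Python. A real implementation would obtain key point positions via an optical flow tracker (e.g., pyramidal Lucas-Kanade); here the tracked positions are given directly, and the mean key point displacement stands in for the first optical flow value — both simplifying assumptions.

```python
# Illustrative sketch of S902: a frame becomes a target video frame when
# the optical flow value between it and the previous frame reaches the
# optical flow threshold. tracked_points[i] is the list of (x, y) key
# point positions in frame i, already tracked across frames (assumption).
from math import hypot

def mean_displacement(pts_a, pts_b):
    """Average displacement of tracked key points between two frames,
    used here as a stand-in for the first optical flow value."""
    return sum(hypot(bx - ax, by - ay)
               for (ax, ay), (bx, by) in zip(pts_a, pts_b)) / len(pts_a)

def select_target_frames(tracked_points, optical_flow_threshold):
    """Return the indices of frames whose optical flow value relative to
    the previous frame reaches the threshold."""
    targets = []
    for i in range(1, len(tracked_points)):
        if mean_displacement(tracked_points[i - 1],
                             tracked_points[i]) >= optical_flow_threshold:
            targets.append(i)
    return targets
```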
And S904, determining the geometric area of the target object in the target video frame based on the key points of the target video frame.
In one embodiment, after determining the target video frame, the terminal tracks the tracked key points of the target video frame according to the optical flow, and extracts the geometric area of the target object from the target video frame.
Specifically, S904 may include: the terminal performs homography detection on the key points of the target video frame; and when the key points of the target video frame contain target key points that meet the homography condition, and the number of key points meeting the homography condition reaches a number threshold, extracts the geometric area where the target key points are located.
The homography can be a projection mapping of an object or a feature point from one video frame to another video frame, and can be used for describing the transformation situation of the object or the feature point in different video frames (which can be understood as different visual angles). For example, assuming that video frame a and video frame B are video frames taken at two different perspectives, the homography at this time may be a case where the object or the feature point is transformed at the two different perspectives.
In an embodiment, performing homography detection on the key points of the target video frame may specifically include: the terminal extracts an initial geometric area using a subset of the key points, and then performs homography detection on the remaining key points using the RANSAC algorithm. If a detected key point meets the homography condition, it lies on the geometric surface of the initial geometric area and is added to the initial geometric area. Homography detection stops when the number of key points in the area reaches the number threshold, or when the number of not-yet-detected key points falls below the number threshold, yielding the final geometric area. Note that as key points are added to the initial geometric area, the size of the resulting geometric area grows, i.e. the final geometric area is larger than the initial one. The size may indicate the length and width of the geometric area, or its area.
In an embodiment, the step of performing homography detection on the key points of the target video frame may specifically include: the terminal determines a homography matrix; constructing a homography judgment model according to the homography matrix; and judging whether the key points of the target video frame are on the same plane or not through the homography judgment model.
In one embodiment, the terminal calculates a homography matrix from a target video frame and a previous video frame of the target video frame. Specifically, during optical flow tracking of the key points, the terminal calculates the positions of the key points in the target video frame and the position (i.e., pixel position) of the previous video frame, and from these two positions, a homography matrix can be calculated.
For example, the terminal calculates the positions X of some key points (e.g., a subset extracted from all key points) in the previous video frame, then calculates the positions Y of those key points in the target video frame using an optical flow tracking algorithm, and solves the simultaneous equations Y = HX over all tracked key points to obtain the homography matrix H of the geometric surface. Once the homography matrix of the geometric surface is calculated, a homography judgment model may be constructed from it. The model may be expressed as y = Hx, where y represents the position of a key point in the target video frame, x represents its position in the previous video frame, and H is the homography matrix of a given geometric surface. If a key point in the target video frame and the corresponding key point in the previous video frame lie on the same geometric surface (with homography matrix H), their positions satisfy y = Hx, i.e. the homography condition.
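The homography judgment model y = Hx can be sketched as follows, with points in homogeneous coordinates and H a 3x3 matrix. Estimating H itself (e.g., via RANSAC) is outside this sketch; the pixel tolerance `tol` is an assumed parameter, not a value from the patent.

```python
# Illustrative sketch of the homography condition y = Hx: a key point is
# judged to lie on the geometric surface described by the 3x3 matrix H
# when its tracked position matches the prediction within a tolerance.

def apply_homography(H, point):
    """Map a point x from the previous frame through H (homogeneous
    coordinates) to get its predicted position in the target frame."""
    x, y = point
    u = H[0][0] * x + H[0][1] * y + H[0][2]
    v = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return (u / w, v / w)

def satisfies_homography(H, prev_pt, cur_pt, tol=1.0):
    """True when the key point's position in the target frame agrees with
    the homography prediction, i.e. the homography condition holds."""
    px, py = apply_homography(H, prev_pt)
    return abs(px - cur_pt[0]) <= tol and abs(py - cur_pt[1]) <= tol
```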
S906, when the key points of other video frames in the target video are respectively determined to fall into the geometric surface where the geometric area is located, determining the geometric area meeting the preset size condition as a media information implantation area.
Wherein, the media information embedding area may refer to an area available for embedding media information.
In one embodiment, when at least one geometric region of a target video frame is obtained, a non-key frame before the target video frame in the target video may be traced back, and when a key point in the non-key frame falls into the geometric surface where the geometric region is located, the key point in the non-key frame before the target video frame is added to the corresponding geometric region. In addition, when at least one geometric area of the target video frame is obtained, optical flow tracking can be performed on key points in the video frame behind the target video frame according to the geometric area, and new key points are detected to judge whether the new key points are located on a geometric surface where the geometric area is located, so that the geometric area of each video frame in the target video is obtained.
In one embodiment, S906 may specifically include: when the key points of other video frames in the target video are determined to fall into the geometric surfaces where the geometric areas are located respectively, the terminal calculates the sizes of the geometric areas in the corresponding video frames respectively; or, calculating the total size of the geometric area in the corresponding video frame; when the size or the total size satisfies a preset size condition, the geometric area satisfying the preset size condition is determined as the media information embedding area.
In one embodiment, after S906, the method further comprises: the terminal determines the number of areas of different media information implantation areas in a target video; determining the time length of each media information implantation area appearing in the target video; and when playing the video clip of the implantable media information, outputting the number of the areas and the corresponding duration.
The number of the areas is the number of the media information implantation areas in different geometric planes. The number of the output areas can prompt a user how many media information implantation areas can be used for implanting media information; outputting the corresponding duration can prompt the user how long the media information embedding area can be used for embedding the media information, thereby avoiding selecting an excessively short media information embedding area for embedding the media information.
In one embodiment, the method further comprises: and the terminal determines the value of the media information implantation area according to the size and the duration of the media information implantation area in the corresponding video frame. Wherein, when the size is large enough and the duration is long enough, it indicates that the media information implantation area is more valuable.
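The value estimate above can be sketched as a simple scoring function. The patent only states that a larger region appearing for longer is more valuable, so the weighted product below is one possible realisation, with all names and weights being assumptions.

```python
# Hedged sketch: score a media information implantation region from its
# size in the video frame and its duration in the target video. The
# product form and the weight parameters are illustrative assumptions.

def region_value(width, height, duration_s,
                 size_weight=1.0, time_weight=1.0):
    """A region's value grows with both its area and how long it appears,
    matching the embodiment's statement that a sufficiently large,
    sufficiently long-lived region is more valuable."""
    return size_weight * (width * height) * time_weight * duration_s
```

Regions scoring below some threshold could then be excluded from the prompt, so users are not offered an overly small or short-lived implantation region.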
In the above embodiment, a target video frame in the target video is selected, the geometric area of the target object in the target video frame is determined from the key points of the target video frame, and it is judged whether the key points of other video frames fall on the geometric surface where the geometric area lies. When they do, the geometric area that meets the preset size condition is determined as the media information implantation region. Video frames containing media information implantation regions of sufficient duration are thus obtained automatically, avoiding the need to watch the video and manually select geometric areas usable for promotional information, which reduces the selection time and improves the selection efficiency of media information implantation regions. In addition, determining only geometric areas that meet the preset size condition as media information implantation regions effectively ensures that the obtained regions have sufficient application value.
As shown in fig. 10, fig. 10 is a video playing method provided in an embodiment, which is described by taking the method as an example applied to the terminal 108 in fig. 1, and includes the following steps:
S1002, determining that the target video corresponds to the video clip implanted with the media information according to the implantation record.
The target video may be a short video or a long video, which is not limited in this application. The target video may be stored in segments, which include at least two video segments, each stored as a separate video file. The target video is embedded with media information corresponding to one or more video segments, and each video segment embedded with the media information is stored as an independent video file. The video segment with embedded media information means that the video segment contains the embedded media information. It should be noted that, each original video segment of the target video is stored in a separate video file, and each video segment embedded with the media information is also stored in a separate video file, please refer to fig. 11.
The implantation record includes the segment number, the network address, the identifier of the implanter and the sharing text of the video segment implanted with the media information, and the specific presentation form may be [ the segment number, the implanter, the network address and the sharing text of the video segment implanted with the media information ].
In one embodiment, before starting playing the target video, the terminal inquires about an implantation record of the target video from the first server so as to determine whether the target video corresponds to a video segment implanted with media information.
In one embodiment, S1002 may specifically include: and the terminal determines that the target video corresponds to the video clip implanted with the media information according to the clip number of the video clip implanted with the media information contained in the implantation record.
In one embodiment, the implantation record is sent by the terminal of the implanter to the first server and is maintained by the first server in the information repository corresponding to the target contact. When the terminal of the target contact downloads and plays the target video, the implantation record of the target video can be queried from the information repository according to the identity of the target contact. When the terminal of any other contact downloads and plays the target video, the implantation record cannot be queried from the information repository. In the embodiment of fig. 10, unless otherwise specified, the terminal of the target contact is taken as the example for explanation. Here, the target contact is the user with whom the video implanted with the media information is shared.
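The per-contact visibility described above can be sketched as a lookup keyed by the contact's identity: only the target contact's query returns the record, while other contacts see nothing. The storage layout and names are illustrative assumptions.

```python
# Hypothetical in-memory stand-in for the first server's information repository:
# contact_id -> {video_id -> [implantation records]}
info_base = {}

def save_record(contact_id, video_id, record):
    """Store an implantation record under the target contact's identity."""
    info_base.setdefault(contact_id, {}).setdefault(video_id, []).append(record)

def query_records(contact_id, video_id):
    """Return implantation records visible to this viewer ([] for non-targets)."""
    return info_base.get(contact_id, {}).get(video_id, [])

save_record("user_B", "video_42", {"segment": 2, "implanter": "user_A"})
print(query_records("user_B", "video_42"))  # the target contact sees the record
print(query_records("user_C", "video_42"))  # [] -- other contacts see nothing
```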
And S1004, sequentially downloading the video clip of the target video and the video clip implanted with the media information according to the playing sequence.
In one embodiment, S1004 may specifically include: when it is determined that the target video corresponds to a video segment implanted with media information, downloading the video segment implanted with media information according to the playing sequence, and, for each such segment, skipping the download of the corresponding original (non-implanted) video segment. For example, user A (i.e., the implanter) implants media information in video segment i of the target video to obtain a target video segment i containing the media information; when user B (i.e., the target contact mentioned above) plays the target video, target video segment i containing the media information is downloaded from the CDN server, and the original video segment i is not downloaded from the CDN server. Here, i is a positive integer greater than or equal to 1.
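The substitution rule of S1004 can be sketched as follows: for each segment in play order, the embedded version is chosen when the implantation record lists one, and the original is chosen otherwise. The URL scheme and function name are illustrative assumptions.

```python
def plan_downloads(num_segments, implant_records):
    """implant_records maps segment number -> URL of the embedded replacement.
    Returns the list of URLs to download, one per segment, in play order."""
    urls = []
    for i in range(1, num_segments + 1):
        if i in implant_records:
            urls.append(implant_records[i])  # embedded segment replaces the original
        else:
            urls.append(f"https://cdn.example.com/video/seg{i}.mp4")  # original
    return urls

plan = plan_downloads(6, {2: "https://cdn.example.com/video/seg2_embedded.mp4"})
print(plan[1])  # segment 2's slot holds the embedded file, not the original
```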
In one embodiment, S1004 may specifically include: the terminal sequentially acquires the segment description information of the video segments in the target video according to the playing sequence; and downloading the corresponding video clip according to the acquired clip description information when the clip description information is acquired each time.
The clip description information is used to describe information of the video clip, including but not limited to: the encoding mode, length and frame number of the video segment, and the network address for storing the video segment. For the video clip embedded with the media information, the clip description information may further include an embedded video frame number and an embedded coordinate position embedded with the media information.
The target video comprises at least two video segments, each video segment can be distributed and stored in the CDN server, correspondingly, segment description information of each video segment constitutes video description information, and the video description information may refer to information for describing the target video.
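The segment description information listed above can be sketched as a per-segment record, with the video description information being the collection of these records. All field names and values below are illustrative; the patent specifies only which kinds of information the description carries.

```python
# Hypothetical segment description for a segment with embedded media information.
segment_description = {
    "encoding": "h264",           # encoding mode of the segment
    "length_seconds": 10,         # segment length
    "frame_count": 250,           # number of frames in the segment
    "network_address": "https://cdn.example.com/video/seg2.mp4",
    # Fields present only for segments implanted with media information:
    "embedded_frame_numbers": (40, 120),  # frames carrying the media information
    "embedded_coordinates": (320, 180),   # coordinate position of the media
}

# The segment descriptions of all segments together form the video description.
video_description = [segment_description]
print(video_description[0]["length_seconds"])  # 10
```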
For example, before playing, a client or a web page obtains segment description information of a video segment 1 (i.e., a first video segment) from a first server, where the segment description information has a network address of the video segment 1, downloads the video segment 1 from a second server (i.e., a CDN server) according to the network address, and then plays the video segment 1.
In one embodiment, S1004 may specifically include: and the terminal downloads the video clip implanted with the media information from the second server according to the network address of the video clip implanted with the media information in the implantation record. For the target contact person, the original video segment in the target video does not need to be downloaded from the second server, and only the corresponding video segment implanted with the media information needs to be downloaded.
S1006, playing the video clip of the target video and the video clip implanted with the media information according to the playing sequence so as to display the media information in the playing process.
In one embodiment, the terminal loads the video segment implanted with the media information and other video segments in the target video into the player in sequence according to the playing sequence, and then plays the video segments in sequence.
In one embodiment, S1006 may specifically include: rendering the loaded video clip by the terminal so as to obtain a rendered video clip; then, the rendered video clips are played in the playing sequence.
In one embodiment, playing a video clip implanted with media information may specifically include: after the loaded video clip implanted with media information is rendered to obtain a rendered video clip containing the media information, when playback reaches the position of that clip in the playing sequence, the terminal plays the rendered video clip containing the media information, so as to display the media information during playing.
In one embodiment, the terminal invokes a rendering tool to render the loaded video clip implanted with the media information, so as to obtain the rendered video clip containing the media information.
For example, the terminal invokes WebGL (Web Graphics Library), or OpenGL ES (open Graphics Library for Embedded Systems), or OpenGL ES2.0 version to render the video segment with the Embedded media information, so as to obtain a rendered video segment with the media information.
In one embodiment, when the media information is a text, an image or a video, the terminal renders the text, the image or the video contained in the video clip during the rendering process, so as to display the corresponding text, the image or the video in the played video picture.
In one embodiment, when the media information is audio, the terminal renders the audio playing interface and the corresponding text information (e.g., lyrics) included in the video clip during the rendering process, so that the audio playing interface and the corresponding text information are displayed during playing, and the sound (e.g., music) corresponding to the audio is also played.
In one embodiment, when a video clip implanted with media information is played, the terminal detects trigger operations on the media information in real time; if a trigger operation on the media information is detected, the identifier of the implanter and the sharing text are extracted from the implantation record, and the identifier of the implanter and the sharing text are displayed at a position corresponding to the display position of the media information.
For example, when a video clip with embedded media information is played, the terminal may detect the user's trigger operation on the media information in real time: if the user clicks the media information in the video frame with the mouse, or places the mouse focus over the media information in the video frame, the identifier of the implanting user and the sharing text may be displayed near the display position of the media information.
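The trigger detection above reduces to a hit test: if the click or hover point falls inside the rectangle where the media information is displayed, the implanter's identifier and sharing text are surfaced. The rectangle layout and record fields below are illustrative assumptions.

```python
def hit_media_info(point_xy, media_rect):
    """media_rect = (x, y, width, height) of the displayed media information."""
    x, y = point_xy
    rx, ry, rw, rh = media_rect
    return rx <= x <= rx + rw and ry <= y <= ry + rh

def on_trigger(point_xy, media_rect, implant_record):
    """Return (implanter, sharing text) when the trigger hits the media info."""
    if hit_media_info(point_xy, media_rect):
        return implant_record["implanter"], implant_record["sharing_text"]
    return None

record = {"implanter": "user_A", "sharing_text": "Look at this!"}
print(on_trigger((350, 200), (320, 180, 100, 60), record))  # inside -> shown
print(on_trigger((10, 10), (320, 180, 100, 60), record))    # outside -> None
```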
In one embodiment, when the loaded video clip embedded with the media information is played in the playing sequence, the terminal also displays the user information embedded with the media information. For example, when a user at the local end plays a video, since media information has been embedded in one or more video segments of the video by a friend of the user, at this time, when the video segment in which the media information is embedded is played, user information (such as a user name or an avatar of the friend) of the friend is displayed. It should be noted that the friends who plant the media information may be one or more friends, for example, friend 1 plants media information a in video segment i, and friend 2 plants media information B in video segment i or j, where i and j are different positive integers.
In one embodiment, if a trigger operation (e.g., clicking or touching) for the user information is detected, the terminal invokes an interface of a social application, and interacts with the user who implants the media information by using the social application, for example, sends a message related to the implanted information to the user who implants the media information. For example, the message may be about the source of the implantation information or the included meaning, and in this embodiment, the specific content of the message is not limited.
In the above embodiment, the implantation record of the target video is queried during playing, so it can be determined that the target video corresponds to a video segment implanted with media information. That segment is then downloaded, and the implanted media information is displayed while the segment is played, so that the user can view the media information implanted by the implanter; the content of the video is enriched and the interaction between the user and the target video is improved.
The application also provides an application scenario applying the information implantation method. Specifically, the information embedding method is explained by taking media information as an image as an example:
overview of implantation
When a video is actually played, its content is in fact cut into video segments each lasting from roughly ten to several tens of seconds; during playing these segments are pre-downloaded and stitched together, so that the viewer perceives one continuous video.
The implantation in this embodiment takes advantage of this cutting scheme: for the target user watching the video, when a segment with an implanted image is due to be played, the original video segment is replaced by the segment with the implanted image, so that only the specified target user sees the implanted content.
As shown in fig. 12, a video with a duration of 60 s is cut into six 10 s video clips. User A implants an image in the 21-30 s video clip and then shares the video with the implanted image with user B. When user B watches the video, the video clip with the embedded image is displayed.
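The mapping from playback time to segment index in this example can be sketched in one line (a minimal illustration; the helper name is ours, and segments are numbered from 1 as in fig. 12):

```python
def segment_for_time(t_seconds, segment_len=10):
    """1-based index of the segment containing playback time t (0 <= t < duration)."""
    return int(t_seconds // segment_len) + 1

print(segment_for_time(25))  # 3 -- the 21-30 s clip that user A embeds into
print(segment_for_time(0))   # 1
print(segment_for_time(59))  # 6
```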
(II) implantation procedure
The implantation process can refer to fig. 8, and the specific content is as follows:
(1) when a user a is ready to play a certain video, the client pulls the segment description information of the video segment 1 from the server (i.e., the first server in the above embodiment), and then loads the video segment 1 from the CDN server according to the segment description information.
The client may be a video client for playing video.
(2) When video segment 1 is played, the client automatically pulls the segment description information of video segment 2 from the server, and then loads video segment 2 from the CDN server according to that description. Since the segment description information of video segment 2 includes implantable position information (such as the implantable coordinate position and the implantable video frame numbers), it can be determined that a creative can be implanted into video segment 2. For example, if the implantable video frames are numbered N to N + M, frames N through N + M of video segment 2 can carry the creative.
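The frame-range test implied here is simple: playback frame numbers inside [N, N + M] are implantable, and the prompt fires when frame N is reached. N and M are placeholders from the text; the concrete values below are purely illustrative.

```python
def is_implantable_frame(frame_no, n, m):
    """True when frame_no lies in the implantable range [N, N + M]."""
    return n <= frame_no <= n + m

N, M = 40, 80  # illustrative: frames 40..120 of segment 2 can carry the creative
print(is_implantable_frame(40, N, M))   # True -- the prompt is shown here
print(is_implantable_frame(121, N, M))  # False
```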
The implantable position information may be position information that can be used for implantation in the video calculated by the AI algorithm, such as an area enclosed by a dashed ellipse in fig. 6 a.
(3) When the client plays the Nth frame of video segment 2, prompt information is output on the playing page, indicating that a creative can be implanted there.
(4) The user A finishes editing the image to be embedded through the client, and then drags the image to the corresponding implantable area, as shown in FIG. 6b, so as to embed the image into the corresponding video frame. The image is then synchronized into a subsequent video frame of video segment 2. After the image is implanted, the video clip 2 implanted with the image is uploaded to a server side for auditing, and after the auditing is passed, the client side uploads the video clip 2 implanted with the image to a CDN server appointed by the server side.
Alternatively, instead of uploading the whole video clip 2 with the embedded image to the server for review, only the image to be embedded may be uploaded to the server for review.
(5) User A clicks to share, the video with the embedded image is shared with user B, and the server stores the following implantation record in user B's information base: [video clip 2 with the image embedded, implanted by A, network address of video clip 2, sharing text]. User A and user B are friends.
(III) friend watching process (i.e. process of friend playing video)
The video playing process is shown in fig. 13, and is described as follows:
(1) User B starts playing the video shared by user A: the segment description information of video segment 1 is read from the server, and video segment 1 is downloaded from the CDN server and played. During playing, the implantation record is queried from user B's information base: [video clip 2 with the image embedded, implanted by A, network address of video clip 2, sharing text]. Here, "implanted by A" identifies the implanter.
(2) When it is determined that the user a (i.e., a friend) implants an image in the video clip 2 according to the implantation record, the client downloads the video clip 2 with the implanted image from the CDN server, and when the video clip 1 is played, plays the video clip 2, displays the implanted image at a corresponding position, and may also display a sharing text (i.e., a sharing language) of the user a.
As an example, the information embedding method is described below from the product side, again taking media information as an image:
(1) if the user never experiences the information instrumentation feature in the client, then when playing a movie (tv show), the small icon is flashed 2 times in the instrumentation region, attracting the user to click, as shown in fig. 6 a. If the user had experienced this information-embedding function, the information-embedding button on the client will flash on and off when the implantable video segment 2 is played, as shown with reference to fig. 5.
(2) After the user clicks the small icon or the information implantation button, an editing area for the content to be implanted pops up, and user A can implant information according to his or her own creative idea.
For example, as shown in fig. 14, when the a user has selected the image to be implanted, the area to be implanted is specified, such as selecting the image to be implanted in the implantable area of fig. 14(a), and then the image to be implanted is dragged to the implantable area, thereby obtaining the video frame of fig. 14 (b). In addition, the client may synchronize the image to other video frames of video segment 2.
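"Synchronizing the image to other video frames" can be sketched as copying the user's drop position to every other implantable frame. This is a deliberate simplification under stated assumptions: a real client would track the implantable area per frame (it may move between frames), whereas the sketch applies one frame-relative position throughout.

```python
def synchronize_placement(drop_xy, implantable_frames):
    """Return {frame_no: (x, y)} applying the same placement to each frame."""
    return {frame_no: drop_xy for frame_no in implantable_frames}

# Illustrative: the image dropped at (320, 180) in frame 40 is propagated to
# every implantable frame of the segment (frames 40..120 here).
placements = synchronize_placement((320, 180), range(40, 121))
print(len(placements))  # one placement per implantable frame
print(placements[40])   # (320, 180)
```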
(3) When user B (i.e., a social friend of user A) plays the video and playback reaches video clip 2, the image implanted by user A is displayed. When the mouse hovers over, or the user touches, the implanted image or its area, playing is paused and user A's information and sharing text are displayed.
By implementing the scheme of this embodiment, ordinary users can participate in implanting personalized images or other content into videos, stimulating their creativity; this increases the uniqueness of the video product, stimulates users' desire to share through friend sharing, and improves the product's stickiness to users. In addition, the convenience and efficiency of video processing are improved, and secondary creation of videos by users is promoted.
It should be understood that although the steps in the flowcharts of figs. 2, 7, 9 and 10 are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise, the steps are not strictly limited to that order and may be performed in other orders. Moreover, at least some of the steps in figs. 2, 7, 9 and 10 may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments, and which are not necessarily performed in sequence but may be performed in turn or alternately with other steps or with sub-steps of other steps.
Fig. 15 shows an information implanting apparatus provided in an embodiment. The apparatus may form part of a computer device as a software module, a hardware module, or a combination of the two, and specifically includes: a playing module 1502, a prompt module 1504, an obtaining module 1506, and an implanting module 1508, wherein:
a playing module 1502, configured to play a target video;
a prompt module 1504, configured to output prompt information when a video clip of implantable media information is played;
an obtaining module 1506, configured to obtain the media information to be implanted when an obtaining instruction responding to the prompt information is obtained;
an implanting module 1508, configured to implant the media information into the video clip of implantable media information.
In one embodiment, the target video comprises at least two video segments, and the video description information of the target video comprises the segment description information corresponding to each of the at least two video segments. The obtaining module 1506 is further configured to sequentially obtain the corresponding segment description information according to the video playing sequence, and, each time segment description information is obtained, to download the corresponding video segment from a server of the content distribution network according to it.
In one embodiment, the segment description information corresponding to the video clip of implantable media information contains implantable video frame numbers, and the video clip of implantable media information comprises a video sub-clip of implantable media information;
the prompt module 1504 is further configured to, in the process of playing the target video, judge from the implantable video frame numbers when playback reaches the video sub-clip of implantable media information;
the implanting module 1508 is further configured to implant the media information into the video sub-clip of implantable media information.
In one embodiment, the segment description information corresponding to the video clip of implantable media information comprises an implantable coordinate position; the target video is played through a client, and an information implantation button is arranged in the client;
the prompt module 1504 is further configured to output the prompt information through the information implantation button when the information implantation function has been used before; and, when the information implantation function has not been used, to obtain the implantable coordinate position of the currently played video frame and output prompt information corresponding to the implantable coordinate position.
In one embodiment, the prompt module 1504 is further configured to: output the prompt information through the information implantation button; or obtain the implantable coordinate position corresponding to the currently played video frame from the segment description information of the video segment, and output the prompt information according to the implantable coordinate position when the video segment of implantable media information is played.
In one embodiment, the presentation form of the prompt message includes: displaying prompt text, displaying prompt animation, flashing icons or flashing the information implanting button.
In the above embodiment, when the target video is played, it may be determined whether a video clip of the implantable media information is played, and when the video clip of the implantable media information is played, the prompt information is output, so that the user may be prompted that the video clip is the implantable media information. In addition, the user can acquire the media information to be embedded according to the prompt information and then embed the media information into the video clip of the implantable media information, so that the processing of the target video by the user is realized on the premise of ensuring that the target video is not modified, the convenience and the video processing efficiency of the video processing are improved, and the secondary creation of the target video by the user is promoted.
In one embodiment, as shown in fig. 16, the apparatus may further include: a sending module 1510 and an uploading module 1512; wherein:
the sending module 1510 is configured to, after the media information is implanted into the video segment of implantable media information, send the video segment implanted with the media information to the first server for review;
the uploading module 1512 is configured to upload the video clip implanted with the media information to the second server when review-passed information fed back by the first server is received.
In one embodiment, the second server is a server of a content distribution network; the uploading module 1512 is further configured to obtain the network address specified by the first server, and to upload the video clip implanted with the media information to the second server according to that network address.
In one embodiment, as shown in fig. 16, the apparatus may further include: an analysis module 1514; wherein:
an analysis module 1514, configured to generate a video sharing message; select a target contact from a contact list; and send the video sharing message to the terminal corresponding to the target contact, so that that terminal plays the target video implanted with the media information when it obtains the target contact's playing operation.
In one embodiment, as shown in fig. 16, the apparatus may further include: a record save module 1516; wherein:
a record storage module 1516, configured to generate the implantation record of the target video, the implantation record comprising the segment number of the video segment implanted with the media information, the network address, the identifier of the implanter, and the sharing text; and to send the implantation record to the first server so as to instruct the first server to store the implantation record in the information base corresponding to the target contact.
In the above embodiment, by sending the video clip with the embedded media information to the first server for review, it can be ensured that the embedded media information is legal and valid. Sending the video sharing message for the target video implanted with media information to the target contact increases the access frequency of the target video, stimulates the user's desire to share, and improves the stickiness of the video to users. Storing the implantation record of the target video in the information base corresponding to the target contact allows it to be determined, when the user plays the target video, which video segments have media information implanted, so that the video segment implanted with media information, rather than the original video segment, is played.
In one embodiment, as shown in fig. 16, the apparatus may further include: an implantable region determination module 1518; wherein:
an implantable region determining module 1518, configured to select a target video frame in the target video before the target video is played; determine the geometric area of the target object in the target video frame based on the key points of the target video frame; and, when it is determined that the key points of the other video frames in the target video fall on the geometric plane where the geometric area is located, determine a geometric area meeting the preset size condition as the media information implantation area.
In the above embodiment, a target video frame is selected from the target video, and the geometric area of the target object in that frame can be determined from the key points of the target video frame. It is then judged whether the key points of the other video frames fall on the geometric plane where that geometric area is located, and when they do, a geometric area meeting the preset size condition is determined as the media information embedding area. In this way, video frames containing a media information embedding area of sufficient duration are obtained automatically, avoiding the need to watch the video and manually select geometric areas usable for promotional information; the selection time for the media information embedding area is reduced and the selection efficiency is improved. In addition, because only a geometric area meeting the preset size condition is determined as the media information embedding area, the obtained embedding area is effectively ensured to have sufficient application value.
For specific limitations of the information implanting apparatus, reference may be made to the above limitations of the information implanting method, which will not be described herein again. The various modules in the information implanting device described above may be implemented in whole or in part by software, hardware, and combinations thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
As shown in fig. 17, fig. 17 is a video playing apparatus provided in an embodiment, where the apparatus may adopt a software module or a hardware module, or a combination of the two modules to form a part of a computer device, and the apparatus specifically includes: a determination module 1702, a download module 1704, and a play module 1706, wherein:
a determining module 1702, configured to determine, according to the implantation record, that the target video corresponds to the video segment in which the media information is implanted;
a downloading module 1704, configured to sequentially download the video segments of the target video and the video segments implanted with the media information according to a playing sequence;
a playing module 1706, configured to play the video segment of the target video and the video segment implanted with the media information according to the playing sequence, so as to display the media information in a playing process.
In an embodiment, the downloading module 1704 is further configured to sequentially obtain the segment description information of the video segments in the target video according to the playing order, and, each time segment description information is obtained, to download the corresponding video segment according to it.
In one embodiment, as shown in fig. 18, the apparatus may further include: a detection module 1708, an extraction module 1710 and a display module 1712; wherein:
a detection module 1708, configured to detect, in real time, trigger operations on the media information when the video segment implanted with the media information is played;
an extracting module 1710, configured to extract the identifier of the implanter and the sharing text from the implantation record if a trigger operation on the media information is detected;
a display module 1712, configured to display the identifier of the implanter and the sharing text at a position corresponding to the display position of the media information.
In the above embodiment, the implantation record of the target video is queried during playing, so it can be determined that the target video corresponds to a video segment implanted with media information. That segment is then downloaded, and the implanted media information is displayed while the segment is played, so that the user can view the media information implanted by the implanter; the content of the video is enriched and the interaction between the user and the target video is improved.
For specific limitations of the video playing apparatus, reference may be made to the above limitations of the video playing method, which is not described herein again. The modules in the video playing apparatus can be wholly or partially implemented by software, hardware and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a terminal, such as terminal 102 or terminal 108 in fig. 1, and its internal structure diagram may be as shown in fig. 19. The computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The communication interface of the computer device is used for carrying out wired or wireless communication with an external terminal, and the wireless communication can be realized through WIFI, an operator network, NFC (near field communication) or other technologies. The computer program is executed by a processor to implement an information embedding method or a video playing method. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
Those skilled in the art will appreciate that the architecture shown in FIG. 19 is merely a block diagram of a portion of the structure related to the disclosed solution and does not limit the computer devices to which the solution applies; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In one embodiment, there is also provided a computer device comprising a memory and a processor, the memory having a computer program stored therein, the processor implementing the steps of the above-described information implanting method embodiments when executing the computer program.
In one embodiment, a computer-readable storage medium is provided, storing a computer program that, when executed by a processor, performs the steps in the above-described information implantation method embodiments.
In another embodiment, a computer device is further provided, which includes a memory and a processor, where the memory stores a computer program, and the processor implements the steps in the above video playing method embodiment when executing the computer program.
In another embodiment, a computer-readable storage medium is provided, storing a computer program which, when executed by a processor, implements the steps in the above video playing method embodiment.
In one embodiment, a computer program product or computer program is provided that includes computer instructions stored in a computer-readable storage medium. The computer instructions are read by a processor of a computer device from a computer-readable storage medium, and the computer instructions are executed by the processor to cause the computer device to perform the steps in the above-mentioned method embodiments.
It will be understood by those skilled in the art that all or part of the processes of the above method embodiments can be implemented by a computer program instructing the relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the above method embodiments. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein can include at least one of non-volatile and volatile memory. Non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, or optical storage. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM can take many forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as a combination of technical features contains no contradiction, it should be considered within the scope of this specification.
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and modifications without departing from the concept of the present application, and these fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.
Claims (15)
1. An information embedding method, characterized in that the method comprises:
playing the target video;
when a video clip of the implantable media information is played, outputting prompt information;
when an acquisition instruction responding to the prompt information is obtained, acquiring media information to be implanted; and
embedding the media information into the video clip of the implantable media information.
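By way of illustration only, the claimed flow — play, prompt on an implantable segment, acquire media in response, embed — could be sketched as follows. Every identifier here (`VideoSegment`, `play_with_embedding`, `get_media`) is hypothetical and not part of the claims:

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class VideoSegment:
    number: int
    implantable: bool = False          # whether media information may be implanted
    embedded_media: Optional[str] = None

def play_with_embedding(segments: List[VideoSegment],
                        get_media: Callable[[], Optional[str]]) -> List[str]:
    """Walk the segments in play order; on an implantable segment, output a
    prompt and, if media is supplied in response, embed it into that segment."""
    prompts = []
    for seg in segments:
        if seg.implantable:
            prompts.append(f"segment {seg.number}: media can be implanted here")
            media = get_media()        # stands in for the acquisition instruction
            if media is not None:
                seg.embedded_media = media
    return prompts

segments = [VideoSegment(1), VideoSegment(2, implantable=True), VideoSegment(3)]
prompts = play_with_embedding(segments, lambda: "my_sticker.png")
```

The prompt is emitted only for the implantable segment, and only that segment receives the media.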
2. The method according to claim 1, wherein the target video comprises at least two video segments, and the video description information of the target video comprises segment description information corresponding to the at least two video segments; before the playing the target video, the method further comprises:
sequentially acquiring corresponding segment description information according to a video playing sequence;
and after each acquisition of segment description information, downloading the corresponding video segment from a server of a content distribution network according to the acquired segment description information.
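A minimal sketch of this download loop, assuming `fetch_description` and `download_segment` stand in for the real network calls to the description service and the content distribution network (both names, and the `cdn_url` field, are illustrative):

```python
def stream_segments(segment_ids, fetch_description, download_segment):
    """Fetch segment description information in video playing order and
    download each segment from the CDN address named in its description."""
    downloaded = []
    for seg_id in segment_ids:               # video playing order
        desc = fetch_description(seg_id)     # acquire segment description info
        downloaded.append(download_segment(desc["cdn_url"]))
    return downloaded

# Stubbed-out "network": descriptions held in memory, download echoes the URL.
descs = {1: {"cdn_url": "cdn://seg1"}, 2: {"cdn_url": "cdn://seg2"}}
got = stream_segments([1, 2], lambda i: descs[i], lambda url: url)
```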
3. The method according to claim 1, wherein the segment description information corresponding to the video segment of the implantable media information comprises an implantable video frame number, and the video segment of the implantable media information comprises video sub-segments of the implantable media information;
the determining, according to the video description information, that playback has reached a video segment of the implantable media information comprises:
in the process of playing the target video, determining, according to the implantable video frame number, that playback has reached a video sub-segment of the implantable media information;
the embedding the media information into a video clip of the implantable media information comprises:
implanting the media information into the video sub-segments of the implantable media information.
4. The method according to claim 1, wherein the segment description information corresponding to the video segment of the implantable media information comprises an implantable coordinate position, and the target video is played through a terminal; the outputting the prompt information comprises:
when the information implantation function of the terminal has been used, outputting the prompt information through an information implantation button; and
when the information implantation function of the terminal has not been used, acquiring the implantable coordinate position of the currently played video frame, and outputting prompt information corresponding to the implantable coordinate position.
5. The method of claim 1, wherein the outputting the prompt information comprises:
outputting prompt information through an information implantation button of the terminal; or,
when the video clip of the implantable media information is played, acquiring an implantable coordinate position corresponding to the currently played video frame from the segment description information of the video clip, and outputting prompt information according to the implantable coordinate position.
6. The method according to any one of claims 1 to 5, wherein the presentation form of the prompt information comprises: displaying a prompt text, displaying a prompt animation, flashing an icon, or flashing the information implantation button.
7. The method of claim 1, wherein after the embedding of the media information into the video clip of the implantable media information, the method further comprises:
sending the video clip implanted with the media information to a first server for auditing; and
when audit-approval information fed back by the first server is received, uploading the video clip implanted with the media information to a second server.
8. The method of claim 7, wherein the second server is a server of a content distribution network; the uploading the video clip implanted with the media information to the second server comprises:
acquiring a network address specified by the first server;
and uploading the video clip implanted with the media information to a server of the content distribution network according to the network address.
9. The method of any one of claims 1 to 5, 7 to 8, further comprising:
generating a video sharing message;
selecting a target contact from a contact list;
and sending the video sharing message to a terminal corresponding to the target contact, so that the terminal plays the target video implanted with the media information upon obtaining a playing operation of the target contact.
10. The method of claim 9, further comprising:
generating an implantation record of the target video, the implantation record comprising a segment number of the video segment implanted with the media information, a network address, an identifier of the implanter, and a sharing text; and
sending the implantation record to a first server to instruct the first server to store the implantation record in an information base corresponding to the target contact.
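The implantation record of claim 10 could be modeled as a plain data structure. The field names below mirror the claim language (segment number, network address, identifier of the implanter, sharing text), but the exact schema and serialization are assumptions for illustration:

```python
from dataclasses import dataclass, asdict

@dataclass
class ImplantationRecord:
    segment_number: int       # segment of the video that carries the media
    network_address: str      # where the embedded segment was uploaded
    implanter_id: str         # identifier of the implanter
    sharing_text: str         # text shown alongside the shared video

record = ImplantationRecord(2, "cdn://seg2-embedded", "user_42", "look at this!")
payload = asdict(record)      # e.g. the body sent to the first server for storage
```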
11. The method according to any one of claims 1 to 5 and 7 to 8, wherein before playing the target video, the method further comprises:
selecting a target video frame in the target video;
determining a geometric region of a target object in the target video frame based on key points of the target video frame; and
when it is determined that key points of other video frames in the target video respectively fall into the geometric surface where the geometric region is located, determining the geometric region meeting a preset size condition as a media information implantation region.
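One hedged reading of this region check, sketched with numpy: fit the plane through the target frame's key points, require the key points of other frames to lie on (or near) that plane, and accept the region only if it meets a minimum-size condition. The planarity tolerance and the triangle-area size test are illustrative choices, not taken from the claims:

```python
import numpy as np

def plane_from_points(p0, p1, p2):
    """Unit normal of the plane through three non-collinear key points."""
    normal = np.cross(p1 - p0, p2 - p0)
    return p0, normal / np.linalg.norm(normal)

def is_implantable_region(target_pts, other_frame_pts, min_area=1.0, tol=1e-6):
    p0, n = plane_from_points(target_pts[0], target_pts[1], target_pts[2])
    # key points from every other frame must fall into the region's plane
    for pts in other_frame_pts:
        if np.any(np.abs((pts - p0) @ n) > tol):
            return False
    # preset size condition: area of the triangle spanned by the first three points
    area = 0.5 * np.linalg.norm(np.cross(target_pts[1] - target_pts[0],
                                         target_pts[2] - target_pts[0]))
    return bool(area >= min_area)

square = np.array([[0., 0., 0.], [2., 0., 0.], [0., 2., 0.], [2., 2., 0.]])
in_plane = [np.array([[1., 1., 0.]])]    # key points from another frame
region_ok = is_implantable_region(square, in_plane)
```

An off-plane key point in any other frame, or a region smaller than `min_area`, would reject the candidate region.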
12. A video playback method, the method comprising:
determining, according to an implantation record, the video clip implanted with the media information that corresponds to the target video;
sequentially downloading the video clips of the target video and the video clips implanted with the media information according to a playing sequence;
and playing the video clips of the target video and the video clips implanted with the media information according to the playing sequence so as to display the media information in the playing process.
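A sketch of the playback-assembly step, assuming record and segment dictionaries shaped as below (the shapes are assumptions): the implantation record names which segment has a media-embedded variant, and that variant replaces the original at its position in the playing sequence:

```python
def build_play_list(original_segments, implantation_records):
    # index the records by segment number so embedded variants replace originals
    embedded = {r["segment_number"]: r["network_address"]
                for r in implantation_records}
    # download the embedded variant where a record exists, the original otherwise
    return [embedded.get(seg["number"], seg["url"]) for seg in original_segments]

segments = [{"number": 1, "url": "cdn://seg1"},
            {"number": 2, "url": "cdn://seg2"},
            {"number": 3, "url": "cdn://seg3"}]
records = [{"segment_number": 2, "network_address": "cdn://seg2-embedded"}]
play_list = build_play_list(segments, records)
```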
13. The method of claim 12, further comprising:
when the video clip implanted with the media information is played, if a triggering operation aiming at the media information is detected, extracting the identifier of the implanter and the sharing text from the implantation record; and
displaying the identifier of the implanter and the sharing text corresponding to the display position of the media information.
14. An information implantation apparatus, the apparatus comprising:
the playing module is used for playing the target video;
the prompting module is used for outputting prompting information when playing a video clip of the implantable media information;
the acquisition module is used for acquiring the media information to be implanted when acquiring an acquisition instruction responding to the prompt information;
and the implantation module is used for implanting the media information into the video clip of the implantable media information.
15. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any one of claims 1 to 13 when executing the computer program.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202010615896.XA CN111726701B (en) | 2020-06-30 | 2020-06-30 | Information implantation method, video playing method, device and computer equipment |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN111726701A true CN111726701A (en) | 2020-09-29 |
| CN111726701B CN111726701B (en) | 2022-03-04 |
Family
ID=72570916
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202010615896.XA Active CN111726701B (en) | 2020-06-30 | 2020-06-30 | Information implantation method, video playing method, device and computer equipment |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN111726701B (en) |
Cited By (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN112738611A (en) * | 2020-12-28 | 2021-04-30 | 安徽海豚新媒体产业发展有限公司 | Short video collection and spread application system |
Citations (16)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN103414941A (en) * | 2013-07-15 | 2013-11-27 | 深圳Tcl新技术有限公司 | Program editing method and device based on intelligent television |
| CN103412746A (en) * | 2013-07-23 | 2013-11-27 | 华为技术有限公司 | Media content sharing method, terminal device and content sharing system |
| US8620146B1 (en) * | 2008-03-28 | 2013-12-31 | Theresa Coleman | Picture-in-picture video system for virtual exercise, instruction and entertainment |
| CN103686396A (en) * | 2013-11-19 | 2014-03-26 | 乐视致新电子科技(天津)有限公司 | Video sharing method and device |
| CN104811814A (en) * | 2015-04-28 | 2015-07-29 | 腾讯科技(北京)有限公司 | Video playing-based information processing method and system, client and server |
| CN105791692A (en) * | 2016-03-14 | 2016-07-20 | 腾讯科技(深圳)有限公司 | Information processing method and terminal |
| CN105812943A (en) * | 2016-03-31 | 2016-07-27 | 北京奇艺世纪科技有限公司 | Video editing method and system |
| CN105898414A (en) * | 2015-11-13 | 2016-08-24 | 乐视云计算有限公司 | Video reviewing method and system |
| CN106507200A (en) * | 2015-09-07 | 2017-03-15 | 腾讯科技(深圳)有限公司 | Video-frequency playing content insertion method and system |
| CN106792077A (en) * | 2016-11-04 | 2017-05-31 | 乐视控股(北京)有限公司 | A kind of video broadcasting method, device and electronic equipment |
| CN108038185A (en) * | 2017-12-08 | 2018-05-15 | 广州市百果园信息技术有限公司 | Video dynamic edit methods, device and intelligent mobile terminal |
| CN109379623A (en) * | 2018-11-08 | 2019-02-22 | 北京微播视界科技有限公司 | Video content generation method, device, computer equipment and storage medium |
| CN109495791A (en) * | 2018-11-30 | 2019-03-19 | 北京字节跳动网络技术有限公司 | A kind of adding method, device, electronic equipment and the readable medium of video paster |
| CN110062269A (en) * | 2018-01-18 | 2019-07-26 | 腾讯科技(深圳)有限公司 | Extra objects display methods, device and computer equipment |
| CN110582018A (en) * | 2019-09-16 | 2019-12-17 | 腾讯科技(深圳)有限公司 | Video file processing method, related device and equipment |
| CN111314626A (en) * | 2020-02-24 | 2020-06-19 | 北京字节跳动网络技术有限公司 | Method and apparatus for processing video |
Also Published As
| Publication number | Publication date |
|---|---|
| CN111726701B (en) | 2022-03-04 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US11176967B2 (en) | Automatic generation of video playback effects | |
| US11381739B2 (en) | Panoramic virtual reality framework providing a dynamic user experience | |
| US10277861B2 (en) | Storage and editing of video of activities using sensor and tag data of participants and spectators | |
| US10020025B2 (en) | Methods and systems for customizing immersive media content | |
| CN111491174A (en) | Virtual gift acquisition and display method, device, equipment and storage medium | |
| JP2019528654A (en) | Method and system for customizing immersive media content | |
| WO2018140434A1 (en) | Systems and methods for creating video compositions | |
| CN114143568B (en) | Method and device for determining augmented reality live image | |
| CN111726701B (en) | Information implantation method, video playing method, device and computer equipment | |
| CN113645472A (en) | Interaction method and device based on playing object, electronic equipment and storage medium | |
| CN116366874A (en) | Resource distribution method, device, electronic equipment and storage medium | |
| CN113965665B (en) | A method and device for determining a virtual live broadcast image | |
| CN111359220A (en) | Game advertisement generation method and device and computer equipment | |
| US20250203137A1 (en) | Methods and systems for utilizing live embedded tracking data within a live sports video stream | |
| HK40028103A (en) | Information implantation method, video playback method, apparatus, and computer device | |
| US11956518B2 (en) | System and method for creating interactive elements for objects contemporaneously displayed in live video | |
| CN113992878B (en) | Remote desktop operation auditing method, device and equipment | |
| HK40028103B (en) | Information implantation method, video playback method, apparatus, and computer device | |
| CN116887003A (en) | Live broadcast room interaction method, device, electronic equipment and storage medium | |
| US20240062496A1 (en) | Media processing method, device and system | |
| CN115734006B (en) | A method, device and equipment for processing mask files | |
| CN114466220B (en) | Video downloading method and electronic device | |
| CN114245214B (en) | Object playing method, server, terminal and storage medium | |
| CN117596436B (en) | A bullet screen display method and related device | |
| CN119172555A (en) | Target application live broadcast method, device, computer equipment and storage medium |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | | |
| SE01 | Entry into force of request for substantive examination | | |
| REG | Reference to a national code | | Ref country code: HK; Ref legal event code: DE; Ref document number: 40028103; Country of ref document: HK |
| GR01 | Patent grant | | |