
CN116033221B - Video processing method, device, equipment and medium - Google Patents


Info

Publication number
CN116033221B
Authority
CN
China
Prior art keywords: video, time, current, playing, original
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211659371.1A
Other languages
Chinese (zh)
Other versions
CN116033221A (en)
Inventor
王贤亮
张锐杰
程林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xintang Sichuang Education Technology Co Ltd
Original Assignee
Beijing Xintang Sichuang Education Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Xintang Sichuang Education Technology Co Ltd filed Critical Beijing Xintang Sichuang Education Technology Co Ltd
Priority to CN202211659371.1A priority Critical patent/CN116033221B/en
Publication of CN116033221A publication Critical patent/CN116033221A/en
Application granted granted Critical
Publication of CN116033221B publication Critical patent/CN116033221B/en


Landscapes

  • Television Signal Processing For Recording (AREA)

Abstract

The disclosure relates to a video processing method, device, equipment and medium. The method includes: acquiring an original video and a first video to be replaced; when a video preview instruction is acquired, taking the first video as the current video and executing a video processing operation, namely playing the original video and the current video and, during playback, comparing whether the duration of the current video is equal to a clipping time preset in the original video; if not, clipping the current video according to the comparison result to obtain a new current video; executing the video processing operation at least once, stopping when the duration of the new current video equals the clipping time, and determining the new current video as a second video; and performing video synthesis on the second video and the original video to obtain a target video. The method and device can adjust the fit of the original video and the current video in the time dimension during the video preview stage, reducing the number of video synthesis passes and improving video generation efficiency.

Description

Video processing method, device, equipment and medium
Technical Field
The disclosure relates to the technical field of video processing, and in particular to a video processing method, device, equipment and medium.
Background
In online classes set in three-dimensional virtual scenes, video is a key part of the content. To adapt flexibly to classroom content, it is often necessary to make a secondary clip on the basis of the recorded original video. The current approach is generally to synthesize the original video with the inserted new video and then preview the synthesized video to judge whether it plays normally. However, this approach does not allow flexible editing during the secondary clipping process, video synthesis is very time- and resource-consuming, and if playback is abnormal the video must be edited and synthesized a second time, wasting considerable time and resources. There is therefore the problem of how to improve video editing flexibility and video generation efficiency.
Disclosure of Invention
To solve or at least partially solve the above technical problems, the present disclosure provides a video processing method, device, equipment and medium.
According to an aspect of the present disclosure, there is provided a video processing method including:
Acquiring an original video and a first video to be replaced;
When a video preview instruction is acquired, taking the first video as a current video, and executing the following video processing operation:
The original video and the current video are played, and in the playing process, whether the duration of the current video is equal to the preset clipping time in the original video or not is compared;
If not equal, clipping the current video according to the comparison result to obtain a new current video;
executing the video processing operation at least once until the duration of the new current video is equal to the clipping time, and determining the new current video as a second video;
and carrying out video synthesis according to the second video and the original video to obtain a target video.
According to another aspect of the present disclosure, there is provided a video processing apparatus including:
The video acquisition module is used for acquiring an original video and a first video to be replaced;
The video preview module is used for taking the first video as the current video when a video preview instruction is acquired, and executing the following video processing operation:
The original video and the current video are played, and in the playing process, whether the duration of the current video is equal to the preset clipping time in the original video or not is compared;
If not equal, clipping the current video according to the comparison result to obtain a new current video;
Executing at least one video processing operation until the duration of the new current video is equal to the clipping time, and determining the new current video as a second video;
and the video synthesis module is used for carrying out video synthesis according to the second video and the original video to obtain a target video.
According to another aspect of the present disclosure, there is provided an electronic device including a processor;
a memory for storing instructions executable by the processor;
wherein the processor is configured to read the executable instructions from the memory and execute the instructions to implement the above video processing method.
According to another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions, wherein the computer instructions, when run on a terminal device, cause the terminal device to implement the above video processing method.
Compared with the prior art, the technical scheme provided by the embodiment of the disclosure has the following advantages:
The video processing method, device, equipment and medium of the embodiments include: acquiring an original video and a first video to be replaced; when a video preview instruction is acquired, taking the first video as the current video and executing a video processing operation, namely playing the original video and the current video and, during playback, comparing whether the duration of the current video is equal to a clipping time preset in the original video; if not, clipping the current video according to the comparison result to obtain a new current video; executing the video processing operation at least once, stopping when the duration of the new current video equals the clipping time, and determining the new current video as a second video; and performing video synthesis on the second video and the original video to obtain a target video. The method and device can adjust the fit of the original video and the current video in the time dimension during the video preview stage, reducing the number of video synthesis passes and improving video generation efficiency.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure.
In order to more clearly illustrate the embodiments of the present disclosure or the solutions in the prior art, the drawings that are required for the description of the embodiments or the prior art will be briefly described below, and it will be obvious to those skilled in the art that other drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a flowchart of a video processing method according to an embodiment of the present disclosure;
Fig. 2 is a flowchart of a video processing operation provided by an embodiment of the present disclosure;
Fig. 3 is a flowchart of another video processing operation provided by an embodiment of the present disclosure;
Fig. 4 is a schematic diagram of a video processing procedure according to an embodiment of the present disclosure;
Fig. 5 is a flowchart of a synchronization alignment method provided by an embodiment of the present disclosure;
Fig. 6 is a signaling interaction schematic diagram of synchronization alignment provided by an embodiment of the present disclosure;
Fig. 7 is a schematic structural diagram of a video processing apparatus according to an embodiment of the present disclosure;
Fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
In order that the above objects, features and advantages of the present disclosure may be more clearly understood, embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the accompanying drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order and/or performed in parallel. Furthermore, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "including" and variations thereof as used herein are intended to be open-ended, i.e., including, but not limited to. The term "based on" is based at least in part on. The term "one embodiment" means "at least one embodiment," another embodiment "means" at least one additional embodiment, "and" some embodiments "means" at least some embodiments. Related definitions of other terms will be given in the description below. It should be noted that the terms "first," "second," and the like in this disclosure are merely used to distinguish between different devices, modules, or units and are not used to define an order or interdependence of functions performed by the devices, modules, or units.
It should be noted that references to "a", "an" and "a plurality" in this disclosure are intended to be illustrative rather than limiting; those of ordinary skill in the art will appreciate that they should be understood as "one or more" unless the context clearly indicates otherwise.
The names of messages or information interacted between the various devices in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
To address the problem of how to improve video editing flexibility and video generation efficiency, the embodiments of the present disclosure provide a video processing method, device, equipment and medium, which are described below for ease of understanding.
Fig. 1 is a flowchart of a video processing method according to an embodiment of the present disclosure, where the method may be performed by a video processing apparatus, and the apparatus may be implemented in software and/or hardware. Referring to fig. 1, the video processing method may include the following steps.
Step S102, an original video and a first video to be replaced are acquired.
In this embodiment, the original video may be, for example, a video recorded in a live classroom. To enable the original video to adapt flexibly to personalized classroom content, a new first video is generally acquired, and the original video is clipped a second time based on this first video. The first video may be a video recorded in real time, a video uploaded from local storage, a video downloaded over the network, and so on. Referring to fig. 4, for a recorded or local first video, to avoid losing it through misoperation, or to allow it to be reused, the first video may be uploaded to the cloud for storage after it is obtained and then downloaded from the cloud during the secondary editing.
Step S104, when the video preview instruction is acquired, the first video is taken as the current video, and the following video processing operation is executed.
In this embodiment, when the user triggers a button, control, icon, or other element representing video preview, the terminal device acquires a video preview instruction and, according to this instruction, starts to perform the video processing operation at least once, as shown in steps S106 to S108 below.
Step S106, playing the original video and the current video, and comparing whether the duration of the current video is equal to the preset clipping time in the original video or not in the playing process, if not, executing the following step S108, and if so, executing the following step S110.
This embodiment determines, in the original video, the start time and end time at which the secondary editing is required; the period from the start time to the end time is the clipping time. The original video and the first video are then played according to a preset preview mode, in which the played video is switched from the original video to the current video during the clipping time.
For example, the starting playing time of the original video may be denoted as t0, and the playing time ti of the original video is recorded in real time according to the playing time axis of the original video. When the playing time reaches the start time (t1) of the clipping time, the original video is paused and the current video starts to play; when the current video finishes playing, the playing progress of the original video jumps to the end time (t2) of the clipping time, and the original video is played from that end time.
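A minimal sketch of this replacement playback follows. It is illustrative only: the player objects and their play/pause/seek/position/on_tick/on_finished methods are assumptions, since the disclosure does not name any concrete player API.

```python
def preview_playback(original, current, t1, t2):
    """Replace the original video with the current video during [t1, t2).

    `original` and `current` are assumed player objects; t1 and t2 are
    the start and end times of the clipping time.
    """
    original.play()  # playback starts at t0, the beginning of the original

    def on_original_tick():
        # ti: playing time of the original video, tracked in real time
        if original.position() >= t1 and not current.is_playing():
            original.pause()   # pause at the start time of the clipping time
            current.play()     # replacement playback of the current video

    def on_current_finished():
        original.seek(t2)      # jump the playing progress to the end time t2
        original.play()        # resume the original video from the end time

    original.on_tick(on_original_tick)
    current.on_finished(on_current_finished)
```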
To reduce constraints on the user, this embodiment places no time limit on the first video: the user can freely record a first video of any duration or upload one of any duration from local storage. However, in the new video formed after the original video is clipped a second time, the duration of the first video must match the clipping time of the original video, so that the videos before and after the switch play normally at the start and end of the clipping time, avoiding abnormal playback effects such as video overlap or video coverage caused by the first video being too short or too long.
Based on the above consideration, in order to match the duration of the current video with the clip time, the present embodiment may compare whether the duration of the current video is equal to the clip time preset in the original video in the process of playing the original video and the current video, if not, perform the following step S108 to clip the current video, and if equal, perform the following step S110.
And S108, clipping the current video according to the comparison result to obtain a new current video.
In this embodiment, when the duration of the current video is longer than the clipping time, a video segment with the duration equal to the clipping time is selected from the current video, so as to obtain a new current video.
When the duration of the current video is less than the clipping time, the current video is deleted, a video is re-acquired, and the new current video is determined based on the re-acquired video. Alternatively, a new video capable of filling the missing duration can be acquired while keeping the current video, and the new current video determined from the current video and the new video together.
For ease of description, the current video may be denoted Vi and the new current video Vi+1. For the new current video Vi+1, the original video and Vi+1 are played, and during playback it is again compared whether the duration of Vi+1 is equal to the clipping time preset in the original video. In this way the video processing operation is performed multiple times, stopping when the duration of the new current video equals the clipping time, that is, the following step S110 is performed.
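Written out as a loop, the repeated video processing operation looks as follows; trim_to and reacquire are hypothetical placeholders for the two branches described above, not an API from the disclosure.

```python
def fit_to_clip_time(first_video, clip_time, trim_to, reacquire):
    """Repeat the video processing operation until duration == clip_time.

    trim_to(video, t): selects a segment of duration t (too-long case);
    reacquire(video, clip_time): deletes/pads the too-short video.
    The loop ends when an iteration yields a video of exactly clip_time.
    """
    current = first_video                       # V0 = the first video
    while current.duration != clip_time:        # compared during preview playback
        if current.duration > clip_time:
            current = trim_to(current, clip_time)     # new current video Vi+1
        else:
            current = reacquire(current, clip_time)   # new current video Vi+1
    return current                              # determined as the second video
```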
In step S110, the video processing operation is stopped when the duration of the new current video is equal to the clip time, and the new current video is determined as the second video.
Specifically, the video processing operation is performed repeatedly until the duration of the new current video equals the clipping time; the new current video obtained when the operation stops is then determined as the second video.
Note that in the process of playing the original video and the current video, the original video is not cut and the current video is not synthesized with the original video; in other words, no clipping operation is performed on either at this point, which is simply equivalent to playing several videos alternately. The video preview process therefore consumes no additional video clipping resources. Because no video synthesis is performed during the preview stage, time and resource consumption is reduced, and the user can adjust the clip content, such as the original video, the current video and the clipping time, at any time and as many times as needed, until the previewed secondary editing effect of the current video and the original video meets the user's requirements.
And step S112, video synthesis is carried out according to the second video and the original video, and a target video is obtained.
In this embodiment, after the video processing operation has been executed at least once in the above manner and the playback effects are satisfied (replacement playback at the clipping time, matching video durations, smooth linking of video content, and so on), the user may initiate a video synthesis instruction. The original video and the second video are then synthesized into the target video according to that instruction; specifically, for example, the video segment corresponding to the clipping time is deleted from the original video, and the second video is inserted at the clipping time in the original video to obtain the target video.
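As an illustrative sketch of this single synthesis pass, using the moviepy 1.x library purely as an example (the disclosure does not name any synthesis tool or library):

```python
from moviepy.editor import VideoFileClip, concatenate_videoclips

def compose_target(original_path, second_path, t1, t2, out_path):
    """Delete the original segment in [t1, t2] and insert the second
    video at the clipping time (the second video's duration == t2 - t1)."""
    original = VideoFileClip(original_path)
    second = VideoFileClip(second_path)
    head = original.subclip(0, t1)   # original video before the clipping time
    tail = original.subclip(t2)      # original video after the clipping time
    target = concatenate_videoclips([head, second, tail])
    target.write_videofile(out_path)
```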
By performing video synthesis after video preview, this embodiment not only guarantees the synthesis effect but also reduces the number of synthesis passes: only one video synthesis operation is needed once the desired effect is reached, significantly reducing time and resource consumption.
The video processing method provided by the embodiment of the disclosure first acquires an original video and a first video to be replaced. Then, when a video preview instruction is acquired, the first video is taken as the current video and the video processing operation is executed: the original video and the current video are played, and during playback the duration of the current video is compared with the clipping time preset in the original video; if they are not equal, the current video is clipped according to the comparison result to obtain a new current video. The video processing operation is executed at least once, stopping when the duration of the new current video equals the clipping time, and the new current video is determined as the second video. Video synthesis is then performed on the second video and the original video to obtain the target video.
With this technical scheme, at least one video processing operation is executed before video synthesis, and the current video can be clipped repeatedly, making it easy for the user to adjust the secondary-editing content until the duration of the current video equals the clipping time. The second video so determined satisfies the synthesis condition in the time dimension, and video synthesis is then performed on it and the original video. The fit between the original video and the current video in the time dimension is thus adjusted during the preview stage, so a target video with the desired playback effect is obtained with a single video synthesis pass, effectively reducing the number of synthesis passes and the unnecessary time and resources they consume, and improving video generation efficiency.
Referring to fig. 2, an embodiment of performing video processing operations is provided herein, including the following.
Step S202, the original video and the current video Vi are played. This may include: starting to play the original video and acquiring its playing time; when the playing time reaches the start time of the clipping time, pausing the original video and starting to play the current video; and, when the current video finishes playing, jumping the playing progress of the original video to the end time of the clipping time and playing the original video from that end time.
Referring to fig. 3, the original video may be played from t0; when the playing time of the original video reaches ti = t1, the original video is paused and the current video starts to play, and when the current video finishes, the original video is played from t2. During playback, the original video does not need to be cut and the current video does not need to be synthesized with the original video; the videos are simply played alternately, so the video preview process consumes no additional video clipping resources.
The start time and end time of the clipping time, being the key time points where the original video and the current video join, are where abnormal playback is most likely to occur. For this reason, while playing the original video and the current video, this embodiment can selectively and flexibly play the video segments corresponding to these key time points; several flexible playback embodiments are given here.
In one embodiment, playback of the original video may start as follows: on the time axis of the original video, a first time before the start time of the clipping time is determined according to a preset first time interval, and the original video is played from that first time.
In practice, the original video before the start time t1 generally plays without problems, so only the join between the original video and the current video, i.e., whether playback is normal just before and after the start time t1, needs to be checked. Accordingly, a first time t1' before the start time t1 of the clipping time is determined according to the first time interval Δt1, that is, t1' = t1 - Δt1, and the playing progress of the original video jumps directly to this first time before playback begins. When the playing time of the original video reaches the start time t1, the original video is paused and the current video starts to play; when the current video finishes, the playing progress of the original video jumps to the end time of the clipping time and the original video is played from there. By playing the original video from the first time before the start time, this embodiment neither misses the key video segments of the original video nor wastes preview time.
Similarly, the video segments corresponding to the key time points of the current video can be played as follows: on the time axis of the current video, a second time before the end time of the current video is determined according to a preset second time interval; when the playing duration of the current video reaches a preset duration threshold, the playing progress of the current video jumps to the second time, and the current video continues to play from the second time until it finishes.
Specifically, the middle of the current video generally plays without problems, and only the join between the current video and the original video needs to be checked for normal playback. In this embodiment, after the original video is paused and the current video has played for the preset duration threshold (for example, 5 seconds), the middle segment, which will not have playing problems, may be skipped: the playing progress of the current video jumps to the second time before the end time, and the current video continues to play from the second time until it finishes, at which point the original video continues from the end time t2 of the clipping time. By jumping directly to the second time after the current video has played for a while, this embodiment neither misses the key video segments of the current video nor wastes preview time.
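The two skip rules can be sketched together as below; dt1 (Δt1), dt2 (Δt2) and the 5-second threshold come from the text, while the player API remains the same hypothetical placeholder as in the earlier sketch.

```python
def preview_seams(original, current, t1, t2, dt1, dt2, threshold=5.0):
    """Preview only the joins around the clipping time."""
    original.seek(max(0.0, t1 - dt1))         # first time t1' = t1 - dt1
    original.play()

    def on_original_tick():
        if original.position() >= t1 and not current.is_playing():
            original.pause()
            current.play()

    def on_current_tick():
        second_time = current.duration - dt2  # second time before the end
        if threshold <= current.position() < second_time:
            current.seek(second_time)         # skip the middle segment

    def on_current_finished():
        original.seek(t2)                     # resume from the end time t2
        original.play()

    original.on_tick(on_original_tick)
    current.on_tick(on_current_tick)
    current.on_finished(on_current_finished)
```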
In another embodiment, in the process of playing the original video and the current video, when the drag instruction is acquired, the playing progress of the original video or the current video is adjusted to the time corresponding to the end position of the drag instruction.
In this embodiment, the video playing progress can be dragged according to the drag instruction, quickly reaching the key time nodes of the video join, such as the start time and the end time, that need attention. The playing time of the original video serves as the reference, so when the playing progress is dragged, the playing time of the original video is updated in real time with the drag action, without affecting the replacement playback between the original video and the current video. For example, since the original video before the start time t1 generally plays without problems, its playing progress can be dragged close to the start time t1 before playing, to check whether the original video and the current video play normally just before and after t1. Similarly, the current video can be dragged close to the end time t2 before playing, to check whether the current video and the original video play normally just before and after t2. Dragging the video playing progress shortens the video preview time and further improves the efficiency of secondary video editing.
Step S204, in the playing process, comparing whether the duration of the current video is equal to the preset clipping time in the original video. If equal, the video processing operation is stopped and the second video is determined, referring to step S210 as follows. If not, the following step S206 is performed in case the duration of the current video is greater than the clip time, or the following step S208 is performed in case the duration of the current video is less than the clip time.
Step S206, in the case where the duration of the current video is longer than the clipping time, selecting from the current video a video segment whose duration is equal to the clipping time, to obtain a new current video.
In one example, a target time period equal in length to the clipping time may be determined in the current video, and the video segment corresponding to that target time period cut out of the current video, to obtain a new current video whose duration equals the clipping time; for ease of distinction, the new current video may be denoted Vi+1.
In step S208, in the case where the duration of the current video is less than the clip time, the current video is deleted and the third video is acquired.
In this embodiment, the third video may be a video recorded in real time, a video uploaded from local storage, a video downloaded from the internet, and so on, and it may likewise be uploaded to the cloud for storage. Since the third video, like the first video, is a new video that has not yet been clipped, it is processed in the same way: the third video is taken as the current video, and the video processing operation is performed.
Specifically, for the third video, the method returns to step S202 to play the original video and the third video and, during playback, compares whether the duration of the third video equals the clipping time; if not, the third video is clipped according to the comparison result as in step S206 or S208 to obtain a new current video, and if equal, the following step S210 is performed.
In step S210, the video processing operation is stopped, and the new current video is determined as the second video.
Steps S202 to S208 above are performed repeatedly, stopping when the duration of the current video equals the clipping time in the original video; the current video at that point satisfies the synthesis condition in the time dimension and is therefore determined as the second video, available for the subsequent video synthesis.
Referring to fig. 3, another embodiment of performing video processing operations is provided herein, including the following.
In step S302, the original video and the current video Vi are played. For the specific playing mode, refer to the foregoing embodiments, which will not be repeated here.
Step S304, in the playing process, comparing whether the duration of the current video is equal to the preset clipping time in the original video. If equal, the following step S314 is performed to stop the video processing operation and determine the second video. If not, the following step S306 is performed in case the duration of the current video is greater than the clip time, or the following steps S308 to S312 are performed in case the duration of the current video is less than the clip time.
Step S306, in the case where the duration of the current video is longer than the clipping time, selecting from the current video a video segment whose duration is equal to the clipping time, to obtain a new current video.
In step S308, in the case where the duration of the current video is less than the clipping time, the missing duration by which the current video falls short of the clipping time is determined.
Step S310, a fourth video whose duration is not less than the missing duration is acquired. A fourth video long enough to fill the missing duration can be obtained by recording or uploading.
Step S312, a combination of the current video and the fourth video is regarded as a new current video.
In this embodiment, the missing duration is determined first and a fourth video no shorter than it is then acquired, so that the duration of the combined new current video is not less than the clipping time; a video segment equal to the clipping time can then conveniently be selected from the new current video, avoiding an excessive number of operations.
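A sketch of this branch follows; acquire_video and concat are hypothetical helpers standing in for the recording/uploading and combining steps the text leaves abstract.

```python
def fill_missing_duration(current, clip_time, acquire_video, concat):
    """Pad a too-short current video (steps S308 to S312).

    acquire_video(min_duration): records or uploads a fourth video no
    shorter than the missing duration; concat joins two videos.
    """
    missing = clip_time - current.duration     # the missing duration
    fourth = acquire_video(min_duration=missing)
    combined = concat(current, fourth)         # the new current video
    # combined.duration >= clip_time now holds, so a segment equal to
    # the clipping time can be selected from it on the next iteration
    return combined
```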
For the new current video Vi+1, the method returns to step S302 above and the video processing operation is performed again, and so on; when the duration of the current video equals the clipping time, the following step S314 is performed.
In step S314, the video processing operation is stopped, and the new current video is determined as the second video.
Steps S302 to S314 above are performed repeatedly, stopping when the duration of the current video equals the clipping time in the original video; the current video at that point satisfies the synthesis condition in the time dimension and is therefore determined as the second video, available for the subsequent video synthesis.
As shown in fig. 4, a second video whose playing time matches the original video is obtained according to the above embodiments. Playing the original video and the second video may then include: starting to play the original video; when the playing time reaches the start time (t1) of the clipping time, pausing the original video and starting to play the second video; and, when the second video finishes playing, jumping the playing progress of the original video to the end time (t2) of the clipping time and playing the original video from that end time. Since the duration of the second video equals the clipping time determined by the start time t1 and the end time t2, the original video should resume from exactly the end time t2 when the second video finishes. In this case, the playing progress of the original video jumps from the time node t1, where it was paused, to the time node t2, where playback must resume; that is, the playing progress of the original video jumps to the end time of the clipping time, and the original video continues to play from that end time.
In one embodiment, the second video and the original video may be directly synthesized into the target video.
In another embodiment, to further improve the video composition quality, there may be provided a video composition method including:
And detecting the association degree of the video contents before and after the replacement playing.
In a specific implementation, a first video picture and first audio from before the replacement playback, and a second video picture and second audio from after it, are extracted; similarity calculation is performed on the first video picture and the second video picture to obtain a first similarity, and on the first audio and the second audio to obtain a second similarity; and the association degree of the video contents before and after the replacement playback is obtained from the first similarity and the second similarity.
Under the condition that the association degree is higher than a preset association degree threshold, video synthesis is performed on the second video and the original video to obtain the target video. An association degree above the preset threshold indicates that the video content changes little across the replacement playback, that the contents before and after are consistent, and that no abnormal conditions such as abrupt picture changes arise.
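One possible realization of this check is sketched below. The disclosure fixes neither the similarity metrics nor the weighting, so the histogram intersection, the cosine similarity, the equal weights and the 0.8 threshold are all illustrative assumptions.

```python
import numpy as np

def association_degree(pic1, pic2, audio1, audio2, w=0.5):
    """Combine picture and audio similarity into one association degree.

    pic1/pic2: first and second video pictures as HxWx3 uint8 arrays;
    audio1/audio2: feature vectors for the first and second audio.
    """
    # first similarity: histogram intersection of the two pictures
    h1, _ = np.histogram(pic1, bins=64, range=(0, 255))
    h2, _ = np.histogram(pic2, bins=64, range=(0, 255))
    s1 = float(np.minimum(h1 / h1.sum(), h2 / h2.sum()).sum())
    # second similarity: cosine similarity of the audio feature vectors
    s2 = float(np.dot(audio1, audio2) /
               (np.linalg.norm(audio1) * np.linalg.norm(audio2) + 1e-9))
    return w * s1 + (1 - w) * s2

def can_compose(pic1, pic2, audio1, audio2, threshold=0.8):
    # synthesis proceeds only when the association degree exceeds the
    # preset threshold (0.8 here is an illustrative value)
    return association_degree(pic1, pic2, audio1, audio2) > threshold
```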
On this basis, when the video synthesis instruction is acquired, the video segment corresponding to the clipping time is deleted from the original video, and the second video is inserted at the clipping time in the original video to obtain the target video. Because the target video is synthesized only once the video synthesis instruction is acquired, the number of synthesis passes is reduced and video generation efficiency is improved, while an operable space is kept for the user, who can conveniently operate on and process the video at any time before confirming the synthesis.
In online classes in three-dimensional virtual scenes, interactive signaling is another key element besides video, covering effects such as rewards, calling on students and voice interaction, which increase the appeal of the online class. Interactive signaling triggers interactive effects in the three-dimensional virtual scene that match the classroom content as the video plays, so the trigger time of the interactive signaling must be synchronously aligned with the playing time of the video. Secondary editing of the video, however, may disrupt the alignment between the interactive signaling and the original video.
Based on this, this embodiment needs to synchronously align the target video with the interactive signaling after the secondary editing. Two alignment schemes mainly exist at present. In scheme one, the video playing progress is calibrated once per second, and whether interactive signaling should fire at the corresponding time point is computed from the playing time of the video; however, video playback in this scheme depends on the network and may stall and buffer, so the video and the interactive signaling cannot be aligned strictly in real time. In scheme two, video playback can be controlled synchronously through interactive signaling, but the playing progress of the video can then no longer be dragged flexibly.
In view of the foregoing, embodiments of the present disclosure may provide a method for aligning video and interactive signaling, with reference to the following.
In this embodiment, the original video is pre-associated with interactive signaling. Accordingly, the method provided by the embodiment comprises the step of synchronously aligning the target video with the interactive signaling.
The original video is provided with at least one preset interactive signaling, each carrying its own signaling trigger time, that is, the time at which the interactive effect corresponding to that interactive signaling is triggered: for example, a reward is issued at 10 seconds, the teacher walks at 20 seconds, students are called on at 25 seconds, voice interaction occurs at 50 seconds, and so on.
In one way of acquiring the interactive signaling, the signaling resource associated with the original video is acquired and parsed to obtain at least one interactive signaling carrying a signaling trigger time. As shown in fig. 6, the signaling resource, such as a compressed zip package, may be requested from the backend service according to a preset identifier of the original video; the resource contains at least one interactive signaling. The interactive signaling may be binary frame signaling stored in binary format, which occupies less storage space while recording more information. The signaling resource is downloaded and decompressed, and the decompressed result is stored in a local sandbox directory. This completes the preparation of the interactive signaling.
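As a purely illustrative sketch of this preparation step, assuming a made-up 5-byte record layout for the binary frames (the actual binary format is not disclosed):

```python
import struct
import zipfile

def load_interactive_signaling(zip_path, member="signaling.bin"):
    """Decompress the signaling resource and parse binary frame signaling.

    The record layout used here (<I: trigger time in ms, B: effect code)
    is a hypothetical example, as is the member file name.
    """
    with zipfile.ZipFile(zip_path) as z:
        data = z.read(member)
    record = struct.Struct("<IB")            # 4-byte time + 1-byte effect
    frames = []
    for offset in range(0, len(data) - record.size + 1, record.size):
        trigger_ms, effect = record.unpack_from(data, offset)
        frames.append({"trigger_time": trigger_ms / 1000.0,
                       "effect": effect})
    return frames
```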
Since the target video is obtained by secondary editing on the basis of the original video, the target video and the interactive signaling have an association relationship, and the two can be synchronously aligned.
Referring to fig. 5, the present embodiment may implement synchronous alignment of a target video and interactive signaling by:
step S502, playing the target video, and recording the entering duration of the target video by starting the local clock service.
In this embodiment, the local clock service is started at the same time the target video starts to play; the local clock service, such as a CADisplayLink timer, provides standard time information. The entering duration of the target video is recorded by starting this local clock service. The entering duration is the duration timed from the time information provided by the local clock service and represents how long the target video has been entered; it follows only the local clock even when the target video plays abnormally, for example stalling, pausing or fast-forwarding. The playing time of the video, by contrast, follows the playing time axis of the video and changes under such abnormal playback. For example, when the target video is paused during playback, the playing duration stops timing along with the paused playback, while the entering duration continues to be timed according to the time information provided by the local clock service.
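The distinction between entering duration and playing time can be captured in a few lines; the class below is a plain-Python stand-in for a platform timer such as CADisplayLink.

```python
import time

class LocalClockService:
    """Tracks the entering duration of the target video.

    The entering duration advances with wall-clock time and, unlike the
    playing time, is unaffected by stalls, pauses or fast-forwarding.
    """
    def __init__(self):
        self._start = None

    def start(self):
        # started at the same moment the target video begins to play
        self._start = time.monotonic()

    def entering_duration(self):
        if self._start is None:
            return 0.0
        return time.monotonic() - self._start
```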
Step S504, when the entering duration reaches the signaling trigger time carried by the interactive signaling, the current playing time of the target video is calibrated to the signaling trigger time.
Referring to fig. 6, when the entering duration reaches the signaling trigger time carried by the interactive signaling, the interactive signaling is sent through the local clock service to a pre-configured scene service engine, for example the Unity engine. The scene service engine parses the interactive signaling to obtain its corresponding signaling trigger time. Specifically, the scene service engine parses out the information contained in the interactive signaling; for example, the interactive signaling may include the signaling trigger time and the triggered interactive effect, and may also include information such as the trigger position and the triggered virtual object, which is not limited here. The scene service engine parses the signaling trigger time out of the interactive signaling and bridges it to the video playing channel, so that the video playing channel calibrates the current playing time of the target video to the signaling trigger time.
In practical applications, the video playing channel is separate from the trigger channel of the interactive signaling, so the target video and the interactive effects of the interactive signaling need to be synchronously aligned. When the entering duration reaches the signaling trigger time, the playing time of the target video is not necessarily equal to the entering duration, owing to stalls and the like; in that case, to keep the content played by the video synchronized with the interactive effect of the interactive signaling, the current playing time of the target video is calibrated to the signaling trigger time.
In this embodiment, the local clock service may send the interactive signaling at a higher frequency (15 frames per second), and the scene service engine may analyze and send the signaling trigger time with low delay, so that, by the interaction between the local clock service and the scene service engine, the accuracy of calibrating the playing time of the target video based on the signaling trigger time may be improved.
In a specific embodiment of calibrating the playing time, the current playing time of the target video is compared with the signaling trigger time, and if they differ, the playing progress of the target video jumps from the current playing time to the signaling trigger time. Once the playing time of the target video is calibrated to the signaling trigger time, the playing content of the target video matches the interactive effect of the interactive signaling, and the two are synchronously aligned.
In this embodiment, when the entering duration reaches the signaling trigger time, the interactive effect corresponding to the interactive signaling is triggered: the scene service engine parses the interactive signaling to obtain the corresponding interactive effect and triggers that effect in the three-dimensional virtual scene. That is, the scene service engine both sends the signaling trigger time to the video playing channel for time calibration and triggers the corresponding interactive effect in the three-dimensional virtual scene. Take the interactive signaling of the foregoing example: when the entering duration of the target video reaches the signaling trigger time of 10 seconds, the current playing time of the target video is calibrated to the 10th second and the video content of the 10th second is played, while the interactive effect of issuing a reward is triggered in the three-dimensional virtual scene; when the entering duration reaches the signaling trigger time of 20 seconds, the current playing time is calibrated to the 20th second and the video content of the 20th second is played, while the interactive effect of the teacher walking is triggered in the three-dimensional virtual scene.
In one embodiment, a method for calibrating a current playing time of a target video in real time is provided, as follows.
Null frame signaling containing the current clock time is generated by the local clock service at a preset frequency (for example, 15 frames per second) and sent to the scene service engine. In contrast to interactive signaling, which contains a signaling trigger time and an interactive effect, null frame signaling is binary frame signaling that contains only the current clock time. In practice, the local clock service sends binary frame signaling to the scene service engine at 15 frames per second; outside the signaling trigger times, the binary frames it sends are null frame signaling, and at the signaling trigger times they are interactive signaling.
The scene service engine parses the current clock time out of the null frame signaling and bridges it to the video playing channel, so that the video playing channel calibrates the current playing time of the target video to the current clock time. Since null frame signaling contains no interactive effect, it is used only to calibrate the current playing time of the target video. When calibrating, the video playing channel acquires the current playing time of the target video, judges whether it is the same as the current clock time, and, if not, jumps the playing progress of the target video from the current playing time to the current clock time.
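Putting the pieces together, a hypothetical dispatch loop for the 15-frames-per-second signaling stream might look as follows; the player and engine interfaces are assumptions, and the 10 ms drift tolerance is illustrative.

```python
import time

def run_clock_service(clock, player, engine, signaling, fps=15):
    """Emit one binary frame per tick at `fps` frames per second.

    At a signaling trigger time the frame is interactive signaling and
    the engine triggers its effect; otherwise a null frame carrying
    only the current clock time is sent. Either way the playing time
    of the target video is calibrated to the frame's time.
    """
    pending = sorted(signaling, key=lambda f: f["trigger_time"])
    while player.is_playing():
        now = clock.entering_duration()          # current clock time
        if pending and now >= pending[0]["trigger_time"]:
            frame = pending.pop(0)               # interactive signaling
            engine.trigger_effect(frame["effect"])
            target = frame["trigger_time"]
        else:
            target = now                         # null frame: time only
        if abs(player.position() - target) > 0.01:
            player.seek(target)                  # calibrate playing time
        time.sleep(1.0 / fps)
```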
In this embodiment, the transmission frequencies of the null frame signaling and the interactive signaling are higher, so that the current playing time of the target video can be calibrated more accurately in real time, and particularly under the condition that the current playing time is calibrated by using the interactive signaling, the playing time of the video and the interactive effect can be synchronously aligned.
By synchronously aligning the target video with the interactive signaling as above, the embodiments of the present disclosure improve the synchronization and consistency between the playing progress of the target video and the interactive effects of the interactive signaling.
Fig. 7 is a schematic structural diagram of a video processing apparatus according to an embodiment of the present disclosure, where the apparatus may be used to implement the video processing method described above, and the apparatus may be implemented in software and/or hardware. Referring to fig. 7, the video processing apparatus 700 may include:
A video acquisition module 702, configured to acquire an original video and a first video to be replaced;
The video preview module 704 is configured to take the first video as a current video when a video preview instruction is acquired, and perform the following video processing operations:
The original video and the current video are played, and in the playing process, whether the duration of the current video is equal to the preset clipping time in the original video or not is compared;
If not equal, clipping the current video according to the comparison result to obtain a new current video;
Executing at least one video processing operation until the duration of the new current video is equal to the clipping time, and determining the new current video as a second video;
and the video synthesis module 706 is configured to perform video synthesis according to the second video and the original video, so as to obtain a target video.
In some embodiments, the video preview module 704 includes a first clipping unit for:
And under the condition that the duration of the current video is longer than the clipping time, selecting a video segment with the duration equal to the clipping time from the current video to obtain a new current video.
In some embodiments, the video preview module 704 includes a second clipping unit for:
deleting the current video and acquiring a third video under the condition that the duration of the current video is smaller than the clipping time;
comparing whether the duration of the third video is equal to the clipping time;
and if not equal, clipping the third video according to the comparison result to obtain a new current video.
In some embodiments, the video preview module 704 includes a third clipping unit for:
Determining, in the case where the duration of the current video is less than the clipping time, the missing duration by which the current video falls short of the clipping time;
acquiring a fourth video whose duration is not less than the missing duration;
and taking the combination of the current video and the fourth video as a new current video.
In some embodiments, the video preview module 704 includes a video playback unit for:
Starting to play the original video and acquiring the playing time of the original video;
when the playing time reaches the starting time in the clipping time, pausing the playing of the original video and starting the playing of the current video;
And when the current video playing is finished, jumping the playing progress of the original video to the finishing time in the clipping time, and playing the original video from the finishing time.
In some embodiments, the video playback unit is further configured to:
On a time axis of the original video, determining a first time before a start time in the clipping time according to a preset first time interval;
And playing the original video from the first time.
In some embodiments, the video playback unit is further configured to:
on the time axis of the current video, determining a second time before the ending time of the current video according to a preset second time interval;
When the playing time length of the current video reaches a preset time length threshold value, the playing progress of the current video is jumped to the second time;
And continuing to play the current video from the second time until the current video is played.
In some embodiments, the video playback unit is further configured to:
And in the process of playing the original video and the current video, when a drag instruction is acquired, adjusting the playing progress of the original video or the current video to the time corresponding to the end position of the drag instruction.
In some embodiments, the video compositing module 706 is further to:
Detecting the association degree of video contents before and after replacement playing;
and under the condition that the association degree is higher than a preset association degree threshold value, video synthesis is carried out according to the second video and the original video, and a target video is obtained.
In some embodiments, the video compositing module 706 is further to:
When a video synthesis instruction is acquired, deleting a video segment corresponding to the clipping time in the original video, and inserting the second video into the clipping time in the original video to obtain a target video.
In some embodiments, the original video is pre-associated with interactive signaling, and the video processing device 700 further comprises an alignment module for:
and synchronously aligning the target video with the interactive signaling.
In some embodiments, the alignment module is further to:
playing the target video, and recording the entering time length of the target video by starting a local clock service;
and when the entering duration reaches the signaling trigger time carried by the interactive signaling, calibrating the current playing time of the target video to the signaling trigger time.
In some embodiments, the alignment module is further to:
transmitting the interactive signaling to a pre-configured scene service engine through the local clock service;
Analyzing the interactive signaling through the scene service engine to obtain the signaling trigger time corresponding to the interactive signaling;
Bridging the signaling trigger time to a video playing channel so that the video playing channel calibrates the current playing time of the target video to the signaling trigger time.
In some embodiments, the alignment module is further to:
generating a null frame signaling containing the current clock time at a preset frequency through the local clock service, and sending the null frame signaling to the scene service engine;
analyzing the current clock time in the empty frame signaling through the scene service engine, and bridging the current clock time to the video playing channel so that the video playing channel calibrates the current playing time of the target video to the current clock time.
In some embodiments, the alignment module is further to:
Comparing whether the current playing time of the target video is the same as the signaling triggering time;
And, if they differ, jumping the playing progress of the target video from the current playing time to the signaling trigger time.
In some embodiments, the alignment module is further to:
And triggering the interaction effect corresponding to the interaction signaling when the entering duration reaches the signaling triggering time.
In some embodiments, the alignment module is further to:
analyzing the interactive signaling through a scene service engine to obtain an interactive effect corresponding to the interactive signaling;
triggering the interaction effect in a three-dimensional virtual scene through the scene service engine.
In some embodiments, the alignment module is further to:
Acquiring signaling resources associated with the original video;
Analyzing the signaling resources to obtain at least one interactive signaling carrying signaling trigger time.
The device provided in this embodiment has the same implementation principle and technical effects as the foregoing method embodiment; for brevity, for matters not mentioned in the device embodiment, reference may be made to the corresponding content of the foregoing method embodiment.
The exemplary embodiments of the present disclosure also provide an electronic device comprising at least one processor and a memory communicatively coupled to the at least one processor. The memory stores a computer program executable by the at least one processor, and the computer program, when executed by the at least one processor, causes the electronic device to perform a method according to embodiments of the present disclosure.
The present disclosure also provides a computer program product comprising a computer program, wherein the computer program, when executed by a processor of a computer, causes the computer to perform a method according to embodiments of the present disclosure.
Referring to fig. 8, a block diagram of an electronic device 800, which may serve as a server or a client of the present disclosure and is an example of a hardware device applicable to aspects of the present disclosure, will now be described. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other suitable computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 8, the electronic device 800 includes a computing unit 801 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 802 or a computer program loaded from a storage unit 808 into a Random Access Memory (RAM) 803. In the RAM 803, various programs and data required for the operation of the device 800 can also be stored. The computing unit 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
Various components in the electronic device 800 are connected to the I/O interface 805, including an input unit 806, an output unit 807, a storage unit 808, and a communication unit 809. The input unit 806 may be any type of device capable of inputting information to the electronic device 800; it may receive input numeric or character information and generate key signal inputs related to user settings and/or function control of the electronic device. The output unit 807 may be any type of device capable of presenting information and may include, but is not limited to, a display, speakers, video/audio output terminals, vibrators, and/or printers. The storage unit 808 may include, but is not limited to, magnetic disks and optical disks. The communication unit 809 allows the electronic device 800 to exchange information/data with other devices over computer networks such as the Internet and/or various telecommunications networks, and may include, but is not limited to, modems, network cards, infrared communication devices, wireless communication transceivers and/or chipsets, such as Bluetooth™ devices, WiFi devices, WiMax devices, cellular communication devices, and/or the like.
The computing unit 801 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 801 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 801 performs the various methods and processes described above. For example, in some embodiments, the video processing method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 808. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 800 via the ROM 802 and/or the communication unit 809. In some embodiments, the computing unit 801 may be configured to perform the video processing method by any other suitable means (e.g., by means of firmware).
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
As used in this disclosure, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user and a keyboard and a pointing device (e.g., a mouse or a trackball) by which the user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user, for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback), and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a Local Area Network (LAN), a Wide Area Network (WAN), and the Internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
The foregoing is merely a specific embodiment of the disclosure to enable one skilled in the art to understand or practice the disclosure. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the disclosure. Thus, the present disclosure is not intended to be limited to the embodiments shown and described herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (21)

1. A video processing method, comprising:
acquiring an original video and a first video to be replaced, wherein the first video is a new video independent of the original video and of arbitrary duration;
When a video preview instruction is acquired, taking the first video as a current video, and executing the following video processing operation:
playing the original video and the current video, and comparing, during the playing process, whether the duration of the current video is equal to a preset clipping time in the original video, wherein the clipping time is a preset period of the original video that requires secondary editing;
if they are not equal, editing the current video according to the comparison result to obtain a new current video;
executing the video processing operation at least once until the duration of the new current video is equal to the clipping time, and determining the new current video as a second video;
and carrying out video synthesis according to the second video and the original video to obtain a target video.
2. The method according to claim 1, wherein editing the current video according to the comparison result to obtain a new current video comprises:
when the duration of the current video is longer than the clipping time, selecting, from the current video, a video segment whose duration is equal to the clipping time to obtain the new current video.
3. The method according to claim 1, wherein editing the current video according to the comparison result to obtain a new current video comprises:
when the duration of the current video is shorter than the clipping time, deleting the current video and acquiring a third video;
comparing whether the duration of the third video is equal to the clipping time;
and if they are not equal, editing the third video according to the comparison result to obtain the new current video.
4. The method according to claim 1, wherein editing the current video according to the comparison result to obtain a new current video comprises:
when the duration of the current video is shorter than the clipping time, determining a missing duration by which the duration of the current video falls short of the clipping time;
acquiring a fourth video whose duration is not shorter than the missing duration;
and taking the combination of the current video and the fourth video as the new current video.
5. The method of claim 1, wherein playing the original video and the current video comprises:
starting to play the original video and acquiring the playing time of the original video;
when the playing time reaches the start time of the clipping time, pausing the original video and starting to play the current video;
and when the current video finishes playing, jumping the playing progress of the original video to the end time of the clipping time, and playing the original video from the end time.
6. The method of claim 5, wherein starting to play the original video comprises:
determining, on the time axis of the original video, a first time preceding the start time of the clipping time by a preset first time interval;
and playing the original video from the first time.
7. The method of claim 5, wherein the method further comprises:
determining, on the time axis of the current video, a second time preceding the end time of the current video by a preset second time interval;
when the playing duration of the current video reaches a preset duration threshold, jumping the playing progress of the current video to the second time;
and continuing to play the current video from the second time until the current video finishes playing.
8. The method according to claim 1, wherein the method further comprises:
during the playing of the original video and the current video, when a drag instruction is acquired, adjusting the playing progress of the original video or the current video to the time corresponding to the end position of the drag instruction.
9. The method according to claim 1, wherein performing video synthesis according to the second video and the original video to obtain a target video comprises:
detecting the association degree of the video content before and after the replacement playing;
and when the association degree is higher than a preset association degree threshold, performing video synthesis according to the second video and the original video to obtain the target video.
10. The method according to claim 1 or 9, wherein performing video synthesis according to the second video and the original video to obtain a target video comprises:
when a video synthesis instruction is acquired, deleting the video segment corresponding to the clipping time in the original video, and inserting the second video at the clipping time in the original video to obtain the target video.
11. The method of claim 1, wherein the original video is pre-associated with interactive signaling, the method further comprising:
synchronously aligning the target video with the interactive signaling.
12. The method of claim 11, wherein synchronously aligning the target video with the interactive signaling comprises:
playing the target video, and starting a local clock service to record the entering duration of the target video;
and when the entering duration reaches the signaling trigger time carried by the interactive signaling, calibrating the current playing time of the target video to the signaling trigger time.
13. The method of claim 12, wherein calibrating the current playing time of the target video to the signaling trigger time comprises:
transmitting the interactive signaling to a pre-configured scene service engine through the local clock service;
parsing the interactive signaling through the scene service engine to obtain the signaling trigger time corresponding to the interactive signaling;
and bridging the signaling trigger time to a video playing channel, so that the video playing channel calibrates the current playing time of the target video to the signaling trigger time.
14. The method of claim 13, wherein the method further comprises:
generating, through the local clock service, empty-frame signaling containing the current clock time at a preset frequency, and sending the empty-frame signaling to the scene service engine;
and parsing the current clock time from the empty-frame signaling through the scene service engine, and bridging the current clock time to the video playing channel, so that the video playing channel calibrates the current playing time of the target video to the current clock time.
15. The method according to any one of claims 13-14, wherein calibrating the current playing time of the target video to the signaling trigger time comprises:
comparing whether the current playing time of the target video is the same as the signaling trigger time;
and if they differ, jumping the playing progress of the target video from the current playing time to the signaling trigger time.
16. The method according to claim 12, wherein the method further comprises:
triggering the interaction effect corresponding to the interactive signaling when the entering duration reaches the signaling trigger time.
17. The method of claim 14, wherein triggering the interaction effect corresponding to the interactive signaling comprises:
parsing the interactive signaling through the scene service engine to obtain the interaction effect corresponding to the interactive signaling;
and triggering the interaction effect in a three-dimensional virtual scene through the scene service engine.
18. The method of claim 11, wherein the method further comprises:
acquiring a signaling resource associated with the original video;
and parsing the signaling resource to obtain at least one interactive signaling carrying a signaling trigger time.
19. A video processing apparatus, comprising:
the video acquisition module is used for acquiring an original video and a first video to be replaced, wherein the first video is a new video independent of the original video and of arbitrary duration;
The video preview module is used for taking the first video as the current video when a video preview instruction is acquired, and executing the following video processing operation:
playing the original video and the current video, and comparing, during the playing process, whether the duration of the current video is equal to a preset clipping time in the original video, wherein the clipping time is a preset period of the original video that requires secondary editing;
if they are not equal, editing the current video according to the comparison result to obtain a new current video;
executing the video processing operation at least once until the duration of the new current video is equal to the clipping time, and determining the new current video as a second video;
and the video synthesis module is used for carrying out video synthesis according to the second video and the original video to obtain a target video.
20. An electronic device, the electronic device comprising:
A processor;
A memory for storing the processor-executable instructions;
The processor is configured to read the executable instructions from the memory and execute the instructions to implement the video processing method of any one of claims 1-18.
21. A non-transitory computer readable storage medium storing computer instructions which, when executed on a terminal device, cause the terminal device to implement the method of any one of claims 1-18.