
CN114979712B - Video playing method, device, equipment and storage medium - Google Patents


Info

Publication number
CN114979712B
CN114979712B (application CN202210522521.8A)
Authority
CN
China
Prior art keywords
frame
frames
media
video
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210522521.8A
Other languages
Chinese (zh)
Other versions
CN114979712A (en)
Inventor
王磊
桂润祥
曾显华
李晨光
曾栩鸿
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing ByteDance Network Technology Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd filed Critical Beijing ByteDance Network Technology Co Ltd
Priority to CN202210522521.8A priority Critical patent/CN114979712B/en
Publication of CN114979712A publication Critical patent/CN114979712A/en
Application granted granted Critical
Publication of CN114979712B publication Critical patent/CN114979712B/en


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/2343Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H04N21/234381Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements by altering the temporal resolution, e.g. decreasing the frame rate by frame skipping
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/4402Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N21/440281Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display by altering the temporal resolution, e.g. by frame skipping

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

Embodiments of the present disclosure disclose a video playing method, apparatus, device and storage medium. The method includes: determining media frames to be delivered according to the start-play time, where the media frames include video frames and audio frames; determining a target frame count according to a minimum delay duration; if the number of media frames to be delivered is greater than the target frame count, performing frame-dropping on the media frames to be delivered to obtain first remaining media frames; and updating the timestamps of the first remaining media frames and delivering the updated frames to the client, so that the client plays the video from them. By dropping frames when the number of media frames to be delivered exceeds the target frame count, the video playing method of the embodiments reduces decoding pressure, ensures timely video playback, and reduces playback stuttering.

Description

Video playing method, device, equipment and storage medium
Technical Field
Embodiments of the present disclosure relate to the technical field of video playing, and in particular to a video playing method, apparatus, device and storage medium.
Background
When video playback starts, the delivered video frame data contains redundancy. To render the first frame quickly, all delivered video frames, including the redundant ones, must be decoded in a short time. This places heavy performance pressure on the central processing unit (CPU), affects the scheduling of other threads, and causes phenomena such as UI lag during interface switching and delayed message responses; video playback is delayed, seriously degrading the user experience.
Disclosure of Invention
Embodiments of the present disclosure provide a video playing method, apparatus, device and storage medium that perform frame-dropping on the delivered video frames to a certain extent, which reduces decoding pressure and ensures timely video playback.
In a first aspect, an embodiment of the present disclosure provides a video playing method, including:
determining media frames to be delivered according to the start-play time, where the media frames include video frames and audio frames;
determining a target frame count according to a minimum delay duration;
if the number of media frames to be delivered is greater than the target frame count, performing frame-dropping on the media frames to be delivered to obtain first remaining media frames; and
updating the timestamps of the first remaining media frames and delivering the updated first remaining media frames to a client, so that the client plays the video from the first remaining media frames.
In a second aspect, an embodiment of the present disclosure further provides a video playing method, including:
receiving media frames to be decoded delivered by a CDN, where the media frames include video frames and audio frames;
determining a target frame count according to a minimum delay duration;
if the number of media frames to be decoded is greater than the target frame count, performing frame-dropping on the media frames to be decoded to obtain first remaining media frames; and
updating the timestamps of the first remaining media frames, decoding the updated first remaining media frames, and playing the video based on the decoded frames.
In a third aspect, an embodiment of the present disclosure further provides a video playing device, including:
a to-be-delivered media frame determining module, configured to determine media frames to be delivered according to the start-play time, where the media frames include video frames and audio frames;
a target frame count determining module, configured to determine the target frame count according to the minimum delay duration;
a frame-dropping module, configured to, if the number of media frames to be delivered is greater than the target frame count, perform frame-dropping on the media frames to be delivered to obtain first remaining media frames; and
a timestamp updating module, configured to update the timestamps of the first remaining media frames and deliver the updated first remaining media frames to the client, so that the client plays the video from the first remaining media frames.
In a fourth aspect, an embodiment of the present disclosure further provides a video playing apparatus, including:
a to-be-decoded media frame receiving module, configured to receive media frames to be decoded delivered by the CDN, where the media frames include video frames and audio frames;
a target frame count determining module, configured to determine the target frame count according to the minimum delay duration;
a frame-dropping module, configured to, if the number of media frames to be decoded is greater than the target frame count, perform frame-dropping on the media frames to be decoded to obtain first remaining media frames; and
a timestamp updating module, configured to update the timestamps of the first remaining media frames, decode the updated first remaining media frames, and play the video based on the decoded frames.
In a fifth aspect, embodiments of the present disclosure further provide an electronic device, including:
one or more processing devices;
a storage means for storing one or more programs;
the one or more programs, when executed by the one or more processing devices, cause the one or more processing devices to implement the video playing method described in the embodiments of the present disclosure.
In a sixth aspect, embodiments of the present disclosure provide a computer-readable medium storing a computer program which, when executed by a processing device, implements the video playing method described in the embodiments of the present disclosure.
Embodiments of the present disclosure disclose a video playing method, apparatus, device and storage medium. The method includes: determining media frames to be delivered according to the start-play time, where the media frames include video frames and audio frames; determining a target frame count according to a minimum delay duration; if the number of media frames to be delivered is greater than the target frame count, performing frame-dropping on the media frames to be delivered to obtain first remaining media frames; and updating the timestamps of the first remaining media frames and delivering the updated frames to the client, so that the client plays the video from them. By dropping frames when the number of media frames to be delivered exceeds the target frame count, the video playing method of the embodiments reduces decoding pressure, ensures timely video playback, and reduces playback stuttering.
Drawings
FIG. 1 is a flowchart of a video playing method in an embodiment of the present disclosure;
Fig. 2a is an example diagram of a frame loss process in an embodiment of the present disclosure;
Fig. 2b is an example diagram of a frame loss process in an embodiment of the present disclosure;
fig. 2c is an example diagram of a frame loss process in an embodiment of the present disclosure;
FIG. 3a is an example diagram of an update timestamp in an embodiment of the present disclosure;
FIG. 3b is an example diagram of an update timestamp in an embodiment of the present disclosure;
FIG. 4 is a flowchart of a video playing method in an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of a video playing device in an embodiment of the disclosure;
FIG. 6 is a schematic diagram of a video playing device in an embodiment of the disclosure;
fig. 7 is a schematic structural diagram of an electronic device in an embodiment of the disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of protection of the present disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order and/or performed in parallel. Furthermore, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "including" and variations thereof as used herein are open-ended, i.e., "including, but not limited to". The term "based on" means "based at least in part on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Related definitions of other terms are given in the description below.
It should be noted that the terms "first," "second," and the like in this disclosure are merely used to distinguish between different devices, modules, or units and are not used to define an order or interdependence of functions performed by the devices, modules, or units.
It should be noted that references to "one" and "a plurality" in this disclosure are illustrative rather than limiting; those of ordinary skill in the art will appreciate that "one" should be understood as "one or more" unless the context clearly indicates otherwise.
The names of messages or information interacted between the various devices in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
A video coding sequence mainly contains three types of coded frames: key frames (I frames), forward reference frames (P frames) and bi-directional reference frames (B frames). A complete group of video frames (Group of Pictures, GOP) consists of the frames from one I frame up to the next I frame. Decoding must start from an I frame to proceed normally. Therefore, in the start-up stage, when the CDN delivers video frames it must deliver the frames between the start-play time and the closest preceding I frame. To render the first frame quickly, all video frames delivered by the CDN, including the redundant ones, must be decoded in a short time. This places heavy performance pressure on the CPU, affects the scheduling of other threads, and causes phenomena such as UI lag during interface switching and delayed message responses, seriously degrading the user experience.
Fig. 1 is a flowchart of a video playing method provided by an embodiment of the present disclosure. This embodiment is applicable to screening the video frames delivered when video playback starts. The method may be performed by a video playing apparatus, which may be implemented in hardware and/or software and is typically integrated in a device with a video playing function, such as a CDN server. As shown in fig. 1, the method includes the following steps:
S110, determining the media frames to be delivered according to the start-play time.
A media frame is a multimedia frame, i.e., a video frame or an audio frame. The start-play time is the position at which the user chooses to begin playback within the video, expressed as a duration from the start of the video; for example, a start-play time of 5 seconds means the video begins playing from the point 5 seconds after its start.
In this embodiment, to ensure that the delivered video frames can be decoded normally, the video frames between the closest preceding I frame and the start-play time must be delivered.
Specifically, the media frames to be delivered may be determined from the start-play time as follows: determine the complete group of video frames (GOP) containing the start-play time, and take the media frames between the first frame of that GOP and the start-play time as the media frames to be delivered.
In this embodiment, the GOP containing the start-play time can be found by dividing the start-play time by the GOP duration. For example, with GOP = 2 seconds and a start-play time of 5 seconds, the start-play time falls in the 3rd GOP, whose first frame is at 4 seconds; the media frames between 4 and 5 seconds are therefore the media frames to be delivered. The number of media frames to deliver is the time interval between the GOP's first frame and the start-play time multiplied by the frame rate; with a frame rate FPS = 25, that is 25 × 1 = 25 frames. For video, the frames to deliver are the I frame at 4 seconds and the video frames between 4 and 5 seconds; for audio, the frames to deliver are the audio frames between 4 and 5 seconds.
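The GOP lookup and frame-count calculation above can be sketched as follows; the function name and argument layout are illustrative, not part of the patent.

```python
# Sketch of determining which GOP contains the start-play time and how many
# media frames must be delivered; names and structure are illustrative.

def frames_to_deliver(start_time_s: float, gop_s: float, fps: int):
    """Return (gop_index, first_frame_time, frame_count) for a start-play time."""
    gop_index = int(start_time_s // gop_s)       # 0-based GOP index
    first_frame_time = gop_index * gop_s         # timestamp of the GOP's I frame
    interval = start_time_s - first_frame_time   # redundant span before start-play
    frame_count = int(interval * fps)            # frames between I frame and start
    return gop_index, first_frame_time, frame_count

# Example from the text: GOP = 2 s, start-play time = 5 s, fps = 25
idx, t0, n = frames_to_deliver(5.0, 2.0, 25)
# idx = 2 (the 3rd GOP), t0 = 4.0 s, n = 25 frames
```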
S120, determining the target frame count according to the minimum delay duration.
The minimum delay duration can be obtained from the video player's configuration information and may be set dynamically; its unit is milliseconds. The target frame count is the maximum number of frames that still allows the video's first frame to render smoothly.
In this embodiment, the target frame count may be determined from the minimum delay duration as follows: obtain the media frame rate, then compute the target frame count from the frame rate and the minimum delay duration.
Specifically, the target frame count is obtained by multiplying the minimum delay duration by the frame rate and dividing by a set value, which may be 1000 (converting milliseconds to seconds). The formula can be expressed as: target frame count = (T_min × FPS) / 1000, where T_min is the minimum delay duration in milliseconds and FPS is the frame rate. For a minimum delay duration of 200 ms and a frame rate of 25, the target frame count is 200 × 25 / 1000 = 5.
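The target-frame formula can be written as a one-line calculation; the function name is illustrative and not from the patent.

```python
# Sketch of the target frame count: minimum delay (ms) * frame rate / 1000.

def target_frame_count(min_delay_ms: int, fps: int) -> int:
    return min_delay_ms * fps // 1000

# Example from the text: 200 ms minimum delay at 25 fps -> 5 frames
print(target_frame_count(200, 25))
```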
S130, if the number of media frames to be delivered is greater than the target frame count, performing frame-dropping on the media frames to be delivered to obtain first remaining media frames.
In this embodiment, if the number of media frames to be delivered is less than or equal to the target frame count, no frame dropping is required; if it is greater, frame dropping must be performed on the media frames to be delivered. To ensure normal decoding of the video frames, the I frame must be retained, and the remaining video frames are dropped according to certain rules.
Specifically, when the video frames contain no bi-directional reference frames, the frame-dropping that yields the first remaining media frames may proceed as follows: determine a first drop count from the target frame count and the number of media frames to be delivered; discard, among the video frames to be delivered, the first-drop-count frames with the latest timestamps; and discard, among the audio frames to be delivered, the first-drop-count frames with the earliest timestamps.
A bi-directional reference frame is a B frame; if the video frames contain no B frames, the video frames to be delivered consist of an I frame and P frames. The first drop count is the number of media frames to be delivered minus the target frame count. For example, with 9 media frames to deliver and a target frame count of 5, 4 frames must be dropped, which guarantees that the number of remaining frames does not exceed the target frame count.
After the first drop count is determined, the first-drop-count video frames with the latest timestamps and the first-drop-count audio frames with the earliest timestamps are discarded. Fig. 2a is an example of the frame-dropping process in this embodiment: with 10 media frames to deliver and a target frame count of 5, 5 frames must be dropped. For video, the frames with sequence numbers 1.5-1.9 are discarded; for audio, the frames with sequence numbers 1.0-1.4 are discarded. Discarding the video frames with the latest timestamps ensures that the remaining video frames can still be decoded normally.
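The no-B-frame drop rule above can be sketched as follows; the function name and frame labels are illustrative, not from the patent.

```python
# Sketch of the drop rule when no B frames are present: keep the leading
# I frame, discard the trailing (latest-timestamp) video frames, and discard
# the leading (earliest-timestamp) audio frames so the remaining audio
# timestamps stay continuous.

def drop_frames_no_b(video, audio, target):
    """video/audio are lists ordered by timestamp; video[0] is the I frame."""
    n = max(len(video) - target, 0)      # first drop count
    kept_video = video[:len(video) - n]  # drop trailing video frames
    kept_audio = audio[n:]               # drop leading audio frames
    return kept_video, kept_audio

# Example matching Fig. 2a: 10 media frames to deliver, target frame count 5
video = [f"v1.{i}" for i in range(10)]   # v1.0 is the I frame
audio = [f"a1.{i}" for i in range(10)]
kv, ka = drop_frames_no_b(video, audio, 5)
# kv keeps v1.0-v1.4; ka keeps a1.5-a1.9
```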
Specifically, when the video frames do contain bi-directional reference frames, the frame-dropping that yields the first remaining media frames may proceed as follows: discard all B frames among the video frames to be delivered to obtain second remaining video frames; if the number of second remaining video frames is less than or equal to the target frame count, take the number of discarded B frames as a second drop count; and discard, among the audio frames to be delivered, the second-drop-count frames with the earliest timestamps.
In this embodiment, if the number of video frames remaining after all B frames are discarded is less than or equal to the target frame count, the requirement is already met and no further dropping is needed. The number of discarded B frames is taken as the second drop count, and the same number of audio frames with the earliest timestamps is discarded. Fig. 2b is an example of the frame-dropping process in this embodiment: the bold italic entries denote B frames, i.e., the video frames with sequence numbers 1.1, 1.3, 1.5, 1.7 and 1.9 are B frames. After all B frames are discarded, 5 video frames remain, which meets the requirement, so no further frames need to be dropped. Since 5 video frames were discarded in total, the audio frames with sequence numbers 1.0-1.4 must also be discarded. Discarding B frames first does not affect normal decoding and improves frame-dropping efficiency.
Optionally, if the number of second remaining video frames is greater than the target frame count, a third drop count is determined from the number of second remaining video frames and the target frame count; the third-drop-count video frames with the latest timestamps among the second remaining video frames are discarded; a total drop count is determined from the second and third drop counts; and the total-drop-count audio frames with the earliest timestamps among the audio frames to be delivered are discarded.
In this embodiment, if the number of second remaining video frames is greater than the target frame count, the drop requirement is not yet met and dropping must continue. The third drop count is the number of second remaining video frames minus the target frame count, and the third-drop-count video frames with the latest timestamps among the second remaining video frames are discarded. The total drop count is the number of discarded B frames plus the third drop count, and finally the total-drop-count audio frames with the earliest timestamps among the audio frames to be delivered are discarded. Fig. 2c is an example of the frame-dropping process in this embodiment: with a target frame count of 5 and 15 media frames to deliver, 7 of which are B frames, 8 video frames remain after all B frames are discarded, which still exceeds the target frame count, so 3 more video frames must be dropped; the frames with sequence numbers 2.0, 2.2 and 2.4 are therefore discarded. In total 10 video frames are dropped, so the audio frames with timestamps 1.0-1.9 must be discarded.
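The B-frame drop path, including the optional continued dropping, can be sketched as follows; the frame representation (name, type-tag) and function name are illustrative, not from the patent.

```python
# Sketch of the B-frame drop rule: discard every B frame first; if the
# remainder still exceeds the target, continue dropping trailing frames.
# Audio drops the same total count from the front.

def drop_frames_with_b(video, audio, target):
    """video is a list of (name, frame_type) tuples ordered by timestamp."""
    kept = [f for f in video if f[1] != "B"]  # second remaining video frames
    dropped = len(video) - len(kept)          # second drop count (all B frames)
    if len(kept) > target:                    # still too many: third drop count
        extra = len(kept) - target
        kept = kept[:len(kept) - extra]       # drop latest-timestamp frames
        dropped += extra                      # total drop count
    return kept, audio[dropped:]              # drop leading audio frames

# Example matching Fig. 2c: 15 frames, 7 of them B frames, target 5
types = ["I"] + ["B" if i % 2 == 1 else "P" for i in range(1, 15)]
video = [(f"v{i}", t) for i, t in enumerate(types)]
audio = [f"a{i}" for i in range(15)]
kv, ka = drop_frames_with_b(video, audio, 5)
# 8 frames remain after B-drop, 3 more are dropped -> 5 video frames kept,
# 10 audio frames dropped from the front
```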
S140, updating the timestamps of the first remaining media frames, and delivering the updated first remaining media frames to the client, so that the client plays the video from the first remaining media frames.
In this embodiment, after frame dropping, the timestamps of the video frames are no longer continuous; to ensure normal decoding, the timestamps of the remaining video frames must be updated. For audio, the frames with the earliest timestamps were the ones discarded, so the remaining audio timestamps are still continuous and need not be updated.
Specifically, the timestamps of the first remaining media frames may be updated as follows: for each remaining video frame, determine the number of video frames discarded after that frame's timestamp; determine the frame's timestamp offset from that number and the frame rate; and update the frame's timestamp using the offset.
In this embodiment, the number of video frames discarded after a remaining frame, divided by the frame rate, gives that frame's timestamp offset; the offset is then added to the frame's timestamp to obtain the updated timestamp. Figs. 3a-3b are example diagrams of updating timestamps. As shown in fig. 3a, for the remaining video frames 1.0-1.4, 5 video frames were discarded after each of them, so each timestamp shifts later by the duration of 5 frames: the frame with sequence number 1.0 moves to the timestamp of frame 1.5, frame 1.1 to the timestamp of frame 1.6, and so on, updating the timestamp of each remaining video frame. As shown in fig. 3b, for the remaining frame with sequence number 1.0, 5 frames were discarded after it, so it shifts later by 5 frame durations to the timestamp of frame 1.5; for frame 1.2, 4 frames were discarded after it, so it shifts by 4 frame durations to the timestamp of the original frame 1.6; for frame 1.4, 3 frames were discarded after it, so it shifts by 3 frame durations to the timestamp of the original frame 1.7; and so on for all remaining video frames.
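The timestamp update can be sketched as follows; timestamps are in milliseconds here for exact arithmetic, and the function name and example values are illustrative, not from the patent.

```python
# Sketch of the timestamp update: each remaining video frame shifts later by
# (number of frames discarded after it) * frame duration. Audio timestamps
# are left untouched, since the earliest audio frames were the ones dropped.

def update_timestamps(kept_ts, dropped_ts, fps):
    """kept_ts/dropped_ts are millisecond timestamps; fps is the frame rate."""
    frame_ms = 1000 // fps  # duration of one frame in ms
    return [ts + frame_ms * sum(1 for d in dropped_ts if d > ts)
            for ts in kept_ts]

# Example in the spirit of Fig. 3a: 10 frames at fps = 10 (one every 100 ms),
# the first five kept, the last five dropped
fps = 10
kept = [0, 100, 200, 300, 400]
dropped = [500, 600, 700, 800, 900]
new_ts = update_timestamps(kept, dropped, fps)
# each kept frame has 5 frames dropped after it, so each shifts 500 ms later
```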
In this embodiment, after the timestamps of the remaining video frames are updated, the remaining media frames are delivered to the client; the client decodes the received media frames and plays the video based on the decoded frames.
In the technical scheme above, media frames to be delivered are determined according to the start-play time, where the media frames include video frames and audio frames; a target frame count is determined according to a minimum delay duration; if the number of media frames to be delivered is greater than the target frame count, frame-dropping is performed on the media frames to be delivered to obtain first remaining media frames; and the timestamps of the first remaining media frames are updated and the updated frames are delivered to the client, so that the client plays the video from them. By dropping frames when the number of media frames to be delivered exceeds the target frame count, the video playing method of this embodiment reduces decoding pressure, ensures timely video playback, and reduces playback stuttering.
Fig. 4 is a flowchart of a video playing method provided by an embodiment of the present disclosure. This embodiment is applicable to screening the received video frames when video playback starts. The method may be performed by a video playing apparatus, which may be implemented in hardware and/or software and is typically integrated in a device with a video playing function, such as a client. Building on the embodiments above, as shown in fig. 4, the method includes the following steps:
S410, receiving the media frames to be decoded delivered by the CDN.
Wherein the media frames include video frames and audio frames. In this embodiment, a user triggers a play operation at any point in a video through a video APP, a play request is generated according to the triggered play operation, and the play request is finally sent to the CDN server. The CDN server determines the media frames to be issued according to the playing time in the play request and issues them to the client. For the manner in which the CDN determines the media frames to be issued according to the playing time, reference may be made to the above embodiment, which is not described herein again.
S420, determining the number of target frames according to the minimum delay time length.
The minimum delay duration can be obtained from the configuration information of the video player and can be dynamically set; its unit is ms. The target frame number can be understood as the maximum number of frames that still ensures smooth start-up of the first frame of the video. For the specific manner of determining the target frame number according to the minimum delay duration, reference may be made to the above embodiment, which is not described herein again.
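The target frame number derived from the frame rate and the minimum delay duration can be sketched as follows (an illustrative Python fragment, not part of the original disclosure; the document states only that the count is determined from the frame rate and the minimum delay, so the floor rounding below is an assumption):

```python
import math

def target_frame_count(frame_rate, min_delay_ms):
    """Largest number of buffered frames whose playback duration still
    fits within the minimum delay budget (rounding is an assumption)."""
    return math.floor(frame_rate * min_delay_ms / 1000)
```

For example, at 30 fps with a 500 ms minimum delay this yields 15 frames, i.e., half a second of video.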
And S430, if the number of the media frames to be decoded is larger than the number of the target frames, carrying out frame loss processing on the media frames to be decoded to obtain a first residual media frame.
In this embodiment, if the number of media frames to be decoded is less than or equal to the target frame number, no frame loss processing is required for the media frames to be decoded; if the number of media frames to be decoded is larger than the target frame number, frame loss processing needs to be performed on the media frames to be decoded. In this embodiment, in order to ensure normal decoding of the video frames, I frames need to be retained, and frame loss processing is performed on the remaining video frames according to a certain rule.
Specifically, for the manner of performing frame loss processing on the media frames to be decoded, reference may be made to the manner of performing frame loss processing on the media frames to be issued in the foregoing embodiment, which is not described herein again.
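The frame loss rule for the case without bidirectional (B) frames can be sketched as follows (an illustrative Python fragment, not part of the original disclosure): the first frame loss number is the frame count minus the target frame number, the latest video frames are discarded, and the earliest audio frames are discarded.

```python
def drop_frames_no_b(video_frames, audio_frames, target_count):
    """Frame loss processing when no bidirectional (B) frames are present:
    drop the latest video frames and the earliest audio frames."""
    n_drop = max(len(video_frames) - target_count, 0)  # first frame loss number
    if n_drop == 0:
        return video_frames, audio_frames
    return video_frames[:-n_drop], audio_frames[n_drop:]
```

Frames are assumed to be ordered by ascending timestamp, so slicing off the tail drops the latest video frames while slicing off the head drops the earliest audio frames.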
S440, updating the time stamp of the first residual media frame, decoding the first residual media frame after updating the time stamp, and playing the video based on the decoded first residual media frame.
In this embodiment, after frame loss processing is performed on the video frames, the timestamps of the video frames are no longer continuous; in order to ensure normal decoding of the video frames, the timestamps of the remaining video frames need to be updated. For the audio frames, since the audio frames with the earliest timestamps are discarded, the timestamps of the remaining audio frames are still continuous and therefore do not need to be updated.
In particular, for the manner of updating the timestamps of the first residual media frames, reference may be made to the above embodiment, which is not described herein again.
In this embodiment, after the time stamp of the video frame is updated, the remaining media frame is decoded, and video playing is performed based on the decoded remaining media frame.
According to the technical scheme of this embodiment, the media frames to be decoded issued by the CDN are received, wherein the media frames include video frames and audio frames; the target frame number is determined according to the minimum delay duration; if the number of media frames to be decoded is larger than the target frame number, frame loss processing is performed on the media frames to be decoded to obtain first residual media frames; and the timestamps of the first residual media frames are updated, the first residual media frames with updated timestamps are decoded, and the video is played based on the decoded first residual media frames. According to the video playing method provided by the embodiment of the present disclosure, when the number of media frames to be decoded is larger than the target frame number, frame loss processing is performed on the media frames to be decoded, which can reduce decoding pressure, ensure the timeliness of video playing, and reduce playback stuttering.
Fig. 5 is a schematic structural diagram of a video playing apparatus according to an embodiment of the present disclosure, where the apparatus is disposed in a content delivery network (CDN) server. As shown in fig. 5, the apparatus includes:
The to-be-delivered media frame determining module 510 is configured to determine a to-be-delivered media frame according to the play time; wherein the media frames include video frames and audio frames;
A target frame number determining module 520, configured to determine a target frame number according to the minimum delay time length;
the frame loss processing module 530 is configured to perform frame loss processing on the to-be-delivered media frame to obtain a first remaining media frame if the to-be-delivered media frame number is greater than the target frame number;
The timestamp updating module 540 is configured to update the timestamp of the first remaining media frame, and send the first remaining media frame after the timestamp update to the client, so that the client plays the video according to the first remaining media frame.
Optionally, the to-be-delivered media frame determining module 510 is further configured to:
determining a complete video frame group corresponding to the playing time;
and determining the media frames between the first frame time of the complete video frame group and the playing time as media frames to be issued.
Optionally, the target frame number determining module 520 is further configured to:
Acquiring the frame rate of the media;
a target number of frames is determined based on the frame rate and the minimum delay duration.
Optionally, the frame loss processing module 530 is further configured to:
if the video frames do not contain bidirectional reference frames, determining a first frame loss number according to the target frame number and the number of media frames to be issued;
discarding the first frame loss number of video frames with the latest timestamps among the video frames to be issued;
and discarding the first frame loss number of audio frames with the earliest timestamps among the audio frames to be issued.
Optionally, the frame loss processing module 530 is further configured to:
if the video frames contain bidirectional reference frames, discarding all bidirectional reference frames contained in the video frames to be issued to obtain second residual video frames;
if the number of second residual video frames is smaller than or equal to the target frame number, acquiring a second frame loss number, i.e., the number of all discarded bidirectional reference frames;
and discarding the second frame loss number of audio frames with the earliest timestamps among the audio frames to be issued.
Optionally, the frame loss processing module 530 is further configured to:
if the number of second residual video frames is larger than the target frame number, determining a third frame loss number according to the number of second residual video frames and the target frame number;
discarding the third frame loss number of video frames with the latest timestamps among the second residual video frames;
determining a total frame loss number according to the third frame loss number and the second frame loss number;
and discarding the total frame loss number of audio frames with the earliest timestamps among the audio frames to be issued.
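The frame loss rule for the case with bidirectional (B) frames can be sketched as follows (an illustrative Python fragment, not part of the original disclosure; video frames are assumed to be (timestamp, type) pairs in ascending timestamp order):

```python
def drop_frames_with_b(video_frames, audio_frames, target_count):
    """Frame loss processing when B frames are present: discard every B
    frame first; if the remainder still exceeds the target frame number,
    also discard the latest remaining video frames.  The total frame loss
    number determines how many of the earliest audio frames to discard."""
    second_residual = [f for f in video_frames if f[1] != 'B']
    second_loss = len(video_frames) - len(second_residual)  # dropped B frames
    third_loss = max(len(second_residual) - target_count, 0)
    if third_loss:
        second_residual = second_residual[:-third_loss]  # drop latest frames
    total_loss = second_loss + third_loss
    return second_residual, audio_frames[total_loss:]
```

Dropping B frames first is safe because no other frame references them, so the remaining I and P frames still decode normally.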
Optionally, the timestamp updating module 540 is further configured to:
For each remaining video frame, determining a number of video frames discarded after a timestamp of the remaining video frame;
determining the timestamp offset of the remaining video frames according to the number of the discarded video frames and the frame rate;
And updating the time stamp of the residual video frame based on the time stamp offset.
Fig. 6 is a video playing device provided in an embodiment of the present disclosure, where the device is disposed in a client, as shown in fig. 6, and the device includes:
A to-be-decoded media frame receiving module 610, configured to receive a to-be-decoded media frame issued by the CDN; wherein the media frames include video frames and audio frames;
A target frame number determining module 620, configured to determine a target frame number according to the minimum delay time length;
The frame loss processing module 630 is configured to perform frame loss processing on the media frame to be decoded to obtain a first remaining media frame if the number of the media frames to be decoded is greater than the number of the target frames;
The timestamp updating module 640 is configured to update the timestamp of the first remaining media frame, decode the first remaining media frame after updating the timestamp, and play video based on the decoded first remaining media frame.
The device can execute the method provided by all the embodiments of the disclosure, and has the corresponding functional modules and beneficial effects of executing the method. Technical details not described in detail in this embodiment can be found in the methods provided by all of the foregoing embodiments of the present disclosure.
Referring now to fig. 7, a schematic diagram of an electronic device 300 suitable for use in implementing embodiments of the present disclosure is shown. The electronic devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), car terminals (e.g., car navigation terminals), etc., as well as fixed terminals such as digital TVs, desktop computers, etc., or various forms of servers such as stand-alone servers or server clusters. The electronic device shown in fig. 7 is merely an example and should not be construed to limit the functionality and scope of use of the disclosed embodiments.
As shown in fig. 7, the electronic apparatus 300 may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 301, which may perform various appropriate actions and processes according to a program stored in a read-only memory device (ROM) 302 or a program loaded from a storage device 308 into a random access memory device (RAM) 303. In the RAM 303, various programs and data required for the operation of the electronic apparatus 300 are also stored. The processing device 301, the ROM 302, and the RAM 303 are connected to each other via a bus 304. An input/output (I/O) interface 305 is also connected to bus 304.
In general, the following devices may be connected to the I/O interface 305: input devices 306 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 307 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 308 including, for example, magnetic tape, hard disk, etc.; and communication means 309. The communication means 309 may allow the electronic device 300 to communicate with other devices wirelessly or by wire to exchange data. While fig. 7 shows an electronic device 300 having various means, it is to be understood that not all of the illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program containing program code for performing a recommended method of words. In such an embodiment, the computer program may be downloaded and installed from a network via a communication device 309, or installed from a storage device 308, or installed from a ROM 302. The above-described functions defined in the methods of the embodiments of the present disclosure are performed when the computer program is executed by the processing means 301.
It should be noted that the computer readable medium described in the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. 
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
In some embodiments, the clients, servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol ), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), the internet (e.g., the internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed networks.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: determining a media frame to be issued according to the playing time; wherein the media frames include video frames and audio frames; determining the number of target frames according to the minimum delay time length; if the number of the media frames to be issued is larger than the number of the target frames, carrying out frame loss processing on the media frames to be issued to obtain a first residual media frame; and updating the time stamp of the first residual media frame, and transmitting the first residual media frame after updating the time stamp to the client so that the client plays the video according to the first residual media frame. Or receiving a media frame to be decoded issued by the CDN; wherein the media frames include video frames and audio frames; determining the number of target frames according to the minimum delay time length; if the number of the media frames to be decoded is larger than the number of the target frames, carrying out frame loss processing on the media frames to be decoded to obtain a first residual media frame; and updating the time stamp of the first residual media frame, decoding the first residual media frame after updating the time stamp, and playing the video based on the decoded first residual media frame.
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages, including, but not limited to, an object oriented programming language such as Java, smalltalk, C ++ and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present disclosure may be implemented by means of software, or may be implemented by means of hardware. Wherein the names of the units do not constitute a limitation of the units themselves in some cases.
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
According to one or more embodiments of the present disclosure, a video playing method is disclosed, which is performed by a content delivery network (CDN) server and comprises:
determining a media frame to be issued according to the playing time; wherein the media frames include video frames and audio frames;
Determining the number of target frames according to the minimum delay time length;
if the number of the media frames to be issued is larger than the number of the target frames, carrying out frame loss processing on the media frames to be issued to obtain a first residual media frame;
And updating the time stamp of the first residual media frame, and transmitting the first residual media frame after updating the time stamp to the client so that the client plays the video according to the first residual media frame.
Further, determining the media frames to be issued according to the playing time includes:
determining a complete video frame group corresponding to the playing time;
and determining the media frames between the first frame time of the complete video frame group and the playing time as media frames to be issued.
Further, determining the number of target frames according to the minimum delay duration includes:
Acquiring the frame rate of the media;
a target number of frames is determined based on the frame rate and the minimum delay duration.
Further, performing frame loss processing on the to-be-issued media frame to obtain a first remaining media frame, including:
if the video frames do not contain bidirectional reference frames, determining a first frame loss number according to the target frame number and the number of media frames to be issued;
discarding the first frame loss number of video frames with the latest timestamps among the video frames to be issued;
and discarding the first frame loss number of audio frames with the earliest timestamps among the audio frames to be issued.
Further, performing frame loss processing on the to-be-issued media frame to obtain a first remaining media frame, including:
if the video frames contain bidirectional reference frames, discarding all bidirectional reference frames contained in the video frames to be issued to obtain second residual video frames;
if the number of second residual video frames is smaller than or equal to the target frame number, acquiring a second frame loss number, i.e., the number of all discarded bidirectional reference frames;
and discarding the second frame loss number of audio frames with the earliest timestamps among the audio frames to be issued.
Further, the method further comprises the following steps:
if the number of second residual video frames is larger than the target frame number, determining a third frame loss number according to the number of second residual video frames and the target frame number;
discarding the third frame loss number of video frames with the latest timestamps among the second residual video frames;
determining a total frame loss number according to the third frame loss number and the second frame loss number;
and discarding the total frame loss number of audio frames with the earliest timestamps among the audio frames to be issued.
Further, updating the timestamp of the first remaining media frame includes:
For each remaining video frame, determining a number of video frames discarded after a timestamp of the remaining video frame;
determining the timestamp offset of the remaining video frames according to the number of the discarded video frames and the frame rate;
And updating the time stamp of the residual video frame based on the time stamp offset.
The embodiment of the disclosure also provides a video playing method, which is executed by the client and comprises the following steps:
Receiving a media frame to be decoded issued by the CDN; wherein the media frames include video frames and audio frames;
Determining the number of target frames according to the minimum delay time length;
if the number of the media frames to be decoded is larger than the number of the target frames, carrying out frame loss processing on the media frames to be decoded to obtain a first residual media frame;
and updating the time stamp of the first residual media frame, decoding the first residual media frame after updating the time stamp, and playing the video based on the decoded first residual media frame.
It should be appreciated that the various forms of procedures shown above may be used to reorder, add, or delete steps. For example, the steps recited in the present disclosure may be performed in parallel, sequentially, or in a different order, provided that the desired results of the technical solutions of the present disclosure are achieved, and are not limited herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.

Claims (11)

1. A video playing method, comprising:
determining a media frame to be issued according to the playing time; wherein the media frames include video frames and audio frames;
Determining the number of target frames according to the minimum delay time length;
if the number of the media frames to be issued is larger than the number of the target frames, carrying out frame loss processing on the media frames to be issued to obtain a first residual media frame;
Updating the time stamp of the first residual media frame, and transmitting the first residual media frame after updating the time stamp to the client so that the client plays the video according to the first residual media frame;
performing frame loss processing on the to-be-issued media frame to obtain a first residual media frame, including:
if the video frames do not contain bidirectional reference frames, determining a first frame loss number according to the target frame number and the number of media frames to be issued;
discarding the first frame loss number of video frames with the latest timestamps among the video frames to be issued;
discarding the first frame loss number of audio frames with the earliest timestamps among the audio frames to be issued;
wherein determining a first frame loss number according to the target frame number and the number of media frames to be issued includes: subtracting the target frame number from the number of media frames to be issued to obtain the first frame loss number.
2. The method of claim 1, wherein determining the media frames to be issued according to the playing time comprises:
determining a complete video frame group corresponding to the playing time;
and determining the media frames between the first frame time of the complete video frame group and the playing time as media frames to be issued.
3. The method of claim 1, wherein determining the target number of frames based on the minimum delay duration comprises:
Acquiring a frame rate of the media frame;
a target number of frames is determined based on the frame rate and the minimum delay duration.
4. The method of claim 1, wherein performing frame loss processing on the to-be-delivered media frame to obtain a first remaining media frame comprises:
if the video frames contain bidirectional reference frames, discarding all bidirectional reference frames contained in the video frames to be issued to obtain second residual video frames;
if the number of second residual video frames is smaller than or equal to the target frame number, acquiring a second frame loss number, i.e., the number of all discarded bidirectional reference frames;
and discarding the second frame loss number of audio frames with the earliest timestamps among the audio frames to be issued.
5. The method as recited in claim 4, further comprising:
If the number of the second residual video frames is larger than the target frame number, determining a third frame loss number according to the number of the second residual video frames and the target frame number;
discarding the third frame loss number of video frames with the latest timestamps among the second residual video frames;
determining a total frame loss number according to the third frame loss number and the second frame loss number;
and discarding the total frame loss number of audio frames with the earliest timestamps among the audio frames to be issued.
6. The method of claim 1, wherein updating the timestamp of the first remaining media frame comprises:
For each remaining video frame, determining a number of video frames discarded after a timestamp of the remaining video frame;
determining the timestamp offset of the remaining video frames according to the number of the discarded video frames and the frame rate;
And updating the time stamp of the residual video frame based on the time stamp offset.
7. A video playing method, comprising:
Receiving a media frame to be decoded issued by the CDN; wherein the media frames include video frames and audio frames;
Determining the number of target frames according to the minimum delay time length;
if the number of the media frames to be decoded is larger than the number of the target frames, carrying out frame loss processing on the media frames to be decoded to obtain a first residual media frame;
updating the time stamp of the first residual media frame, decoding the first residual media frame after updating the time stamp, and playing the video based on the decoded first residual media frame;
performing frame loss processing on the media frame to be decoded to obtain a first residual media frame, including:
if the video frames do not contain bidirectional reference frames, determining a first frame loss number according to the target frame number and the number of media frames to be decoded;
discarding the first frame loss number of video frames with the latest timestamps among the video frames to be decoded;
discarding the first frame loss number of audio frames with the earliest timestamps among the audio frames to be decoded;
wherein determining a first frame loss number according to the target frame number and the number of media frames to be decoded includes: subtracting the target frame number from the number of media frames to be decoded to obtain the first frame loss number.
8. A video playing device, comprising:
a to-be-delivered media frame determining module, configured to determine media frames to be delivered according to a playing time, wherein the media frames comprise video frames and audio frames;
a target frame number determining module, configured to determine a target frame number according to a minimum delay duration;
a frame loss processing module, configured to perform frame loss processing on the media frames to be delivered to obtain first residual media frames if the number of media frames to be delivered is greater than the target frame number;
a time stamp updating module, configured to update the time stamps of the first residual media frames and send the first residual media frames with updated time stamps to a client, so that the client plays the video according to the first residual media frames;
wherein the frame loss processing module is further configured to:
if the video frames do not contain a bidirectional reference frame, determine a first frame loss number according to the target frame number and the number of media frames to be delivered;
discard, from the video frames to be delivered, the first frame loss number of video frames with the latest time stamps;
discard, from the audio frames to be delivered, the first frame loss number of audio frames with the earliest time stamps;
wherein determining the first frame loss number according to the target frame number and the number of media frames to be delivered comprises: subtracting the target frame number from the number of media frames to be delivered to obtain the first frame loss number.
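The claims derive the target frame number from a minimum delay duration but do not spell out the mapping. A natural reading, sketched here purely as an assumption, is that the target frame number is the count of frames spanning the minimum delay at the stream's frame rate; the function name and parameters are illustrative.

```python
import math

def target_frame_number(min_delay_ms, frame_rate):
    """Assumed mapping from the minimum delay duration to a target frame
    count: the number of frames that span min_delay_ms at frame_rate fps."""
    return math.ceil(min_delay_ms / 1000.0 * frame_rate)
```

Under this reading, a 500 ms minimum delay at 30 fps yields a target of 15 frames; any buffered frames beyond that count become candidates for frame-loss processing.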
9. A video playing device, comprising:
a to-be-decoded media frame receiving module, configured to receive media frames to be decoded delivered by a CDN, wherein the media frames comprise video frames and audio frames;
a target frame number determining module, configured to determine a target frame number according to a minimum delay duration;
a frame loss processing module, configured to perform frame loss processing on the media frames to be decoded to obtain first residual media frames if the number of media frames to be decoded is greater than the target frame number;
a time stamp updating module, configured to update the time stamps of the first residual media frames, decode the first residual media frames with updated time stamps, and play the video based on the decoded first residual media frames;
wherein the frame loss processing module is further configured to:
if the video frames do not contain a bidirectional reference frame, determine a first frame loss number according to the target frame number and the number of media frames to be decoded;
discard, from the video frames to be decoded, the first frame loss number of video frames with the latest time stamps;
discard, from the audio frames to be decoded, the first frame loss number of audio frames with the earliest time stamps;
wherein determining the first frame loss number according to the target frame number and the number of media frames to be decoded comprises: subtracting the target frame number from the number of media frames to be decoded to obtain the first frame loss number.
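The "time stamp updating" step in the claims above is not further specified. One plausible reading, offered here only as a sketch, is that after frames are discarded the residual frames are rebased onto a continuous timeline so decoding and playback proceed without gaps; the helper name and the fixed frame interval are assumptions.

```python
def rebase_timestamps(pts_list, start_pts, interval_ms):
    """Rewrite presentation timestamps so the residual frames form a
    continuous sequence starting at start_pts, spaced interval_ms apart.
    One plausible reading of the claims' 'time stamp updating' step."""
    return [start_pts + i * interval_ms for i in range(len(pts_list))]
```

For example, residual frames originally stamped 400, 440, and 520 ms could be rebased to 0, 40, and 80 ms, closing the gap left by the discarded frames.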
10. An electronic device, comprising:
one or more processing devices; and
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processing devices, cause the one or more processing devices to implement the video playing method according to any one of claims 1-7.
11. A computer-readable medium storing a computer program which, when executed by a processing device, implements the video playing method according to any one of claims 1-7.
CN202210522521.8A 2022-05-13 2022-05-13 Video playing method, device, equipment and storage medium Active CN114979712B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210522521.8A CN114979712B (en) 2022-05-13 2022-05-13 Video playing method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN114979712A CN114979712A (en) 2022-08-30
CN114979712B true CN114979712B (en) 2024-07-26

Family

ID=82984165

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210522521.8A Active CN114979712B (en) 2022-05-13 2022-05-13 Video playing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114979712B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN119420847B (en) * 2025-01-03 2025-05-16 浙江大华技术股份有限公司 PTZ control method and device, storage medium and electronic device

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110113621A (en) * 2018-02-01 2019-08-09 腾讯科技(深圳)有限公司 Playing method and device, storage medium, the electronic device of media information
CN114189711A (en) * 2021-11-16 2022-03-15 北京金山云网络技术有限公司 Video processing method and apparatus, electronic device, storage medium

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103856812B (en) * 2014-03-25 2018-08-07 北京奇艺世纪科技有限公司 A kind of video broadcasting method and device
CN105933800A (en) * 2016-04-29 2016-09-07 联发科技(新加坡)私人有限公司 Video play method and control terminal
US10116989B1 (en) * 2016-09-12 2018-10-30 Twitch Interactive, Inc. Buffer reduction using frame dropping
CN106817614B (en) * 2017-01-20 2020-08-04 浙江瑞华康源科技有限公司 Audio and video frame loss device and method
US10862944B1 (en) * 2017-06-23 2020-12-08 Amazon Technologies, Inc. Real-time video streaming with latency control
CN110392269B (en) * 2018-04-17 2021-11-30 腾讯科技(深圳)有限公司 Media data processing method and device and media data playing method and device
CN109714634B (en) * 2018-12-29 2021-06-29 海信视像科技股份有限公司 Decoding synchronization method, device and equipment for live data stream
CN111436009B (en) * 2019-01-11 2023-10-27 厦门雅迅网络股份有限公司 Real-time video stream transmission and display method and transmission and play system
CN110572695A (en) * 2019-08-07 2019-12-13 苏州科达科技股份有限公司 media data encoding and decoding methods and electronic equipment
CN112135163A (en) * 2020-09-27 2020-12-25 京东方科技集团股份有限公司 Video playing starting method and device
CN113490055B (en) * 2021-07-06 2023-09-19 三星电子(中国)研发中心 Data processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant