TW201442496A - System and method for synchronizing video data and audio data - Google Patents
System and method for synchronizing video data and audio data
- Publication number: TW201442496A
- Application number: TW102114397A
- Authority: TW (Taiwan)
- Prior art keywords: video, decoded, audio, packet, data
Landscapes
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
Abstract
Description
The present invention relates to an encoding system and method, and more particularly to a system and method for synchronizing video and audio data.
In general, video (video packets) and audio (audio packets) are synchronized by means of a timestamp carried in each data segment, such as the multimedia time (MM Time). Video and audio refer to the same shared MM Time, and the audio packets are responsible for updating it; the playback program then uses a video frame's MM Time to decide whether to present that frame immediately, present it later, or discard it as expired.
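That three-way presentation decision can be sketched as a small classifier; the lateness threshold below is an illustrative assumption, not a value taken from this document:

```python
def schedule_frame(frame_mm_time, now_mm_time, late_limit=100):
    """Classify a video frame against the shared MM Time (milliseconds):
    present it now, hold it for later, or drop it as expired.
    `late_limit` is a hypothetical tolerance for slightly late frames."""
    delta = frame_mm_time - now_mm_time
    if delta > 0:
        return "present_later"   # frame is ahead of the audio-driven clock
    if -delta <= late_limit:
        return "present_now"     # on time, or late within tolerance
    return "drop"                # expired frame (frame drop)
```

For example, a frame stamped 1000 ms against a clock at 1200 ms falls outside the tolerance and is dropped, which is exactly the frame-drop case the following paragraphs describe.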
Many network-based multimedia applications (such as video conferencing, network video telephony, and remote desktop audio/video playback) compress the video portion with a technology such as H.264 to reduce bandwidth consumption. Owing to inherent limitations of such codecs, the encoding side (which transmits the bitstream) must temporarily store the slice data in a buffer, so the data cannot be output over the network to the decoding side (which receives the bitstream) immediately; likewise, the decoder outputs a frame from its buffer only once that frame no longer needs to be referenced or used. The result is that picture and sound fall out of synchronization.
If audio packets are still used to update the MM Time under these conditions, a buffered frame is very likely to be judged expired and discarded (a frame drop); the more severe the frame dropping, the more the playback resembles a slideshow.
In view of the above, it is necessary to provide a video/audio synchronization system and method that uses a queue as a buffer for audio packets and synchronizes the multimedia time of the video packets and the audio packets, thereby keeping video and audio synchronized.
A video/audio synchronization system applied to an electronic device comprises: a video decoding module for decoding received video packets and storing the decoded video data in a first buffer; the video decoding module further reading the decoded video data from the first buffer in sequence; a video output module for outputting the read video data to a display device of the electronic device when the timestamp of the video packet meets a preset requirement; an audio decoding module for decoding received audio packets while video decoding is in progress, storing the decoded audio data in a second buffer, and passing the timestamp of each audio packet to the video packets; a generating module for reading the decoded audio data from the second buffer, moving it to a designated queue, and spawning a consuming module at every preset interval; a consuming module for reading the decoded audio data from the designated queue and transferring it to a third buffer; and an audio output module for reading the decoded audio data from the third buffer in sequence and outputting it to the display device.
A video/audio synchronization method applied to an electronic device comprises: a first video decoding step of decoding received video packets and storing the decoded video data in a first buffer; a second video decoding step of reading the decoded video data from the first buffer in sequence; a video output step of outputting the read video data to a display device of the electronic device when the timestamp of the video packet meets a preset requirement; an audio decoding step of decoding received audio packets while video decoding is in progress, storing the decoded audio data in a second buffer, and passing the timestamp of each audio packet to the video packets; a generating step of reading the decoded audio data from the second buffer, moving it to a designated queue, and triggering a consuming step at every preset interval; a consuming step of reading the decoded audio data from the designated queue and transferring it to a third buffer; and an audio output step of reading the decoded audio data from the third buffer in sequence and outputting it to the display device.
Compared with the prior art, the described video/audio synchronization system and method use a queue as a buffer for audio packets and synchronize the multimedia time of the video packets and the audio packets, achieving video/audio synchronization without modifying the code on the server side (i.e., the encoding side).
2 ... electronic device
20 ... display device
22 ... input device
23 ... storage
24 ... video/audio synchronization system
25 ... processor
240 ... video decoding module
241 ... audio decoding module
242 ... generating module
243 ... consuming module
244 ... video output module
245 ... audio output module
FIG. 1 is a schematic diagram of the operating environment of the video/audio synchronization system of the present invention.
FIG. 2 is a functional module diagram of the video/audio synchronization system of the present invention.
FIG. 3 is a flowchart of the video/audio synchronization method of the present invention.
FIG. 4 is a schematic diagram showing an alternative depiction of FIG. 3.
Referring to FIG. 1, which is a schematic diagram of the operating environment of the video/audio synchronization system of the present invention: the video/audio synchronization system 24 runs in an electronic device 2. The electronic device 2 further includes an input device 22, a storage 23, and a processor 25 connected by a data bus. The electronic device 2 may be a computer, a mobile phone, a PDA (Personal Digital Assistant), or the like.
The storage 23 stores data such as the program code of the video/audio synchronization system 24 and image data. The input device 22, for example a keyboard or a mouse, is used to input various data set by the user. In a particular embodiment, the electronic device 2 may include a display device 20 connected to the data bus; the display device 20 displays the image data and other content, and may be the liquid-crystal display of a computer, the touch screen of a mobile phone, or the like.
In this embodiment, the video/audio synchronization system 24 may be divided into one or more modules, which are stored in the storage 23 and configured to be executed by one or more processors (in this embodiment, a single processor 25) to carry out the present invention. For example, referring to FIG. 2, the video/audio synchronization system 24 is divided into a video decoding module 240, an audio decoding module 241, a generating module 242, a consuming module 243, a video output module 244, and an audio output module 245. A module, as referred to in the present invention, is a program segment that performs a specific function and is better suited than a whole program for describing how the software executes in the electronic device 2. The specific functions of each module are described below with reference to FIG. 3 and FIG. 4.
Referring to FIG. 3, which is a flowchart of the video/audio synchronization method of the present invention.
In the following description, video decoding steps S10–S13 run in parallel with audio decoding steps S20–S23. When a user plays a movie or runs audio/video software on a virtual machine, the server establishes a video stream channel and an audio stream channel with the client (e.g., the electronic device 2), used to transmit video packets (image packets) and audio packets (sound packets). The electronic device 2 continuously receives video packets and audio packets through these two channels.
In step S10, the video decoding module 240 receives a video packet from the server through the video stream channel.
In step S11, the video decoding module 240 decodes the video packet and stores the decoded video data (i.e., raw data) in a first buffer, shown as the Frame buffer in FIG. 4. In this embodiment, the video decoding module 240 decodes the video packet with the decoding algorithm that corresponds to the packet's encoding algorithm; for example, if the video packet is encoded with H.264, the video decoding module 240 decodes it with an H.264 decoder.
In other embodiments, the method may further include: the video decoding module 240 performing color-gamut conversion on the decoded video data according to the operating-system type of the electronic device 2. For example, if the operating system of the client (e.g., the electronic device 2) is Windows, the color space displayed on Windows is RGBA (or RGB32 and other RGB formats), whereas the server encodes the video (e.g., with H.264) in a YUV color space (such as YUV420, YUV440, or YUV444). The frame initially decoded by the video decoding module 240 is therefore in a YUV color space, and the module then converts the decoded video data to an RGB color space so that it can be displayed optimally on the client.
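The document does not state which conversion matrix the module uses; as one plausible choice, a minimal per-pixel sketch using the common BT.601 full-range coefficients:

```python
def yuv_to_rgb(y, u, v):
    """Convert one YUV pixel (each component 0-255) to RGB.
    The BT.601 full-range coefficients below are an assumption;
    the patent only says the decoded YUV frame is converted to RGB."""
    r = y + 1.402 * (v - 128)
    g = y - 0.344136 * (u - 128) - 0.714136 * (v - 128)
    b = y + 1.772 * (u - 128)
    clamp = lambda c: max(0, min(255, round(c)))  # keep values in 8-bit range
    return clamp(r), clamp(g), clamp(b)
```

A real implementation would apply this (or a SIMD/library equivalent) over every pixel of a YUV420 frame after upsampling the chroma planes.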
In step S12, the video decoding module 240 reads the decoded video data from the first buffer in sequence, for example one video frame at a time.
In step S13, the video decoding module 240 determines whether the timestamp of the video packet meets a preset requirement. In this embodiment, the timestamp is described using the multimedia time (MM Time) as an example; the MM Time of the video packet is obtained from the audio packets.
If the MM Time of the video packet agrees with (e.g., equals) the current time of the electronic device 2, the video decoding module 240 determines that the timestamp of the video packet meets the preset requirement, and in step S24 the video output module 244 outputs the read video data to the display device 20. The current time of the electronic device is the current time recorded by its operating system.
If the MM Time of the video packet does not agree with the current time of the electronic device 2, the video decoding module 240 determines that the timestamp does not meet the preset requirement, and the flow returns to step S12, where the module reads the next video frame.
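Steps S12–S13 thus form a gating loop: each frame read from the first buffer is output (step S24) only if its MM Time agrees with the audio-driven clock, otherwise the loop moves on to the next frame. A minimal sketch, in which the tolerance value and helper names are illustrative assumptions rather than details from this document:

```python
def present_frames(frames, clock_ms, display, tolerance_ms=15):
    """Steps S12-S13: walk the frame buffer in sequence; show a frame
    (step S24) only when its MM Time matches the clock within a small
    tolerance. `frames` is a list of (mm_time, frame) pairs, `clock_ms`
    returns the OS-recorded current time, `display` renders a frame."""
    for mm_time, frame in frames:
        if abs(mm_time - clock_ms()) <= tolerance_ms:  # S13: timestamp check
            display(frame)                              # S24: output to display
        # otherwise: back to S12 -- read the next frame
```

The strict equality described in the text is replaced here by a small tolerance window, since two clocks rarely agree to the exact millisecond in practice.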
In step S20, while the video decoding module 240 is receiving and decoding video packets, the audio decoding module 241 receives an audio packet from the server through the audio stream channel.
In step S21, the audio decoding module 241 decodes the audio packet and stores the decoded audio data (i.e., raw data) in a second buffer, shown as the PCM (Pulse Code Modulation) buffer in FIG. 4. At the same time, the audio decoding module 241 passes the timestamp of the audio packet (e.g., the MM Time) to the video packets, which synchronize against the audio packets' MM Time (see step S13).
In this embodiment, the audio decoding module 241 decodes the audio packet with the decoding algorithm that corresponds to the packet's encoding algorithm; for example, if the audio packet is PCM-encoded, the audio decoding module 241 decodes it with a PCM decoder.
In step S22, the generating module 242 reads the decoded audio data from the second buffer and moves it to a designated queue, shown as the PCM Ring in FIG. 4. In this embodiment, the generating module 242 is a thread, for example a producer thread.
In step S23, the generating module 242 spawns a consuming module 243 at every preset interval. The consuming module 243 then reads the decoded audio data from the designated queue and transfers it to a third buffer, shown as the Wave Ring in FIG. 4. In this embodiment, the consuming module 243 is a thread, for example a consumer thread, which terminates itself after transferring the decoded audio data to the third buffer.
In this embodiment, the preset interval is the time difference between the first decoded audio data and the first decoded video frame. In other words, in the present invention the decoded audio data is not sent to the third buffer for output immediately; it is first stored in the designated queue, and only after the video packets have been decoded into the first frame does the generating module 242 begin spawning a consuming module 243 to consume the data in that queue, so that sound and picture become synchronized.
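The producer/consumer arrangement of steps S22–S23 can be sketched with standard threading primitives. The queue names follow FIG. 4; the explicit delay parameter stands in for the measured gap between the first decoded audio data and the first decoded frame, and the short-lived consumer thread mirrors the self-terminating behavior described above. This is a sketch under those assumptions, not the patented implementation:

```python
import queue
import threading

pcm_ring = queue.Queue()   # designated queue ("PCM Ring" in FIG. 4)
wave_ring = queue.Queue()  # third buffer ("Wave Ring" in FIG. 4)

def producer(decoded_audio):
    """Step S22: move decoded audio from the second buffer into the queue."""
    for chunk in decoded_audio:
        pcm_ring.put(chunk)

def consumer():
    """Step S23: drain the queue into the third buffer, then terminate."""
    while not pcm_ring.empty():
        wave_ring.put(pcm_ring.get())

def run(decoded_audio, first_frame_delay_s):
    """Hold audio in the queue until the (assumed) first-frame delay has
    elapsed, then spawn the consumer; the thread ends by itself."""
    producer(decoded_audio)
    timer = threading.Timer(first_frame_delay_s, consumer)
    timer.start()
    timer.join()  # wait for the delayed consumer only so the sketch is self-contained
```

In the described system the producer would run continuously and spawn a fresh consumer at each preset interval; here a single delayed consumer suffices to show the hold-then-drain behavior.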
In step S24, the audio output module 245 reads the decoded audio data from the third buffer in sequence and outputs it to the display device 20.
The present invention can be applied to remote desktops, video conferencing, network video telephony, and the like. Taking a remote desktop application as an example, the following steps may be taken:
(1) Install the client program in the electronic device 2 and connect to the remote desktop.
(2) On the remote desktop, select audio/video playback software or an application with multimedia playback functions, where the video portion is encoded with H.264.
(3) The client program plays the video data and the audio data synchronously.
Finally, it should be noted that the above embodiments merely illustrate, and do not limit, the technical solutions of the present invention. Although the present invention has been described in detail with reference to the preferred embodiments, those of ordinary skill in the art should understand that the technical solutions of the present invention may be modified or equivalently replaced without departing from their spirit and scope.
Claims (10)
A video/audio synchronization system applied to an electronic device, the system comprising:
a video decoding module for decoding received video packets and storing the decoded video data in a first buffer;
the video decoding module being further configured to read the decoded video data from the first buffer in sequence;
a video output module for outputting the read video data to a display device of the electronic device when the timestamp of the video packet meets a preset requirement;
an audio decoding module for decoding received audio packets while video decoding is in progress, storing the decoded audio data in a second buffer, and passing the timestamp of each audio packet to the video packets;
a generating module for reading the decoded audio data from the second buffer, moving it to a designated queue, and spawning a consuming module at every preset interval;
a consuming module for reading the decoded audio data from the designated queue and transferring it to a third buffer; and
an audio output module for reading the decoded audio data from the third buffer in sequence and outputting it to the display device.
A video/audio synchronization method applied to an electronic device, the method comprising:
a first video decoding step of decoding received video packets and storing the decoded video data in a first buffer;
a second video decoding step of reading the decoded video data from the first buffer in sequence;
a video output step of outputting the read video data to a display device of the electronic device when the timestamp of the video packet meets a preset requirement;
an audio decoding step of decoding received audio packets while video decoding is in progress, storing the decoded audio data in a second buffer, and passing the timestamp of each audio packet to the video packets;
a generating step of reading the decoded audio data from the second buffer, moving it to a designated queue, and triggering a consuming step at every preset interval;
a consuming step of reading the decoded audio data from the designated queue and transferring it to a third buffer; and
an audio output step of reading the decoded audio data from the third buffer in sequence and outputting it to the display device.
The video/audio synchronization method of claim 6, wherein the first video decoding step further comprises:
performing color-gamut conversion on the decoded video data according to the operating-system type of the electronic device.
The video/audio synchronization method of claim 6, wherein the second video decoding step further comprises:
determining that the timestamp of the video packet meets the preset requirement if the timestamp agrees with the current time of the electronic device.
The video/audio synchronization method of claim 6, wherein the preset interval is the time difference between the first decoded audio data and the first decoded video data.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| TW102114397A TW201442496A (en) | 2013-04-23 | 2013-04-23 | System and method for synchronizing video data and audio data |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| TW201442496A true TW201442496A (en) | 2014-11-01 |
Family
ID=52423080
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| TW102114397A TW201442496A (en) | 2013-04-23 | 2013-04-23 | System and method for synchronizing video data and audio data |
Country Status (1)
| Country | Link |
|---|---|
| TW (1) | TW201442496A (en) |
Cited By (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| TWI600319B (en) * | 2016-09-26 | 2017-09-21 | A method for capturing video and audio simultaneous for one-to-many video streaming |