HK1114491A - Method of controlling an apparatus to synchronize av data with text subtitle data - Google Patents
Description
The present application is a divisional application of an invention patent application having an application date of 19/2/2005, an application number of 200580005501.8, and an invention title of "information storage medium on which text subtitle data synchronized with AV data is recorded, and a reproducing method and apparatus".
Technical Field
The present invention relates to subtitles, and more particularly, to an information storage medium of a recording/reproducing medium including text subtitle data that is rendered to be output in synchronization with audio/video (AV) data. The invention also relates to a reproduction method and to an apparatus using the reproduction method.
Background
In order to display subtitles during reproduction of audio/video (AV) data, a presentation graphics stream containing subtitle data is formed into bitmap data and then multiplexed together with a video stream and an audio stream, thereby creating AV data. Since such bitmap-format subtitle data is multiplexed together with a video stream and an audio stream, such bitmap-format subtitle data can be smoothly reproduced in synchronization with the video stream. However, these techniques have problems in that the bitmap-format subtitle data is large and there is a limit to the number of subtitles that can be reused within the maximum bit rate. The maximum bit rate is defined when a specific information storage medium is applied.
In addition to the bitmap-format subtitle data, there is text subtitle data. Text subtitle data is designed to eliminate difficulties in creating and editing bitmap-format subtitle data. However, the text subtitle data exists separately without being multiplexed with the video stream. As a result, unlike a presentation graphics stream containing subtitle data in a conventional bitmap format, it is difficult to synchronize text subtitle data with a video stream using only Presentation Time Stamps (PTSs) defined in headers of Packetized Elementary Stream (PES) packets. Further, when jumping to a random position and reproducing data at the random position, it is also difficult to resynchronize text subtitle data with a video stream.
DISCLOSURE OF THE INVENTION
Technical solution
The present invention provides an information storage medium for a recording/reproducing apparatus, on which text subtitle data specifying an output start time and an output end time for each subtitle item is recorded, and a method and apparatus for reproducing the text subtitle data in synchronization with a video stream during normal play or trick play of the video stream.
Advantageous effects
According to aspects of the present invention, text subtitle data can be reproduced in synchronization with an AV stream not only during normal play, but also during trick play (such as jumping to other portions of the AV stream, still frame, slow motion, fast play).
Drawings
Fig. 1A to 1E illustrate a process of multiplexing a video stream, an audio stream, and other streams into source packets to construct an AV stream and storing the AV stream in an information storage medium according to an aspect of the present invention;
fig. 2 is a schematic block diagram of an apparatus for reproducing an AV stream according to an aspect of the present invention;
fig. 3A and 3B illustrate an operation of inputting source packets constituting an AV stream stored in an information storage medium to an apparatus reproducing the AV stream according to an aspect of the present invention;
fig. 4A to 4C illustrate a variation of a System Time Clock (STC) of an apparatus reproducing an AV stream when a source packet having one Arrival Time Clock (ATC) sequence is input to the apparatus reproducing the AV stream, according to an aspect of the present invention;
fig. 5 illustrates a relationship between navigation information for specifying a reproduction order and a reproduction position of AV streams stored in an information storage medium and the AV streams according to an aspect of the present invention;
fig. 6A and 6B are diagrams for explaining a problem of text subtitle data according to an aspect of the present invention;
fig. 7A and 7B illustrate a method of reproducing a subtitle with addition of reference playitem information in which a subtitle should be displayed according to an aspect of the present invention;
fig. 8A and 8B illustrate a second method of reproducing subtitles by allocating PTS based on total time to record text subtitles according to an aspect of the present invention;
fig. 9 illustrates a relationship between time information indicated by each playitem and a total time of a playlist according to an aspect of the invention; and
fig. 10 is a schematic block diagram of an apparatus for reproducing text subtitle data and AV data according to an aspect of the present invention.
Best mode for carrying out the invention
According to an aspect of the present invention, an information storage medium of a recording/reproducing apparatus includes subtitle data that is output in synchronization with audio/video (AV) data and output time information indicating an output start time and/or an output end time of the subtitle data.
According to an aspect of the present invention, the information storage medium may further include playitem information indicating AV data with which output of subtitle data should be synchronized.
According to another aspect of the present invention, the output time information is created by referring to a System Time Clock (STC) of the playitem information.
According to another aspect of the present invention, the output time information is created by referring to a total time included in a playlist indicating AV data with which the output of subtitle data should be synchronized.
According to another aspect of the present invention, a method of reproducing text subtitle data and AV data comprises: reading output time information indicating an output start time and/or an output end time of subtitle data to be output in synchronization with AV data; and outputting the subtitle data according to the output time information. In the outputting of the subtitle data, information indicating the AV data with which the output of the subtitle data should be synchronized may also be read.
According to another aspect of the present invention, an apparatus for reproducing AV data and text subtitle data includes: an AV data processing unit, an output time information extraction unit, and a subtitle output unit. The AV data processing unit displays AV data. The output time information extraction unit reads output time information indicating an output start time and/or an output end time of subtitle data output in synchronization with the AV data. The subtitle output unit reads subtitle data according to the output time information and outputs the read subtitle data in synchronization with the AV data.
According to an aspect of the present invention, the output time information extraction unit further reads information indicating AV data with which output of subtitle data should be synchronized.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings. In the drawings, like numbering represents like elements throughout. The embodiments are described below in order to explain the present invention by referring to the figures.
Fig. 1A to 1E illustrate a process of multiplexing a video stream, an audio stream, and other streams into source packets to construct an audio/video (AV) stream and storing the AV stream in an information storage medium according to an aspect of the present invention. Referring to fig. 1A, an AV stream includes at least a video stream, an audio stream, and a presentation graphics stream containing subtitle data in a bitmap format. The AV stream may also include other data streams made by the manufacturer for specific purposes. Each data stream, such as a video stream, an audio stream, or other data stream, is referred to as an elementary stream. These elementary streams are packetized into a Packetized Elementary Stream (PES) as shown in fig. 1B.
Each PES includes a PES header and PES packet data. Recorded in the PES header are Stream_ID information identifying whether the PES packet data is video data, audio data, or other data; time information such as the Decoding Time Stamp (DTS) and Presentation Time Stamp (PTS) of the PES packet; and other information.
Such a video PES, an audio PES, and PESs of other data are multiplexed to construct 188-byte MPEG (Moving Picture Experts Group)-2 Transport Stream (TS) packets, as shown in fig. 1C. Each 188-byte MPEG-2 TS packet includes an MPEG-2 TS header in which information about the payload data is recorded. The MPEG-2 TS header includes packet ID information indicating the type of the payload data, an adaptation field including a Program Clock Reference (PCR) for setting the System Time Clock (STC) of an apparatus reproducing the AV stream, and other information. The STC is the reference time for the DTS used to decode a PES packet and for the PTS used to output it.
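As a concrete illustration of the header just described, the following sketch extracts the packet ID and, when present, the PCR from a 188-byte TS packet. The bit layout follows the MPEG-2 systems standard; the function name and return shape are our own:

```python
def parse_ts_header(packet: bytes):
    """Parse the 4-byte header of a 188-byte MPEG-2 TS packet."""
    assert len(packet) == 188 and packet[0] == 0x47   # 0x47 is the sync byte
    pid = ((packet[1] & 0x1F) << 8) | packet[2]       # 13-bit packet ID
    afc = (packet[3] >> 4) & 0x03                     # adaptation field control
    pcr = None
    if afc in (2, 3) and packet[4] > 0 and packet[5] & 0x10:
        # PCR: 33-bit base + 6 reserved bits + 9-bit extension, in 6 bytes
        b = packet[6:12]
        base = (b[0] << 25) | (b[1] << 17) | (b[2] << 9) | (b[3] << 1) | (b[4] >> 7)
        ext = ((b[4] & 0x01) << 8) | b[5]
        pcr = base * 300 + ext                        # value in 27 MHz units
    return pid, pcr
```

A reproducing apparatus would load `pcr` into its STC counter whenever a packet carrying the PCR arrives.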
A 4-byte header is added to each 188-byte MPEG-2 TS packet to construct a source packet, as shown in fig. 1D, and a group of such source packets constitutes an AV stream. The header of a source packet includes copy permission information, containing content protection information for preventing illegal copying of the source packet, and an Arrival Time Stamp (ATS) indicating the time when the source packet arrives at an apparatus reproducing the AV stream. The constructed AV stream is recorded on the information storage medium shown in fig. 1E. It should be understood that the information storage medium may be an optical medium (such as a CD, DVD, or Blu-ray disc), a magnetic medium (such as a DVR, flash memory, or hard drive), a magneto-optical medium, or another medium.
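The 4-byte source packet header can be split out as below. This sketch assumes the common 192-byte layout in which 2 bits of copy permission information precede a 30-bit ATS; the helper name is illustrative:

```python
def parse_source_packet(sp: bytes):
    """Split a 192-byte source packet into its 4-byte header and TS payload."""
    assert len(sp) == 192
    header = int.from_bytes(sp[:4], "big")
    copy_permission = header >> 30        # assumed 2-bit copy permission field
    ats = header & 0x3FFFFFFF             # assumed 30-bit arrival time stamp
    return copy_permission, ats, sp[4:]   # remaining 188 bytes: the TS packet
```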
Fig. 2 is a schematic block diagram of an apparatus for reproducing an AV stream according to an aspect of the present invention. As shown in fig. 2, the reading unit 210 reads an AV stream constructed as described above with reference to fig. 1A to 1E from an information storage medium on which the AV stream is recorded. The reading unit 210 transmits each source packet to the demultiplexing unit 220 according to the ATS recorded in the header of the source packet. The demultiplexing unit 220 removes the header from the received source packet to reconstruct the MPEG-2 TS packet. When PCR information is included in the header of the MPEG-2 TS packet, the demultiplexing unit 220 sets the STC counter 250 of the apparatus based on the PCR information; it also sorts each MPEG-2 TS packet into a video stream, an audio stream, or another data stream based on the packet ID information, thereby reconstructing the PES packets of the corresponding data stream. The STC counter 250 continuously increases according to the count of the system clock and is occasionally reset, not by gradual adjustment, but directly to the value indicated by the PCR. It should be understood that the apparatus may also record data, and need not include a reading unit in all aspects, as long as it receives the transport stream.
The PES packets reconstructed in this manner are transmitted to the video decoder 230 and the audio decoder 240 when the DTS included in the header of the PES packets is identical to the STC counter 250 of the apparatus to reproduce the AV stream. When the PTS is the same as the value of the STC counter 250, decoded video data or decoded audio data is output. At this time, a video stream unit output at a specific time is referred to as a Video Presentation Unit (VPU), and an audio stream unit output at a specific time is referred to as an Audio Presentation Unit (APU). In addition, a video stream unit containing data to be decoded by the video decoder 230 to create a VPU is referred to as a Video Access Unit (VAU), and an audio stream unit containing data to be decoded by the audio decoder 240 to create an APU is referred to as an Audio Access Unit (AAU).
In other words, the source packets recorded on the information storage medium are demultiplexed into VAUs and AAUs to be decoded at a specific time. Then, when the DTS recorded in the corresponding access unit is substantially the same as the value of the STC counter 250, the VAU and the AAU are transmitted to the video decoder 230 and the audio decoder 240. Thus, VPUs and APUs are created. The created VPU and APU are output when the PTS of the corresponding presentation unit is the same as the value of the STC counter 250. The PTS of the audio stream may indicate a time when the AAU is input to the audio decoder 240 or output from the audio decoder 240. Although not required in all aspects, the VAU and/or AAU may be buffered to synchronize the VAU and AAU with the STC counter.
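The DTS/PTS gating just described can be modeled with a toy scheduler. Everything here (the dictionary keys, the integer STC ticks) is an illustrative simplification of the apparatus, not its actual implementation:

```python
def simulate(access_units, decode, ticks):
    """Toy model of fig. 2 timing: the DTS gates decoding and the PTS gates
    output, each compared against the STC counter on every tick."""
    pending = list(access_units)   # access units (VAU/AAU) awaiting their DTS
    decoded = []                   # presentation units (VPU/APU) awaiting PTS
    output = []
    for stc in range(ticks):
        for au in [a for a in pending if a["dts"] == stc]:
            pending.remove(au)
            decoded.append(decode(au))            # decoding yields a VPU/APU
        for pu in [p for p in decoded if p["pts"] == stc]:
            decoded.remove(pu)
            output.append((stc, pu["data"]))      # presented at its PTS
    return output
```

Feeding in a VAU with DTS 1 / PTS 3 and an AAU with DTS 2 / PTS 4 yields output at ticks 3 and 4 respectively, mirroring the decode-then-present pipeline described above.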
Like a video stream or an audio stream, a presentation graphics stream for subtitle data in bitmap format is also formed as access units and presentation units, the DTS and PTS of each unit operating in synchronization with the STC counter 250. The synchronous operation of the DTS and PTS with the STC counter 250 achieves reproduction synchronization between the presentation graphics stream and the bitmap-format subtitle data.
Fig. 3A and 3B illustrate an operation of inputting source packets constituting an AV stream stored in an information storage medium to an apparatus reproducing the AV stream. Referring to fig. 3A, an AV stream includes a plurality of source packets. The header of each source packet includes ATS information, i.e., time information indicating when that source packet is to be input to the apparatus reproducing the AV stream. Further, as shown in fig. 3B, the Arrival Time Clock (ATC) counter of the apparatus is reset to the ATS of the source packet first input to the apparatus. The ATS in the header of each subsequent source packet is compared with the count of the ATC counter, and the source packet is input to the apparatus at the time when its ATS is identical to that count. If the ATSs of the source packets are connected without interruption, the source packets belong to the same ATC sequence. Typically, one AV stream includes one ATC sequence, but it may include a plurality of ATC sequences.
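A minimal sketch of the ATS-driven input pacing described above, with the ATC counter reset to the first packet's ATS; the data shapes are illustrative:

```python
def deliver_by_ats(source_packets):
    """Yield (atc_time, payload) pairs, pacing delivery by each packet's ATS."""
    it = iter(source_packets)
    first = next(it)
    atc = first["ats"]                 # ATC counter reset to the first ATS
    yield atc, first["payload"]
    for sp in it:
        while atc < sp["ats"]:         # wait until the counter reaches the ATS
            atc += 1
        yield atc, sp["payload"]
```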
Fig. 4A to 4C illustrate a variation in STC of an apparatus reproducing an AV stream when a source packet having one ATC sequence is input to the apparatus reproducing the AV stream. Referring to fig. 4A, source packets included in one ATC sequence are sequentially input to an apparatus reproducing an AV stream according to their ATS and then reconstructed into MPEG-2TS packets. At this time, if the PCR information is included in the header of the MPEG-2TS packet, the apparatus reproducing the AV stream resets its STC using the PCR information, as shown in fig. 4B and 4C. The STC sequence indicates a sequence of MPEG-2TS packets controlled by the STC, which continuously increases according to PCR information included in a header of the MPEG-2TS packet.
An ATC sequence includes at least one STC sequence. When one STC sequence changes to another, i.e., when an STC discontinuity occurs, PCR information for resetting the STC should be recorded in the first MPEG-2 TS packet of the new STC sequence.
Referring to fig. 4A to 4C, when an AV stream having one ATC sequence is reproduced, a total time using 0 as a starting point of reproducing the AV stream gradually increases, and STC sequences #0, #1, and #2 have different STC values.
In the case of the video stream, the audio stream, and the presentation graphics stream, even if a discontinuity occurs in an STC sequence and the STC is reset, the DTS and PTS of each data stream can still be processed by the STC of the sequence that governs that time information, because these streams are multiplexed into one AV stream. Text subtitle data, however, is not included in any specific STC sequence: it exists separately from the AV stream, and a plurality of ATC sequences may be involved because the text subtitle data may span a plurality of AV streams. As a result, the text subtitle data cannot be given an output start time and an output end time using a PTS based on an STC.
Fig. 5 illustrates a relationship between navigation information for specifying a reproduction order and a reproduction position of AV streams stored in an information storage medium and the AV streams. Referring to fig. 5, an AV stream, clip information including attribute information on the AV stream, and navigation information indicating a reproduction order of the AV stream are included in an information storage medium. The navigation information includes title information on at least one title included in the information storage medium and at least one playlist including a reproduction order of AV streams reproduced according to each title.
Referring to fig. 5, a playlist includes at least one playitem, each including reference information indicating an AV stream to be reproduced. The playitem includes: Clip_info_file, indicating the clip information that contains attribute information on the AV stream to be reproduced; ref_to_STC_id, indicating which STC sequence within that AV stream contains the STC referred to by the playitem; and IN_time and OUT_time information, indicating the start and end of the playitem in the indicated STC sequence.
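The playitem fields above can be summarized as a small data structure. The field names mirror the terms in the text, and the sample values are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class PlayItem:
    """Sketch of the playitem fields described above (values illustrative)."""
    clip_info_file: int   # which clip information file (hence which AV stream)
    ref_to_stc_id: int    # which STC sequence inside that clip's ATC sequence
    in_time: int          # presentation start, on that STC sequence's timeline
    out_time: int         # presentation end, on the same timeline

# A playlist is then just an ordered list of playitems:
playlist = [PlayItem(1, 0, 100, 500), PlayItem(2, 0, 250, 700)]
```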
Hereinafter, a process of reproducing an AV stream from an information storage medium having the data structure as described above will be described. A playlist indicated by a title to be reproduced is selected, and if a playitem included in the selected playlist is normally played, the playitem is sequentially selected from the top. On the other hand, if the playitems included in the selected playlist are randomly accessed, the playitems are sequentially selected from among the designated playitems.
If playitem #0 is first selected, clip information #1 is selected based on the Clip_info_file = 1 information included in playitem #0. Based on the ref_to_STC_id = 0 information included in playitem #0, STC sequence #0 is selected from among the ATC sequences of the AV stream indicated by clip information #1. Based on the IN_time = IN1 and OUT_time = OUT1 information included in the playitem, the AV stream is reproduced from the IN1 position to the OUT1 position indicated by the STC corresponding to STC sequence #0.
Next, if playitem #1 is selected, clip information #2 is selected based on the Clip_info_file = 2 information included in playitem #1. Based on the ref_to_STC_id = 0 information included in playitem #1, STC sequence #0 is selected from among the ATC sequences of the AV stream indicated by clip information #2. Based on the IN_time = IN2 and OUT_time = OUT2 information included in playitem #1, the AV stream is reproduced from the IN2 position to the OUT2 position indicated by the STC corresponding to STC sequence #0. It should be understood that any following playitems are reproduced in the same manner.
In other words, a playlist is selected, and a playitem is selected from the selected playlist to find the position of the AV stream to be reproduced. After the AV stream starting from the found position is transmitted, according to the ATS, to the apparatus reproducing the AV stream, the STC of the apparatus is reset using an MPEG-2 TS packet in the transmitted data that includes PCR information. Each VAU and AAU starts to be decoded at the time when the DTS included in the access unit is the same as the STC, which creates a VPU or an APU. The created VPUs and APUs are output when the PTS of each presentation unit is the same as the STC.
Further, in order for an apparatus reproducing the AV stream to display subtitles corresponding to the video data, the text subtitle data defines an output start time and an output end time (begin, end) for each subtitle item defined in the text subtitle data. If PTSs based on the STCs of the video and audio streams in the AV streams were used as the output start and end times (begin, end) of the subtitle items, the (begin, end) values of sequentially defined subtitle items would not increase continuously, and specific time ranges would overlap. Here, it should be understood that the order in which the output start and end times are defined coincides with the reproduction order within one text subtitle. As a result, the ordering relationship between the subtitle items cannot be recognized.
In addition, the same output start time and output end time (begin, end) may be used between different subtitle items. Thus, when a playlist is selected and reproduced, if a jump is made to a random position and data at the random position is reproduced instead of normal sequential reproduction, it is substantially impossible to accurately search for a subtitle item at the same position as a video stream.
Hereinafter, a method for solving the above-described problem of the text subtitle data will be described. In an aspect of the present invention, text subtitle data produced in a markup language form is taken as an example of structured text subtitle data, but according to other aspects of the present invention, the text subtitle data may have a structure in a binary form. The structure of the binary form is obtained by giving a meaning to each specific byte of the binary data sequence, thereby structuring the text subtitle data. In other words, the text subtitle data is structured in the following manner: the first few bytes indicate information on the subtitle item 1 and the next few bytes indicate information on the subtitle item 2. However, it should be understood that the text subtitle data may be structured in another alternative method.
Fig. 6A and 6B are diagrams for explaining a problem of text subtitle data. Referring to fig. 6A and 6B, the subtitle item of the subtitle 610 corresponds to the STC-sequence #0, where (begin, end) of the subtitle "text 1" is (10, 12) and (begin, end) of the subtitle "text 2" is (20, 22). The subtitle item of the subtitle 620 corresponds to the STC-sequence #1, where (begin, end) of the subtitle "text 3" is (17, 19), (begin, end) of the subtitle "text 4" is (25, 27), and (begin, end) of the subtitle "text 5" is (30, 33). The subtitle item of the subtitle 630 corresponds to the STC-sequence #2, where (begin, end) of the subtitle "text 6" is (5, 8), and (begin, end) of the subtitle "text 7" is (25, 27).
In the case of normal play, the subtitles are output in the order 610, 620, and then 630, but the (begin, end) values of the subtitle items do not increase monotonically across that order. As a result, (begin, end) information alone cannot be used to identify the ordering relationship between subtitles. In addition, the subtitle "text 4" and the subtitle "text 7" have the same (begin, end). Text subtitle data constructed in this manner should be output in synchronization with the video data; however, if reproduction does not proceed normally from the first playitem of the video stream but instead jumps to the position corresponding to time "25" of the STC-sequence #2, a decoder processing the text subtitle data cannot determine which of the subtitles "text 4" and "text 7" corresponds to the position of the current video data.
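The ambiguity can be reproduced with the (begin, end) values of fig. 6: a lookup by time alone returns two candidates at time 25. This is only an illustrative model of the problem:

```python
# (name, begin, end) triples copied from fig. 6A/6B.
subtitles = [
    ("text 1", 10, 12), ("text 2", 20, 22),                     # STC-sequence #0
    ("text 3", 17, 19), ("text 4", 25, 27), ("text 5", 30, 33), # STC-sequence #1
    ("text 6", 5, 8), ("text 7", 25, 27),                       # STC-sequence #2
]

def candidates_at(t):
    """All items whose (begin, end) interval covers time t."""
    return [name for name, begin, end in subtitles if begin <= t <= end]

# After a jump to time 25 the decoder finds two equally valid items:
print(candidates_at(25))   # ['text 4', 'text 7']
```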
Thus, in order to output each subtitle item defined in the text subtitle data in synchronization with the video stream, the following two methods are used:
(1) each subtitle item also includes reference playitem information with which the corresponding subtitle item is displayed, and a PTS created based on the STC is assigned as (begin, end).
(2) A PTS created based on the total time of a playlist including at least one AV stream, with which reproduction of corresponding subtitle data should be synchronized, is assigned to an output start time and an output end time (begin, end) of each subtitle item.
In both methods, one of the output start time information (begin) and the output end time information (end) may be included instead of both of them as the time information.
Fig. 7A and 7B illustrate a method of reproducing a subtitle with addition of reference playitem information in which a subtitle should be displayed according to an aspect of the present invention. Referring to fig. 7A and 7B, a subtitle item of the subtitle 710 is included in an STC-sequence #0, which is indicated by a playitem # 0. The subtitle item of the subtitle 720 is included in the STC-sequence #1, and the STC-sequence #1 is indicated by the playitem # 1. In addition, a subtitle item of the subtitle 730 is included in the STC-sequence #2, and the STC-sequence #2 is indicated by the playitem # 2. The PTS created based on the STC is used as (begin, end) of each subtitle item.
In this case, the subtitle item of the subtitle 710 specifies the number of the playitem with which it is used via the additional information <PlayItem_number = 0>. Thus, the PTS used as (begin, end) by the subtitle item of the subtitle 710 is created based on the STC of STC sequence #0 indicated by playitem #0, and should be controlled according to that STC.
Similarly, the subtitle items of the subtitles 720 and 730 use the additional information <PlayItem_number = 1> and <PlayItem_number = 2> to specify the numbers of the playitems with which they are used, thereby solving the problem described with reference to fig. 6. In addition, the reference playitem information included in the text subtitle data may be individually included in each subtitle item.
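A sketch of method (1): keying each subtitle item by its reference playitem number in addition to its STC-based (begin, end) makes the lookup unique. Field names are illustrative:

```python
def find_item(subtitles, playitem_number, stc_pts):
    """Method (1): disambiguate subtitle items by reference playitem number."""
    for item in subtitles:
        if (item["playitem"] == playitem_number
                and item["begin"] <= stc_pts <= item["end"]):
            return item["text"]
    return None

# The two colliding items of fig. 6, now tagged with their playitems:
subs = [
    {"playitem": 1, "begin": 25, "end": 27, "text": "text 4"},
    {"playitem": 2, "begin": 25, "end": 27, "text": "text 7"},
]
# A jump into playitem #2 at STC time 25 now resolves uniquely:
print(find_item(subs, 2, 25))   # text 7
```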
Fig. 8A and 8B illustrate a second method of reproducing subtitles by allocating PTSs based on the total time, according to an aspect of the present invention. According to fig. 8A and 8B, an apparatus for reproducing an AV stream from the information storage medium according to the present invention is allocated a separate storage space in which the total time, i.e., the running time of the AV stream, is recorded. Specifically, an apparatus for reproducing an AV stream has a Player Status Register (PSR) as a space for storing information required for reproduction. When a playlist is selected to reproduce an AV stream, the register storing the total time is set to 0 and increases sequentially as reproduction of the AV stream indicated by the playlist proceeds. In other words, the register is set to 0 at the IN_time of the first playitem of the selected playlist and increases sequentially until the OUT_time of the corresponding playitem. Once the next playitem is selected, the total time stored in the register again increases sequentially from the IN_time of the corresponding playitem.
Fig. 9 shows a relationship between the time information indicated by each playitem and the total time of the playlist. Referring to fig. 9, X indicates the time interval for reproducing the playitem indicated by PlayItem_id = 0, Y indicates the time interval for reproducing the playitem indicated by PlayItem_id = 1, and Z indicates the time interval for reproducing the playitem indicated by PlayItem_id = 2. In other words, the time information included in the playlist on the overall time axis is matched one-to-one with the time in a specific STC sequence within a specific ATC sequence included in each playitem.
Thus, as described with reference to fig. 8A and 8B, each item of text subtitle data expresses its PTS using time information on the overall time axis as the output start time and output end time of the corresponding subtitle item. During synchronization and resynchronization with the AV stream, the apparatus can then refer to the register storing the overall time of the current reproduction instant, so that the AV stream and the subtitles are reproduced smoothly.
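A sketch of method (2): mapping an STC time inside a playitem onto the playlist's cumulative total-time axis, under the assumption that each playitem contributes OUT_time − IN_time to the total; the function name and tuple layout are ours:

```python
def to_total_time(playlist, playitem_index, stc_time):
    """Map an STC time inside a playitem onto the playlist's global time axis."""
    total = 0
    for i, (in_time, out_time) in enumerate(playlist):
        if i == playitem_index:
            return total + (stc_time - in_time)
        total += out_time - in_time     # duration contributed by earlier items
    raise IndexError(playitem_index)

playlist = [(100, 500), (250, 700), (0, 300)]   # (IN_time, OUT_time) per playitem
# STC time 250 inside playitem #1 is 0 seconds into it, so it lands at 400
# (the 400-unit duration of playitem #0) on the global axis:
print(to_total_time(playlist, 1, 250))   # 400
```

A subtitle item stamped with a global (begin, end) can then be matched directly against the total-time register, regardless of which STC sequence is currently playing.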
Fig. 10 is a schematic block diagram of an apparatus for reproducing text subtitle data and AV data according to the present invention. Referring to fig. 10, the AV data processing unit 1010 reads AV data stored in an information storage medium and outputs the read AV data. The output time information extraction unit 1020 reads output time information indicating an output start time and an output end time of subtitle data to be output in synchronization with the AV data. As described above, the output start time and the output end time are expressed as (begin, end) and are created by referring to the STC of a playitem; the output time information may include either or both of the output start time and the output end time. In this case, playitem information indicating the AV data with which the output of the subtitle data should be synchronized is also read, to determine the playitem to whose STC the output of the subtitle data should be synchronized. Alternatively, as described above, the output time information may be created by referring to the total time of the playlist indicating the AV data with which the output of the subtitle data should be synchronized. The subtitle output unit 1030 reads subtitle data according to the output time information and outputs the read subtitle data in synchronization with the AV data.
The method of reproducing text subtitle data and AV data may also be embodied as a computer program. Codes and code segments forming the computer program may be easily constructed by a computer programmer in the art. Further, the computer program is stored in a computer-readable medium, read and executed by a computer, thereby implementing a method for reproducing text-based subtitle data and AV data. Examples of the computer readable medium include magnetic tape, optical data storage devices, and carrier waves.
Although a few embodiments of the present invention have been shown and described, it would be appreciated by those skilled in the art that changes may be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the claims and their equivalents.
Claims (14)
1. A method of controlling an apparatus to synchronize AV data with text subtitle data, the method comprising:
forming the AV data and the text subtitle data into an access unit;
decoding the access unit when the first time data in the access unit is substantially the same as the time data in the apparatus, thereby creating a presentation unit of AV data and text subtitle data; and
outputting the decoded presentation units when second time data in the mutually coinciding presentation units of the AV data and the text subtitle data is substantially the same as the time data in the apparatus.
2. The method of claim 1, further comprising:
packetizing the AV data into packetized elementary stream packets;
multiplexing the packetized elementary stream packets into MPEG-2 TS packets; and
adding headers to the MPEG-2 TS packets to construct source packets, such that a group of source packets constitutes the AV data.
3. The method of claim 2, wherein the AV data includes at least a video stream, an audio stream, and a presentation graphics stream containing subtitle data in a bitmap format.
4. The method of claim 2, wherein each packetized elementary stream comprises a packetized elementary stream header and packetized elementary stream packet data, the packetized elementary stream header comprising stream ID information to identify a type of the packetized elementary stream packet data.
5. The method of claim 2, wherein each MPEG-2 TS packet is 188 bytes long and comprises:
a header including information on payload data and packet ID information indicating a type of the payload data; and
an adaptation field comprising a program clock reference for setting a system clock of the apparatus.
6. The method of claim 2, wherein the header of each source packet comprises:
copy permission information containing content protection information for preventing illegal copying of the source packet; and
an arrival time stamp indicating a time at which the source packet arrives at the apparatus.
7. The method of claim 1, further comprising:
detecting reference playitem information and output time information indicating an output start time and/or an output end time of text subtitle data to be output in synchronization with AV data; and
outputting the text subtitle data in synchronization with the output AV data according to the detected output time information.
8. The method of claim 7, wherein a subtitle item for each subtitle in the text subtitle data specifies a number of a playitem for reproducing the subtitle item.
9. The method of claim 7, wherein the subtitle item of each subtitle specifies a number of a playitem using additional information.
10. The method of claim 9, wherein the reference playitem information included in the text subtitle data is included in each subtitle item separately.
11. The method of claim 1, further comprising:
detecting a presentation time stamp based on a total time of a playlist including AV data with which reproduction of corresponding subtitle data is to be synchronized;
assigning the presentation time stamp to an output start time and an output end time of each subtitle item; and
outputting the text subtitle data in synchronization with the output AV data according to the assigned presentation time stamps.
12. The method of claim 11, further comprising: referring to a register storing the total time during synchronization and resynchronization with the AV stream.
13. The method of claim 1, wherein the text subtitle data is reproduced in synchronization with the AV data during a trick play mode.
14. The method of claim 13, wherein the trick play mode includes jumping to another portion of the AV data, presenting still frames of the AV data, slow-motion play, fast play, or a combination thereof.
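The source-packet structure recited in claims 2, 5, and 6 can be sketched as follows. The claims only state that the added header carries copy permission information and an arrival time stamp; the specific 2-bit/30-bit split in a 4-byte header is an assumption borrowed from similar source-packet formats, and all names in the sketch are illustrative.

```python
# Sketch: prefix a 188-byte MPEG-2 TS packet with a 4-byte header carrying
# copy permission information and an arrival time stamp, producing a
# 192-byte source packet. The 2-bit/30-bit field split is an assumption,
# not stated in the claims.
import struct

TS_PACKET_SIZE = 188
TS_SYNC_BYTE = 0x47  # every MPEG-2 TS packet starts with this sync byte


def make_source_packet(ts_packet: bytes,
                       copy_permission: int,
                       arrival_time_stamp: int) -> bytes:
    if len(ts_packet) != TS_PACKET_SIZE or ts_packet[0] != TS_SYNC_BYTE:
        raise ValueError("expected a 188-byte TS packet starting with 0x47")
    if not (0 <= copy_permission < 4 and 0 <= arrival_time_stamp < 2 ** 30):
        raise ValueError("header field out of range")
    # Pack copy permission into the top 2 bits, arrival time stamp below.
    header = (copy_permission << 30) | arrival_time_stamp
    return struct.pack(">I", header) + ts_packet


ts = bytes([TS_SYNC_BYTE]) + bytes(187)  # dummy 188-byte TS packet
sp = make_source_packet(ts, copy_permission=0b11, arrival_time_stamp=12345)
print(len(sp))  # 192
```

The arrival time stamp lets the apparatus restore the original packet timing on input, while the copy permission bits carry the content protection information of claim 6.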
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| KR10-2004-0011678 | 2004-02-21 |
Related Parent Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| HK07107866.9A Addition HK1103845B (en) | 2004-02-21 | 2005-02-19 | Information storage medium having recorded thereon text subtitle data synchronized with av data, and reproducing method and apparatus therefor |
Related Child Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| HK07107866.9A Division HK1103845B (en) | 2004-02-21 | 2005-02-19 | Information storage medium having recorded thereon text subtitle data synchronized with av data, and reproducing method and apparatus therefor |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| HK1114491A true HK1114491A (en) | 2008-10-31 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN1922681B (en) | Information storage medium recorded with text subtitle data synchronized with AV data, and reproduction method and apparatus | |
| JP4678761B2 (en) | Method for synchronizing a data stream comprising audio data and / or video data and / or another data | |
| JP2019079591A (en) | Reproduction device and reproduction method | |
| CA2517025C (en) | Methods and apparatuses for reproducing and recording still picture and audio data and recording medium having data structure for managing reproduction of still picture and audio data | |
| CN1685420A (en) | Method and apparatus for recording a multi-component data stream and a high-density recording medium having a multi-component data stream recorded theron and reproducing method and apparatus of said r | |
| JP5052763B2 (en) | Information storage medium in which video data is recorded, recording method, recording apparatus, reproducing method, and reproducing apparatus | |
| JP2017204319A (en) | recoding media | |
| CN101124636A (en) | Information storage medium, method and apparatus for reproducing information from information storage medium, and recording apparatus and recording method for recording video data on information storage medium | |
| WO2005004146A1 (en) | Recording medium having data structure including graphic data and recording and reproducing methods and apparatuses | |
| JP2005354706A (en) | Information recording medium for recording AV stream including graphic data, reproducing method and reproducing apparatus | |
| CN1685425A (en) | Recording medium having data structure for managing reproduction of multiple graphic streams, recording and reproduction method and apparatus | |
| HK1114491A (en) | Method of controlling an apparatus to synchronize av data with text subtitle data | |
| CN1708121A (en) | Information storage medium containing AV stream including graphic data, and reproducing method and apparatus therefor | |
| HK1103845B (en) | Information storage medium having recorded thereon text subtitle data synchronized with av data, and reproducing method and apparatus therefor | |
| MXPA06009466A (en) | Information storage medium having recorded thereon text subtitle data synchronized with av data, and reproducing method and apparatus therefor | |
| JP2019067481A (en) | recoding media |