HK1082111B - Reproducing apparatus - Google Patents
Reproducing apparatus
- Publication number
- HK1082111B
- Authority
- HK
- Hong Kong
Description
Technical Field
The present invention relates to an information storage medium on which subtitles for supporting multiple languages using text data and downloadable fonts are recorded, and an apparatus therefor.
Background
A conventional Digital Versatile Disc (DVD) uses bitmap images for subtitles. The bitmap subtitle data is losslessly encoded and recorded on the DVD, and up to 32 different kinds of subtitles can be recorded on one disc.
Now, a data structure of video data on a DVD, which is one of several types of conventional multimedia information storage media, will be explained.
Fig. 1 is a diagram of a data structure of a DVD.
Referring to fig. 1, the disc space of a DVD, which is a multimedia storage medium, is divided into a Video Manager (VMG) area and a plurality of Video Title Set (VTS) areas. Title information and information on title menus are stored in the VMG area, and information on the titles themselves is stored in the plurality of VTS areas. The VMG area includes 2 to 3 files and each VTS area includes 3 to 12 files.
Fig. 2 is a detailed diagram of the VMG area.
Referring to fig. 2, the VMG area includes: a VMGI area storing additional information on the VMG; a VOBS area storing video information (video objects) for the menu; and a backup area for the VMGI. Each of these areas exists as one file, and the presence of the VOBS area between them is optional.
In the VTS area, information on a title as a reproduction unit and information on a VOBS as video data are stored. In a VTS, at least one title is recorded.
Fig. 3 is a detailed diagram of the VTS area.
Referring to fig. 3, the VTS area includes: Video Title Set Information (VTSI), a VOBS of video data for the menu screen, a VOBS of video data for the video title set, and backup data of the VTSI. The presence of the VOBS for displaying the menu screen is optional. Each VOBS is again divided into VOBs and cells as recording units. One VOB includes a plurality of cells. The lowest recording unit referred to in the present invention is the cell.
Fig. 4 is a detailed diagram of a VOBS as video data.
Referring to fig. 4, one VOBS includes a plurality of VOBs, and one VOB includes a plurality of cells. A cell includes a plurality of VOBUs. A VOBU is data encoded according to the Moving Picture Experts Group (MPEG) method for encoding moving pictures used in the DVD. According to the MPEG method, since an image is compression-encoded in space and time, a previous or subsequent image is required in order to decode the image. Therefore, in order to support a random access function by which reproduction can be started from an arbitrary position, intra encoding, which does not require a previous or subsequent image, is performed on every predetermined image. Such an image is called an intra picture or I picture in MPEG, and the images from one I picture up to the next I picture form a group of pictures (GOP). Typically, a GOP includes 12 to 15 pictures.
MPEG defines a system coding standard (ISO/IEC 13818-1) for packing video data and audio data into one bit stream. The system coding defines two multiplexing methods: a Program Stream (PS) multiplexing method, suitable for generating a program and storing it on an information storage medium; and a Transport Stream (TS) multiplexing method, suitable for generating and transmitting a plurality of programs. Of these, the DVD employs the PS method. According to the PS method, video data and audio data are each divided into units of packs (PCKs) and multiplexed by time division of the packs. Data other than the video and audio data defined by MPEG is called a private stream and is also carried in PCKs so that it can be multiplexed together with the audio and video data.
The VOBU includes a plurality of PCKs. The first PCK among them is a navigation pack (NV_PCK). The remaining part includes video packs (V_PCK), audio packs (A_PCK), and sub-picture packs (SP_PCK). The video data contained in the video packs includes a plurality of GOPs.
The SP_PCK is used for two-dimensional graphic data and subtitle data. That is, in the DVD, subtitle data overlaid on a video picture is encoded by the same method as 2-dimensional graphic data. In other words, the DVD does not employ a separate encoding method for supporting multiple languages; each item of subtitle data is first converted into graphic data, and the graphic data is then processed by one encoding method and recorded. The graphic data for subtitles is called a sub-picture. A sub-picture is composed of Sub-Picture Units (SPUs), and one SPU corresponds to one sheet of graphic data.
Fig. 5 is a diagram showing the relationship between the SPU and the SP_PCK.
Referring to fig. 5, one SPU includes a sub-picture unit header (SPUH), pixel data (PXD), and a sub-picture display control sequence table (SP_DCSQT), which are divided in this order and recorded as a plurality of 2048-byte SP_PCKs. If the last data item of the SPU does not completely fill one SP_PCK, the remaining part of the last SP_PCK is padded so that it has the same size as the other SP_PCKs. Thus, one SPU spans multiple SP_PCKs.
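The division of an SPU into fixed-size packs can be sketched as follows (a minimal illustration; the function name and the 0xFF padding byte are assumptions, as the actual padding mechanism is defined by the DVD specification):

```python
# Hypothetical sketch: split one SPU byte stream into 2048-byte SP_PCKs,
# padding the final pack so all packs have the same size.
PACK_SIZE = 2048

def split_spu_into_packs(spu: bytes, pad_byte: int = 0xFF) -> list[bytes]:
    """Divide an SPU into fixed-size packs; pad the last, partially filled pack."""
    packs = []
    for offset in range(0, len(spu), PACK_SIZE):
        pack = spu[offset:offset + PACK_SIZE]
        if len(pack) < PACK_SIZE:  # last pack is not completely filled
            pack = pack + bytes([pad_byte]) * (PACK_SIZE - len(pack))
        packs.append(pack)
    return packs
```

For example, a 5000-byte SPU yields three packs, the last of which is padded out to 2048 bytes.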
In the SPUH, the size of the entire SPU and the location at which the SP_DCSQT data begins are recorded. The PXD data is obtained by encoding the sub-picture. The pixel data forming the sub-picture can take 4 different values: background, pattern pixel, emphasis pixel-1, and emphasis pixel-2, which are represented by 2-bit binary values 00, 01, 10, and 11, respectively. Thus, a sub-picture can be viewed as a set of data having four pixel values and formed of a plurality of lines, and encoding is performed line by line. As shown in fig. 6, the SPU is run-length encoded. That is, if 1 to 3 identical pixel data items are consecutive, the number of consecutive pixels (No_P) is represented by 2 bits, followed by the 2-bit pixel data value (PD). If 4 to 15 pixel data items are consecutive, 2 bits of 0 are recorded first, then No_P is recorded using 4 bits and PD using 2 bits. If 16 to 63 pixel data items are consecutive, 4 bits of 0 are recorded first, then No_P is recorded using 6 bits and PD using 2 bits. If identical pixel data items continue to the end of the line, 14 bits of 0 are recorded first, followed by the 2-bit PD. If the encoded line is not byte-aligned when encoding of the line is completed, 4 bits of 0 are appended. The length of the data encoded in one line cannot exceed 1440 bits.
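The line-by-line run-length scheme described above can be sketched as follows (a simplified illustration with hypothetical names; runs longer than 63 pixels that do not reach the end of the line would need to be split into several codes, which is omitted here):

```python
def rle_encode_line(pixels: list[int]) -> str:
    """Encode one line of 2-bit pixel values (0..3) into a bit string."""
    bits = []
    i = 0
    while i < len(pixels):
        value = pixels[i]
        run = 1
        while i + run < len(pixels) and pixels[i + run] == value:
            run += 1
        i += run
        if i == len(pixels) and run >= 64:
            # run continues to the end of the line: 14 zero bits + 2-bit PD
            bits.append("0" * 14 + format(value, "02b"))
        elif run <= 3:
            bits.append(format(run, "02b") + format(value, "02b"))
        elif run <= 15:
            bits.append("00" + format(run, "04b") + format(value, "02b"))
        elif run <= 63:
            bits.append("0000" + format(run, "06b") + format(value, "02b"))
        else:
            raise NotImplementedError("run > 63 not reaching line end")
    out = "".join(bits)
    if len(out) % 8 != 0:  # all codes are multiples of 4 bits, so 4 zeros suffice
        out += "0000"
    return out
```

For a line of two pattern pixels followed by four background pixels, this yields the 4-bit code `1001`, the 8-bit code `00010000`, and 4 alignment bits.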
Fig. 7 is a diagram of a data structure of SP_DCSQT.
Referring to fig. 7, SP_DCSQT contains display control information for outputting the PXD data. The SP_DCSQT includes a plurality of sub-picture display control sequences (SP_DCSQs). One SP_DCSQ is a set of display control commands (SP_DCCMDs) that are executed at one time, and includes: SP_DCSQ_STM, which represents the start time; SP_NXT_DCSQ_SA, containing information about the location of the next SP_DCSQ; and a plurality of SP_DCCMDs.
An SP_DCCMD is control information on how the pixel data (PXD) and a video picture are combined and output, and contains pixel data color information, information on the contrast with the video data, and information on the output start time and end time.
Fig. 8 is a diagram illustrating how sub-picture data is output.
Referring to fig. 8, the pixel data itself is losslessly encoded as PXD. SP_DCSQT contains information on the SP display area, which is the sub-picture display area, within the video display area, which is the video image area, in which the sub-picture is displayed, as well as information about the start time and end time of the output.
In the DVD, sub-picture data for subtitles in up to 32 different languages can be multiplexed and recorded together with the video data. These languages are distinguished by the stream id provided by MPEG system coding and the sub-stream id defined in the DVD standard. Therefore, when a user selects a language, the SPUs are extracted only from the SP_PCKs having the stream id and sub-stream id corresponding to the selected language, the SPUs are decoded, and the subtitle data is extracted. The output is then controlled according to the display control commands.
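In a player, selecting the SPU packs for one language then amounts to filtering the multiplexed packs by these two ids. A minimal sketch (the pack representation is hypothetical; 0xBD is MPEG's private-stream-1 id, and DVD sub-picture sub-stream ids fall in the 0x20 to 0x3F range):

```python
def select_subtitle_packs(packs, stream_id, substream_id):
    """Keep only the packs whose ids match the selected subtitle language."""
    return [p for p in packs
            if p["stream_id"] == stream_id and p["substream_id"] == substream_id]

# Illustrative multiplexed stream: two subtitle languages as private-stream-1 packs.
packs = [
    {"stream_id": 0xBD, "substream_id": 0x20, "data": b"subtitle-en"},
    {"stream_id": 0xBD, "substream_id": 0x21, "data": b"subtitle-fr"},
]
selected = select_subtitle_packs(packs, 0xBD, 0x21)  # user chose the second language
```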
The fact that subtitle data is multiplexed together with video data as described above causes many problems.
First, the number of bits to be generated for the sub-picture data must be considered when the video data is encoded. Since subtitle data is converted into graphic data and processed as such, the amount of data generated differs from language to language and is large. Generally, after the moving image has been encoded once, the sub-picture data for each language is multiplexed into the encoded output again, so that a DVD suitable for each region is produced. However, depending on the language, the amount of sub-picture data can be so large that, when it is multiplexed together with the video data, the total number of generated bits exceeds the maximum allowable amount. In addition, since the sub-picture data is multiplexed in between the video data, the start point of each VOBU differs according to the region. Because the start points of the VOBUs are managed separately, this information must be updated whenever the multiplexing process is performed anew.
Second, since the content of each sub-picture cannot be known, the sub-picture data cannot be used for other purposes. For example, because the subtitle data outputs only one language at a time, two languages cannot be output simultaneously.
Disclosure of Invention
The present invention provides an information storage medium on which subtitle data is recorded using a data structure that does not require the bit amount to be generated for the sub-picture data to be considered in advance when the video data is encoded, and an apparatus therefor.
The present invention also provides an information storage medium on which sub-picture data is recorded using a data structure, wherein the sub-picture data can be used for purposes other than subtitles, and an apparatus therefor.
Additional aspects and/or advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
According to an aspect of the present invention, there is provided an information storage medium on which video data is recorded, the information storage medium including: a plurality of clips as recording units in which the video data is stored; and text data of subtitles, which is stored separately from the plurality of clips and which can be layered with an image based on the video data and then output, the text data including data for providing subtitles in at least one language.
The information storage medium may include: character font data, recorded separately from the plurality of clips, for a graphic representation of the text data, and usable in the text data.
When the text data is in a plurality of languages, each text data for the plurality of languages may be recorded in a separate space.
The text data may include character data that can be converted into graphic data and output synchronization information for synchronizing the graphic data and the video data.
The text data may include character data that can be converted into graphic data and output position information indicating the position where the graphic data will be displayed when the graphic data is layered with an image based on the video data.
The text data may include character data that can be converted into graphic data and information for representing output of graphic data of various sizes when the graphic data is layered with an image.
Video data can be divided into units that can be continuously reproduced, and the size of all text data corresponding to one unit is limited.
The video data may be divided into a plurality of continuously reproducible units, the text data corresponding to each reproducible unit is divided into a plurality of language groups, and the size of all the text data forming one language group is limited.
Data forming text data is expressed and recorded in Unicode to support multilingual character sets.
When the text data of a subtitle is formed using only characters of ASCII, as a basic English character set, and of ISO 8859-1, as an extended Latin character set, the text data may be encoded and recorded using UTF-8, by which one character is encoded into one or more 8-bit units.
When the text data includes characters whose Unicode code point values are 2 bytes in size, the text data may be encoded and recorded using UTF-16, by which one character is encoded into one or more 16-bit units.
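The size behavior of the two encodings described above can be checked directly with any Unicode library; here, Python's built-in codecs are used (the sample strings are purely illustrative):

```python
# ASCII/Latin subtitle text stays compact in UTF-8 (one byte per character),
# while characters with 2-byte Unicode code points take two bytes each in
# UTF-16 but three bytes each in UTF-8.
ascii_line = "Hello"
assert ascii_line.encode("utf-8") == b"Hello"      # 1 byte per character

korean_line = "\uc790\ub9c9"                       # two Hangul syllables
assert len(korean_line.encode("utf-16-be")) == 4   # 2 bytes per character
assert len(korean_line.encode("utf-8")) == 6       # 3 bytes per character
```

This is the trade-off behind recording Latin-only subtitles in UTF-8 and 2-byte-code-point subtitles in UTF-16.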
The information storage medium may be a removable type.
The information storage medium may be an optical disc readable by an optical apparatus of a reproducing apparatus.
According to another aspect of the present invention, there is provided a reproducing apparatus for reproducing data from an information storage medium on which video data is recorded, the video data being encoded and divided into clips that are recording units and recorded in a plurality of clips, and on which text data of subtitles, which is formed with data of a plurality of languages and is stackable with an image based on the video data as graphic data, is recorded separately from the clips, the reproducing apparatus comprising: a data reproducing unit for reading data from the information storage medium; a decoder for decoding the encoded video data; a translator for converting text data into graphic data; a mixer for layering graphics data and video data to generate an image; a first buffer for temporarily storing video data; and a second buffer for storing the text data.
Font data, which is recorded on the information storage medium separately from the clips and is usable for a graphic representation of the text data, may be stored in a third buffer, and the translator converts the text data into graphic data by using the font data.
When the text data is data of a plurality of languages, the text data may be recorded in a separate space for each language, wherein text data of one language selected by a user and set as an initial reproduction language is temporarily stored in the second buffer, font data for converting the text data into graphic data may be temporarily stored in the third buffer, and, at the same time, when video data is reproduced, the text data may be converted into graphic data and the graphic data may be output.
The apparatus may include: a controller for controlling an output start time and an output end time of text data by using synchronization information, the text data being recordable on an information storage medium, the text data including: and synchronization information by which the text data is converted into graphic data layered with the image based on the video data.
The apparatus may include: a controller for controlling a position at which the text data is layered with the image based on the video data by using the output position information. Text data may be recorded on the information storage medium, the text data including: character data which can be converted into graphic data; and output position information indicating a position at which the graphic data will be output when the graphic data is layered with an image based on the video data.
The video data recorded on the information storage medium may be divided into units that can be continuously reproduced, and the text data may be recorded such that the size of all the text data corresponding to one such recording unit is limited. All of the size-limited text data may be stored in the second buffer before a continuously reproducible unit is reproduced, so that when a language change occurs during reproduction, the subtitle data corresponding to the newly selected language is already stored in the buffer and can be output.
The video data may be divided into units that can be continuously reproduced, the text data corresponding to one unit may be divided into a plurality of language groups, and the text data of the subtitles forming one language group may be recorded so that its total size is limited. Before a continuously reproducible unit is reproduced, the text data of the language group containing the subtitle data to be output together with the video data may be stored in the buffer. When a language change occurs during reproduction, if the text data of the new language is in the buffer, that text data is output; if it is not in the buffer, the text data of the language group containing the new language is first stored in the buffer and then output.
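The group-based buffering policy described above can be sketched as follows (all class, method, and field names are hypothetical):

```python
class SubtitleBuffer:
    """Buffer text data per language group; reload only on a group miss."""

    def __init__(self, language_groups):
        # e.g. {"group1": {"en": ..., "fr": ...}, "group2": {"ko": ...}}
        self.groups = language_groups
        self.buffer = {}

    def load_group_for(self, language):
        """Store the whole language group containing `language` in the buffer."""
        for texts in self.groups.values():
            if language in texts:
                self.buffer = dict(texts)
                return
        raise KeyError(language)

    def get_subtitle(self, language):
        if language not in self.buffer:   # language change outside current group
            self.load_group_for(language)
        return self.buffer[language]
```

A change between two languages of the same group is served from the buffer; only a change to a language outside the buffered group triggers a reload.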
The apparatus may include: a subtitle size selector for selecting a size of the subtitle data based on a user input. The text data may include character data that can be converted into graphic data, and information indicating output of a plurality of graphic data items when the graphic data is layered with an image based on the video data may be recorded on the information storage medium.
Data forming text data is expressed and recorded in Unicode to support multiple language sets, and a translator converts characters expressed in Unicode into graphic data.
On the information storage medium, when the text data of subtitles is formed using only characters of ASCII, as a basic English character set, and of ISO 8859-1, as an extended Latin character set, the text data may be encoded and recorded using UTF-8, by which one character is encoded into one or more 8-bit units, and the translator may convert the characters represented in UTF-8 into graphic data.
On the information storage medium, when the text data includes characters whose Unicode code point values are 2 bytes in size, the text data may be encoded and recorded using UTF-16, by which one character is encoded into one or more 16-bit units, and the translator converts the characters represented in UTF-16 into graphic data.
The information storage medium may be a removable type, and the reproducing apparatus may reproduce data recorded on the removable information storage medium.
The information storage medium may be an optical disc readable by an optical apparatus of a reproducing apparatus, and the reproducing apparatus may reproduce data recorded on the optical disc.
The reproducing apparatus may output graphic data without reproducing video data recorded on the information storage medium.
The subtitle data may include subtitle data for one or more languages, and the translator may convert text data for one or more languages into graphic data.
The subtitle data may be layered in synchronization with the video image and then output.
According to another aspect of the present invention, there is provided a recording apparatus for recording video data on an information storage medium, comprising: a data writer for writing data on the information storage medium; an encoder for encoding video data; a subtitle generator for generating subtitle data that can be added to the video data; a Central Processing Unit (CPU); a fixed type of memory; and a buffer. The encoder divides a video image into clips as recording units and compression-encodes the clips, after which the encoded video data is stored in the fixed type of memory. The subtitle generator generates subtitle data in a plurality of languages, in the form of text, which can be reproduced along with an image based on the video data, and stores it in the fixed type of memory. The buffer temporarily stores the data stored in the fixed type of memory. The data writer records the encoded video data and subtitle data temporarily stored in the buffer on the information storage medium. The CPU controls the encoding of the video data and the recording of the encoded video data and subtitle data in respective separate areas on the information storage medium.
The apparatus may include a font data generator for generating font data used to convert the text data of the subtitles into graphic data. The font data generator may generate the font data required for converting the subtitle data into graphic data and may store the font data in the fixed type of memory. The buffer may temporarily store the font data stored in the fixed type of memory, the data writer may record the font data temporarily stored in the buffer on the information storage medium, and the CPU may control the generation of the font data and its recording in a separate area of the information storage medium.
When the text data is multilingual data, the CPU can control the subtitle data so that the subtitle data is recorded in a separate space for each language.
The apparatus may include a subtitle generator that generates the subtitle data so as to include character data, which can be converted into graphic data and then output, and output synchronization information for synchronization with the reproduction of the video image.
The subtitle generator may generate the subtitle data so as to include character data that can be converted into graphic data and output position information indicating the position where the graphic data will be output when the graphic data is layered with an image based on the video data.
The subtitle generator may generate the text data by including character data that can be converted into graphic data and information for representing output of graphic data of various sizes when the graphic data is layered with an image based on the video data.
The encoded video data may be divided into recording units that can be continuously reproduced, and the subtitle generator may generate the text data such that the size of all subtitle data corresponding to the recording units is limited.
The encoded video data may be divided into recording units that can be continuously reproduced, and after text data corresponding to the recording units is divided into a plurality of language groups, the subtitle generator generates the text data such that the size of the entire subtitle data forming one language group is limited.
The caption generator may generate data that forms text data in Unicode to support multilingual character sets.
When the text data is formed using only characters of ASCII, as a basic English character set, and of ISO 8859-1, as an extended Latin character set, the encoder may encode the text data using UTF-8, by which one character is encoded into one or more 8-bit units.
When the text data includes characters whose Unicode code point values are 2 bytes in size, the encoder encodes the text data using UTF-16, by which one character is encoded into one or more 16-bit units.
The information storage medium may be a removable type.
The information storage medium may be an optical disc.
According to another aspect of the present invention, there is provided a method of reproducing data stored on an information storage medium, including: reading Audio Visual (AV) data and text data; translating the subtitle image data from the text data; decoding the AV data and outputting the decoded AV data; and mixing the subtitle image data and the decoded AV data.
According to another aspect of the present invention, there is provided a reproducing apparatus including: a reading section for reading audio-visual (AV) data, text data, and font data; a decoder section for decoding the AV data and outputting moving image data; a translation section for translating the caption image data from the text data; and a mixing section for synchronizing the moving image data and the subtitle image data.
According to another aspect of the present invention, there is provided a reproducing apparatus including: a reading section for reading text data and font data; a translation section for translating the caption image data from the text data; an output section for outputting subtitle image data; and an input receiving part for receiving an input of subtitle data for a next line to control an output time of the subtitle data.
According to another aspect of the present invention, there is provided a data recording and/or reproducing apparatus comprising: a storage section; an encoder for encoding audio-visual (AV) data to generate encoded AV data; a caption generator for generating translatable text data of a caption; a data writer for writing the encoded AV data and the translatable text data to the storage portion; a reading section for reading the encoded AV data and the translatable text data; a decoder section for decoding the encoded AV data to generate moving image data; a translation section for translating the caption image data from the translatable text data; and a mixing section for combining the moving image data and the subtitle image data to generate mixed moving image data.
To achieve the above and/or other aspects and advantages, in an information storage medium according to various embodiments of the present invention, each subtitle data item is not placed within the AV data and is not encoded together with the AV data, but is recorded in a separate recording space in the form of separate text data. In addition, separate font data for rendering the subtitle data in the form of text data is recorded on the information storage medium. Furthermore, synchronization information for keeping the subtitle data in step with the decoded AV moving image, and output information for screen output, are recorded. The subtitle data corresponds to the sub-picture data of the conventional DVD. That is, on an information storage medium according to embodiments of the present invention, the following elements are recorded:
1) AV data (clip) into which video information is compression-encoded;
2) text data of multi-language subtitles; and
3) font data for translating text data.
Drawings
FIG. 1 is a diagram of a data structure of a DVD;
FIG. 2 is a detailed view of the VMG area;
FIG. 3 is a detailed view of the VTS area;
fig. 4 is a detailed view of a VOBS as video data;
FIG. 5 is a diagram showing the relationship between SPU and SP_PCK;
fig. 6 is a diagram of a data structure of a sub-picture when the sub-picture is encoded;
FIG. 7 is a diagram of a data structure of SP_DCSQT;
fig. 8 is a diagram illustrating how sub-picture data is output;
fig. 9 is a block diagram of a reproducing apparatus according to an embodiment of the present invention;
fig. 10 is a diagram of a data structure of text data stored in an information storage medium according to an embodiment of the present invention;
fig. 11 is an example of text data of a subtitle according to an embodiment of the present invention;
FIG. 12 is a diagram of a data structure of text data in a language different from that of FIG. 11;
FIG. 13 is an example of a text file used in the present invention;
fig. 14 is an example of subtitles to which different fonts are applied;
fig. 15 is an example of subtitles displayed after line feed;
fig. 16 is a view showing an example of a case where a user performs a language change while subtitles in one language are being reproduced;
fig. 17 is an example of a plurality of language groups of subtitle data and font data for a plurality of languages;
fig. 18 is a diagram showing the interrelationship of playlists, playitems, clip information, and clips;
FIG. 19 is an example of a directory structure according to the present invention;
fig. 20 is an example showing a case where the reproducing apparatus outputs only subtitle data;
fig. 21 is an example showing a case where a reproducing apparatus simultaneously outputs subtitle data for more than one language;
fig. 22 is a view showing an example of a case where normal reproduction of video data starts from video data corresponding to subtitle line data during reproduction of subtitle-only data; and
fig. 23 is a block diagram of a recording apparatus according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to the embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to the like elements throughout. The embodiments are described below in order to explain the present invention by referring to the figures.
Fig. 9 is a block diagram of a reproducing apparatus according to an embodiment of the present invention.
Referring to fig. 9, the reproducing apparatus includes: a reader for reading the AV data, the text data of subtitles, and the downloaded font data stored on the information storage medium; a decoder for decoding the AV data; a translator (renderer) for translating the text data; and a mixer for combining the moving picture output from the decoder with the subtitle data output from the translator.
In addition, the reproducing apparatus further includes a buffer for buffering data between the reader and the decoder and translator and for storing the downloaded font data, and may further include a memory (not shown) for storing inherent font data provided in advance as a default.
As used herein, translation (rendering) includes all actions required to convert subtitle text data into graphic data to be displayed on a display device. That is, translation includes generating the graphic data that forms a subtitle image by repeating, for each character in the text data, the process of finding the font matching its character code, either among the downloaded font data read from the information storage medium or among the inherent font data, and converting that font data into graphic data. Translation also includes selecting or converting colors, selecting or converting the size of characters, and generating graphic data suitable for writing in horizontal or vertical lines. In particular, when the font data being used is an outline font, the font data defines the shape of each character as curve equations. In this case, translation also includes a rasterizing process for generating graphic data by evaluating the curve equations.
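The glyph-lookup part of this translation loop can be sketched as follows (a deliberately simplified illustration: glyphs are stand-in values rather than rasterized bitmaps, and all names are hypothetical):

```python
def render_text(text, downloaded_font, builtin_font):
    """Return the sequence of glyphs used to draw the subtitle text.

    For each character, the glyph matching its character code is looked up
    first in the font data downloaded from the information storage medium,
    and then in the player's inherent (built-in) font data as a fallback.
    """
    glyphs = []
    for ch in text:
        code = ord(ch)  # character code of the character
        if code in downloaded_font:
            glyphs.append(downloaded_font[code])
        else:
            glyphs.append(builtin_font[code])  # fall back to inherent font
    return glyphs
```

A real renderer would additionally apply color, character size, writing direction, and, for outline fonts, rasterize the curve equations into pixel data.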
Fig. 10 is a diagram of a data structure of text data (i.e., subtitle data) stored in an information storage medium according to an embodiment of the present invention.
Referring to fig. 10, the text data is recorded separately from the AV stream. The text data includes synchronization information, display area information, and display font box information. The synchronization information is added to the data to be output together with the subtitle during the translation process and is used to synchronize the subtitle with the video information decoded from the AV stream data. The display area information indicates the position on the screen at which the translated subtitle data is displayed. The display font box information contains information on the size of the characters in the display area, on whether the subtitle data is written in horizontal or vertical lines, and on alignment, color, contrast, and the like. In addition, since the text data can be written in each of a plurality of languages, it also contains information identifying the language. Such multilingual data may be stored in a separate space for each of the respective languages, or may be stored in one space after being multiplexed in the order of output time.
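One possible in-memory representation of such a subtitle record, with the fields implied by the data structure described above (all field names are illustrative, not taken from the specification):

```python
from dataclasses import dataclass

@dataclass
class SubtitleItem:
    """Hypothetical per-subtitle record mirroring the fields of Fig. 10."""
    start: str        # output start time, e.g. "00:00:17:00" (HH:MM:SS:FF)
    end: str          # output end time
    position: tuple   # top-left vertex of the display area within the video area
    direction: str    # "horizontal" or "vertical" writing
    size: int         # fixed width (horizontal) or height (vertical) of the box
    language: str     # language code, for multi-language text data
    text: str         # the subtitle text itself
```

A player could keep one list of such records per language, matching the "separate space for each language" layout.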
Fig. 11 illustrates text data of a subtitle according to an embodiment of the present invention.
Referring to fig. 11, a markup language is used for the subtitle text data in the present embodiment. Since the purpose of use is subtitles, only a minimal number of tags (tags) or elements (elements) of the markup language is used, and, as described above, tags or attributes for synchronization and screen display may be included. Here, subtitle, head, meta, body, and p elements are shown as examples. In the present embodiment, information is expressed with attributes. The attributes used in this example are as follows:
- start: the time at which subtitle data corresponding to a moving image should be output, where the start time of the moving image that should be reproduced together with the subtitle data is set to 0. The time when the subtitles are displayed is expressed in hours (HH), minutes (MM), seconds (SS), and frames (FF). Time may also be expressed in units of 1/1000 second. In addition, if the video data is MPEG video, the time may be the Presentation Time Stamp (PTS) value of the video image on which the subtitles are layered and displayed. A PTS value is a count value of a clock operating at 27 MHz or 90 kHz. If the PTS value is used, the subtitle data can be accurately matched with the video data and manipulated.
- end: the time at which a displayed subtitle disappears; it has the same type of attribute value as 'start'.
- position: this indicates the coordinates, within the video area, of the top-left vertex of the display area in which the subtitle data is to be displayed.
-direction: this indicates the direction of subtitle data to be displayed.
-size: this indicates the width or height of the display area in which the subtitle data is to be displayed. If the attribute value of "direction" is "horizontal", a fixed width value of the subtitle data box is indicated, and if "vertical", a fixed height value of the subtitle data box is indicated.
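The 'start' attribute above admits two forms: an HH:MM:SS:FF string or a raw PTS count. A small parser could normalize both to PTS ticks; the 30 fps frame rate and 90 kHz PTS rate used as defaults here are assumptions for illustration, not values fixed by the text.

```python
def parse_start(value: str, fps: int = 30, pts_rate: int = 90_000) -> int:
    """Convert a 'start' attribute into PTS ticks.

    Hypothetical helper: accepts either a raw PTS count (digits only)
    or an HH:MM:SS:FF string as described for the 'start' attribute.
    """
    if value.isdigit():
        # Already a raw PTS count of the video clock.
        return int(value)
    hh, mm, ss, ff = (int(part) for part in value.split(":"))
    seconds = hh * 3600 + mm * 60 + ss + ff / fps
    return round(seconds * pts_rate)
```

With such a helper, both attribute forms can drive the same synchronization comparison against the decoder's current PTS.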
Among the elements used, the subtitle element indicates the root of the text data, and the head element contains a meta element carrying information that applies to all of the text data, or a style element that is not shown in the example of fig. 11. In the present embodiment, meta elements express the title of the corresponding text data and the language to be used. That is, when a plurality of languages is provided, a text file of the desired language can be conveniently selected by using the meta information in the text data. In addition, if a different directory is prepared for each language's text file, the languages may be distinguished by the name of the text file or by the directory name.
The subtitle data thus stored is loaded into a buffer of a reproducing apparatus before the video data is reproduced, and, as the video data is reproduced, the subtitle data is converted into graphic data and layered over the video images by a translator. Accordingly, subtitle data such as Korean is displayed in the display area at the precise time. As described above, the text data may carry control information, written in a defined format or syntax, in addition to the subtitle character data. The translator therefore has a parser function for verifying whether a text file to be stored is written according to that syntax. In addition, in order to synchronize subtitle data with a video image decoded by a decoder by using the synchronization information included in a text file, there is a channel through which events for transmitting or querying information on the reproduction time and reproduction state of the decoder are exchanged with the decoder.
Fig. 12 is a diagram of a data structure of text data for a language other than Korean, in contrast to fig. 11.
Referring to fig. 12, when video data and text data are recorded in different areas, support for multiple languages is achievable by encoding the video data without embedded subtitle data and then adding text data for each different language to the encoded video data. In addition, when subtitle data and font data that are not stored on an information storage medium together with video data are downloaded through a network or loaded from an additional information storage medium into a reproducing apparatus, the subtitle data can easily be used in other cases as well.
When a plurality of languages is thus supported, the character codes to be used for the text data should be determined. In an embodiment, Unicode is used. Unicode is a character set capable of expressing the languages of the entire world with more than 65,000 characters. In Unicode, each character is represented by a code point. The characters of a given language form a group of code points with regular, continuous values, and such a continuous interval of code points is called a code chart. In addition, Unicode supports UTF-8, UTF-16, and UTF-32 as encoding formats for actually storing or transmitting character data, i.e., code points. These formats represent one character by using one or more data items having lengths of 8 bits, 16 bits, and 32 bits, respectively.
The ASCII codes for representing English characters and the ISO 8859-1 codes for expressing the languages of European countries by extending Latin have code point values from 0x00 to 0xFF in Unicode. The Japanese Hiragana characters have code point values from 0x3040 to 0x309F. The 11,172 characters used to represent modern Korean have code point values from 0xAC00 to 0xD7AF. Here, 0x denotes that the code point value is expressed in hexadecimal.
If the subtitle data includes only English characters, encoding is performed by using UTF-8. For Korean and Japanese subtitle data, if UTF-8 is used, 3 bytes are needed to represent one character. If UTF-16 is used, one such character can be represented by 2 bytes, and each English character included in the subtitle data is also represented by 2 bytes.
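These per-character byte counts can be verified directly with any Unicode-aware encoder; the snippet below checks the claims above for sample characters from each of the ranges mentioned.

```python
# Byte counts per character under the encodings discussed above.
samples = {
    "A": "ASCII (U+0041)",
    "é": "ISO 8859-1 range (U+00E9)",
    "あ": "Hiragana (U+3042)",
    "한": "Hangul (U+D55C)",
}
for ch, name in samples.items():
    utf8_len = len(ch.encode("utf-8"))
    # "utf-16-be" is used so that no byte-order mark is counted.
    utf16_len = len(ch.encode("utf-16-be"))
    print(f"{name}: UTF-8 = {utf8_len} bytes, UTF-16 = {utf16_len} bytes")
```

Running this confirms the trade-off in the text: Korean and Japanese characters cost 3 bytes in UTF-8 but 2 bytes in UTF-16, while ASCII characters cost 1 byte in UTF-8 and 2 bytes in UTF-16.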
Each country also has its own character codes that differ from Unicode. For example, in the Korean character code set KSC5601, Korean characters have 2-byte code point values and English characters have 1-byte code point values. If subtitle data is generated by using the character sets of each country rather than Unicode, each reproducing apparatus must understand all of those character sets, which increases the implementation burden.
Font data is required to process subtitle data as text data. In addition, to support multiple languages, the font data must support multiple languages. However, it is difficult to manufacture every reproducing apparatus with fonts supporting a plurality of languages built in. Therefore, in this embodiment of the present invention, font data covering only the characters used in the subtitle data is recorded on the information storage medium, so that the font data can be loaded into a buffer and used before the video data is reproduced in a reproducing apparatus. That is, the reproducing apparatus links each segment of subtitle text data with font data and then reproduces the data. The link information between the subtitle text data and the font data is recorded in the text data of the subtitle or in a separate area. Considering the case where a user performs a language change during reproduction, the reproducing apparatus loads, before reproduction, the subtitle data and font data that correspond to the video data and allow continuous reproduction, and then uses the data. Here, continuous reproduction means reproduction without pause or interruption in the video and audio output of the video data. In general, a reproducing apparatus reproduces data by keeping an amount of data in a video or audio buffer, and continuous reproduction is possible as long as underflow of those buffers is prevented. When subtitle or font data corresponding to the video data is read again by the reader to change subtitles during reproduction, preloading may not be required if no underflow of video and audio data occurs during that time.
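The font-subsetting idea above — record only the glyphs the subtitles actually use — can be sketched as follows. This is a simplification stated over bare code points; real font subsetting would operate on glyph and layout tables, and the function name is hypothetical.

```python
def required_glyphs(subtitle_texts):
    """Collect the set of characters actually used in the subtitle text,
    so that only a matching subset of a large font library needs to be
    recorded on the storage medium and preloaded into the buffer.
    """
    chars = set()
    for text in subtitle_texts:
        # Whitespace needs no glyph; everything else must be renderable.
        chars.update(c for c in text if not c.isspace())
    return sorted(chars)
```

The size of the resulting subset (rather than of a full multi-language font) is what must fit in the player's buffer before reproduction starts.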
Fig. 13 is an example of a text file used in this embodiment of the present invention.
Referring to fig. 13, in this embodiment of the present invention, a style element is used in the head element so that the CSS file format can be applied for specifying fonts in the markup language of the text file. By using CSS, the subtitle data can use a variety of fonts having different sizes and colors.
In some applications, or for some users, the default subtitle font is inconvenient. For example, if the font size of the subtitle text is small, a person with poor eyesight may find it hard to read. It is therefore desirable that fonts satisfying either ordinary users or users with poor eyesight can be applied and displayed from the same text file. Accordingly, by allowing a user to choose a font, for example the font size, through a menu when an information storage medium is reproduced in a reproducing apparatus, a font table that applies a font according to the user's setting and offers a plurality of user-selectable options can be used.
In the present invention, the @user rule, through which the subtitle font can be set according to the user type, will now be explained. A user type is a set of CSS attributes. In the present embodiment, a fine-grained distinction between user types, i.e., between degrees of poor eyesight, is not needed, and thus only the following two cases will be explained:
-small: fonts for users with ordinary vision; and
- large: fonts for users with poor eyesight.
As shown in fig. 14, by using the @user rule, subtitles can be displayed either with the preset font or with a different font applied according to whether the user has good or poor eyesight.
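In player logic, the @user selection reduces to a lookup from the chosen user type to a set of style attributes applied at render time. The mapping below is a hypothetical sketch; the two keys mirror the 'small'/'large' user types above, but the pixel sizes are invented for illustration.

```python
# Hypothetical user-type table implementing the @user idea: a user-type key
# selects a set of CSS-like attributes applied when translating subtitles.
USER_STYLES = {
    "small": {"font-size": 24},   # users with ordinary vision
    "large": {"font-size": 48},   # users with poor eyesight
}

def style_for(user_type: str) -> dict:
    """Return the style set for the given user type, falling back to
    the ordinary-vision preset for unknown types."""
    return USER_STYLES.get(user_type, USER_STYLES["small"])
```

A menu setting on the reproducing apparatus would select the key, and the translator would merge the returned attributes over the styles declared in the text file.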
The reproducing apparatus may also output the subtitle by applying different positions and sizes according to the user's taste without using the position and size determined by the subtitle data.
Fig. 15 is an example in which text data of korean subtitles implemented in fig. 11 is displayed on an actual screen.
Referring to fig. 15, since the width of the subtitle data display area is set to 520 by the "size" attribute on the screen represented by the second < p > element, subtitle data that cannot be expressed within one line is displayed after a line feed. Subtitle data can be output only within the display area, and a line feed can also be forced by using the line feed element (br).
The third < p > element is an example in which display of subtitle data by the "direction" attribute is performed vertically.
Fig. 16 is a diagram showing an example of a case where the user performs language change while subtitles in one language are being reproduced.
Referring to fig. 16, when a language change is requested, the reproducing apparatus switches from the subtitle text data being reproduced (e.g., Korean) to the text data of the changed language (e.g., English), links the font data corresponding to that text data, translates it, and thereby outputs the subtitles. If both the subtitle data and its font data are already loaded into the buffer, continuous reproduction of the video data can easily be maintained. If the text data or font data of the desired language is not loaded into the buffer, the data must first be loaded into the buffer; at this time, a pause or interruption may occur in the reproduction of the video data.
For multi-language switching without pause or interruption of video reproduction, the total size of the subtitle data and font data may be limited to be smaller than the size of each buffer. However, in this case, the number of supported languages is limited. Therefore, in the present embodiment of the present invention, this problem is solved by creating a unit called a language group.
Fig. 17 is an example of a plurality of language groups for subtitle data and font data for multiple languages.
Referring to fig. 17, the subtitle data and font data for the plurality of languages added to one video image are divided into a plurality of language groups. The subtitle data and font data corresponding to one language group are limited to a size smaller than the size of the buffer. Reproduction of the video data starts after the language group containing the subtitle data of the language selected by the user, or selected by the reproducing apparatus as the default language, is loaded into the buffer. When the user performs a language change, if the subtitle data of the new language is included in this language group, the change can be made without interruption because the data has already been loaded into the buffer. However, if a change is made to a language not included in this language group, the reproducing apparatus loads the subtitle data and font data of the language group containing the desired language again, and the data of the existing language group is deleted. At this time, a pause or interruption may occur in reproducing the video data. Thereafter, if another language change is performed, the operation proceeds again according to the relationship between the requested language and the language group loaded in the buffer. The information on the language groups may be recorded on the information storage medium, or the reproducing apparatus may determine the grouping itself when reproducing data, by considering the data stored on the information storage medium and the size of the buffer in the reproducing apparatus.
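The grouping constraint — each language group's subtitle-plus-font payload must fit in the buffer — can be sketched as a simple packing step. The greedy first-fit strategy and the byte sizes are illustrative assumptions; the patent only requires that each group stay under the buffer size, not any particular grouping algorithm.

```python
def build_language_groups(lang_sizes: dict, buffer_size: int):
    """Pack languages into groups whose combined subtitle+font size
    fits the player's buffer. lang_sizes maps language code -> bytes.
    A language change within one group needs no reload; a change across
    groups forces a reload (and a possible pause)."""
    groups, current, used = [], [], 0
    for lang, size in lang_sizes.items():
        if size > buffer_size:
            raise ValueError(f"{lang} alone exceeds the buffer")
        if used + size > buffer_size:
            # Current group is full: close it and start a new one.
            groups.append(current)
            current, used = [], 0
        current.append(lang)
        used += size
    if current:
        groups.append(current)
    return groups
```

A player (or the authoring tool) could run this either at mastering time, recording the grouping on the medium, or at reproduction time against its own buffer size — the two options the text describes.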
The relationship between information required to reproduce video data and subtitle data will now be explained with reference to the embodiments.
As used herein, a clip is a recording unit of video data, and a play list (PlayList) and a play item (PlayItem) are used to indicate reproduction units.
In the information storage medium according to an embodiment of the present invention, an AV stream is divided and recorded in clip units. Typically, a clip is recorded in a continuous space. To reduce its size, the AV stream is compressed before recording. Therefore, in order to reproduce a compressed AV stream, the attribute information of the compressed video data must be known. Thus, clip information is recorded for each clip. The clip information contains the audio-visual attributes of the clip and an Entry Point Map, which records the positions of the entry points at which random access is possible at regular intervals. In MPEG, which is widely used as a video compression technique, an entry point is the position of an I-picture, i.e., a picture compressed using only intra-frame coding, and the entry point map is mainly used for time searches that find the position corresponding to a given time offset from the start of reproduction.
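A time search over such an entry point map is a lookup for the last entry point at or before the requested time. The sketch below assumes the map is available as a sorted list of (pts, byte_offset) pairs; the actual on-disc layout of the map is not specified in the text.

```python
import bisect

def find_entry_point(entry_map, target_pts):
    """Return the byte offset of the last I-picture entry point at or
    before target_pts, given entry_map as a sorted list of
    (pts, byte_offset) pairs — the time-search use of the entry point map.
    """
    times = [pts for pts, _ in entry_map]
    i = bisect.bisect_right(times, target_pts) - 1
    if i < 0:
        raise ValueError("target time precedes the first entry point")
    return entry_map[i][1]
```

Decoding then starts from that I-picture, since only intra-coded pictures can be decoded without reference to earlier frames.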
A playlist is a basic unit of reproduction. In the information storage medium according to the present embodiment, a plurality of playlists is stored. A playlist includes a series of playitems. A playitem corresponds to a portion of a clip and, more particularly, specifies that portion by a reproduction start time and end time within the clip. Thus, by using the clip information, the position within the actual clip of the portion corresponding to the playitem is identified.
Fig. 18 is a diagram showing the interrelationship of playlists, playitems, clip information, and clips.
Referring to fig. 18, in the present embodiment of the invention, a plurality of text data items of subtitles per clip are recorded in a space separate from the clip, in addition to a playlist, a playitem, clip information, and a clip. A plurality of data items of a subtitle are linked to one clip and this link information may be recorded in the clip information. For some clips, multiple data items for subtitles are linked, while for other clips, no data item or only one data item of subtitles may be linked. When the playlist is reproduced, the playitems included in the playlist are sequentially reproduced. As a result, any one of the clips linked to each playitem and the plurality of subtitles linked to the clip are translated and output. Since continuous reproduction between playlists is not generally guaranteed, all linked text data for subtitles may be loaded into a buffer before a playlist is reproduced. In FIG. 18, font data is not separately marked.
Typically, font data is generated for each language. Thus, font data is recorded in a separate space for each language.
Fig. 19 is an example of a directory structure according to an embodiment of the present invention.
Referring to fig. 19, in a directory, clips, clip information, a playlist, subtitle text data, and font data are stored in the form of files and are stored in different directory spaces according to respective types. As shown, text data and font files for subtitles may be stored in a separate directory space from video data.
The information storage medium according to various embodiments of the present invention is a removable information storage medium (i.e., one that is not fixed to a reproducing apparatus but is inserted and used only when data is reproduced). Unlike fixed information storage media such as hard disks, which have high capacity, removable information storage media have limited capacity. In addition, a reproducing apparatus for reproducing such a medium often has a buffer of limited size and hardware of limited performance. Accordingly, by recording on the removable information storage medium, along with the video data, only the subtitle data and the font data for that subtitle data, and by using that data when the video data is reproduced, the amount of data that must be prepared in advance can be minimized. A representative example of such a removable recording medium is an optical disc.
On an information storage medium according to an embodiment of the present invention, video data is stored in a space separate from the subtitle text data. If such subtitle text data is provided for a plurality of languages together with font data for outputting it, the reproducing apparatus first loads only the subtitle data and font data into the buffer; then, while reproducing the video data, the subtitle data is layered with the video image and output.
Fig. 20 is an example showing a case where the reproducing apparatus outputs only subtitle data.
Referring to fig. 20, the reproducing apparatus according to an embodiment of the present invention may output only the subtitle data. That is, in this special reproduction function, the video data is not reproduced; only the subtitle data that would normally be layered with the video data is converted into graphic data and output. In this case, the subtitle data may be used, for example, to learn a foreign language. Here, no video data is layered underneath and only the subtitle data is output. In addition, the synchronization information and the position information are ignored (or not included), and the reproducing apparatus outputs several lines of subtitle data on the entire screen and waits for user input. After viewing all the output subtitle data, the user sends the reproducing apparatus a signal to display the next lines of subtitle data, thereby controlling the output time of the subtitle data.
Fig. 21 is an example showing a case where a reproducing apparatus simultaneously outputs subtitle data for more than one language.
Referring to fig. 21, as an embodiment, the reproducing apparatus may have a function of simultaneously outputting subtitle data for two or more languages when subtitle data for a plurality of languages is provided. In this case, the subtitle data to be displayed on the screen is selected by using the synchronization information of the subtitle data of each language. That is, the subtitle data items are output in order of output start time, and when the output start times are the same, they are output in order of language.
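The ordering rule just described — primarily by output start time, then by language when start times coincide — is a simple two-key sort. The (start, language, text) tuple shape below is an assumption for illustration.

```python
def interleave_subtitles(items):
    """Order subtitle items from several languages for simultaneous display:
    by output start time first, then by language code when the start
    times are the same, as described above."""
    return sorted(items, key=lambda item: (item[0], item[1]))
```

Applied to a mixed list of, say, Korean and English items, this yields the interleaved display sequence of fig. 21.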
A function can also be provided by which, while only subtitle data is being reproduced, normal reproduction of the video data can be started from the video data corresponding to a selected subtitle line data item.
Fig. 22 is a diagram showing an example of a case where normal reproduction of video data is started from video data corresponding to subtitle line data during reproduction of only subtitle data.
As shown in fig. 22, when a subtitle line data item is selected, the reproduction time corresponding to that line data item is looked up, and normal reproduction of the video data starts from the video data corresponding to that time.
A recording apparatus according to an embodiment of the present invention records video data and subtitle data on an information storage medium.
Fig. 23 is a block diagram of a recording apparatus according to an embodiment of the present invention.
Referring to fig. 23, the recording apparatus includes a Central Processing Unit (CPU), a fixed high-capacity memory, an encoder, a subtitle generator, a font generator, a writer, and a buffer.
The encoder, subtitle generator, and font generator may be implemented by software on a CPU.
In addition, a video input unit for receiving video data in real time may be included.
The memory stores a video image that is the object of encoding, or video data already encoded by the encoder. In addition, the memory stores the dialogue data and large-capacity font data associated with the video data. The subtitle generator receives information on the output time of each subtitle line data item from the encoder, receives the subtitle line data from the dialogue data, generates text data for the subtitle, and stores it in a fixed-type storage device. The font generator generates, from the large-capacity font data, font data containing only the characters used in the subtitle text data and stores that font data in the fixed-type storage device. That is, the font data stored on the information storage medium is a subset of the large-capacity font data stored in the fixed-type storage device. This process of generating, in this form, the data to be stored on the information storage medium is called editing.
When the editing process is completed, the encoded video data stored in the fixed-type storage device is divided into clips, which are recording units, and recorded on the information storage medium. In addition, the subtitle text data added to the video data contained in a clip is recorded in a separate area, and the font data required to convert the subtitle data into graphics data is recorded in another separate area.
The video data is divided into reproduction units that can be continuously reproduced, and generally such a reproduction unit includes a plurality of clips. In one embodiment, the size of the subtitle data that can be layered with the video images of one reproduction unit and output is limited to a predetermined size even when data for a plurality of languages is added. Alternatively, the subtitle data to be layered with the video images of one reproduction unit is divided into language groups, within which a language change can be performed continuously while the video data is reproduced. The subtitle data of one reproduction unit then includes a plurality of language groups, and the size of the subtitle data included in one language group, even with data for a plurality of languages added, is limited to less than a predetermined size.
The subtitle data uses Unicode character codes, and the actually recorded data may be encoded in UTF-8 or UTF-16.
Video data, subtitle data for subtitles, and font data recorded in a fixed-type storage device are temporarily stored in a buffer and recorded on an information storage medium by a writer. The CPU executes a software program that controls each device so that these functions are sequentially executed.
As described above, according to the above-described embodiments of the present invention, text data for multi-language subtitles is made into a text file and recorded in a space separate from the AV stream, so that more diverse subtitles can be provided to the user and the recording space can be conveniently arranged.
The font data for this purpose is generated with a minimum size by collecting only the characters required for the subtitle text, and is stored separately on the information storage medium and used.
While certain embodiments of the present invention have been shown and described, the present invention is not limited to the disclosed embodiments. In addition, it would be appreciated by those skilled in the art that changes may be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the claims and their equivalents.
Industrial applicability
The present invention can be applied to fields related to recording and reproduction of moving images, in particular, fields in which text data of a plurality of languages must be provided when reproducing moving images.
Claims (20)
1. A reproducing apparatus for reproducing data from an information storage medium on which video data is recorded, the video data being encoded, divided into clips as recording units, and recorded in a plurality of clips, and on which text data of subtitles is recorded separately from the clips, the text data being formed with data of one or more languages and being layered, as graphic data, with an image based on the video data, the reproducing apparatus comprising:
a data reproducing unit for reading data from the information storage medium;
a decoder for decoding the encoded video data;
a translator for converting text data into graphic data;
a mixer for layering graphics data and video data to generate an image;
a first buffer for temporarily storing video data;
a second buffer for temporarily storing text data;
a controller for controlling an output start time and an output end time of the text data by using the synchronization information, and controlling a position where the text data is layered with an image based on the video data by using the output position information,
wherein text data is recorded on the information storage medium, the text data including: synchronization information, display font information, character data, and output position information, by which the character data included in the text data is converted into graphic data and layered with the image based on the video data, the output position information indicating a position at which the graphic data is to be output when the graphic data is layered with the image based on the video data.
2. The reproducing apparatus of claim 1, wherein font data used for the graphic representation of the text data is recordable on the information storage medium separately from the clips, the font data is stored in a third buffer, and the translator converts the text data into the graphic data by using the font data.
3. The reproducing apparatus of claim 1, wherein the text data is recorded in a separate space for each language, wherein text data of a language selected by a user, or of a language set as the initial reproducing language of the reproducing apparatus, is temporarily stored in the second buffer, wherein font data for converting the text data into graphic data is temporarily stored in a third buffer, and wherein, when the video data is reproduced, the text data is converted into graphic data and the graphic data is output.
4. The reproducing apparatus of claim 3, wherein the video data recorded on the information storage medium is divided into continuously reproducible units, and the text data is recorded within a limited size of all text data corresponding to the recording units, wherein all the text data of which the size is limited is stored in the second buffer before the continuously reproducible units are reproduced, and when a language change occurs during reproduction, text data corresponding to subtitles of the language stored in the second buffer is output.
5. The reproducing apparatus of claim 3, wherein the video data is divided into units that can be continuously reproduced, the text data corresponding to one unit is divided into a plurality of language groups, the text data of the subtitles forming one language group is recorded so that all the text data is limited, wherein before the continuously reproducible unit is reproduced, the text data corresponding to the language group containing the subtitles output simultaneously with the video data is stored in the second buffer, and when a language change occurs during reproduction, the text data of the language is output when the text data of the language is in the second buffer, and when the text data of the language is not in the second buffer, the text data corresponding to the language group containing the text data of the language is stored in the second buffer and the text data of the language is output.
6. The reproduction apparatus of claim 1, further comprising: a subtitle size selector for selecting a size of text data of a subtitle based on a user input, wherein the text data includes character data convertible into graphic data, and information indicating output of a plurality of graphic data items when the graphic data is layered with an image based on video data is recorded on an information storage medium.
7. The reproducing apparatus of claim 1, wherein on the information storage medium, data forming the text data is expressed and recorded in Unicode to support multiple language sets, and the translator converts characters expressed in Unicode into graphic data.
8. The reproducing apparatus of claim 7, wherein, on the information storage medium, when the text data of subtitles is formed using only characters of ASCII as a basic English character set and ISO 8859-1 as an extended Latin character set, the text data is encoded and recorded by using UTF-8, by which one character is encoded into a plurality of 8-bit units, and the translator converts the characters represented in UTF-8 into graphic data.
9. The reproducing apparatus of claim 7, wherein, on the information storage medium, when the text data includes characters having 2-byte code point values in Unicode, the text data is encoded and recorded by using UTF-16, by which one character is encoded into one or more 16-bit units, and the translator converts the characters represented in UTF-16 into graphic data.
10. The reproducing apparatus of claim 1, wherein the information storage medium is a removable type, and the reproducing apparatus reads data from the removable storage medium and reproduces data on the removable information storage medium.
11. The reproducing apparatus of claim 10, wherein the information storage medium is an optical disc readable by an optical apparatus of the reproducing apparatus, and the reproducing apparatus reads and reproduces data recorded on the optical disc.
12. The reproducing apparatus of claim 1, wherein the reproducing apparatus outputs the graphic data without reproducing the video data recorded on the information storage medium.
13. The reproducing apparatus of claim 1, wherein the text data of the subtitle is layered in synchronization with the image of the video data and then output.
14. A reproduction apparatus comprising:
a reading section for reading audio-visual AV data, text data, and font data from an information storage medium on which the text data and font data are recorded separately from the AV data;
a decoder section for decoding the AV data and outputting moving image data;
a translation section for translating the caption image data from the text data;
a mixing section for combining the moving image data and the subtitle image data;
a controller for controlling an output start time and an output end time of the text data by using the synchronization information, and controlling a position where the text data is layered with an image based on the moving image data by using the output position information;
a buffer section for buffering data between the reading section and the decoder section and data between the reading section and the translation section,
wherein text data is recorded on the information storage medium, the text data including: synchronization information, display font information, character data, and output position information, by which the character data included in the text data is converted into subtitle image data and combined with the moving image data, the output position information indicating a position at which the subtitle image data is to be output when the subtitle image data is layered with an image based on the moving image data.
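A hedged sketch of the text-subtitle record recited in claim 14 and the controller behavior of claim 14's controller clause; the class and field names here are illustrative assumptions, not terms defined by the claims.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SubtitleRecord:
    start_ms: int    # synchronization information: output start time
    end_ms: int      # synchronization information: output end time
    font_name: str   # display font information
    text: str        # character data, to be converted into subtitle image data
    x: int           # output position information: where the subtitle image
    y: int           # is layered over an image based on the moving image data

def subtitle_at(records: list, t_ms: int) -> Optional[SubtitleRecord]:
    """Controller logic: select the record whose output window covers t_ms."""
    for r in records:
        if r.start_ms <= t_ms < r.end_ms:
            return r
    return None

records = [SubtitleRecord(1000, 3000, "Sans", "Hello", 100, 400)]
active = subtitle_at(records, 2000)  # within the 1000-3000 ms window
```

The mixing section would then render `active.text` at `(active.x, active.y)` over the decoded frame; outside the window `subtitle_at` returns `None` and no subtitle is layered.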
15. The reproducing apparatus of claim 14, wherein the AV data, the text data, and the font data are stored in an information storage medium readable by the reading part.
16. The reproducing apparatus of claim 14, wherein at least one of the text data and the font data is stored in a downloadable database.
17. The reproducing apparatus of claim 14, wherein the AV data is stored in an information storage medium readable by the reading part.
18. The reproduction apparatus according to claim 14, wherein the translation section finds a font matching a character code of each character in the text data, the font being stored in one of a downloadable database and a storage section of the reproduction apparatus.
19. The reproduction apparatus of claim 14, wherein the text data includes data for each of one or more languages, and the text data contains information indicating one of the one or more languages.
20. The reproduction apparatus of claim 14, wherein, when the text data includes data for each of one or more languages, the text data is either stored in one area as multiplexed data or stored in a separate area for each of the one or more languages.
Priority Applications (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| HK08103980.8A HK1113851B (en) | 2002-10-15 | 2006-04-01 | Method of reproducing data stored on an information storage medium |
| HK08103983.5A HK1113853B (en) | 2002-10-15 | 2006-04-01 | Recording apparatus and method for recording video data on an information storage medium |
Applications Claiming Priority (5)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| KR20020062632 | 2002-10-15 | ||
| KR10-2002-0062632 | 2002-10-15 | ||
| US45254403P | 2003-03-07 | 2003-03-07 | |
| US60/452,544 | 2003-03-07 | ||
| PCT/KR2003/002120 WO2004036574A1 (en) | 2002-10-15 | 2003-10-14 | Information storage medium containing subtitle data for multiple languages using text data and downloadable fonts and apparatus therefor |
Related Parent Applications (2)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| HK08103980.8A Division HK1113851B (en) | 2002-10-15 | 2006-04-01 | Method of reproducing data stored on an information storage medium |
| HK08103983.5A Division HK1113853B (en) | 2002-10-15 | 2006-04-01 | Recording apparatus and method for recording video data on an information storage medium |
Related Child Applications (2)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| HK08103980.8A Addition HK1113851B (en) | 2002-10-15 | 2006-04-01 | Method of reproducing data stored on an information storage medium |
| HK08103983.5A Addition HK1113853B (en) | 2002-10-15 | 2006-04-01 | Recording apparatus and method for recording video data on an information storage medium |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| HK1082111A1 (en) | 2006-05-26 |
| HK1082111B (en) | 2011-09-02 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| JP5620116B2 (en) | Reproducing apparatus and data recording and / or reproducing apparatus for reproducing data stored in an information storage medium in which subtitle data for multilingual support using text data and downloaded fonts is recorded | |
| US20100266262A1 (en) | Information storage medium containing subtitle data for multiple languages using text data and downloadable fonts and apparatus therefor | |
| KR100970735B1 (en) | Reproducing method for information storage medium recording audio-visual data and recording apparatus therefor | |
| KR100667751B1 (en) | A storage medium, a playback device, and a playback method including text-based subtitle information | |
| KR101119116B1 (en) | Text subtitle decoder and method for decoding text subtitle streams | |
| US8437599B2 (en) | Recording medium, method, and apparatus for reproducing text subtitle streams | |
| KR101024922B1 (en) | A recording medium having a data structure for managing reproduction of subtitle data, and a method and apparatus for recording and reproducing accordingly | |
| HK1082111B (en) | Reproducing apparatus | |
| HK1113853B (en) | Recording apparatus and method for recording video data on an information storage medium | |
| HK1113851B (en) | Method of reproducing data stored on an information storage medium | |
| WO2005031739A1 (en) | Storage medium for recording subtitle information based on text corresponding to av data having multiple playback routes, reproducing apparatus and method therefor |