HK1050420B - Method for recording moving picture - Google Patents
- Publication number: HK1050420B
- Authority: HK (Hong Kong)
Description
This application is a divisional application of a patent application filed on February 21, 2000, with application number 00102384.5, entitled "Apparatus and method for recording/reproducing moving images and recording medium".
Technical Field
The present invention relates to the field of moving pictures, and more particularly, to a method for recording and/or reproducing status information related to audio for moving pictures.
Background
Developments in digital compression technology and increases in recording-medium capacity now enable moving image information to be compressed into digital data and recorded. A variety of compression standards exist. For video, the MPEG (Moving Picture Experts Group)-2 video (ISO/IEC 13818-2) MP@ML (Main Profile at Main Level) standard, which provides image quality comparable to current analog television, is the most widely used. Use of the MPEG-2 MP@HL (Main Profile at High Level) standard, which can achieve High Definition Television (HDTV) image quality, is rapidly increasing. For audio, North America and surrounding areas generally use AC-3 (Audio Coding 3), while European countries use the MPEG 1/2 audio (ISO/IEC 13818-3) standard. Because the total amount of audio data is smaller than that of video data, the Linear Pulse Code Modulation (LPCM) standard, which performs no compression, can also be applied to audio.
Audio data and video data are thus signal-processed according to their respective standards and then combined into a single bitstream. The MPEG-2 systems (ISO/IEC 13818-1) standard is widely used for this. That is, audio data and video data are each grouped into packets, and identification information distinguishing audio from video, buffer control information, and time stamp information for keeping the audio signal synchronized with the video signal are added to each packet. Time stamp information associated with the clock signal used by the decoder is then added, producing pack data. The Digital Versatile Disc (DVD) video standard specifies a pack size of 2048 bytes.
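The grouping into fixed-size packs can be sketched as follows. This is an illustrative simplification, not the actual MPEG-2/DVD bit layout: the `HEADER_SIZE` value, the one-byte type tag, and the timestamp field are hypothetical stand-ins, and only the fixed 2048-byte pack size comes from the text above.

```python
# Illustrative sketch (not the real DVD bit layout): wrapping one packet's
# payload into a fixed-size 2048-byte pack, padding the remainder.

PACK_SIZE = 2048          # pack size specified by the DVD-Video standard
HEADER_SIZE = 14          # hypothetical header size for this sketch

def build_pack(payload: bytes, stream_type: str, timestamp: int) -> bytes:
    """Wrap one payload into a fixed-size pack.

    The header here is a simplified stand-in carrying a type tag and a
    timestamp; a real pack header follows MPEG-2 systems (ISO/IEC 13818-1).
    """
    body_room = PACK_SIZE - HEADER_SIZE
    if len(payload) > body_room:
        raise ValueError("payload exceeds one pack")
    header = stream_type.encode("ascii")[:1] + timestamp.to_bytes(5, "big")
    header = header.ljust(HEADER_SIZE, b"\x00")
    # Pad with stuffing bytes so every pack is exactly 2048 bytes long.
    return header + payload + b"\xff" * (body_room - len(payload))

pack = build_pack(b"audio-sample-data", "A", 90000)
assert len(pack) == PACK_SIZE
```

The fixed pack size is what makes in-place rewriting of recorded units tractable, which matters for the post-recording problem discussed next.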
A general moving image recording apparatus provides several important functions, among which is a post-recording function that later replaces and rewrites only the audio part of moving image data already recorded on a recording medium. An analog recording medium has separate tracks for the video and audio signals of a moving image, so a post-recording function is easily implemented. Further, because an analog signal is not recorded in fixed recording units, post-recording is accomplished simply by rewriting the desired portion.
Here, the audio that has been recorded initially is referred to as original audio, and the audio that is replaced and recorded later is referred to as secondary audio. In order to preserve the original audio when recording the secondary audio, two audio tracks separately recording the original audio and the secondary audio must be prepared. Thus, the original audio and the secondary audio are distinguished from each other by their track positions.
When the two tracks are played back simultaneously, the secondary audio signal is output wherever an audio signal exists on the secondary audio track; otherwise, the audio signal on the original audio track is output. In this way, secondary audio that has been only partially recorded on its track can be reproduced. When only the original audio is desired, the original audio track can be reproduced regardless of whether a signal exists on the secondary audio track.
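This playback rule can be sketched per segment, with `None` standing for the absence of a signal on the secondary track. The function name and segment model are illustrative, not part of any standard.

```python
# Sketch of the analog-track playback rule above: for each segment, output
# the secondary-track audio if it exists there, otherwise fall back to the
# original track. None models "no signal on the secondary track".

def mix_playback(original, secondary):
    """Select the per-segment output when both tracks play back together."""
    return [sec if sec is not None else orig
            for orig, sec in zip(original, secondary)]

original  = ["A1", "A2", "A3", "A4"]
secondary = [None, "b2", "b3", None]   # secondary audio recorded only partially
assert mix_playback(original, secondary) == ["A1", "b2", "b3", "A4"]
```

The difficulty described below for digital media is precisely that no such per-track fallback exists once audio and video are interleaved in fixed recording units.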
However, on a digital recording medium, audio/video (A/V) signals are mixed together and recorded in recording areas of a predetermined recording unit without being separated, so recording and reproducing secondary audio by the overwrite method used for analog recording media is impossible.
Disclosure of Invention
In order to solve the above problems, it is an object of the present invention to provide a recording method for dividing moving image data into a plurality of basic units, generating basic unit information associated with each basic unit, and storing audio state information in the basic unit information.
It is still another object of the present invention to provide a reproducing method for selectively reproducing original audio or secondary audio according to audio state information.
To achieve the first object, the present invention provides a moving image recording method for a recording medium including moving image data and moving image data information on the moving image data, the moving image data including at least one of video data and audio data, the method including: recording first audio, or first audio and second audio, as the audio data; and recording, in the moving image data information, first state information on the first audio and, if the second audio is present, second state information on the second audio, wherein the first state information represents an original audio state in which the first audio is original audio, or a modified audio state in which part or all of the original audio has been modified as the first audio, and, when the second audio is present, the second state information represents a state in which the second audio is original audio, a state in which part or all of the original audio has been modified as the second audio, a state in which the second audio is dummy audio identical to the first audio, or a state in which part or all of the dummy audio has been modified as the second audio.
To achieve the second object, the present invention provides a method of playing back a recording medium on which moving image data including video data and audio data, and moving image data information on the moving image data, are recorded, wherein the audio data includes first audio, or first audio and second audio, the moving image data information includes first state information on the first audio and, when the second audio is present, second state information on the second audio, the first state information represents an original audio state in which the first audio is original audio or a modified audio state in which part or all of the original audio has been modified as the first audio, and, when the second audio is present, the second state information represents a first state in which the second audio is original audio, a second state in which part or all of the original audio has been modified as the second audio, a third state in which the second audio is dummy audio identical to the first audio, or a fourth state in which part or all of the dummy audio has been modified as the second audio, the method comprising: analyzing the first state information and, when the second audio is present, the second state information; and reproducing the original audio and the modified audio according to the analyzed first state information and, when the second audio is present, the second state information.
Drawings
The above objects and advantages of the present invention will become more apparent by describing in detail preferred embodiments thereof with reference to the attached drawings, in which:
fig. 1 is a block diagram of an optical recording/reproducing apparatus to which the present invention is applied;
fig. 2 shows a hierarchical structure of moving image data drawn for the purpose of facilitating understanding of the present invention;
figs. 3A and 3B show the video object (VOB) shown in fig. 2;
figs. 4A through 4D illustrate the structure of the packs shown in figs. 3A and 3B;
fig. 5 shows an example of the structure of audio state information among video object information (VOBI) according to the present invention;
FIG. 6 shows the state change from before the secondary audio is re-recorded to after the secondary audio has been re-recorded when only the first audio is present; and
fig. 7 shows a state change from before the secondary audio is re-recorded to after the secondary audio has been re-recorded when both the first and second audio are present.
Detailed Description
A Digital Versatile Disc (DVD) video recorder shown in fig. 1 is suitable for use as a preferred embodiment of a recording apparatus and a reproducing apparatus to which the present invention is applied.
Referring to fig. 1, the modules 102 to 114 are for reproduction, and the modules 116 to 126 are for recording. The reproducing apparatus may include a module for reproduction only, and the recording apparatus may include a module for recording only. An optical pickup unit 100, a key input unit and a display 128 for interacting with a user and displaying a user interface, and a system controller 130 for controlling the operation of each module are included in both the recording apparatus and the reproducing apparatus.
A typical recorder has both recording and reproducing capabilities, so all modules can be installed in one device. Accordingly, in the present invention, a recording apparatus performing a reproducing operation can serve as a reproducing apparatus.
Regarding the reproduction modules, the optical pickup unit 100 includes an optical system that reads a signal from a recording medium and converts it into an electrical signal, and a mechanism that moves the optical system so that it reads and writes data at a given position on the recording medium. This mechanism is controlled by the digital servo unit 102. A radio frequency (RF) amplifier (AMP) 104 amplifies the electrical signal read from the recording medium by the optical system and supplies the result to a data decoder 106. The RF amplifier 104 also supplies a servo signal for correcting the position of the optical system to the digital servo unit 102.
The data decoder 106 converts the amplified electrical signal output by the RF amplifier 104 into a digital signal represented by "0" and "1" levels according to an appropriate signal level (the binarization level). This digital signal has been modulated according to the recording characteristics of the recording medium, and the data decoder 106 demodulates it using the demodulation mode corresponding to the modulation mode used for recording. The demodulated digital signal is an error correction code (ECC) signal to which parity has been appended so that errors caused by scratches, defects, and the like on the recording medium can be corrected. The data decoder 106 corrects such errors by error-correction decoding the demodulated data and provides the corrected data to an audio/video (A/V) decoder 108. The data decoder 106 outputs data in a format in which at least one of video data, audio data, and graphics data has been compressed. Since audio data carries a small amount of information relative to video data, the audio data may be uncompressed. Video data is typically compressed according to the MPEG standard. Graphics data is compressed according to a lossless compression mode in which no information is lost. The A/V decoder 108 decodes the video, audio, and/or graphics data according to the respective compression modes to restore them. In particular, graphics data is mixed with video data.
The memory 110 connected to the A/V decoder 108 temporarily stores data received by the A/V decoder 108 before it is decoded, or temporarily stores restored data before it is output. The data supplied from the A/V decoder 108 is then converted and output to an output device. That is, the video digital-to-analog converter (DAC) 112 converts the restored digital video data into an analog video signal and outputs it to a television or monitor. The audio DAC 114 converts the restored digital audio data into an analog audio signal and outputs it to a speaker or audio amplifier. The TV, monitor, speaker, and audio amplifier serving as final output devices are not shown in fig. 1.
In the operation of the module in the recording apparatus, an audio signal or a video signal is received from an external input apparatus. Here, the external input device may be a TV, a video camera, or other similar devices, which are not shown in fig. 1.
The received video and audio signals are in analog or digital form and are suitably pre-processed and converted to digital data; in fig. 1, analog video and audio signals are received. More specifically, the video preprocessor 116 performs functions such as a filtering operation that minimizes the side effects generated when an analog signal is converted into digital data, and then converts the analog video signal into a digital video signal. The audio preprocessor 118 performs the same functions for the analog audio signal. The A/V encoder 120 compresses the digital audio and/or video signals to reduce the amount of data and appropriately processes the compressed signals. That is, video is typically encoded in the compressed mode known as MPEG video (ISO/IEC 13818-2), and audio in a compressed mode such as AC-3 or MPEG audio (ISO/IEC 13818-3). However, since audio has a small data amount relative to video, audio may be left uncompressed. It is common practice to add information based on the MPEG systems (ISO/IEC 13818-1) standard to the encoded video and audio data. Such information is required for decoding each stream, and may include buffer occupancy control information for decoding and the time stamp information needed to keep the audio signal synchronized with the video signal.
The graphical data is typically received by a dedicated input device. Alternatively, the graphical data is generated by the system controller 130 receiving user input, compressed by a dedicated compressor, and mixed with the A/V data. In the a/V encoder 120, compression and mixing of graphics data may be performed. However, the graphic data portion is not drawn in fig. 1.
The memory 122 connected to the a/V encoder 120 temporarily stores the data received by the a/V encoder before the received data is encoded or temporarily stores the encoded data before the encoded data is output. The data encoder 124 error-correction encodes the encoded data output from the a/V encoder 120 and modulates the error-correction encoded data according to the recording characteristics of the recording medium. A Laser Diode (LD) power controller 126 emits an optical signal corresponding to data output from the data encoder 124 onto a recording medium using a laser beam, thereby completing a recording operation.
The key input unit and the display 128, which are used to enable a user to interact with the recording or reproducing apparatus, receive instructions such as reproduction start, reproduction stop, recording start or recording stop from the user, transmit the received instructions to the system controller 130, and display the user's selections on a menu, on-screen display (OSD), or on-screen graphics (OSG). The system controller 130 transmits a function required according to each operation instruction set by the user to all the modules and controls the modules, thereby completing the operation set by the user.
The A/V decoder 108 and A/V encoder 120, which are separate modules in fig. 1, may be integrated into a single module that performs both encoding and decoding. Also, the memory 110 used for decoding and the memory 122 used for encoding may be integrated into a single memory.
It can be seen that, when the optical recording/reproducing apparatus shown in fig. 1 records a moving image on a recording medium, it divides the moving image information into a plurality of basic units using the recording modules 116 to 126 and 100 and records those basic units on the recording medium. The system controller 130 generates the information required for reproducing/encoding each basic unit and manages this information as basic unit information. The basic unit information generated by the system controller 130 is recorded on the recording medium through the data encoder 124, the LD power controller 126, and the optical pickup unit 100.
Here, a basic unit includes at least one of video data, audio data, and graphics data. In particular, the audio data includes only the first audio, or includes the first audio and the second audio. When the user records secondary audio, it is recorded over part or all of the first audio or the second audio, and the modified state of the first or second audio is managed as the state information of that audio within the basic unit information. In DVD-video recording, the preferred embodiment of the present invention, the basic unit is referred to as a video object (VOB), and the basic unit information is referred to as video object information (VOBI).
At the time of reproduction, the system controller 130 checks the first or second audio state information stored in the basic unit information reproduced through the optical pickup unit 100, the RF AMP 104, and the data decoder 106, and controls the key input unit and display 128 to display the audio state information on a menu or similar interface so that the user can recognize it. The audio data in the moving image data recorded as a plurality of basic units on the recording medium is then reproduced by the reproduction modules 100 to 114 shown in fig. 1 according to the user's selection. This will be explained later in connection with figs. 5 to 7.
In order to facilitate understanding of the present invention, a hierarchical structure of moving image data will now be described with reference to fig. 2.
When the user records moving image data, the moving image data is actually recorded on the recording medium using the recording modules 116 to 126 shown in fig. 1. In DVD video recording, the recorded data is divided into units called video objects (VOBs). That is, the moving image data recorded from when the user presses the record start button until the user presses the record stop button constitutes one VOB.
A plurality of VOBs are recorded on the recording medium, for example, VOB #1, VOB #2, and VOB #3 shown in fig. 2. As described above, video data, audio data, and graphic data are mixed together and recorded in one VOB. In the present invention, these VOBs are referred to as real-time bitstream data, and in the case of DVD-video recording, each VOB is recorded in a single file.
When reproduced, the recorded VOBs are decoded and reproduced by the reproduction modules 100 to 114 shown in fig. 1. It is useful to separately record the information required for VOB reproduction, for example the width and height (resolution) of the video data and the coding mode of the audio data, for each VOB. Further, when a VOB has been encoded at a variable bit rate (VBR), the position of data in the VOB does not correspond linearly to the reproduction time, so it is also useful to separately record the data position for each reproduction time in order to perform specific reproduction functions such as time search. These data constitute the VOBI. A VOBI exists for each VOB (VOBI #1, VOBI #2, and VOBI #3 in fig. 2), and each VOBI includes the information required for reproducing or editing the corresponding VOB.
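A time-search aid of this kind can be sketched as a sorted list of (time, byte offset) entries searched with binary search. The entry layout and function name are hypothetical; the idea it illustrates, that position does not scale linearly with time under VBR, is the one described above.

```python
# Hypothetical sketch of a VOBI-style time map: the byte offset reached at
# each time interval is recorded, and a time search walks the map to find
# where decoding should start. Under VBR the offsets grow unevenly.
import bisect

def find_offset(time_map, seconds):
    """time_map: sorted list of (time_sec, byte_offset) entries.
    Return the byte offset of the last entry at or before `seconds`."""
    times = [t for t, _ in time_map]
    i = bisect.bisect_right(times, seconds) - 1
    return time_map[max(i, 0)][1]

time_map = [(0, 0), (1, 40_000), (2, 95_000), (3, 120_000)]  # uneven VBR steps
assert find_offset(time_map, 2.5) == 95_000
assert find_offset(time_map, 0.0) == 0
```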
One program may be one unit of moving image information to the user. That is, the user knows that a plurality of programs are recorded on the recording medium. In DVD video recording, there is a relationship between a program and a VOB such that one program includes a plurality of cells, and one cell represents a part or the whole of one VOB. Therefore, one program contains a part or all of a plurality of VOBs.
Generally, one program includes one cell, and one cell corresponds to one complete VOB. Here, when a program is subjected to an editing process in response to an instruction from the user (for example, partial deletion of the program, merging of the programs, or generation of a program in an order desired by the user), the form of the program is slightly more complicated than the general form described above.
The information about the plurality of programs constitutes program chain information (PGCI), and the VOBI and PGCI together constitute navigation data. That is, the real-time bitstream carrying the moving image data is recorded on the recording medium together with the navigation data, the information required to reproduce the moving image data.
As described above, the program is the unit finally recognized by the user, and it is presented to the user through a menu or similar interface. A menu entry corresponding to each program (for example, program #1, program #2, and program #3 in fig. 2) is displayed. When the user selects a program #n, the cells associated with the selected program are looked up, and the corresponding portions of the VOBs indicated by those cells are reproduced. The information required for this reproduction is obtained from the corresponding VOBI.
Fig. 3A and 3B show the internal structure of the VOB shown in fig. 2. Referring to fig. 3A and 3B, one VOB includes a plurality of video object units (VOBUs), and each VOBU includes a plurality of video packs, audio packs, and/or graphic packs. VOBUs relate to methods of encoding video data. The MPEG standard used as a video encoding method utilizes correlation between frames of moving image data.
In moving image data composed of several tens of frames per second, successive frames generally carry much the same information. For example, in a moving image of a person in motion, the background of each image stays constant, and only the small motion of the person differs from frame to frame. Thus, the first frame is recorded in its entirety, while for the following frames only the portions that differ from the previous frame are recorded; in this way the total amount of data to be recorded is significantly reduced.
The MPEG encoding method is conceptually based on this approach. It has the disadvantage, however, that an intermediate frame must be recovered from the previous frames: even to reproduce a middle frame, all preceding frames must be decoded starting from the very first. MPEG overcomes this disadvantage with the group of pictures (GOP) structure, a group consisting of a predetermined number of frames in which the first frame is recorded with all of its information, without reference to previous frames. To reproduce an intermediate frame, decoding can then start from the first frame of the GOP containing it. A typical GOP consists of 12 to 15 frames, and a VOBU includes a plurality of GOPs.
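The GOP seek rule can be sketched directly. The 15-frame GOP size is one of the typical values cited above, and the function name is illustrative; real streams may use varying GOP lengths, which a player resolves via its navigation data rather than arithmetic.

```python
# Sketch of the GOP seek rule above: to reproduce an arbitrary frame,
# decoding must start from the first (fully recorded) frame of the GOP
# containing it, under the simplifying assumption of fixed-size GOPs.

GOP_SIZE = 15

def gop_start_frame(target_frame: int, gop_size: int = GOP_SIZE) -> int:
    """Return the frame index where decoding must begin for `target_frame`."""
    return (target_frame // gop_size) * gop_size

assert gop_start_frame(0) == 0
assert gop_start_frame(22) == 15   # frame 22 lies in the second GOP (15..29)
```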
A VOBU comprises a plurality of video packs, audio packs, and/or graphic data packs generated based on the MPEG system standard format. Each packet includes information about its type.
Fig. 3A relates to a case in which one type of audio stream exists for one video stream generated as packets, and fig. 3B relates to a case in which two types of audio streams exist for one video stream generated as packets. When there are a plurality of types of audio streams as described above, a user can select and reproduce one desired type of audio stream.
The pack structure will now be described in more detail with reference to figs. 4A to 4D. A pack typically contains one packet. In DVD video recording, one pack holds at most two packets, and when two packets are present, one of them must be a padding packet that simply occupies space in the data.
Each packet is divided into a packet header and a payload. In the packet header, information indicating the type of the packet is recorded in a parameter called the stream identifier (stream_id). For video, the stream_id is the binary value "11100000b" shown in fig. 4A. For audio, only MPEG audio can be represented directly by the stream_id, shown in fig. 4B as "1100000xb", where "x" takes "0" or "1" and thus provides two audio streams.
When the audio is AC-3 audio or LPCM (linear PCM) audio, a further step is needed to identify it: AC-3 and LPCM audio share the same stream_id, whose value is "10111101b". Header information for the AC-3 or LPCM audio is recorded in the payload, and the actual audio data follows it. This header includes a parameter called the substream identifier (substream_id). For AC-3, "1000000xb" is stored in the substream_id, as shown in fig. 4D. For LPCM, "1010000xb" is stored in the substream_id, as shown in fig. 4C. In both cases "x" may be "0" or "1", so two audio streams can be provided. In the present invention, "x" = "0" corresponds to the first audio (audio 1), and "x" = "1" corresponds to the second audio (audio 2).
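The identifier values above can be combined into a small classifier. The bit patterns are the ones given in the text; the function name and return convention are illustrative.

```python
# Sketch decoding the identifiers above: stream_id distinguishes video, MPEG
# audio, and the private stream shared by AC-3/LPCM; for the private stream,
# substream_id in the payload separates AC-3 from LPCM, and the low bit "x"
# selects the first (0) or second (1) audio.

def classify(stream_id, substream_id=None):
    if stream_id == 0b11100000:                       # 11100000b: video
        return ("video", None)
    if stream_id & 0b11111110 == 0b11000000:          # 1100000xb: MPEG audio
        return ("mpeg_audio", stream_id & 1)
    if stream_id == 0b10111101 and substream_id is not None:
        if substream_id & 0b11111110 == 0b10000000:   # 1000000xb: AC-3
            return ("ac3", substream_id & 1)
        if substream_id & 0b11111110 == 0b10100000:   # 1010000xb: LPCM
            return ("lpcm", substream_id & 1)
    return ("unknown", None)

assert classify(0b11100000) == ("video", None)
assert classify(0b11000001) == ("mpeg_audio", 1)        # second MPEG audio
assert classify(0b10111101, 0b10100000) == ("lpcm", 0)  # first LPCM audio
```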
The first audio is used to record the basic original audio. Later, the user may record secondary audio on a portion of the first audio. At this time, the original audio overwritten by the secondary audio is deleted.
The second audio may be used to record original audio, or it may be used to record secondary audio. When the second audio is used to record original audio, two original audios exist, the first audio and the second audio; original audio cannot be recorded on the second audio alone, without the first audio. When the second audio is original audio, the user recognizes it as original audio from the beginning, so the first audio and the second audio have the same priority. When secondary audio is recorded, either the first audio or the second audio is selected, and the secondary audio is recorded over the selected audio.
When the second audio is to be used for secondary audio from the beginning, the following restriction must be observed: at the time of initial recording, second audio with exactly the same content as the first audio must be recorded. Because its content is identical to that of the first audio, the user cannot distinguish the second audio. This state is called the dummy audio state.
Later, when the user wishes to record secondary audio, it is recorded over the corresponding portion of the second audio in the dummy audio state. Once the second audio has been partially overwritten by secondary audio in this way, the user becomes aware of it and can select and reproduce either the first audio or the second audio. Since the second audio is identical to the first audio except for the portion carrying the secondary audio, the same content is reproduced even if the audio selection is changed. The reason the second audio is recorded with the same content as the first audio is that it is difficult for a digital recording medium to extract and reproduce only the portion on which secondary audio has been recorded. That is, if the second audio track were left empty except for the portion occupied by the secondary audio, no audio could be reproduced from the unrecorded portions when the second audio is selected, which could confuse the user.
Alternatively, the first audio could be reproduced over the empty portions of the second audio, and the second audio over the portions on which secondary audio has been recorded. In that case, however, a determination must be made as to whether audio has actually been recorded at each point of the second audio, which makes this method difficult to implement.
In the present invention, the states of the first audio and the second audio are recorded in the VOBI corresponding to the VOB, so that they are known to the user before recording or reproduction, and so that appropriate operations can be completed when the user changes the audio during reproduction.
Fig. 5 shows an example of the structure of first audio state information and second audio state information in video object information (VOBI) according to the present invention.
The first audio state information A0_STATUS represents either a state (00b) in which the original audio remains recorded, or a state (01b) in which part or all of the original audio has been overwritten by secondary audio.
State 00b indicates that the first audio is the original audio. When the second audio is not recorded, the user records secondary audio over part or all of the original audio. The first audio is then overwritten by the secondary audio, and the state information of the first audio changes from "00b" to "01b".
In addition to the two states above, the second audio state information A1_STATUS may represent a dummy state (10b) in which the content of the second audio is identical to that of the first audio, or a state (11b) in which part or all of the second audio has been overwritten by secondary audio. Accordingly, one of these four states is recorded as the second audio state information A1_STATUS.
Accordingly, the first audio state information A0_STATUS and the second audio state information A1_STATUS are defined as follows:
A0_STATUS:
00b … the first audio is the original audio.
01b … the first audio is secondary audio that is re-recorded over part or all of the original audio.
A1_STATUS:
00b … the second audio is the original audio.
01b … the second audio is secondary audio that is re-recorded over part or all of the original audio.
10b … the second audio is dummy audio whose content is identical to that of the original audio.
11b … the second audio is secondary audio re-recorded over part or all of the dummy audio.
In using the first and second audio state information, the case in which the second audio is in the dummy audio state is the most important. When state information indicating that the second audio is dummy audio is recorded, the situation appears to the user as if no second audio were recorded, and it can be displayed as such on a menu or similar interface. Even when the user invokes the audio switching function, the first audio continues to be reproduced. That is, in state "10b", representing dummy audio, reproduction of the first audio may be set as the default.
When the second audio is in one of the three states 00b, 01b, and 11b, other than the dummy audio state 10b, audio data whose content differs from that of the first audio has been recorded. The user, knowing this state of the second audio, can therefore switch audio tracks. In particular, when the second audio state information A1_STATUS is state 11b, in which the second audio is secondary audio re-recorded over part or all of the dummy audio, reproduction of the second audio may be set as the default.
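The default-track rule described in the last two paragraphs can be summarized in a small sketch. The function name and return values are hypothetical; the 00b/01b default is an assumption, since the text specifies a default only for states 10b and 11b.

```python
def default_track(a1_status: int) -> str:
    """Pick the default reproduction track from A1_STATUS (2 bits)."""
    if a1_status == 0b10:
        # Second audio is a pure dummy of the first: play first audio.
        return "first"
    if a1_status == 0b11:
        # Dummy partially overwritten by secondary audio: the patent
        # suggests defaulting to the second audio in this case.
        return "second"
    # 00b / 01b: second audio has distinct content; the user may switch.
    # Defaulting to the first audio here is an assumption.
    return "first"
```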
Thus, in the present invention, a menu or similar interface lets the user know whether the first or second audio is the original audio or secondary audio re-recorded over part or all of it, so that the user can recognize the audio state of the corresponding VOB.
Fig. 6 shows the state change from before to after the secondary audio is re-recorded when only the first audio is present. That is, fig. 6 shows the first audio in the original audio state (A0_STATUS = 00b), and the condition in which only portions A5 through A8 of the original audio are overwritten by secondary audio B1 through B4 through a rewriting/editing operation (A0_STATUS = 01b). In this case, the first audio state information A0_STATUS is updated from "00b" to "01b", and the change in the audio state can be displayed through a menu or similar interface on the key input unit and display 128 shown in fig. 1, so that the user is aware of the change.
Fig. 7 shows the state change from before to after the secondary audio is re-recorded when both the first and second audio are present. When the first audio is the original audio (A0_STATUS = 00b) and the second audio is in the dummy state in which it is identical to the first audio (A1_STATUS = 10b), only reproduction of the first audio is set as the default, and the reproduction direction is indicated by a thick solid arrow.
Fig. 7 also shows the second audio in the state in which portions A5 through A8 of the dummy audio are overwritten by secondary audio B1 through B4. The second audio state information A1_STATUS is updated from "10b" to "11b". Here, the first audio remains in the original audio state (A0_STATUS = 00b).
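The status transitions triggered by re-recording (00b to 01b for the first audio; 10b to 11b, or 00b to 01b, for the second audio) can be sketched as follows. The function name and tuple interface are hypothetical illustrations, not part of the patent.

```python
def rerecord(track: int, a0_status: int, a1_status: int) -> tuple:
    """Update (A0_STATUS, A1_STATUS) when secondary audio is
    recorded over part or all of the given track (0 = first audio,
    1 = second audio)."""
    if track == 0:
        a0_status = 0b01            # original -> modified (00b -> 01b)
    elif a1_status == 0b10:
        a1_status = 0b11            # dummy -> dummy overwritten
    elif a1_status == 0b00:
        a1_status = 0b01            # original -> modified
    # Tracks already in 01b or 11b keep their modified state.
    return a0_status, a1_status
```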
After part of the second audio in the dummy state has been overwritten by the secondary audio, the user can switch audio through a menu or similar interface. At this time, the content of the reproduced audio changes, as indicated by the thick solid arrow showing the reproduction direction. That is, as shown in fig. 7, the first audio A1 through A4 is reproduced first; then, owing to an audio change made by the user, the secondary audio B1 through B3 of the second audio is reproduced; finally, owing to another audio change made by the user, the first audio is reproduced again from A8.
As described above, in the present invention, audio state information related to a moving image, obtained through recording/rewriting/editing, is stored in the recording unit information VOBI corresponding to the recording unit VOB, and this audio state information is presented to the user before the moving image is reproduced, so that the user can recognize the state of the audio and appropriately handle audio changes during reproduction.
Claims (2)
1. A moving image recording method for a recording medium including moving image data and moving image data information on the moving image data, the moving image data including at least one of video data and audio data, the method comprising:
recording the first audio or the first audio and the second audio as audio data; and
recording, in the moving image data information, first state information on the first audio and, when the second audio is present, second state information on the second audio, wherein the first state information on the first audio represents an original audio state, in which the first audio is the original audio, or a modified audio state, in which part or all of the original audio has been overwritten by recording secondary audio over the original audio, and, when the second audio is present, the second state information on the second audio represents a state in which the second audio is the original audio, a modified state in which part or all of the original audio has been overwritten by recording secondary audio over the original audio, a state in which the second audio is dummy audio identical to the first audio, or a state in which part or all of the dummy audio serving as the second audio has been overwritten by secondary audio.
2. The recording method of claim 1, further comprising:
recording secondary audio over the first audio or the second audio; and
updating the first state information of the first audio when the secondary audio is recorded over the first audio, and updating the second state information of the second audio when the secondary audio is recorded over the second audio.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| KR1019990037307A KR100657241B1 (en) | 1999-09-03 | 1999-09-03 | Video recording / playback apparatus and method and recording medium |
| KR37307/1999 | 1999-09-03 | | |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| HK1050420A1 HK1050420A1 (en) | 2003-06-20 |
| HK1050420B true HK1050420B (en) | 2006-10-27 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN1263030C (en) | Method for recording moving images | |
| CN1275258C (en) | Digital Video Processing Method | |
| CN1146902C (en) | Digital recording/reproducing method and apparatus for video having superimposed sub-information | |
| CN1134985C (en) | Digital signal editing apparatus and method | |
| CN100483534C (en) | Data allocation in DVD recording | |
| US7734149B2 (en) | Apparatus and method for recording/reproducing moving picture and recording medium | |
| HK1050420B (en) | Method for recording moving picture | |
| HK1050419B (en) | Apparatus for recording moving picture and method for reproducing moving picture | |
| HK1050418A (en) | Apparatus and method for recording/reproducing moving picture and recording medium | |
| HK1061303B (en) | Method for recording/reproducing moving picture | |
| KR100708208B1 (en) | Video playback device and method | |
| JPH11213564A (en) | Information encoding device, information decoding device, and information encoding / decoding recording / reproducing device |