
WO2002089491A1 - Dispositif et procédé de traitement d'images (Image processing device and method) - Google Patents

Dispositif et procédé de traitement d'images (Image processing device and method)

Info

Publication number
WO2002089491A1
WO2002089491A1 (PCT/JP2002/004074)
Authority
WO
WIPO (PCT)
Prior art keywords
dct
field
block
image processing
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/JP2002/004074
Other languages
English (en)
Japanese (ja)
Inventor
Haruo Togashi
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sony Corp
Original Assignee
Sony Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Corp filed Critical Sony Corp
Publication of WO2002089491A1 publication Critical patent/WO2002089491A1/fr
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103 - Selection of coding mode or of prediction mode
    • H04N19/112 - Selection of coding mode or of prediction mode according to a given display mode, e.g. for interlaced or progressive display mode

Definitions

  • the present invention relates to an image processing device and an image processing method, and more particularly to an image processing device and an image processing method for performing image compression processing.
  • Digital video coding systems have been standardized internationally by MPEG (Moving Picture Experts Group) of ISO/IEC and as H.262 of ITU-T, and with the development of multimedia communications in recent years, still higher quality is required.
  • DCT: Discrete Cosine Transform
  • Switching between the frame DCT mode and the field DCT mode is decided within a picture on a macroblock-by-macroblock basis.
  • In a conventional DCT mode determination method, the correlation between pixels within a field and the correlation between pixels within a frame are first calculated. These are then simply compared: if the field correlation is stronger than the frame correlation, field DCT is selected, and if the frame correlation is stronger than the field correlation, frame DCT is selected.
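  • Purely as an illustration (not code from the patent), a minimal sketch of this conventional rule might look as follows, where var_frame and var_field are the frame and field correlation measures for a macroblock and, as defined later in the description, a smaller value means a stronger correlation:

```python
def conventional_dct_mode(var_frame: float, var_field: float) -> str:
    """Conventional (unweighted) DCT mode decision.

    var_frame / var_field are correlation measures for the macroblock in
    frame / field order; a smaller value means a stronger correlation.
    """
    # Field correlation stronger than frame correlation -> field DCT,
    # otherwise frame DCT.
    return "field" if var_field < var_frame else "frame"
```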
  • With the conventional DCT mode determination technique described above, however, when a VTR is dubbed or an encoder and a decoder are cascaded, distortion introduced during encoding can cause a DCT mode different from the previous one to be selected, and image quality deteriorates.
  • VTR: Video Tape Recorder
  • For example, a macroblock that was originally coded with frame DCT may, after repeated dubbing, accumulate distortion from the encoding and decoding and be switched to the field DCT mode.
  • The present invention has been made in view of this point, and an object of the present invention is to provide an image processing apparatus and an image processing method that prevent inversion of the DCT mode for the same block and reduce image quality degradation.
  • To this end, the present invention provides an image processing apparatus that compresses and codes an image signal, comprising: correlation information detecting means for detecting, for a block of the image signal, correlation information in which one of the frame correlation and the field correlation is weighted; and coding means for performing intra-frame coding or intra-field coding on the block of the image signal in accordance with the correlation information detected by the correlation information detecting means.
  • The present invention also provides an image processing method for compressing and encoding an image signal, comprising: a detecting step of detecting, for a block of the image signal, correlation information in which one of the frame correlation and the field correlation is weighted; and an encoding step of performing intra-frame encoding or intra-field encoding on the block of the image signal in accordance with the correlation information detected in the detecting step.
  • In this way, the present invention detects, for a block of an image signal, correlation information in which one of the frame correlation and the field correlation is weighted, and performs intra-frame coding or intra-field coding according to the detected correlation information, so that even when the frame correlation and the field correlation of the image signal are close to each other, intra-frame coding or intra-field coding of the block can be performed adaptively.
  • FIG. 1 is a principle diagram of the image processing apparatus of the present invention.
  • FIG. 2 is a diagram showing a macroblock.
  • FIG. 3 is a diagram for explaining the frame correlation information Var1.
  • FIG. 4 is a diagram for explaining the field correlation information Var2.
  • FIG. 5 is a graph of the determination formula.
  • FIG. 6 is a flowchart of the image processing method of the present invention.
  • FIGS. 7A and 7B show the configuration of a digital VTR.
  • FIG. 8 shows the configuration of an MPEG encoder.
  • FIG. 9 and FIG. 10 are diagrams showing an outline of packing.
  • BEST MODE FOR CARRYING OUT THE INVENTION: FIG. 1 is a principle diagram of the image processing apparatus of the present invention.
  • the image processing apparatus 10 performs image compression processing using orthogonal transform coding on the input image.
  • As the orthogonal transform coding, for example, DCT (Discrete Cosine Transform) can be applied.
  • In the following description, the orthogonal transform coding is assumed to be DCT.
  • the frame DCT unit 11 performs frame DCT on the block of the image signal based on the determination result of the coding determination unit 13.
  • Field DCT section 12 performs field DCT on the block of the image signal based on the determination result of coding determination section 13.
  • The coding determination unit 13 weights at least one of the frame correlation information, which indicates the frame correlation of the block, and the field correlation information, which indicates the field correlation, so that the determination result is constant for the same block, and adaptively determines and controls whether frame DCT or field DCT coding is performed.
  • Suppose, for example, that frame DCT is selected in the first encoding of the block bL of the image G.
  • With the conventional method, field DCT may be selected for the same block bL of the image Ga, which has been encoded N times, because of the distortion added by dubbing and the like.
  • When the DCT mode is inverted in this way and different coding is applied to the same block bL, the image quality of the image Ga is degraded.
  • In the present invention, the image G is likewise dubbed N times, producing the image Gb to which distortion due to dubbing has been added.
  • The coding determination unit 13, however, always processes the block bL without inverting the DCT mode; that is, the frame DCT executed the first time remains selected. As a result, degradation of the image quality of the image Gb can be suppressed.
  • In this embodiment, the adaptive switching between frame DCT and field DCT for a block is performed as follows: the sums of squared differences between vertically adjacent pixels, which are considered to contribute strongly to the vertical components of the DCT coefficients, are calculated for the frame arrangement and the field arrangement of the block, the calculated values are weighted, and which DCT mode, frame DCT or field DCT, is to be executed is decided adaptively.
  • Var1 in equation (1) is the sum of squared difference values between vertically adjacent pixels in the frame, and Var2 in equation (2) is the sum of squared difference values between vertically adjacent pixels in the field.
  • In this embodiment, Var1 is the frame correlation information and Var2 is the field correlation information.
  • In the equations, u denotes the horizontal direction, v denotes the vertical direction, and X[a][b] denotes the value of the pixel located at vertical coordinate a and horizontal coordinate b.
  • The values of the correlation information Var1 and Var2 are small when the correlation is strong and large when the correlation is weak.
  • FIG. 2 is a diagram showing a macroblock.
  • One macroblock consists of 16 lines × 16 pixels.
  • With u taken in the horizontal direction and v in the vertical direction, the pixel values of the coordinates in the macroblock are represented by X[a][b].
  • The top-left pixel value of the macroblock is X[0][0], the top-right pixel value is X[0][15], the bottom-left pixel value is X[15][0], and the bottom-right pixel value is X[15][15].
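  • Equations (1) and (2) themselves are not reproduced in this text. Based on the descriptions above, a plausible reconstruction is given below; the exact summation limits are an assumption and are not quoted from the patent.

```latex
% Assumed reconstruction of equations (1) and (2) for a 16x16 macroblock.
% Var1: differences between vertically adjacent lines of the frame (v, v+1).
% Var2: differences between vertically adjacent lines of the same field (v, v+2).
\mathrm{Var1} = \sum_{u=0}^{15} \sum_{v=0}^{14} \left( X[v][u] - X[v+1][u] \right)^{2} \qquad (1)
\mathrm{Var2} = \sum_{u=0}^{15} \sum_{v=0}^{13} \left( X[v][u] - X[v+2][u] \right)^{2} \qquad (2)
```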
  • FIG. 3 is a diagram for explaining the frame correlation information Var1, and FIG. 4 is a diagram for explaining the field correlation information Var2.
  • The DCT mode is determined by weighting the Var1 and Var2 calculated in this way.
  • The weighted determination formula is the following formula (3), where a, b, c, and d are constants: a × Var1 + b > c × Var2 + d   (3)
  • If equation (3) is satisfied, the macroblock to be encoded is recognized as a macroblock with strong field correlation and large motion, and field DCT is selected. If equation (3) is not satisfied, the macroblock to be encoded is recognized as a macroblock with strong frame correlation and little motion, and frame DCT is selected.
  • In other words, the DCT mode is determined by comparing the weighted correlation information, instead of simply comparing the frame correlation and the field correlation.
  • the values of the constants a, b, c, and d in Equation (3) may be fixedly determined or may be variably set according to the input image.
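  • To make the weighted decision concrete, the sketch below (an illustration only, not code from the patent) computes Var1 and Var2 for a 16 × 16 macroblock using the reconstructed summation limits given above and applies formula (3); the default constants a, b, c, d are placeholder values, since the patent only states that they are constants that may be fixed or set according to the input image.

```python
def compute_var1_var2(mb):
    """mb: 16x16 macroblock indexed mb[v][u] (v = vertical, u = horizontal).

    Var1: sum of squared differences between vertically adjacent pixels
          in the frame (lines v and v + 1).
    Var2: sum of squared differences between vertically adjacent pixels
          in the same field (lines v and v + 2).
    Smaller values mean stronger correlation.
    """
    var1 = sum((mb[v][u] - mb[v + 1][u]) ** 2
               for u in range(16) for v in range(15))
    var2 = sum((mb[v][u] - mb[v + 2][u]) ** 2
               for u in range(16) for v in range(14))
    return var1, var2


def choose_dct_mode(mb, a=1.0, b=0.0, c=1.0, d=4096.0):
    """Weighted decision of formula (3): a*Var1 + b > c*Var2 + d -> field DCT.

    The constant values here are placeholders for illustration; weighting the
    comparison (rather than testing Var1 > Var2 directly) biases the decision
    so that it stays constant for the same block across dubbing generations.
    """
    var1, var2 = compute_var1_var2(mb)
    return "field" if a * var1 + b > c * var2 + d else "frame"
```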
  • FIG. 5 shows the determination formula as a graph, with Var1 on the vertical axis and Var2 on the horizontal axis.
  • In the conventional DCT mode determination, the decision boundary is the line Var1 = Var2, so for a block lying near this line the distortion added by dubbing and the like can push it across the boundary; a DCT mode inversion then occurs, and a field DCT different from the coding initially applied to the same block is selected.
  • In this embodiment, the block group BL near the line Var1 = Var2 lies in the region satisfying Var1 ≤ A × Var2 + B with respect to the determination formula, whereas a block with remarkable motion lies in the region satisfying Var1 > A × Var2 + B.
  • Consequently, field DCT is selected for a block with remarkable motion, and frame DCT is selected for a block in the region satisfying Var1 ≤ A × Var2 + B with respect to the determination formula of this embodiment.
  • FIG. 6 is a diagram showing a flowchart of the image processing method of this embodiment.
  • In step S1, at least one of the frame correlation information Var1 and the field correlation information Var2 of the block is weighted so that the determination result is constant for the same block of the image signal.
  • In step S2, based on the determination formula (3) described above, it is adaptively decided whether frame DCT or field DCT coding is to be performed: if equation (3) is satisfied, the process proceeds to step S3; otherwise, it proceeds to step S4.
  • FIGS. 7A and 7B show an example of the configuration of a digital VTR 100 to which this embodiment is applied.
  • the digital VTR 100 is capable of directly recording a digital video signal compression-encoded by the MPEG system on a recording medium.
  • Two types of serial digital signals are input to this recording system from the outside: a serial data interface (SDI) signal and a serial data transfer interface (SDTI) signal.
  • SDI is an interface specified by SMPTE for transmitting (4:2:2) component video signals, digital audio signals, and additional data.
  • the SDTI is an interface for transmitting an MPEG elementary stream (hereinafter referred to as MPEG ES), which is a stream in which a digital video signal is compression-encoded by the MPEG method.
  • In the SDTI-CP (Content Package) format, the MPEG ES is handled in access units and packed into packets in frame units.
  • SDTI-CP has ample transmission bandwidth (a clock rate of 27 MHz or 36 MHz and a stream bit rate of 270 Mbps or 360 Mbps), so the ES can be sent in bursts within one frame period.
  • the SDI signal transmitted by the SDI is input to the SDI input unit 101.
  • The SDI input unit 101 converts the supplied SDI signal from a serial signal to a parallel signal and outputs it, and also extracts the input synchronization signal, which is the phase reference of the input contained in the SDI signal, and outputs it to the timing generator TG 102.
  • the SDI input unit 101 separates a video signal and an audio signal from the converted parallel signal.
  • the separated video input signal and audio input signal are output to the MPEG encoder 103 and the delay circuit 104, respectively.
  • the timing generator TG102 extracts a reference synchronization signal from the input external reference signal REF.
  • The timing generator TG 102 generates the timing signals necessary for this digital VTR 100 in synchronization with a designated one of the reference synchronization signal and the input synchronization signal supplied from the SDI input unit 101, and supplies them to each block as timing pulses.
  • the MPEG encoder 103 is a component including the functions of the image processing device 10 or the image processing method according to the present embodiment.
  • the MPEG encoder 103 converts the input image signal, that is, the video input signal, into coefficient data by DCT conversion, quantizes the coefficient data, and then performs variable length coding.
  • The variable-length-coded (VLC) data output from the MPEG encoder 103 is an elementary stream (ES) conforming to MPEG-2. This output is supplied to one input terminal of a recording-side multi-format converter (hereinafter referred to as the recording-side MFC) 106.
  • The delay circuit 104 functions as a delay line for aligning the input audio signal, which is not compressed, with the processing delay of the video signal in the MPEG encoder 103.
  • The audio signal delayed by the delay circuit 104 is output to the ECC encoder 107; in the digital VTR 100 according to this embodiment, the audio signal is handled as an uncompressed signal.
  • the SDTI signal transmitted and supplied from the outside by the SDTI is input to the SDTI input section 105.
  • The SDTI signal is synchronization-detected by the SDTI input unit 105, temporarily stored in a buffer, and the elementary stream is extracted from it. The extracted elementary stream is supplied to the other input terminal of the recording-side MFC 106.
  • the synchronization signal obtained by the synchronization detection is supplied to the above-described timing generator TG 102 (not shown).
  • The SDTI input unit 105 also extracts a digital audio signal from the input SDTI signal.
  • the extracted digital audio signal is supplied to the ECC encoder 107.
  • In this way, the digital VTR 100 can directly accept an MPEG ES, independently of the baseband video signal input from the SDI input unit 101.
  • The recording-side MFC circuit 106 has a stream converter and a selector, and selects either the MPEG ES derived from the SDI input unit 101 (via the MPEG encoder 103) or the MPEG ES supplied from the SDTI input unit 105.
  • In the selected MPEG ES, the DCT coefficients are regrouped by frequency component across the plurality of DCT blocks constituting one macroblock, and the grouped frequency components are rearranged in order starting from the lowest frequency component.
  • The stream in which the MPEG ES coefficients have been rearranged in this way is hereinafter referred to as a converted elementary stream. By rearranging the MPEG ES in this manner, as many DC coefficients and low-order AC coefficients as possible can be picked up during search playback, which contributes to improving the quality of search pictures.
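  • As a rough illustration of this regrouping (the exact interleaving order is an assumption, not quoted from the patent), the sketch below takes the coefficient lists of the DCT blocks that make up one macroblock, each already in low-to-high frequency order, and interleaves them frequency by frequency so that all DC coefficients come first:

```python
def regroup_coefficients(dct_blocks):
    """dct_blocks: list of per-DCT-block coefficient lists, each assumed to be
    in low-to-high frequency (e.g. zigzag-scanned) order and of equal length.

    Returns the coefficients regrouped across the blocks of one macroblock:
    all DC coefficients first, then the first AC coefficient of each block,
    and so on, so that low-order coefficients appear first in the stream.
    """
    regrouped = []
    for freq_index in range(len(dct_blocks[0])):
        for block in dct_blocks:
            regrouped.append(block[freq_index])
    return regrouped


# Example with two tiny "blocks" of four coefficients each:
# regroup_coefficients([[10, 1, 2, 3], [20, 4, 5, 6]])
# -> [10, 20, 1, 4, 2, 5, 3, 6]
```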
  • the converted elementary stream is supplied to the ECC encoder 107.
  • The ECC encoder 107 is connected to a large-capacity main memory (not shown) and incorporates a packing and shuffling unit, an outer-code encoder for audio, an outer-code encoder for video, an inner-code encoder, an audio shuffling unit, and a video shuffling unit. The ECC encoder 107 further includes a circuit for adding an ID in sync-block units and a circuit for adding a synchronization signal.
  • a product code is used as an error correction code for a video signal and an audio signal.
  • The product code encodes the outer code in the vertical direction of a two-dimensional array of video or audio data and the inner code in the horizontal direction, so that each data symbol is doubly encoded.
  • a Reed-Solomon code can be used as the outer code and the inner code.
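  • The two-dimensional arrangement can be sketched as follows; this is a conceptual illustration only, and rs_parity is a placeholder stand-in for a real Reed-Solomon encoder rather than an actual library call.

```python
def rs_parity(symbols, nparity):
    """Placeholder for a Reed-Solomon parity computation.

    A real encoder would return `nparity` Reed-Solomon parity symbols for
    `symbols`; a trivial XOR-based stand-in is used here purely to show
    where the parity goes in the product-code layout.
    """
    acc = 0
    for s in symbols:
        acc ^= s
    return [acc] * nparity


def product_code_encode(data, rows, cols, outer_parity, inner_parity):
    """Arrange rows*cols data symbols as a 2-D array, append outer-code parity
    to each column (vertical direction) and inner-code parity to each row
    (horizontal direction), so every data symbol is doubly protected."""
    grid = [list(data[r * cols:(r + 1) * cols]) for r in range(rows)]
    grid += [[0] * cols for _ in range(outer_parity)]  # room for outer parity

    # Outer code: encode each column in the vertical direction.
    for c in range(cols):
        column = [grid[r][c] for r in range(rows)]
        for i, p in enumerate(rs_parity(column, outer_parity)):
            grid[rows + i][c] = p

    # Inner code: encode each row (data rows and outer-parity rows) horizontally.
    return [row + rs_parity(row, inner_parity) for row in grid]
```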
  • The converted elementary stream output from the MFC circuit 106 and the audio signals output from the SDTI input unit 105 and the delay circuit 104 are supplied to the ECC encoder 107. The ECC encoder 107 performs shuffling and error-correction coding on the supplied converted elementary stream and audio signals, adds an ID and a synchronization signal to each sync block, and outputs the result as recording data.
  • the recording data output from the ECC encoder 107 is converted into a recording RF signal by an equalizer EQ 108 including a recording amplifier.
  • the recording RF signal is supplied to a rotating drum 109 provided with a rotating head in a predetermined manner, and is recorded on a magnetic tape 110.
  • the rotating drum 109 is actually provided with a plurality of magnetic heads having different azimuths of heads forming adjacent tracks.
  • The recording data may be scrambled as needed. Digital modulation may also be applied at the time of recording, and partial-response class 4 with Viterbi coding may further be used.
  • the equalizer 108 includes both a recording-side configuration and a reproduction-side configuration.
  • a reproduction signal reproduced by the rotating drum 109 from the magnetic tape 110 is supplied to a reproduction-side configuration of an equalizer 108 including a reproduction amplifier and the like.
  • In the equalizer 108, equalization and waveform shaping are performed on the reproduced signal, and demodulation of the digital modulation, Viterbi decoding, and so on are performed as needed.
  • the output of the equalizer 108 is supplied to the ECC decoder 111.
  • The ECC decoder 111 performs processing reverse to that of the ECC encoder 107 described above, and includes a large-capacity main memory, an inner-code decoder, deshuffling units for audio and video, and an outer-code decoder. The ECC decoder 111 further includes a deshuffling and depacking unit and a data interpolation unit for video, as well as an audio AUX separation unit and a data interpolation unit for audio.
  • The ECC decoder 111 performs synchronization detection on the reproduced data, detects the synchronization signal added to the beginning of each sync block, and cuts out the sync blocks.
  • the reproduced data is subjected to error correction of the inner code for each sync block, and thereafter, ID interpolation processing is performed on the sync block.
  • The reproduced data with the interpolated IDs is separated into video data and audio data.
  • Video data and audio data are each subjected to deshuffling processing, and the order of the data shuffled during recording is restored.
  • Each of the deshuffled data is subjected to outer code error correction.
  • an error flag is set for data having an error that exceeds the error correction capability and cannot be corrected.
  • a signal ERR indicating the data including the error is output.
  • The error-corrected reproduced audio data is supplied to the SDTI output unit 115, and is also supplied to the SDI output unit 116 after being given a predetermined delay by the delay circuit 114.
  • the delay circuit 114 will be described later.
  • the error-corrected video data is supplied to the playback-side MFC circuit 112 as a playback conversion element stream.
  • The signal ERR described above is also supplied to the reproduction-side MFC circuit 112.
  • The reproduction-side MFC 112 performs a process reverse to that of the recording-side MFC 106 described above, and includes a stream converter.
  • The stream converter reverses the process performed on the recording side: the DCT coefficients arranged by frequency component across the DCT blocks are rearranged back into per-DCT-block order. The reproduced signal is thereby converted into an elementary stream compliant with MPEG-2.
  • When the signal ERR is supplied from the ECC decoder 111, the corresponding data is replaced with a signal fully compliant with MPEG-2 and output.
  • the MPEGES output from the reproduction side MFC circuit 112 is supplied to the MPEG decoder 113 and the SDTI output unit 115.
  • the MPEG decoder 113 decodes the supplied MPEGES and restores the original uncompressed video signal. That is, the MPEG decoder 113 performs an inverse quantization process and an inverse DCT process on the supplied MPEGES.
  • the decoded video signal is supplied to the SDI output unit 116.
  • The SDI output unit 116 is also supplied, via the delay circuit 114, with the audio data separated from the video data by the ECC decoder 111.
  • the supplied video data and audio data are mapped into an SDI format and converted into an SDI signal having an SDI format data structure. This SDI signal is output to the outside.
  • audio data separated from video data by the ECC decoder 111 is supplied to the SDTI output unit 115 as described above.
  • In the SDTI output unit 115, the supplied video data, as an elementary stream, and the audio data are mapped to the SDTI format and converted into an SDTI signal having the SDTI-format data structure. The converted SDTI signal is output to the outside.
  • The system controller 117 (shown abbreviated in FIGS. 7A and 7B) is composed of, for example, a microcomputer, and controls the overall operation of the digital VTR 100 by communicating with each block via the signal SY_IO.
  • The servo 118 communicates with the system controller 117 via the signal SY_SV and, via the signal SV_IO, controls the running of the magnetic tape 110 and the rotational drive of the rotating drum 109.
  • FIG. 8 schematically shows an example of the configuration of the MPEG encoder 103 in the digital VTR 100 shown in FIGS. 7A and 7B.
  • The MPEG encoder 103 is roughly composed of a blocking circuit 300, a delay circuit 301, a DCT mode determination circuit 302, a DCT circuit 303, a quantization circuit 304, and a variable-length coding (VLC) circuit 305.
  • the digital video signal supplied to the MPEG encoder 103 is divided by a blocking circuit 300 into 16-line ⁇ 16-pixel block (macroblock) units and output.
  • the data output from the blocking circuit 300 in block units is given a predetermined delay by the delay circuit 301, supplied to the DCT circuit 303, and supplied to the DCT mode determination circuit 302.
  • the DCT mode determination circuit 302 determines whether to perform the field DCT or the frame DCT on the blocked digital video signal by the determination control according to the above-described embodiment. The result of the determination is output from the DCT mode determination circuit 302 as a mode determination signal and supplied to the DCT circuit 303.
  • the DCT circuit 303 performs a DCT operation on the data in block units output from the delay circuit 301 based on the DCT mode determined by the DCT mode determination circuit 302 to generate a DCT coefficient. .
  • the generated DCT coefficient is supplied to the quantization circuit 304.
  • The DCT operation itself is performed in units of a DCT block composed of 8 lines × 8 pixels.
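  • For reference, a 2-D DCT of an 8 × 8 block can be computed, for example, with SciPy as shown below; this is merely one way to obtain the transform and is not the circuit described in the patent.

```python
import numpy as np
from scipy.fft import dct


def dct_8x8(block):
    """Forward 2-D DCT-II of an 8x8 pixel block with orthonormal scaling."""
    block = np.asarray(block, dtype=np.float64)
    # Apply the 1-D DCT along the columns, then along the rows.
    return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')


# Example: a flat block concentrates its energy in the DC coefficient.
coeffs = dct_8x8(np.full((8, 8), 128.0))
```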
  • a magnetic tape is generally used as a recording medium for recording a video signal.
  • Video signals are recorded on the magnetic tape by a magnetic head (rotating head) provided on a rotating drum, forming helical tracks inclined with respect to the tape running direction. During playback, the rotating head accurately traces the helical tracks formed during recording.
  • By making the tape running speed during reproduction higher than the speed at the time of recording, reproduction such as double-speed, triple-speed, or search playback can be performed.
  • In that case, the trace angle of the rotating head on the tape differs from the inclination angle of the helical tracks, so not all of the signals recorded on a helical track can be traced; that is, at the time of high-speed reproduction, reproduction is performed by scanning only part of each helical track.
  • Packing refers to fitting streams of different lengths into fixed frames. The stream created by the MPEG encoder 103 is variable-length coded and therefore differs in length from macroblock to macroblock, so the data is recorded after a packing operation that fits these different-length streams into fixed frames.
  • FIG. 9 and FIG. 10 are diagrams showing an outline of packing. Here, an example in which eight macroblocks are fitted into fixed frames is shown, with the macroblocks numbered #1 to #8.
  • the lengths of the eight macroblocks are different from each other due to the variable length coding.
  • The fixed frame is one sync block in length; macroblocks #1, #3, and #6 are longer than one sync block, while macroblocks #2, #5, #7, and #8 are shorter.
  • the length of macro block # 4 is equal to one sync block.
  • In packing, each macroblock is poured into a fixed-length frame one sync block long, so that the total amount of data generated in one frame period has a fixed length. A macroblock longer than one sync block is therefore divided at the position corresponding to the sync-block length, and the portions that protrude beyond the sync-block length are stuffed, in order from the beginning, into the spare areas remaining after the macroblocks that are shorter than the sync-block length.
  • each macroblock is packed into a fixed frame of the sync block length.
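  • A rough sketch of such a packing step is given below; it is an illustration under simplifying assumptions (macroblocks as byte strings, overflow simply appended to the leftover space in order), not the exact algorithm of the patent.

```python
def pack_macroblocks(macroblocks, sync_block_len):
    """Pack variable-length macroblocks into fixed frames of one sync-block
    length.  Each macroblock first occupies its own fixed frame; the portion
    of any macroblock that overflows the sync-block length is then stuffed,
    in order, into the unused space left by the shorter macroblocks."""
    frames = []
    overflow = []

    for mb in macroblocks:
        frames.append(bytearray(mb[:sync_block_len]))
        if len(mb) > sync_block_len:
            overflow.append(mb[sync_block_len:])  # protruding portion

    # Stuff the overflow pieces into the spare areas, from the beginning.
    for piece in overflow:
        for frame in frames:
            if not piece:
                break
            room = sync_block_len - len(frame)
            if room > 0:
                frame.extend(piece[:room])
                piece = piece[room:]
        # Any remainder left here would mean the data for the frame period
        # exceeds the total fixed-frame capacity.

    return frames
```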
  • Incidentally, consider the case where the DCT mode is fixed to either field DCT or frame DCT for encoding.
  • If frame DCT is fixed, a rapidly moving image, in which the spatial correlation between the lines of a frame is weak, generates a large amount of code, and the length greatly exceeds the sync-block length.
  • If field DCT is fixed, on the other hand, the coding efficiency is inferior to that of frame DCT for images with strong spatial correlation, which are common in ordinary pictures. For this reason, image quality deteriorates not only during high-speed forward and reverse reproduction but also during normal reproduction.
  • In this embodiment, by contrast, the DCT mode is adaptively selected based on the determination control described above, which keeps the determination result constant for the same block (so that the DCT mode is not inverted even when dubbing is repeated or an encoder and a decoder are cascaded). This makes it possible to reduce the amount of data that protrudes beyond the sync-block length, thereby improving image quality not only during normal playback but also during high-speed forward and reverse playback.
  • As described above, the image processing apparatus of this embodiment weights at least one of the frame correlation information and the field correlation information of a block so that the determination result is constant for the same block, and adopts a configuration in which whether frame DCT or field DCT encoding is to be performed is adaptively determined and controlled. As a result, inversion of the DCT mode for the same block can be prevented, and degradation of image quality can be reduced.
  • Likewise, the image processing method of this embodiment weights at least one of the frame correlation information and the field correlation information of a block so that the determination result is constant for the same block, and adaptively determines whether frame DCT or field DCT encoding is to be performed, so that inversion of the DCT mode for the same block is prevented and degradation of image quality is reduced.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The invention prevents inversion of the DCT mode for the same block and reduces image degradation. A frame DCT unit performs frame DCT on a block of the image signal, a field DCT unit performs field DCT on the block of the image signal, and a coding determination unit weights at least one of the block's frame correlation information and field correlation information. In this way, the determination result for the same block remains the same, and whether frame DCT or field DCT coding is to be performed is determined adaptively.
PCT/JP2002/004074 2001-04-25 2002-04-24 Dispositif et procede de traitement d'images Ceased WO2002089491A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2001127264A JP2005051276A (ja) 2001-04-25 2001-04-25 画像処理装置 (Image processing apparatus)
JP2001-127264 2001-04-25

Publications (1)

Publication Number Publication Date
WO2002089491A1 true WO2002089491A1 (fr) 2002-11-07

Family

ID=18976159

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2002/004074 Ceased WO2002089491A1 (fr) 2001-04-25 2002-04-24 Dispositif et procede de traitement d'images

Country Status (2)

Country Link
JP (1) JP2005051276A (fr)
WO (1) WO2002089491A1 (fr)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH04369192A (ja) * 1991-06-17 1992-12-21 Matsushita Electric Ind Co Ltd 画像符号化方法及び装置
JPH0595545A (ja) * 1991-07-30 1993-04-16 Sony Corp 画像信号の高能率符号化及び復号化装置
JPH06268993A (ja) * 1993-03-15 1994-09-22 Mitsubishi Electric Corp 飛越し走査動画像符号化装置
JPH06343171A (ja) * 1993-03-31 1994-12-13 Sony Corp 画像符号化方法及び装置

Also Published As

Publication number Publication date
JP2005051276A (ja) 2005-02-24

Similar Documents

Publication Publication Date Title
US6516034B2 (en) Stream processing apparatus and method
US5933567A (en) Method and apparatus for controlling the position of the heads of a digital video tape recorder during trick play operation and for recording digital data on a tape
US5377051A (en) Digital video recorder compatible receiver with trick play image enhancement
US5729649A (en) Methods and apparatus for recording data on a digital storage medium in a manner that facilitates the reading back of data during trick play operation
US5576902A (en) Method and apparatus directed to processing trick play video data to compensate for intentionally omitted data
JP4010066B2 (ja) 画像データ記録装置および記録方法、並びに画像データ記録再生装置および記録再生方法
US20030009722A1 (en) Stream processing apparatus
US7072568B2 (en) Recording apparatus, recording method, reproducing apparatus, and reproducing method
US7286715B2 (en) Quantization apparatus, quantization method, quantization program, and recording medium
US20030070040A1 (en) Data processing apparatus and data recording apparatus
KR100796885B1 (ko) 신호 프로세서
US20020071491A1 (en) Signal processor
WO2001041436A1 (fr) Dispositif et technique d'enregistrement, dispositif et technique de reproduction
US20040131116A1 (en) Image processing apparatus, image processing method, image processing program, and recording medium
KR100681992B1 (ko) 기록 장치 및 방법
WO2002089491A1 (fr) Dispositif et procede de traitement d'images
JP3167590B2 (ja) ディジタル記録再生装置
JP3572659B2 (ja) ディジタルビデオ信号の記録装置、再生装置、記録再生装置及び記録媒体
JP4038949B2 (ja) 再生装置および方法
US7260315B2 (en) Signal processing apparatus
JP2001169243A (ja) 記録装置および方法、ならびに、再生装置および方法
JP2000134110A (ja) データ伝送装置および伝送方法
JP2000312341A (ja) データ伝送装置および方法、記録装置、ならびに、記録再生装置
JP2000293435A (ja) データ再生装置および方法
JP2000123485A (ja) 記録装置および方法

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): JP US

NENP Non-entry into the national phase

Ref country code: JP