WO1998003020A1 - Device and method for processing image and device and method for encoding image - Google Patents
Device and method for processing image and device and method for encoding image
- Publication number
- WO1998003020A1 (PCT/JP1997/002481, JP9702481W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- data
- image
- circuit
- prediction
- pixel
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N11/00—Colour television systems
- H04N11/04—Colour television systems using pulse code modulation
- H04N11/042—Codec means
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4015—Image demosaicing, e.g. colour filter arrays [CFA] or Bayer patterns
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4023—Scaling of whole images or parts thereof, e.g. expanding or contracting based on decimating pixels or lines of pixels; based on inserting pixels or lines of pixels
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/587—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal sub-sampling or interpolation, e.g. decimation or subsequent interpolation of pictures in a video sequence
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/593—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/90—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
- H04N19/98—Adaptive-dynamic-range coding [ADRC]
Definitions
- Image processing apparatus and method and image encoding apparatus and method
- The present invention relates to an image processing apparatus and method and an image encoding apparatus and method, and in particular to an image processing apparatus and method and an image encoding apparatus and method capable of performing prediction efficiently and accurately. Background Art
- Conventionally, an image having a low spatial resolution has been converted to an image having a higher spatial resolution and displayed.
- In that case, additional pixel data is interpolated (generated) from the pixel data having the lower spatial resolution.
- When the pixel data having a low spatial resolution is composed of, for example, R, G, and B component signals, interpolation processing has conventionally had to be performed independently for each component signal.
- That is, pixel data of R having a high spatial resolution is generated from pixel data of R having a low spatial resolution;
- pixel data of G having a high spatial resolution is generated from pixel data of G having a low spatial resolution;
- and pixel data of B having a high spatial resolution is generated from pixel data of B having a low spatial resolution.
- The present invention has been made in view of such circumstances, and aims to make it possible to perform prediction more efficiently and more accurately.
- The image processing apparatus according to the present invention comprises acquisition means for acquiring first pixel data of a first image, and prediction means which, using at least a first component signal and a second component signal of the component signals constituting the first pixel data, predicts the first component signal constituting second pixel data of a second image, and which, using at least the first component signal and the second component signal of the component signals constituting the first pixel data, predicts the second component signal constituting the second pixel data of the second image.
- The image processing method according to the present invention comprises an acquiring step of acquiring first pixel data of a first image, and a predicting step of predicting, using at least a first component signal and a second component signal of the component signals constituting the first pixel data, the first component signal constituting second pixel data of a second image, and of predicting, using at least the first component signal and the second component signal, the second component signal constituting the second pixel data of the second image.
- The image encoding apparatus according to the present invention comprises compression means for compressing image data by reducing a plurality of pixel data represented by vectors in a color space, classification means for classifying the compressed pixel data into classes,
- storage means for storing prediction data including pixel data represented by vectors in the color space corresponding to each class, and prediction means for predicting an image using the prediction data.
- The image encoding method according to the present invention comprises a compression step of compressing by reducing a plurality of pixel data represented by vectors in a color space, a classification step of classifying the compressed pixel data into classes, a storage step of storing prediction data including pixel data represented by vectors in the color space corresponding to each class, and a prediction step of predicting an image using the prediction data.
- That is, one component signal of the second image having the higher spatial resolution is generated from a plurality of component signals of the first image having the lower spatial resolution.
- an image is predicted using prediction data including pixel data represented by a vector in a color space.
- FIG. 1 is a block diagram showing a configuration example of a system to which the image processing device of the present invention is applied.
- FIG. 2 is a diagram illustrating the operation of the sub-sampling circuit of FIG.
- FIG. 3 is a diagram illustrating pixel data in the embodiment of FIG.
- FIG. 4 is a block diagram showing a configuration example of an apparatus for generating the storage contents of ROM 218 in FIG.
- FIG. 5 is a block diagram showing another configuration example of the transmitting device 1 of FIG.
- FIG. 6 is a block diagram showing a functional configuration example of the transmission device 1 of FIG. 5.
- FIG. 7 is a flowchart for explaining the operation of the transmitting apparatus 1 of FIG.
- FIG. 8 is a block diagram showing a configuration example of the compression section 21 of FIG.
- FIG. 9 is a flowchart for explaining the operation of the compression section 21 of FIG.
- FIG. 10 is a block diagram showing a configuration example of the local decoding unit 22 of FIG.
- FIG. 11 is a diagram for explaining the class classification process.
- FIG. 12 is a diagram for explaining the ADRC process.
- FIG. 13 is a flowchart for explaining the operation of the local decoding unit 22 in FIG.
- FIG. 14 is a block diagram illustrating a configuration example of the error calculator 23 of FIG.
- FIG. 15 is a flowchart for explaining the operation of the error calculator 23 of FIG.
- FIG. 16 is a block diagram illustrating a configuration example of the determination unit 24 of FIG.
- FIG. 17 is a flowchart for explaining the operation of the determination section 24 of FIG.
- FIG. 18 is a block diagram showing still another configuration example of the receiving device 4 of FIG.
- FIG. 19 is a block diagram showing another example of the configuration of the local decoding unit 22 of FIG. 6.
- FIG. 20 is a block diagram illustrating a configuration of an embodiment of an image processing apparatus that calculates a prediction coefficient stored in the prediction coefficient ROM 81 of FIG.
- FIG. 21 is a block diagram showing another configuration example of the transmission device 1 of FIG.
- FIG. 22 is a flowchart for explaining the operation of the transmitting apparatus of FIG.
- FIG. 23 is a block diagram illustrating a configuration of a first embodiment of an image processing apparatus that performs learning for obtaining mapping coefficients.
- FIG. 24 is a flowchart for explaining the operation of the image processing device shown in FIG.
- FIG. 25 is a block diagram illustrating a configuration example of the local decoding unit 127 of FIG.
- FIG. 26 is a flow chart for explaining the processing of the local decoding unit 127 in FIG.
- FIG. 27 is a block diagram illustrating a configuration of a second embodiment of the image processing apparatus that performs learning for obtaining mapping coefficients.
- FIG. 28 is a flowchart for explaining the operation of the image processing apparatus of FIG.
- FIG. 29 is a block diagram illustrating another configuration example of the receiving apparatus 4 in FIG. 1. BEST MODE FOR CARRYING OUT THE INVENTION
- That is, the image processing device includes acquisition means (for example, the decoder 213 in FIG. 1) configured to acquire the first pixel data of the first image, and prediction means (for example, the data generation circuit 219 in FIG. 1) which, using at least the first component signal and the second component signal of the component signals constituting the first pixel data, predicts the first component signal constituting the second pixel data of the second image, and which, using at least the first component signal and the second component signal of the component signals constituting the first pixel data, predicts the second component signal constituting the second pixel data of the second image.
- the image encoding device includes a compression unit (for example, a decimation circuit 31 in FIG. 8) that compresses by reducing a plurality of pixel data represented by vectors on the color space.
- classification means (for example, the class classification circuit 45 in FIG. 10) for classifying the compressed pixel data into classes,
- storage means (for example, the prediction coefficient ROM 81 in FIG. 19) for storing prediction data including pixel data represented by vectors in the color space corresponding to each class,
- and prediction means (for example, the prediction circuit 82 in FIG. 19) for predicting an image using the prediction data.
- FIG. 1 shows a configuration example of a system in which image data is decimated from a transmission side and transmitted, and a reception side generates and reproduces decimated pixels.
- The digital video data to be transmitted is input from the input terminal 201 of the transmitting device 1 to the sub-sampling circuit 202, where every other pixel data is thinned out in the horizontal direction, so that the amount of data to be transmitted is halved.
- The encoder 203 highly efficiently encodes the data supplied from the sub-sampling circuit 202 using, for example, orthogonal transform coding such as DCT (Discrete Cosine Transform), or ADRC (Adaptive Dynamic Range Coding), so that the data amount is further reduced.
- The transmission processing circuit 204 performs processing such as error correction coding, framing, and channel coding on the output of the encoder 203, and outputs the result from the output terminal 205 to the transmission path 3, or records it on a recording medium 2 such as an optical disk or a magnetic disk.
- The decoder 213 is configured to perform a decoding process corresponding to the encoder 203 on the transmitting device 1 side.
- the output of the decoder 213 is supplied to the synchronization circuit 215 and the synthesizing circuit 214.
- The synchronization circuit 215 adjusts the timing of the output of the decoder 213 so that the pixel data to be processed is generated at the same timing, and outputs the adjusted data to the ADRC processing circuit 216 and the data generation circuit 219.
- the ADRC processing circuit 216 performs ADRC processing on the data supplied from the synchronization circuit 215 with one bit, and outputs the processing result to the class classification circuit 217.
- The class classification circuit 217 performs class classification corresponding to the data supplied from the ADRC processing circuit 216, and outputs a signal indicating the classified class to a ROM (Read Only Memory) 218 as an address.
- the ROM 218 reads out the coefficient data stored in the address corresponding to the class supplied from the class classification circuit 217 and outputs it to the data generation circuit 219.
- The data generation circuit 219 multiplies the data supplied from the synchronization circuit 215 by the coefficient data supplied from the ROM 218 to generate new pixel data, and outputs it to the synthesizing circuit 214.
- The synthesizing circuit 214 synthesizes the original pixel data supplied from the decoder 213 and the pixel data generated by the data generation circuit 219, and outputs the synthesized data from the output terminal 220 to, for example, a CRT for display.
- the digital image data input from the input terminal 201 is decimated in the sub-sampling circuit 202 every other one in the horizontal direction, for example, as shown in FIG.
- In FIG. 2, the symbol ○ represents the pixel data that remains after decimation,
- and the symbol × represents the pixel data that is decimated and not transmitted. This halves the pixel data to be transmitted.
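The horizontal thinning performed by the sub-sampling circuit 202 can be sketched as follows. This is an illustrative sketch in Python; the array shape and the function name are assumptions for illustration, not part of the patent:

```python
import numpy as np

def subsample_horizontal(frame):
    """Keep every other pixel in the horizontal direction, so only the
    pixels marked with a circle in FIG. 2 remain for transmission."""
    return frame[:, ::2]

# Hypothetical 4x8 single-component frame for illustration.
frame = np.arange(32).reshape(4, 8)
kept = subsample_horizontal(frame)
assert kept.shape == (4, 4)      # half the pixels per line remain
```

The decimated pixels (marked ×) are what the receiving side must later regenerate by prediction.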
- The predetermined processing is performed by the transmission processing circuit 204, and the data is transmitted from the output terminal 205 to the transmission path 3 or the recording medium 2.
- The reception processing circuit 212 receives the transmission data from the transmission path 3 or the recording medium 2 via the input terminal 211, and outputs the data to the decoder 213.
- The decoder 213 decodes the input data and outputs the pixel data obtained as a result of the decoding (the pixel data indicated by ○ in FIG. 2) to the synthesizing circuit 214 and the synchronization circuit 215.
- The synchronization circuit 215 applies a predetermined delay so that the pixel data to be processed is generated at the same timing.
- For example, the pixel data X1 to X4 located above (X1), to the left (X2), to the right (X3), and below (X4) of the decimated pixel Y1 shown in FIG. 2
- are supplied at the same timing to the ADRC processing circuit 216 and the data generation circuit 219.
- The ADRC processing circuit 216 executes ADRC processing on one block composed of the four input pixel data X1 to X4.
- Each pixel data X is composed of a vector (XR, XG, XB) in a color space defined by R, G, and B components, where XR, XG, and XB respectively represent the R, G, and B components of the pixel data X.
- Each component is represented by 8 bits.
- The ADRC processing circuit 216 performs 1-bit ADRC processing: for example, the R component XR1 of the pixel data X1 is represented by 1 bit, the G component XG1 by 1 bit, and the B component XB1 by 1 bit. That is, pixel data originally represented by 24 bits is converted to 3 bits.
- The other pixel data X2 to X4 are similarly converted to 3-bit pixel data, and the pixel data (X1, X2, X3, X4), each represented by 3 bits, are supplied to the class classification circuit 217.
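A minimal sketch of this 1-bit ADRC stage, assuming the dynamic range is taken per component plane over the four-pixel block (the patent's circuit may differ in detail); four pixels at 1 bit per R, G, B component yield the 12-bit class code used as the ROM address:

```python
def adrc_1bit(block):
    """1-bit ADRC over a block of component values: re-quantize each
    value to 0/1 depending on whether it lies above the midpoint of
    the block's dynamic range (a sketch of the circuit's behaviour)."""
    lo, hi = min(block), max(block)
    mid = (lo + hi) / 2.0
    return [1 if v > mid else 0 for v in block]

def class_code(pixels):
    """pixels: four (R, G, B) tuples X1..X4 -> 12-bit class code,
    one bit per component, packed in R-plane, G-plane, B-plane order
    (the packing order is an assumption for illustration)."""
    bits = []
    for c in range(3):                      # R, G, B planes
        bits.extend(adrc_1bit([p[c] for p in pixels]))
    code = 0
    for b in bits:
        code = (code << 1) | b
    return code

pixels = [(200, 40, 90), (180, 60, 100), (30, 200, 10), (25, 220, 5)]
code = class_code(pixels)
assert 0 <= code < 4096        # 12-bit class -> 4096 possible classes
```

Each of the 4096 classes then selects one set of prediction coefficients in the ROM 218.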
- The ROM 218 stores a prediction coefficient w for each class.
- When a signal representing a predetermined class is supplied from the class classification circuit 217, it serves as the address corresponding to that class,
- and the prediction coefficient w stored at that address is read and supplied to the data generation circuit 219.
- Using the prediction coefficients w supplied from the ROM 218 and the pixel data X1 to X4 supplied from the synchronization circuit 215, the data generation circuit 219 performs the operation shown in the following equation to generate the pixel data Y1 shown in FIG. 2.
- wi(R), wi(G), and wi(B) represent the prediction coefficients for R, G, and B, respectively.
- The G component YG1 and the B component YB1 of the pixel data Y1 are likewise generated using not only the corresponding components of the pixel data X1 to X4, but all of the components XR1 to XR4, XG1 to XG4, and XB1 to XB4.
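The equation itself did not survive reproduction here. Under the reading that each component of Y1 is a linear combination of all twelve surrounding components, the operation of the data generation circuit 219 can be sketched as follows (the coefficient layout and function name are assumptions for illustration):

```python
def predict_component(ws, pixels):
    """ws: 12 prediction coefficients, laid out as
    (w1(R), w1(G), w1(B), ..., w4(R), w4(G), w4(B));
    pixels: the four (R, G, B) vectors X1..X4.
    Returns one predicted component (e.g. YR1) of the pixel Y1
    as a linear combination of all twelve input components."""
    total = 0.0
    for i, (xr, xg, xb) in enumerate(pixels):
        wr, wg, wb = ws[3 * i], ws[3 * i + 1], ws[3 * i + 2]
        total += wr * xr + wg * xg + wb * xb
    return total

# With coefficients that simply average the four R components,
# the predicted R component is the mean of XR1..XR4.
ws_avg_r = [0.25, 0.0, 0.0] * 4
pixels = [(96, 1, 2), (104, 3, 4), (100, 5, 6), (100, 7, 8)]
assert predict_component(ws_avg_r, pixels) == 100.0
```

A separate set of twelve coefficients would be applied for the G and B output components, all selected by the same class code.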
- Images, especially natural images such as those captured with a television camera, have correlation, and pixels closer to each other have a stronger correlation. Therefore, when new pixel data is generated by calculation, it can be generated more efficiently and more accurately from nearby pixel data.
- When the R component of the target pixel (and likewise the G component and the B component) is generated using the R, G, and B components of each pixel, as in the present embodiment,
- the required number of data can be obtained from pixels at closer positions. Therefore, highly accurate pixel data can be generated more efficiently.
- The synthesizing circuit 214 combines the new pixel data Y generated by the data generation circuit 219 and the originally existing pixel data X supplied from the decoder 213 as described above, and outputs the result from the output terminal 220. Therefore, the pixel data output from the output terminal 220 forms an image having a higher spatial resolution than the image composed of the pixel data X received by the reception processing circuit 212 (the same resolution as the image before being sub-sampled by the sub-sampling circuit 202 in FIG. 1).
- The ROM 218 stores the prediction coefficients w of the above equation.
- the table of the prediction coefficient w can be obtained from, for example, the apparatus shown in FIG.
- a digital video signal is input from the input terminal 230 and is supplied to the synchronization circuit 231.
- The digital video signal input to the input terminal 230 is preferably a standard signal required for creating the table (hence, a signal of a high-resolution image before being thinned out).
- For example, a signal consisting of standard still images can be employed.
- The synchronization circuit 231 performs timing adjustment so that the pixel data Y1 and X1 to X4 shown in FIG. 2 are output simultaneously.
- the pixel data output from the synchronization circuit 231 is supplied to the subsampling circuit 232 and the data memory 237.
- The sub-sampling circuit 232 extracts the pixel data X1 to X4 shown in FIG. 2 from the input high-resolution image signal and supplies them to the ADRC processing circuit 233.
- the ADRC processing circuit 233 subjects the input pixel data to ADRC processing with one bit and outputs the result to the classification circuit 234.
- The class classification circuit 234 classifies the data input from the ADRC processing circuit 233 into a class, and supplies a signal corresponding to the classified class to the data memory 237 via the contact A of the switch 235 as an address. That is, the sub-sampling circuit 232, the ADRC processing circuit 233, and the class classification circuit 234 perform processing corresponding to that of the synchronization circuit 215, the ADRC processing circuit 216, and the class classification circuit 217 in FIG. 1.
- the counter 236 counts a clock CK supplied from a circuit (not shown), and supplies the count value as an address to the data memory 237 via the contact C of the switch 235.
- When an address is supplied from the class classification circuit 234 via the switch 235, the data memory 237 writes the data supplied from the synchronization circuit 231 to that address; when an address is supplied from the counter 236 via the switch 235,
- the data stored at that address is read and output to the least-squares arithmetic circuit 238.
- The least-squares arithmetic circuit 238 performs an operation based on the least-squares method on the pixel data supplied from the data memory 237, calculates the prediction coefficients wi, and outputs them to the memory 239.
- the memory 239 is configured to write the prediction coefficient wi supplied from the least squares operation circuit 238 to the address supplied from the counter 236 via the switch 235. Next, the operation will be described.
- Digital video data for learning, used to determine the prediction coefficients, is synchronized in the synchronization circuit 231 and thinned out in the sub-sampling circuit 232 to extract X1 to X4 of FIG. 2.
- After 1-bit ADRC processing is performed in the ADRC processing circuit 233, the data is input to the class classification circuit 234 and classified.
- Since the ADRC processing circuit 233 processes the data so that each of the R, G, and B components is 1 bit, 12-bit class data is supplied from the class classification circuit 234 to the data memory 237 via the contact A of the switch 235 as an address.
- The data memory 237 stores the pixel data supplied from the synchronization circuit 231 at this address.
- The pixel data to be stored is pixel data of the image having the higher spatial resolution, before sub-sampling by the sub-sampling circuit 202 in FIG. 1. Therefore, not only the pixel data Xi indicated by the mark ○ in FIG. 2 but also the pixel data Yi indicated by the mark × are stored.
- the data memory 237 stores at least the necessary number of pixel data to solve the simultaneous equations.
- The switch 235 is then switched to the contact C side. Since the counter 236 counts the clock CK and outputs the count value, an address incremented by 1 at a time is input to the data memory 237 as a read address.
- the data memory 237 reads the pixel data corresponding to the input read address and outputs the pixel data to the least squares arithmetic circuit 238.
- The least-squares arithmetic circuit 238 applies a specific algorithm to the above equation: it generates simultaneous equations with the prediction coefficients wi as variables, solves them, and obtains the prediction coefficients wi.
- That is, for predetermined pixel data (for example, the R component YR1 of the pixel data Y1 described above),
- the error between the value of YR1 obtained by the calculation (prediction) and the actual pixel data YR1 is calculated,
- and the prediction coefficients wi are calculated so that the square of the error is minimized.
- the prediction coefficient wi obtained by the calculation is written to the address of the memory 239 corresponding to the address of the pixel data currently read from the data memory 237.
- the prediction coefficient wi is stored in the memory 239.
- The stored contents are written into the ROM 218 shown in FIG. 1.
- In the above, the prediction coefficients wi are written into the ROM 218 (memory 239).
- However, the data itself after multiplication by the coefficients may be written instead. In that case, the data generation circuit 219 in FIG. 1 becomes unnecessary.
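The least-squares step carried out by the arithmetic circuit 238 for one class can be sketched as follows. The use of a general least-squares solver and the synthetic training data are assumptions for illustration; the circuit itself solves the normal equations for each class:

```python
import numpy as np

def learn_coefficients(X, y):
    """X: rows of 12 surrounding-component values (XR1..XB4) gathered
    per class from the data memory; y: the corresponding true component
    (e.g. YR1) from the high-resolution image. Returns the 12
    prediction coefficients wi that minimize the squared prediction
    error, i.e. the solution of the simultaneous equations."""
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

# Hypothetical training data: y is an exact linear function of X,
# so least squares should recover the underlying coefficients.
rng = np.random.default_rng(0)
X = rng.random((100, 12))
true_w = np.arange(12, dtype=float)
y = X @ true_w
w = learn_coefficients(X, y)
assert np.allclose(w, true_w)
```

Running this once per class yields the per-class coefficient table that is written into the ROM 218.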
- FIG. 5 shows another configuration example of the transmission device 1.
- The I/F (Interface) 11 performs processing for receiving image data supplied from the outside and processing for transmitting encoded data to the transmitter/recording device 16.
- the ROM (Read Only Memory) 12 stores a program for IPL (Initial Program Loading) and others.
- A RAM (Random Access Memory) 13 temporarily stores the system program (OS (Operating System)) and application programs recorded in the external storage device 15.
- The CPU 14 expands the system program and the application program from the external storage device 15 into the RAM 13, and executes the application program under the control of the system program. In this way, an encoding process as described later is performed on the image data supplied from the I/F 11.
- The external storage device 15 is, for example, a magnetic disk device, and stores the system program and application programs to be executed by the CPU 14 as described above, as well as data necessary for the operation of the CPU 14.
- The transmitter/recording device 16 records the encoded data supplied from the I/F 11 on the recording medium 2, or transmits the encoded data via the transmission path 3.
- the I / Fll, ROM 12, RAMI 3, CPU 14 and external storage device 15 are mutually connected via a bus.
- In the transmission device 1 configured as described above, when image data is supplied to the I/F 11, the image data is supplied to the CPU 14.
- The CPU 14 encodes the image data and supplies the resulting encoded data to the I/F 11.
- The I/F 11 supplies the encoded data to the transmitter/recording device 16.
- The encoded data from the I/F 11 is recorded on the recording medium 2 or transmitted via the transmission path 3.
- FIG. 6 is a functional block diagram of the transmitting device 1 of FIG. 5, excluding the transmitter/recording device 16.
- the image data to be encoded is supplied to a compression unit 21, a local decoding unit 22, and an error calculation unit 23.
- The compression unit 21 compresses the image data by simply thinning out pixels, and corrects
- the resulting compressed data (the image data after thinning) according to control from the determination unit 24.
- The correction data obtained as a result of the correction in the compression unit 21 is supplied to the local decoding unit 22 and the determination unit 24.
- The local decoding unit 22 predicts the original image based on the correction data from the compression unit 21, and supplies the predicted values to the error calculation unit 23.
- As described later, the local decoding unit 22 performs adaptive processing for obtaining prediction coefficients for calculating predicted values by linear combination with the correction data, and performs prediction based on those prediction coefficients.
- The predicted values are supplied to the error calculation unit 23, and the prediction coefficients obtained at that time are supplied to the determination unit 24.
- the error calculation unit 23 calculates a prediction error of a prediction value from the local decoding unit 22 with respect to the original image data (original image) input thereto. This prediction error is supplied to the determination unit 24 as error information.
- The determination unit 24 determines, based on the error information from the error calculation unit 23, whether
- the correction data output from the compression unit 21 is appropriate as an encoding result of the original image.
- When the determination unit 24 determines that the correction data output from the compression unit 21 is not appropriate as the encoding result of the original image, it controls the compression unit 21 to further correct the compressed data, and the resulting new correction data is output.
- When the determination unit 24 determines that the correction data output from the compression unit 21 is appropriate as the encoding result of the original image,
- it supplies the correction data supplied from the compression unit 21
- to the multiplexing unit 25 as optimal compressed data (hereinafter referred to as optimal compressed data as appropriate), and supplies the prediction coefficients supplied from the local decoding unit 22 to the multiplexing unit 25.
- The multiplexing unit 25 multiplexes the optimal compressed data (correction data) from the determination unit 24 with the prediction coefficients, and outputs the multiplexed result as encoded data to the transmitter/recording device 16 (FIG. 5).
- The compression unit 21 compresses the image by thinning out the image data in step S1, and at first outputs the resulting compressed data, without correction, to the local decoding unit 22 and the determination unit 24.
- The local decoding unit 22 locally decodes the correction data from the compression unit 21 (at first, as described above, the compressed data itself obtained by simply thinning out the image data). That is, in step S2, adaptive processing is performed to obtain prediction coefficients for calculating predicted values of the original image by linear combination with the correction data from the compression unit 21, and predicted values are obtained based on those prediction coefficients.
- the prediction value obtained by the local decoding unit 22 is supplied to the error calculation unit 23, and the prediction coefficient is supplied to the determination unit 24.
- the image composed of the predicted values output from the local decoding unit 22 is the same as the decoded image obtained on the receiving device 4 side.
- Upon receiving the predicted values of the original image from the local decoding unit 22, the error calculation unit 23 calculates, in step S3, the prediction error of the predicted values with respect to the original image data, and supplies it to the determination unit 24 as error information. Upon receiving the error information from the error calculation unit 23, the determination unit 24 determines in step S4, based on the error information, whether the correction data output from the compression unit 21 is appropriate as the encoding result of the original image.
- That is, in step S4, it is determined whether the error information is equal to or less than a predetermined threshold ε. If it is determined in step S4 that the error information is not equal to or less than the threshold ε, it is recognized that the correction data output from the compression unit 21 is not appropriate as the encoded data of the original image. The processing then proceeds to step S5, where the determination unit 24 controls the compression unit 21 to correct the compressed data.
- The compression unit 21 corrects the compressed data by changing the correction amount (the correction value Δ described later) according to the control of the determination unit 24, and outputs the resulting correction data to the local decoding unit 22 and the determination unit 24. The processing then returns to step S2, and the same processing is repeated.
- On the other hand, when it is determined in step S4 that the error information is equal to or less than the predetermined threshold ε, it is recognized that the correction data output from the compression unit 21 is appropriate as the encoding result of the original image. The determination unit 24 then outputs the correction data obtained when error information equal to or less than the threshold ε was obtained, together with the prediction coefficients, to the multiplexing unit 25 as the optimal compressed data. In step S6, the multiplexing unit 25 multiplexes the optimal compressed data and the prediction coefficients from the determination unit 24, outputs the resulting encoded data, and ends the process.
- the correction data obtained by correcting the compressed data is used as the encoding result of the original image.
- an image almost identical to the original image can be obtained.
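The control loop of steps S1 to S6 can be sketched in Python as follows. This is an illustrative sketch only, not the patent's implementation: the 1/2 thinning, the naive repetition predictor standing in for the local decoding unit 22, and the step-size update rule are all simplifying assumptions.

```python
def encode_frame(original, threshold=1.0, max_iters=50):
    """Hypothetical sketch of the encoding loop (steps S1-S6):
    thin the data, then adjust a correction value Delta until the
    squared prediction error falls below the threshold."""
    # Step S1: "compression" by thinning to 1/2 (keep every other sample).
    compressed = original[::2]
    delta, step = 0.0, 1.0
    prev_err = None
    for _ in range(max_iters):
        corrected = [c + delta for c in compressed]       # correction circuit 32
        # Local decoding stand-in: repeat each corrected sample twice.
        predicted = [v for v in corrected for _ in (0, 1)][:len(original)]
        err = sum((y - p) ** 2 for y, p in zip(original, predicted))
        if err <= threshold:                              # judgment unit 24: adequate
            return corrected, err                         # optimal compressed data
        if prev_err is not None and err > prev_err:
            step = -step / 2                              # reverse direction (step S45)
        prev_err = err
        delta += step                                     # change the correction value
    return corrected, err

corrected, err = encode_frame([10.0, 10.0, 12.0, 12.0, 14.0, 14.0], threshold=0.5)
```

On this toy frame the repetition predictor already reconstructs the original exactly, so the loop terminates on the first iteration with the thinned samples unchanged.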
- FIG. 8 shows a configuration example of the compression unit 21 of FIG.
- the image data to be encoded is input to a decimation circuit 31.
- the decimation circuit 31 thins out the input image data to 1/N (in this case, 1/2). Therefore, the decimation circuit 31 outputs compressed data obtained by compressing the image data to 1/N. This compressed data is supplied from the decimation circuit 31 to the correction circuit 32.
- the correction circuit 32 gives an address to the correction value ROM 33 in accordance with the control signal from the judgment unit 24 (FIG. 6), and thereby reads out a correction value Δ. Then, the correction circuit 32 generates correction data by, for example, adding the correction value Δ from the correction value ROM 33 to the compressed data from the thinning circuit 31, and supplies the correction data to the local decoding unit 22 and the judgment unit 24.
- the correction value ROM 33 stores combinations of various correction values Δ for correcting the compressed data output from the thinning circuit 31 (for example, combinations of correction values for correcting one frame of compressed data). The combination of correction values Δ corresponding to the address supplied from the correction circuit 32 is read out and supplied to the correction circuit 32.
- the thinning circuit 31 thins out the image data to 1/N in step S11.
- the resulting compressed data is output to the correction circuit 32.
- the thinning circuit 31 thins out the image data, for example, to 1/2 for each line.
- the thinning circuit 31 performs this processing, for example, in units of one frame (field). Accordingly, the image data of one frame is supplied from the thinning circuit 31 to the correction circuit 32 as compressed data thinned to 1/2.
- the decimation processing in the decimation circuit 31 can also be performed by dividing an image of one frame into several blocks and performing the decimation in block units.
- Upon receiving the compressed data from the thinning circuit 31, the correction circuit 32 determines in step S12 whether a control signal has been received from the judgment unit 24 (FIG. 6). If it is determined in step S12 that the control signal has not been received, the process proceeds to step S15, and the correction circuit 32 outputs the compressed data from the thinning circuit 31 as it is, as correction data, to the local decoding unit 22 and the judgment unit 24, and returns to step S12. That is, as described above, the judgment unit 24 controls the compression unit 21 (correction circuit 32) based on the error information; immediately after the compressed data is output from the thinning circuit 31, no error information has yet been obtained, so no control signal is output from the judgment unit 24. Therefore, immediately after the compressed data is output from the thinning circuit 31, the correction circuit 32 does not correct the compressed data (more precisely, corrects it by adding 0) and outputs it as it is, as the correction data, to the local decoding unit 22 and the judgment unit 24.
- On the other hand, if it is determined in step S12 that the control signal has been received, in step S13 the correction circuit 32 outputs an address corresponding to that control signal to the correction value ROM 33.
- As a result, the combination (set) of correction values Δ for correcting one frame of compressed data stored at that address is read from the correction value ROM 33 and supplied to the correction circuit 32.
- Upon receiving the combination of correction values Δ from the correction value ROM 33, the correction circuit 32 adds the corresponding correction value Δ to each item of the one frame of compressed data in step S14, thereby calculating the correction data obtained by correcting the compressed data. Thereafter, the process proceeds to step S15, in which the correction data is output from the correction circuit 32 to the local decoding unit 22 and the judgment unit 24, and the process returns to step S12.
- In this manner, the compression unit 21 repeatedly outputs correction data obtained by correcting the compressed data with various values. When the encoding of an image of one frame is completed, the judgment unit 24 supplies a control signal indicating this to the compression unit 21; upon receiving this control signal, the compression unit 21 performs the processing according to the flowchart of FIG. 9 on the image of the next frame.
- the decimation circuit 31 generates the compressed data by extracting pixel data (pixel values) at a rate of one pixel per two pixels. It is also possible, for example, to calculate the average value of 3 × 3 pixels and use it as the pixel value of the pixel at the center of the 3 × 3 pixels to generate the compressed data.
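The 3 × 3 averaging alternative can be sketched as follows; this is an illustrative sketch only, with the assumption of pure-Python row lists and a frame whose dimensions are multiples of the block size.

```python
def decimate_mean(image, k=3):
    """Hypothetical alternative decimation: replace each k x k block of
    pixel values by its average, used as the value of the centre pixel."""
    h, w = len(image), len(image[0])
    out = []
    for i in range(0, h - h % k, k):
        row = []
        for j in range(0, w - w % k, k):
            # Gather the k x k block around position (i, j) and average it.
            block = [image[i + di][j + dj] for di in range(k) for dj in range(k)]
            row.append(sum(block) / (k * k))
        out.append(row)
    return out

img = [[float(r * 6 + c) for c in range(6)] for r in range(6)]
small = decimate_mean(img)   # a 6 x 6 frame becomes 2 x 2 compressed data
```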
- FIG. 10 shows a configuration example of the local decoding unit 22 of FIG.
- the correction data from the compression unit 21 is supplied to the class classification blocking circuit 41 and the predicted value calculation blocking circuit 42.
- the class classification block circuit 41 is configured to block the corrected data into a class classification block, which is a unit for classifying the corrected data into a predetermined class according to its properties.
- the class classification blocking circuit 41 is configured to form a class classification block including the four pixels X1, X2, X3, and X4 shown in FIG.
- the classification block is supplied to the classification adaptive processing circuit 43.
- In the above case, the class classification block is composed of a cross-shaped block of four pixels, but the shape of the class classification block need not be cross-shaped and may be, for example, rectangular, square, or any other shape. Also, the number of pixels constituting the class classification block is not limited to four.
- the prediction value calculation blocking circuit 42 blocks the correction data into a prediction value calculation block which is a unit for calculating a prediction value of an original image.
- the prediction value calculation block is the same as the class classification block, and is constituted by the pixel data X1 to X4 in FIG.
- Since the prediction value calculation blocking circuit 42 blocks the same range as the class classification blocking circuit 41, the two circuits may be shared.
- the prediction value calculation block obtained in the prediction value calculation block circuit 42 is supplied to the class classification adaptive processing circuit 43.
- the number of pixels and the shape of the prediction value calculation block are not limited to those described above, as in the case of the class classification block. However, it is desirable that the number of pixels constituting the prediction value calculation block be equal to or larger than the number of pixels constituting the class classification block.
- In performing the above-described blocking, there are cases where no corresponding pixel exists near the frame of the image. In such cases, the processing is performed assuming, for example, that the same pixels as those constituting the image frame exist outside it.
- the class classification adaptive processing circuit 43 comprises an ADRC (Adaptive Dynamic Range Coding) processing circuit 44, a class classification circuit 45, and an adaptive processing circuit 46, and performs class classification adaptive processing.
- Class classification adaptive processing classifies an input signal into several classes based on its characteristics and applies, to the input signal of each class, the adaptive processing appropriate for that class; it is thus composed of class classification processing and adaptive processing.
- For example, suppose that a certain pixel of interest and three pixels adjacent to it form a 2 × 2 pixel block (class classification block), and that each pixel is expressed by 1 bit (takes either a 0 or a 1 level). In this case, the four-pixel block can be classified into (2^1)^4 = 16 patterns according to the level distribution of its pixels. Such pattern division is the class classification processing, and it is performed in the class classification circuit 45.
- the class classification processing can also be performed in consideration of, for example, the activity (the complexity or degree of change) of the image (the image within the block).
- Usually, for example, about 8 bits are assigned to each pixel.
- Suppose, for example, that the class classification block is composed of 3 × 3 = 9 pixels. Then, if class classification processing were performed on such a class classification block as it is, it would be classified into an enormous number of classes, (2^8)^9.
- In view of this, in this embodiment, the ADRC processing circuit 44 performs ADRC processing on the class classification block, reducing the number of classes by reducing the number of bits of the pixels that make up the class classification block.
- That is, in the ADRC processing, the maximum value MAX and the minimum value MIN of the pixel values in the block are detected, DR = MAX − MIN is set as the local dynamic range of the block, and based on this dynamic range DR, the pixel values of the pixels constituting the block are requantized to K bits. That is, the minimum value MIN is subtracted from each pixel value in the block, and the subtracted value is divided by DR/2^K. Each pixel value is then converted to the code (ADRC code) corresponding to the resulting quotient.
- Specifically, for example when K = 2, each pixel value is converted to a 2-bit ADRC code indicating which of the four ranges, obtained by dividing the dynamic range DR into four equal parts, the pixel value belongs to. On the decoding side, the ADRC code 00B, 01B, 10B, or 11B is converted to the center value L00 of the lowest level range obtained by dividing the dynamic range DR into four equal parts, the center value L01 of the second level range from the bottom, the center value L10 of the third level range from the bottom, or the center value L11 of the highest level range, respectively, and decoding is performed by adding the minimum value MIN to that value.
- Here, such ADRC processing is called non-edge matching. In contrast, there is also ADRC processing in which the ADRC code 00B or 11B is converted to the average value MIN' of the pixel values belonging to the lowest level range or the average value MAX' of the pixel values belonging to the highest level range, and the ADRC codes 01B and 10B are converted to levels that divide the dynamic range DR', specified by MAX' − MIN', into three equal parts, whereby the ADRC code is decoded. Such ADRC processing is called edge matching.
- ADRC process The details of the ADRC process are disclosed in, for example, Japanese Patent Application Laid-Open No. 3-53778 filed by the present applicant.
- As described above, the number of classes can be reduced by performing ADRC processing, which performs requantization with a smaller number of bits than the number of bits assigned to the pixels constituting the block; such ADRC processing is performed in the ADRC processing circuit 44.
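The K-bit requantization and the non-edge-matching decoding described above can be sketched as follows. This is an illustrative sketch, not the patent's circuit: clamping the top code to 2^K − 1 and the handling of a zero dynamic range are implementation assumptions.

```python
def adrc_encode(block, k=1):
    """Hypothetical sketch of K-bit ADRC: requantize each pixel of a
    block using the block's local dynamic range DR = MAX - MIN."""
    mn, mx = min(block), max(block)
    dr = mx - mn
    levels = 1 << k                       # 2**K quantization levels
    if dr == 0:
        return [0] * len(block), mn, dr   # flat block: all codes zero
    # Subtract MIN, divide by DR / 2**K, clamp the maximum to the top code.
    codes = [min(int((p - mn) * levels / dr), levels - 1) for p in block]
    return codes, mn, dr

def adrc_decode(codes, mn, dr, k=1):
    """Non-edge-matching decoding: map each code to the centre value of
    its level range and add back the minimum value MIN."""
    levels = 1 << k
    if dr == 0:
        return [mn] * len(codes)
    return [mn + dr * (c + 0.5) / levels for c in codes]

codes, mn, dr = adrc_encode([100, 140, 180, 220], k=1)
decoded = adrc_decode(codes, mn, dr, k=1)
```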
- In this embodiment, the class classification circuit 45 performs the class classification processing based on the ADRC code output from the ADRC processing circuit 44, but the class classification processing may also be performed on data that has been subjected to, for example, BTC (Block Truncation Coding), VQ (Vector Quantization), DCT (Discrete Cosine Transform), or Hadamard transform processing.
- Next, the adaptive processing will be described. Consider obtaining the predicted value E[y] of the pixel value y of a pixel of the original image by a linear first-order combination model defined by the pixel values of some pixels around it (hereinafter referred to as learning data) x1, x2, ... and predetermined prediction coefficients w1, w2, .... In this case, the predicted value E[y] can be expressed by the following equation:
- E[y] = w1x1 + w2x2 + ...  (1)
- In this case, the prediction coefficients wi for obtaining a predicted value E[y] close to the pixel value y of the original image can be obtained by minimizing the squared error between the predicted value and the pixel value y.
- The normal equations of equation (7) can be set up in the same number as the number of prediction coefficients w to be obtained; therefore, by solving equation (7), the optimal prediction coefficients w can be obtained. In solving equation (7), for example, the sweeping-out method (Gauss-Jordan elimination) can be applied.
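Setting up the normal equations from pairs of learning data and teacher data and solving them by the sweeping-out method can be sketched as follows; this is an illustrative sketch under the assumption of generic learning data x and teacher data y, not the patent's circuit.

```python
def solve_prediction_coeffs(learning, teacher):
    """Hypothetical sketch: build and solve the normal equations of
    equation (7) by Gauss-Jordan elimination ("sweeping-out method") to
    obtain coefficients w minimising the squared error of E[y] = sum(wi*xi)."""
    n = len(learning[0])
    # Normal-equation matrix A = X^T X and right-hand side b = X^T y.
    a = [[sum(x[i] * x[j] for x in learning) for j in range(n)] for i in range(n)]
    b = [sum(x[i] * y for x, y in zip(learning, teacher)) for i in range(n)]
    for col in range(n):
        # Partial pivoting: bring the largest remaining entry to the diagonal.
        pivot = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[pivot] = a[pivot], a[col]
        b[col], b[pivot] = b[pivot], b[col]
        scale = a[col][col]
        a[col] = [v / scale for v in a[col]]
        b[col] /= scale
        # Eliminate this column from every other row (full Gauss-Jordan sweep).
        for r in range(n):
            if r != col and a[r][col]:
                f = a[r][col]
                a[r] = [rv - f * cv for rv, cv in zip(a[r], a[col])]
                b[r] -= f * b[col]
    return b

# Teacher data generated exactly by y = 2*x1 + 3*x2, so w should recover [2, 3].
xs = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 1.0]]
ys = [2.0, 3.0, 5.0, 7.0]
w = solve_prediction_coeffs(xs, ys)
```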
- The adaptive processing obtains the optimal prediction coefficients w in this way and then, using those prediction coefficients w, obtains a predicted value E[y] close to the pixel value y of the original image according to equation (1).
- This adaptive processing is performed in the adaptive processing circuit 46.
- the adaptive processing differs from interpolation processing in that components which are contained in the original image but not contained in the thinned image are reproduced. That is, as far as only equation (1) is concerned, the adaptive processing is the same as interpolation processing using a so-called interpolation filter; however, since the prediction coefficients w, which correspond to the tap coefficients of the interpolation filter, are obtained, so to speak, by learning using the teacher data y, the components contained in the original image can be reproduced. From this, it can be said that the adaptive processing has, so to speak, an image creation action. Next, the processing of the local decoding unit 22 in FIG. 10 will be described with reference to the flowchart in FIG.
- First, in step S21, the correction data from the compression unit 21 is blocked. That is, in the class classification blocking circuit 41, the correction data is divided into four-pixel class classification blocks and supplied to the class classification adaptive processing circuit 43, and in the prediction value calculation blocking circuit 42, the correction data is divided into four-pixel prediction value calculation blocks and supplied to the class classification adaptive processing circuit 43.
- the class classification adaptive processing circuit 43 is supplied with the original image data in addition to the class classification block and the predicted value calculation block.
- In the class classification adaptive processing circuit 43, the class classification block is supplied to the ADRC processing circuit 44, and the prediction value calculation block and the original image data are supplied to the adaptive processing circuit 46.
- Upon receiving the class classification block, the ADRC processing circuit 44 performs, for example, 1-bit ADRC (ADRC with 1-bit requantization) processing on the class classification block in step S22. As a result, the correction data is converted (encoded) into one bit per pixel and output to the class classification circuit 45. In step S23, the class classification circuit 45 performs class classification processing on the class classification block that has undergone the ADRC processing, and determines the class to which the class classification block belongs. The result of this class determination is supplied to the adaptive processing circuit 46 as class information.
- In this embodiment, the class classification processing is performed on a class classification block composed of four pixels that has undergone 1-bit ADRC processing, so that each class classification block is classified into one of 16 classes.
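Deriving the class of a four-pixel block after 1-bit ADRC can be sketched as follows; this is an illustrative sketch, and the convention of reading the four requantized bits as a 4-bit number is an assumption.

```python
def classify(block):
    """Hypothetical sketch of the class classification circuit 45:
    after 1-bit ADRC each of the four pixels is 0 or 1, so the class is
    the 4-bit pattern, giving (2**1)**4 = 16 classes."""
    mn, mx = min(block), max(block)
    dr = mx - mn
    # 1-bit ADRC: pixels in the lower half of the range -> 0, upper half -> 1.
    bits = [0 if dr == 0 else min(int((p - mn) * 2 / dr), 1) for p in block]
    cls = 0
    for b in bits:
        cls = (cls << 1) | b   # pack the four bits into a class number 0..15
    return cls

c = classify([100, 140, 180, 220])   # bits [0, 0, 1, 1] -> class 3
```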
- In step S24, the adaptive processing circuit 46 performs adaptive processing for each class based on the class information from the class classification circuit 45, whereby the prediction coefficients and the predicted values of the original image data are calculated.
- In this embodiment, for a certain pixel of interest, adaptive processing is performed using a prediction value calculation block composed of the four pixels adjacent to it.
- Now, suppose that the class information C about the class classification block consisting of the four pixels X1 to X4 is output from the class classification circuit 45, and that the prediction value calculation block consisting of the four items of correction data X1, X2, X3, and X4 is output from the prediction value calculation blocking circuit 42. First, the correction data constituting the prediction value calculation block is used as the learning data, the pixel value Y1 in the original image is used as the teacher data, and the normal equations shown in equation (7) are established.
- Furthermore, normal equations are similarly established for the other prediction value calculation blocks classified into the class information C. When a number of normal equations sufficient to calculate the prediction coefficients w1(R) to w12(R) for obtaining the predicted value E[YR1] of the pixel value YR1 has been obtained (until such a number of normal equations is obtained, the processing in step S24 consists of setting up the normal equations), the normal equations are solved, whereby the optimal prediction coefficients w1(R) to w12(R) for calculating the predicted value E[YR1] of the pixel value YR1 are obtained for the class information C. Then, the predicted value E[YR1] is obtained according to the following equation, which corresponds to equation (1). The same applies to YG1, YB1, and so on.
- E[YR1] = w1(R)XR1 + w2(R)XG1 + w3(R)XB1 + w4(R)XR2 + w5(R)XG2 + w6(R)XB2 + w7(R)XR3 + w8(R)XG3 + w9(R)XB3 + w10(R)XR4 + w11(R)XG4 + w12(R)XB4
- E[YG1] = w1(G)XR1 + w2(G)XG1 + w3(G)XB1 + w4(G)XR2 + w5(G)XG2 + w6(G)XB2 + w7(G)XR3 + w8(G)XG3 + w9(G)XB3 + w10(G)XR4 + w11(G)XG4 + w12(G)XB4
- E[YB1] = w1(B)XR1 + w2(B)XG1 + w3(B)XB1 + w4(B)XR2 + w5(B)XG2 + w6(B)XB2 + w7(B)XR3 + w8(B)XG3 + w9(B)XB3 + w10(B)XR4 + w11(B)XG4 + w12(B)XB4   (8)
- In step S24, the predicted values of the R, G, and B components are calculated in this way. Then, the predicted values are output to the error calculation unit 23, the prediction coefficients are output to the judgment unit 24, and the process returns to step S21; thereafter, the same processing is repeated.
- FIG. 14 shows an example of the configuration of the error calculator 23 of FIG.
- the original image data is supplied to the blocking circuit 51, and the blocking circuit 51 blocks the image data into blocks corresponding to the predicted values output from the local decoding unit 22. The pixels of each resulting block (in this case, a block is composed of one pixel (Y1 in FIG. 2)) are output to the square error calculation circuit 52.
- the squared error calculator 52 is supplied with the pixel data from the blocking circuit 51, and is also supplied with the pixel data as a predicted value from the local decoder 22.
- the square error calculating circuit 52 calculates a square error as a prediction error of a predicted value with respect to the original image, and supplies the calculated square error to the integrating unit 55.
- the square error calculation circuit 52 is composed of arithmetic units 53 and 54.
- the arithmetic unit 53 subtracts the corresponding predicted value from each of the blocked image data from the blocking circuit 51, and supplies the subtracted value to the arithmetic unit 54.
- the computing unit 54 squares the output of the computing unit 53 (the difference between the original image data and the predicted value) and supplies the result to the integrating unit 55.
- Upon receiving the square error from the square error calculation circuit 52, the integrating unit 55 reads the stored value of the memory 56, adds the stored value and the square error, and supplies the sum to the memory 56 again for storage. By repeating this, the integrated value of the square error is calculated. When the integration of the square error for a predetermined amount (for example, one frame) is completed, the integrating unit 55 reads the integrated value from the memory 56 and supplies it to the judgment unit 24 as error information. The memory 56 clears its stored value each time the processing for one frame is completed, and stores the output value of the integrating unit 55.
- step S31 the stored value of the memory 56 is cleared to, for example, 0, and the process proceeds to step S32.
- In step S32, the blocking circuit 51 blocks the original image data as described above, and supplies the resulting blocks to the square error calculation circuit 52.
- In step S33, the square error calculation circuit 52 calculates the square error between the image data of the original image, which constitutes the block supplied from the blocking circuit 51, and the predicted value supplied from the local decoding unit 22. That is, in step S33, the corresponding predicted value is subtracted in the arithmetic unit 53 from each item of the blocked image data supplied from the blocking circuit 51, and the subtracted value is supplied to the arithmetic unit 54. Further, in step S33, the output of the arithmetic unit 53 is squared in the arithmetic unit 54 and supplied to the integrating unit 55.
- Upon receiving the square error from the square error calculation circuit 52, the integrating unit 55 reads the stored value of the memory 56 in step S34, and adds the stored value and the square error to obtain the integrated value of the square error. The integrated value of the square error calculated by the integrating unit 55 is supplied to the memory 56 and stored by overwriting the previous stored value. Then, in step S35, the integrating unit 55 determines whether or not the integration of the square error for a predetermined amount, for example one frame, has been completed. If it is determined in step S35 that the integration of the square error for one frame has not been completed, the process returns to step S32, and the processing from step S32 is repeated.
- step S35 If it is determined in step S35 that the integration of the square error for one frame has been completed, the process proceeds to step S36, and the integration unit 55 executes the processing for the one frame stored in the memory 56. The integrated value of the square error of the minute is read and output to the determination unit 24 as error information. Then, the process returns to step S31, and the processing from step S31 is repeated again.
- As described above, the error calculation unit 23 calculates the error information Q by performing the calculation according to the following equation: Q = Σ (Y − E[Y])^2, where Σ means the summation over one frame, Y is a pixel value of the original image, and E[Y] is its predicted value.
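The calculation of the error information Q can be sketched as follows; this is a minimal illustrative stand-in for the arithmetic units 53 and 54, the integrating unit 55, and the memory 56, operating on flat lists of pixel values.

```python
def error_information(original, predicted):
    """Sketch of the error calculator 23: the error information Q is the
    sum over one frame of the squared difference (Y - E[Y])**2."""
    total = 0.0                     # memory 56, cleared at the start of the frame
    for y, e in zip(original, predicted):
        total += (y - e) ** 2       # units 53/54 (difference, square) + integrator 55
    return total

q = error_information([10.0, 12.0, 14.0], [10.0, 13.0, 13.0])   # 0 + 1 + 1 = 2.0
```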
- FIG. 16 illustrates a configuration example of the determination unit 24 of FIG.
- the prediction coefficient memory 61 stores the prediction coefficients supplied from the local decoding unit 22.
- the correction data memory 62 stores the correction data supplied from the compression unit 21.
- When the compression unit 21 newly corrects the compressed data and new correction data is supplied, the correction data memory 62 stores the new correction data in place of the already stored (previous) correction data. Also, at the timing when the correction data is updated, a new set of prediction coefficients corresponding to the new correction data is output from the local decoding unit 22. When a new prediction coefficient is thus supplied to the prediction coefficient memory 61, it is stored in place of the already stored (previous) prediction coefficient.
- the error information memory 63 stores the error information supplied from the error calculator 23.
- the error information memory 63 stores, in addition to the error information currently supplied from the error calculator 23, the error information supplied the previous time (the previously stored error information is retained until new error information is supplied). The error information memory 63 is cleared each time processing for a new frame is started.
- the comparison circuit 64 compares the current error information stored in the error information memory 63 with a predetermined threshold value ⁇ , and further, if necessary, compares the current error information with the previous error information. The comparison is also performed. The comparison result in the comparison circuit 64 is supplied to the control circuit 65.
- Based on the comparison result in the comparison circuit 64, the control circuit 65 determines the appropriateness (optimality) of using the correction data stored in the correction data memory 62 as the encoding result of the original image. If it recognizes (determines) that it is not optimal, it supplies a control signal requesting the output of new correction data to the compression unit 21 (correction circuit 32) (FIG. 8). If it recognizes that it is optimal to use the correction data stored in the correction data memory 62 as the encoding result of the original image, the control circuit 65 reads out the prediction coefficients stored in the prediction coefficient memory 61 and outputs them to the multiplexing unit 25, and also reads out the correction data stored in the correction data memory 62 and supplies it to the multiplexing unit 25 as the optimal compressed data. Further, in this case, the control circuit 65 outputs to the compression unit 21 a control signal indicating that the encoding of the image of one frame has been completed, whereby, as described above, the compression unit 21 starts the processing for the next frame.
- First, in step S41, the comparison circuit 64 determines whether or not error information has been received from the error calculation unit 23; if it determines that the error information has not been received, the process returns to step S41. If it is determined in step S41 that the error information has been received, that is, if the error information has been stored in the error information memory 63, the process proceeds to step S42, where the error information currently stored in the error information memory 63 (the current error information) is compared with the predetermined threshold ε to determine which is larger.
- If it is determined in step S42 that the current error information is equal to or greater than the predetermined threshold ε, the comparison circuit 64 reads the previous error information stored in the error information memory 63. Then, in step S43, the comparison circuit 64 compares the previous error information with the current error information and determines which is larger.
- Note that, in the first instance, the error information memory 63 does not store the previous error information. In this case, the judgment unit 24 does not perform the processing from step S43 onward, and the control circuit 65 outputs a control signal that controls the correction circuit 32 (FIG. 8) so as to output a predetermined initial address.
- If it is determined in step S43 that the current error information is equal to or less than the previous error information, that is, if the error information has been reduced by correcting the compressed data, the process proceeds to step S44, where the control circuit 65 outputs to the correction circuit 32 a control signal instructing it to change the correction value Δ in the same manner as before, and the process returns to step S41.
- On the other hand, if it is determined in step S43 that the current error information is larger than the previous error information, that is, if the error information has increased as a result of correcting the compressed data, the process proceeds to step S45, where the control circuit 65 outputs to the correction circuit 32 a control signal instructing it to change the correction value Δ in the direction opposite to the previous time, and the process returns to step S41.
- Note that when the error information, having been decreasing, starts to increase, the control circuit 65 outputs a control signal instructing that the correction value be changed by, for example, one half of the previous amount of change.
- On the other hand, if it is determined in step S42 that the current error information is smaller than the predetermined threshold ε, the control circuit 65 reads the prediction coefficients stored in the prediction coefficient memory 61 and the correction data stored in the correction data memory 62, supplies them to the multiplexing unit 25, and ends the process.
- In the above case, the correction circuit 32 may be configured to correct the entire compressed data of one frame, or to correct only a part of it.
- In the latter case, the control circuit 65 can, for example, detect pixels having a strong influence on the error information and correct only the compressed data of such pixels. Pixels having a strong influence on the error information can be detected, for example, as follows. First, error information is obtained by performing the processing using the compressed data of the pixels remaining after thinning as it is. Then, a control signal is output from the control circuit 65 to the correction circuit 32 so as to perform processing of correcting the compressed data of the pixels remaining after the thinning, one pixel at a time, by the same correction value Δ. The resulting error information is compared with the error information obtained when the compressed data is used as it is, and a pixel for which the difference is equal to or more than a predetermined value is detected as a pixel having a strong influence on the error information.
- As described above, the correction of the compressed data is repeated until the error information becomes equal to or smaller than the predetermined threshold ε, and the correction data obtained when the error information becomes equal to or smaller than the predetermined threshold ε is output as the encoding result of the image. Therefore, in the receiving device 4, from correction data in which the pixel values of the pixels constituting the thinned image have been corrected to the values most appropriate for restoring the original image, it is possible to obtain a decoded image that is the same as (almost the same as) the original image.
- the image is compressed not only by the thinning process but also by the ADRC process and the class classification adaptive process, so that encoded data with a very high compression rate can be obtained.
- The above-described encoding processing in the transmitting device 1 realizes highly efficient compression by organically combining, so to speak, the compression processing by thinning and the class classification adaptive processing; it can therefore be called integrated coding processing.
- FIG. 18 shows still another configuration example of the receiving device 4 of FIG.
- the encoded data recorded on the recording medium 2 is reproduced, or the encoded data transmitted via the transmission path 3 is received and supplied to the separation unit 72.
- the separation unit 72 separates the encoded data into correction data and prediction coefficients; the correction data is supplied to the class classification blocking circuit 73 and the prediction value calculation blocking circuit 77, and the prediction coefficients are supplied to the prediction circuit 76.
- The class classification blocking circuit 73, the ADRC processing circuit 74, the class classification circuit 75, and the prediction value calculation blocking circuit 77 are configured in the same manner as the class classification blocking circuit 41, the ADRC processing circuit 44, the class classification circuit 45, and the prediction value calculation blocking circuit 42, respectively. Therefore, these blocks perform the same processing as in FIG. 10, whereby the prediction value calculation block is output from the prediction value calculation blocking circuit 77 and the class information is output from the class classification circuit 75. The prediction value calculation block and the class information are supplied to the prediction circuit 76.
- the prediction circuit 76 calculates the predicted value according to equation (1), using the prediction coefficients corresponding to the class information and the correction data constituting the prediction value calculation block supplied from the prediction value calculation blocking circuit 77. An image of one frame composed of such predicted values is output as the decoded image. This decoded image is almost identical to the original image, as described above.
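The per-pixel operation of the prediction circuit 76 can be sketched as follows; this is an illustrative sketch, and the coefficient table keyed by class is a hypothetical stand-in for the separated prediction coefficients.

```python
def predict_pixel(coeffs_by_class, cls, block):
    """Sketch of the prediction circuit 76: compute the linear first-order
    combination of equation (1) using the coefficients for the block's class."""
    w = coeffs_by_class[cls]
    return sum(wi * xi for wi, xi in zip(w, block))

# Hypothetical learned coefficients for class 3: a simple four-tap average.
coeffs = {3: [0.25, 0.25, 0.25, 0.25]}
y_hat = predict_pixel(coeffs, 3, [100.0, 140.0, 180.0, 220.0])   # 160.0
```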
- Note that even a device that decodes the thinned image by simple interpolation, without using the prediction coefficients, can obtain a decoded image by performing ordinary interpolation. However, the decoded image obtained in this case has deteriorated image quality (resolution).
- FIG. 19 shows another configuration example of the local decoding unit 22 of FIG. In the figure, parts corresponding to those in FIG. 10 are denoted by the same reference numerals. That is, the local decoding unit 22 in FIG. 19 is configured similarly to that in FIG. 10, except that a prediction coefficient ROM 81 and a prediction circuit 82 are provided in place of the adaptive processing circuit 46.
- the prediction coefficient ROM 81 stores prediction coefficients for each class obtained in advance by learning (described later); it receives the class information output by the class classification circuit 45, reads the prediction coefficients stored at the address corresponding to that class information, and supplies them to the prediction circuit 82.
- the prediction circuit 82 calculates the linear first-order equation of equation (1) (specifically, for example, equation (8)) using the prediction value calculation block from the prediction value calculation blocking circuit 42 and the prediction coefficients from the prediction coefficient ROM 81, thereby calculating the predicted value of the original image. Therefore, according to the class classification adaptive processing circuit 43 of FIG. 19, the predicted value is calculated without using the original image.
- FIG. 20 illustrates a configuration example of an image processing apparatus that performs learning for obtaining the prediction coefficients stored in the prediction coefficient ROM 81 of FIG. 19.
- The learning blocking circuit 91 and the teacher blocking circuit 92 are supplied with learning image data (learning images) for obtaining prediction coefficients applicable to any image (accordingly, image data before the thinning processing is performed).
- The learning blocking circuit 91 extracts, for example, four pixels (for example, X1 to X4 in FIG. 2) from the input image data, and supplies the block composed of these four pixels, as a learning block, to the ADRC processing circuit 93 and the learning data memory 96.
- The teacher blocking circuit 92 generates a block composed of, for example, one pixel (Y1 in FIG. 2) from the input image data, and supplies it to the teacher data memory 98 as a teacher block.
- Here, when the learning blocking circuit 91 generates a learning block composed of four pixels, the teacher blocking circuit 92 generates the corresponding one-pixel teacher block.
- The ADRC processing circuit 93 performs 1-bit ADRC processing on the four-pixel block constituting the learning block, as in the case of the ADRC processing circuit 44 in FIG. 10.
- the 4-pixel block subjected to the ADRC processing is supplied to the classifying circuit 94.
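- A minimal sketch of the 1-bit ADRC requantization described above, thresholding each pixel against the midpoint of the block's dynamic range (the block values are hypothetical):

```python
def adrc_1bit(pixels):
    """Requantize each pixel of a block to 1 bit relative to the block's dynamic range."""
    lo, hi = min(pixels), max(pixels)
    mid = (lo + hi) / 2.0  # midpoint of the dynamic range
    return [1 if p >= mid else 0 for p in pixels]

print(adrc_1bit([10, 200, 90, 160]))  # [0, 1, 0, 1]
```

The resulting 1-bit pattern serves directly as the basis of the class code in the subsequent class classification.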
- The class classification circuit 94 classifies the block from the ADRC processing circuit 93, and the resulting class information is supplied to the learning data memory 96 and the teacher data memory 98 via the terminal a of the switch 95.
- In the learning data memory 96 or the teacher data memory 98, the learning block from the learning blocking circuit 91 or the teacher block from the teacher blocking circuit 92, respectively, is stored at the address corresponding to the class information supplied thereto.
- Therefore, if a block consisting of, for example, the four pixels X1 to X4 in FIG. 2 is stored at a certain address in the learning data memory 96 as a learning block, the block consisting of the corresponding one pixel Y1 in FIG. 2 is stored at that same address in the teacher data memory 98 as a teacher block.
- That is, a learning block, and the teacher block composed of the one pixel whose predicted value is to be calculated using the prediction value calculation block composed of the four correction data having the same positional relationship as the four pixels of that learning block, are stored at the same address in the learning data memory 96 and the teacher data memory 98.
- The learning data memory 96 and the teacher data memory 98 can each store a plurality of pieces of information at the same address, whereby a plurality of learning blocks and teacher blocks classified into the same class can be stored.
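- The class-addressed pairing of learning blocks and teacher blocks can be modelled with dictionaries keyed by class information; all names and values here are hypothetical:

```python
from collections import defaultdict

learning_data = defaultdict(list)  # class information -> stored learning blocks
teacher_data = defaultdict(list)   # class information -> stored teacher blocks

def store(class_info, learning_block, teacher_block):
    # the same "address" (class) receives the corresponding pair of blocks
    learning_data[class_info].append(learning_block)
    teacher_data[class_info].append(teacher_block)

store(5, [100, 104, 98, 102], [101])
store(5, [50, 52, 48, 54], [51])
print(len(learning_data[5]), len(teacher_data[5]))  # 2 2
```

Because both memories use the same class as the address, reading one address yields matched sets of learning and teacher blocks, as the text describes.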
- Thereafter, the switch 95, which has been selecting the terminal a, is switched to the terminal b, and the output of the counter 97 is supplied to the learning data memory 96 and the teacher data memory 98 as an address.
- The counter 97 counts a predetermined clock and outputs its count value, and the learning data memory 96 and the teacher data memory 98 read the learning block and the teacher block stored at the address corresponding to that count value and supply them to the arithmetic circuit 99.
- As a result, the set of learning blocks and the set of teacher blocks of the class corresponding to the count value of the counter 97 are supplied to the arithmetic circuit 99.
- When the arithmetic circuit 99 receives the set of learning blocks and the set of teacher blocks for a certain class, it uses them to calculate, by the least squares method, prediction coefficients that minimize the error.
- That is, for example, let the pixel values of the pixels constituting a learning block be x1, x2, x3, …, and let the prediction coefficients to be obtained be w1, w2, w3, …. The prediction coefficients w1, w2, w3, … are obtained so that the pixel value y of the pixel constituting the corresponding teacher block is given by their linear first-order combination.
- The arithmetic circuit 99 therefore obtains, from the learning blocks of the same class and the corresponding teacher blocks, the prediction coefficients w1, w2, w3, … that minimize the square error of the predicted value w1x1 + w2x2 + w3x3 + … with respect to the true value y, by solving the normal equations shown in the above equation (7).
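- Solving the normal equations of equation (7) is an ordinary least-squares fit; the following NumPy sketch recovers known coefficients from hypothetical learning and teacher data:

```python
import numpy as np

# rows: pixel values x1..x4 of learning blocks of one class (hypothetical data)
X = np.array([[1., 0., 0., 0.],
              [0., 1., 0., 0.],
              [0., 0., 1., 0.],
              [0., 0., 0., 1.],
              [1., 1., 1., 1.]])
w_true = np.array([0.1, 0.2, 0.3, 0.4])
y = X @ w_true  # teacher pixel values generated from the known coefficients

# least-squares solution minimizing sum((X @ w - y)**2)
w, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.allclose(w, w_true))  # True
```

`np.linalg.lstsq` solves the same minimization that the normal equations express, without forming them explicitly.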
- the prediction coefficient for each class obtained in the arithmetic circuit 99 is supplied to the memory 100.
- In addition to the prediction coefficients from the arithmetic circuit 99, the memory 100 is supplied with the count value from the counter 97, so that in the memory 100 the prediction coefficients from the arithmetic circuit 99 are stored at the address corresponding to the count value from the counter 97.
- the memory 100 stores, at the address corresponding to each class, a prediction coefficient optimal for predicting a pixel of a block of the class.
- The prediction coefficient ROM 81 in FIG. 19 stores the prediction coefficients stored in the memory 100 as described above.
- In the prediction coefficient ROM 81, it is also possible to store, at the address corresponding to each class, the average value of the pixel values constituting the teacher blocks instead of the prediction coefficients. In this case, when class information is given, the pixel value corresponding to that class is output, so that in the local decoding unit 22 in FIG. 19 it is unnecessary to provide the prediction value calculation blocking circuit 42 and the prediction circuit 82.
- When the local decoding unit 22 is configured as shown in FIG. 19, the receiving device 4 shown in FIG. may also be configured in the same manner as the classification adaptive processing circuit 43.
- In the present embodiment, the sum of the squares of the errors is used as the error information.
- As other examples of the error information, it is possible to use, for example, the sum of the absolute values of the errors. Which one to use as the error information can be determined based on, for example, its convergence.
- Further, in the present embodiment, the correction of the compressed data is repeated until the error information becomes equal to or less than the predetermined threshold ε.
- However, it is also possible to set an upper limit on the number of times the compressed data is corrected. That is, for example, in the case of transmitting an image in real time, the processing for one frame must be completed within a predetermined period, but the error information does not always converge within such a predetermined period.
- Further, in the present embodiment, a block is formed from an image of one frame, but a block may also be formed from other pixels, for example, pixels at the same position in a plurality of temporally consecutive frames.
- Further, in the present embodiment, the compression unit 21 simply thins out the image, that is, extracts one pixel for every two pixels, and uses this as the compressed data. It is also possible, however, to obtain, for example, the average value of a plurality of pixels constituting a block, use that average value as the pixel value of the pixel at the center of the block, and thereby reduce the number of pixels to produce the compressed data.
- FIG. 21 shows a configuration example of the transmission device 1 in this case.
- Image data to be encoded is input to the blocking circuit 111. The blocking circuit 111 divides the image data into class classification blocks, which are the units for classifying the image into predetermined classes according to its properties, and supplies them to the ADRC processing circuit 112 and the delay circuit 115.
- The ADRC processing circuit 112 performs ADRC processing on the block (class classification block) from the blocking circuit 111, and supplies the block composed of the resulting ADRC code to the class classification circuit 113.
- The class classification circuit 113 performs class classification processing for classifying the block from the ADRC processing circuit 112 into a predetermined class according to its property, and supplies class information indicating the class to which the block belongs to the mapping coefficient memory 114.
- The mapping coefficient memory 114 stores, for each item of class information, mapping coefficients obtained by learning (mapping coefficient learning) described later. Using the class information supplied from the class classification circuit 113 as an address, it reads the mapping coefficients stored at that address and supplies them to the arithmetic circuit 116.
- The delay circuit 115 delays the block supplied from the blocking circuit 111 until the mapping coefficients corresponding to the class information of that block are read from the mapping coefficient memory 114, and supplies the delayed block to the arithmetic circuit 116.
- The arithmetic circuit 116 performs a predetermined operation using the pixel values of the pixels constituting the block supplied from the delay circuit 115 and the mapping coefficients supplied from the mapping coefficient memory 114 in correspondence with the class of that block, thereby encoding the image by thinning out (reducing) the number of its pixels to calculate coded data. That is, letting the pixel values of the pixels constituting the block output by the blocking circuit 111 (pixel values of the original image) be y1, y2, …, and the mapping coefficients output by the mapping coefficient memory 114 in correspondence with the class of that block be k1, k2, …, the arithmetic circuit 116 calculates a predetermined function value f(y1, y2, …, k1, k2, …), and outputs this function value f(y1, y2, …, k1, k2, …) as the pixel value of, for example, the center pixel of the pixels constituting the block (class classification block) output by the blocking circuit 111.
- Accordingly, the arithmetic circuit 116 thins out the image data to 1/N and outputs this as the encoded data.
- Here, the encoded data output by the arithmetic circuit 116 is not obtained by so-called simple thinning processing, in which the one pixel at the center of a block composed of N pixels is simply selected and output; rather, it is the function value f(y1, y2, …, k1, k2, …) defined by the N pixels constituting the block. In other words, this function value f(y1, y2, …, k1, k2, …) can be regarded as the pixel value of the center pixel of the block, obtained by simple thinning, corrected on the basis of the pixel values of the surrounding pixels. Therefore, the encoded data, which is the result of the operation on the mapping coefficients and the pixels constituting the block, is hereinafter also referred to as correction data as appropriate.
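- As one concrete (hypothetical) choice of f — the text leaves the function general — the correction data can be a weighted combination of the block's pixels, i.e. the center pixel corrected by its surroundings:

```python
def correction_data(block, k):
    """f(y1..yN, k1..kN): the block's center pixel corrected by its neighbours."""
    return sum(y * ki for y, ki in zip(block, k))

# hypothetical 5-pixel cross-shaped block (center pixel last) and mapping coefficients
block = [100, 102, 98, 101, 99]   # y1..y5; y5 is the center pixel
k = [0.1, 0.1, 0.1, 0.1, 0.6]     # weight the center pixel most heavily
print(round(correction_data(block, k), 2))  # 99.5
```

With all weight on the center pixel (k = [0, 0, 0, 0, 1]) this degenerates into the simple thinning mentioned above; other weights realize the correction by surrounding pixels.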
- The coefficients k1, k2, … used in this operation are referred to as mapping coefficients.
- The transmitter/recording device 117 records the correction data supplied as encoded data from the arithmetic circuit 116 on the recording medium 2, or transmits it via the transmission path 3.
- First, image data is supplied to the blocking circuit 111 in units of one frame (field). In the blocking circuit 111, the one-frame image is divided into class classification blocks. That is, the blocking circuit 111 divides the image into class classification blocks each composed of, for example, five pixels, and sequentially supplies them to the ADRC processing circuit 112 and the delay circuit 115.
- In this case, the class classification block is a cross-shaped block composed of 5 pixels, but the shape of the class classification block is not limited to this; it may be, for example, rectangular, square, or any other shape. The number of pixels constituting the class classification block is also not limited to five. Furthermore, the class classification block need not be composed of adjacent pixels, and can instead be composed of pixels apart from one another. However, the shape and the number of pixels must match those used at the time of the learning (mapping coefficient learning) described later.
- Upon receiving the class classification block from the blocking circuit 111, the ADRC processing circuit 112, in step S62, subjects the four pixels (X1 to X4 in FIG. 2) of the block, excluding the center pixel (Y1 in FIG. 2), to, for example, 1-bit ADRC processing, whereby the block becomes one composed of pixels whose R, G, and B components are each represented by 1 bit.
- the classification block subjected to ADRC processing is supplied to the classification circuit 113.
- In step S63, the class classification block from the ADRC processing circuit 112 is classified, and the resulting class information is supplied to the mapping coefficient memory 114 as an address. The mapping coefficients corresponding to the class information supplied from the class classification circuit 113 are thereby read from the mapping coefficient memory 114 and supplied to the arithmetic circuit 116.
- In the delay circuit 115, the 5-pixel data of the class classification block from the blocking circuit 111 is delayed until the mapping coefficients corresponding to the class information of that block are read from the mapping coefficient memory 114, and is then supplied to the arithmetic circuit 116.
- The arithmetic circuit 116 calculates the above-described function value using the pixel value of each pixel constituting the class classification block from the delay circuit 115 and the mapping coefficients from the mapping coefficient memory 114, thereby calculating correction data in which the pixel value of the center pixel of the class classification block is corrected.
- That is, one pixel of data at the position of the pixel data Y1 (X5) is generated from the pixel data X1 to X4 and the pixel data Y1 (X5) in FIG. 2. This blocking is repeated over the pixel data, and the pixel data is ultimately thinned out to 1/2.
- This correction data is supplied to the transmitter/recording device 117 as encoded data obtained by encoding the image.
- In step S66, it is determined whether or not the processing for one frame of image data has been completed. If it is determined in step S66 that the processing for one frame of image data has not been completed, the process returns to step S62, and the processing from step S62 onward is repeated for the next class classification block. If it is determined in step S66 that the processing for one frame of image data has been completed, the process returns to step S61, and the processing from step S61 onward is repeated for the next frame.
- FIG. 23 shows a configuration example of an image processing apparatus that performs the learning (mapping coefficient learning) processing for calculating the mapping coefficients stored in the mapping coefficient memory 114 of FIG. 21.
- The memory 121 stores one or more frames of digital image data suitable for learning (hereinafter referred to as learning images as appropriate).
- The blocking circuit 122 reads the image data stored in the memory 121, forms the same blocks as the class classification blocks output by the blocking circuit 111 in FIG. 21, and supplies them to the ADRC processing circuit 123 and the arithmetic circuit 126.
- The ADRC processing circuit 123 and the class classification circuit 124 perform the same processing as the ADRC processing circuit 112 and the class classification circuit 113 of FIG. 21, respectively. Accordingly, the class classification circuit 124 outputs the class information of the block output by the blocking circuit 122, and this class information is supplied to the mapping coefficient memory 131 as an address.
- The arithmetic circuit 126 performs the same operation as the arithmetic circuit 116 in FIG. 21 using the pixels constituting the block supplied from the blocking circuit 122 and the mapping coefficients supplied from the mapping coefficient memory 131, and supplies the resulting correction data (function value f(·)) to the local decoding unit 127.
- Based on the correction data supplied from the arithmetic circuit 126, the local decoding unit 127 predicts (calculates) the predicted values of the pixel values of the pixels constituting the original learning image (the blocks output by the blocking circuit 122), and supplies them to the error calculation unit 128.
- The error calculation unit 128 reads from the memory 121 the pixel values (true values) of the learning image corresponding to the predicted values supplied from the local decoding unit 127, calculates (detects) the prediction error of the predicted values with respect to those pixel values, and supplies the prediction error to the determination unit 129 as error information.
- The determination unit 129 compares the error information from the error calculation unit 128 with a predetermined threshold ε1, and controls the mapping coefficient setting circuit 130 according to the comparison result.
- The mapping coefficient setting circuit 130 sets (changes) sets of mapping coefficients, equal in number to the classes obtained as a result of the class classification in the class classification circuit 124, in accordance with the control of the determination unit 129, and supplies them to the mapping coefficient memory 131.
- The mapping coefficient memory 131 temporarily stores the mapping coefficients supplied from the mapping coefficient setting circuit 130.
- The mapping coefficient memory 131 has a storage area capable of storing as many mapping coefficients (sets of mapping coefficients) as there are classes to be classified in the class classification circuit 124. When new mapping coefficients are supplied from the mapping coefficient setting circuit 130, each storage area stores the new mapping coefficients in place of those already stored. Further, the mapping coefficient memory 131 reads out the mapping coefficients stored at the address corresponding to the class information supplied from the class classification circuit 124, and supplies them to the arithmetic circuit 126.
- First, the mapping coefficient setting circuit 130 sets initial values for as many sets of mapping coefficients as there are classes to be classified in the class classification circuit 124, and supplies them to the mapping coefficient memory 131. In the mapping coefficient memory 131, the mapping coefficients (initial values) from the mapping coefficient setting circuit 130 are stored at the addresses of the corresponding classes.
- In step S72, the blocking circuit 122 divides all the learning images stored in the memory 121 into blocks of five pixels (X1 to X4 and Y1 in FIG. 2), as in the case of the blocking circuit 111 in FIG. 21. Further, the blocking circuit 122 reads the blocks from the memory 121 and sequentially supplies them to the ADRC processing circuit 123 and the arithmetic circuit 126.
- In step S73, the ADRC processing circuit 123 subjects the four pixels (X1 to X4 in FIG. 2) of the block from the blocking circuit 122 to 1-bit ADRC processing, in the same manner as the ADRC processing circuit 112 in FIG. 21, and supplies the result to the class classification circuit 124.
- In step S74, the class classification circuit 124 determines the class of the block supplied from the ADRC processing circuit 123, and supplies the class information to the mapping coefficient memory 131 as an address. Thereby, in step S75, the mapping coefficients are read from the address of the mapping coefficient memory 131 corresponding to the class information supplied from the class classification circuit 124, and are supplied to the arithmetic circuit 126.
- The arithmetic circuit 126 receives the five pixels (X1 to X4 and Y1 in FIG. 2) of the block from the blocking circuit 122, and receives from the mapping coefficient memory 131 the mapping coefficients corresponding to the class of that block. The arithmetic circuit 126 then calculates the above-described function value f(·) using those mapping coefficients and the pixel values of the five pixels constituting the block supplied from the blocking circuit 122, and supplies the calculation result to the local decoding unit 127 as correction data in which the pixel value of the center pixel of the block supplied from the blocking circuit 122 has been corrected.
- That is, the arithmetic circuit 126 obtains correction data by correcting the pixel value, and outputs it to the local decoding unit 127. Since the blocking in the blocking circuit 122 is performed redundantly over the pixel data, the number of pixels constituting the learning image is thinned to 1/2 and supplied to the local decoding unit 127.
- In step S77, it is determined whether correction data has been obtained for all the learning images stored in the memory 121. If it is determined in step S77 that correction data for all the learning images has not yet been obtained, the process returns to step S73, and the processing of steps S73 to S77 is repeated until correction data for all the learning images is obtained.
- If it is determined in step S77 that the correction data for all the learning images has been obtained, that is, if images in which all the learning images stored in the memory 121 are thinned to 1/2 have been obtained (these thinned images do not consist of the learning images simply thinned to 1/2, but of pixel values obtained by the operation with the mapping coefficients), the process proceeds to step S78, in which the local decoding unit 127 locally decodes the thinned images to calculate the predicted values of the original learning images. These predicted values are supplied to the error calculation unit 128.
- Note that the image composed of the predicted values obtained by the local decoding unit 127 (when, as described later, the error information output by the error calculation unit 128 becomes smaller than the threshold ε1) is the same as the decoded image obtained on the receiving device 4 side.
- In step S79, the error calculation unit 128 reads the learning image from the memory 121, and calculates the prediction error of the predicted values supplied from the local decoding unit 127 with respect to that learning image. That is, the pixel value of the learning image is represented by y, and the predicted value output from the local decoding unit 127 is represented by E[y].
- In this case, the error variance Q (the sum of the squares of the errors), that is, Q = Σ (y − E[y])², is calculated, and this is supplied to the determination unit 129 as error information.
- Here, Σ represents the summation over all the pixels of the learning image.
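- The error information can be sketched as the following sum of squared errors over all pixels (the pixel values below are hypothetical):

```python
def error_information(true_pixels, predicted):
    """Q = sum over all pixels of (y - E[y])**2."""
    return sum((y - e) ** 2 for y, e in zip(true_pixels, predicted))

print(error_information([100, 102, 98], [101, 100, 98]))  # 5
```

This is the quantity the determination unit compares against the threshold ε1 to decide whether the mapping coefficients must be changed again.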
- Upon receiving the error information from the error calculation unit 128, the determination unit 129 compares it with the predetermined threshold ε1 and determines their magnitude relationship in step S80. If it is determined in step S80 that the error information is equal to or greater than the threshold ε1, that is, if the image composed of the predicted values obtained by the local decoding unit 127 is not recognized as being the same as the original learning image, the determination unit 129 outputs a control signal to the mapping coefficient setting circuit 130. In step S81, the mapping coefficient setting circuit 130 changes the mapping coefficients in accordance with the control signal from the determination unit 129, and stores the changed mapping coefficients anew in the mapping coefficient memory 131.
- Thereafter, the process returns to step S73, and the processing from step S73 onward is repeated using the changed mapping coefficients stored in the mapping coefficient memory 131.
- Here, the mapping coefficients may be changed at random in the mapping coefficient setting circuit 130. Alternatively, when the current error information is smaller than the previous error information, the coefficients can be changed following the same trend as before, and when the current error information is larger than the previous error information, they can be changed following the opposite trend.
- Further, the mapping coefficients can be changed for all classes, or for only some of them. In the case where only the mapping coefficients for some classes are changed, for example, a class having a strong influence on the error information can be detected, and only the mapping coefficients for such a class can be changed.
- A class having a strong influence on the error information can be detected, for example, as follows. First, error information is obtained by performing the processing using the initial values of the mapping coefficients. Then, the mapping coefficients are changed by the same amount for each class, and the resulting error information is compared with that obtained using the initial values; a class for which the difference is equal to or greater than a predetermined value can be detected as one having a strong influence on the error information. Further, when a plurality of mapping coefficients, such as k1, k2, …, form one set, only those having a strong influence on the error information can be changed.
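- The trend-following update rule described above (keep changing in the same direction while the error information decreases, reverse the direction when it increases) can be sketched as follows; the step size and the stand-in error function are hypothetical:

```python
def update_step(prev_step, prev_error, curr_error):
    """Keep the previous direction if the error improved, otherwise reverse it."""
    return prev_step if curr_error < prev_error else -prev_step

def error_of(k):
    return (k - 0.7) ** 2  # stand-in for the error information of coefficient k

k, step = 0.0, 0.1
prev_err = error_of(k)
for _ in range(20):
    k += step
    err = error_of(k)
    step = update_step(step, prev_err, err)
    prev_err = err
print(abs(k - 0.7) < 0.15)  # True
```

The coefficient oscillates near the minimum of the stand-in error function; in the apparatus, the iteration instead terminates once the error information falls below the threshold ε1.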
- In the present embodiment, the mapping coefficients are set for each class, but the mapping coefficients may also be set independently, for example, for each block, or for each set of adjacent blocks.
- Here, when the mapping coefficients are set independently for each block, a plurality of sets of mapping coefficients may be obtained for a certain class (conversely, there may be classes for which no set of mapping coefficients is obtained). Since the mapping coefficients must ultimately be determined for each class, when a plurality of sets of mapping coefficients are obtained for a certain class, some processing must be performed on those sets to determine a single set of mapping coefficients.
- On the other hand, if it is determined in step S80 that the error information is smaller than the threshold ε1, that is, if the image composed of the predicted values obtained by the local decoding unit 127 is recognized as being the same as the original learning image, the processing ends.
- At this point, the mapping coefficients for each class stored in the mapping coefficient memory 131 are set in the mapping coefficient memory 114 in FIG. 21 as the optimal coefficients for obtaining correction data from which a decoded image regarded as the same as the original image can be restored.
- the receiving device 4 can obtain an image substantially the same as the original image.
- In the case described above, the image is divided into blocks of four pixels for class classification by the blocking circuit 122, and 1-bit ADRC processing is performed in the ADRC processing circuit 123, so that the number of classes obtained by the class classification in the class classification circuit 124 is 4096, and 4096 sets of mapping coefficients are therefore obtained.
- FIG. 25 shows a configuration example of the local decoding section 127 of FIG.
- the correction data from the arithmetic circuit 126 is supplied to the classifying blocking circuit 141 and the predicted value calculating blocking circuit 142.
- the class classification blocking circuit 141 is configured to block the correction data into class classification blocks, which are units for classifying the correction data into predetermined classes according to their properties.
- However, the class classification block obtained in the class classification blocking circuit 141 of FIG. 25 is used to determine the class of the block for which the predicted values are to be calculated, and therefore differs from the block generated by the blocking circuit 111 in FIG. 21.
- The prediction value calculation blocking circuit 142 blocks the correction data into prediction value calculation blocks, which are the units for calculating the predicted values of the original image (here, the learning image).
- The prediction value calculation block obtained in the prediction value calculation blocking circuit 142 is supplied to the prediction circuit 146.
- As in the case of the class classification block, the number of pixels and the shape of the prediction value calculation block are not limited to those described above. However, the number of pixels constituting the prediction value calculation block in the local decoding unit 127 is preferably larger than the number of pixels constituting the class classification block.
- the ADRC processing circuit 143 performs, for example, 1-bit ADRC processing on the blocks (blocks for class classification) output from the class classification blocking circuit 141 and supplies the blocks to the class classification circuit 144.
- the class classification circuit 144 classifies the blocks from the ADRC processing circuit 143 into classes, and supplies class information as a result of the classification to the prediction coefficient ROM 145.
- The prediction coefficient ROM 145 stores prediction coefficients for each class. Upon receiving the class information, the prediction coefficient ROM 145 reads the prediction coefficients stored at the address corresponding to that class information and supplies them to the prediction circuit 146.
- the prediction coefficient stored in the prediction coefficient ROM 145 is obtained by learning (prediction coefficient learning) described later.
- The prediction circuit 146 calculates (predicts) the predicted values of the original image (learning image) using the prediction value calculation block from the prediction value calculation blocking circuit 142 and the prediction coefficients from the prediction coefficient ROM 145.
- In step S91, the correction data from the arithmetic circuit 126 is sequentially received and divided into blocks. That is, in the class classification blocking circuit 141, the correction data is divided into class classification blocks of four pixels (X1 to X4 in FIG. 2) and supplied to the ADRC processing circuit 143, and in the prediction value calculation blocking circuit 142, the correction data is divided into prediction value calculation blocks of four pixels and supplied to the prediction circuit 146.
- Here, the class classification blocking circuit 141 and the prediction value calculation blocking circuit 142 generate mutually corresponding class classification blocks and prediction value calculation blocks.
- Upon receiving the class classification block, the ADRC processing circuit 143 subjects it, in step S92, to, for example, 1-bit ADRC (ADRC performing requantization with 1 bit) processing, whereby the correction data is converted (encoded) into 1 bit and output to the class classification circuit 144. In step S93, the class classification circuit 144 performs class classification processing on the ADRC-processed class classification block and determines the class to which it belongs. The result of this class determination is supplied to the prediction coefficient ROM 145 as class information.
- In step S94, the prediction coefficients are read from the address of the prediction coefficient ROM 145 corresponding to the class information from the class classification circuit 144, and in step S95, the prediction circuit 146 calculates the predicted value E[y] of the pixel value y of the original image according to, for example, the following linear first-order expression, using those prediction coefficients and the four pixel values constituting the prediction value calculation block from the prediction value calculation blocking circuit 142.
- the prediction value of one pixel is calculated from the four pixels constituting the prediction value calculation block.
- Specifically, it is assumed that, for example, the class information C for the class classification block consisting of the correction data X1 to X4 shown in FIG. 2 is output from the class classification circuit 144, and the prediction value calculation block consisting of X1 to X4 is output from the prediction value calculation blocking circuit 142 as the prediction value calculation block.
- Assuming that the prediction coefficient ROM 145 stores, at the address corresponding to the class information C, w1(R) to w12(R), w1(G) to w12(G), and w1(B) to w12(B) as a set of prediction coefficients, the predicted values E[YRi], E[YGi], and E[YBi] of the components YRi, YGi, and YBi of each pixel are calculated in the same manner as described above.
- When the predicted values are obtained as described above, the process returns to step S91, and thereafter the processing of steps S91 to S95 is repeated, whereby predicted values are obtained in units of four pixels.
- the image processing apparatus that performs learning (prediction coefficient learning) for obtaining the prediction coefficients stored in the prediction coefficient ROM 145 in FIG. 25 has the same configuration as that shown in FIG. Therefore, the description is omitted.
- FIG. 27 illustrates another configuration example of an image processing apparatus that performs learning (mapping coefficient learning) processing for calculating mapping coefficients stored in the mapping coefficient memory 114 in FIG.
- The optimal prediction can be performed not only when the function f is expressed by a linear first-order expression but also when it is expressed by a nonlinear expression or a higher-order expression such as a quadratic expression.
- However, the image processing apparatus shown in FIG. 27 can obtain the optimum prediction coefficients only when the function f is represented by a linear first-order expression.
- Let the pixel values of the four pixels (X1, X2, X3, and X4 in FIG. 2) constituting the block output from the blocking circuit 111 be y1, y2, y3, and y4 (each having R, G, and B components), and let the mapping coefficients output by the mapping coefficient memory 114 be k1, k2, k3, and k4 (each likewise having R, G, and B components).
- The arithmetic circuit 116 then calculates the function value f(y1, y2, ..., k1, k2, ...) according to the following equation.
- The optimal correction data calculation section 170 consists of a compression section 171, a correction section 172, a local decoding section 173, an error calculation section 174, and a determination section 175. From the input learning image, it calculates pixel values that form an image compressed by reducing the number of pixels and that are optimal for predicting the original image (hereinafter referred to as the optimum correction data as appropriate), and supplies them to the latch circuit 176.
- The learning image supplied to the optimal correction data calculation unit 170 is supplied to the compression unit 171 and the error calculation unit 174.
- The compression unit 171 simply thins out the learning image at the same rate as the rate at which the arithmetic circuit 116 in FIG. 21 thins out the pixels. That is, in this embodiment, the learning image is simply thinned out to 1/2, thereby being compressed, and is supplied to the correction unit 172.
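The simple thinning performed by the compression unit 171 amounts to keeping every second pixel. A one-dimensional sketch (the real circuit operates on two-dimensional image data):

```python
def thin_out(pixels, factor=2):
    """Simple decimation: keep every `factor`-th pixel (1/2 thinning
    when factor is 2). No filtering is applied, matching the
    'simple thinning' described for the compression unit."""
    return pixels[::factor]
```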
- The correction section 172 corrects the data supplied from the compression section 171, which has been compressed by simple decimation (hereinafter referred to as compressed data as appropriate), in accordance with the control of the determination section 175.
- Data obtained as a result of the correction in the correction unit 172 (like the output of the arithmetic circuit 116 in FIG. 21, this data is obtained by correcting the pixel value of the central pixel of a 5-pixel block, and is hereinafter referred to as correction data as appropriate) is supplied to the local decoding unit 173.
- The error calculation unit 174 calculates the prediction error of the predicted values from the local decoding unit 173 with respect to the original image data input thereto, in the same manner as the error calculation unit 128 described above. This prediction error is supplied to the determination unit 175 as error information.
- Based on the error information from the error calculation unit 174, the determination unit 175 determines whether it is appropriate to use the correction data output from the correction unit 172 as the compression result of the original image. When the determination unit 175 determines that the correction data output from the correction unit 172 is not appropriate as the compression result of the original image, it controls the correction unit 172 so that the compressed data is corrected further, and new correction data obtained as a result is output. When the determination unit 175 determines that the correction data is appropriate as the compression result of the original image, the correction data supplied from the correction unit 172 is supplied to the latch circuit 176 as the optimum correction data.
- The latch circuit 176 has a built-in memory 176A, and the memory 176A stores the optimum correction data supplied from the correction unit 172. Further, the latch circuit 176 reads out, from the optimum correction data stored in the memory 176A, the data corresponding to the central pixel of the block read from the memory 177A of the blocking circuit 177, and supplies it to the memory 180. When the correction data for one frame has been stored in the memory 176A, the latch circuit 176 outputs a control signal indicating this to the blocking circuit 177. As in the case of the optimal correction data calculation section 170, the blocking circuit 177 is supplied with the learning image in units of one frame. The blocking circuit 177 has a built-in memory 177A, and the memory 177A stores the learning image supplied thereto.
- Upon receiving the control signal from the latch circuit 176, the blocking circuit 177 divides the learning image stored in the memory 177A into blocks consisting of 5 pixels, as in the case of the blocking circuit 111 in FIG. 21.
- the blocking circuit 177 supplies a control signal indicating the position of the block to the latch circuit 176.
- Based on this control signal, the latch circuit 176 recognizes the 5-pixel block read from the memory 177A and, as described above, reads the optimum correction data corresponding to the central pixel of that block from the memory 176A. That is, a block of five pixels and the optimum correction data corresponding to that block are supplied to the memory 180 simultaneously.
- The ADRC processing circuit 178 and the class classification circuit 179 are configured in the same manner as the ADRC processing circuit 112 and the class classification circuit 113 of FIG. 21, respectively. The class information about the block from the blocking circuit 177, output from the class classification circuit 179, is supplied to the memory 180 as an address.
- The memory 180 stores, at the address corresponding to the class information supplied from the class classification circuit 179, the optimum correction data supplied from the latch circuit 176 and the block supplied from the blocking circuit 177 in association with each other.
- The memory 180 can store a plurality of pieces of information at one address, and can thereby store a plurality of sets of optimum correction data and blocks corresponding to a given piece of class information.
- The arithmetic circuit 181 reads the five pixels y1, y2, y3, y4, and y5 constituting each 5-pixel block of the learning image stored in the memory 180, together with the optimum correction data associated with the block, obtains the mapping coefficients k1 to k5 for each class, and supplies them to the memory 182.
- The memory 182 stores the mapping coefficients k1 to k5 for each class supplied from the arithmetic circuit 181 at the addresses corresponding to the classes.
- When the learning image is input, it is stored in the memory 177A of the blocking circuit 177 and supplied to the optimal correction data calculation unit 170. Upon receiving the learning image, the optimal correction data calculation unit 170 calculates the optimum correction data for the learning image in step S101.
- The processing of this step S101 is the same as the processing of the flowchart described above. That is, first, in step S1, the compression unit 171 generates compressed data by thinning out the learning image to 1/2, and outputs it to the local decoding unit 173 through the correction unit 172, at first without correction. In step S2, the local decoding unit 173 calculates the predicted values of the original image (performs local decoding) based on the correction data from the correction unit 172 (at first, as described above, the compressed data obtained by simply thinning out the image data). These predicted values are supplied to the error calculation unit 174.
- In step S3, the error calculation unit 174 calculates the prediction error of the predicted values from the local decoding unit 173 with respect to the original image data, and supplies it to the determination unit 175 as error information.
- In step S4, based on the error information, the determination unit 175 determines whether it is appropriate to use the correction data output by the correction unit 172 as the compression result of the original image.
- That is, in step S4, it is determined whether the error information is equal to or less than a predetermined threshold ε. If it is determined in step S4 that the error information is not equal to or less than the predetermined threshold ε, it is recognized that it is not appropriate to use the correction data output by the correction unit 172 as the compression result of the original image. The process then proceeds to step S5, where the determination section 175 controls the correction section 172, thereby correcting the compressed data output from the compression section 171.
- The correction unit 172 corrects the compressed data by changing the correction amount (correction value Δ) according to the control of the determination unit 175, and outputs the resulting correction data to the local decoding unit 173. The process then returns to step S2, and thereafter the same processing is repeated.
- the correction of the compressed data can be performed, for example, in the same manner as the change of the mapping coefficient described with reference to FIG.
- On the other hand, if it is determined in step S4 that the error information is equal to or less than the predetermined threshold ε, it is recognized that it is appropriate to use the correction data output by the correction unit 172 as the compression result of the original image.
- In this case, the determination unit 175 outputs the correction data obtained when the error information became equal to or less than the predetermined threshold ε from the correction unit 172 to the latch circuit 176 as the optimum correction data, stores it in the built-in memory 176A, and the process returns.
- In this way, the correction data obtained by correcting the compressed data is stored in the memory 176A as the optimum correction data.
- Since the optimum correction data makes the error information equal to or less than the predetermined threshold ε, calculating predicted values using the optimum correction data yields an image substantially the same as the original image.
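Steps S1 to S5 form a closed loop: thin the image, locally decode it, measure the prediction error, and keep correcting the compressed data until the error falls to the threshold ε or below. A minimal sketch, assuming a toy pixel-repetition decoder in place of the local decoding unit 173 and a greedy ±Δ correction rule (both are illustrative assumptions, not the patented method):

```python
def local_decode(corrected, n):
    """Toy stand-in for the local decoding unit: rebuild n pixels by
    repeating each corrected pixel twice (pixel repetition)."""
    out = []
    for c in corrected:
        out.extend([c, c])
    return out[:n]

def optimal_correction(original, epsilon, delta=1.0, max_iter=100):
    """Sketch of steps S1-S5: thin to 1/2 (S1), locally decode (S2),
    measure the squared prediction error (S3), and greedily nudge the
    compressed data by +/-delta (S5) until the error is <= epsilon (S4)
    or no nudge helps any more."""
    corrected = list(original[::2])            # S1: simple 1/2 thinning

    def error_of(data):                        # S2 + S3
        pred = local_decode(data, len(original))
        return sum((p - o) ** 2 for p, o in zip(pred, original))

    err = error_of(corrected)
    for _ in range(max_iter):
        if err <= epsilon:                     # S4: good enough -> optimum data
            break
        improved = False
        for i in range(len(corrected)):        # S5: correct the compressed data
            for step in (delta, -delta):
                trial = corrected[:]
                trial[i] += step
                e = error_of(trial)
                if e < err:
                    corrected, err = trial, e
                    improved = True
        if not improved:                       # stuck: return the best found
            break
    return corrected, err
```

With original data [0, 2, 4, 6] and ε = 4, the loop nudges the thinned data [0, 4] to [1, 5], which this toy decoder cannot improve further.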
- As described above, the latch circuit 176 outputs the control signal to the blocking circuit 177 when the optimum correction data for one frame has been stored in the memory 176A.
- Upon receiving this control signal, the blocking circuit 177 divides the learning image stored in the memory 177A into blocks composed of five pixels in step S102. Then, the blocking circuit 177 reads the blocks of the learning image stored in the memory 177A and supplies them to the ADRC processing circuit 178 and the memory 180.
- At the same time, the blocking circuit 177 supplies a control signal indicating the position of each block to the latch circuit 176. In response to the control signal, the latch circuit 176 recognizes the 5-pixel block read from the memory 177A, reads the optimum correction data corresponding to the central pixel of that block, and supplies it to the memory 180.
- In step S103, the block from the blocking circuit 177 is subjected to ADRC processing in the ADRC processing circuit 178, and the block is then classified in the class classification circuit 179.
- This classification result is supplied to the memory 180 as an address.
- In step S104, the optimum correction data supplied from the latch circuit 176 and the block (learning data) supplied from the blocking circuit 177 are stored in association with each other at the address of the memory 180 corresponding to the class information supplied from the class classification circuit 179.
- In step S105, it is determined whether the blocks and the optimum correction data for one frame have been stored in the memory 180. If it is determined in step S105 that they have not yet been stored, the next block is read from the blocking circuit 177 and the optimum correction data corresponding to that block is read from the latch circuit 176; the process then returns to step S103, and the processing from step S103 onward is repeated.
- If it is determined in step S105 that the blocks and the optimum correction data for one frame have been stored in the memory 180, the process proceeds to step S106, where it is determined whether the processing has been completed for all the learning images. If it is determined in step S106 that the processing for all the learning images has not been completed, the process returns to step S101, and the processing from step S101 onward is repeated for the next learning image.
- If it is determined in step S106 that the processing for all the learning images has been completed, the process proceeds to step S107, where the arithmetic circuit 181 reads the optimum correction data and blocks stored in the memory 180 for each class and establishes from them normal equations as shown in equation (7). Further, in step S108, the arithmetic circuit 181 solves the normal equations to calculate, for each class, the mapping coefficients that minimize the error. The mapping coefficients are supplied to and stored in the memory 182 in step S109, and the processing ends.
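Steps S107 and S108 amount to a per-class least-squares fit: accumulate the normal equations from the stored blocks and their optimum correction data, then solve for the coefficients. A small self-contained sketch (the function name and the plain Gaussian-elimination solver are illustrative choices, not the circuit's implementation):

```python
def mapping_coeffs(blocks, targets):
    """Least-squares mapping coefficients for one class: build the
    normal equations (X^T X) k = X^T t from the stored pixel blocks
    and their optimum correction data, then solve by Gaussian
    elimination with partial pivoting."""
    n = len(blocks[0])
    A = [[sum(b[i] * b[j] for b in blocks) for j in range(n)] for i in range(n)]
    rhs = [sum(b[i] * t for b, t in zip(blocks, targets)) for i in range(n)]
    for col in range(n):                       # forward elimination
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            rhs[r] -= f * rhs[col]
    k = [0.0] * n                              # back substitution
    for r in range(n - 1, -1, -1):
        k[r] = (rhs[r] - sum(A[r][c] * k[c] for c in range(r + 1, n))) / A[r][r]
    return k
```

For exactly-fittable data the solver recovers the underlying coefficients; with more blocks than coefficients it returns the error-minimizing set, which is what the normal equations guarantee.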
- The mapping coefficients stored in the memory 182 in this manner are stored in the mapping coefficient memory 114 in FIG. 21, and images can be coded using them.
- FIG. 29 illustrates a configuration example of the reception device 4 corresponding to the transmission device of FIG.
- In the receiving device 4, the encoded data recorded on the recording medium 2 is reproduced, or the encoded data transmitted via the transmission path 3 is received, and is supplied to the decoding unit 192.
- the decoding unit 192 is composed of a class classification blocking circuit 193 to a prediction circuit 198 corresponding to the class classification blocking circuit 141 to the prediction circuit 146 in the local decoding unit 127 shown in FIG. 25, respectively.
- In the decoding unit 192, predicted values are obtained from the correction data in the same manner as in the local decoding unit 127 in FIG. 25, and an image composed of the predicted values is output as the decoded image.
- Since the correction data makes the error information equal to or less than the predetermined threshold, the receiving device 4 can obtain an image substantially the same as the original image.
- A decoded image can also be obtained by performing normal interpolation with a device that decodes a thinned-out image by interpolation, even without using the receiving device 4 shown in FIG. 29.
- the decoded image obtained in this case has deteriorated image quality (resolution).
- In the above, the pixel data is represented using the R, G, and B components.
- Other component signals include a luminance signal Y and chrominance signals, each represented by the following equations.
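The patent's own conversion equations are not reproduced in this excerpt; as an illustration only, a common luminance/chrominance form (the ITU-R BT.601 matrix) looks like this:

```python
def rgb_to_ycbcr(r, g, b):
    """Example luminance/chrominance conversion using the standard
    ITU-R BT.601 coefficients. This is an illustrative assumption:
    the excerpt does not show which equations the patent uses."""
    y  =  0.299 * r + 0.587 * g + 0.114 * b   # luminance
    cb = -0.169 * r - 0.331 * g + 0.500 * b   # blue-difference chrominance
    cr =  0.500 * r - 0.419 * g - 0.081 * b   # red-difference chrominance
    return y, cb, cr
```

For a neutral gray (r = g = b) both chrominance components vanish, which is the point of separating luminance from chrominance before prediction.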
- The pixel data may also be represented by additive color mixture in which R, G, and B are each 8 bits.
- According to the present invention, the first component signal of the second image is predicted from the first component signal and the second component signal of the first image, and the second component signal of the second image is predicted from the first component signal and the second component signal of the first image. As a result, prediction processing can be performed efficiently and with high accuracy.
- Further, since an image consisting of a plurality of pixel data represented by vectors in a color space is predicted, the image can be encoded efficiently and in such a manner that it can be decoded with high accuracy.
- The image processing apparatus and method and the image encoding apparatus and method according to the present invention are particularly suitable for use in image processing and image encoding apparatuses and methods that perform prediction efficiently and with high accuracy.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
- Compression, Expansion, Code Conversion, And Decoders (AREA)
- Color Television Systems (AREA)
- Compression Of Band Width Or Redundancy In Fax (AREA)
Description
Claims
Priority Applications (3)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| EP97930821A EP0851690A4 (en) | 1996-07-17 | 1997-07-17 | DEVICE AND METHOD FOR IMAGE PROCESSING AND DEVICE AND METHOD FOR IMAGE CODING |
| KR10-1998-0701960A KR100518100B1 (ko) | 1996-07-17 | 1997-07-17 | 화상처리장치및방법,및화상부호화장치및방법 |
| US09/043,359 US6266454B1 (en) | 1996-07-17 | 1997-07-17 | Device and method for processing, image and device and method for encoding image |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP8/206625 | 1996-07-17 | ||
| JP20662596A JP3748088B2 (ja) | 1996-07-17 | 1996-07-17 | 画像処理装置および方法、並びに学習装置および方法 |
Related Child Applications (2)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US09/043,359 A-371-Of-International US6266454B1 (en) | 1996-07-17 | 1997-07-17 | Device and method for processing, image and device and method for encoding image |
| US09/776,025 Continuation US6757435B2 (en) | 1996-07-17 | 2001-02-02 | Apparatus for and method of processing image and apparatus for and method of encoding image |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO1998003020A1 true WO1998003020A1 (en) | 1998-01-22 |
Family
ID=16526477
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/JP1997/002481 Ceased WO1998003020A1 (en) | 1996-07-17 | 1997-07-17 | Device and method for processing image and device and method for encoding image |
Country Status (5)
| Country | Link |
|---|---|
| US (3) | US6266454B1 (ja) |
| EP (1) | EP0851690A4 (ja) |
| JP (1) | JP3748088B2 (ja) |
| KR (1) | KR100518100B1 (ja) |
| WO (1) | WO1998003020A1 (ja) |
Cited By (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN104079899A (zh) * | 2013-03-29 | 2014-10-01 | 索尼公司 | 图像处理装置、图像处理方法和程序 |
Families Citing this family (36)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2000308021A (ja) * | 1999-04-20 | 2000-11-02 | Niigata Seimitsu Kk | 画像処理回路 |
| US6549672B1 (en) * | 1999-06-29 | 2003-04-15 | Sony Corporation | Method and apparatus for recovery of encoded data using central value |
| US6337122B1 (en) * | 2000-01-11 | 2002-01-08 | Micron Technology, Inc. | Stereolithographically marked semiconductors devices and methods |
| US9503789B2 (en) * | 2000-08-03 | 2016-11-22 | Cox Communications, Inc. | Customized user interface generation in a video on demand environment |
| US7171059B1 (en) * | 2002-05-23 | 2007-01-30 | Pixelworks, Inc. | Method and apparatus for two-dimensional image scaling |
| US7324709B1 (en) * | 2001-07-13 | 2008-01-29 | Pixelworks, Inc. | Method and apparatus for two-dimensional image scaling |
| US9113846B2 (en) * | 2001-07-26 | 2015-08-25 | Given Imaging Ltd. | In-vivo imaging device providing data compression |
| US20050187433A1 (en) * | 2001-07-26 | 2005-08-25 | Given Imaging Ltd. | In-vivo imaging device providing constant bit rate transmission |
| TW559737B (en) * | 2001-11-02 | 2003-11-01 | Ind Tech Res Inst | Color conversion method for preference color |
| US7453936B2 (en) * | 2001-11-09 | 2008-11-18 | Sony Corporation | Transmitting apparatus and method, receiving apparatus and method, program and recording medium, and transmitting/receiving system |
| JP4055203B2 (ja) * | 2002-09-12 | 2008-03-05 | ソニー株式会社 | データ処理装置およびデータ処理方法、記録媒体、並びにプログラム |
| US7212676B2 (en) * | 2002-12-30 | 2007-05-01 | Intel Corporation | Match MSB digital image compression |
| DE60315407D1 (de) * | 2003-02-06 | 2007-09-20 | St Microelectronics Srl | Verfahren und Vorrichtung zum Komprimierung von Texturen |
| US7382937B2 (en) * | 2003-03-07 | 2008-06-03 | Hewlett-Packard Development Company, L.P. | Method and apparatus for re-constructing high-resolution images |
| US7758896B2 (en) * | 2004-04-16 | 2010-07-20 | University Of Massachusetts | Porous calcium phosphate networks for synthetic bone material |
| JP2006259663A (ja) * | 2004-06-30 | 2006-09-28 | Canon Inc | 画像処理方法、画像表示装置、映像受信表示装置および画像処理装置 |
| US6995526B1 (en) * | 2004-08-09 | 2006-02-07 | National Semiconductor Corporation | Digitally controlled vertical C linearity correction with constant start and end points without using an AGC |
| DE102005016827A1 (de) | 2005-04-12 | 2006-10-19 | Siemens Ag | Adaptive Interpolation bei der Bild- oder Videokodierung |
| US20060262210A1 (en) * | 2005-05-19 | 2006-11-23 | Micron Technology, Inc. | Method and apparatus for column-wise suppression of noise in an imager |
| JP4351658B2 (ja) * | 2005-07-21 | 2009-10-28 | マイクロン テクノロジー, インク. | メモリ容量低減化方法、メモリ容量低減化雑音低減化回路及びメモリ容量低減化装置 |
| US8369629B2 (en) * | 2006-01-23 | 2013-02-05 | Telefonaktiebolaget L M Ericsson (Publ) | Image processing using resolution numbers to determine additional component values |
| US8605797B2 (en) * | 2006-02-15 | 2013-12-10 | Samsung Electronics Co., Ltd. | Method and system for partitioning and encoding of uncompressed video for transmission over wireless medium |
| JP4169768B2 (ja) * | 2006-02-24 | 2008-10-22 | 三菱電機株式会社 | 画像符号化装置、画像処理装置、画像符号化方法、及び画像処理方法 |
| JP2007312126A (ja) * | 2006-05-18 | 2007-11-29 | Toshiba Corp | 画像処理回路 |
| US8331663B2 (en) * | 2007-06-28 | 2012-12-11 | Qualcomm Incorporated | Efficient image compression scheme to minimize storage and bus bandwidth requirements |
| US8842739B2 (en) * | 2007-07-20 | 2014-09-23 | Samsung Electronics Co., Ltd. | Method and system for communication of uncompressed video information in wireless systems |
| US8243823B2 (en) * | 2007-08-29 | 2012-08-14 | Samsung Electronics Co., Ltd. | Method and system for wireless communication of uncompressed video information |
| US20090323810A1 (en) * | 2008-06-26 | 2009-12-31 | Mediatek Inc. | Video encoding apparatuses and methods with decoupled data dependency |
| US8270466B2 (en) * | 2008-10-03 | 2012-09-18 | Sony Corporation | Adaptive decimation filter |
| US8249160B2 (en) | 2008-10-03 | 2012-08-21 | Sony Corporation | Extracting multiple classified adaptive decimation filters |
| RU2420021C2 (ru) * | 2009-03-24 | 2011-05-27 | Корпорация "САМСУНГ ЭЛЕКТРОНИКС Ко., Лтд." | Способ сжатия изображений и видеопоследовательностей |
| US9369759B2 (en) * | 2009-04-15 | 2016-06-14 | Samsung Electronics Co., Ltd. | Method and system for progressive rate adaptation for uncompressed video communication in wireless systems |
| US9176928B2 (en) * | 2009-07-07 | 2015-11-03 | L3 Communication Integrated Systems, L.P. | System for convergence evaluation for stationary method iterative linear solvers |
| UA109312C2 (uk) | 2011-03-04 | 2015-08-10 | Імпульсно-кодова модуляція з квантуванням при кодуванні відеоінформації | |
| WO2018225056A1 (en) * | 2017-06-04 | 2018-12-13 | Killer Whale L.T.D | Toilet cleaning devices systems and methods |
| US12198391B2 (en) * | 2021-05-19 | 2025-01-14 | Zibra Al, Inc. | Information compression method and apparatus |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JPS61205093A (ja) * | 1985-03-08 | 1986-09-11 | Mitsubishi Electric Corp | カラ−静止画像高能率符号化装置 |
| JPS639390A (ja) * | 1986-06-30 | 1988-01-16 | アメリカン テレフオン アンド テレグラフ カムパニ− | デジタルビデオ伝送システム |
| JPS63269894A (ja) * | 1987-04-28 | 1988-11-08 | Sony Corp | カラ−テレビジヨン信号の高能率符号化装置 |
| JPH0461591A (ja) * | 1990-06-29 | 1992-02-27 | Sony Corp | 画像信号の高能率符号化装置及び符号化方法 |
| JPH0779453A (ja) * | 1993-09-07 | 1995-03-20 | Sony Corp | 画像情報符号化装置および復号装置 |
Family Cites Families (10)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| FR2575351B1 (fr) * | 1984-12-21 | 1988-05-13 | Thomson Csf | Procede adaptatif de codage et de decodage d'une suite d'images par transformation, et dispositifs pour la mise en oeuvre de ce procede |
| EP0632656A3 (en) | 1985-02-28 | 1995-03-08 | Mitsubishi Electric Corp | Inter-frame encoder device for adaptive vector quantification. |
| GB2231460B (en) * | 1989-05-04 | 1993-06-30 | Sony Corp | Spatial interpolation of digital video signals |
| DE69033411T2 (de) * | 1989-09-05 | 2008-10-09 | Canon K.K. | Farbbildkodierung |
| US5631979A (en) * | 1992-10-26 | 1997-05-20 | Eastman Kodak Company | Pixel value estimation technique using non-linear prediction |
| JPH06152970A (ja) * | 1992-11-02 | 1994-05-31 | Fujitsu Ltd | 画像圧縮方法及び画像処理装置 |
| KR100360206B1 (ko) * | 1992-12-10 | 2003-02-11 | 소니 가부시끼 가이샤 | 화상신호변환장치 |
| US5663764A (en) * | 1993-09-30 | 1997-09-02 | Sony Corporation | Hierarchical encoding and decoding apparatus for a digital image signal |
| CN1136381A (zh) * | 1994-08-31 | 1996-11-20 | 索尼公司 | 摄象装置 |
| GB2293938B (en) * | 1994-10-04 | 1999-01-20 | Winbond Electronics Corp | Apparatus for digital video format conversion |
-
1996
- 1996-07-17 JP JP20662596A patent/JP3748088B2/ja not_active Expired - Fee Related
-
1997
- 1997-07-17 KR KR10-1998-0701960A patent/KR100518100B1/ko not_active Expired - Fee Related
- 1997-07-17 US US09/043,359 patent/US6266454B1/en not_active Expired - Lifetime
- 1997-07-17 WO PCT/JP1997/002481 patent/WO1998003020A1/ja not_active Ceased
- 1997-07-17 EP EP97930821A patent/EP0851690A4/en not_active Ceased
-
2001
- 2001-02-02 US US09/776,025 patent/US6757435B2/en not_active Expired - Fee Related
-
2004
- 2004-01-21 US US10/761,534 patent/US7155057B2/en not_active Expired - Fee Related
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN104079899A (zh) * | 2013-03-29 | 2014-10-01 | 索尼公司 | 图像处理装置、图像处理方法和程序 |
| CN104079899B (zh) * | 2013-03-29 | 2016-08-17 | 索尼公司 | 图像处理装置、图像处理方法和程序 |
Also Published As
| Publication number | Publication date |
|---|---|
| US20050018916A1 (en) | 2005-01-27 |
| US6266454B1 (en) | 2001-07-24 |
| JPH1032837A (ja) | 1998-02-03 |
| US20010031091A1 (en) | 2001-10-18 |
| EP0851690A1 (en) | 1998-07-01 |
| EP0851690A4 (en) | 2002-05-29 |
| US6757435B2 (en) | 2004-06-29 |
| KR100518100B1 (ko) | 2005-12-30 |
| US7155057B2 (en) | 2006-12-26 |
| JP3748088B2 (ja) | 2006-02-22 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| WO1998003020A1 (en) | Device and method for processing image and device and method for encoding image | |
| US5812787A (en) | Video coding scheme with foreground/background separation | |
| JP4494789B2 (ja) | 動的フィルタのコーディング | |
| US5703966A (en) | Block selection using motion estimation error | |
| KR100538731B1 (ko) | 화소블록의클래스정보에대응되는매핑계수를이용하는화상부호화및화상복호화 | |
| JPH05191653A (ja) | カラー画像の符号化と復号化の方法およびそれを用いた符号器と復号器 | |
| KR100574732B1 (ko) | 영상코딩장치,영상코딩방법,영상디코딩방법,영상디코딩장치,영상데이터전송방법및기록매체 | |
| WO1998030028A1 (fr) | Dispositif de codage d'image, procede de codage d'image, dispositif de decodage d'image, procede de decodage d'image et support d'enregistrement | |
| KR19990087249A (ko) | 화상 신호 부호화 장치, 화상 신호 부호화 방법,화상 신호 복호 장치, 화상 신호 복호 방법 및 기록 매체 | |
| JP3844030B2 (ja) | 画像信号符号化装置および画像信号符号化方法、画像信号復号装置および画像信号復号方法 | |
| JP3844031B2 (ja) | 画像符号化装置および画像符号化方法、並びに、画像復号装置および画像復号方法 | |
| JPH10191353A (ja) | 画像符号化装置および画像符号化方法、画像復号化装置および画像復号化方法、並びに記録媒体 | |
| JP3912558B2 (ja) | 画像符号化装置および画像符号化方法、並びに記録媒体 | |
| JP4566877B2 (ja) | 画像処理装置および方法 | |
| JP4807349B2 (ja) | 学習装置および方法 | |
| JP3867697B2 (ja) | 画像信号生成装置および生成方法 | |
| JP3906770B2 (ja) | ディジタル画像信号処理装置および方法 | |
| JP4534951B2 (ja) | 画像符号化装置および画像符号化方法、画像処理システムおよび画像処理方法、伝送方法、並びに記録媒体 | |
| JP3617080B2 (ja) | 信号処理装置及び信号処理方法 | |
| JP4582416B2 (ja) | 画像符号化装置および画像符号化方法 | |
| JP3963184B2 (ja) | 信号処理装置及び信号処理方法 | |
| JP3844520B2 (ja) | 信号処理装置及び信号処理方法 | |
| JP3906832B2 (ja) | 画像信号処理装置および処理方法 | |
| JP3952326B2 (ja) | 画像符号化装置および画像符号化方法、画像処理システムおよび画像処理方法、伝送方法、並びに記録媒体 | |
| JP4487900B2 (ja) | 画像処理システム |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| AK | Designated states |
Kind code of ref document: A1 Designated state(s): KR US |
|
| AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): AT BE CH DE DK ES FI FR GB GR IE IT LU MC NL PT SE |
|
| WWE | Wipo information: entry into national phase |
Ref document number: 1019980701960 Country of ref document: KR |
|
| WWE | Wipo information: entry into national phase |
Ref document number: 1997930821 Country of ref document: EP |
|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
| WWE | Wipo information: entry into national phase |
Ref document number: 09043359 Country of ref document: US |
|
| WWP | Wipo information: published in national office |
Ref document number: 1997930821 Country of ref document: EP |
|
| WWP | Wipo information: published in national office |
Ref document number: 1019980701960 Country of ref document: KR |
|
| WWG | Wipo information: grant in national office |
Ref document number: 1019980701960 Country of ref document: KR |