WO2010123057A1 - Image processing apparatus and method - Google Patents
Image processing apparatus and method
- Publication number
- WO2010123057A1 (PCT/JP2010/057128)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- prediction
- adjacent pixel
- pixel
- unit
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
- H—ELECTRICITY; H04—ELECTRIC COMMUNICATION TECHNIQUE; H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION; H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals:
- H04N19/577—Motion compensation with bidirectional frame interpolation, i.e. using B-pictures
- H04N19/11—Selection of coding mode or of prediction mode among a plurality of spatial predictive coding modes
- H04N19/147—Data rate or code amount at the encoder output according to rate distortion criteria
- H04N19/176—Adaptive coding characterised by the coding unit, the unit being an image region, the region being a block, e.g. a macroblock
- H04N19/186—Adaptive coding characterised by the coding unit, the unit being a colour or a chrominance component
- H04N19/19—Adaptive coding using optimisation based on Lagrange multipliers
- H04N19/46—Embedding additional information in the video signal during the compression process
- H04N19/57—Motion estimation characterised by a search window with variable size or shape
- H04N19/593—Predictive coding involving spatial prediction techniques
- H04N19/61—Transform coding in combination with predictive coding
Definitions
- the present invention relates to an image processing apparatus and method, and more particularly, to an image processing apparatus and method capable of performing secondary prediction even when adjacent pixels adjacent to a reference block exist outside the image frame.
- MPEG2 (ISO / IEC 13818-2) is defined as a general-purpose image encoding system, and is a standard that covers both interlaced scanning images and progressive scanning images, as well as standard resolution images and high-definition images.
- MPEG2 is currently widely used in a wide range of applications for professional and consumer applications.
- A code amount (bit rate) of 4 to 8 Mbps is assigned to an interlaced scanned image having a standard resolution of 720×480 pixels.
- A code amount (bit rate) of 18 to 22 Mbps is assigned to a high-resolution interlaced scanned image of 1920×1088 pixels.
- MPEG2 was mainly intended for high-quality encoding suitable for broadcasting, but it did not support encoding at a code amount (bit rate) lower than that of MPEG1, that is, at a higher compression rate. With the widespread use of mobile terminals, the need for such an encoding system is expected to increase, and the MPEG4 encoding system was standardized accordingly. Its image coding system was approved as the international standard ISO/IEC 14496-2 in December 1998.
- The standardization of H.26L (ITU-T Q6/16 VCEG) has been in progress.
- H.26L is known to achieve higher encoding efficiency than conventional encoding schemes such as MPEG2 and MPEG4, although it requires a larger amount of calculation for encoding and decoding.
- Based on H.26L, standardization that incorporates functions not supported by H.26L to achieve still higher coding efficiency has been carried out as the Joint Model of Enhanced-Compression Video Coding.
- This was standardized internationally under the names H.264 and MPEG-4 Part 10 (Advanced Video Coding, hereinafter referred to as H.264/AVC).
- Non-Patent Document 1 proposes a secondary prediction method that further improves coding efficiency in inter prediction. This secondary prediction method will be described with reference to FIG.
- a target frame and a reference frame are shown, and a target block A is shown in the target frame.
- Difference information (residual) between the target block A and the block B associated with the target block A by the motion vector mv is calculated.
- The address of each pixel in the adjacent pixel group A′ is obtained from the upper-left address (x, y) of the target block A. Similarly, the address of each pixel in the adjacent pixel group B′ is obtained from the upper-left address (x+mv_x, y+mv_y) of the block B associated with the target block A by the motion vector mv = (mv_x, mv_y). Using these addresses, difference information between the adjacent pixel group A′ and the adjacent pixel group B′ is calculated.
- Intra prediction in the H.264/AVC format is then performed between the calculated difference information for the target block and the difference information for the adjacent pixels, whereby secondary difference information is generated.
- The generated secondary difference information is orthogonally transformed and quantized, encoded together with the compressed image, and sent to the decoding side.
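- As a concrete illustration of the address computation described above, the following Python sketch derives the addresses of the adjacent pixel groups A′ and B′ from the upper-left address (x, y) of the target block and the motion vector mv = (mv_x, mv_y); the function name and the example offsets are illustrative, not part of the proposed apparatus.

    def adjacent_addresses(x, y, mv_x, mv_y, offsets):
        # offsets are relative addresses of adjacent pixels, e.g. (-1, -1)
        a_prime = [(x + dx, y + dy) for (dx, dy) in offsets]                # group A'
        b_prime = [(x + mv_x + dx, y + mv_y + dy) for (dx, dy) in offsets]  # group B'
        return a_prime, b_prime

    # example: target block at (16, 8) with mv = (3, -2)
    a_adj, b_adj = adjacent_addresses(16, 8, 3, -2, [(-1, -1), (0, -1), (-1, 0)])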
- The target block A always exists within the image frame of the target frame, but whether the reference block B exists within the image frame of the reference frame is determined by the address of the target block A and the value of the motion vector.
- For example, motion vectors mv1 and mv2 are detected for the target block A with respect to the reference frame.
- A part of the reference block B1 associated with the target block A by the motion vector mv1 protrudes outside the lower part of the image frame.
- A part of the adjacent pixel group B1′ adjacent to the reference block B1 likewise protrudes outside the lower part of the image frame.
- The reference block B2 associated with the target block A by the motion vector mv2 exists within the image frame, but a part of the adjacent pixel group B2′ adjacent to the reference block B2 protrudes outside the right part of the image frame.
- In the secondary prediction, intra prediction of the H.264/AVC format is diverted.
- However, intra prediction of the H.264/AVC format does not assume that adjacent pixels may lie outside the image frame and provides no determination of whether such adjacent pixels can be used; therefore, when adjacent pixels adjacent to the reference block exist outside the image frame, intra prediction of the H.264/AVC format could not be diverted.
- the present invention has been made in view of such a situation, and makes it possible to perform secondary prediction even when adjacent pixels adjacent to a reference block exist outside the image frame.
- The image processing apparatus according to a first aspect of the present invention includes: a determination unit that determines, using the relative address of a target adjacent pixel adjacent to a target block in a target frame, whether a reference adjacent pixel adjacent to a reference block in a reference frame exists within the image frame of the reference frame; an end point processing unit that performs end point processing on the reference adjacent pixel when the determination unit determines that the reference adjacent pixel does not exist within the image frame; a secondary prediction unit that generates secondary difference information by performing prediction between the difference information between the target block and the reference block and the difference information between the target adjacent pixel and the reference adjacent pixel subjected to end point processing by the end point processing unit; and an encoding unit that encodes the secondary difference information generated by the secondary prediction unit.
- The image processing apparatus may further include a calculation unit that calculates the relative address (x+dx+Δx, y+dy+Δy) of the reference adjacent pixel from the address (x, y) of the target block, the motion vector information (dx, dy) by which the target block refers to the reference block, and the relative address (Δx, Δy) of the target adjacent pixel; the determination unit can then determine whether the relative address (x+dx+Δx, y+dy+Δy) of the reference adjacent pixel calculated by the calculation unit exists within the image frame.
- When the pixel value is represented by n bits and x+dx+Δx < 0 or y+dy+Δy < 0 holds, the end point processing unit can perform end point processing that sets the pixel value of that reference adjacent pixel to 2^(n-1).
- When WIDTH is the number of pixels in the horizontal direction of the image frame and x+dx+Δx > WIDTH-1 holds, the end point processing unit can perform end point processing that uses the pixel value indicated by the address (WIDTH-1, y+dy+Δy) as the pixel value of the reference adjacent pixel.
- When HEIGHT is the number of pixels in the vertical direction of the image frame and y+dy+Δy > HEIGHT-1 holds, the end point processing unit can perform end point processing that uses the pixel value indicated by the address (x+dx+Δx, HEIGHT-1) as the pixel value of the reference adjacent pixel.
- When WIDTH is the number of pixels in the horizontal direction of the image frame, HEIGHT is the number of pixels in the vertical direction of the image frame, and both x+dx+Δx > WIDTH-1 and y+dy+Δy > HEIGHT-1 hold, the end point processing unit can perform end point processing that uses the pixel value indicated by the address (WIDTH-1, HEIGHT-1) as the pixel value of the reference adjacent pixel.
- The end point processing unit can perform end point processing that generates, for a reference adjacent pixel that does not exist within the image frame, a pixel value by mirror image processing symmetric about the boundary of the image frame.
- The secondary prediction unit may include: an intra prediction unit that performs prediction using the difference information between the target adjacent pixel and the reference adjacent pixel subjected to end point processing by the end point processing unit and generates an intra prediction image for the target block; and a secondary difference generation unit that generates the secondary difference information by subtracting the intra prediction image generated by the intra prediction unit from the difference information between the target block and the reference block.
- When the determination unit determines that the reference adjacent pixel exists within the image frame, the secondary prediction unit can perform prediction between the difference information between the target block and the reference block and the difference information between the target adjacent pixel and the reference adjacent pixel.
- In the image processing method according to the first aspect of the present invention, the image processing apparatus determines, using the relative address of a target adjacent pixel adjacent to a target block in a target frame, whether a reference adjacent pixel adjacent to a reference block in a reference frame exists within the image frame of the reference frame; performs end point processing on the reference adjacent pixel when it is determined that the reference adjacent pixel does not exist within the image frame; generates secondary difference information by performing prediction between the difference information between the target block and the reference block and the difference information between the target adjacent pixel and the reference adjacent pixel on which the end point processing has been performed; and encodes the generated secondary difference information.
- The image processing apparatus according to a second aspect of the present invention includes: a decoding unit that decodes the image of a target block in an encoded target frame; a determination unit that determines, using the relative address of a target adjacent pixel adjacent to the target block, whether a reference adjacent pixel adjacent to a reference block in a reference frame exists within the image frame of the reference frame; an end point processing unit that performs end point processing on the reference adjacent pixel when the determination unit determines that the reference adjacent pixel does not exist within the image frame; a secondary prediction unit that generates a prediction image by performing secondary prediction using the difference information between the target adjacent pixel and the reference adjacent pixel subjected to end point processing by the end point processing unit; and a computing unit that adds the image of the target block, the prediction image generated by the secondary prediction unit, and the image of the reference block to generate a decoded image of the target block.
- The image processing apparatus may further include a calculation unit that calculates the relative address (x+dx+Δx, y+dy+Δy) of the reference adjacent pixel from the address (x, y) of the target block, the motion vector information (dx, dy) by which the target block refers to the reference block, and the relative address (Δx, Δy) of the target adjacent pixel; the determination unit can then determine whether the relative address (x+dx+Δx, y+dy+Δy) of the reference adjacent pixel calculated by the calculation unit exists within the image frame.
- When the pixel value is represented by n bits and x+dx+Δx < 0 or y+dy+Δy < 0 holds, the end point processing unit can perform end point processing that sets the pixel value of that reference adjacent pixel to 2^(n-1).
- When WIDTH is the number of pixels in the horizontal direction of the image frame and x+dx+Δx > WIDTH-1 holds, the end point processing unit can perform end point processing that uses the pixel value indicated by the address (WIDTH-1, y+dy+Δy) as the pixel value of the reference adjacent pixel.
- When HEIGHT is the number of pixels in the vertical direction of the image frame and y+dy+Δy > HEIGHT-1 holds, the end point processing unit can perform end point processing that uses the pixel value indicated by the address (x+dx+Δx, HEIGHT-1) as the pixel value of the reference adjacent pixel.
- When WIDTH is the number of pixels in the horizontal direction of the image frame, HEIGHT is the number of pixels in the vertical direction of the image frame, and both x+dx+Δx > WIDTH-1 and y+dy+Δy > HEIGHT-1 hold, the end point processing unit can perform end point processing that uses the pixel value indicated by the address (WIDTH-1, HEIGHT-1) as the pixel value of the reference adjacent pixel.
- The end point processing unit can perform end point processing that generates, for a reference adjacent pixel that does not exist within the image frame, a pixel value by mirror image processing symmetric about the boundary of the image frame.
- The secondary prediction unit generates a prediction image by performing secondary prediction using the difference information between the target adjacent pixel and the reference adjacent pixel subjected to end point processing by the end point processing unit.
- When the determination unit determines that the reference adjacent pixel exists within the image frame, the secondary prediction unit can perform prediction using the difference information between the target adjacent pixel and the reference adjacent pixel.
- In the image processing method according to the second aspect of the present invention, the image processing apparatus decodes the image of a target block in an encoded target frame; determines, using the relative address of a target adjacent pixel adjacent to the target block, whether a reference adjacent pixel adjacent to a reference block in a reference frame exists within the image frame of the reference frame; performs end point processing on the reference adjacent pixel when it is determined that the reference adjacent pixel does not exist within the image frame; generates a prediction image by performing secondary prediction using the difference information between the target adjacent pixel and the reference adjacent pixel on which the end point processing has been performed; and adds the image of the target block, the generated prediction image, and the image of the reference block to generate a decoded image of the target block.
- In the first aspect of the present invention, whether the reference adjacent pixel adjacent to the reference block in the reference frame exists within the image frame of the reference frame is determined using the relative address of the target adjacent pixel adjacent to the target block in the target frame. When it is determined that the reference adjacent pixel does not exist within the image frame, end point processing is performed on the reference adjacent pixel, secondary difference information is generated by performing prediction between the difference information between the target block and the reference block and the difference information between the target adjacent pixel and the reference adjacent pixel on which the end point processing has been performed, and the generated secondary difference information is encoded.
- In the second aspect of the present invention, the image of the target block in the encoded target frame is decoded, and whether the reference adjacent pixel adjacent to the reference block in the reference frame exists within the image frame of the reference frame is determined using the relative address of the target adjacent pixel adjacent to the target block. When it is determined that the reference adjacent pixel does not exist within the image frame, end point processing is performed on the reference adjacent pixel, a prediction image is generated by performing secondary prediction using the difference information between the target adjacent pixel and the reference adjacent pixel on which the end point processing has been performed, and the image of the target block, the generated prediction image, and the image of the reference block are added to generate a decoded image of the target block.
- each of the above-described image processing apparatuses may be an independent apparatus, or may be an internal block constituting one image encoding apparatus or image decoding apparatus.
- an image can be encoded. Further, according to the first aspect of the present invention, it is possible to perform secondary prediction even when adjacent pixels adjacent to the reference block exist outside the image frame.
- an image can be decoded. Further, according to the second aspect of the present invention, it is possible to perform secondary prediction even when adjacent pixels adjacent to the reference block exist outside the image frame.
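- As a hedged sketch of the determination described in both aspects above (the function and variable names are illustrative, not taken from the embodiments), the in-frame check for a reference adjacent pixel can be written as follows in Python:

    def reference_adjacent_in_frame(x, y, dx, dy, delta_x, delta_y, width, height):
        # absolute address of the reference adjacent pixel (x+dx+Dx, y+dy+Dy)
        rx = x + dx + delta_x
        ry = y + dy + delta_y
        # inside the image frame of the reference frame?
        return 0 <= rx <= width - 1 and 0 <= ry <= height - 1

    # example: block at (0, 0), mv = (-2, 0), upper-left neighbour offset (-1, -1)
    print(reference_adjacent_in_frame(0, 0, -2, 0, -1, -1, 176, 144))  # False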
- FIG. 32 is a block diagram illustrating a configuration example of the secondary prediction unit in FIG. 31. It is followed by a flowchart explaining the decoding process of the image decoding apparatus of FIG. 31, a flowchart explaining the prediction process of step S138, a flowchart explaining the secondary inter prediction process of step S179 of FIG. 34, and a block diagram showing a configuration example of the hardware of a computer.
- FIG. 3 shows a configuration of an embodiment of an image encoding apparatus as an image processing apparatus to which the present invention is applied.
- This image encoding device 51 compresses and encodes an image using, for example, the H.264 and MPEG-4 Part 10 (Advanced Video Coding) format (hereinafter referred to as H.264/AVC).
- The image encoding device 51 includes an A/D conversion unit 61, a screen rearrangement buffer 62, a calculation unit 63, an orthogonal transform unit 64, a quantization unit 65, a lossless encoding unit 66, an accumulation buffer 67, an inverse quantization unit 68, an inverse orthogonal transform unit 69, a calculation unit 70, a deblocking filter 71, a frame memory 72, a switch 73, an intra prediction unit 74, a motion prediction/compensation unit 75, a secondary prediction unit 76, a reference adjacency determination unit 77, a predicted image selection unit 78, and a rate control unit 79.
- the A / D converter 61 A / D converts the input image, outputs it to the screen rearrangement buffer 62, and stores it.
- The screen rearrangement buffer 62 rearranges the frames stored in display order into the order for encoding in accordance with the GOP (Group of Pictures) structure.
- The calculation unit 63 subtracts, from the image read from the screen rearrangement buffer 62, the predicted image from the intra prediction unit 74 or from the motion prediction/compensation unit 75 selected by the predicted image selection unit 78, and outputs the difference information to the orthogonal transform unit 64.
- the orthogonal transform unit 64 subjects the difference information from the calculation unit 63 to orthogonal transform such as discrete cosine transform and Karhunen-Loeve transform, and outputs the transform coefficient.
- the quantization unit 65 quantizes the transform coefficient output from the orthogonal transform unit 64.
- the quantized transform coefficient that is the output of the quantization unit 65 is input to the lossless encoding unit 66, where lossless encoding such as variable length encoding and arithmetic encoding is performed and compressed.
- the lossless encoding unit 66 acquires information indicating intra prediction from the intra prediction unit 74 and acquires information indicating inter prediction mode from the motion prediction / compensation unit 75. Note that the information indicating intra prediction and the information indicating inter prediction are also referred to as intra prediction mode information and inter prediction mode information, respectively.
- the lossless encoding unit 66 encodes the quantized transform coefficient, encodes information indicating intra prediction, information indicating inter prediction mode, and the like, and uses it as a part of header information in the compressed image.
- the lossless encoding unit 66 supplies the encoded data to the accumulation buffer 67 for accumulation.
- In the lossless encoding unit 66, lossless encoding processing such as variable length encoding or arithmetic encoding is performed. An example of variable length coding is CAVLC (Context-Adaptive Variable Length Coding) defined in the H.264/AVC format. An example of arithmetic coding is CABAC (Context-Adaptive Binary Arithmetic Coding).
- The accumulation buffer 67 outputs the data supplied from the lossless encoding unit 66 as a compressed image encoded in the H.264/AVC format to, for example, a recording device or a transmission path (not shown) in the subsequent stage.
- the quantized transform coefficient output from the quantization unit 65 is also input to the inverse quantization unit 68, and after inverse quantization, the inverse orthogonal transform unit 69 further performs inverse orthogonal transform.
- the output subjected to the inverse orthogonal transform is added to the predicted image supplied from the predicted image selection unit 78 by the calculation unit 70, and becomes a locally decoded image.
- The deblocking filter 71 removes block distortion from the decoded image and then supplies the result to the frame memory 72 for accumulation.
- the image before the deblocking filter processing by the deblocking filter 71 is also supplied to the frame memory 72 and accumulated.
- the switch 73 outputs the reference image stored in the frame memory 72 to the motion prediction / compensation unit 75 or the intra prediction unit 74.
- an I picture, a B picture, and a P picture from the screen rearrangement buffer 62 are supplied to the intra prediction unit 74 as images to be intra predicted (also referred to as intra processing). Further, the B picture and the P picture read from the screen rearrangement buffer 62 are supplied to the motion prediction / compensation unit 75 as an image to be inter-predicted (also referred to as inter-processing).
- The intra prediction unit 74 performs intra prediction processing in all candidate intra prediction modes based on the image to be intra-predicted read from the screen rearrangement buffer 62 and the reference image supplied from the frame memory 72, and generates predicted images.
- the intra prediction unit 74 calculates cost function values for all candidate intra prediction modes, and selects an intra prediction mode in which the calculated cost function value gives the minimum value as the optimal intra prediction mode.
- the intra prediction unit 74 supplies the predicted image generated in the optimal intra prediction mode and its cost function value to the predicted image selection unit 78.
- the intra prediction unit 74 supplies information indicating the optimal intra prediction mode to the lossless encoding unit 66.
- the lossless encoding unit 66 encodes this information and uses it as a part of header information in the compressed image.
- the motion prediction / compensation unit 75 performs motion prediction / compensation processing for all candidate inter prediction modes.
- the inter prediction image read from the screen rearrangement buffer 62 and the reference image from the frame memory 72 are supplied to the motion prediction / compensation unit 75 via the switch 73.
- The motion prediction/compensation unit 75 detects motion vectors of all candidate inter prediction modes based on the image to be inter-processed and the reference image, performs compensation processing on the reference image based on the motion vectors, and generates predicted images.
- The motion prediction/compensation unit 75 supplies the detected motion vector information, information on the image to be inter-processed (such as its address), and the primary residual, which is the difference between the image to be inter-processed and the generated predicted image, to the secondary prediction unit 76.
- The secondary prediction unit 76 obtains the address of the reference adjacent pixel adjacent to the reference block associated with the target block based on the motion vector information, and supplies it to the reference adjacency determination unit 77. In response to the corresponding determination result from the reference adjacency determination unit 77, the secondary prediction unit 76 performs end point processing as necessary, reads the corresponding pixels from the frame memory 72, and performs the secondary prediction process.
- the end point processing is processing for determining a pixel value to be used for a reference adjacent pixel which is determined to be outside the image frame of the reference frame, using another pixel value existing in the image frame.
- the secondary prediction process is a process for generating secondary difference information (secondary residual) by performing prediction between the primary residual and the difference between the target adjacent pixel and the reference adjacent pixel.
- The secondary prediction unit 76 outputs the secondary residual generated by the secondary prediction process, together with information on the intra prediction mode used in the secondary prediction, to the motion prediction/compensation unit 75.
- The reference adjacency determination unit 77 determines whether the reference adjacent pixel exists within the image frame of the reference frame using the address of the reference adjacent pixel from the motion prediction/compensation unit 75, and supplies the determination result to the secondary prediction unit 76.
- The motion prediction/compensation unit 75 compares the secondary residuals from the secondary prediction unit 76 to determine the optimal intra prediction mode in the secondary prediction. The motion prediction/compensation unit 75 also compares the secondary residual with the primary residual to determine whether to perform the secondary prediction process (that is, whether to encode the secondary residual or the primary residual). Note that these processes are performed for all candidate inter prediction modes.
- the motion prediction / compensation unit 75 calculates cost function values for all candidate inter prediction modes. At this time, a cost function value is calculated using a residual determined for each inter prediction mode among the primary residual and the secondary residual. The motion prediction / compensation unit 75 determines a prediction mode that gives the minimum value among the calculated cost function values as the optimal inter prediction mode.
- The motion prediction/compensation unit 75 supplies the predicted image generated in the optimal inter prediction mode (or, when secondary prediction is used, the difference between the image to be inter-processed and the secondary residual) and its cost function value to the predicted image selection unit 78.
- the motion prediction / compensation unit 75 outputs information indicating the optimal inter prediction mode to the lossless encoding unit 66.
- the lossless encoding unit 66 performs lossless encoding processing such as variable length encoding and arithmetic encoding on the information from the motion prediction / compensation unit 75 and inserts the information into the header portion of the compressed image.
- the predicted image selection unit 78 determines the optimal prediction mode from the optimal intra prediction mode and the optimal inter prediction mode based on each cost function value output from the intra prediction unit 74 or the motion prediction / compensation unit 75. Then, the predicted image selection unit 78 selects a predicted image in the determined optimal prediction mode and supplies the selected predicted image to the calculation units 63 and 70. At this time, the predicted image selection unit 78 supplies the selection information of the predicted image to the intra prediction unit 74 or the motion prediction / compensation unit 75.
- The rate control unit 79 controls the rate of the quantization operation of the quantization unit 65, based on the compressed images accumulated in the accumulation buffer 67, so that overflow or underflow does not occur.
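- The mode decisions above are driven by cost function values. The following is a minimal sketch of a Lagrangian rate-distortion comparison, a common formulation; the names and the specific cost function are assumptions, not the exact definition used by the apparatus.

    def rd_cost(distortion, rate_bits, lam):
        # high-complexity-style cost J = D + lambda * R
        return distortion + lam * rate_bits

    def choose_best(candidates, lam):
        # candidates: iterable of (mode, distortion, rate_bits) tuples
        return min(candidates, key=lambda c: rd_cost(c[1], c[2], lam))

    # example: pick between a primary and a secondary residual for one mode
    best = choose_best([("primary", 1200, 300), ("secondary", 900, 340)], lam=4.0)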
- FIG. 3 is a diagram illustrating an example of a block size for motion prediction / compensation in the H.264 / AVC format.
- In the upper part of FIG. 4, macroblocks composed of 16×16 pixels divided into partitions of 16×16, 16×8, 8×16, and 8×8 pixels are shown in order from the left. In the lower part of FIG. 4, 8×8-pixel partitions divided into sub-partitions of 8×8, 8×4, 4×8, and 4×4 pixels are shown in order from the left.
- That is, one macroblock can be divided into partitions of 16×16, 16×8, 8×16, or 8×8 pixels, each having independent motion vector information.
- An 8×8-pixel partition can further be divided into sub-partitions of 8×8, 8×4, 4×8, or 4×4 pixels, each having independent motion vector information.
- The next figure explains the prediction/compensation processing with 1/4-pixel precision in the H.264/AVC format.
- In this processing, an FIR (Finite Impulse Response) filter is used.
- The position A is the position of an integer-precision pixel, the positions b, c, and d are positions of 1/2-pixel precision, and the positions e1, e2, and e3 are positions of 1/4-pixel precision.
- Clip processing Clip1() is defined as in the following equation (1): Clip1(a) = 0 (if a < 0); a (if 0 ≤ a ≤ max_pix); max_pix (otherwise) … (1). When the input image has 8-bit precision, the value of max_pix is 255.
- the pixel values at the positions b and d are generated by the following equation (2) using a 6-tap FIR filter.
- the pixel value at the position c is generated as in the following Expression (3) by applying a 6-tap FIR filter in the horizontal direction and the vertical direction.
- The clip processing is executed only once at the end, after both the horizontal and vertical product-sum operations have been performed.
- the positions e1 to e3 are generated by linear interpolation as in the following equation (4).
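- The following Python sketch illustrates the operations just described. The pixel labels e to j for the six integer-position pixels are placeholders, since the exact labeling depends on the figure; the 6-tap coefficients and the final linear interpolation follow the H.264/AVC formulation referenced above.

    def clip1(a, max_pix=255):
        # equation (1): clip to [0, max_pix]; max_pix = 255 for 8-bit input
        return max(0, min(a, max_pix))

    def half_pel(e, f, g, h, i, j):
        # 6-tap FIR filter {1, -5, 20, 20, -5, 1} with rounding, shifted down by 5
        return clip1((e - 5 * f + 20 * g + 20 * h - 5 * i + j + 16) >> 5)

    def quarter_pel(p, q):
        # equation (4): linear interpolation between two neighbouring samples
        return (p + q + 1) >> 1

    print(half_pel(10, 10, 20, 20, 10, 10))  # 23

    # position c applies the 6-tap filter horizontally and vertically,
    # with a single clip at the very end, as described above.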
- FIG. 6 is a diagram describing the prediction/compensation processing with multi-reference frames in the H.264/AVC format.
- a target frame Fn to be encoded from now and encoded frames Fn-5,..., Fn-1 are shown.
- On the time axis, the frame Fn-1 is the frame immediately before the target frame Fn, the frame Fn-2 is two frames before, the frame Fn-3 is three frames before, the frame Fn-4 is four frames before, and the frame Fn-5 is five frames before the target frame Fn.
- A smaller reference picture number (ref_id) is assigned to a frame closer to the target frame Fn on the time axis. That is, the frame Fn-1 has the smallest reference picture number, followed in order by Fn-2, ..., Fn-5.
- a block A1 and a block A2 are shown in the target frame Fn.
- The block A1 is considered to be correlated with the block A1′ of the frame Fn-2 two frames before, and the motion vector V1 is searched for.
- The block A2 is considered to be correlated with the block A2′ of the frame Fn-4 four frames before, and the motion vector V2 is searched for.
- Note that each block indicates any of the 16×16-, 16×8-, 8×16-, and 8×8-pixel partitions described above with reference to FIG.
- However, the reference frames within one 8×8 sub-block must be the same.
- The next figure explains the method of generating motion vector information.
- A target block E to be encoded (for example, 16×16 pixels) and already-encoded blocks A to D adjacent to the target block E are shown.
- The block D is adjacent to the upper left of the target block E, the block B is adjacent above the target block E, the block C is adjacent to the upper right of the target block E, and the block A is adjacent to the left of the target block E.
- Note that the blocks A to D, shown undivided, each represent a block of one of the sizes from 16×16 to 4×4 pixels described above with reference to FIG.
- The predicted motion vector information pmv_E for the target block E is generated by median prediction using the motion vector information of blocks A, B, and C, as in the following equation (5): pmv_E = med(mv_A, mv_B, mv_C) … (5)
- The motion vector information regarding the block C may be unavailable because it is at the edge of the image frame or has not yet been encoded; in this case, the motion vector information regarding the block C is substituted with the motion vector information regarding the block D.
- The data mvd_E added to the header portion of the compressed image as the motion vector information for the target block E is generated using pmv_E as in the following equation (6): mvd_E = mv_E - pmv_E … (6)
- processing is performed independently for each of the horizontal and vertical components of the motion vector information.
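- A minimal Python sketch of this median prediction, covering equations (5) and (6) and the substitution rule described above (the function names are illustrative):

    def median3(a, b, c):
        return max(min(a, b), min(max(a, b), c))

    def pmv(mv_a, mv_b, mv_c, mv_d):
        # if block C is unavailable (image edge, not yet encoded), use block D
        if mv_c is None:
            mv_c = mv_d
        # equation (5): median applied independently per component
        return tuple(median3(a, b, c) for a, b, c in zip(mv_a, mv_b, mv_c))

    def mvd(mv_e, pmv_e):
        # equation (6): mvd_E = mv_E - pmv_E, added to the header
        return tuple(e - p for e, p in zip(mv_e, pmv_e))

    # example: C unavailable, so D = (0, 1) is used
    p = pmv((1, 0), (2, 1), None, (0, 1))   # -> (1, 1)
    d = mvd((2, 0), p)                      # -> (1, -1)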
- FIG. 8 is a block diagram illustrating a detailed configuration example of the secondary prediction unit.
- The secondary prediction unit 76 includes a reference block address calculation unit 81, a reference adjacent address calculation unit 82, a reference adjacent pixel determination unit 83, a target adjacent pixel reading unit 84, an adjacent pixel difference calculation unit 85, an intra prediction unit 86, and a target block difference buffer 87.
- The motion prediction/compensation unit 75 supplies the motion vector (dx, dy) for the target block to the reference block address calculation unit 81.
- The motion prediction/compensation unit 75 also supplies the target block address (x, y) to the reference block address calculation unit 81 and the target adjacent pixel reading unit 84.
- Furthermore, the motion prediction/compensation unit 75 supplies the primary residual, which is the difference between the target block and the reference block (predicted image), to the target block difference buffer 87.
- The reference block address calculation unit 81 calculates the reference block address (x+dx, y+dy) from the target block address (x, y) and the motion vector (dx, dy) supplied from the motion prediction/compensation unit 75.
- The reference block address calculation unit 81 supplies the calculated reference block address (x+dx, y+dy) to the reference adjacent address calculation unit 82.
- The reference adjacent address calculation unit 82 calculates the reference adjacent address, which is the address of the reference adjacent pixel, from the reference block address (x+dx, y+dy) and the relative address (Δx, Δy) of the target adjacent pixel adjacent to the target block.
- The reference adjacent address calculation unit 82 supplies the calculated reference adjacent address (x+dx+Δx, y+dy+Δy) to the reference adjacency determination unit 77.
- The reference adjacent pixel determination unit 83 receives from the reference adjacency determination unit 77 the result of determining whether the reference adjacent pixel exists within the image frame of the reference frame.
- When the reference adjacent pixel exists within the image frame of the reference frame, the reference adjacent pixel determination unit 83 reads the adjacent pixels defined in the H.264/AVC format from the frame memory 72 and stores them in a built-in buffer (not shown).
- When the reference adjacent pixel does not exist within the image frame, the reference adjacent pixel determination unit 83 performs end point processing on the non-existing adjacent pixel to determine the pixel value of the reference adjacent pixel, reads the necessary pixel values from the frame memory 72, and stores them in the built-in buffer (not shown).
- The end point processing is, for example, processing in which another pixel value existing within the image frame of the reference frame is used as the pixel value of an adjacent pixel not existing within the image frame; it will be described in detail later with reference to FIG.
- The target adjacent pixel reading unit 84 reads the pixel values of the target adjacent pixels from the frame memory 72 using the target block address (x, y) from the motion prediction/compensation unit 75, and stores them in a built-in buffer (not shown).
- The adjacent pixel difference calculation unit 85 reads the target adjacent pixels [A′] from the built-in buffer of the target adjacent pixel reading unit 84, and reads the corresponding reference adjacent pixels [B′] from the built-in buffer of the reference adjacent pixel determination unit 83.
- The adjacent pixel difference calculation unit 85 calculates the difference between the target adjacent pixels [A′] and the reference adjacent pixels [B′] read from the respective built-in buffers, and stores the resulting residual [A′-B′] for the adjacent pixels in a built-in buffer (not shown).
- The intra prediction unit 86 reads the residual [A′-B′] for the adjacent pixels from the built-in buffer of the adjacent pixel difference calculation unit 85, and reads the primary residual [A-B] for the target block from the target block difference buffer 87.
- The intra prediction unit 86 performs intra prediction for the target block in each intra prediction mode [mode] using the residual [A′-B′] for the adjacent pixels, and generates an intra prediction image Ipred(A′-B′)[mode].
- The intra prediction unit 86 generates a secondary residual, which is the difference between the primary residual for the target block and the intra prediction image predicted for the target block, and supplies the generated secondary residual and the information on the intra prediction mode to the motion prediction/compensation unit 75.
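- The flow through units 85 to 87 can be sketched as follows in Python with NumPy. The DC-only intra predictor stands in for the full set of H.264/AVC intra modes and is an illustration under that assumption, not the apparatus's actual predictor.

    import numpy as np

    def dc_intra(top_res, left_res, size):
        # simplified DC mode on the residual neighbours [A' - B'] (illustration only)
        dc = int(round((top_res.sum() + left_res.sum()) / (top_res.size + left_res.size)))
        return np.full((size, size), dc, dtype=np.int32)

    def secondary_residual(primary, top_res, left_res):
        # primary: first-order residual [A - B] of the target block
        # top_res / left_res: residuals [A' - B'] of the adjacent pixels
        ipred = dc_intra(top_res, left_res, primary.shape[0])
        return primary - ipred   # second-order residual passed on for encoding

    # example with a 4x4 block
    prim = np.arange(16, dtype=np.int32).reshape(4, 4)
    sec = secondary_residual(prim, np.array([2, 2, 2, 2]), np.array([2, 2, 2, 2]))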
- Note that the circuit that performs intra prediction as the secondary prediction in the intra prediction unit 86 in the example of FIG. 8 can be shared with the intra prediction unit 74.
- a target frame and a reference frame are shown, and the target block A and a target adjacent pixel A ′ adjacent to the target block A are shown in the target frame.
- A motion vector mv (dx, dy) obtained for the target block A is shown in the reference frame.
- the reference frame shows a reference block B associated with the target block A by the motion vector mv (dx, dy) and a reference adjacent pixel B ′ adjacent to the reference block B.
- the target adjacent pixel A ′ and the reference adjacent pixel B ′ are hatched to distinguish them from the pixels of the target block A and the reference block B.
- In the secondary prediction unit 76, the secondary prediction process described above with reference to FIG. 1 is performed. At that time, the reference adjacency determination unit 77 determines whether the reference adjacent pixel B′ for the reference block B exists within the image frame, and the secondary prediction unit 76 performs settings as follows.
- The address of the target adjacent pixel A′ is defined as (x+Δx, y+Δy), and the address of the reference adjacent pixel B′ as (x+dx+Δx, y+dy+Δy).
- (Δx, Δy) ∈ {(-1,-1), (0,-1), (1,-1), (2,-1), (3,-1), (4,-1), (5,-1), (6,-1), (7,-1), (-1,0), (-1,1), (-1,2), (-1,3)} … (7)
- Using these addresses, the setting of the reference adjacent pixel B′ for the reference block B will be described with reference to FIGS. 10 and 11.
- The definition of the target adjacent pixel A′ for the target block A conforms to the definition of the H.264/AVC format; details will be described later with reference to FIGS. 13 and 14.
- When the following expression (8) holds, as shown in FIG. 10, the secondary prediction unit 76 sets the pixel value of the reference adjacent pixel to 2^(n-1), where the pixel value is represented by n bits; in the case of 8 bits, the pixel value is 128. x+dx+Δx < 0 or y+dy+Δy < 0 … (8)
- Note that the image frame sizes of the target frame and the reference frame are WIDTH × HEIGHT.
- When the image frame size is WIDTH × HEIGHT and the following expression (9) holds, as shown in A of FIG. 11, the secondary prediction unit 76 sets the pixel indicated by the address (WIDTH-1, y+dy+Δy) as the reference adjacent pixel. x+dx+Δx > WIDTH-1 … (9)
- When the image frame size is WIDTH × HEIGHT and the following expression (10) holds, as shown in B of FIG. 11, the secondary prediction unit 76 sets the pixel indicated by the address (x+dx+Δx, HEIGHT-1) as the reference adjacent pixel. y+dy+Δy > HEIGHT-1 … (10)
- Further, when both expressions (9) and (10) hold, the secondary prediction unit 76 sets the pixel indicated by the address (WIDTH-1, HEIGHT-1) as the reference adjacent pixel.
- This setting of the reference adjacent pixel by the secondary prediction unit 76 is nothing other than end point processing that, for a reference adjacent pixel protruding from the image frame, uses the same value as a reference adjacent pixel existing within the image frame, as indicated by the arrows in A of FIG. 11 and B of FIG. 11.
- This processing is called hold processing. Instead of the hold processing, mirror processing, another type of end point processing, may be applied.
- In the example of A of FIG. 12, the range E shown in B of FIG. 11 is enlarged and shown as an example of the hold processing; in the example of B of FIG. 12, it is shown as an example of the mirror processing.
- The reference adjacent pixels to the left of the image frame boundary in the figure exist within the image frame and have, for example, pixel values a0, a1, and a2 in order from the image frame boundary side.
- The reference adjacent pixels to the right of the image frame boundary in the figure exist outside the image frame.
- In the hold processing, the pixel values of the reference adjacent pixels outside the image frame are virtually generated using the pixel value a0 of the reference adjacent pixel within the image frame closest to the image frame boundary.
- In the mirror processing, pixel values are generated on the assumption that virtual pixel values exist as a mirror image centered on the image frame boundary.
- That is, the pixel value of the reference adjacent pixel closest to the image frame boundary outside the image frame is virtually generated using the pixel value a0 of the reference adjacent pixel within the image frame closest to the boundary.
- The pixel value of the reference adjacent pixel second closest to the boundary outside the image frame is virtually generated using the pixel value a1 of the reference adjacent pixel within the image frame second closest to the boundary.
- The pixel value of the reference adjacent pixel third closest to the boundary outside the image frame is virtually generated using the pixel value a2 of the reference adjacent pixel within the image frame third closest to the boundary.
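- A minimal Python sketch of the two kinds of end point processing described above: hold per expressions (8) to (10), and mirror for one row at the right boundary as in FIG. 12. Names are illustrative.

    def hold_pixel(frame, x, y, width, height, n_bits=8):
        # expressions (8)-(10): top/left outside -> fixed value 2^(n-1);
        # right/bottom outside -> clamp to the boundary pixel
        if x < 0 or y < 0:
            return 1 << (n_bits - 1)          # 128 for 8-bit pixels
        return frame[min(y, height - 1)][min(x, width - 1)]

    def mirror_pixel(row, width, x):
        # reflect about the right boundary: positions width, width+1, width+2
        # take the values a0, a1, a2 of the pixels just inside the frame
        if x > width - 1:
            x = 2 * width - 1 - x
        return row[x]

    # example: row = [..., a2, a1, a0] at the right edge of a width-4 frame
    row = [9, 9, 7, 5]
    print(mirror_pixel(row, 4, 4), mirror_pixel(row, 4, 5))  # 5 7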
- For a 16×16 target block: (Δx, Δy) ∈ {(-1,-1), (0,-1), (1,-1), (2,-1), (3,-1), (4,-1), (5,-1), (6,-1), (7,-1), (8,-1), (9,-1), (10,-1), (11,-1), (12,-1), (13,-1), (14,-1), (15,-1), (-1,0), (-1,1), (-1,2), (-1,3), (-1,4), (-1,5), (-1,6), (-1,7), (-1,8), (-1,9), (-1,10), (-1,11), (-1,12), (-1,13), (-1,14), (-1,15)} … (12)
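- The two offset sets (7) and (12) can be generated with the following sketch. The extend_top_right flag reflects the observation that set (7) for the 4×4 block includes upper-right neighbours out to (2N-1, -1) while set (12) for the 16×16 block does not; this is read off the listed sets, not a rule stated in the text.

    def neighbor_offsets(n, extend_top_right):
        # (dx, dy) offsets of the adjacent pixels of an n x n block
        top_end = 2 * n if extend_top_right else n
        top = [(dx, -1) for dx in range(-1, top_end)]
        left = [(-1, dy) for dy in range(n)]
        return top + left

    assert neighbor_offsets(4, True)[-1] == (-1, 3)     # matches set (7)
    assert neighbor_offsets(16, False)[-1] == (-1, 15)  # matches set (12)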
- As described above, in the image encoding device 51, it is determined whether the reference adjacent pixel exists outside the image frame, and when it does, hold or mirror end point processing is performed on that pixel.
- Even in such a case, therefore, the secondary prediction process can be performed, and as a result, the encoding efficiency can be improved.
- In step S11, the A/D conversion unit 61 performs A/D conversion on the input image.
- In step S12, the screen rearrangement buffer 62 stores the image supplied from the A/D conversion unit 61, and rearranges the pictures from display order into encoding order.
- In step S13, the calculation unit 63 calculates the difference between the image rearranged in step S12 and the predicted image.
- the predicted image is supplied from the motion prediction / compensation unit 75 in the case of inter prediction and from the intra prediction unit 74 in the case of intra prediction to the calculation unit 63 via the predicted image selection unit 78.
- The difference data has a smaller data amount than the original image data. Therefore, the data amount can be compressed compared with the case where the image is encoded as it is.
- In step S14, the orthogonal transform unit 64 orthogonally transforms the difference information supplied from the calculation unit 63. Specifically, an orthogonal transform such as the discrete cosine transform or the Karhunen-Loève transform is performed, and transform coefficients are output.
- In step S15, the quantization unit 65 quantizes the transform coefficients. In this quantization, the rate is controlled as described later in the processing of step S25.
- In step S16, the inverse quantization unit 68 inversely quantizes the transform coefficients quantized by the quantization unit 65 with characteristics corresponding to those of the quantization unit 65.
- In step S17, the inverse orthogonal transform unit 69 performs an inverse orthogonal transform on the transform coefficients inversely quantized by the inverse quantization unit 68 with characteristics corresponding to those of the orthogonal transform unit 64.
- In step S18, the calculation unit 70 adds the predicted image input via the predicted image selection unit 78 to the locally decoded difference information, and generates a locally decoded image (an image corresponding to the input to the calculation unit 63).
- In step S19, the deblocking filter 71 filters the image output from the calculation unit 70, thereby removing block distortion.
- In step S20, the frame memory 72 stores the filtered image. Note that an image not filtered by the deblocking filter 71 is also supplied from the calculation unit 70 to the frame memory 72 and stored.
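- As a toy illustration of steps S13 to S18 above (residual, transform, quantization, and local decoding), the following sketch uses an identity stand-in for the orthogonal transform; a real encoder uses an integer DCT and the quantization of the H.264/AVC format.

    import numpy as np

    def toy_encode_block(block, pred, q_step=8):
        residual = block.astype(np.int32) - pred       # step S13
        coeff = residual                               # step S14 (transform omitted)
        q = np.round(coeff / q_step).astype(np.int32)  # step S15
        recon_res = q * q_step                         # steps S16-S17
        local_decode = pred + recon_res                # step S18
        return q, local_decode

    blk = np.full((4, 4), 100, dtype=np.int32)
    prd = np.full((4, 4), 90, dtype=np.int32)
    q, rec = toy_encode_block(blk, prd)  # q == 1 everywhere, rec == 98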
- In step S21, the intra prediction unit 74 and the motion prediction/compensation unit 75 each perform image prediction processing. That is, in step S21, the intra prediction unit 74 performs intra prediction processing in the intra prediction modes.
- The motion prediction/compensation unit 75 performs motion prediction/compensation processing in the inter prediction modes.
- At this time, the secondary prediction unit 76 performs end point processing on the reference adjacent pixel according to the determination result, and then performs secondary prediction to generate a secondary residual. The motion prediction/compensation unit 75 then determines which of the primary residual and the secondary residual gives better coding efficiency.
- Details of the prediction processing in step S21 will be described later with reference to FIG. 14.
- By this processing, prediction processes are performed in all candidate intra prediction modes, and cost function values are calculated for all candidate intra prediction modes. Based on the calculated cost function values, the optimal intra prediction mode is selected, and the predicted image generated by intra prediction in the optimal intra prediction mode and its cost function value are supplied to the predicted image selection unit 78.
- Similarly, prediction processing is performed in all candidate inter prediction modes, and cost function values are calculated for all candidate inter prediction modes using the determined residuals. Based on the calculated cost function values, the optimal inter prediction mode is determined, and the predicted image generated in the optimal inter prediction mode and its cost function value are supplied to the predicted image selection unit 78. When secondary prediction is used, the difference between the image to be inter-processed and the secondary residual is supplied to the predicted image selection unit 78 as the predicted image.
- In step S22, the predicted image selection unit 78 determines one of the optimal intra prediction mode and the optimal inter prediction mode as the optimal prediction mode based on the cost function values output from the intra prediction unit 74 and the motion prediction/compensation unit 75. The predicted image selection unit 78 then selects the predicted image of the determined optimal prediction mode and supplies it to the calculation units 63 and 70. As described above, this predicted image (when secondary prediction is performed, the difference between the image to be inter-processed and the secondary residual) is used for the calculations in steps S13 and S18.
- the prediction image selection information is supplied to the intra prediction unit 74 or the motion prediction / compensation unit 75.
- the intra prediction unit 74 supplies information indicating the optimal intra prediction mode (that is, intra prediction mode information) to the lossless encoding unit 66.
- On the other hand, when the predicted image of the optimal inter prediction mode is selected, the motion prediction/compensation unit 75 outputs information indicating the optimal inter prediction mode and, as necessary, information corresponding to the optimal inter prediction mode to the lossless encoding unit 66.
- Information according to the optimal inter prediction mode includes a secondary prediction flag indicating that secondary prediction is performed, information indicating an intra prediction mode in secondary prediction, reference frame information, and the like.
- In step S23, the lossless encoding unit 66 encodes the quantized transform coefficients output from the quantization unit 65. That is, the difference image (the secondary difference image in the case of secondary prediction) is subjected to lossless encoding such as variable length encoding or arithmetic encoding, and is compressed.
- At this time, the intra prediction mode information from the intra prediction unit 74, or the information corresponding to the optimal inter prediction mode from the motion prediction/compensation unit 75, input to the lossless encoding unit 66 in step S22 described above, is also encoded and added to the header information.
- In step S24, the accumulation buffer 67 accumulates the difference image as a compressed image.
- The compressed image accumulated in the accumulation buffer 67 is read out as appropriate and transmitted to the decoding side via the transmission path.
- In step S25, the rate control unit 79 controls the rate of the quantization operation of the quantization unit 65 based on the compressed images accumulated in the accumulation buffer 67 so that neither overflow nor underflow occurs.
- the decoded image to be referred to is read from the frame memory 72 and supplied to the intra prediction unit 74 via the switch 73. Based on these images, in step S31, the intra prediction unit 74 performs intra prediction on the pixels of the block to be processed in all candidate intra prediction modes. Note that pixels that have not been deblock-filtered by the deblocking filter 71 are used as the decoded pixels to be referred to.
- By this process, intra prediction is performed in all candidate intra prediction modes, and a cost function value is calculated for each candidate intra prediction mode. The optimal intra prediction mode is then selected, and the predicted image generated by intra prediction in the optimal intra prediction mode and its cost function value are supplied to the predicted image selection unit 78.
- When the processing target image supplied from the screen rearrangement buffer 62 is an image to be inter-processed, the image to be referred to is read from the frame memory 72 and supplied to the motion prediction / compensation unit 75 via the switch 73.
- In step S32, based on these images, the motion prediction / compensation unit 75 performs an inter motion prediction process. That is, the motion prediction / compensation unit 75 refers to the images supplied from the frame memory 72 and performs motion prediction processing in all candidate inter prediction modes.
- At this time, the reference adjacency determination unit 77 uses the address of the reference adjacent pixel from the motion prediction / compensation unit 75 to determine whether or not the reference adjacent pixel exists within the image frame of the reference frame.
- the secondary prediction unit 76 performs end point processing according to the determination result from the reference adjacency determination unit 77, and outputs a secondary residual as a result of performing the secondary prediction processing to the motion prediction / compensation unit 75.
- the motion prediction / compensation unit 75 determines a residual with good coding efficiency out of the primary residual and the secondary residual and uses it for the subsequent processing.
- The details of the inter motion prediction process in step S32 will be described later with reference to FIG. 28.
- By this process, motion prediction processing is performed in all candidate inter prediction modes, and a cost function value is calculated for each candidate inter prediction mode using the primary residual or the secondary residual.
- In step S33, the motion prediction / compensation unit 75 compares the cost function values for the inter prediction modes calculated in step S32.
- the motion prediction / compensation unit 75 determines the prediction mode giving the minimum value as the optimal inter prediction mode, and supplies the prediction image generated in the optimal inter prediction mode and its cost function value to the prediction image selection unit 78.
- First, the intra prediction modes for the luminance signal will be described.
- For the luminance signal, three methods are defined: the intra 4×4 prediction mode, the intra 8×8 prediction mode, and the intra 16×16 prediction mode.
- These are modes that determine the block unit of prediction, and are set for each macroblock.
- For the color difference signal, an intra prediction mode can be set independently of the luminance signal for each macroblock.
- One prediction mode can be set from nine types of prediction modes for each target block of 4×4 pixels.
- One prediction mode can be set from nine types of prediction modes for each target block of 8×8 pixels.
- One prediction mode can be set from four types of prediction modes for a target macroblock of 16×16 pixels.
- Hereinafter, the intra 4×4 prediction mode, the intra 8×8 prediction mode, and the intra 16×16 prediction mode will also be referred to, as appropriate, as the 4×4 pixel intra prediction mode, the 8×8 pixel intra prediction mode, and the 16×16 pixel intra prediction mode, respectively.
- the numbers -1 to 25 attached to each block indicate the bit stream order (processing order on the decoding side) of each block.
- For the luminance signal, the macroblock is divided into 4×4 pixel blocks, and a 4×4 pixel DCT is performed. Only in the case of the intra 16×16 prediction mode, as shown in the block labeled -1, the DC components of the blocks are collected to generate a 4×4 matrix, which is further subjected to an orthogonal transform.
- For the color difference signal, after the macroblock is divided into 4×4 pixel blocks and the 4×4 pixel DCT is performed, the DC components of the blocks are collected as shown in blocks 16 and 17 to generate a 2×2 matrix, which is further subjected to an orthogonal transform.
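- As a rough illustrative sketch (not taken from the patent itself), the DC-gathering step for the intra 16×16 case could be expressed as follows. The orthonormal floating-point DCT and the function names here are assumptions for illustration; H.264/AVC actually uses integer transforms.

    import numpy as np

    def dct_matrix(n: int) -> np.ndarray:
        # Orthonormal DCT-II basis matrix of size n x n.
        m = np.zeros((n, n))
        for i in range(n):
            scale = np.sqrt(1.0 / n) if i == 0 else np.sqrt(2.0 / n)
            for j in range(n):
                m[i, j] = scale * np.cos((2 * j + 1) * i * np.pi / (2 * n))
        return m

    def gather_dc(macroblock: np.ndarray) -> np.ndarray:
        # Transform each 4x4 sub-block of a 16x16 macroblock, collect the
        # DC (0,0) coefficients into a 4x4 matrix, and transform that matrix
        # again, mirroring the intra 16x16 handling described above.
        c = dct_matrix(4)
        dc = np.zeros((4, 4))
        for by in range(4):
            for bx in range(4):
                blk = macroblock[4*by:4*by+4, 4*bx:4*bx+4].astype(float)
                dc[by, bx] = (c @ blk @ c.T)[0, 0]
        return c @ dc @ c.T

- The same gathering applied to an 8×8 color difference block would yield the 2×2 DC matrix described above.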
- FIGS. 16 and 17 are diagrams showing the nine types of 4×4 pixel intra prediction modes (Intra_4x4_pred_mode) for the luminance signal. The eight modes other than mode 2, which indicates average value (DC) prediction, correspond to the directions indicated by the numbers 0, 1, and 3 to 8 in the figure.
- Pixels a to p represent the pixels of the target block to be intra-processed, and pixel values A to M represent the pixel values of pixels belonging to the adjacent blocks. That is, the pixels a to p constitute the image to be processed that is read from the screen rearrangement buffer 62, and the pixel values A to M are the pixel values of the decoded image that is read from the frame memory 72 and referred to.
- the prediction pixel values of the pixels a to p are generated as follows using the pixel values A to M of the pixels belonging to the adjacent blocks.
- A pixel value being “available” indicates that the pixel can be used, there being no impediment such as the pixel being at the edge of the image frame or not yet having been encoded.
- A pixel value being “unavailable” indicates that the pixel cannot be used because it is at the edge of the image frame or has not yet been encoded.
- Mode 0 is the Vertical Prediction mode, and is applied only when the pixel values A to D are “available”.
- the predicted pixel values of the pixels a to p are generated as in the following Expression (14).
- Mode 1 is a horizontal prediction mode and is applied only when the pixel values I to L are “available”.
- the predicted pixel values of the pixels a to p are generated as in the following Expression (15).
- Predicted pixel value of pixels a, b, c, d = I
- Predicted pixel value of pixels e, f, g, h = J
- Predicted pixel value of pixels i, j, k, l = K
- Predicted pixel value of pixels m, n, o, p = L
- Mode 2 is a DC Prediction mode. When the pixel values A, B, C, D, I, J, K, and L are all “available”, the predicted pixel value is generated as in Expression (16): (A+B+C+D+I+J+K+L+4) >> 3 ... (16)
- Mode 3 is a Diagonal_Down_Left Prediction mode, and is applied only when the pixel values A, B, C, D, I, J, K, L, and M are “available”.
- the predicted pixel values of the pixels a to p are generated as in the following Expression (19).
- Mode 4 is a Diagonal_Down_Right Prediction mode, and is applied only when the pixel values A, B, C, D, I, J, K, L, and M are “available”. In this case, the predicted pixel values of the pixels a to p are generated as in the following Expression (20).
- Mode 5 is a Diagonal_Vertical_Right Prediction mode, and is applied only when the pixel values A, B, C, D, I, J, K, L, and M are “available”. In this case, the predicted pixel values of the pixels a to p are generated as in the following Expression (21).
- Mode 6 is a Horizontal_Down Prediction mode, and is applied only when the pixel values A, B, C, D, I, J, K, L, and M are “available”.
- the predicted pixel values of the pixels a to p are generated as in the following Expression (22).
- Mode 7 is a Vertical_Left Prediction mode, and is applied only when the pixel values A, B, C, D, I, J, K, L, and M are “available”.
- the predicted pixel values of the pixels a to p are generated as in the following Expression (23).
- Mode 8 is a Horizontal_Up Prediction mode, and is applied only when the pixel values A, B, C, D, I, J, K, L, and M are “available”.
- the predicted pixel values of the pixels a to p are generated as in the following Expression (24).
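- As a minimal sketch of the first three modes, assuming integer pixel values and the neighbour naming used above (A to D above the block, I to L to its left), the predicted 4×4 blocks of Expressions (14) to (16) could be generated as follows; the function name is hypothetical.

    def intra4x4_predict(mode, A, B, C, D, I, J, K, L):
        # Illustrative 4x4 intra prediction for modes 0 to 2.
        if mode == 0:                    # Vertical: each column repeats A..D
            row = [A, B, C, D]
            return [row[:] for _ in range(4)]
        if mode == 1:                    # Horizontal: each row repeats I..L
            return [[v] * 4 for v in (I, J, K, L)]
        if mode == 2:                    # DC: rounded mean of the neighbours
            dc = (A + B + C + D + I + J + K + L + 4) >> 3
            return [[dc] * 4 for _ in range(4)]
        raise ValueError('only modes 0-2 are sketched here')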
- Next, the encoding method for the 4×4 pixel intra prediction mode (Intra_4x4_pred_mode) of the luminance signal will be described with reference to the figure.
- In the figure, a target block C of 4×4 pixels to be encoded is shown, together with a block A and a block B, each of 4×4 pixels, adjacent to the target block C.
- Intra_4x4_pred_mode in the target block C and Intra_4x4_pred_mode in the block A and the block B are highly correlated.
- With Intra_4x4_pred_mode in the block A and the block B denoted Intra_4x4_pred_modeA and Intra_4x4_pred_modeB respectively, MostProbableMode is defined as in the following Expression (25):
- MostProbableMode = Min(Intra_4x4_pred_modeA, Intra_4x4_pred_modeB) ... (25)
- That is, of the block A and the block B, the one to which the smaller mode_number is assigned is taken as the MostProbableMode.
- In the bit stream, two values, prev_intra4x4_pred_mode_flag[luma4x4BlkIdx] and rem_intra4x4_pred_mode[luma4x4BlkIdx], are defined as parameters for the target block C, and by decoding based on the processing according to the pseudocode shown later, the values of Intra_4x4_pred_mode and Intra4x4PredMode[luma4x4BlkIdx] for the target block C can be obtained.
- FIGS. 21 and 22 are diagrams illustrating the nine types of 8×8 pixel intra prediction modes (Intra_8x8_pred_mode) for the luminance signal.
- The pixel values in the target 8×8 block are denoted p[x, y] (0 ≤ x ≤ 7; 0 ≤ y ≤ 7), and the pixel values of the adjacent pixels are denoted p[-1, -1], ..., p[15, -1], p[-1, 0], ..., p[-1, 7].
- a low-pass filtering process is performed on adjacent pixels prior to generating a prediction value.
- Here, the pixel values before the low-pass filtering process are denoted p[-1, -1], ..., p[15, -1], p[-1, 0], ..., p[-1, 7], and the pixel values after the process are denoted p'[-1, -1], ..., p'[15, -1], p'[-1, 0], ..., p'[-1, 7].
- First, p'[0, -1] is calculated as in the following Expression (27) when p[-1, -1] is “available”, and as in the following Expression (28) when it is “not available”.
- p'[0, -1] = (p[-1, -1] + 2*p[0, -1] + p[1, -1] + 2) >> 2 ... (27)
- p'[0, -1] = (3*p[0, -1] + p[1, -1] + 2) >> 2 ... (28)
- p'[x, -1] = (p[x-1, -1] + 2*p[x, -1] + p[x+1, -1] + 2) >> 2 ... (29)
- p'[15, -1] = (p[14, -1] + 3*p[15, -1] + 2) >> 2 ... (30)
- p'[-1, -1] is calculated as follows when p[-1, -1] is “available”. That is, p'[-1, -1] is calculated as in Expression (31) when both p[0, -1] and p[-1, 0] are “available”, as in Expression (32) when p[-1, 0] is “unavailable”, and as in Expression (33) when p[0, -1] is “unavailable”.
- p'[-1, -1] = (p[0, -1] + 2*p[-1, -1] + p[-1, 0] + 2) >> 2 ... (31)
- p'[-1, 0] = (p[-1, -1] + 2*p[-1, 0] + p[-1, 1] + 2) >> 2 ... (34)
- p'[-1, 0] = (3*p[-1, 0] + p[-1, 1] + 2) >> 2 ... (35)
- p'[-1, y] = (p[-1, y-1] + 2*p[-1, y] + p[-1, y+1] + 2) >> 2 ... (36)
- p'[-1, 7] = (p[-1, 6] + 3*p[-1, 7] + 2) >> 2 ... (37)
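- A minimal sketch of the top-row filtering of Expressions (27) to (30), assuming a Python list p_top holding p[0,-1] to p[15,-1] (the function name and the p_corner parameter standing for p[-1,-1] are assumptions):

    def filter_top_neighbours(p_top, corner_available, p_corner=0):
        # Low-pass filter the 16 top neighbours p[0,-1]..p[15,-1].
        out = p_top[:]
        if corner_available:                             # Expression (27)
            out[0] = (p_corner + 2 * p_top[0] + p_top[1] + 2) >> 2
        else:                                            # Expression (28)
            out[0] = (3 * p_top[0] + p_top[1] + 2) >> 2
        for x in range(1, 15):                           # Expression (29)
            out[x] = (p_top[x - 1] + 2 * p_top[x] + p_top[x + 1] + 2) >> 2
        out[15] = (p_top[14] + 3 * p_top[15] + 2) >> 2   # Expression (30)
        return out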
- The prediction values in the intra prediction modes shown in FIGS. 21 and 22 are generated as follows using p' calculated in this way.
- pred8x8_L[x, y] = (p'[14, -1] + 3*p'[15, -1] + 2) >> 2 ...
- pred8x8_L[x, y] = (p'[x+y, -1] + 2*p'[x+y+1, -1] + p'[x+y+2, -1] + 2) >> 2 ... (45)
- Here, zHD is defined as in the following Expression (54):
- zHD = 2*y - x ... (54)
- pred8x8_L[x, y] = (p'[x+(y>>1), -1] + p'[x+(y>>1)+1, -1] + 1) >> 1 ...
- FIGS. 23 and 24 are diagrams illustrating the four types of 16×16 pixel intra prediction modes (Intra_16x16_pred_mode) for the luminance signal.
- the predicted pixel value Pred (x, y) of each pixel of the target macroblock A is generated as in the following Expression (66).
- the predicted pixel value Pred (x, y) of each pixel of the target macroblock A is generated as in the following Expression (67).
- the predicted pixel value Pred (x, y) of each pixel is generated as in the following equation (68).
- the predicted pixel value Pred (x, y) of each pixel of the target macroblock A is generated as in the following Expression (71).
- FIG. 26 is a diagram illustrating four types of color difference signal intra prediction modes (Intra_chroma_pred_mode).
- the color difference signal intra prediction mode can be set independently of the luminance signal intra prediction mode.
- The intra prediction mode for the color difference signal conforms to the 16×16 pixel intra prediction mode of the luminance signal described above.
- However, while the 16×16 pixel intra prediction mode of the luminance signal targets a block of 16×16 pixels, the intra prediction mode for the color difference signal targets a block of 8×8 pixels. Furthermore, the mode numbers of the two do not correspond to each other.
- the predicted pixel value Pred (x, y) of each pixel is generated as in the following equation (72).
- the predicted pixel value Pred (x, y) of each pixel of the target macroblock A is generated as in the following equation (75).
- the predicted pixel value Pred (x, y) of each pixel of the target macroblock A is generated as in the following Expression (76).
- the predicted pixel value Pred (x, y) of each pixel of the target macroblock A is generated as in the following equation (77).
- As described above, the intra prediction modes for the luminance signal include nine types of prediction modes in units of 4×4 pixel and 8×8 pixel blocks, and four types in units of 16×16 pixel macroblocks. These block-unit modes are set for each macroblock.
- The intra prediction modes for the color difference signal include four types of prediction modes in units of 8×8 pixel blocks. The color difference signal intra prediction mode can be set independently of the luminance signal intra prediction mode.
- For the 4×4 pixel intra prediction mode (intra 4×4 prediction mode) and the 8×8 pixel intra prediction mode (intra 8×8 prediction mode) of the luminance signal, one intra prediction mode is set for each 4×4 pixel or 8×8 pixel block of the luminance signal. For the 16×16 pixel intra prediction mode (intra 16×16 prediction mode) of the luminance signal and the intra prediction mode of the color difference signal, one prediction mode is set for one macroblock.
- Prediction mode 2 is average value prediction.
- In step S41, the intra prediction unit 74 performs intra prediction in each of the 4×4 pixel, 8×8 pixel, and 16×16 pixel intra prediction modes.
- Specifically, the intra prediction unit 74 refers to the decoded image read from the frame memory 72 and supplied via the switch 73, and performs intra prediction on the pixels of the block to be processed. By performing this intra prediction process in each intra prediction mode, a predicted image is generated in each intra prediction mode. Note that pixels that have not been deblock-filtered by the deblocking filter 71 are used as the decoded pixels to be referred to.
- In step S42, the intra prediction unit 74 calculates a cost function value for each of the 4×4 pixel, 8×8 pixel, and 16×16 pixel intra prediction modes.
- Here, the cost function value is calculated based on the method of either the High Complexity mode or the Low Complexity mode. These modes are defined in the JM (Joint Model), which is the reference software for the H.264/AVC format.
- In the High Complexity mode, encoding is provisionally performed in all candidate prediction modes as the process of step S41. Then, the cost function value represented by the following Expression (78) is calculated for each prediction mode, and the prediction mode giving the minimum value is selected as the optimal prediction mode.
- Cost(Mode) = D + λ·R ... (78)
- Here, D is the difference (distortion) between the original image and the decoded image, R is the generated code amount including the orthogonal transform coefficients, and λ is a Lagrange multiplier given as a function of the quantization parameter QP.
- In the Low Complexity mode, as the process of step S41, predicted images are generated and header bits such as motion vector information, prediction mode information, and flag information are calculated for all candidate prediction modes. Then, the cost function value represented by the following Expression (79) is calculated for each prediction mode, and the prediction mode giving the minimum value is selected as the optimal prediction mode.
- Cost(Mode) = D + QPtoQuant(QP)·Header_Bit ... (79)
- Here, D is the difference (distortion) between the original image and the decoded image, Header_Bit is the header bits for the prediction mode, and QPtoQuant is a function given as a function of the quantization parameter QP.
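- The two cost functions can be sketched directly from Expressions (78) and (79); the helper names and the shape of the candidate list are assumptions for illustration:

    def cost_high_complexity(d, r, lam):
        # Expression (78): Cost(Mode) = D + lambda * R
        return d + lam * r

    def cost_low_complexity(d, header_bits, qp_to_quant):
        # Expression (79): Cost(Mode) = D + QPtoQuant(QP) * Header_Bit;
        # qp_to_quant is assumed to be precomputed from QP.
        return d + qp_to_quant * header_bits

    def pick_best_mode(candidates):
        # candidates: iterable of (mode, cost); the minimum-cost mode wins.
        return min(candidates, key=lambda mc: mc[1])[0]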
- In step S43, the intra prediction unit 74 determines the optimal mode for each of the 4×4 pixel, 8×8 pixel, and 16×16 pixel intra prediction modes. That is, as described above, there are nine types of prediction modes in the intra 4×4 prediction mode and the intra 8×8 prediction mode, and four types in the intra 16×16 prediction mode. From among these, based on the cost function values calculated in step S42, the intra prediction unit 74 determines the optimal intra 4×4 prediction mode, the optimal intra 8×8 prediction mode, and the optimal intra 16×16 prediction mode.
- In step S44, the intra prediction unit 74 selects the optimal intra prediction mode based on the cost function values calculated in step S42 from among the optimal modes determined for the 4×4 pixel, 8×8 pixel, and 16×16 pixel intra prediction modes. That is, the mode whose cost function value is the minimum is selected as the optimal intra prediction mode from among the optimal modes determined for 4×4 pixels, 8×8 pixels, and 16×16 pixels.
- the intra prediction unit 74 supplies the predicted image generated in the optimal intra prediction mode and its cost function value to the predicted image selection unit 78.
- In step S51, the motion prediction / compensation unit 75 determines a motion vector and a reference image for each of the eight types of inter prediction modes of 16×16 to 4×4 pixels described above with reference to FIG. 4. That is, a motion vector and a reference image are determined for the block to be processed in each inter prediction mode.
- In step S52, the motion prediction / compensation unit 75 performs motion prediction and compensation processing on the reference image for each of the eight types of inter prediction modes of 16×16 to 4×4 pixels, based on the motion vectors determined in step S51. By this motion prediction and compensation processing, a predicted image in each inter prediction mode is generated for the target block from the pixel values of the reference block, and the primary residual, which is the difference between the target block and the predicted image, is output to the secondary prediction unit 76. The motion prediction / compensation unit 75 also outputs the detected motion vector information and the address of the image to be inter-processed to the secondary prediction unit 76.
- In step S53, the secondary prediction unit 76 and the reference adjacency determination unit 77 perform the reference adjacent pixel determination process. Details of the reference adjacent pixel determination process will be described later with reference to FIG. 29.
- By the process in step S53, it is determined whether or not the reference adjacent pixel adjacent to the reference block exists within the image frame of the reference frame, and the end point processing is performed according to the determination result, whereby the pixel values of the reference adjacent pixels are determined.
- In step S54, the secondary prediction unit 76 and the motion prediction / compensation unit 75 perform the secondary prediction process using the determined reference adjacent pixels. Details of the secondary prediction process will be described later with reference to FIG. 30.
- By the process in step S54, a secondary residual is generated by performing prediction between the primary residual, which is the difference between the image of the target block and the predicted image, and the difference between the target adjacent pixels and the reference adjacent pixels. Then, by comparing the primary residual and the secondary residual, it is determined whether or not to perform the secondary prediction process. When it is determined to perform secondary prediction, the secondary residual is used instead of the primary residual for calculating the cost function value in step S56 described later. In this case, a secondary prediction flag indicating that secondary prediction is performed and information indicating the intra prediction mode in the secondary prediction are also output to the motion prediction / compensation unit 75.
- In step S55, the motion prediction / compensation unit 75 generates motion vector information mvdE for the motion vectors determined for each of the eight types of inter prediction modes of 16×16 to 4×4 pixels. At this time, the motion vector generation method described above with reference to FIG. 7 is used.
- The generated motion vector information is also used in the calculation of the cost function value in the next step S56, and when the corresponding predicted image is finally selected by the predicted image selection unit 78, it is output to the lossless encoding unit 66 together with the prediction mode information and the reference frame information.
- In step S56, the cost function value represented by Expression (78) or Expression (79) described above is calculated for each of the eight types of inter prediction modes of 16×16 to 4×4 pixels.
- the cost function value calculated here is used when determining the optimal inter prediction mode in step S33 of FIG. 14 described above.
- The target block address (x, y) from the motion prediction / compensation unit 75 is supplied to the reference block address calculation unit 81 and the target adjacent pixel reading unit 84, and the reference block address calculation unit 81 acquires the target block address (x, y). The motion vector information (dx, dy) obtained for the target block in step S51 of FIG. 28 is also input to the reference block address calculation unit 81. The reference block address calculation unit 81 calculates the reference block address (x+dx, y+dy) from the target block address (x, y) and the motion vector information (dx, dy), and supplies it to the reference adjacent address calculation unit 82.
- In step S63, the reference adjacent address calculation unit 82 calculates the reference adjacent address (x+dx+δx, y+dy+δy), which is the address of the reference adjacent pixel for the reference block, and supplies it to the reference adjacency determination unit 77.
- In step S64, based on the reference adjacent address (x+dx+δx, y+dy+δy), the reference adjacency determination unit 77 determines whether or not the reference adjacent pixel exists within the image frame of the reference frame, and supplies the determination result to the reference adjacent pixel determination unit 83.
- If it is determined in step S64 that the reference adjacent pixel does not exist within the image frame, the reference adjacent pixel determination unit 83 performs the end point processing described above, whereby the pixel value of the reference adjacent pixel is determined. The reference adjacent pixel determination unit 83 then reads the determined pixel value from the frame memory 72 and accumulates it in a built-in buffer (not shown) as the pixel value of the reference adjacent pixel.
- On the other hand, if it is determined in step S64 that the reference adjacent pixel exists within the image frame, the process proceeds to step S66. In step S66, the reference adjacent pixel determination unit 83 determines the adjacent pixel according to the normal definition and reads it from the frame memory 72. That is, the reference adjacent pixel determination unit 83 reads the pixel value of the reference adjacent pixel as defined in the H.264/AVC format from the frame memory 72 and accumulates it in a built-in buffer (not shown).
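- The determination and end point processing above can be summarized in a small sketch. This follows the clipping rules stated later in this document for the reference adjacent address (negative coordinates are replaced by the constant 2^(n-1); coordinates beyond the frame are clipped to WIDTH-1 or HEIGHT-1); the function signature is an assumption.

    def reference_adjacent_pixel(frame, x, y, dx, dy, delta, width, height,
                                 bit_depth=8):
        # (x, y): target block address, (dx, dy): motion vector,
        # delta: relative offset (dx_off, dy_off) of the adjacent pixel.
        rx, ry = x + dx + delta[0], y + dy + delta[1]
        if rx < 0 or ry < 0:
            return 1 << (bit_depth - 1)   # above/left of the frame: 2^(n-1)
        rx = min(rx, width - 1)           # right of the frame: clip to WIDTH-1
        ry = min(ry, height - 1)          # below the frame: clip to HEIGHT-1
        return frame[ry][rx]              # otherwise the pixel is in the frame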
- Next, the secondary prediction process in step S54 of FIG. 28 will be described with reference to the flowchart in FIG. 30. Note that the example of FIG. 30 is described taking 4×4 pixel intra prediction as an example.
- By the reference adjacent pixel determination process in step S53 of FIG. 28, the pixel values of the reference adjacent pixels are accumulated in the built-in buffer of the reference adjacent pixel determination unit 83. Further, the target adjacent pixel reading unit 84 reads the pixel values of the target adjacent pixels from the frame memory 72 using the target block address (x, y) from the motion prediction / compensation unit 75, and accumulates them in a built-in buffer (not shown).
- The adjacent pixel difference calculation unit 85 reads the target adjacent pixel [A'] from the built-in buffer of the target adjacent pixel reading unit 84, and reads the reference adjacent pixel [B'] corresponding to the target adjacent pixel from the built-in buffer of the reference adjacent pixel determination unit 83. In step S71, the adjacent pixel difference calculation unit 85 calculates the difference between the target adjacent pixel [A'] and the reference adjacent pixel [B'] read from the respective built-in buffers, and accumulates the residual [A'−B'] in a built-in buffer (not shown).
- In step S72, the intra prediction unit 86 selects one intra prediction mode from among the nine types of intra prediction modes described above with reference to FIGS. 16 and 17.
- In step S73, the intra prediction unit 86 performs an intra prediction process using the difference (residual) in the selected intra prediction mode. That is, the intra prediction unit 86 reads the residual [A'−B'] for the adjacent pixels from the built-in buffer of the adjacent pixel difference calculation unit 85, performs intra prediction on the target block in the selected intra prediction mode [mode] using the read residual [A'−B'], and generates the intra prediction image Ipred(A'−B')[mode].
- Next, a secondary residual is generated. That is, when the intra prediction image Ipred(A'−B')[mode] based on the difference is generated, the corresponding primary residual (A−B) is read from the target block difference buffer 87, and the secondary residual, which is the difference between the primary residual and the intra prediction image Ipred(A'−B')[mode], is generated and output to the motion prediction / compensation unit 75. At this time, information on the intra prediction mode in the corresponding secondary prediction is also output to the motion prediction / compensation unit 75.
- In step S75, it is determined whether the processing for all the intra prediction modes has been completed. If it is determined that the processing has not been completed, the process returns to step S72, and the subsequent processing is repeated; that is, another intra prediction mode is selected in step S72. If it is determined in step S75 that the processing for all the intra prediction modes has been completed, the process proceeds to step S76.
- In step S76, the motion prediction / compensation unit 75 compares the secondary residuals of the respective intra prediction modes from the secondary prediction unit 76, and determines the intra prediction mode whose secondary residual is considered to give the best coding efficiency as the intra prediction mode of the target block. That is, the intra prediction mode with the smallest secondary residual is determined as the intra prediction mode of the target block.
- In step S77, the motion prediction / compensation unit 75 further compares the secondary residual in the determined intra prediction mode with the primary residual, and determines whether or not to use secondary prediction. That is, when the secondary residual is determined to give the better coding efficiency, it is determined that secondary prediction is used, and the difference between the inter prediction image and the secondary residual becomes the candidate for inter prediction as the predicted image. When the primary residual is determined to give the better coding efficiency, it is determined that secondary prediction is not used, and the predicted image obtained in step S52 of FIG. 28 becomes the candidate for inter prediction.
- In other words, only when the secondary residual gives better coding efficiency than the primary residual is the secondary residual encoded and sent to the decoding side.
- In the determination in step S77, the values of the residuals themselves may be compared, the smaller one being determined to have the better coding efficiency, or the cost function value represented by Expression (78) or Expression (79) described above may be calculated to determine which gives the better coding efficiency.
- As described above, even when the reference adjacent pixel lies outside the image frame, the end point processing is performed to determine the pixel value of the reference adjacent pixel, so that secondary prediction can be performed. Coding efficiency can thereby be improved.
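- A hedged sketch of this encoder-side decision, following the shape of the secondary residual described later as Expression (80), with the intra prediction step abstracted as a caller-supplied function intra_predict(mode, diff_adj) and sum of absolute values standing in for the coding-efficiency comparison (both are assumptions):

    def secondary_prediction_choice(primary_res, target_adj, ref_adj,
                                    intra_predict, modes):
        diff_adj = [a - b for a, b in zip(target_adj, ref_adj)]  # [A' - B']

        def sad(res):
            return sum(abs(v) for row in res for v in row)

        best_mode, best_res = None, None
        for mode in modes:                             # modes assumed non-empty
            ipred = intra_predict(mode, diff_adj)      # Ipred(A'-B')[mode]
            res2 = [[p - q for p, q in zip(pr, ir)]    # secondary residual
                    for pr, ir in zip(primary_res, ipred)]
            if best_res is None or sad(res2) < sad(best_res):
                best_mode, best_res = mode, res2
        use_secondary = sad(best_res) < sad(primary_res)
        return use_secondary, best_mode, best_res if use_secondary else primary_res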
- the encoded compressed image is transmitted via a predetermined transmission path and decoded by an image decoding device.
- FIG. 31 shows the configuration of an embodiment of an image decoding apparatus as an image processing apparatus to which the present invention is applied.
- The image decoding apparatus 101 includes an accumulation buffer 111, a lossless decoding unit 112, an inverse quantization unit 113, an inverse orthogonal transform unit 114, a calculation unit 115, a deblocking filter 116, a screen rearrangement buffer 117, a D/A conversion unit 118, a frame memory 119, a switch 120, an intra prediction unit 121, a motion prediction / compensation unit 122, a secondary prediction unit 123, a reference adjacency determination unit 124, and a switch 125.
- the accumulation buffer 111 accumulates the transmitted compressed image.
- the lossless decoding unit 112 decodes the information supplied from the accumulation buffer 111 and encoded by the lossless encoding unit 66 in FIG. 3 by a method corresponding to the encoding method of the lossless encoding unit 66.
- the inverse quantization unit 113 inversely quantizes the image decoded by the lossless decoding unit 112 by a method corresponding to the quantization method of the quantization unit 65 in FIG.
- the inverse orthogonal transform unit 114 performs inverse orthogonal transform on the output of the inverse quantization unit 113 by a method corresponding to the orthogonal transform method of the orthogonal transform unit 64 in FIG.
- The output subjected to the inverse orthogonal transform is added to the predicted image supplied from the switch 125 by the calculation unit 115 and is thereby decoded.
- The deblocking filter 116 removes block distortion from the decoded image, then supplies the image to the frame memory 119 for accumulation and also outputs it to the screen rearrangement buffer 117.
- The screen rearrangement buffer 117 rearranges the images. That is, the order of frames rearranged into encoding order by the screen rearrangement buffer 62 in FIG. 3 is restored to the original display order.
- the D / A conversion unit 118 performs D / A conversion on the image supplied from the screen rearrangement buffer 117, and outputs and displays the image on a display (not shown).
- the switch 120 reads the inter-processed image and the referenced image from the frame memory 119 and outputs them to the motion prediction / compensation unit 122, and also reads an image used for intra prediction from the frame memory 119, and sends it to the intra prediction unit 121. Supply.
- the information indicating the intra prediction mode obtained by decoding the header information is supplied from the lossless decoding unit 112 to the intra prediction unit 121.
- the intra prediction unit 121 generates a prediction image based on this information, and outputs the generated prediction image to the switch 125.
- the motion prediction / compensation unit 122 is supplied with prediction mode information, motion vector information, reference frame information, and the like from the lossless decoding unit 112.
- The motion prediction / compensation unit 122 is also supplied, from the lossless decoding unit 112, with the secondary prediction flag indicating that secondary prediction is performed and with information indicating the intra prediction mode in the secondary prediction.
- the motion prediction / compensation unit 122 refers to the secondary prediction flag from the lossless decoding unit 112 and determines whether the secondary prediction process is applied. When the motion prediction / compensation unit 122 determines that the secondary prediction process is applied, the motion prediction / compensation unit 122 outputs the result to the secondary prediction unit 123 and causes the secondary prediction unit 123 to perform secondary prediction.
- the motion prediction / compensation unit 122 performs motion prediction and compensation processing on the image based on the motion vector information and the reference frame information, and generates a predicted image. That is, the predicted image of the target block is generated using the pixel value of the reference block associated with the target block by a motion vector in the reference frame. Then, the motion prediction / compensation unit 122 adds the generated prediction image and the prediction difference value from the secondary prediction unit 123, and outputs the result to the switch 125 as a prediction image.
- the secondary prediction unit 123 performs secondary prediction using the difference between the target adjacent pixel and the reference adjacent pixel read from the frame memory 119. That is, the secondary prediction unit 123 performs intra prediction on the target block in the intra prediction mode in the secondary prediction from the lossless decoding unit 112, generates an intra prediction image, and uses the motion prediction / compensation unit 122 as a prediction difference value. Output to.
- When secondary prediction is not applied, the motion prediction / compensation unit 122 performs motion prediction and compensation processing on the image based on the motion vector information and the reference frame information to generate a predicted image, and outputs the predicted image generated in the inter prediction mode to the switch 125.
- the switch 125 selects the prediction image (or the prediction image and the prediction difference value) generated by the motion prediction / compensation unit 122 or the intra prediction unit 121, and supplies the selected prediction image to the calculation unit 115.
- FIG. 32 is a block diagram illustrating a detailed configuration example of the secondary prediction unit.
- The secondary prediction unit 123 includes a reference block address calculation unit 131, a reference adjacent address calculation unit 132, a reference adjacent pixel determination unit 133, a target adjacent pixel reading unit 134, an adjacent pixel difference calculation unit 135, and an intra prediction unit 136.
- The reference block address calculation unit 131, the reference adjacent address calculation unit 132, the reference adjacent pixel determination unit 133, the target adjacent pixel reading unit 134, and the adjacent pixel difference calculation unit 135 in FIG. 32 basically perform the same processing as the reference block address calculation unit 81, the reference adjacent address calculation unit 82, the reference adjacent pixel determination unit 83, the target adjacent pixel reading unit 84, and the adjacent pixel difference calculation unit 85 in FIG. 8.
- The motion prediction / compensation unit 122 supplies the motion vector (dx, dy) for the target block to the reference block address calculation unit 131, and supplies the target block address (x, y) to the reference block address calculation unit 131 and the target adjacent pixel reading unit 134. The reference block address calculation unit 131 calculates the reference block address (x+dx, y+dy) from the target block address (x, y) and the motion vector (dx, dy) for the target block, and supplies the calculated reference block address (x+dx, y+dy) to the reference adjacent address calculation unit 132.
- The reference adjacent address calculation unit 132 calculates the reference adjacent address, which is the address of the reference adjacent pixel, based on the reference block address (x+dx, y+dy) and the relative address of the target adjacent pixel adjacent to the target block, and supplies the calculated reference adjacent address (x+dx+δx, y+dy+δy) to the reference adjacency determination unit 124.
- The reference adjacent pixel determination unit 133 receives from the reference adjacency determination unit 124 the determination result as to whether or not the reference adjacent pixel exists within the image frame of the reference frame.
- When the reference adjacent pixel exists within the image frame, the reference adjacent pixel determination unit 133 reads the adjacent pixels defined in the H.264/AVC format from the frame memory 119 and accumulates them in a built-in buffer (not shown).
- When the reference adjacent pixel does not exist within the image frame, the reference adjacent pixel determination unit 133 performs the end point processing described above to determine the pixel value of the reference adjacent pixel. Then, the reference adjacent pixel determination unit 133 reads the determined pixel value from the frame memory 119 and accumulates it in a built-in buffer (not shown).
- The target adjacent pixel reading unit 134 reads the pixel values of the target adjacent pixels from the frame memory 119 using the target block address (x, y) from the motion prediction / compensation unit 122, and accumulates them in a built-in buffer (not shown).
- The adjacent pixel difference calculation unit 135 reads the target adjacent pixel [A'] from the built-in buffer of the target adjacent pixel reading unit 134, and reads the reference adjacent pixel [B'] corresponding to the target adjacent pixel from the built-in buffer of the reference adjacent pixel determination unit 133. The adjacent pixel difference calculation unit 135 then calculates the difference between the target adjacent pixel [A'] and the reference adjacent pixel [B'] read from the respective built-in buffers, and accumulates the adjacent pixel difference value [A'−B'] in a built-in buffer (not shown).
- The intra prediction unit 136 reads the residual [A'−B'] for the adjacent pixels from the built-in buffer of the adjacent pixel difference calculation unit 135. The intra prediction unit 136 performs intra prediction on the target block in the intra prediction mode [mode] from the lossless decoding unit 112 using the adjacent pixel difference value [A'−B'], generates the intra prediction image Ipred(A'−B')[mode], and outputs the generated intra prediction image to the motion prediction / compensation unit 122 as the prediction difference value.
- Note that the circuit that performs intra prediction as the secondary prediction in the intra prediction unit 136 in the example of FIG. 32 can be shared with the intra prediction unit 121.
- the motion prediction / compensation unit 122 determines whether or not the secondary prediction is performed on the target block by the secondary prediction flag decoded by the lossless decoding unit 112.
- When the secondary prediction flag indicates that secondary prediction has been performed, the image decoding apparatus 101 performs inter prediction processing based on the secondary prediction; otherwise, the image decoding apparatus 101 performs normal inter prediction processing.
- Here, the secondary prediction in the image encoding device 51 is, as described above, the process of generating the secondary residual Res_2nd as in the following Expression (80).
- Res_2nd = (A − B) − Ipred(A'−B')[mode] ... (80)
- Here, Ipred()[mode] indicates a prediction image generated in the intra prediction mode mode, taking the pixel values in () as input.
- On the decoding side, the secondary residual Res_2nd is the value obtained as a result of inverse quantization and inverse orthogonal transform; in other words, it is the value input from the inverse orthogonal transform unit 114 to the calculation unit 115.
- Accordingly, the prediction difference value Ipred(A'−B')[mode] is generated by the secondary prediction unit 123, the pixel value [B] of the reference block is generated by the motion prediction / compensation unit 122, and these are output to the calculation unit 115.
- As a result, the pixel value [A] of the target block is obtained as the output of the calculation unit 115.
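- Rearranging Expression (80), the decoder-side reconstruction amounts to A = Res_2nd + Ipred(A'−B')[mode] + B; a minimal sketch over nested lists follows (the function name is hypothetical):

    def reconstruct_target_block(res_2nd, ipred_diff, ref_block):
        # res_2nd comes from the inverse orthogonal transform, ipred_diff
        # from the secondary prediction unit 123, ref_block [B] from motion
        # compensation; summing them yields the target block [A].
        return [[r + d + b for r, d, b in zip(rr, dr, br)]
                for rr, dr, br in zip(res_2nd, ipred_diff, ref_block)]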
- In step S131, the accumulation buffer 111 accumulates the transmitted image.
- step S132 the lossless decoding unit 112 decodes the compressed image supplied from the accumulation buffer 111. That is, the I picture, P picture, and B picture encoded by the lossless encoding unit 66 in FIG. 3 are decoded.
- At this time, the motion vector information, the reference frame information, the prediction mode information (information indicating the intra prediction mode or the inter prediction mode), the secondary prediction flag, the information indicating the intra prediction mode in the secondary prediction, and the like are also decoded.
- That is, when the prediction mode information is intra prediction mode information, the prediction mode information is supplied to the intra prediction unit 121. When the prediction mode information is inter prediction mode information, the motion vector information and the reference frame information corresponding to the prediction mode information are supplied to the motion prediction / compensation unit 122. At this time, the secondary prediction flag is supplied to the motion prediction / compensation unit 122, and the information indicating the intra prediction mode in the secondary prediction is supplied to the secondary prediction unit 123.
- In step S133, the inverse quantization unit 113 inversely quantizes the transform coefficients decoded by the lossless decoding unit 112 with characteristics corresponding to those of the quantization unit 65 in FIG. 3.
- In step S134, the inverse orthogonal transform unit 114 performs an inverse orthogonal transform on the transform coefficients inversely quantized by the inverse quantization unit 113 with characteristics corresponding to those of the orthogonal transform unit 64 in FIG. 3. As a result, the difference information corresponding to the input of the orthogonal transform unit 64 in FIG. 3 (the output of the calculation unit 63) is decoded.
- In step S135, the calculation unit 115 adds, to the difference information, the predicted image that is selected in the process of step S139 described later and input via the switch 125. As a result, the original image is decoded.
- In step S136, the deblocking filter 116 filters the image output from the calculation unit 115, thereby removing block distortion. In step S137, the frame memory 119 stores the filtered image.
- In step S138, the intra prediction unit 121 or the motion prediction / compensation unit 122 performs the image prediction process corresponding to the prediction mode information supplied from the lossless decoding unit 112.
- That is, when the intra prediction mode information is supplied from the lossless decoding unit 112, the intra prediction unit 121 performs an intra prediction process in the intra prediction mode. When the inter prediction mode information is supplied, the motion prediction / compensation unit 122 performs a motion prediction / compensation process in the inter prediction mode. At this time, the motion prediction / compensation unit 122 refers to the secondary prediction flag and performs either the inter prediction process based on the secondary prediction or the normal inter prediction process.
- By the process in step S138, the predicted image generated by the intra prediction unit 121 or the predicted image (or the predicted image and the prediction difference value) generated by the motion prediction / compensation unit 122 is supplied to the switch 125.
- In step S139, the switch 125 selects the predicted image. That is, the predicted image generated by the intra prediction unit 121 or the predicted image generated by the motion prediction / compensation unit 122 is supplied, so the supplied predicted image is selected and supplied to the calculation unit 115, where it is added to the output of the inverse orthogonal transform unit 114 in step S135 as described above.
- In step S140, the screen rearrangement buffer 117 performs rearrangement. That is, the order of frames rearranged for encoding by the screen rearrangement buffer 62 of the image encoding device 51 is restored to the original display order.
- step S141 the D / A conversion unit 118 D / A converts the image from the screen rearrangement buffer 117. This image is output to a display (not shown), and the image is displayed.
- step S171 the intra prediction unit 121 determines whether the target block is intra-coded.
- When the intra prediction mode information is supplied from the lossless decoding unit 112 to the intra prediction unit 121, the intra prediction unit 121 determines in step S171 that the target block is intra-coded, and the process proceeds to step S172.
- the intra prediction unit 121 acquires the intra prediction mode information in step S172, and performs intra prediction in step S173.
- the intra prediction unit 121 performs intra prediction according to the intra prediction mode information acquired in step S172, and generates a predicted image.
- the generated prediction image is output to the switch 125.
- On the other hand, if it is determined in step S171 that the target block is not intra-coded, the process proceeds to step S174.
- step S174 the motion prediction / compensation unit 122 acquires the prediction mode information from the lossless decoding unit 112 and the like.
- the inter prediction mode information, the reference frame information, the motion vector information, and the secondary prediction flag are supplied from the lossless decoding unit 112 to the motion prediction / compensation unit 122.
- the motion prediction / compensation unit 122 acquires inter prediction mode information, reference frame information, and motion vector information.
- the motion prediction / compensation unit 122 acquires a secondary prediction flag in step S175, and determines in step S176 whether secondary prediction processing is applied to the target block. If it is determined in step S176 that the secondary prediction process has not been applied to the target block, the process proceeds to step S177.
- In step S177, the motion prediction / compensation unit 122 performs normal inter prediction. That is, when the image to be processed is an image to be inter-predicted, the necessary image is read from the frame memory 119 and supplied to the motion prediction / compensation unit 122 via the switch 120. In step S177, the motion prediction / compensation unit 122 performs motion prediction in the inter prediction mode based on the motion vector acquired in step S174 to generate a predicted image, and the generated predicted image is output to the switch 125.
- step S176 If it is determined in step S176 that the secondary prediction process is applied to the target block, the process proceeds to step S178.
- the lossless decoding unit 112 also decodes information indicating the intra prediction mode related to the secondary prediction, and supplies the decoded information to the secondary prediction unit 123.
- step S178 the secondary prediction unit 123 acquires information indicating the intra prediction mode in the secondary prediction supplied from the lossless decoding unit 112.
- In step S179, inter prediction based on the secondary prediction, that is, secondary inter prediction processing, is performed. This secondary inter prediction process will be described later with reference to FIG. 35.
- By the secondary inter prediction process, inter prediction is performed to generate a predicted image, and secondary prediction is performed to generate the prediction difference value; these are added and output to the switch 125.
- step S191 the motion prediction / compensation unit 122 performs motion prediction in the inter prediction mode based on the motion vector acquired in step S174 in FIG. 34, and generates an inter prediction image. That is, by the process of step S191, a predicted image of the target block is generated using the pixel value of the reference block associated with the target block by the motion vector in the reference frame.
- At this time, the motion prediction / compensation unit 122 supplies the target block address (x, y) and the motion vector (dx, dy) to the reference block address calculation unit 131, and supplies the target block address (x, y) to the target adjacent pixel reading unit 134.
- In step S192, the secondary prediction unit 123 and the reference adjacency determination unit 124 perform the reference adjacent pixel determination process. The details of the reference adjacent pixel determination process are the same as those described above with reference to FIG. 29, and a description thereof is therefore omitted.
- By the process in step S192, it is determined whether or not the reference adjacent pixel adjacent to the reference block exists within the image frame of the reference frame, and the end point processing is performed according to the determination result, whereby the pixel values of the reference adjacent pixels are determined.
- the determined pixel value of the reference adjacent pixel is accumulated in the built-in buffer of the reference adjacent pixel determination unit 133.
- The target adjacent pixel reading unit 134 reads the pixel values of the target adjacent pixels from the frame memory 119 using the target block address (x, y) from the motion prediction / compensation unit 122, and accumulates them in a built-in buffer (not shown).
- The adjacent pixel difference calculation unit 135 reads the target adjacent pixel [A'] from the built-in buffer of the target adjacent pixel reading unit 134, and reads the reference adjacent pixel [B'] corresponding to the target adjacent pixel from the built-in buffer of the reference adjacent pixel determination unit 133. In step S193, the adjacent pixel difference calculation unit 135 calculates the adjacent pixel difference value [A'−B'], which is the difference between the target adjacent pixel [A'] and the reference adjacent pixel [B'] read from the respective built-in buffers, and accumulates it in a built-in buffer (not shown).
- In step S194, the intra prediction unit 136 performs an intra prediction process using the difference in the intra prediction mode in the secondary prediction acquired in step S178 of FIG. 34, and generates the prediction difference value Ipred(A'−B')[mode]. That is, the intra prediction unit 136 reads the adjacent pixel difference value [A'−B'] from the built-in buffer of the adjacent pixel difference calculation unit 135, performs intra prediction on the target block in the acquired intra prediction mode [mode] using the read adjacent pixel difference value [A'−B'], and generates the prediction difference value Ipred(A'−B')[mode]. The generated prediction difference value is output to the motion prediction / compensation unit 122.
- step S195 the motion prediction / compensation unit 122 adds the inter prediction image generated in step S191 and the prediction difference value from the intra prediction unit 136, and outputs the result to the switch 125 as a prediction image.
- The inter prediction image and the prediction difference value are output as a predicted image to the calculation unit 115 by the switch 125 in step S139 of FIG. 33. Then, in step S135 of FIG. 33, the inter prediction image and the prediction difference value are added by the calculation unit 115 to the difference information from the inverse orthogonal transform unit 114, whereby the image of the target block is decoded.
- As described above, the end point processing is performed when the reference adjacent pixel is outside the image frame; therefore, secondary prediction can be performed even in this case.
- the present invention is not limited to this, and can be applied to any encoding device and decoding device that perform block-based motion prediction / compensation.
- the present invention can also be applied to the intra 8 ⁇ 8 prediction mode, the intra 16 ⁇ 16 prediction mode, and the intra prediction mode for color difference signals.
- In the above description, the H.264/AVC format is used as the encoding format, but other encoding and decoding formats can also be used.
- The present invention can be applied to image encoding devices and image decoding devices used when receiving image information (bitstreams) compressed by an orthogonal transform such as the discrete cosine transform and by motion compensation, as in MPEG or H.26x, via network media such as satellite broadcasting, cable television, the Internet, or mobile phones. The present invention can also be applied to image encoding devices and image decoding devices used when processing on storage media such as optical disks, magnetic disks, and flash memory. Furthermore, the present invention can be applied to motion prediction / compensation devices included in such image encoding devices and image decoding devices.
- the series of processes described above can be executed by hardware or software.
- When the series of processes is executed by software, a program constituting the software is installed in a computer.
- the computer includes a computer incorporated in dedicated hardware, a general-purpose personal computer capable of executing various functions by installing various programs, and the like.
- FIG. 36 is a block diagram showing an example of the hardware configuration of a computer that executes the above-described series of processing by a program.
- In the computer, a CPU (Central Processing Unit) 301, a ROM (Read Only Memory) 302, and a RAM (Random Access Memory) 303 are connected to one another by a bus 304.
- An input / output interface 305 is further connected to the bus 304.
- An input unit 306, an output unit 307, a storage unit 308, a communication unit 309, and a drive 310 are connected to the input / output interface 305.
- the input unit 306 includes a keyboard, a mouse, a microphone, and the like.
- the output unit 307 includes a display, a speaker, and the like.
- the storage unit 308 includes a hard disk, a nonvolatile memory, and the like.
- the communication unit 309 includes a network interface and the like.
- the drive 310 drives a removable medium 311 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory.
- In the computer configured as described above, the CPU 301 loads the program stored in the storage unit 308 into the RAM 303 via the input / output interface 305 and the bus 304 and executes it, whereby the series of processes described above is performed.
- the program executed by the computer (CPU 301) can be provided by being recorded on the removable medium 311 as a package medium or the like, for example.
- the program can be provided via a wired or wireless transmission medium such as a local area network, the Internet, or digital broadcasting.
- the program can be installed in the storage unit 308 via the input / output interface 305 by attaching the removable medium 311 to the drive 310.
- the program can be received by the communication unit 309 via a wired or wireless transmission medium and installed in the storage unit 308.
- the program can be installed in advance in the ROM 302 or the storage unit 308.
- Note that the program executed by the computer may be a program in which processing is performed in time series in the order described in this specification, or a program in which processing is performed in parallel or at necessary timing, such as when a call is made.
Description
For a reference adjacent pixel for which
x+dx+δx < 0 or y+dy+δy < 0
holds, the end point processing can be performed with the pixel value of that pixel set to 2^(n-1) (n being the bit depth of the pixel values).
When
x+dx+δx > WIDTH - 1
holds, the end point processing can be performed using, as the pixel value of the reference adjacent pixel, the pixel value indicated by the address (WIDTH - 1, y+dy+δy).
When
y+dy+δy > HEIGHT - 1
holds, the end point processing can be performed using, as the pixel value of the reference adjacent pixel, the pixel value indicated by the address (x+dx+δx, HEIGHT - 1).
When
x+dx+δx > WIDTH - 1 and y+dy+δy > HEIGHT - 1
hold, the end point processing can be performed using, as the pixel value of the reference adjacent pixel, the pixel value indicated by the address (WIDTH - 1, HEIGHT - 1).
x+dx+δx < 0, or y+dy+δy < 0
For a reference adjacent pixel for which the above holds, the endpoint processing of setting its pixel value to 2^(n-1) can be performed (n being the number of bits of a pixel value).
x+dx+δx > WIDTH - 1
When the above holds, the endpoint processing of using, as the pixel value of the reference adjacent pixel, the pixel value indicated by the address (WIDTH - 1, y+dy+δy) can be performed.
y+dy+δy > HEIGHT - 1
When the above holds, the endpoint processing of using, as the pixel value of the reference adjacent pixel, the pixel value indicated by the address (x+dx+δx, HEIGHT - 1) can be performed.
x+dx+δx > WIDTH - 1, and y+dy+δy > HEIGHT - 1
When the above holds, the endpoint processing of using, as the pixel value of the reference adjacent pixel, the pixel value indicated by the address (WIDTH - 1, HEIGHT - 1) can be performed.
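As an illustration only, here is a minimal C sketch of the endpoint processing described above, assuming 8-bit samples unless n says otherwise and a hypothetical accessor ref_pixel() for the reconstructed reference frame; it sketches the stated rules, not the embodiment's implementation.

#include <stdint.h>

/* Hypothetical accessor for the decoded reference frame. */
extern uint8_t ref_pixel(int x, int y);

/* Endpoint processing for a reference adjacent pixel whose relative
 * address is (px, py) = (x+dx+δx, y+dy+δy).
 * n is the bit depth of a pixel value. */
uint8_t endpoint_pixel(int px, int py, int width, int height, int n)
{
    if (px < 0 || py < 0)
        return (uint8_t)(1 << (n - 1));      /* 2^(n-1), e.g. 128 for n = 8 */
    if (px > width - 1 && py > height - 1)
        return ref_pixel(width - 1, height - 1);
    if (px > width - 1)
        return ref_pixel(width - 1, py);
    if (py > height - 1)
        return ref_pixel(px, height - 1);
    return ref_pixel(px, py);                /* inside the picture frame */
}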
FIG. 3 shows the configuration of an embodiment of an image encoding device as an image processing device to which the present invention is applied.
FIG. 4 is a diagram showing examples of block sizes for motion prediction/compensation in the H.264/AVC scheme. In the H.264/AVC scheme, motion prediction/compensation is performed with variable block sizes.
The motion vector information for block C may be unavailable, for example because block C lies at the edge of the picture frame or has not yet been encoded. In that case, the motion vector information for block D is used in place of the motion vector information for block C.
mvdE = mvE - pmvE ・・・(6)
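The median prediction and the differential of equation (6) can be sketched in C as follows; mv_t, the neighbour naming, and the availability flag are hypothetical illustration, not the embodiment's code.

typedef struct { int x, y; } mv_t;

static int median3(int a, int b, int c)
{
    int lo = a < b ? a : b;
    int hi = a < b ? b : a;
    if (c < lo) return lo;
    if (c > hi) return hi;
    return c;
}

/* Median prediction of the motion vector for block E from neighbours
 * A (left), B (above), C (above-right), D (above-left). When C is
 * unavailable, D substitutes for it, per the text above. */
mv_t predict_mv(mv_t mvA, mv_t mvB, mv_t mvC, mv_t mvD, int c_available)
{
    mv_t c = c_available ? mvC : mvD;
    mv_t pmvE = { median3(mvA.x, mvB.x, c.x),
                  median3(mvA.y, mvB.y, c.y) };
    return pmvE;
}

/* Differential motion vector of equation (6): mvdE = mvE - pmvE. */
mv_t mvd(mv_t mvE, mv_t pmvE)
{
    mv_t d = { mvE.x - pmvE.x, mvE.y - pmvE.y };
    return d;
}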
FIG. 8 is a block diagram showing a detailed configuration example of the secondary prediction unit.
Next, the operations of the secondary prediction unit 76 and the reference adjacent determination unit 77 will be described with reference to FIG. 9. In the following description, the case where the block size of the target block is 4×4 pixels is taken as an example.
(δx,δy) = {(-1,-1),(0,-1),(1,-1),(2,-1),(3,-1),(4,-1),(5,-1),
(6,-1),(7,-1),(-1,0),(-1,1),(-1,2),(-1,3)}
・・・(7)
x+dx+δx < 0, or y+dy+δy < 0
・・・(8)
x+dx+δx > WIDTH - 1 ・・・(9)
y+dy+δy > HEIGHT - 1 ・・・(10)
(δx,δy) = {(-1,-1),(0,-1),(1,-1),(2,-1),(3,-1),(4,-1),(5,-1),
(6,-1),(7,-1),(8,-1),(9,-1),(10,-1),(11,-1),(12,-1),
(13,-1),(14,-1),(15,-1),(-1,0),(-1,1),(-1,2),(-1,3),
(-1,4),(-1,5),(-1,6),(-1,7)}
・・・(11)
(δx,δy) = {(-1,-1),(0,-1),(1,-1),(2,-1),(3,-1),(4,-1),(5,-1),
(6,-1),(7,-1),(8,-1),(9,-1),(10,-1),(11,-1),(12,-1),
(13,-1),(14,-1),(15,-1),(-1,0),(-1,1),(-1,2),(-1,3),
(-1,4),(-1,5),(-1,6),(-1,7),(-1,8),(-1,9),(-1,10),
(-1,11),(-1,12),(-1,13),(-1,14),(-1,15)}
・・・(12)
(δx,δy) = {(-1,-1),(0,-1),(1,-1),(2,-1),(3,-1),(4,-1),(5,-1),
(6,-1),(7,-1),(-1,0),(-1,1),(-1,2),(-1,3),
(-1,4),(-1,5),(-1,6),(-1,7)}
・・・(13)
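As a sketch, the offset set of equation (7) and the out-of-frame tests of equations (8) through (10) can be written in C as below; for larger blocks the sets of equations (11) through (13) would replace the table. All names are hypothetical.

#include <stdbool.h>

/* Offsets (δx, δy) of equation (7) for a 4x4 target block. */
static const int DELTA4x4[][2] = {
    {-1,-1},{0,-1},{1,-1},{2,-1},{3,-1},{4,-1},{5,-1},{6,-1},{7,-1},
    {-1,0},{-1,1},{-1,2},{-1,3}
};
#define N_DELTA4x4 (sizeof(DELTA4x4) / sizeof(DELTA4x4[0]))

/* True when the reference adjacent pixel at (x+dx+δx, y+dy+δy)
 * lies outside the picture frame, per equations (8)-(10). */
bool outside_frame(int x, int y, int dx, int dy,
                   int d_x, int d_y, int width, int height)
{
    int px = x + dx + d_x;
    int py = y + dy + d_y;
    return px < 0 || py < 0 || px > width - 1 || py > height - 1;
}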
Next, the encoding processing of the image encoding device 51 of FIG. 3 will be described with reference to the flowchart of FIG. 13.
Next, the prediction processing in step S21 of FIG. 13 will be described with reference to the flowchart of FIG. 14.
Next, the intra prediction modes defined in the H.264/AVC scheme will be described.
Predicted pixel value of pixels a, e, i, m = A
Predicted pixel value of pixels b, f, j, n = B
Predicted pixel value of pixels c, g, k, o = C
Predicted pixel value of pixels d, h, l, p = D ・・・(14)
Predicted pixel value of pixels a, b, c, d = I
Predicted pixel value of pixels e, f, g, h = J
Predicted pixel value of pixels i, j, k, l = K
Predicted pixel value of pixels m, n, o, p = L ・・・(15)
(A+B+C+D+I+J+K+L+4) >> 3 ・・・(16)
(I+J+K+L+2) >> 2 ・・・(17)
(A+B+C+D+2) >> 2 ・・・(18)
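A C sketch of the Vertical, Horizontal, and DC predictions of equations (14) through (18); the fallback value 128 for the case where neither neighbour row is available follows the H.264/AVC convention for 8-bit samples and is an assumption here.

#include <stdint.h>
#include <stdbool.h>

/* 4x4 intra prediction per equations (14)-(18).
 * above = {A,B,C,D}, left = {I,J,K,L}; pred is row-major 4x4. */
void intra4x4_vertical(uint8_t pred[16], const uint8_t above[4])
{
    for (int y = 0; y < 4; y++)
        for (int x = 0; x < 4; x++)
            pred[4*y + x] = above[x];             /* eq. (14) */
}

void intra4x4_horizontal(uint8_t pred[16], const uint8_t left[4])
{
    for (int y = 0; y < 4; y++)
        for (int x = 0; x < 4; x++)
            pred[4*y + x] = left[y];              /* eq. (15) */
}

void intra4x4_dc(uint8_t pred[16], const uint8_t above[4],
                 const uint8_t left[4], bool above_ok, bool left_ok)
{
    int dc;
    if (above_ok && left_ok)                      /* eq. (16) */
        dc = (above[0]+above[1]+above[2]+above[3]
            + left[0]+left[1]+left[2]+left[3] + 4) >> 3;
    else if (left_ok)                             /* eq. (17) */
        dc = (left[0]+left[1]+left[2]+left[3] + 2) >> 2;
    else if (above_ok)                            /* eq. (18) */
        dc = (above[0]+above[1]+above[2]+above[3] + 2) >> 2;
    else
        dc = 128;                                 /* neither row available */
    for (int i = 0; i < 16; i++)
        pred[i] = (uint8_t)dc;
}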
Predicted pixel value of pixel a = (A+2B+C+2) >> 2
Predicted pixel value of pixels b, e = (B+2C+D+2) >> 2
Predicted pixel value of pixels c, f, i = (C+2D+E+2) >> 2
Predicted pixel value of pixels d, g, j, m = (D+2E+F+2) >> 2
Predicted pixel value of pixels h, k, n = (E+2F+G+2) >> 2
Predicted pixel value of pixels l, o = (F+2G+H+2) >> 2
Predicted pixel value of pixel p = (G+3H+2) >> 2
・・・(19)
Predicted pixel value of pixel m = (J+2K+L+2) >> 2
Predicted pixel value of pixels i, n = (I+2J+K+2) >> 2
Predicted pixel value of pixels e, j, o = (M+2I+J+2) >> 2
Predicted pixel value of pixels a, f, k, p = (A+2M+I+2) >> 2
Predicted pixel value of pixels b, g, l = (M+2A+B+2) >> 2
Predicted pixel value of pixels c, h = (A+2B+C+2) >> 2
Predicted pixel value of pixel d = (B+2C+D+2) >> 2
・・・(20)
Predicted pixel value of pixels a, j = (M+A+1) >> 1
Predicted pixel value of pixels b, k = (A+B+1) >> 1
Predicted pixel value of pixels c, l = (B+C+1) >> 1
Predicted pixel value of pixel d = (C+D+1) >> 1
Predicted pixel value of pixels e, n = (I+2M+A+2) >> 2
Predicted pixel value of pixels f, o = (M+2A+B+2) >> 2
Predicted pixel value of pixels g, p = (A+2B+C+2) >> 2
Predicted pixel value of pixel h = (B+2C+D+2) >> 2
Predicted pixel value of pixel i = (M+2I+J+2) >> 2
Predicted pixel value of pixel m = (I+2J+K+2) >> 2
・・・(21)
Predicted pixel value of pixels a, g = (M+I+1) >> 1
Predicted pixel value of pixels b, h = (I+2M+A+2) >> 2
Predicted pixel value of pixel c = (M+2A+B+2) >> 2
Predicted pixel value of pixel d = (A+2B+C+2) >> 2
Predicted pixel value of pixels e, k = (I+J+1) >> 1
Predicted pixel value of pixels f, l = (M+2I+J+2) >> 2
Predicted pixel value of pixels i, o = (J+K+1) >> 1
Predicted pixel value of pixels j, p = (I+2J+K+2) >> 2
Predicted pixel value of pixel m = (K+L+1) >> 1
Predicted pixel value of pixel n = (J+2K+L+2) >> 2
・・・(22)
Predicted pixel value of pixel a = (A+B+1) >> 1
Predicted pixel value of pixels b, i = (B+C+1) >> 1
Predicted pixel value of pixels c, j = (C+D+1) >> 1
Predicted pixel value of pixels d, k = (D+E+1) >> 1
Predicted pixel value of pixel l = (E+F+1) >> 1
Predicted pixel value of pixel e = (A+2B+C+2) >> 2
Predicted pixel value of pixels f, m = (B+2C+D+2) >> 2
Predicted pixel value of pixels g, n = (C+2D+E+2) >> 2
Predicted pixel value of pixels h, o = (D+2E+F+2) >> 2
Predicted pixel value of pixel p = (E+2F+G+2) >> 2
・・・(23)
Predicted pixel value of pixel a = (I+J+1) >> 1
Predicted pixel value of pixel b = (I+2J+K+2) >> 2
Predicted pixel value of pixels c, e = (J+K+1) >> 1
Predicted pixel value of pixels d, f = (J+2K+L+2) >> 2
Predicted pixel value of pixels g, i = (K+L+1) >> 1
Predicted pixel value of pixels h, j = (K+3L+2) >> 2
Predicted pixel value of pixels k, l, m, n, o, p = L
・・・(24)
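Equations (19) through (24) each follow a closed form in the raster position of the pixel; as one example, a C sketch of the Diagonal-Down-Left case behind equation (19), with t[0..7] = A..H being the pixels above and above-right of the block:

#include <stdint.h>

/* Diagonal-Down-Left 4x4 prediction, the closed form behind eq. (19).
 * t[0..7] = A..H, the pixels above and above-right of the block. */
void intra4x4_ddl(uint8_t pred[16], const uint8_t t[8])
{
    for (int y = 0; y < 4; y++) {
        for (int x = 0; x < 4; x++) {
            int i = x + y;
            pred[4*y + x] = (x == 3 && y == 3)
                ? (uint8_t)((t[6] + 3*t[7] + 2) >> 2)            /* pixel p */
                : (uint8_t)((t[i] + 2*t[i+1] + t[i+2] + 2) >> 2);
        }
    }
}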
MostProbableMode=Min(Intra_4x4_pred_modeA, Intra_4x4_pred_modeB)
・・・(25)
if(prev_intra4x4_pred_mode_flag[luma4x4BlkIdx])
Intra4x4PredMode[luma4x4BlkIdx] = MostProbableMode
else
if(rem_intra4x4_pred_mode[luma4x4BlkIdx] < MostProbableMode)
Intra4x4PredMode[luma4x4BlkIdx]=rem_intra4x4_pred_mode[luma4x4BlkIdx]
else
Intra4x4PredMode[luma4x4BlkIdx]=rem_intra4x4_pred_mode[luma4x4BlkIdx] + 1
・・・(26)
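A runnable C counterpart to the pseudocode of equations (25) and (26):

/* Decode Intra4x4PredMode from the most probable mode and the
 * transmitted syntax elements, per equations (25) and (26). */
int intra4x4_pred_mode(int modeA, int modeB,
                       int prev_flag, int rem_mode)
{
    int most_probable = modeA < modeB ? modeA : modeB;  /* eq. (25) */
    if (prev_flag)
        return most_probable;
    return rem_mode < most_probable ? rem_mode : rem_mode + 1;
}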
p'[0,-1] = (p[-1,-1] + 2*p[0,-1] + p[1,-1] + 2) >> 2
・・・(27)
p'[0,-1] = (3*p[0,-1] + p[1,-1] + 2) >> 2
・・・(28)
p'[x,-1] = (p[x-1,-1] + 2*p[x,-1] + p[x+1,-1] + 2) >>2
・・・(29)
p'[x,-1] = (p[x-1,-1] + 2*p[x,-1] + p[x+1,-1] + 2) >>2
p'[15,-1] = (p[14,-1] + 3*p[15,-1] + 2) >>2
・・・(30)
p'[-1,-1] = (p[0,-1] + 2*p[-1,-1] + p[-1,0] + 2) >>2
・・・(31)
p'[-1,-1] = (3*p[-1,-1] + p[0,-1] + 2) >>2
・・・(32)
p'[-1,-1] = (3*p[-1,-1] + p[-1,0] + 2) >>2
・・・(33)
p'[-1,0] = (p[-1,-1] + 2*p[-1,0] + p[-1,1] + 2) >>2
・・・(34)
p'[-1,0] = (3*p[-1,0] + p[-1,1] + 2) >>2
・・・(35)
p'[-1,y] = (p[-1,y-1] + 2*p[-1,y] + p[-1,y+1] + 2) >>2
・・・(36)
p'[-1,7] = (p[-1,6] + 3*p[-1,7] + 2) >>2
・・・(37)
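A C sketch of the [1,2,1] low-pass filtering of the sixteen above neighbours per equations (27) through (30); the packed src layout is an assumption for illustration.

#include <stdint.h>
#include <stdbool.h>

/* Filter the above-neighbour row for 8x8 intra prediction.
 * src[0] = p[-1,-1], src[1..16] = p[0..15,-1];
 * dst[0..15] receives p'[0..15,-1]. */
void filter_top_8x8(uint8_t dst[16], const uint8_t src[17], bool corner_ok)
{
    dst[0] = corner_ok
        ? (uint8_t)((src[0] + 2*src[1] + src[2] + 2) >> 2)   /* eq. (27) */
        : (uint8_t)((3*src[1] + src[2] + 2) >> 2);           /* eq. (28) */
    for (int x = 1; x < 15; x++)                             /* eq. (29) */
        dst[x] = (uint8_t)((src[x] + 2*src[x+1] + src[x+2] + 2) >> 2);
    dst[15] = (uint8_t)((src[15] + 3*src[16] + 2) >> 2);     /* eq. (30) */
}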
pred8x8L[x,y] = p'[x,-1] x,y=0,...,7
・・・(38)
pred8x8L[x,y] = p'[-1,y] x,y=0,...,7
・・・(39)
pred8x8L[x,y] = 128
・・・(43)
ただし、式(43)は、8ビット入力の場合を表している。
pred8x8L[x,y] = (p'[14,-1] + 3*p'[15,-1] + 2) >> 2
・・・(44)
pred8x8L[x,y] = (p'[x+y,-1] + 2*p'[x+y+1,-1] + p'[x+y+2,-1] + 2) >> 2
・・・(45)
pred8x8L[x,y] = (p'[x-y-2,-1] + 2*p'[x-y-1,-1] + p'[x-y,-1] + 2) >> 2
・・・(46)
pred8x8L[x,y] = (p'[-1,y-x-2] + 2*p'[-1,y-x-1] + p'[-1,y-x] + 2) >> 2
・・・(47)
pred8x8L[x,y] = (p'[0,-1] + 2*p'[-1,-1] + p'[-1,0] + 2) >> 2
・・・(48)
zVR = 2*x - y
・・・(49)
pred8x8L[x,y] = (p'[x-(y>>1)-1,-1] + p'[x-(y>>1),-1] + 1) >> 1
・・・(50)
pred8x8L[x,y]
= (p'[x-(y>>1)-2,-1] + 2*p'[x-(y>>1)-1,-1] + p'[x-(y>>1),-1] + 2) >> 2
・・・(51)
pred8x8L[x,y] = (p'[-1,0] + 2*p'[-1,-1] + p'[0,-1] + 2) >> 2
・・・(52)
pred8x8L[x,y] = (p'[-1,y-2*x-1] + 2*p'[-1,y-2*x-2] + p'[-1,y-2*x-3] + 2) >> 2
・・・(53)
zHD = 2*y - x
・・・(54)
pred8x8L[x,y] = (p'[-1,y-(x>>1)-1] + p'[-1,y-(x>>1)] + 1) >> 1
・・・(55)
pred8x8L[x,y]
= (p'[-1,y-(x>>1)-2] + 2*p'[-1,y-(x>>1)-1] + p'[-1,y-(x>>1)] + 2) >> 2
・・・(56)
pred8x8L[x,y] = (p'[-1,0] + 2*p'[-1,-1] + p'[0,-1] + 2) >> 2
・・・(57)
pred8x8L[x,y] = (p'[x-2*y-1,-1] + 2*p'[x-2*y-2,-1] + p'[x-2*y-3,-1] + 2) >> 2
・・・(58)
The predicted pixel values are generated as in equations (59) and (60) below.
pred8x8L[x,y] = (p'[x+(y>>1),-1] + p'[x+(y>>1)+1,-1] + 1) >> 1
・・・(59)
pred8x8L[x,y]
= (p'[x+(y>>1),-1] + 2*p'[x+(y>>1)+1,-1] + p'[x+(y>>1)+2,-1] + 2) >> 2
・・・(60)
zHU = x + 2*y
・・・(61)
pred8x8L[x,y] = (p'[-1,y+(x>>1)] + p'[-1,y+(x>>1)+1] + 1) >> 1
・・・(62)
pred8x8L[x,y] = (p'[-1,y+(x>>1)] + 2*p'[-1,y+(x>>1)+1] + p'[-1,y+(x>>1)+2] + 2) >> 2
・・・(63)
pred8x8L[x,y] = (p'[-1,6] + 3*p'[-1,7] + 2) >> 2
・・・(64)
pred8x8L[x,y] = p'[-1,7]
・・・(65)
Pred(x,y) = P(x,-1);x,y=0,…,15
・・・(66)
Pred(x,y) = P(-1,y);x,y=0,…,15
・・・(67)
Pred(x,y) = P(-1,y);x,y=0,…,7
・・・(75)
Pred(x,y) = P(x,-1);x,y=0,…,7
・・・(76)
Next, the intra prediction processing in step S31 of FIG. 14, which is the processing performed for these prediction modes, will be described with reference to the flowchart of FIG. 27. The example of FIG. 27 describes the case of the luminance signal.
D is the difference (distortion) between the original image and the decoded image, R is the amount of generated code including up to the orthogonal transform coefficients, and λ is a Lagrange multiplier given as a function of the quantization parameter QP.
D is the difference (distortion) between the original image and the decoded image, Header_Bit is the header bits for the prediction mode, and QPtoQuant is a function given as a function of the quantization parameter QP.
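For reference, these symbols belong to the two mode-decision cost functions of the H.264/AVC reference software (JM); as a sketch of the standard definitions, and not a formula stated by this document itself:

Cost(Mode) = D + λ×R   (High Complexity Mode)
Cost(Mode) = D + QPtoQuant(QP)×Header_Bit   (Low Complexity Mode)

In each case, the prediction mode giving the minimum cost is selected.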
Next, the inter motion prediction processing in step S32 of FIG. 14 will be described with reference to the flowchart of FIG. 28.
Next, the reference adjacent pixel determination processing in step S53 of FIG. 28 will be described with reference to the flowchart of FIG. 29.
FIG. 31 shows the configuration of an embodiment of an image decoding device as an image processing device to which the present invention is applied.
FIG. 32 is a block diagram showing a detailed configuration example of the secondary prediction unit.
Res_2nd = (A - B) - Ipred(A'-B')[mode] ・・・(80)
Note that Ipred()[mode] denotes a prediction image generated in intra prediction mode mode with the pixel values in () as input.
A = Res_2nd + B + Ipred(A'-B')[mode] ・・・(81)
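A C sketch of equation (80) on the encoder side and equation (81) on the decoder side for a single pixel position; ipred() is a hypothetical stand-in for the Ipred()[mode] operator described above.

/* Hypothetical intra predictor: given the neighbour difference
 * samples (A'-B'), returns the predicted value for this position
 * under the given intra prediction mode. */
extern int ipred(const int *neighbour_diff, int mode);

/* Encoder side, eq. (80): secondary residual. */
int secondary_residual(int a, int b, const int *nd, int mode)
{
    return (a - b) - ipred(nd, mode);
}

/* Decoder side, eq. (81): reconstruct the target pixel A. */
int reconstruct(int res_2nd, int b, const int *nd, int mode)
{
    return res_2nd + b + ipred(nd, mode);
}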
Next, the decoding processing executed by the image decoding device 101 will be described with reference to the flowchart of FIG. 33.
Next, the prediction processing in step S138 of FIG. 33 will be described with reference to the flowchart of FIG. 34.
Claims (20)
- An image processing device comprising:
determination means for determining, using a relative address of a target adjacent pixel adjacent to a target block in a target frame, whether a reference adjacent pixel adjacent to a reference block in a reference frame exists within the picture frame of the reference frame;
endpoint processing means for performing endpoint processing on the reference adjacent pixel when the determination means determines that the reference adjacent pixel does not exist within the picture frame;
secondary prediction means for generating secondary difference information by performing prediction between difference information between the target block and the reference block and difference information between the target adjacent pixel and the reference adjacent pixel on which the endpoint processing has been performed by the endpoint processing means; and
encoding means for encoding the secondary difference information generated by the secondary prediction means.
- The image processing device according to claim 1, further comprising calculation means for calculating a relative address (x+dx+δx, y+dy+δy) of the reference adjacent pixel from an address (x, y) of the target block, motion vector information (dx, dy) with which the target block refers to the reference block, and a relative address (δx, δy) of the target adjacent pixel,
wherein the determination means determines whether the relative address (x+dx+δx, y+dy+δy) of the reference adjacent pixel calculated by the calculation means exists within the picture frame.
- The image processing device according to claim 2, wherein, when a pixel value is represented by n bits, the endpoint processing means performs the endpoint processing of setting the pixel value to 2^(n-1) for a reference adjacent pixel for which
x+dx+δx < 0, or y+dy+δy < 0
holds.
- The image processing device according to claim 2, wherein, with WIDTH being the number of pixels of the picture frame in the horizontal direction, the endpoint processing means performs the endpoint processing of using, as the pixel value of the reference adjacent pixel, the pixel value indicated by the address (WIDTH - 1, y+dy+δy) when
x+dx+δx > WIDTH - 1
holds.
- The image processing device according to claim 2, wherein, with HEIGHT being the number of pixels of the picture frame in the vertical direction, the endpoint processing means performs the endpoint processing of using, as the pixel value of the reference adjacent pixel, the pixel value indicated by the address (x+dx+δx, HEIGHT - 1) when
y+dy+δy > HEIGHT - 1
holds.
- The image processing device according to claim 2, wherein, with WIDTH being the number of pixels of the picture frame in the horizontal direction and HEIGHT being the number of pixels of the picture frame in the vertical direction, the endpoint processing means performs the endpoint processing of using, as the pixel value of the reference adjacent pixel, the pixel value indicated by the address (WIDTH - 1, HEIGHT - 1) when
x+dx+δx > WIDTH - 1, and y+dy+δy > HEIGHT - 1
hold.
- The image processing device according to claim 2, wherein the endpoint processing means performs, for a reference adjacent pixel that does not exist within the picture frame, the endpoint processing of generating a pixel value by mirror-image processing symmetric about the boundary of the picture frame.
- The image processing device according to claim 1, wherein the secondary prediction means comprises:
intra prediction means for performing prediction using the difference information between the target adjacent pixel and the reference adjacent pixel on which the endpoint processing has been performed by the endpoint processing means, and generating an intra prediction image for the target block; and
secondary difference generation means for generating the secondary difference information by taking the difference between the difference information between the target block and the reference block and the intra prediction image generated by the intra prediction means.
- The image processing device according to claim 1, wherein, when the determination means determines that the reference adjacent pixel exists within the picture frame, the secondary prediction means performs prediction between the difference information between the target block and the reference block and the difference information between the target adjacent pixel and the reference adjacent pixel.
- An image processing method comprising the steps, performed by an image processing device, of:
determining, using a relative address of a target adjacent pixel adjacent to a target block in a target frame, whether a reference adjacent pixel adjacent to a reference block in a reference frame exists within the picture frame of the reference frame;
performing endpoint processing on the reference adjacent pixel when it is determined that the reference adjacent pixel does not exist within the picture frame;
generating secondary difference information by performing prediction between difference information between the target block and the reference block and difference information between the target adjacent pixel and the reference adjacent pixel on which the endpoint processing has been performed; and
encoding the generated secondary difference information.
- An image processing device comprising:
decoding means for decoding an encoded image of a target block in a target frame;
determination means for determining, using a relative address of a target adjacent pixel adjacent to the target block, whether a reference adjacent pixel adjacent to a reference block in a reference frame exists within the picture frame of the reference frame;
endpoint processing means for performing endpoint processing on the reference adjacent pixel when the determination means determines that the reference adjacent pixel does not exist within the picture frame;
secondary prediction means for generating a prediction image by performing secondary prediction using difference information between the target adjacent pixel and the reference adjacent pixel on which the endpoint processing has been performed by the endpoint processing means; and
arithmetic means for generating a decoded image of the target block by adding together the image of the target block, the prediction image generated by the secondary prediction means, and the image of the reference block.
- The image processing device according to claim 11, further comprising calculation means for calculating a relative address (x+dx+δx, y+dy+δy) of the reference adjacent pixel from an address (x, y) of the target block, motion vector information (dx, dy) with which the target block refers to the reference block, and a relative address (δx, δy) of the target adjacent pixel,
wherein the determination means determines whether the relative address (x+dx+δx, y+dy+δy) of the reference adjacent pixel calculated by the calculation means exists within the picture frame.
- The image processing device according to claim 12, wherein, when a pixel value is represented by n bits, the endpoint processing means performs the endpoint processing of setting the pixel value to 2^(n-1) for a reference adjacent pixel for which
x+dx+δx < 0, or y+dy+δy < 0
holds.
- The image processing device according to claim 12, wherein, with WIDTH being the number of pixels of the picture frame in the horizontal direction, the endpoint processing means performs the endpoint processing of using, as the pixel value of the reference adjacent pixel, the pixel value indicated by the address (WIDTH - 1, y+dy+δy) when
x+dx+δx > WIDTH - 1
holds.
- The image processing device according to claim 12, wherein, with HEIGHT being the number of pixels of the picture frame in the vertical direction, the endpoint processing means performs the endpoint processing of using, as the pixel value of the reference adjacent pixel, the pixel value indicated by the address (x+dx+δx, HEIGHT - 1) when
y+dy+δy > HEIGHT - 1
holds.
- The image processing device according to claim 12, wherein, with WIDTH being the number of pixels of the picture frame in the horizontal direction and HEIGHT being the number of pixels of the picture frame in the vertical direction, the endpoint processing means performs the endpoint processing of using, as the pixel value of the reference adjacent pixel, the pixel value indicated by the address (WIDTH - 1, HEIGHT - 1) when
x+dx+δx > WIDTH - 1, and y+dy+δy > HEIGHT - 1
hold.
- The image processing device according to claim 12, wherein the endpoint processing means performs, for a reference adjacent pixel that does not exist within the picture frame, the endpoint processing of generating a pixel value by mirror-image processing symmetric about the boundary of the picture frame.
- The image processing device according to claim 11, wherein the secondary prediction means comprises prediction image generation means for generating a prediction image by performing secondary prediction using the difference information between the target adjacent pixel and the reference adjacent pixel on which the endpoint processing has been performed by the endpoint processing means.
- The image processing device according to claim 11, wherein, when the determination means determines that the reference adjacent pixel exists within the picture frame, the secondary prediction means performs prediction using the difference information between the target adjacent pixel and the reference adjacent pixel.
- An image processing method comprising the steps, performed by an image processing device, of:
decoding an encoded image of a target block in a target frame;
determining, using a relative address of a target adjacent pixel adjacent to the target block, whether a reference adjacent pixel adjacent to a reference block in a reference frame exists within the picture frame of the reference frame;
performing endpoint processing on the reference adjacent pixel when it is determined that the reference adjacent pixel does not exist within the picture frame;
generating a prediction image by performing secondary prediction using difference information between the target adjacent pixel and the reference adjacent pixel on which the endpoint processing has been performed; and
generating a decoded image of the target block by adding together the image of the target block, the generated prediction image, and the image of the reference block.
Priority Applications (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN2010800174643A CN102396231A (zh) | 2009-04-24 | 2010-04-22 | Image processing device and method |
| US13/264,242 US20120121019A1 (en) | 2009-04-24 | 2010-04-22 | Image processing device and method |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2009105938A JP2010258741A (ja) | 2009-04-24 | 2009-04-24 | Image processing device and method, and program |
| JP2009-105938 | 2009-04-24 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2010123057A1 true WO2010123057A1 (ja) | 2010-10-28 |
Family
ID=43011173
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/JP2010/057128 Ceased WO2010123057A1 (ja) | 2010-04-22 | 2010-10-28 | Image processing device and method |
Country Status (5)
| Country | Link |
|---|---|
| US (1) | US20120121019A1 (ja) |
| JP (1) | JP2010258741A (ja) |
| CN (1) | CN102396231A (ja) |
| TW (1) | TW201127069A (ja) |
| WO (1) | WO2010123057A1 (ja) |
Families Citing this family (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US8433142B2 (en) | 2010-04-05 | 2013-04-30 | The Nielsen Company (Us), Llc | Methods and apparatus to detect differences between images |
| KR102295680B1 (ko) * | 2010-12-08 | 2021-08-31 | LG Electronics Inc. | Intra prediction method, and encoding device and decoding device using the same |
| CN102685504B (zh) * | 2011-03-10 | 2015-08-19 | Huawei Technologies Co., Ltd. | Video image encoding/decoding method, encoding device, decoding device, and system thereof |
| TWI666917B (zh) * | 2012-04-13 | 2019-07-21 | JVC Kenwood Corporation | Video encoding device, video encoding method, and video encoding program |
| WO2014015032A2 (en) * | 2012-07-19 | 2014-01-23 | Cypress Semiconductor Corporation | Touchscreen data processing |
| US10366404B2 (en) | 2015-09-10 | 2019-07-30 | The Nielsen Company (Us), Llc | Methods and apparatus to group advertisements by advertisement campaign |
Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JPH11243552A (ja) * | 1997-12-25 | 1999-09-07 | Matsushita Electric Ind Co Ltd | Image data compression/decompression processing device |
| JP2005101728A (ja) * | 2003-09-22 | 2005-04-14 | Hitachi Ulsi Systems Co Ltd | Image processing device |
| EP1988502A1 (en) * | 2007-05-04 | 2008-11-05 | Deutsche Thomson OHG | Method and device for retrieving a test block from a blockwise stored reference image |
Family Cites Families (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| EP0497586A3 (en) * | 1991-01-31 | 1994-05-18 | Sony Corp | Motion detection circuit |
| US20020015513A1 (en) * | 1998-07-15 | 2002-02-07 | Sony Corporation | Motion vector detecting method, record medium on which motion vector calculating program has been recorded, motion detecting apparatus, motion detecting method, picture encoding apparatus, picture encoding method, motion vector calculating method, record medium on which motion vector calculating program has been recorded |
| JP2001204026A (ja) * | 2000-01-21 | 2001-07-27 | Sony Corp | Image information conversion device and method |
| US7623682B2 (en) * | 2004-08-13 | 2009-11-24 | Samsung Electronics Co., Ltd. | Method and device for motion estimation and compensation for panorama image |
| CN101159875B (zh) * | 2007-10-15 | 2011-10-05 | Zhejiang University | Dual-prediction video encoding/decoding method and device |
| US8208563B2 (en) * | 2008-04-23 | 2012-06-26 | Qualcomm Incorporated | Boundary artifact correction within video units |
| US8665964B2 (en) * | 2009-06-30 | 2014-03-04 | Qualcomm Incorporated | Video coding based on first order prediction and pre-defined second order prediction mode |
| US20110122950A1 (en) * | 2009-11-26 | 2011-05-26 | Ji Tianying | Video decoder and method for motion compensation for out-of-boundary pixels |
- 2009-04-24 JP JP2009105938A patent/JP2010258741A/ja not_active Withdrawn
- 2010-03-23 TW TW99108534A patent/TW201127069A/zh unknown
- 2010-04-22 WO PCT/JP2010/057128 patent/WO2010123057A1/ja not_active Ceased
- 2010-04-22 US US13/264,242 patent/US20120121019A1/en not_active Abandoned
- 2010-04-22 CN CN2010800174643A patent/CN102396231A/zh active Pending
Patent Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JPH11243552A (ja) * | 1997-12-25 | 1999-09-07 | Matsushita Electric Ind Co Ltd | Image data compression/decompression processing device |
| JP2005101728A (ja) * | 2003-09-22 | 2005-04-14 | Hitachi Ulsi Systems Co Ltd | Image processing device |
| EP1988502A1 (en) * | 2007-05-04 | 2008-11-05 | Deutsche Thomson OHG | Method and device for retrieving a test block from a blockwise stored reference image |
Non-Patent Citations (1)
Also Published As
| Publication number | Publication date |
|---|---|
| CN102396231A (zh) | 2012-03-28 |
| US20120121019A1 (en) | 2012-05-17 |
| JP2010258741A (ja) | 2010-11-11 |
| TW201127069A (en) | 2011-08-01 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN104320665B (zh) | Image processing device and method | |
| JP2010268259A (ja) | Image processing device and method, and program | |
| CN102415098B (zh) | Image processing device and method | |
| WO2010001916A1 (ja) | Image processing device and method | |
| WO2010001917A1 (ja) | Image processing device and method | |
| JP2010035137A (ja) | Image processing device and method, and program | |
| WO2010123055A1 (ja) | Image processing device and method | |
| WO2010123057A1 (ja) | Image processing device and method | |
| JP5488684B2 (ja) | Image processing device and method, program, and recording medium | |
| JP5488685B2 (ja) | Image processing device and method, program, and recording medium | |
| JP6102977B2 (ja) | Image processing device and method, program, and recording medium | |
| JP5776803B2 (ja) | Image processing device and method, and recording medium | |
| JP5776804B2 (ja) | Image processing device and method, and recording medium | |
| JP6102978B2 (ja) | Image processing device and method, program, and recording medium | |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| WWE | Wipo information: entry into national phase |
Ref document number: 201080017464.3 Country of ref document: CN |
|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 10767114 Country of ref document: EP Kind code of ref document: A1 |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |
|
| WWE | Wipo information: entry into national phase |
Ref document number: 13264242 Country of ref document: US |
|
| 122 | Ep: pct application non-entry in european phase |
Ref document number: 10767114 Country of ref document: EP Kind code of ref document: A1 |