
WO2012140889A1 - Image encoding apparatus, image encoding method, image encoding program, image decoding apparatus, image decoding method, and image decoding program - Google Patents


Info

Publication number
WO2012140889A1
Authority
WO
WIPO (PCT)
Prior art keywords
block
quantization parameter
sub
unit
encoding
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/JP2012/002525
Other languages
French (fr)
Inventor
Mitsuru Maeda
Masato Shima
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Canon Inc
Original Assignee
Canon Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Canon Inc
Publication of WO2012140889A1

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/46 - Embedding additional information in the video signal during the compression process
    • H04N19/463 - Embedding additional information in the video signal during the compression process by compressing encoding parameters before transmission
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being an image region, e.g. an object
    • H04N19/176 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being an image region, the region being a block, e.g. a macroblock
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/189 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the adaptation method, adaptation tool or adaptation type used for the adaptive coding
    • H04N19/196 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the adaptation method, adaptation tool or adaptation type being specially adapted for the computation of encoding parameters, e.g. by averaging previously computed encoding parameters
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/70 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards

Definitions

  • A terminal 1100 receives an encoded bit stream.
  • A decoding/separating unit 1101 decodes the header information of the bit stream and separates the necessary code from it. Thereafter, the decoding/separating unit 1101 outputs the code to a unit disposed downstream thereof.
  • The decoding/separating unit 1101 performs an operation that is the reverse of that performed by the code integrating unit 11009.
  • A selection flag decoding unit 1109 decodes the qp_delta_select_flag code 20005 and reproduces the selection flag.
  • A quantization parameter decoding unit 1102 decodes the encoded data of the quantization parameter.
  • A block decoding unit 1103 decodes the quantization coefficient code of each of the sub-blocks and reproduces a quantization coefficient.
  • The division information (the split_coding_flag code 20010) is decoded by the decoding/separating unit 1101 on a basic block basis and is input to the quantization parameter decoding unit 1102. Subsequently, for each of the sub-blocks, the quantization parameter difference value code (the cu_qp_delta code) of the sub-block is input to the quantization parameter decoding unit 1102.
  • A sub-block quantization parameter addition unit 104 sums the prediction value of the determined sub-block quantization parameter and the sub-block quantization parameter difference value, reproducing the sub-block quantization parameter.
  • A terminal 105 outputs the reproduced sub-block quantization parameter to the block inverse quantization unit 1104 illustrated in Fig. 6.
  • The quantization parameter prediction value determination unit 108 determines, as the prediction value, the quantization parameter of a sub-block decoded in the decoding order of a block, stored in the quantization parameter storage unit 106.
  • In step S106 of the second exemplary embodiment illustrated in Fig. 8, if a referenceable sub-block is not present on the left of the sub-block to be decoded, prediction is performed in the decoding order.
  • Steps S301, S105, and S302 illustrated in Fig. 12 may be performed instead of step S106.
  • Fig. 21 is a flowchart of the image encoding process performed by the image encoding apparatus according to the seventh exemplary embodiment. As illustrated in Fig. 21, the flowchart differs from that of the fifth exemplary embodiment illustrated in Fig. 15 in that steps S001, S003, and S403 are eliminated.
  • Prediction of a quantization parameter can be adaptively changed by referencing the encoding mode of, for example, a slice that allows the difference value of each of the sub-block quantization parameters to be encoded.
  • The decoding apparatus can decode a bit stream with a reduced processing delay and increased image quality.


Abstract

The present invention relates to an image encoding apparatus. The apparatus includes a first computing unit configured to compute a prediction value of a quantization parameter of a block to be processed using a quantization parameter of a block that neighbors the block to be processed, a second computing unit configured to compute a prediction value of the quantization parameter of the block to be processed using a quantization parameter of a block encoded in an encoding order of blocks, a selection unit configured to select one of the first computing unit and the second computing unit, a difference value computing unit configured to compute a difference value between the selected prediction value and the quantization parameter of the block to be processed, and an encoding unit configured to encode the difference value and generate the encoded difference value data.

Description

IMAGE ENCODING APPARATUS, IMAGE ENCODING METHOD, IMAGE ENCODING PROGRAM, IMAGE DECODING APPARATUS, IMAGE DECODING METHOD, AND IMAGE DECODING PROGRAM
The present invention relates to an image encoding method and an image decoding method and, in particular, to a predictive encoding method and a predictive decoding method for a quantization parameter in an image.
H.264/MPEG-4 AVC (hereinafter referred to as "H.264") has been used for encoding a moving image (refer to PTL 1).
In H.264, the quantization parameter can be changed for each macroblock (16 pixels by 16 pixels) using the mb_qp_delta syntax element of the standard. According to equation (7-23) described in NPL 1, the quantization parameter is changed by adding mb_qp_delta, which represents a difference value, to the quantization parameter QP_Y,PREV of the previously decoded macroblock, following the order in which the macroblocks were encoded.
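As a rough sketch of this update rule (assuming 8-bit video, for which the bit-depth offset QpBdOffsetY in equation (7-23) is 0; the function name is illustrative, not from the patent):

```python
def next_qp_y(qp_y_prev, mb_qp_delta):
    # H.264 luma QP update per equation (7-23) of the standard, shown
    # here for 8-bit video (QpBdOffsetY == 0): the signalled difference
    # is applied modulo the valid QP range 0..51, so the value wraps.
    return (qp_y_prev + mb_qp_delta + 52) % 52
```

For example, a previous QP of 26 with mb_qp_delta of +2 yields 28, while a delta of -1 from QP 0 wraps around to 51.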
In recent years, the activity for internationally standardizing a highly efficient encoding method, which is a successor of H.264, has been started. JCT-VC (Joint Collaborative Team on Video Coding) has been established by ISO/IEC and ITU-T. JCT-VC standardizes a coding method known as an HEVC (High Efficiency Video Coding) method (hereinafter referred to as "HEVC"). Since a target display size has been increased, the display size is divided into blocks each having a block size that is larger than the existing macroblock size (16 pixels by 16 pixels). Such a basic block having the large size is referred to as a "Largest Coding Unit (LCU)". The maximum size of the LCU is 64 pixels by 64 pixels (refer to, for example, NPL 2).
An LCU is further divided into small sub-blocks, that is, coding units (CUs), which are subjected to, for example, transform and quantization. To divide the LCU, a quadtree partition structure is adopted: a region is divided into two in each of the vertical and horizontal directions, yielding four sub-regions. Fig. 2A illustrates a quadtree partition structure. A basic block 1000 is indicated by a bold line. For simplicity, the basic block 1000 has a size of 64 pixels by 64 pixels. Sub-blocks 1001 and 1010 of the basic block 1000 have a size of 32 pixels by 32 pixels. Sub-blocks 1002 to 1009 of the basic block 1000 have a size of 16 pixels by 16 pixels. In this way, an LCU is divided into smaller blocks, and an encoding process including transform is performed on each of the sub-blocks.
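The recursive division and the resulting processing order can be sketched as follows. The `split` predicate below is a hypothetical stand-in, chosen only to reproduce the Fig. 2A layout (top-right and bottom-left 32x32 quadrants split once more); it is not part of the patent.

```python
def zscan(x, y, size, split, out):
    """Visit the sub-blocks of a quadtree in recursive Z-scan order.

    `split(x, y, size)` reports whether the block at (x, y) is divided
    further; undivided leaves are appended to `out` as (x, y, size).
    """
    if split(x, y, size):
        h = size // 2
        for dy in (0, h):          # upper row first, then lower row
            for dx in (0, h):      # left quadrant first, then right
                zscan(x + dx, y + dy, h, split, out)
    else:
        out.append((x, y, size))

# Hypothetical Fig. 2A layout: a 64x64 basic block whose top-right
# (32, 0) and bottom-left (0, 32) quadrants are split into 16x16 blocks.
def split(x, y, size):
    if size == 64:
        return True
    return size == 32 and (x, y) in {(32, 0), (0, 32)}

blocks = []
zscan(0, 0, 64, split, blocks)
print(len(blocks))  # 10 sub-blocks, corresponding to 1001..1010 in order
```

The first leaf is the 32x32 block at the upper left (sub-block 1001) and the last is the 32x32 block at the lower right (sub-block 1010), matching the processing order described for Fig. 2A.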
In HEVC, like a macroblock of H.264, the quantization parameter control can be performed on a basic block basis. However, in reality, in order to increase the image quality, it is desirable that the quantization parameter control be performed on a sub-block basis.
If the quantization parameter control is performed on a sub-block basis, a basic block is processed using a quadtree partition structure. That is, in Fig. 2A, the sub-block 1001 having a size of 32 pixels by 32 pixels is processed first. Subsequently, the sub-blocks 1002 to 1009 each having a size of 16 pixels by 16 pixels are sequentially processed in this order. Finally, the sub-block 1010 having a size of 32 pixels by 32 pixels is processed. A difference value is computed by using the quantization parameter of the previous block as a prediction value, and the quantization parameter of each of the sub-blocks is encoded. Fig. 2D illustrates such an encoding process. In Fig. 2D, an arrow indicates the direction of prediction of the quantization parameter. Since computation of a difference is performed in the same order as the order in which the encoding process is performed, the computation of a difference can be performed while controlling the amount of code. As a result, a delay of the processing time can be decreased. Thus, this method is significantly advantageous for applications that require real-time encoding.
However, if the quantization parameter of each of the sub-blocks is optimized, the difference values are not close to one another, since the difference values of the quantization parameters are obtained using a quadtree partition structure. For example, Fig. 2B illustrates the quantization parameter values of the sub-blocks. In Fig. 2B, the value of the quantization parameter gradually varies from the upper left to the lower right. The value of the quantization parameter of the sub-block 1001 is 12. For the sub-block 1002, the difference value is +2. The subsequent difference values are +4, -6, +6, -6, 0, +2, +4, and +2. Accordingly, if a quadtree partition structure is adopted, the difference value frequently increases and decreases. Thus, the amount of the generated code disadvantageously increases.
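The coding-order difference computation described above can be reproduced in a few lines. The QP list below is hypothetical, chosen only so that the resulting differences match the values quoted in the text for Fig. 2B:

```python
# Hypothetical QP values for sub-blocks 1001..1010, consistent with the
# difference values quoted for Fig. 2B (sub-block 1001 starts at 12).
qps = [12, 14, 18, 12, 18, 12, 12, 14, 18, 20]

# Coding-order prediction (Fig. 2D): each sub-block's prediction value
# is the QP of the immediately previously encoded sub-block.
deltas = [qps[i] - qps[i - 1] for i in range(1, len(qps))]
print(deltas)  # [2, 4, -6, 6, -6, 0, 2, 4, 2]
```

The alternation of positive and negative differences is exactly the fluctuation the text identifies as increasing the amount of generated code.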
It is known that the quantization parameters of neighboring sub-blocks of an image are close to each other. This feature can be used to solve the above-described problem. If, as illustrated in Fig. 2E, a sub-block to be processed has a neighboring sub-block on the left, the quantization parameter of the sub-block located on the left can be used as a prediction value. In addition, NPL 3 describes a technique for using the quantization parameter of a sub-block located in a lateral direction as the prediction value, as illustrated in Figs. 2F and 2G. In Fig. 2F, a sub-block to be processed has a neighboring sub-block on the left. In such a case, prediction can be performed using the quantization parameter of the sub-block located on the left. In contrast, Fig. 2G illustrates prediction of a quantization parameter performed when a sub-block to be processed is located on the leftmost side of the image and, thus, the sub-block does not have a neighboring sub-block on the left.
However, it is difficult to apply these techniques to the case in which, for example, the quantization parameter needs to be abruptly changed. That is, Fig. 2C illustrates the following case. When sub-blocks up to the sub-block 1001 are encoded, the amount of code is increased. Thus, in order to decrease the amount of code, the values of the quantization parameters for the sub-block 1002 and the subsequent sub-blocks are abruptly increased. In such a case, if, for example, the order in which the prediction values are referenced indicated by Fig. 2E is adopted, large difference values 22 and 20 appear for the sub-block 1002 and the sub-block 1004, respectively. In contrast, if the order indicated by Fig. 2D is adopted, a relatively small difference value -6 can be obtained for the sub-block 1004, although a large difference value 22 is obtained only for the sub-block 1002.
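The trade-off between the two prediction orders can be illustrated with hypothetical QP values consistent with the Fig. 2C scenario above (an abrupt jump from sub-block 1002 onward). The left-neighbour map below is an assumption based on the Fig. 2A layout, in which sub-blocks 1002 and 1004 both border sub-block 1001 on their left:

```python
# Hypothetical QPs reproducing the abrupt-change scenario of Fig. 2C.
qp = {1001: 12, 1002: 34, 1003: 38, 1004: 32}

# Left-neighbour prediction (assumed layout): 1002 and 1004 are both
# predicted from 1001; 1003's left neighbour is 1002.
left = {1002: 1001, 1003: 1002, 1004: 1001}
d_left = {b: qp[b] - qp[n] for b, n in left.items()}

# Coding-order prediction (Fig. 2D): predictor is the previous sub-block.
order = [1001, 1002, 1003, 1004]
d_order = {order[i]: qp[order[i]] - qp[order[i - 1]] for i in range(1, 4)}

print(d_left)   # {1002: 22, 1003: 4, 1004: 20}
print(d_order)  # {1002: 22, 1003: 4, 1004: -6}
```

Left-neighbour prediction pays the large difference twice (22 and 20), whereas coding-order prediction pays it only once (22) and obtains a small -6 for sub-block 1004, matching the observation in the text.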
NPL 1: ISO/IEC 14496-10:2004, Information technology - Coding of audio-visual objects - Part 10: Advanced Video Coding
NPL 2: JCT-VC contribution JCTVC-A205.doc, available at http://wftp3.itu.int/av-arch/jctvc-site/2010_04_A_Dresden/
NPL 3: JCT-VC contribution "CU-Level QP Prediction" in JCTVC-E391.doc, available at http://phenix.int-evry.fr/jct/doc_end_USER/documents/5_Geneva/wg11/JCTVC-E392-v3.zip
The present invention provides an adaptive encoding technique that reduces the processing delay and provides highly efficient encoding in accordance with the intended use. The present invention further increases coding efficiency, with only a slight modification, by using the coding mode.
According to an embodiment of the present invention, an image encoding apparatus includes a first computing unit configured to compute a prediction value of a quantization parameter of a block to be processed using a quantization parameter of a block that neighbors the block to be processed, a second computing unit configured to compute a prediction value of the quantization parameter of the block to be processed using a quantization parameter of a block encoded in an encoding order of blocks, a selection unit configured to select one of the first computing unit and the second computing unit, a difference value computing unit configured to compute a difference value between the selected prediction value and the quantization parameter of the block to be processed, and an encoding unit configured to encode the difference value and generate encoded difference value data.
According to the embodiment, encoding that reduces the processing delay and encoding that provides high efficiency can be provided adaptively. In addition, by switching between quantization parameter prediction methods in accordance with the encoding mode of a slice or a picture, the difference value of the quantization parameter can be encoded in a manner optimal for each encoding mode.
Further features of the present invention will become apparent from the following description of embodiments with reference to the attached drawings.
Fig. 1 is a block diagram of the configuration of an image encoding apparatus according to a first exemplary embodiment of the present invention.
Fig. 2A illustrates an example of block division and prediction of a quantization parameter.
Fig. 2B illustrates an example of block division and prediction of a quantization parameter.
Fig. 2C illustrates an example of block division and prediction of a quantization parameter.
Fig. 2D illustrates an example of block division and prediction of a quantization parameter.
Fig. 2E illustrates an example of block division and prediction of a quantization parameter.
Fig. 2F illustrates an example of block division and prediction of a quantization parameter.
Fig. 2G illustrates an example of block division and prediction of a quantization parameter.
Fig. 3 is a detailed block diagram of a quantization parameter encoding unit of the image encoding apparatus according to the first exemplary embodiment.
Fig. 4 is a flowchart of an image encoding process performed by the image encoding apparatus according to the first exemplary embodiment.
Fig. 5 illustrates an example of a bit stream generated in the first exemplary embodiment.
Fig. 6 is a block diagram of the configuration of an image decoding apparatus according to a second exemplary embodiment of the present invention.
Fig. 7 is a detailed block diagram of a quantization parameter decoding unit according to the second exemplary embodiment.
Fig. 8 is a flowchart of the image decoding process performed by the image decoding apparatus according to the second exemplary embodiment.
Fig. 9 is a detailed block diagram of a quantization parameter encoding unit according to a third exemplary embodiment of the present invention.
Fig. 10 is a flowchart of the image encoding process performed by the image encoding apparatus according to the third exemplary embodiment.
Fig. 11 is a detailed block diagram of a quantization parameter decoding unit of an image decoding apparatus according to a fourth exemplary embodiment.
Fig. 12 is a flowchart of the image decoding process performed by the image decoding apparatus according to the fourth exemplary embodiment.
Fig. 13 is a block diagram of the configuration of an image encoding apparatus according to a fifth exemplary embodiment.
Fig. 14 is a detailed block diagram of a quantization parameter encoding unit of the image encoding apparatus according to the fifth exemplary embodiment.
Fig. 15 is a flowchart of the image encoding process performed by the image encoding apparatus according to the fifth exemplary embodiment.
Fig. 16 is a block diagram of the configuration of an image decoding apparatus according to a sixth exemplary embodiment of the present invention.
Fig. 17 is a detailed block diagram of a quantization parameter decoding unit of the image decoding apparatus according to the sixth exemplary embodiment.
Fig. 18 is a flowchart of the image decoding process performed by the image decoding apparatus according to the sixth exemplary embodiment.
Fig. 19 is a block diagram of the configuration of an image encoding apparatus according to a seventh exemplary embodiment of the present invention.
Fig. 20 is a detailed block diagram of a quantization parameter encoding unit according to the seventh exemplary embodiment.
Fig. 21 is a flowchart of the image encoding process performed by the image encoding apparatus according to the seventh exemplary embodiment.
Fig. 22 illustrates an example of a bit stream generated in the seventh exemplary embodiment.
Fig. 23 is a block diagram of an image decoding apparatus according to an eighth exemplary embodiment of the present invention.
Fig. 24 is a detailed block diagram of a quantization parameter decoding unit of the image decoding apparatus according to the eighth exemplary embodiment.
Fig. 25 is a flowchart of the image decoding process performed by the image decoding apparatus according to the eighth exemplary embodiment.
Fig. 26 is a block diagram of an example of the computer hardware configuration applicable to the image encoding apparatus and the image decoding apparatus according to the present invention.
Fig. 27 is a block diagram illustrating another example of a bit stream generated in the first exemplary embodiment.
Exemplary embodiments of the present invention are described in detail below with reference to the accompanying drawings. It should be noted that configurations illustrated in the following embodiments are only illustrative, and the present invention is not limited to the illustrated configurations.
First Exemplary Embodiment
A first exemplary embodiment of the present invention is described next with reference to the accompanying drawings. Fig. 1 is a block diagram of an image encoding apparatus according to the first exemplary embodiment of the present invention.
In Fig. 1, a terminal 11000 receives image data. A block dividing unit 11001 divides an input image into a plurality of basic blocks. In addition, the block dividing unit 11001 divides each basic block into sub-blocks as needed. For simplicity, the input image has 8-bit pixel values; however, the present invention is not limited thereto. In addition, a basic block has a size of 64 pixels by 64 pixels, and a sub-block has a minimum size of 8 pixels by 8 pixels. To divide a block, a technique is employed that partitions the block into four by dividing it into two in each of the horizontal and vertical directions. However, the shape and the size of the block are not limited thereto, and any dividing technique for the sub-blocks can be employed. For example, an entire block may first be divided into sub-blocks, the amount of edge content may then be computed, and clustering may be performed to decide the final division; that is, a portion including many edges may be divided into small sub-blocks, and a flat portion may be divided into large sub-blocks. Information as to how the entire block is divided into sub-blocks is output to a unit disposed downstream of the block dividing unit 11001 as sub-block division information, which indicates whether each sub-block is further divided, listed in the order in which the block is divided using the quadtree partition structure.
A technique for computing the prediction value of the quantization parameter of a sub-block, which is used for computing the difference value of the quantization parameter of that sub-block, is described next with reference to Figs. 2E and 2F. Fig. 2E illustrates a basic block having no neighboring sub-block on the left (e.g., a basic block located on the leftmost side of an image). Fig. 2F illustrates a basic block having an already encoded neighboring basic block on the left. In both cases, the quantization parameter of the sub-block that is adjacent to the target sub-block on the left and that has a pixel in contact with the upper left corner pixel of the target sub-block is used as the prediction value. If no referenceable sub-block is present on the left, reference is performed in the same order as the coding order illustrated in Fig. 2D. As illustrated in Fig. 2E, for the sub-blocks 1006 and 1008, the quantization parameters of the sub-blocks 1005 and 1007, respectively, which precede them in the encoding order, are referenced.
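A minimal sketch of this predictor selection follows. The final fallback to a slice-level quantization parameter is an assumption for when neither reference exists (the text above does not specify that initial case), and the function name is illustrative:

```python
def predict_qp(left_neighbor_qp, prev_coded_qp, slice_qp):
    """Prediction value for a sub-block's quantization parameter.

    left_neighbor_qp: QP of the encoded sub-block adjacent on the left
                      whose pixel touches the target's upper-left corner
                      pixel, or None if no referenceable sub-block exists.
    prev_coded_qp:    QP of the immediately previously encoded sub-block
                      (the coding-order fallback of Fig. 2D), or None at
                      the very start.
    slice_qp:         assumed initial predictor when neither reference
                      is available (hypothetical; not from the patent).
    """
    if left_neighbor_qp is not None:
        return left_neighbor_qp
    if prev_coded_qp is not None:
        return prev_coded_qp
    return slice_qp
```

For sub-blocks 1006 and 1008 in Fig. 2E, `left_neighbor_qp` would be None, so the QPs of sub-blocks 1005 and 1007 would be returned via the coding-order fallback.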
A quantization parameter determination unit 11002 determines the quantization parameter of each of the sub-blocks of a block to be processed. A block predicting unit 11003 predicts, on a sub-block basis, each of the basic blocks divided by the block dividing unit 11001. Thereafter, the block predicting unit 11003 computes a difference value and computes a prediction error in each of the sub-blocks. For an intra frame of a still image or a moving image, the block predicting unit 11003 performs intra prediction. In addition, the block predicting unit 11003 performs motion compensating prediction for the moving image. A block transform unit 11004 performs orthogonal transform on the prediction error in each of the sub-blocks and computes an orthogonal transform coefficient. Any technique for orthogonal transform can be employed. For example, discrete cosine transform or Hadamard transform can be employed. A block quantization unit 11005 quantizes the orthogonal transform coefficient using the quantization parameter determined by the quantization parameter determination unit 11002. Through the quantization, a quantization coefficient can be obtained. A block encoding unit 11006 encodes the quantization coefficient obtained in the above-described manner and generates quantization coefficient code data. Any technique for encoding the quantization coefficient can be employed. For example, Huffman coding or arithmetic coding can be employed. A block reproduction image generating unit 11007 performs operations that are the reverse of those performed by the block quantization unit 11005 and the block transform unit 11004 and reproduces the prediction error. Thereafter, the block reproduction image generating unit 11007 generates a decoded image of the basic block using the result output from the block predicting unit 11003. The reproduced image data is held and is used for prediction performed by the block predicting unit 11003. 
An operation unit 11011 is used by a user to select either encoding with a small processing delay or highly efficient encoding. The operation unit 11011 outputs a selection flag of "0" when encoding with a small processing delay is selected and a selection flag of "1" when highly efficient encoding is selected. A quantization parameter encoding unit 11008 encodes the quantization parameter of each of the sub-blocks determined by the quantization parameter determination unit 11002 and generates quantization parameter code data. A code integrating unit 11009 generates code related to header information and prediction and integrates the quantization parameter code data generated by the quantization parameter encoding unit 11008 with the quantization coefficient code data generated by the block encoding unit 11006. A terminal 11010 outputs the bit stream generated through the integration performed by the code integrating unit 11009 to the outside.
An exemplary operation for encoding an image performed by the image encoding apparatus is described below. According to the present exemplary embodiment, moving image data is received on a frame basis. However, still image data for one frame may be received.
Image data for one frame is input to the block dividing unit 11001 through the terminal 11000. The image data is divided into basic blocks each having a size of 64 pixels by 64 pixels. Thereafter, the basic block is divided into sub-blocks having a minimum size of 8 pixels by 8 pixels as needed. Information regarding the sub-block division and the divided image data are input to the quantization parameter determination unit 11002 and the block predicting unit 11003.
The block predicting unit 11003 references a reproduction image held in the block reproduction image generating unit 11007 and performs prediction. Thus, the block predicting unit 11003 generates a prediction error and inputs the generated prediction error to the block transform unit 11004 and the block reproduction image generating unit 11007. The block transform unit 11004 performs orthogonal transform on the input prediction error, computes the orthogonal transform coefficient, and inputs the orthogonal transform coefficient to the block quantization unit 11005.
In addition, the quantization parameter determination unit 11002 evaluates the amount of input code generated for each of the sub-blocks and determines an optimum quantization parameter while taking into account the balance between the image quality of the sub-block and the amount of code. The determined quantization parameter of the sub-block is input to the block quantization unit 11005, the block reproduction image generating unit 11007, and the quantization parameter encoding unit 11008. Furthermore, the division information is input from the block dividing unit 11001 in the same manner.
The block quantization unit 11005 quantizes the orthogonal transform coefficient output from the block transform unit 11004 using the quantization parameter determined by the quantization parameter determination unit 11002 and generates a quantization coefficient. The generated quantization coefficient is input to the block encoding unit 11006 and the block reproduction image generating unit 11007. The block reproduction image generating unit 11007 reproduces the orthogonal transform coefficient from the input quantization coefficient using the quantization parameter determined by the quantization parameter determination unit 11002. The orthogonal transform coefficient is subjected to inverse orthogonal transform, and the prediction error is reproduced. A reproduced image is then generated from the reproduced prediction error using, for example, the pixel values referenced at prediction time, and is held. In addition, the block encoding unit 11006 encodes the quantization coefficient and generates the quantization coefficient code data. Thereafter, the block encoding unit 11006 outputs the generated quantization coefficient code data to the code integrating unit 11009.
The quantization parameter determined by the quantization parameter determination unit 11002 is encoded by the quantization parameter encoding unit 11008 on a basic block basis.
Fig. 3 is a detailed block diagram of the quantization parameter encoding unit 11008. In Fig. 3, a terminal 1 receives the quantization parameter of each of the sub-blocks and the sub-block division information from the quantization parameter determination unit 11002 illustrated in Fig. 1. A quantization parameter storage unit 2 temporarily stores the input quantization parameter of each of the sub-blocks. A quantization parameter storage unit 3 stores the quantization parameters of the sub-blocks encoded in the encoding order. For example, the quantization parameter storage unit 3 stores the quantization parameter of the immediately previously encoded sub-block. A quantization parameter storage unit 4 stores the quantization parameter of an encoded sub-block that is adjacent to the sub-block to be encoded (according to the present exemplary embodiment, an encoded sub-block located on the left). The quantization parameter stored in the quantization parameter storage unit 4 is reset on the basis of the sub-block division information. For example, in the case illustrated in Fig. 2A, the quantization parameter of the sub-block 1001 is stored when the sub-block 1002 is encoded. In addition, the quantization parameters of the sub-blocks 1001 to 1004 are stored when the sub-block 1005 is encoded. Neither the sub-block 1006 nor the sub-block 1008 has an adjacent encoded sub-block; in those cases, the quantization parameter storage unit 4 is reset.
A sub-block quantization parameter prediction value determination unit 5 determines, from the quantization parameter of each of the sub-blocks stored in the quantization parameter storage unit 4, the prediction value of the quantization parameter of a sub-block to be encoded.
A terminal 6 receives the selection flag from the operation unit 11011 illustrated in Fig. 1. A selection flag encoding unit 7 encodes the received selection flag. Hereinafter, the result of encoding is referred to as "qp_delta_select_flag code". A terminal 8 outputs the qp_delta_select_flag code. A selector 9 selects one of the units from which data is input on the basis of the selection flag input from the terminal 6. If the selection flag is "0", the selector 9 selects the output of the quantization parameter storage unit 3. However, if the selection flag is "1", the selector 9 selects the output of the sub-block quantization parameter prediction value determination unit 5. A sub-block quantization parameter difference unit 10 subtracts the value of the output selected by the selector 9 from the quantization parameter stored in the quantization parameter storage unit 2 and computes a sub-block quantization parameter difference value. A sub-block quantization parameter encoding unit 11 encodes the sub-block quantization parameter difference value. The generated encoded data is referred to as "cu_qp_delta code" and is output to the code integrating unit 11009 illustrated in Fig. 1.
In the above-described configuration, before processing is started, the terminal 6 receives the selection flag. Thereafter, the received selection flag is input to the selector 9 and the selection flag encoding unit 7. The selection flag encoding unit 7 encodes the input selection flag and outputs the qp_delta_select_flag code from the terminal 8. In addition, if the selection flag is "0", the selector 9 selects the output of the quantization parameter storage unit 3. However, if the selection flag is "1", the selector 9 selects the output of the sub-block quantization parameter prediction value determination unit 5 as an input value.
The quantization parameter storage unit 2 stores the sub-block quantization parameter input from the terminal 1. Concurrently, the sub-block quantization parameter prediction value determination unit 5 determines a prediction value of the quantization parameter of the sub-block from the quantization parameters of the adjacent sub-blocks in the quantization parameter storage units 3 and 4. The sub-block quantization parameter prediction value determination unit 5 references the quantization parameter storage unit 4 and determines whether an encoded sub-block that is adjacent to the sub-block to be encoded (an encoded sub-block located on the left according to the present exemplary embodiment) is stored. If the quantization parameter storage unit 4 stores the information regarding an encoded sub-block that is in contact with the upper left corner pixel of the sub-block to be encoded, the quantization parameter of the sub-block is selected as the prediction value. Otherwise, if the information regarding a sub-block encoded in the encoding order of the block is present in the quantization parameter storage unit 3, the quantization parameter of the sub-block is selected as the prediction value.
If the selection flag is "0", the prediction value of the quantization parameter is determined in accordance with the decoding order. Thus, the selector 9 inputs the output of the quantization parameter storage unit 3 to the sub-block quantization parameter difference unit 10. However, if the selection flag is "1", the quantization parameter of a sub-block located on the left of the sub-block to be encoded or the quantization parameter of the sub-block encoded in the encoding order of the block is selected as the prediction value in order to increase the coding efficiency. Thus, the selector 9 inputs the output of the sub-block quantization parameter prediction value determination unit 5 to the sub-block quantization parameter difference unit 10.
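The selection performed by the selector 9 and the sub-block quantization parameter prediction value determination unit 5 can be summarized in a short sketch. The argument names are hypothetical: `prev_qp` stands for the value held in the quantization parameter storage unit 3, `left_qp` for the value held in the quantization parameter storage unit 4, and `None` represents a reset storage unit (no encoded left neighbour).

```python
def predict_qp(selection_flag: int, prev_qp: int, left_qp):
    """Choose the prediction value for a sub-block quantization parameter.

    selection_flag 0: predict in decoding order, i.e. use the quantization
    parameter of the immediately previously encoded sub-block.
    selection_flag 1: prefer the left-neighbour quantization parameter;
    fall back to the previously encoded sub-block's value when no encoded
    left neighbour exists (left_qp is None, the storage unit was reset).
    """
    if selection_flag == 0:
        return prev_qp
    return left_qp if left_qp is not None else prev_qp
```

With flag "0" the predictor never depends on spatial neighbours, which is what keeps the processing delay small; with flag "1" the spatial neighbour usually correlates better, improving coding efficiency.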
The sub-block quantization parameter difference unit 10 reads the quantization parameter of the sub-block to be encoded from the quantization parameter storage unit 2. Thereafter, the sub-block quantization parameter difference unit 10 subtracts, from that quantization parameter, the prediction value input from the selector 9 and inputs the difference value to the sub-block quantization parameter encoding unit 11. The sub-block quantization parameter encoding unit 11 performs Golomb coding on the input quantization parameter difference value and outputs the cu_qp_delta code from a terminal 12.
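The description states only that Golomb coding is applied to the difference value. As one concrete possibility, an order-0 signed Exp-Golomb code (the se(v) scheme of H.264) would encode the difference value as follows; whether this exact variant is intended here is an assumption.

```python
def signed_to_code_num(delta: int) -> int:
    """Map a signed difference to a non-negative code number,
    as in H.264 se(v): positive v -> 2v - 1, non-positive v -> -2v."""
    return 2 * delta - 1 if delta > 0 else -2 * delta

def exp_golomb_encode(delta: int) -> str:
    """Order-0 Exp-Golomb bit string for a signed QP difference value."""
    code_num = signed_to_code_num(delta)
    value = code_num + 1
    bits = bin(value)[2:]            # binary representation without '0b'
    prefix = "0" * (len(bits) - 1)   # unary prefix of leading zeros
    return prefix + bits
```

Small difference values, which dominate when the prediction is good, thus receive the shortest codewords (a difference of 0 costs a single bit).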
Referring back to Fig. 1, the code integrating unit 11009 generates code, such as the sequence header of the image and the frame header. In addition, the code integrating unit 11009 inserts the qp_delta_select_flag code into the sequence header (Sequence Parameter Set (SPS) defined in H.264). For each of the basic blocks, the code integrating unit 11009 acquires information, such as a prediction mode, from the block predicting unit 11003 and encodes the information. Subsequently, the code integrating unit 11009 receives, from the quantization parameter encoding unit 11008, the sub-block division information indicating how the basic block is divided and performs encoding. Any encoding method can be employed. For example, the method described in NPL 2 can be employed. Thereafter, for each of the sub-blocks, the code integrating unit 11009 outputs the cu_qp_delta code and the quantization coefficient code data in the form of a bit stream from the terminal 11010.
Fig. 4 is a flowchart of an exemplary image encoding process performed by the image encoding apparatus according to the first exemplary embodiment. In step S001, the user uses the operation unit 11011 to select either encoding having a small delay in the processing time or highly efficient encoding. The selection flag is generated in accordance with the selected encoding.
In step S002, the selection flag is encoded, and the qp_delta_select_flag code is generated. In step S003, the code integrating unit 11009 generates code, such as the sequence header and the frame header, inserts the qp_delta_select_flag code into the code, and outputs the code. In step S004, the block dividing unit 11001 sequentially retrieves the basic blocks of the input image from the upper left of the image. In step S005, the block dividing unit 11001 further divides the basic block into sub-blocks. In step S006, the quantization parameter determination unit 11002 determines the quantization parameter of each of the sub-blocks. In step S007, the quantization parameter encoding unit 11008 determines whether the selection flag determined in step S001 indicates a mode in which the quantization parameter is predicted in the decoding order. If the selection flag indicates a mode in which the quantization parameter is predicted in the decoding order (i.e., if the selection flag is "0"), the processing proceeds to step S008. Otherwise, the processing proceeds to step S009. In step S008, the quantization parameter encoding unit 11008 selects the quantization parameter of the sub-block encoded in the encoding order of the block as the prediction value. In step S009, the quantization parameter encoding unit 11008 selects, as the prediction value, the quantization parameter of an encoded sub-block that is adjacent to the sub-block to be encoded (an encoded sub-block located on the left according to the present exemplary embodiment) or the quantization parameter of a sub-block encoded in the encoding order of the block. In step S010, the quantization parameter encoding unit 11008 subtracts the prediction value from the quantization parameter of the sub-block to be encoded. Thus, the quantization parameter encoding unit 11008 computes the quantization parameter difference value of the sub-block. 
In step S011, the quantization parameter encoding unit 11008 performs Golomb coding on the sub-block quantization parameter difference value and outputs the cu_qp_delta code. In step S012, the quantization parameter encoding unit 11008 performs prediction on the image data of the sub-block and performs orthogonal transform and quantization on the prediction error. Thereafter, the quantization parameter encoding unit 11008 encodes the obtained quantization coefficient and outputs quantization coefficient code data. In step S013, inverse quantization and inverse transform are performed on the obtained quantization coefficient, and the prediction error is computed. A reproduction image of the sub-block is generated from the prediction error and the reproduced image. In step S014, the image encoding apparatus determines whether all of the sub-blocks in the target basic block have been encoded. If all of the sub-blocks in the target basic block have been encoded, the processing proceeds to step S015. However, if all of the sub-blocks in the target basic block have not been encoded, the processing returns to step S005, where the processing for the next sub-block starts. In step S015, the image encoding apparatus determines whether all of the basic blocks have been encoded. If all of the basic blocks have been encoded, all the operations are stopped, and the processing is completed. However, if all of the basic blocks have not been encoded, the processing returns to step S004, where the processing for the next basic block starts.
As a result, a bit stream illustrated in Fig. 5 can be generated. A sequence header (Sequence Parameter Set) 20001 includes a qp_delta_select_flag code 20005. Thus, a method for computing the prediction value used for obtaining a difference value of the quantization parameter of the encoded data can be identified. In addition, the basic block (the LCU) includes a split_coding_flag code 20010 generated by encoding the division information. Furthermore, the encoded data (coding_unit()) of each of the sub-blocks includes a cu_qp_delta code 20009 generated by encoding the quantization parameter difference value of the sub-block.
By using the above-described configuration and operations and, in particular, the processing from step S005 to step S011, the difference value of the sub-block quantization parameter appropriate for the intended use can be encoded. While the present exemplary embodiment has been described with reference to the basic block having a size of 64 pixels by 64 pixels and the sub-block having a minimum size of 8 pixels by 8 pixels, the sizes are not limited thereto. For example, the basic block may have a block size of 128 pixels by 128 pixels. In addition, the shapes of the basic block and the sub-block are not limited to a square. For example, a rectangle having a size of 8 pixels by 4 pixels may be employed.
In addition, while the present exemplary embodiment has been described with reference to Golomb coding used for encoding the basic block quantization parameter, the sub-block quantization parameter difference value, and the quantization coefficient, the coding technique is not limited thereto. For example, Huffman coding, arithmetic coding, or another entropy coding technique can be employed. Alternatively, the values may be output without encoding. In addition, combinations of the code of the selection flag and the performed operation are not limited to those of the present exemplary embodiment.
Furthermore, referencing of a quantization parameter that is out of decoding order is not limited to that described in the above-described exemplary embodiment. Still furthermore, a plurality of sub-blocks having different orthogonal transform sizes may be present. Yet still furthermore, in the present exemplary embodiment, the qp_delta_select_flag code 20005 is included in the sequence header (Sequence Parameter Set) 20001. However, the location of the qp_delta_select_flag code 20005 is not limited thereto. For example, the qp_delta_select_flag code 20005 may be included in another header, such as a picture header (Picture Parameter Set) 20002 or a slice header (Slice Header) 20007. Fig. 27 illustrates an example of a bit stream having a qp_delta_select_flag code 20030 embedded in the slice header (Slice Header) 20007.
Note that according to the present exemplary embodiment, a unit that predicts the quantization parameter in decoding order and a unit that predicts the quantization parameter using the quantization parameter of a sub-block adjacent to a sub-block to be encoded are provided. However, if only one of the units is provided, the qp_delta_select_flag code indicating which one of the units is provided is inserted.
Second Exemplary Embodiment
Fig. 6 is a block diagram of the configuration of an image decoding apparatus according to a second exemplary embodiment of the present invention. According to the present exemplary embodiment, decoding of the encoded data generated in the first exemplary embodiment illustrated in Fig. 5 is described.
A terminal 1100 receives an encoded bit stream. A decoding/separating unit 1101 decodes the header information of the bit stream and separates necessary code from the bit stream. Thereafter, the decoding/separating unit 1101 outputs the code to a unit disposed downstream thereof. The decoding/separating unit 1101 performs an operation that is the reverse of the operation performed by the code integrating unit 11009. A selection flag decoding unit 1109 decodes the qp_delta_select_flag code 20005 and reproduces the selection flag. A quantization parameter decoding unit 1102 decodes the encoded data of the quantization parameter. A block decoding unit 1103 decodes the quantization coefficient code of each of the sub-blocks and reproduces a quantization coefficient. A block inverse quantization unit 1104 performs inverse quantization on the quantization coefficient using the sub-block quantization parameter reproduced by the quantization parameter decoding unit 1102. Thus, the block inverse quantization unit 1104 reproduces the orthogonal transform coefficient. A block inverse transform unit 1105 performs inverse orthogonal transform that is the reverse of the orthogonal transform performed by the block transform unit 11004 illustrated in Fig. 1 and reproduces the prediction error. A block reproducing unit 1106 reproduces the image data of a sub-block using the prediction error and the decoded image data. A block combining unit 1107 places the image data items of the reproduced sub-blocks in their respective positions and reproduces the image data of the basic block.
An exemplary image decoding operation performed by the above-described image decoding apparatus is described below. According to the present exemplary embodiment, the bit stream of a moving image generated in the first exemplary embodiment is input on a frame basis. However, a configuration for receiving the bit stream of a still image for one frame may be employed.
In Fig. 6, stream data for one frame is input from the terminal 1100 and is input to the decoding/separating unit 1101. The decoding/separating unit 1101 decodes the header information required for reproducing the image. The qp_delta_select_flag code 20005 included in the header is input to the selection flag decoding unit 1109. The selection flag decoding unit 1109 reproduces the flag indicating which one of the prediction techniques is selected to reproduce the quantization parameter. The selection flag is input to the quantization parameter decoding unit 1102.
In addition, the division information (the split_coding_flag code 20010) is decoded by the decoding/separating unit 1101 on a basic block basis and is input to the quantization parameter decoding unit 1102. Subsequently, for each of the sub-blocks, the quantization parameter difference value code (the cu_qp_delta code) of the sub-block is input to the quantization parameter decoding unit 1102.
Fig. 7 is a detailed block diagram of the quantization parameter decoding unit 1102. A terminal 100 receives, from the decoding/separating unit 1101, the division information regarding sub-blocks of a basic block including a sub-block to be decoded. A terminal 101 receives the selection flag from the selection flag decoding unit 1109. A terminal 102 receives the sub-block quantization parameter difference value code (the cu_qp_delta code 20009 illustrated in Fig. 5) from the decoding/separating unit 1101. A sub-block quantization parameter difference value decoding unit 103 receives the sub-block quantization parameter difference code and decodes it. Thus, the sub-block quantization parameter difference value decoding unit 103 reproduces the difference value of the sub-block quantization parameter. A sub-block quantization parameter addition unit 104 sums the determined prediction value of the sub-block quantization parameter and the sub-block quantization parameter difference value and reproduces the sub-block quantization parameter. A terminal 105 outputs the reproduced sub-block quantization parameter to the block inverse quantization unit 1104 illustrated in Fig. 6.
The selection flag received by the terminal 101 is input to a selector 109. If the selection flag is "0", the selector 109 determines that data is input from a quantization parameter storage unit 106. However, if the selection flag is "1", the selector 109 determines that data is input from a quantization parameter prediction value determination unit 108.
The sub-block quantization parameter difference code received by the terminal 102 is input to the sub-block quantization parameter difference value decoding unit 103 and is decoded using Golomb coding. Thus, the sub-block quantization parameter difference value is reproduced. The sub-block quantization parameter difference value is input to the sub-block quantization parameter addition unit 104. The sub-block quantization parameter addition unit 104 sums the prediction value of the sub-block quantization parameter input via the selector 109 and the sub-block quantization parameter difference value. In this way, the sub-block quantization parameter addition unit 104 reproduces the sub-block quantization parameter. The reproduced sub-block quantization parameter is output from the terminal 105 to the block inverse quantization unit 1104 illustrated in Fig. 6. In addition, the reproduced sub-block quantization parameter is input to the quantization parameter storage unit 106 and a quantization parameter storage unit 107. The quantization parameter storage unit 106 always stores the quantization parameter of the immediately previously decoded sub-block. Like the quantization parameter storage unit 4 according to the first exemplary embodiment illustrated in Fig. 3, the quantization parameter storage unit 107 stores the quantization parameter of a sub-block located on the left of the sub-block to be decoded.
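Assuming an order-0 signed Exp-Golomb code (the se(v) scheme of H.264) for the difference value, decoding one value from a bit string could look like the following sketch; the function name and interface are hypothetical, and the unit 103 need not be implemented this way.

```python
def exp_golomb_decode(bits: str):
    """Decode one signed order-0 Exp-Golomb value from a bit string.

    Returns (difference_value, number_of_bits_consumed). Assumes the
    H.264 se(v) signed mapping: odd code numbers are positive,
    even code numbers are non-positive.
    """
    zeros = 0
    while bits[zeros] == "0":                  # count the unary prefix
        zeros += 1
    value = int(bits[zeros:2 * zeros + 1], 2)  # read zeros + 1 info bits
    code_num = value - 1
    if code_num % 2 == 1:
        delta = (code_num + 1) // 2
    else:
        delta = -(code_num // 2)
    return delta, 2 * zeros + 1
```

The returned bit count lets a caller consume one codeword per sub-block while scanning the separated cu_qp_delta code data sequentially.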
The quantization parameter prediction value determination unit 108 receives the division information regarding a sub-block from the terminal 100. In addition, the quantization parameter prediction value determination unit 108 receives, from the quantization parameter storage unit 106, the quantization parameter of a sub-block decoded in the decoding order and receives, from the quantization parameter storage unit 107, the quantization parameter of a sub-block adjacent to the sub-block to be decoded (a sub-block located on the left according to the present exemplary embodiment). The quantization parameter prediction value determination unit 108 determines, as the prediction value, the quantization parameter of a sub-block with which the upper left pixel of the target block is in contact. If such a sub-block is not present on the left, the quantization parameter prediction value determination unit 108 determines, as the prediction value, the quantization parameter, stored in the quantization parameter storage unit 106, of the sub-block decoded in the decoding order of the block.
If the selection flag is "0", that is, if the prediction value of the quantization parameter of the sub-block is determined in accordance with the decoding order, the selector 109 determines that data is input from the quantization parameter storage unit 106. However, if the selection flag is "1", that is, if the quantization parameter is predicted from a sub-block adjacent to the target sub-block, the selector 109 determines that data is input from the quantization parameter prediction value determination unit 108.
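The reproduction of the sub-block quantization parameter by the selector 109 and the sub-block quantization parameter addition unit 104 can be sketched as follows. The argument names are hypothetical: `prev_qp` stands for the value held in the quantization parameter storage unit 106, `left_qp` for the value held in the quantization parameter storage unit 107, and `None` represents the absence of a decoded left neighbour.

```python
def reproduce_qp(selection_flag: int, delta: int, prev_qp: int, left_qp):
    """Reproduce a sub-block quantization parameter on the decoder side.

    The prediction mirrors the encoder: flag 0 uses the quantization
    parameter of the immediately previously decoded sub-block; flag 1
    prefers the left-neighbour quantization parameter, falling back to
    the previous one when no decoded left neighbour exists.
    """
    if selection_flag == 0:
        pred = prev_qp
    else:
        pred = left_qp if left_qp is not None else prev_qp
    return pred + delta
```

Because the decoder applies exactly the same fallback rule as the encoder, both sides derive the same prediction value and the transmitted difference value suffices to reproduce the quantization parameter.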
Referring back to Fig. 6, the quantization coefficient code data of the sub-block separated by the decoding/separating unit 1101 is input to the block decoding unit 1103. The block decoding unit 1103 decodes the quantization coefficient code data using Golomb coding. Thus, the block decoding unit 1103 reproduces the quantization coefficients. After the quantization coefficients of the sub-blocks are reproduced, the quantization coefficients are input to the block inverse quantization unit 1104. The block inverse quantization unit 1104 performs inverse quantization using the quantization coefficient and the sub-block quantization parameter of the input sub-block and reproduces the orthogonal transform coefficient. The reproduced orthogonal transform coefficient is subjected to inverse transform in the block inverse transform unit 1105. Thus, the prediction error is reproduced. The reproduced prediction error is input to the block reproducing unit 1106. The block reproducing unit 1106 performs prediction using the decoded pixel data adjacent to the target sub-block or the pixel data of the previous frame. Thus, the block reproducing unit 1106 reproduces the image data of the sub-block. The block combining unit 1107 places the reproduced image data items of the sub-blocks in their respective positions. Thus, the block combining unit 1107 reproduces the image data of the basic block. The image data of the reproduced basic block is output from a terminal 1108 to the outside.
Fig. 8 is a flowchart of the image decoding process performed by the image decoding apparatus according to the second exemplary embodiment.
In step S101, the decoding/separating unit 1101 decodes the header information. In step S102, the selection flag decoding unit 1109 decodes the qp_delta_select_flag code 20005. In step S103, the sub-block quantization parameter difference value decoding unit 103 decodes the sub-block quantization parameter difference code data and reproduces the sub-block quantization parameter difference value. In step S104, it is determined whether the selection flag decoded in step S102 indicates a mode in which the quantization parameter is predicted in the decoding order (a coding mode for reducing a delay of the processing time). If the decoded selection flag indicates a mode in which the quantization parameter is predicted in the decoding order (if the selection flag is "0"), the processing proceeds to step S105. Otherwise, the processing proceeds to step S106. In step S105, the quantization parameter of the sub-block decoded in the decoding order of the block is selected as a prediction value. In step S106, the quantization parameter of a sub-block adjacent to the sub-block to be decoded (a sub-block located on the left according to the present exemplary embodiment) or the quantization parameter of a sub-block decoded in the decoding order of the block is selected as a prediction value. In step S107, the prediction value of the quantization parameter obtained in step S105 or S106 and the difference value obtained in step S103 are summed. In this way, the sub-block quantization parameter is reproduced. In step S108, the quantization coefficient code data of the sub-blocks is decoded, and the quantization coefficient is reproduced. Inverse quantization and inverse orthogonal transform are performed. Thus, the prediction error is reproduced. Thereafter, prediction is performed using the decoded adjacent pixel data or the pixel data of the previous frame, and a decoded image of the sub-block is reproduced.
In step S109, the decoded images of the sub-blocks are arranged in the decoded image of the basic block. In step S110, the image decoding apparatus determines whether all of the sub-blocks of the target basic block have been decoded. If all of the sub-blocks of the target basic block have been decoded, the processing proceeds to step S111. However, if all of the sub-blocks of the target basic block have not been decoded, the processing returns to step S103, where the processing for the next sub-block starts. In step S111, the decoded images of the basic blocks are arranged in the decoded image of the frame. In step S112, the image decoding apparatus determines whether all of the basic blocks have been decoded. If all of the basic blocks have been decoded, the image decoding apparatus stops its operation and completes the processing. Otherwise, the processing returns to step S103, where the processing for the next basic block starts.
By using the above-described configuration and operations, different bit streams generated for different intended uses in the first exemplary embodiment are decoded, and a reproduction image can be obtained.
In addition, as in the first exemplary embodiment, any block size, any size of the unit of processing, any positions of the referenced unit of processing and pixels, and any code can be employed. Furthermore, referencing of a quantization parameter that is out of decoding order is not limited to that described in the present exemplary embodiment. Still furthermore, while the present exemplary embodiment has been described with reference to Golomb coding used for decoding the sub-block quantization parameter, the sub-block quantization parameter difference value, and the quantization coefficient, the coding technique is not limited thereto. For example, Huffman coding, arithmetic coding, or another entropy coding technique can be employed.
Third Exemplary Embodiment
According to a third exemplary embodiment, an encoding apparatus has a configuration similar to that illustrated in Fig. 1. However, the quantization parameter encoding unit 11008 has a different configuration.
Fig. 9 is a detailed block diagram of the quantization parameter encoding unit 11008 according to the present exemplary embodiment. In Fig. 9, the same reference numerals are used for components identical or similar to those of the first exemplary embodiment illustrated in Fig. 3, and descriptions of those components are not repeated.
An end-portion determination unit 200 receives the division information regarding a sub-block input from the terminal 1 and determines whether a sub-block to be encoded is located in the end portion of a frame or a slice. If the target sub-block is located in the end portion, the end-portion determination unit 200 outputs "1" as the value of an end-portion determination signal. Otherwise, the end-portion determination unit 200 outputs "0" as the value of the end-portion determination signal. A sub-block quantization parameter prediction value determination unit 201 determines the prediction value of the quantization parameter of a sub-block to be encoded using the quantization parameters of sub-blocks stored in the quantization parameter storage unit 4. That is, the sub-block quantization parameter prediction value determination unit 201 determines the prediction value by referencing the quantization parameter of a sub-block adjacent to the sub-block to be encoded (a sub-block located on the left according to the present exemplary embodiment). A quantization parameter storage unit 202 stores the quantization parameter of an encoded sub-block. The quantization parameter storage unit 202 sequentially receives the quantization parameter from the quantization parameter storage unit 107. In this way, the quantization parameter storage unit 202 stores the quantization parameters of the encoded sub-blocks up to the sub-block to be encoded. An end-portion quantization parameter prediction value determination unit 203 determines the prediction value of the quantization parameter of the sub-block to be encoded using the quantization parameters of the sub-blocks stored in the quantization parameter storage unit 202. 
That is, if the sub-block is located in the left end portion of a frame or a slice, the end-portion quantization parameter prediction value determination unit 203 determines the prediction value by referencing the quantization parameter of the sub-block located on top of the sub-block to be encoded. Like the selector 9 according to the first exemplary embodiment illustrated in Fig. 3, a selector 204 selects one of the units from which data is input using the selection flag input from the terminal 6. If the selection flag is "0", the selector 204 selects the quantization parameter storage unit 3 as a unit from which data is input. However, if the selection flag is "1", the selector 204 selects the end-portion quantization parameter prediction value determination unit 203 as a unit from which data is input. A selector 205 selects one of the units from which data is input on the basis of the end-portion determination signal output from the end-portion determination unit 200. If the end-portion determination signal is "0", the selector 205 selects the selector 204 as a unit from which data is input. However, if the end-portion determination signal is "1", the selector 205 selects the sub-block quantization parameter prediction value determination unit 201 as a unit from which data is input.
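One plausible reading of the predictor selection in Fig. 9 is sketched below. Note that this is an interpretation, not a transcription: the selector wiring as described is ambiguous, and the sketch follows the apparent intent that a sub-block at the left end of a frame or slice (which has no left neighbour) falls back to the above-neighbour predictor of the unit 203, that other sub-blocks use the left-neighbour predictor of the unit 201, and that the selection flag "0" always uses the previously encoded sub-block's quantization parameter. All argument names are hypothetical.

```python
def predict_qp_end_portion(selection_flag: int, at_left_end: bool,
                           prev_qp: int, left_qp, above_qp: int):
    """Hypothetical third-embodiment predictor selection.

    selection_flag 0: predict from the previously encoded sub-block
    (quantization parameter storage unit 3), regardless of position.
    selection_flag 1: predict from the left neighbour (unit 201); a
    sub-block at the left end of a frame or slice has no left
    neighbour, so the above neighbour (unit 203) is used instead.
    """
    if selection_flag == 0:
        return prev_qp
    return above_qp if at_left_end else left_qp
```

Under this reading, the end-portion determination unit 200 merely switches between the two spatial predictors, so the low-delay mode (flag "0") is unaffected by the sub-block's position.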
Like the first exemplary embodiment, in the above-described configuration, image data is input from the terminal 11000. The block dividing unit 11001 divides the image data into sub-blocks. The quantization parameter determination unit 11002 determines the quantization parameter of each of the sub-blocks. The determined sub-block quantization parameter is input to the quantization parameter encoding unit 11008.
Referring back to Fig. 9, like the first exemplary embodiment, the selection flag encoding unit 7 encodes the input selection flag and outputs the qp_delta_select_flag code from the terminal 8. In addition, if the selection flag is "0", the selector 204 selects the output of the quantization parameter storage unit 3 as an input. However, if the selection flag is "1", the selector 204 selects the output of the end-portion quantization parameter prediction value determination unit 203 as an input. The quantization parameter storage unit 2 stores the sub-block quantization parameter input from the terminal 1.
Concurrently, the sub-block quantization parameter prediction value determination unit 201 determines a prediction value of the quantization parameter of the target sub-block from the quantization parameter of the adjacent sub-block (a sub-block located on the left according to the present exemplary embodiment) in the quantization parameter storage unit 4. The sub-block quantization parameter prediction value determination unit 201 references the quantization parameter storage unit 4 and selects the quantization parameter of a sub-block with which the upper left corner pixel of the target sub-block is in contact as the prediction value. In addition, the end-portion quantization parameter prediction value determination unit 203 references the quantization parameter storage unit 202 and determines a prediction value of the quantization parameter of the target sub-block from the quantization parameter of an adjacent sub-block (a sub-block located on top of the target block according to the present exemplary embodiment). The end-portion quantization parameter prediction value determination unit 203 selects the quantization parameter of a sub-block with which the upper left corner pixel of the target sub-block is in contact as the prediction value. The quantization parameter storage unit 3 holds the quantization parameter of a sub-block preceding the target sub-block in the encoding order of the block (i.e., the sub-block encoded immediately prior to the target sub-block).
If the selection flag is "0", the selector 204 determines the prediction value of the quantization parameter in accordance with the decoding order. Thus, the selector 204 inputs the output of the quantization parameter storage unit 3 to the selector 205. However, if the selection flag is "1", the selector 204 selects the quantization parameter of a sub-block located on top of the sub-block to be encoded as the prediction value in order to increase the coding efficiency. Thus, the selector 204 inputs the output of the end-portion quantization parameter prediction value determination unit 203 to the selector 205.
The end-portion determination unit 200 determines whether the sub-block to be encoded is located in an end portion of a frame or a slice from the input division information and the progress of the processing and generates the end-portion determination signal.
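The end-portion determination described above can be sketched as follows; the coordinate arguments are hypothetical stand-ins for the division information and the progress of the processing:

```python
def end_portion_signal(sub_block_x, slice_left_x):
    # "1" when no referenceable encoded sub-block exists on the left, i.e.
    # the sub-block to be encoded lies on the left edge of the frame or slice.
    return 1 if sub_block_x == slice_left_x else 0
```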
If the end-portion determination signal is "0", the selector 205 determines that the sub-block to be processed is not located in an end portion and selects the sub-block quantization parameter prediction value determination unit 201 as a unit from which data is input. At that time, the quantization parameter of a sub-block located on the left of the sub-block to be encoded is selected as the prediction value. The prediction value is input to the sub-block quantization parameter difference unit 10. However, if the end-portion determination signal is "1", the selector 205 determines that the sub-block to be processed is located in an end portion and selects the selector 204 as a unit from which data is input. At that time, if the selection flag is "0", the quantization parameter selected in the decoding order is selected as the prediction value. However, if the selection flag is "1", the quantization parameter of a sub-block located on top of the sub-block to be encoded is selected as the prediction value. The prediction value is input to the sub-block quantization parameter difference unit 10. Thereafter, the difference value is computed as in the first exemplary embodiment, and the difference value is encoded into the sub-block quantization parameter difference code (the cu_qp_delta code). The cu_qp_delta code is output from the terminal 12.
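The two-stage selection performed by the selectors 204 and 205, followed by the difference computation, can be summarized in the following sketch; the function and variable names are illustrative and not taken from the embodiment:

```python
def predict_qp(end_signal, selection_flag, qp_left, qp_top, qp_prev_in_order):
    # Not an end portion: the selector 205 takes the left-neighbour prediction
    # from the sub-block quantization parameter prediction value determination
    # unit 201.
    if end_signal == 0:
        return qp_left
    # End portion: the selector 205 takes the output of the selector 204, which
    # chooses between the decoding order (quantization parameter storage unit 3)
    # and the top neighbour (end-portion prediction value determination unit 203).
    return qp_prev_in_order if selection_flag == 0 else qp_top

def cu_qp_delta(qp, prediction):
    # Difference value computed by the sub-block quantization parameter
    # difference unit 10 and encoded as the cu_qp_delta code.
    return qp - prediction
```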
Fig. 10 is a flowchart of the image encoding process performed by the image encoding apparatus according to the third exemplary embodiment. In Fig. 10, the same reference numerals are used for identical or similar components as used in the first exemplary embodiment illustrated in Fig. 4, and descriptions of the components are not repeated.
In the processes from step S001 through step S006, like the first exemplary embodiment, one of the quantization parameter prediction methods is selected, and a selection mode is determined. Information indicating the selection mode is encoded together with header information. A frame is divided into basic blocks, each of which is further divided into sub-blocks. Thereafter, the quantization parameter of each of the sub-blocks is determined.
In step S200, it is determined whether the sub-block to be encoded is located in an end portion of the frame or the slice from the division information regarding the sub-block. That is, it is determined whether a referenceable encoded sub-block is present on the left of the sub-block to be encoded. If such a sub-block is present, the processing proceeds to step S009. Otherwise, the processing proceeds to step S201. In step S201, like step S007 of the first exemplary embodiment illustrated in Fig. 4, if the selection flag is "0", the processing proceeds to step S008. Otherwise, the processing proceeds to step S202. In step S202, the quantization parameter of a sub-block with which the upper left corner pixel of the sub-block to be encoded is in contact is selected as the prediction value. If the sub-block to be encoded is located at the upper left corner of the frame or the slice, the quantization parameter designated by the slice is selected as the prediction value. However, a technique for determining the prediction value is not limited thereto. Subsequently, processes from step S010 through step S013 are performed. Thus, for each of the sub-blocks and the basic blocks, the difference value of the sub-block quantization parameter is computed and encoded, the prediction error in the sub-block is encoded, and the reproduction image is generated.
Prediction of a quantization parameter is described in more detail with reference to Fig. 2G. The prediction illustrated in Fig. 2G differs from the prediction of the first exemplary embodiment illustrated in Fig. 2E in terms of the prediction performed for the sub-blocks 1006 and 1008. If the selection flag is "0", a decoding-order prediction is performed. Accordingly, like the first exemplary embodiment, prediction for the sub-blocks in an end portion is performed in accordance with the encoding order. However, if the selection flag is "1", prediction for the sub-blocks is performed using the output of the end-portion quantization parameter prediction value determination unit 203. That is, the prediction is performed using the quantization parameter of a sub-block on top of the sub-block to be encoded, as illustrated in Fig. 2G.
By using the above-described configuration and operations and by selecting one of an end-portion quantization parameter prediction method using the decoding order and an end-portion quantization parameter prediction method using a referenceable adjacent quantization parameter, a user can advantageously select coding that reduces a delay of the processing time or highly efficient coding for an end-portion quantization parameter.
In step S009 of the first exemplary embodiment illustrated in Fig. 4, if a referenceable sub-block is not present on the left, a prediction method using decoding order is employed. However, steps S201, S008, and S202 illustrated in Fig. 10 may be employed instead of step S009. While the present exemplary embodiment has been described with reference to Golomb coding used for encoding the basic block quantization parameter, the sub-block quantization parameter difference value, and the quantization coefficient, the coding technique is not limited thereto. For example, Huffman coding or arithmetic coding other than Huffman coding can be employed.
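The embodiments do not fix a particular Golomb variant. Assuming the zeroth-order signed exp-Golomb mapping commonly used for signed delta values (an assumption, not a detail of the embodiment), encoding the difference value could look like this:

```python
def signed_to_code_num(delta):
    # Map a signed difference value to a non-negative code number:
    # 0 -> 0, 1 -> 1, -1 -> 2, 2 -> 3, -2 -> 4, ...
    return 2 * delta - 1 if delta > 0 else -2 * delta

def exp_golomb(code_num):
    # Zeroth-order exp-Golomb: M leading zeros followed by (code_num + 1)
    # written in binary with M + 1 bits.
    bits = bin(code_num + 1)[2:]
    return "0" * (len(bits) - 1) + bits
```

For example, a difference value of -1 maps to code number 2 and the codeword "011".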
Fourth Exemplary Embodiment
According to a fourth exemplary embodiment, an encoding apparatus has a configuration similar to that of the second exemplary embodiment illustrated in Fig. 6. However, a quantization parameter decoding unit 1102 has a different configuration. The present exemplary embodiment is described with reference to decoding of the encoded data generated in the third exemplary embodiment.
Fig. 11 is a detailed block diagram of the quantization parameter decoding unit 1102 according to the present exemplary embodiment. In Fig. 11, the same reference numerals are used for identical or similar components as used in the second exemplary embodiment illustrated in Fig. 7, and descriptions of the components are not repeated. In Fig. 11, an end-portion determination unit 300 determines whether a sub-block to be decoded is located in an end portion of a frame or a slice. The end-portion determination unit 300 operates in the same manner as the end-portion determination unit 200 of the third exemplary embodiment illustrated in Fig. 9 and generates an end-portion determination signal.
A sub-block quantization parameter prediction value determination unit 301 determines the prediction value of the quantization parameter of a sub-block to be decoded from the quantization parameters of the sub-blocks stored in the quantization parameter storage unit 107. That is, according to the present exemplary embodiment, the quantization parameter of a sub-block located on the left of the sub-block to be decoded is referenced, and the prediction value is determined. A quantization parameter storage unit 302 stores the quantization parameters of decoded sub-blocks. The quantization parameter storage unit 302 sequentially receives the quantization parameters from the quantization parameter storage unit 107. In this way, the quantization parameter storage unit 302 stores the quantization parameters of the decoded sub-blocks up to the sub-block to be decoded. An end-portion quantization parameter prediction value determination unit 303 determines the prediction value of the quantization parameter of the sub-block to be decoded using the quantization parameters of the sub-blocks stored in the quantization parameter storage unit 302. That is, the end-portion quantization parameter prediction value determination unit 303 references the quantization parameter of a sub-block on top of the sub-block to be decoded and determines the prediction value. Like the selector 109 of the second exemplary embodiment illustrated in Fig. 7, a selector 304 selects a unit from which data is input using the selection flag input from the terminal 101. If the selection flag is "0", the selector 304 selects the quantization parameter storage unit 106 as a unit from which data is input. However, if the selection flag is "1", the selector 304 selects the end-portion quantization parameter prediction value determination unit 303 as a unit from which data is input.
A selector 305 selects one of units from which data is input using the end-portion determination signal output from the end-portion determination unit 300. If the end-portion determination signal is "0", the selector 305 selects the selector 304 as a unit from which data is input. However, if the end-portion determination signal is "1", the selector 305 selects the sub-block quantization parameter prediction value determination unit 301 as a unit from which data is input.
An exemplary image decoding operation performed by the image decoding apparatus is described below. Like the second exemplary embodiment, division information regarding the sub-block is input from the terminal 100. The selection flag is input from the terminal 101. The quantization parameter difference value code (the cu_qp_delta code) of the sub-block is input from the terminal 102. In the above-described configuration, the quantization parameter difference value code of the sub-block input from the terminal 102 is input to the sub-block quantization parameter difference value decoding unit 103 and is subjected to Golomb decoding. Thus, the sub-block quantization parameter difference value is reproduced. The sub-block quantization parameter addition unit 104 sums the prediction value of the sub-block quantization parameter input through the selector 305 and the sub-block quantization parameter difference value. Thus, the sub-block quantization parameter is reproduced. The sub-block quantization parameter is input to the quantization parameter storage unit 106 and the quantization parameter storage unit 107. The quantization parameter storage unit 302 receives, from the quantization parameter storage unit 107, the quantization parameter of a sub-block that may subsequently be located on top of a sub-block to be decoded and stores the received quantization parameter.
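Assuming the same zeroth-order signed exp-Golomb mapping as sketched for the encoder side (an assumption; the embodiments only state that Golomb coding is used), the reproduction of a sub-block quantization parameter can be sketched as:

```python
def exp_golomb_decode(bits):
    # Decode one zeroth-order exp-Golomb codeword from a bit string;
    # returns the code number and the number of bits consumed.
    zeros = bits.index("1")
    code_num = int(bits[zeros:2 * zeros + 1], 2) - 1
    return code_num, 2 * zeros + 1

def code_num_to_signed(code_num):
    # Inverse of the signed mapping: 1 -> 1, 2 -> -1, 3 -> 2, 4 -> -2, ...
    return (code_num + 1) // 2 if code_num % 2 else -(code_num // 2)

def reproduce_qp(prediction, delta_bits):
    # The sub-block quantization parameter addition unit 104 sums the
    # prediction value and the decoded difference value.
    code_num, _ = exp_golomb_decode(delta_bits)
    return prediction + code_num_to_signed(code_num)
```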
Like the second exemplary embodiment, the quantization parameter storage unit 106 stores the quantization parameter of a sub-block decoded prior to the decoding operation in accordance with the decoding order at all times. The quantization parameter storage unit 107 stores the quantization parameter of a sub-block that is adjacent to the sub-block to be decoded (a sub-block located on the left according to the present exemplary embodiment).
The end-portion quantization parameter prediction value determination unit 303 receives, from the quantization parameter storage unit 302, the quantization parameter of a sub-block located on top of the sub-block to be decoded. According to the present exemplary embodiment, the quantization parameter of a sub-block with which the upper left corner pixel of the target sub-block is in contact is selected as the prediction value.
If the selection flag is "0", that is, if the prediction value of the quantization parameter is determined in the decoding order, the selector 304 selects the quantization parameter storage unit 106 as a unit from which data is input. However, if the selection flag is "1", that is, if the prediction value of the quantization parameter is determined from the adjacent sub-block, the selector 304 selects the end-portion quantization parameter prediction value determination unit 303 as a unit from which data is input.
Fig. 12 is a flowchart of the image decoding process performed by the image decoding apparatus according to the fourth exemplary embodiment. In Fig. 12, the same reference numerals are used for identical or similar components as used in the second exemplary embodiment illustrated in Fig. 8, and descriptions of the components are not repeated.
In the processes performed in steps S101 and S102, like the second exemplary embodiment, the header information and the qp_delta_select flag code indicating the type of quantization parameter prediction method are decoded. In step S103, the cu_qp_delta code is decoded, and the difference value of the quantization parameter of the sub-block is computed.
In step S300, it is determined whether the sub-block to be decoded is located in an end portion of the frame or the slice from the division information regarding the sub-block. That is, it is determined whether a referenceable decoded sub-block is present on the left of the sub-block to be decoded. If such a sub-block is present, the processing proceeds to step S106. Otherwise, the processing proceeds to step S301. In step S301, like step S007 of the first exemplary embodiment illustrated in Fig. 4, if the selection flag is "0", the processing performed by the quantization parameter decoding unit 1102 proceeds to step S105. Otherwise, the processing proceeds to step S302. In step S302, the quantization parameter decoding unit 1102 selects, as the prediction value, the quantization parameter of an immediately above sub-block with which the upper left corner pixel of the sub-block to be decoded is in contact. If the sub-block to be decoded is located at the upper left corner of the frame or the slice, the quantization parameter designated by the slice is selected as the prediction value. However, a technique for determining the prediction value is not limited thereto. Subsequently, processes from step S107 through step S112 are performed. Thus, for each of the sub-blocks and the basic blocks, the sub-block quantization parameter is reproduced, the prediction error in the sub-block is decoded, and the reproduction image is generated.
By using the above-described configuration and operations and by selecting one of an end-portion quantization parameter prediction method using the decoding order and an end-portion quantization parameter prediction method using an adjacent referenceable quantization parameter, the encoded data generated in the third exemplary embodiment can be decoded in an end portion.
While the present exemplary embodiment has been described with reference to Golomb coding used for decoding the basic block quantization parameter, the sub-block quantization parameter difference value, and the quantization coefficient, the coding technique is not limited thereto. For example, Huffman coding or arithmetic coding other than Huffman coding can be employed. Note that in step S106 of the second exemplary embodiment illustrated in Fig. 8, if a referenceable sub-block is not present on the left of the sub-block to be decoded, prediction is performed in the decoding order. However, steps S301, S105, and S302 illustrated in Fig. 12 may be performed instead of step S106.
Fifth Exemplary Embodiment
Fig. 13 is a block diagram of an image encoding apparatus according to a fifth exemplary embodiment. In Fig. 13, the same reference numerals are used for identical or similar components as used in the first exemplary embodiment illustrated in Fig. 1, and descriptions of the components are not repeated.
A slice encoding mode determination unit 11100 determines a coding mode on a slice basis. For simplicity, the present exemplary embodiment employs an inter coding mode, in which coding is performed using intra-frame coding and motion compensation. A block predicting unit 11103 performs intra prediction or prediction based on motion compensation in accordance with the type of slice encoding mode. A quantization parameter encoding unit 11108 differs from the quantization parameter encoding unit 11008 of the first exemplary embodiment illustrated in Fig. 1 in that it receives the slice encoding mode.
Before a frame is encoded, the slice encoding mode determination unit 11100 determines whether a slice of the frame is intra coded or inter coded. For example, the slice encoding mode determination unit 11100 can periodically select the intra coding mode in accordance with the number of previously input frames and select the inter coding mode for the remaining frames. The selected slice encoding mode is input to the quantization parameter encoding unit 11108 and the block predicting unit 11103. If a slice is subjected to intra prediction, the block predicting unit 11103 outputs the mode of the intra prediction. For example, one of the nine prediction modes described in NPL 1 may be employed. This prediction mode is input to the quantization parameter encoding unit 11108.
Fig. 14 is a detailed block diagram of the quantization parameter encoding unit 11108 according to the present exemplary embodiment. In Fig. 14, the same reference numerals are used for identical or similar components as used in the first exemplary embodiment illustrated in Fig. 3, and descriptions of the components are not repeated.
A terminal 400 receives the slice encoding mode from the slice encoding mode determination unit 11100 illustrated in Fig. 13. A control unit 401 generates a selection signal used by the selector 9 that selects a unit from which data is input on the basis of the slice encoding mode input from the terminal 400 and the selection flag input from the terminal 6. A terminal 402 receives the intra prediction mode used for coefficient prediction from the block predicting unit 11103 illustrated in Fig. 13. A quantization parameter prediction value determination unit 403 selects the quantization parameter of an adjacent sub-block in accordance with the intra prediction mode input from the terminal 402.
Before starting the processing, the selection flag encoding unit 7 encodes the input selection flag and outputs the qp_delta_select_flag code from the terminal 8.
In addition, when slice coding is started, the slice encoding mode is input from the terminal 400 to the control unit 401. The control unit 401 generates a selection signal for the selector 9 using the input slice encoding mode and the selection flag. That is, when the selection flag is "0" and if the slice encoding mode indicates intra prediction, the control unit 401 selects the quantization parameter storage unit 3 as a unit from which the selector 9 receives data. At that time, the control signal is "0". When the selection flag is "0" and if the slice encoding mode indicates inter prediction, the control unit 401 selects the quantization parameter storage unit 3 as a unit from which the selector 9 receives data. At that time, the control signal is "0". When the selection flag is "1" and if the slice encoding mode indicates intra prediction, the control unit 401 selects the quantization parameter prediction value determination unit 403 as a unit from which the selector 9 receives data. At that time, the control signal is "1". When the selection flag is "1" and if the slice encoding mode indicates inter prediction, the control unit 401 selects the quantization parameter storage unit 3 as a unit from which the selector 9 receives data. At that time, the control signal is "0".
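The four cases enumerated above reduce to a single condition, which can be sketched as follows (the function and argument names are illustrative):

```python
def control_signal(selection_flag, slice_mode):
    # The selector 9 takes the quantization parameter prediction value
    # determination unit 403 as its input ("1") only when the selection flag
    # is "1" and the slice is intra coded; in every other case the
    # quantization parameter storage unit 3 is selected ("0").
    return 1 if selection_flag == 1 and slice_mode == "intra" else 0
```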
The quantization parameter prediction value determination unit 403 receives the intra prediction mode from the terminal 402. When intra prediction described in NPL 1 is performed and if the prediction mode is one of "0", "3", "5", and "7", the quantization parameter prediction value determination unit 403 references the quantization parameter of a sub-block on top of the sub-block to be encoded. If the prediction mode is one of "1", "6", and "8", the quantization parameter prediction value determination unit 403 references the quantization parameter of a sub-block on the left of the sub-block to be encoded. If the prediction mode is one of "2" and "4", the quantization parameter prediction value determination unit 403 selects, as the prediction value, the average of the quantization parameters of sub-blocks on top and the left of the sub-block to be encoded. While the present exemplary embodiment has been described with reference to the above-described combinations of a prediction mode and a referenced quantization parameter, the combinations are not limited thereto.
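The mode-to-neighbour mapping described above can be sketched as follows; the integer average for modes "2" and "4" is an assumption, since the embodiment does not specify the rounding of the average:

```python
TOP_MODES = {0, 3, 5, 7}   # reference the sub-block on top
LEFT_MODES = {1, 6, 8}     # reference the sub-block on the left

def predict_qp_from_intra_mode(mode, qp_top, qp_left):
    if mode in TOP_MODES:
        return qp_top
    if mode in LEFT_MODES:
        return qp_left
    # Modes 2 and 4: average of the top and left quantization parameters.
    return (qp_top + qp_left) // 2
```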
If the control signal is "0", the selector 9 selects the output of the quantization parameter storage unit 3 as an input. Otherwise, the selector 9 selects the output of the quantization parameter prediction value determination unit 403 as an input. The quantization parameter obtained via the selector 9 is input to the sub-block quantization parameter difference unit 10 and the quantization parameter storage units 3 and 4. If the control signal is "0", the selector 9 determines the prediction value of the quantization parameter in accordance with the decoding order. Accordingly, the selector 9 inputs the output of the quantization parameter storage unit 3 to the sub-block quantization parameter difference unit 10. However, if the control signal is "1", the selector 9 selects, as the prediction value, the quantization parameter of a sub-block located on the left or top of the sub-block to be encoded in accordance with the intra prediction mode in order to increase the coding efficiency. Thus, the selector 9 inputs the output of the quantization parameter prediction value determination unit 403 to the sub-block quantization parameter difference unit 10.
The sub-block quantization parameter difference unit 10 reads the quantization parameter of the sub-block to be encoded out of the quantization parameter storage unit 2. Thereafter, the sub-block quantization parameter difference unit 10 subtracts the prediction value of the quantization parameter input from the selector 9 from the quantization parameter and inputs the difference to the sub-block quantization parameter encoding unit 11. The sub-block quantization parameter encoding unit 11 encodes the input quantization parameter difference value using Golomb coding and outputs the cu_qp_delta code from the terminal 12.
Fig. 15 is a flowchart of the image encoding process performed by the image encoding apparatus according to the fifth exemplary embodiment. In Fig. 15, the same reference numerals are used for identical or similar components as used in the first exemplary embodiment illustrated in Fig. 4, and descriptions of the components are not repeated.
In the processes performed from step S001 through step S003, like the first exemplary embodiment, one of the quantization parameter prediction methods is selected, and the selection mode is determined. Information indicating the selection mode is encoded together with header information. In step S401, the slice encoding mode determination unit 11100 determines the slice encoding mode at the top of the slice and encodes the slice encoding mode. Thus, a slice_type code is generated. Fig. 5 illustrates encoded data of a slice header. A slice_type code 20011 is included in the slice header. In the processes from step S004 through step S006, the block dividing unit 11001 divides a frame into basic blocks, each of which is further divided into sub-blocks. Thereafter, the quantization parameter of each of the sub-blocks is determined. In step S402, the quantization parameter encoding unit 11108 determines whether the slice encoding mode is an intra coding mode or an inter coding mode. If the slice encoding mode is an intra coding mode, the processing proceeds to step S403. However, if the slice encoding mode is an inter coding mode, the processing proceeds to step S008. In step S403, as in step S007 of the first exemplary embodiment illustrated in Fig. 4, if the selection flag is "0", the processing performed by the quantization parameter encoding unit 11108 proceeds to step S008. Otherwise, the processing proceeds to step S404. In step S404, the quantization parameter encoding unit 11108 determines the prediction value from the quantization parameter of a sub-block adjacent to the sub-block to be encoded in accordance with the intra prediction mode. Subsequently, the processes in step S010 through S016 are performed. Thus, for each of the sub-blocks, the basic blocks, and the slices, the difference value of the sub-block quantization parameter is computed and encoded, the prediction error in the sub-block is encoded, and the reproduction image is generated.
By using the above-described configuration and operations, the difference value of each of the sub-block quantization parameters can be encoded for the intended use. In addition, since, in inter coding, the structure of an image has a low correlation with the quantization parameter, prediction of the quantization parameter can be performed in the decoding order. In intra coding, prediction that is suitable for the property of an image can be performed. Thus, a user can advantageously obtain a decrease in a delay of the processing time and highly efficient encoding.
While the present exemplary embodiment has been described with reference to an intra prediction mode for prediction of a sub-block quantization parameter, the mode is not limited thereto. For example, the prediction technique described in the first and second exemplary embodiments or any other technique may be employed.
In addition, while the present exemplary embodiment has been described with reference to use of the average of the sub-block quantization parameters of the previous basic block as the basic block quantization parameter, the basic block quantization parameter is not limited thereto. For example, a median value of the sub-block quantization parameters may be used as the basic block quantization parameter. Alternatively, the value of the sub-block quantization parameter that most frequently appears may be used as the basic block quantization parameter. Still alternatively, a plurality of computing techniques, such as the above-described techniques, may be prepared; the most efficient basic block quantization parameter may then be selected, and a code indicating the selected technique may be encoded.
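The alternative computations mentioned above (average, median, most frequently appearing value) can be sketched as follows; the rounding of the average to an integer is an assumption:

```python
import statistics

def basic_block_qp(sub_block_qps, method="mean"):
    # Derive the basic block quantization parameter from the sub-block
    # quantization parameters of the previous basic block.
    if method == "mean":
        return round(statistics.mean(sub_block_qps))
    if method == "median":
        return statistics.median(sub_block_qps)
    return statistics.mode(sub_block_qps)  # most frequently appearing value
```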
While the present exemplary embodiment has been described with reference to determination of an encoding mode on a slice basis, determination of an encoding mode is not limited thereto. For example, if, like MPEG-1, MPEG-2, and MPEG-4, an encoding mode can be set on a frame basis or a picture basis, that encoding mode may be employed. Furthermore, the encoding mode may be applied to a smaller block (e.g., on a basic block (macroblock) basis).
Sixth Exemplary Embodiment
Fig. 16 is a block diagram of an image decoding apparatus according to the present exemplary embodiment. In Fig. 16, the same reference numerals are used for identical or similar components as used in the second exemplary embodiment illustrated in Fig. 6, and descriptions of the components are not repeated.
A slice encoding mode decoding unit 1501 decodes the encoded data indicating the slice coding type (i.e., the slice_type code). A quantization parameter decoding unit 1502 differs from the quantization parameter decoding unit 1102 according to the second exemplary embodiment illustrated in Fig. 6 in that it receives a slice encoding mode.
In Fig. 16, stream data for one frame is input from the terminal 1100 and is input to the decoding/separating unit 1101. The decoding/separating unit 1101 decodes header information required for reproducing the image. The qp_delta_select_flag code 20005 included in the header is input to the selection flag decoding unit 1109. The selection flag decoding unit 1109 reproduces the flag indicating which one of the prediction techniques for reproducing the quantization parameter is used. The selection flag is input to the quantization parameter decoding unit 1502.
In addition, at the top of the slice, code indicating the slice encoding mode (the slice_type code 20011) is input to the slice encoding mode decoding unit 1501. The slice encoding mode decoding unit 1501 decodes the code. Thus, the slice encoding mode is reproduced. The reproduced slice encoding mode is input to the quantization parameter decoding unit 1502.
Furthermore, the division information (the split_coding_flag code 20010) is decoded by the decoding/separating unit 1101 on a basic block basis and is input to the quantization parameter decoding unit 1502. Thereafter, for each of the subsequent sub-blocks, the division information is input to the quantization parameter decoding unit 1502 together with the quantization parameter difference value code of the sub-block (the cu_qp_delta code).
Fig. 17 is a block diagram of the configuration of the quantization parameter decoding unit 1502 according to the sixth exemplary embodiment of the present invention. The present exemplary embodiment is described with reference to decoding of the encoded data generated in the fifth exemplary embodiment. In Fig. 17, the same reference numerals are used for identical or similar components as used in the second exemplary embodiment illustrated in Fig. 7, and descriptions of the components are not repeated.
A terminal 501 receives the slice encoding mode reproduced by the slice encoding mode decoding unit 1501. A control unit 502 generates a selection signal used for selecting a unit from which the selector 109 receives data on the basis of the slice encoding mode input from the terminal 501 and the selection flag input from the terminal 101. A quantization parameter prediction value determination unit 503 selects the quantization parameter of an adjacent sub-block in accordance with the intra prediction mode input from the terminal 501.
An exemplary image decoding operation performed by the image decoding unit is described below. According to the present exemplary embodiment, a moving-image bit stream is input on a frame basis. However, the configuration may be designed so that a still-image bit stream for one frame is input.
Before starting the decoding process on a slice basis, the slice encoding mode is input from the terminal 501 to the control unit 502. The control unit 502 generates a selection signal for the selector 109 using the input slice encoding mode and the selection flag. That is, when the selection flag is "0" and the slice encoding mode indicates intra prediction, the control unit 502 selects the quantization parameter storage unit 106 as the unit from which the selector 109 receives data, and the selection signal is "0". When the selection flag is "0" and the slice encoding mode indicates inter prediction, the control unit 502 likewise selects the quantization parameter storage unit 106, and the selection signal is "0". When the selection flag is "1" and the slice encoding mode indicates intra prediction, the control unit 502 selects the quantization parameter prediction value determination unit 503 as the unit from which the selector 109 receives data, and the selection signal is "1". When the selection flag is "1" and the slice encoding mode indicates inter prediction, the control unit 502 selects the quantization parameter storage unit 106, and the selection signal is "0".
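The selection rule above amounts to a small truth table: the prediction value determination unit is chosen only when the selection flag is "1" and the slice is intra coded. As a rough illustration only (this Python sketch, its function name, and its string values are assumptions for illustration, not part of the disclosed apparatus):

```python
def select_predictor_source(selection_flag, slice_encoding_mode):
    """Mimic control unit 502: decide which unit feeds the selector 109.

    Returns ("determination", 1) when the quantization parameter prediction
    value determination unit 503 is selected (selection signal "1"), and
    ("storage", 0) when the quantization parameter storage unit 106 is
    selected (selection signal "0").
    """
    if selection_flag == 1 and slice_encoding_mode == "intra":
        return ("determination", 1)
    # Every other combination falls back to the storage unit,
    # i.e., prediction using the decoding order.
    return ("storage", 0)
```

Of the four flag/mode combinations, only one routes through the adjacent-sub-block predictor; the rest use the decoding-order predictor.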
When the quantization parameter prediction value determination unit 503 performs intra prediction, it receives the intra prediction mode from the terminal 501. If the prediction mode is one of "0", "3", "5", and "7", the quantization parameter prediction value determination unit 503 references the quantization parameter of the sub-block on top of the sub-block to be decoded. If the prediction mode is one of "1", "6", and "8", it references the quantization parameter of the sub-block on the left of the sub-block to be decoded. If the prediction mode is one of "2" and "4", it selects, as the prediction value, the average of the quantization parameters of the sub-blocks on top and on the left of the sub-block to be decoded. While the present exemplary embodiment has been described with reference to the above-described combinations of a prediction mode and a referenced quantization parameter, the combinations are not limited thereto.
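The mode-to-neighbor mapping described above can be sketched as follows (a hypothetical Python sketch; the function name is an assumption, and the integer averaging for modes "2" and "4" is also an assumption, since the text specifies an average without a rounding rule):

```python
def predict_qp_from_neighbors(intra_mode, qp_above, qp_left):
    """Choose a QP prediction value from adjacent sub-blocks by intra mode.

    Modes 0, 3, 5, 7 reference the sub-block above; modes 1, 6, 8 reference
    the sub-block on the left; modes 2 and 4 use the average of both.
    """
    if intra_mode in (0, 3, 5, 7):
        return qp_above
    if intra_mode in (1, 6, 8):
        return qp_left
    if intra_mode in (2, 4):
        # Integer (floor) average; the rounding convention is an assumption.
        return (qp_above + qp_left) // 2
    raise ValueError("unknown intra prediction mode: %d" % intra_mode)
```

The intuition is that the QP of the neighbor lying along the intra prediction direction is the most likely to match the QP of the current sub-block.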
The prediction value of the quantization parameter of the sub-block input via the selector 109 is input to the sub-block quantization parameter addition unit 104 and is added to the decoded difference value. Thus, the quantization parameter is reproduced. The reproduced quantization parameter is output from the terminal 105 to the block inverse quantization unit 1104. In addition, the reproduced quantization parameter is input to the quantization parameter storage unit 106 and the quantization parameter storage unit 107 and is used for computing a prediction value of the quantization parameter of the subsequent sub-block.
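When prediction follows the decoding order, the reproduction described here reduces to a running sum: each reproduced quantization parameter is stored and serves as the prediction for the subsequent sub-block. A minimal sketch (hypothetical names, not the disclosed implementation):

```python
def decode_sub_block_qps(differences, initial_prediction):
    """Reproduce QPs for a sequence of sub-blocks when prediction follows
    the decoding order: each reproduced QP becomes the prediction value
    for the next sub-block (the role of the QP storage units)."""
    qps = []
    prediction = initial_prediction
    for delta in differences:
        qp = prediction + delta  # the addition performed by unit 104
        qps.append(qp)
        prediction = qp          # stored for the subsequent sub-block
    return qps
```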
Fig. 18 is a flowchart of the image decoding process performed by the image decoding apparatus according to the sixth exemplary embodiment.
In the processes of steps S101 and S102, the decoding/separating unit 1101 and the selection flag decoding unit 1109 decode the header information and the qp_delta_select flag code included in the header. In step S501, the slice encoding mode decoding unit 1501 decodes the slice_type code included in the slice header and obtains the slice encoding mode. The processing for each of the sub-blocks starts from step S103. In step S103, the quantization parameter decoding unit 1502 decodes the cu_qp_delta code (the quantization parameter difference value code) and reproduces the quantization parameter difference value. In step S502, the quantization parameter decoding unit 1502 determines whether the slice encoding mode is an intra coding mode or an inter coding mode. If the slice encoding mode is an intra coding mode, the processing proceeds to step S503. However, if the slice encoding mode is an inter coding mode, the processing proceeds to step S105. In step S503, the quantization parameter decoding unit 1502 makes the same determination as in step S104 of the second exemplary embodiment illustrated in Fig. 8. That is, if the selection flag is "0", the processing proceeds to step S105. However, if the selection flag is "1", the processing proceeds to step S504. In step S504, the quantization parameter decoding unit 1502 determines the prediction value from the quantization parameter of a sub-block adjacent to the sub-block to be decoded in accordance with the intra prediction mode.
Subsequently, the processes in steps S107 through S113 are performed. Thus, for each of the sub-blocks, the basic blocks, and the slices, the difference value of the sub-block quantization parameter is reproduced, the prediction error in the sub-block is decoded, and the reproduction image is generated.
Through the above-described configuration and operations, prediction of a quantization parameter can be adaptively changed by referencing the coding mode of, for example, a slice that allows the difference value of each of the sub-block quantization parameters to be encoded. Thus, a decrease in processing delay and highly efficient encoding can be advantageously obtained.
Seventh Exemplary Embodiment
Fig. 19 is a block diagram of an image encoding apparatus according to the present exemplary embodiment. In Fig. 19, the same reference numerals are used for identical or similar components as used in the fifth exemplary embodiment illustrated in Fig. 13, and descriptions of the components are not repeated. The seventh exemplary embodiment differs from the fifth exemplary embodiment illustrated in Fig. 13 in that the need for the operation unit 11011 is eliminated. In addition, a quantization parameter encoding unit 14008 differs from the quantization parameter encoding unit 11108 of the fifth exemplary embodiment illustrated in Fig. 13 in that it does not receive a selection flag.
Like the fifth exemplary embodiment, before encoding a frame, the slice encoding mode determination unit 11100 determines whether a slice of the frame is to be intra coded or inter coded. The determined slice encoding mode is input to the quantization parameter encoding unit 14008 and the block predicting unit 11103.
Fig. 20 is a detailed block diagram of the quantization parameter encoding unit 14008 according to the present exemplary embodiment. In Fig. 20, the same reference numerals are used for identical or similar components as used in the fifth exemplary embodiment illustrated in Fig. 14, and descriptions of the components are not repeated.
A selector 600 switches between the units from which data is input on the basis of the slice encoding mode input from the terminal 400.
Like the fifth exemplary embodiment, the quantization parameter determined by the quantization parameter determination unit 11002 is stored in the quantization parameter storage unit 2.
The slice encoding mode input from the terminal 400 is input to the selector 600. If the slice encoding mode indicates intra prediction, the quantization parameter prediction value determination unit 403 is selected by the selector 600 as a unit from which data is input. However, if the slice encoding mode indicates inter prediction, the quantization parameter storage unit 3 is selected by the selector 600 as a unit from which data is input.
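In other words, with the selection flag removed in the present exemplary embodiment, the selector is driven by the slice encoding mode alone. A hedged sketch of this rule (the function name and string values are hypothetical):

```python
def encoder_predictor_source(slice_encoding_mode):
    """Mimic selector 600 of the seventh exemplary embodiment: with no
    selection flag, the slice encoding mode alone decides the source of
    the quantization parameter prediction value."""
    if slice_encoding_mode == "intra":
        # Adjacent-sub-block prediction per the intra prediction mode.
        return "prediction_value_determination_unit_403"
    # Inter slices predict from the previously encoded sub-block.
    return "quantization_parameter_storage_unit_3"
```

Because the decoder can derive the same decision from the slice_type code it decodes anyway, no extra flag needs to be transmitted.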
Like the fifth exemplary embodiment, the quantization parameter prediction value determination unit 403 computes the prediction value from the quantization parameter of an adjacent sub-block in accordance with the intra prediction mode.
The quantization parameter received via the selector 600 is input to the sub-block quantization parameter difference unit 10 and the quantization parameter storage units 3 and 4. The information stored in the quantization parameter storage units 3 and 4 is used for prediction of the quantization parameter of the subsequent sub-blocks.
Subsequently, like the fifth exemplary embodiment, the sub-block quantization parameter difference unit 10 computes a quantization parameter difference value. The sub-block quantization parameter encoding unit 11 encodes the quantization parameter difference value, and the cu_qp_delta code is output from the terminal 12.
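Viewed end to end, the difference unit emits cu_qp_delta values such that adding each difference back to the running prediction reproduces the original quantization parameters. A minimal sketch of the decoding-order case (hypothetical names, illustration only):

```python
def encode_sub_block_qps(qps, initial_prediction):
    """Compute cu_qp_delta difference values for a sequence of sub-block
    QPs when prediction follows the encoding order: each encoded QP
    becomes the prediction value for the next sub-block."""
    deltas = []
    prediction = initial_prediction
    for qp in qps:
        # The subtraction performed by the sub-block quantization
        # parameter difference unit 10.
        deltas.append(qp - prediction)
        prediction = qp
    return deltas
```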
Fig. 21 is a flowchart of the image encoding process performed by the image encoding apparatus according to the seventh exemplary embodiment. As illustrated in Fig. 21, the flowchart differs from that of the fifth exemplary embodiment illustrated in Fig. 15 in that steps S001, S003, and S403 are eliminated.
Like the fifth exemplary embodiment, the code integrating unit 11009 encodes the header in step S002. Subsequently, the code integrating unit 11009 performs an encoding process on each of the slices. In step S401, the slice encoding mode is determined, and the determined slice encoding mode is encoded.
In the processes from step S004 through step S006, the frame is divided into basic blocks, each of which is further divided into sub-blocks. Thereafter, the quantization parameter of each of the sub-blocks is determined. In step S402, the quantization parameter encoding unit 14008 determines whether the slice encoding mode is an intra coding mode or an inter coding mode. If the slice encoding mode is an intra coding mode, the processing proceeds to step S404. However, if the slice encoding mode is an inter coding mode, the processing proceeds to step S008. In step S404, the quantization parameter encoding unit 14008 determines the prediction value from the quantization parameter of an adjacent sub-block in accordance with the intra prediction mode.
Subsequently, the processes in steps S010 through S016 are performed. Thus, for each of the sub-blocks, the basic blocks, and the slices, the difference value of the sub-block quantization parameter is computed and encoded, the prediction error in the sub-block is encoded, and the reproduction image is generated.
Fig. 22 illustrates an example of a bit stream generated in the present exemplary embodiment. The sequence header (Sequence Parameter Set) 20001 does not include the qp_delta_select_flag code 20005. The slice_type code 20011 indicating the slice encoding mode is included in the slice header.
By using the encoding mode instead of the selection flag through the above-described configuration and operations, the overhead can be reduced. Furthermore, by referencing the encoding mode of, for example, a slice that allows the difference value of each of the sub-block quantization parameters to be encoded, and by adaptively changing prediction of the quantization parameter accordingly, a decrease in processing delay and highly efficient encoding can be advantageously obtained.
While the present exemplary embodiment has been described with reference to determination of an encoding mode on a slice basis, determination of an encoding mode is not limited thereto. For example, if, like MPEG-1, MPEG-2, and MPEG-4, an encoding mode can be set on a frame basis or a picture basis, that encoding mode may be employed. Furthermore, the encoding mode may be applied to a smaller block (e.g., on a basic block (macroblock) basis).
Eighth Exemplary Embodiment
Fig. 23 is a block diagram of an image decoding apparatus according to the present exemplary embodiment. In Fig. 23, the same reference numerals are used for identical or similar components as used in the sixth exemplary embodiment illustrated in Fig. 16, and descriptions of the components are not repeated. The present exemplary embodiment differs from the sixth exemplary embodiment illustrated in Fig. 16 in that it does not need the selection flag decoding unit 1109. In addition, the present exemplary embodiment decodes the bit stream generated in the seventh exemplary embodiment (refer to Fig. 22).
In Fig. 23, stream data for one frame is input from the terminal 1100 and is input to the decoding/separating unit 1101. The decoding/separating unit 1101 decodes header information required for reproducing the image.
In addition, at the top of the slice, code indicating the slice encoding mode (the slice_type code 20011) is input to the slice encoding mode decoding unit 1501 and is decoded. Thus, the slice encoding mode is reproduced. The reproduced slice encoding mode is input to the quantization parameter decoding unit 1502.
Furthermore, the division information (the split_coding_flag code 20010) is decoded by the decoding/separating unit 1101 on a basic block basis and is input to the quantization parameter decoding unit 1502. Thereafter, for each of the subsequent sub-blocks, the quantization parameter difference value code of the sub-block (the cu_qp_delta code) is input to the quantization parameter decoding unit 1502.
Fig. 24 is a block diagram of the configuration of the quantization parameter decoding unit 1502 according to the eighth exemplary embodiment of the present invention. The present exemplary embodiment is described with reference to decoding of the encoded data generated in the seventh exemplary embodiment. In Fig. 24, the same reference numerals are used for identical or similar components as used in the sixth exemplary embodiment illustrated in Fig. 17, and descriptions of the components are not repeated. A selector 701 switches between the units from which data is input on the basis of the slice encoding mode input from the terminal 501.
Like the sixth exemplary embodiment, before the slice-based decoding process is started, the slice encoding mode is input from the terminal 501 to a selector 701. If the input slice encoding mode indicates intra prediction, the selector 701 selects the quantization parameter prediction value determination unit 503 as the unit from which data is input. However, if the slice encoding mode indicates inter prediction, the selector 701 selects the quantization parameter storage unit 106 as the unit from which data is input.
Subsequently, like the sixth exemplary embodiment, the quantization parameter prediction value determination unit 503 computes the prediction value from the quantization parameter of an adjacent sub-block in accordance with the intra prediction mode input from the terminal 501.
The prediction value of the quantization parameter of the sub-block input via the selector 701 is input to the sub-block quantization parameter addition unit 104 and is added to the decoded difference value. Thus, the quantization parameter is reproduced. The reproduced quantization parameter is output from the terminal 105 to the block inverse quantization unit 1104. In addition, the reproduced quantization parameter is input to the quantization parameter storage unit 106 and the quantization parameter storage unit 107 and is used for computing a prediction value of the quantization parameter of the subsequent sub-block.
Fig. 25 is a flowchart of the image decoding process performed by the image decoding apparatus according to the eighth exemplary embodiment. As illustrated in Fig. 25, the present exemplary embodiment differs from the sixth exemplary embodiment illustrated in Fig. 18 in that the need for steps S102 and S503 is eliminated.
In step S101, the header information is decoded. In step S501, the slice encoding mode decoding unit 1501 decodes the slice_type code included in the slice header and obtains the slice encoding mode. The processing for each of the sub-blocks starts from step S103. In step S103, the quantization parameter decoding unit 1502 decodes the cu_qp_delta code (the quantization parameter difference value code) and reproduces the quantization parameter difference value. In step S502, the quantization parameter decoding unit 1502 determines whether the slice encoding mode is an intra coding mode or an inter coding mode. If the slice encoding mode is an intra coding mode, the processing proceeds to step S504. However, if the slice encoding mode is an inter coding mode, the processing proceeds to step S105. In step S504, the quantization parameter decoding unit 1502 determines the prediction value from the quantization parameter of a sub-block adjacent to the sub-block to be decoded in accordance with the intra prediction mode.
Subsequently, the processes in steps S107 through S113 are performed. Thus, for each of the sub-blocks, the basic blocks, and the slices, the difference value of the sub-block quantization parameter is reproduced, the prediction error in the sub-block is decoded, and the reproduction image is generated.
Through the above-described configuration and operations, prediction of a quantization parameter can be adaptively changed by referencing the encoding mode of, for example, a slice that allows the difference value of each of the sub-block quantization parameters to be encoded. Thus, the decoding apparatus can decode a bit stream that provides a decrease in processing delay and an increase in image quality.
While the first to eighth exemplary embodiments have been described with reference to the selection flag having a value of one of "0" and "1", the value of the selection flag is not limited thereto. For example, a multi-bit flag capable of switching among the encoding methods of the first, third, and fifth exemplary embodiments may be employed. In addition, while the above exemplary embodiments have been described with reference to determination as to whether, on the basis of a slice-based encoding mode, the prediction value is obtained using the decoding order or using the quantization parameter of the adjacent sub-block, the determination may be made on a basic-block basis. For example, the determination may be made on the basis of the encoding mode of an H.264 macroblock.
Ninth Exemplary Embodiment
While the above exemplary embodiments have been described with reference to the processing units (illustrated in Figs. 1, 3, 6, 7, 9, 11, 13, 14, 16, 17, 19, 20, 23, and 24) formed as hardware, the processes performed by these processing units may be realized by computer programs. Fig. 26 is a block diagram of the computer hardware applicable to the image encoding apparatus and the image decoding apparatus according to the above-described exemplary embodiments.
A central processing unit (CPU) 1401 performs overall control of the computer using computer programs and data stored in a random access memory (RAM) 1402 and a read only memory (ROM) 1403. In addition, the CPU 1401 performs the above-described processing performed by the image encoding apparatus and the image decoding apparatus according to the above-described exemplary embodiments. That is, the CPU 1401 functions as the processing units illustrated in Figs. 1, 3, 6, 7, 9, 11, 13, 14, 16, 17, 19, 20, 23, and 24. The RAM 1402 has a memory area that temporarily holds a computer program and data loaded from an external storage unit 1406 and data externally acquired via an interface (I/F) 1407. In addition, the RAM 1402 has a work area used when the CPU 1401 performs a variety of processes. That is, the RAM 1402 is used as, for example, a frame memory or a variety of memory areas. The ROM 1403 holds the setting data and a boot program of the computer. An operation unit 1404 includes, for example, a keyboard and a mouse. A user of the computer manipulates the operation unit 1404 and can input a variety of instructions to the CPU 1401. A display unit 1405 displays the result of processing performed by the CPU 1401. The display unit 1405 is formed from, for example, a hold display unit, such as a liquid crystal display, or an impulse display unit, such as a field emission display unit.
The external storage unit 1406 is a high-capacity information storage unit, such as a hard disk drive. The external storage unit 1406 stores an operating system (OS) and the computer programs used by the CPU 1401 to realize the functions of the units illustrated in Figs. 1, 6, 13, 16, 19, and 23. Furthermore, the external storage unit 1406 may store image data items to be processed. The computer programs and data stored in the external storage unit 1406 are loaded into the RAM 1402 as needed under the control of the CPU 1401 and are processed by the CPU 1401. A local area network (LAN), a network such as the Internet, a projector, or a display unit can be connected to the I/F 1407, through which the computer can receive and transmit a variety of information. A bus 1408 connects the above-described devices with one another. The CPU 1401 mainly controls the operations illustrated in the above-described flowcharts.
Other Embodiments
The functions of the above-described exemplary embodiments can also be realized by mounting, in a system, a storage medium storing a computer program that realizes the functions, and having the system read and execute the code of the computer program. In such a case, the code of the computer program read out of the storage medium realizes the functions of the above-described exemplary embodiments, and the storage medium storing the code of the computer program constitutes the present invention. In addition, the case in which, for example, the operating system (OS) running in a computer executes some or all of the above-described functions in accordance with the instructions of the program code is also encompassed within the invention.
Furthermore, the present invention may be realized through the following embodiment. That is, the computer program code read out of the storage medium is written to a memory of an add-on expansion board mounted in a computer or a memory of an add-on expansion unit connected to a computer. After the program code is written, a CPU in the add-on expansion board or in the add-on expansion unit executes some or all of the functions of the above-described embodiments under the control of the computer program code.
When the present invention is applied to the above-described storage medium, the storage medium stores computer programs corresponding to the above-described flowcharts. While the above exemplary embodiments have been described with reference to determination as to whether, on the basis of a slice-based coding mode, the prediction value is obtained using the decoding order or using the quantization parameter of the adjacent sub-block, the determination may be made on a basic-block basis. For example, the determination may be made on the basis of the result of decoding of the mb_type code in an encoding mode of H.264 macroblock.
While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.
This application claims the benefit of Japanese Patent Application No. 2011-091346, filed April 15, 2011, which is hereby incorporated by reference herein in its entirety.

Claims (3)

  1. An image encoding apparatus comprising:
    a first computing unit configured to compute a prediction value of a quantization parameter of a block to be processed using a quantization parameter of a block that neighbors the block to be processed;
    a second computing unit configured to compute a prediction value of the quantization parameter of the block to be processed using a quantization parameter of a block encoded in an encoding order of blocks;
    a selection unit configured to select one of the first computing unit and the second computing unit;
    a difference value computing unit configured to compute a difference value between the selected prediction value and the quantization parameter of the block to be processed; and
    an encoding unit configured to encode the difference value and generate encoded difference value data.
  2. An image encoding method comprising:
    computing a prediction value of a quantization parameter of a block to be processed using a quantization parameter of a block that neighbors the block to be processed;
    computing a prediction value of the quantization parameter of the block to be processed from a quantization parameter of a block encoded in an encoding order of blocks;
    selecting one of the computing a prediction value of a quantization parameter of a block to be processed using a quantization parameter of a block that neighbors the block to be processed and the computing a prediction value of the quantization parameter of the block to be processed using a quantization parameter of a block encoded in an encoding order of blocks;
    computing a difference value between the selected prediction value and the quantization parameter of the block to be processed; and
    encoding the difference value and generating encoded difference value data.
  3. A computer-readable program comprising:
    program code that causes a computer to function as the image encoding apparatus according to Claim 1.
PCT/JP2012/002525 2011-04-15 2012-04-12 Image encoding apparatus, image encoding method, image encoding program, image decoding apparatus, image decoding method, and image decoding program Ceased WO2012140889A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2011091346A JP6039163B2 (en) 2011-04-15 2011-04-15 Image encoding device, image encoding method and program, image decoding device, image decoding method and program
JP2011-091346 2011-04-15

Publications (1)

Publication Number Publication Date
WO2012140889A1 true WO2012140889A1 (en) 2012-10-18

Family

ID=47009080

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2012/002525 Ceased WO2012140889A1 (en) 2011-04-15 2012-04-12 Image encoding apparatus, image encoding method, image encoding program, image decoding apparatus, image decoding method, and image decoding program

Country Status (2)

Country Link
JP (1) JP6039163B2 (en)
WO (1) WO2012140889A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20120016980A (en) 2010-08-17 2012-02-27 한국전자통신연구원 Image encoding method and apparatus, and decoding method and apparatus

Citations (4)

Publication number Priority date Publication date Assignee Title
WO2008126135A1 (en) * 2007-03-20 2008-10-23 Fujitsu Limited Time-varying image encoding method and device, and time-varying image decoding device
JP2009531999A (en) * 2006-03-29 2009-09-03 クゥアルコム・インコーポレイテッド Scalable video processing
WO2009158113A2 (en) * 2008-06-03 2009-12-30 Microsoft Corporation Adaptive quantization for enhancement layer video coding
JP2010035146A (en) * 2008-07-02 2010-02-12 Canon Inc Encoder, and encoding method

Cited By (6)

Publication number Priority date Publication date Assignee Title
CN111937395A (en) * 2018-04-01 2020-11-13 金起佰 Method and apparatus for encoding/decoding images
EP3780620A4 (en) * 2018-04-01 2022-01-26 B1 Institute of Image Technology, Inc. IMAGE CODING/DECODING METHOD AND APPARATUS
US11297309B2 (en) 2018-04-01 2022-04-05 B1 Institute Of Image Technology, Inc. Method and apparatus for encoding/decoding image
CN111937395B (en) * 2018-04-01 2024-05-17 有限公司B1影像技术研究所 Method and apparatus for encoding/decoding image
US12075026B2 (en) 2018-04-01 2024-08-27 B1 Institute Of Image Technology, Inc. Method and apparatus for encoding/decoding image
EP4443878A3 (en) * 2018-04-01 2024-12-18 B1 Institute of Image Technology, Inc. Method and apparatus for encoding/decoding image

Also Published As

Publication number Publication date
JP2012227612A (en) 2012-11-15
JP6039163B2 (en) 2016-12-07

Similar Documents

Publication Publication Date Title
RU2722536C1 (en) Output of reference mode values and encoding and decoding of information representing prediction modes
US10334247B2 (en) Methods and apparatus for determining quantization parameter predictors from a plurality of neighboring quantization parameters
KR102257269B1 (en) Features of intra block copy prediction mode for video and image coding and decoding
CN107087170B (en) Encoding device, encoding method, decoding device, and decoding method
RU2551800C2 (en) Image coding device, image coding method, software for this, image decoding device, image decoding method and software for this
WO2012140889A1 (en) Image encoding apparatus, image encoding method, image encoding program, image decoding apparatus, image decoding method, and image decoding program
JP6415637B2 (en) Decoding device, decoding method, program, and storage medium
JP6150912B2 (en) Image encoding device, image encoding method and program, image decoding device, image decoding method and program
JP6953576B2 (en) Coding device, coding method, program and storage medium
JP6686095B2 (en) Decoding device, decoding method, program, and storage medium
JP6874844B2 (en) Moving image coding device, moving image coding method, and moving image coding program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12771349

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 12771349

Country of ref document: EP

Kind code of ref document: A1