HK1186609B - Methods and apparatus for determining quantization parameter predictors from a plurality of neighboring quantization parameters - Google Patents
Description
Cross Reference to Related Applications
This application claims the benefit of U.S. provisional patent application serial No. 61/353,365, filed on June 10, 2010, the entire contents of which are incorporated herein by reference.
Technical Field
The present principles relate generally to video encoding and decoding and, more particularly, to methods and apparatus for determining a quantization parameter predictor from a plurality of neighboring quantization parameters.
Background
Most video applications seek the highest possible perceived quality for a given set of bit rate constraints. For example, in low bit rate applications such as video telephony systems, the video encoder provides higher quality by eliminating strong visual artifacts in regions of interest that are more visually noticeable and therefore more important. In high bit rate applications, on the other hand, visually lossless quality is expected for every picture, and the video encoder should achieve transparent quality. One challenge in obtaining transparent visual quality in high bit rate applications is preserving detail, especially in smooth regions, where detail loss is more visible than in non-smooth regions due to the texture masking property of the human visual system.
Increasing the available bit rate is one of the most straightforward ways to improve objective and subjective quality. When given a bit rate, the encoder manipulates its bit allocation module to spend the available bits where the greatest visual quality improvement can be obtained. In non-real-time applications, such as Digital Video Disc (DVD) editing, a video encoder may utilize a Variable Bit Rate (VBR) design to produce video of constant quality over time on both difficult-to-encode content and easy-to-encode content. In such applications, the available bits are distributed appropriately over different video segments in order to obtain a constant quality. In contrast, Constant Bit Rate (CBR) systems assign the same number of bits to the interval of one or more pictures regardless of their encoding difficulty and produce visual quality that varies with video content. For both variable bit rate and constant bit rate coding systems, the encoder may allocate bits according to a perceptual model within the picture. One feature of human perception is texture masking, which explains why the human eye is more sensitive to quality degradation in smooth areas than in textured areas. This property can be exploited to increase the number of bits allocated to the smooth region to achieve higher visual quality.
The quantization process in a video encoder controls the number and quality of the coded bits. The quality is typically adjusted by adjusting the Quantization Parameter (QP). The quantization parameters may include a quantization step size, a rounding offset, and a scaling matrix. In the International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) Moving Picture Experts Group-4 (MPEG-4) Part 10 Advanced Video Coding (AVC) Standard/International Telecommunication Union, Telecommunication Sector (ITU-T) H.264 Recommendation (hereinafter the "MPEG-4 AVC Standard"), the quantization parameter values may be adjusted at the slice or macroblock (MB) level. The encoder has the flexibility to adjust the quantization parameter and signal the adjustment to the decoder. Signaling the quantization parameter, however, incurs an overhead cost.
QP coding in the MPEG-4 AVC Standard
The syntax in the MPEG-4 AVC Standard allows the quantization parameter to be different for each slice and macroblock (MB). The value of the quantization parameter is an integer in the range 0-51. The initial value for each slice can be derived from the syntax element pic_init_qp_minus26. The initial value is modified at the slice level when a non-zero value of slice_qp_delta is encoded, and is further modified when a non-zero value of mb_qp_delta is encoded at the macroblock level.
Mathematically, the initial quantization parameter for a slice is calculated as follows:
SliceQP_Y = 26 + pic_init_qp_minus26 + slice_qp_delta (1)
At the macroblock level, the value of QP is derived as follows:
QP_Y = QP_Y,PREV + mb_qp_delta (2)
where QP_Y,PREV is the quantization parameter of the previous macroblock in decoding order in the current slice.
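The two derivations above can be sketched in a few lines. This is an illustrative Python sketch under the definitions of equations (1) and (2), not normative decoder code; the function names are invented for the example.

```python
def slice_qp(pic_init_qp_minus26, slice_qp_delta):
    """Initial luma QP of a slice, per equation (1)."""
    qp = 26 + pic_init_qp_minus26 + slice_qp_delta
    assert 0 <= qp <= 51, "QP values must lie in the range 0-51"
    return qp

def macroblock_qp(qp_prev, mb_qp_delta):
    """Luma QP of a macroblock from the previous macroblock's QP, per equation (2)."""
    return qp_prev + mb_qp_delta
```

For example, with pic_init_qp_minus26 = 0 and slice_qp_delta = 2, the slice starts at QP 28, and a macroblock coded with mb_qp_delta = -3 after a QP-28 macroblock uses QP 25.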
Quantization parameter coding of the first prior art method
In the first prior art approach (and in a second prior art approach described in more detail herein below), motion partitions larger than 16 × 16 pixels are implemented. Using the first prior art approach as an example, macroblocks of sizes 64 × 64, 64 × 32, 32 × 64, 32 × 32, 32 × 16, and 16 × 32 are used in addition to the existing MPEG-4 AVC Standard partition sizes. Two new syntax elements, mb64_delta_qp and mb32_delta_qp, are introduced to encode the quantization parameter for the large blocks.
If a 64 × 64 block is partitioned into four separate 32 × 32 blocks, each 32 × 32 block may have its own quantization parameter. If a 32 × 32 block is further partitioned into four 16 × 16 blocks, each 16 × 16 block may also have its own quantization parameter. This information is signaled to the decoder using the delta_qp syntax. For a 64 × 64 block, if mb64_type is not P8 × 8 (meaning no further partitioning), mb64_delta_qp is encoded to signal the relative change of the luma quantizer step size with respect to the block on the top-left of the current block, which may be of size 64 × 64, 32 × 32, or 16 × 16. mb64_qp_delta is limited to the range [-26, 25]. When mb64_qp_delta is not present in any block (including the P_Skip and B_Skip block types), its value is inferred to be equal to 0. The luma quantization value QP_Y of the current block is derived as follows:
QP_Y = (QP_Y,PREV + mb64_qp_delta + 52) % 52, (3)
where QP_Y,PREV is the luma QP of the previous 64 × 64 block in decoding order in the current slice. For the first 64 × 64 block in the slice, QP_Y,PREV is set equal to the slice quantization parameter sent in the slice header.
If mb64_type is P8 × 8 (meaning that the 64 × 64 block is partitioned into four 32 × 32 blocks), the same process is repeated for each 32 × 32 block. That is, if mb32_type is not P8 × 8 (meaning no further partitioning), mb32_delta_qp is encoded. Otherwise, delta_qp for each 16 × 16 macroblock is sent to the decoder as in the MPEG-4 AVC Standard. It should be noted that when delta_qp is signaled on a 64 × 64 or 32 × 32 block size, it applies to all blocks in the motion partition.
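Equation (3) differs from equation (2) in that the modulo-52 arithmetic wraps the result back into the legal range. A minimal sketch of that derivation (the function name is illustrative, not part of the proposal's syntax):

```python
def qp_from_delta_mod(qp_prev, mb64_qp_delta):
    """Luma QP per equation (3): the +52 and modulo 52 keep the result in [0, 51]
    even when qp_prev + mb64_qp_delta under- or overflows the range."""
    assert -26 <= mb64_qp_delta <= 25, "mb64_qp_delta is limited to [-26, 25]"
    return (qp_prev + mb64_qp_delta + 52) % 52
```

For instance, a delta of -2 applied to QP_Y,PREV = 0 wraps around to QP 50 rather than producing a negative value.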
Quantization parameter coding in a second prior art approach
In a second prior art approach, large blocks are supported through the concept of coding units. In this approach, a Coding Unit (CU) is defined as a basic unit with a square shape. Although it plays a role similar to that of the macroblocks and sub-macroblocks in the MPEG-4 AVC Standard, the main difference lies in the fact that the coding unit may have various sizes, with no distinction corresponding to its size. All processing except frame-based loop filtering, including intra/inter prediction, transform, quantization, and entropy coding, is performed on a coding unit basis. Two special terms are defined: the Largest Coding Unit (LCU) and the Smallest Coding Unit (SCU). For ease of implementation, the LCU size and the SCU size are limited to powers of 2 greater than or equal to 8.
Assume that a picture consists of non-overlapping LCUs. Since the coding units are constrained to be square, the coding unit structure within the LCU can be represented as a recursive tree representation adapted to the picture. That is, a coding unit is characterized by a maximum coding unit size and a hierarchical depth in a maximum coding unit to which the coding unit belongs.
In combination with the coding unit, the second prior art approach introduces the basic unit of the prediction mode: the Prediction Unit (PU). It should be noted that a prediction unit is defined only for a last-depth coding unit, and its size is limited to the size of the coding unit. Similar to conventional standards, two different terms are defined to specify the prediction method: the prediction type and the prediction unit partition. The prediction type is one of skip, intra, or inter, which roughly describes the nature of the prediction method. Thereafter, the possible prediction unit partitions are defined according to the prediction type. For a coding unit of size 2N × 2N, an intra prediction unit has two different possible partitions: 2N × 2N (i.e., no partition) and N × N (i.e., quarter partition). An inter prediction unit has eight different possible partitions: four symmetric partitions (2N × 2N, 2N × N, N × 2N, N × N) and four asymmetric partitions (2N × nU, 2N × nD, nL × 2N, and nR × 2N).
In addition to the coding unit and prediction unit definitions, a Transform Unit (TU) for transform and quantization is defined separately. It should be noted that the size of the transform unit may be larger than the size of the prediction unit, which differs from previous video standards, but the transform unit cannot exceed the size of the coding unit. However, the transform unit size is not arbitrary: once the prediction unit structure is defined for a coding unit, only two transform unit partitions are possible. As a result, the size of the transform unit in the coding unit is determined by transform_unit_size_flag. If transform_unit_size_flag is set to 0, the size of the transform unit is the same as the size of the coding unit to which it belongs. Otherwise, the transform unit size is set to N × N or N/2 × N/2 according to the prediction unit partition.
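The transform_unit_size_flag rule can be sketched as follows. This is an assumption-laden illustration: the function and parameter names are invented, and the mapping from the prediction unit partition to N versus N/2 (modeled here by a boolean) is an assumption for illustration only, since the text above does not spell out which partitions select which size.

```python
def transform_unit_size(cu_size, pu_partition_selects_n, transform_unit_size_flag):
    """Sketch of the TU-size rule: flag == 0 means the TU matches the CU;
    otherwise the TU is N x N or N/2 x N/2 depending on the PU partition.
    cu_size is the coding unit size 2N; pu_partition_selects_n is a
    hypothetical stand-in for the PU-partition-dependent choice."""
    if transform_unit_size_flag == 0:
        return cu_size                     # TU size equals the CU size
    n = cu_size // 2
    return n if pu_partition_selects_n else n // 2   # N x N or N/2 x N/2
```

So for a 64 × 64 coding unit, the only possible transform unit sizes under this rule are 64, 32, or 16.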
The rationale for quantizing and de-quantizing the coefficients of large transforms is the same as that used in the MPEG-4 AVC Standard, i.e., a scalar quantizer with a dead zone. Moreover, the same quantization parameter range and the corresponding quantization step sizes are used in the proposed codec. The proposal allows the quantization parameter to vary for each coding unit. The luma quantization value QP_Y of the current block is derived as follows:
QP_Y = SliceQP_Y + qp_delta, (4)
where SliceQP_Y is the quantization parameter of the slice, and qp_delta is the difference between the quantization parameter of the current coding unit and that of the slice. The same quantization parameter is applied to the entire coding unit.
Typical QP coding procedure: QP predictor from a single QP
Turning to fig. 1, a conventional quantization parameter encoding process in a video encoder is indicated generally by the reference numeral 100. The method 100 includes a start block 105 that passes control to a function block 110. The function block 110 sets the quantization parameter (QP) for a slice to SliceQP_Y, stores SliceQP_Y as the QP predictor, and passes control to a loop limit block 115. The loop limit block 115 begins a loop using a variable i ranging from 1 to the number (#) of coding units, and passes control to a function block 120. The function block 120 sets the QP for each coding unit to QP_CU, and passes control to a function block 125. The function block 125 encodes delta_QP = QP_CU - SliceQP_Y, and passes control to a function block 130. The function block 130 encodes coding unit i, and passes control to a loop limit block 135. The loop limit block 135 ends the loop over the coding units, and passes control to an end block 199.
Thus, in the method 100, a single QP, namely the slice QP (SliceQP_Y), is used as the predictor of the QP to be encoded. With respect to function block 120, the QP of the coding unit is adjusted based on the content of the coding unit and/or previous coding results. For example, a smooth coding unit will lower the QP in order to improve the perceptual quality. In another example, if the previous coding unit used more bits than allocated, the QP of the current coding unit will be increased so that it consumes fewer bits than originally allocated. In either case, the difference between the QP of the current coding unit (QP_CU) and the QP predictor SliceQP_Y is encoded (via function block 125).
Turning to fig. 2, a conventional quantization parameter decoding process in a video decoder is indicated generally by the reference numeral 200. The method 200 includes a start block 205 that passes control to a function block 210. The function block 210 decodes SliceQP_Y, stores SliceQP_Y as the QP predictor, and passes control to a loop limit block 215. The loop limit block 215 begins a loop using a variable i ranging from 1 to the number (#) of coding units, and passes control to a function block 220. The function block 220 decodes delta_QP, and passes control to a function block 225. The function block 225 sets the QP for each coding unit to QP_CU = SliceQP_Y + delta_QP, and passes control to a function block 230. The function block 230 decodes coding unit i, and passes control to a loop limit block 235. The loop limit block 235 ends the loop over the coding units, and passes control to an end block 299. With respect to block 230, the coding unit is thus reconstructed.
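The single-predictor scheme of figs. 1 and 2 can be summarized with a short round-trip sketch. The function names are illustrative, and the per-coding-unit QPs are assumed to have already been chosen by the encoder's rate-control or perceptual model.

```python
def encode_qps(slice_qp_y, cu_qps):
    """Method 100: every coding-unit QP is predicted from the slice QP alone,
    so one delta_QP is encoded per coding unit."""
    return [qp_cu - slice_qp_y for qp_cu in cu_qps]

def decode_qps(slice_qp_y, deltas):
    """Method 200: reconstruct each coding-unit QP per equation (4)."""
    return [slice_qp_y + delta for delta in deltas]
```

The round trip is lossless: decoding the deltas against the same slice QP recovers exactly the QPs the encoder chose. Note that every delta is taken against the one slice-level predictor, which is the limitation the present principles address.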
Disclosure of Invention
These and other drawbacks and disadvantages of the prior art are addressed by the present principles, which are directed to methods and apparatus for determining quantization parameter predictors from a plurality of neighboring quantization parameters.
According to an aspect of the present principles, there is provided an apparatus. The apparatus includes an encoder for encoding image data of at least a portion of a picture using a quantization parameter predictor for a current quantization parameter to be applied to the image data. The quantization parameter predictor is determined using a plurality of quantization parameters from previously encoded neighboring portions. A difference between the current quantization parameter and the quantization parameter predictor is encoded for signaling to a corresponding decoder.
According to another aspect of the present principles, there is provided a method in a video encoder. The method includes encoding image data of at least a portion of a picture using a quantization parameter predictor for a current quantization parameter to be applied to the image data. The quantization parameter predictor is determined using a plurality of quantization parameters from previously encoded neighboring portions. The method also includes encoding a difference between the current quantization parameter and the quantization parameter predictor for signaling to a corresponding decoder.
According to yet another aspect of the present principles, there is provided an apparatus. The apparatus includes a decoder for decoding image data of at least a portion of a picture using a quantization parameter predictor for a current quantization parameter to be applied to the image data. The quantization parameter predictor is determined using a plurality of quantization parameters from previously encoded neighboring portions. A difference between the current quantization parameter and the quantization parameter predictor is decoded for use in decoding the image data.
According to another aspect of the present principles, there is provided a method in a video decoder. The method includes decoding image data of at least a portion of a picture using a quantization parameter predictor for a current quantization parameter to be applied to the image data. The quantization parameter predictor is determined using a plurality of quantization parameters from previously encoded neighboring portions. The method also includes decoding a difference between the current quantization parameter and the quantization parameter predictor for decoding the image data.
These and other aspects, features and advantages of the present principles will become apparent from the following detailed description of exemplary embodiments, which is to be read in connection with the accompanying drawings.
Drawings
The present principles may be better understood in light of the following exemplary figures, in which:
fig. 1 is a flowchart illustrating a conventional quantization parameter encoding process in a video encoder according to the related art;
fig. 2 is a flowchart illustrating a conventional quantization parameter decoding process in a video decoder according to the related art;
fig. 3 is a block diagram illustrating an exemplary video encoder to which the present principles may be applied, according to an embodiment of the present principles;
fig. 4 is a block diagram illustrating an exemplary video decoder to which the present principles may be applied, according to an embodiment of the present principles;
fig. 5 is a flow diagram illustrating an exemplary quantization parameter encoding process in a video encoder, in accordance with an embodiment of the present principles;
fig. 6 is a flow diagram illustrating an exemplary quantization parameter decoding process in a video decoder, in accordance with an embodiment of the present principles;
fig. 7 is a diagram illustrating exemplary neighboring coding units, according to an embodiment of the present principles;
fig. 8 is a flow diagram illustrating another exemplary quantization parameter encoding process in a video encoder, in accordance with an embodiment of the present principles;
fig. 9 is a flow diagram illustrating another exemplary quantization parameter decoding process in a video decoder, in accordance with an embodiment of the present principles;
fig. 10 is a flow diagram illustrating yet another exemplary quantization parameter encoding process in a video encoder, in accordance with an embodiment of the present principles; and
fig. 11 is a flow diagram illustrating yet another exemplary quantization parameter decoding process in a video decoder, in accordance with an embodiment of the present principles.
Detailed Description
The present principles are directed to a method and apparatus for determining a quantization parameter predictor from a plurality of neighboring quantization parameters.
This description illustrates the present principles. It will thus be appreciated that those skilled in the art will be able to devise various arrangements that, although not explicitly described or shown herein, embody the present principles and are included within its spirit and scope.
All examples and conditional language recited herein are intended to aid the reader in understanding the teaching objectives contributed by the inventor(s) to furthering the art of the present principles and concepts, and are to be construed as being without limitation to such specifically recited examples and conditions.
Moreover, all statements herein reciting principles, aspects, and embodiments of the present principles, as well as specific examples thereof, are intended to encompass both structural and functional equivalents thereof. Further, it is intended that such equivalents include both currently known equivalents as well as equivalents developed in the future, i.e., any elements developed that perform the same function, regardless of structure.
Thus, for example, it will be appreciated by those skilled in the art that the block diagrams presented herein represent conceptual views of illustrative circuitry embodying the present principles. Similarly, it will be appreciated that any flow charts, flow diagrams, state transition diagrams, pseudocode, and the like represent various processes which may be substantially represented in computer readable media and so executed by a computer or processor, whether or not such computer or processor is explicitly shown.
The functions of the various elements shown in the figures may be provided through the use of dedicated hardware as well as hardware capable of executing software in association with appropriate software. When provided by a processor, the functions may be provided by a single dedicated processor, by a single shared processor, or by a plurality of individual processors, some of which may be shared. Moreover, explicit use of the term "processor" or "controller" should not be construed to refer exclusively to hardware capable of executing software, and may implicitly include, without limitation, digital signal processor ("DSP") hardware, read-only memory ("ROM") for storing software, random access memory ("RAM"), and non-volatile storage.
Other hardware, conventional and/or custom, may also be included. Similarly, any switches shown in the figures are conceptual only. Their function may be carried out through the operation of program logic, through dedicated logic, through the interaction of program control and dedicated logic, or even manually, the particular technique being selectable by the implementer as more specifically understood from the context.
In the claims hereof, any element expressed as a means for performing a specified function is intended to encompass any way of performing that function including, for example, a) a combination of circuit elements that performs that function or b) software in any form, including, therefore, firmware, microcode or the like, combined with appropriate circuitry for executing that software to perform the function. The present principles as defined by such claims reside in the fact that the functionalities provided by the various recited means are combined and brought together in the manner which the claims call for. It is thus regarded that any means that can provide those functionalities are equivalent to those shown herein.
Reference in the specification to "one embodiment" or "an embodiment" of the present principles, as well as other variations thereof, means that a particular feature, structure, characteristic, or the like described in connection with the embodiment is included in at least one embodiment of the present principles. Thus, the appearances of the phrase "in one embodiment" or "in an embodiment" in various places throughout this specification, and other variations thereof, are not necessarily all referring to the same embodiment.
It is to be understood that the use of any of the following "/", "and/or", and "at least one of", for example in the cases of "A/B", "A and/or B", and "at least one of A and B", is intended to encompass the selection of only the first listed option (A), or only the second listed option (B), or both options (A and B). As a further example, in the cases of "A, B, and/or C" and "at least one of A, B, and C", such phrases are intended to encompass the selection of only the first listed option (A), or only the second listed option (B), or only the third listed option (C), or only the first and second listed options (A and B), or only the first and third listed options (A and C), or only the second and third listed options (B and C), or all three options (A and B and C). This may be extended, as is readily apparent to one of ordinary skill in this and related arts, for as many items as are listed.
Further, as used herein, the words "picture" and "image" are used interchangeably and refer to either a still image or a picture from a video sequence. As is known, a picture may be a frame or a field.
Also, as used herein, the phrase "coding unit" (CU) refers to a basic unit having a square shape. Although it plays a role similar to that of the macroblocks and sub-macroblocks in the MPEG-4 AVC Standard, the main difference lies in the fact that the coding unit may have various sizes, with no distinction corresponding to its size. All processing except frame-based loop filtering, including intra/inter prediction, transform, quantization, and entropy coding, is performed on a coding unit basis.
Also, as used herein, the phrase "prediction unit" (PU) refers to a basic unit of a prediction mode. It should be noted that a PU is defined only for the last depth of a CU and its size is limited to the size of the CU. All information related to prediction is signaled on a PU basis.
Also, as used herein, the phrase "transform unit" (TU) refers to a basic unit of the transform. It should be noted that the size of the transform unit may be larger than the size of the prediction unit, which differs from previous video standards, but the transform unit cannot exceed the size of the coding unit. However, the transform unit size is not arbitrary: once the prediction unit structure has been defined for the coding unit, only two transform unit partitions are possible. As a result, the size of the transform unit in the coding unit is determined by transform_unit_size_flag. If transform_unit_size_flag is set to 0, the size of the transform unit is the same as the size of the coding unit to which it belongs. Otherwise, the transform unit size is set to N × N or N/2 × N/2 according to the prediction unit partition.
In addition, as used herein, the phrase "skip mode" refers to a prediction mode in which motion information is inferred from a motion vector predictor and neither motion information nor texture information is sent.
Further, it is to be understood that, for the sake of simplicity and clarity of the description, we start with the basis defined by the second prior art method and define new variables, principles, syntax, and so forth as modifications to the second prior art method. It will be apparent, however, to one skilled in the art that the principles and concepts disclosed and described herein in connection with the present invention may be applied to any new or modified standard or proprietary system, and are in no way intended to be merely a modification of the second prior art approach, of the first prior art method, of the MPEG-4 AVC Standard, or of any other method or standard.
Turning to fig. 3, an exemplary video encoder to which the present principles may be applied is indicated generally by the reference numeral 300. The video encoder 300 includes a frame ordering buffer 310 having an output in signal communication with a non-inverting input of a combiner 385. An output of the combiner 385 is connected in signal communication with a first input of a transformer and quantizer (having a plurality of predictors) 325. An output of the transformer and quantizer (with multiple predictors) 325 is connected in signal communication with a first input of an entropy coder 345 and a first input of an inverse transformer and inverse quantizer (with multiple predictors) 350. An output of the entropy coder 345 is connected in signal communication with a first non-inverting input of a combiner 390. An output of the combiner 390 is connected in signal communication with a first input of an output buffer 335.
A first output of the encoder controller 305 is connected in signal communication with a second input of the frame ordering buffer 310, a second input of the inverse transformer and inverse quantizer (with multiple predictors) 350, an input of a picture type decision module 315, a first input of a macroblock type (MB-type) decision module 320, a second input of an intra prediction module 360, a second input of a deblocking filter 365, a first input of a motion compensator 370, a first input of a motion estimator 375, and a second input of a reference picture buffer 380.
A second output of the encoder controller 305 is connected in signal communication with a first input of a Supplemental Enhancement Information (SEI) inserter 330, a second input of the transformer and quantizer (with multiple predictors) 325, a second input of the entropy coder 345, a second input of the output buffer 335, and an input of the Sequence Parameter Set (SPS) and Picture Parameter Set (PPS) inserter 340.
An output of the SEI inserter 330 is connected in signal communication with a second non-inverting input of the combiner 390.
A first output of the picture-type decision module 315 is connected in signal communication with a third input of the frame ordering buffer 310. A second output of the picture-type decision module 315 is connected in signal communication with a second input of a macroblock-type decision module 320.
An output of the Sequence Parameter Set (SPS) and Picture Parameter Set (PPS) inserter 340 is connected in signal communication with a third, non-inverting input of a combiner 390.
An output of the inverse quantizer and inverse transformer (with multiple predictors) 350 is connected in signal communication with a first non-inverting input of a combiner 319. An output of the combiner 319 is connected in signal communication with a first input of the intra prediction module 360 and a first input of the deblocking filter 365. An output of the deblocking filter 365 is connected in signal communication with a first input of a reference picture buffer 380. An output of the reference picture buffer 380 is connected in signal communication with a second input of the motion estimator 375 and a third input of the motion compensator 370. A first output of the motion estimator 375 is connected in signal communication with a second input of the motion compensator 370. A second output of the motion estimator 375 is connected in signal communication with a third input of the entropy coder 345.
An output of the motion compensator 370 is connected in signal communication with a first input of a switch 397. An output of the intra prediction module 360 is connected in signal communication with a second input of the switch 397. An output of the macroblock-type decision module 320 is connected in signal communication with a third input of the switch 397. A third input of the switch 397 determines whether the "data" input of the switch (as compared to the control input, i.e., the third input) is provided by the motion compensator 370 or the intra prediction module 360. An output of the switch 397 is connected in signal communication with a second non-inverting input of the combiner 319 and an inverting input of the combiner 385.
A first input of the frame ordering buffer 310 and an input of the encoder controller 305 are available as inputs of the encoder 300, for receiving an input picture. Further, a second input of the Supplemental Enhancement Information (SEI) inserter 330 is available as an input of the encoder 300, for receiving metadata. An output of the output buffer 335 is available as an output of the encoder 300, for outputting a bitstream.
Turning to fig. 4, an exemplary video decoder to which the present principles may be applied is indicated generally by the reference numeral 400. The video decoder 400 includes an input buffer 410 having an output connected in signal communication with a first input of an entropy decoder 445. A first output of the entropy decoder 445 is connected in signal communication with a first input of an inverse transformer and inverse quantizer (with multiple predictors) 450. An output of the inverse transformer and inverse quantizer (with multiple predictors) 450 is connected in signal communication with a second non-inverting input of a combiner 425. An output of the combiner 425 is connected in signal communication with a second input of the deblocking filter 465 and a first input of an intra prediction module 460. A second output of the deblocking filter 465 is connected in signal communication with a first input of a reference picture buffer 480. An output of the reference picture buffer 480 is connected in signal communication with a second input of the motion compensator 470.
A second output of the entropy decoder 445 is connected in signal communication with a third input of the motion compensator 470, a first input of the deblocking filter 465, and a third input of the intra predictor 460. A third output of the entropy decoder 445 is connected in signal communication with an input of the decoder controller 405. A first output of the decoder controller 405 is connected in signal communication with a second input of the entropy decoder 445. A second output of the decoder controller 405 is connected in signal communication with a second input of an inverse transformer and inverse quantizer (with multiple predictors) 450. A third output of the decoder controller 405 is connected in signal communication with a third input of the deblocking filter 465. A fourth output of the decoder controller 405 is connected in signal communication with a second input of the intra prediction module 460, a first input of a motion compensator 470, and a second input of a reference picture buffer 480.
An output of the motion compensator 470 is connected in signal communication with a first input of a switch 497. An output of the intra prediction module 460 is connected in signal communication with a second input of the switch 497. An output of the switch 497 is connected in signal communication with a first non-inverting input of the combiner 425.
An input of the input buffer 410 is available as an input of the decoder 400, for receiving an input bitstream. A first output of the deblocking filter 465 is available as an output of the decoder 400, for outputting an output picture.
As noted above, the present principles are directed to methods and apparatus for determining a quantization parameter predictor from a plurality of neighboring quantization parameters.
With respect to the aforementioned first and second prior art approaches, it is noted that adjustment of the quantization parameter at the block level is also supported, where a block may be a macroblock, a large block (as in the first prior art approach), or a coding unit (as in the second prior art approach). The quantization parameter values are differentially encoded. In the MPEG-4 AVC Standard and the first prior art approach, the quantization parameter of the previous block in coding order in the current slice is used as the predictor. In the second prior art approach, the slice quantization parameter is used as the predictor.
In accordance with the present principles, a method and apparatus are provided for determining a quantization parameter predictor using multiple quantization parameters of adjacent coding blocks. The quantization parameter predictor calculation is defined by rules known to both the encoder and decoder. One benefit of this scheme over existing schemes is that the overhead required to signal the quantization parameters to the decoder is reduced.
The quantization parameter is typically adjusted to meet a target bit rate or to adapt to the content in order to improve visual quality. This results in QP variation between coding units. For the purposes of describing the present principles, the term "coding unit(s)" is meant to include a broad range of image partitions and regions, including, but not limited to, blocks, macroblocks, super-blocks, super-macroblocks, sub-blocks, image partitions, image geometric partitions, image regions, prediction units, and transform units. To reduce the overhead cost of signaling QP differences, methods and apparatus to improve QP predictor performance are disclosed and described. The QP predictor is formed from multiple QPs of previously encoded/decoded neighboring coding units, using the same method at both the encoder and the decoder, thereby reducing the required signaling overhead.
Turning to fig. 5, an exemplary quantization parameter encoding process in a video encoder is indicated generally by the reference numeral 500. The method 500 includes a start block 505 that passes control to a loop limit block 510. The loop limit block 510 begins a loop using a variable i ranging from 1 to the number (#) of coding units, and passes control to a function block 515. The function block 515 forms a QP predictor (QP_PRED) using multiple QPs of previously encoded coding units, and passes control to a function block 520. The function block 520 sets the QP for each coding unit to QP_CU, and passes control to a function block 525. The function block 525 encodes delta_QP = QP_CU - QP_PRED, and passes control to a function block 530. The function block 530 encodes coding unit i, and passes control to a loop limit block 535. The loop limit block 535 ends the loop over the coding units, and passes control to an end block 599. With respect to function block 515, the same coding units used for motion vector prediction may be used to form the predictor QP_PRED. For example, the coding units used to form the median motion vector in the MPEG-4 AVC Standard, or the coding units used for motion vector competition, may be used.
Turning to fig. 6, an exemplary quantization parameter decoding process in a video decoder is indicated generally by the reference numeral 600. The method 600 includes a start block 605 that passes control to a loop limit block 610. The loop limit block 610 begins a loop using a variable i ranging from 1 to the number (#) of coding units, and passes control to a function block 615. The function block 615 forms a QP predictor (QP_PRED) using multiple QPs of previously decoded coding units, and passes control to a function block 620. The function block 620 decodes delta_QP, and passes control to a function block 625. The function block 625 sets the QP for each coding unit to QP_CU = delta_QP + QP_PRED, and passes control to a function block 630. The function block 630 decodes coding unit i, and passes control to a loop limit block 635. The loop limit block 635 ends the loop over the coding units, and passes control to an end block 699. With respect to function block 615, the same coding units used for motion vector prediction may be used to form the predictor QP_PRED. For example, the coding units used to form the median motion vector in the MPEG-4 AVC Standard, or the coding units used for motion vector competition, may be used.
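The encode and decode loops of figs. 5 and 6 can be sketched as a pair of inverse procedures. The following is a minimal illustrative sketch, not the patented method itself: it assumes a simple raster-order neighborhood and a median predictor rule (the present principles allow any rule shared by encoder and decoder), and all function names are hypothetical.

```python
# Hypothetical sketch of the delta-QP encode/decode loops of figs. 5 and 6.
# The predictor rule (median of up to three previously processed units,
# with the slice QP as fallback) is an assumption for illustration.
from statistics import median

def qp_predictor(neighbor_qps, slice_qp):
    # Fall back to the slice QP when no neighbor QPs are available.
    return int(median(neighbor_qps)) if neighbor_qps else slice_qp

def encode_qps(unit_qps, neighbors_of, slice_qp):
    """Return the delta_QP stream for a list of per-unit QPs."""
    deltas = []
    for i, qp_cu in enumerate(unit_qps):
        qp_pred = qp_predictor(neighbors_of(i, unit_qps), slice_qp)
        deltas.append(qp_cu - qp_pred)      # delta_QP = QP_CU - QP_PRED
    return deltas

def decode_qps(deltas, neighbors_of, slice_qp):
    """Reconstruct the per-unit QPs from the delta_QP stream."""
    qps = []
    for i, delta in enumerate(deltas):
        qp_pred = qp_predictor(neighbors_of(i, qps), slice_qp)
        qps.append(delta + qp_pred)         # QP_CU = delta_QP + QP_PRED
    return qps

# Simple raster-order neighborhood: the QPs of up to three previous units.
def neighbors_of(i, qps):
    return qps[max(0, i - 3):i]

qps = [30, 32, 31, 28, 29]
deltas = encode_qps(qps, neighbors_of, slice_qp=30)
assert decode_qps(deltas, neighbors_of, slice_qp=30) == qps
```

Because both loops derive QP_PRED from the same previously processed units by the same rule, the decoder stays synchronized with the encoder while only the (typically small) delta_QP values are transmitted.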
Derivation of the QP predictor (QP_PRED)
Hereinafter, a scheme of forming a QP predictor is disclosed and described. The same approach is used at both the encoder and decoder for synchronization.
Providing high perceptual quality in a region of interest has a significant impact on overall perceptual quality. Thus, the general guideline in QP adjustment is to assign a lower QP to regions of interest to improve perceptual quality, and a larger QP to other regions to reduce the number of bits. Because picture content is largely continuous, the QPs of neighboring coding units are typically correlated. The prior art exploits the correlation between the current QP and the QP of the previously coded block. Since the QP may also be correlated with the QPs of other neighboring blocks, the QP predictor can be improved by considering more QPs. Turning to fig. 7, exemplary neighboring coding units are indicated generally by the reference numeral 700. The neighboring coding units 700 include the following: a left neighboring coding unit, indicated by A; a top neighboring coding unit, indicated by B; a top-right neighboring coding unit, indicated by C; and a top-left neighboring coding unit, indicated by D. The neighboring coding units A through D are used to form a QP predictor for the current coding unit, indicated by E. In one example, A, B, C, and D are defined the same as the blocks used for MPEG-4 AVC Standard motion vector prediction. More QPs from other neighboring coding units may also be included to obtain the QP predictor.
The QP predictor is formed according to rules known to both the encoder and the decoder. Using the QPs of coding units A, B, and C, several exemplary rules are provided as follows:
Rule 1: QP_PRED = median(QP_A, QP_B, QP_C);
Rule 2: QP_PRED = min(QP_A, QP_B, QP_C);
Rule 3: QP_PRED = max(QP_A, QP_B, QP_C);
Rule 4: QP_PRED = mean(QP_A, QP_B, QP_C), or QP_PRED = mean(QP_A, QP_B);
If not all of the coding units (A, B, C) are available, their QPs can be replaced with SliceQP_Y, or only the available QPs can be used to form the predictor. For example, when coding unit A is not available, Rule 2 becomes QP_PRED = min(QP_B, QP_C). In another example, when not all coding units are available, the missing QP may be replaced with the QP of another block, e.g., block C is replaced with block D.
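The four exemplary rules and the SliceQP_Y fallback can be sketched as follows. This is an illustrative sketch: the function name and the use of None to mark an unavailable neighbor are assumptions, and only the slice-QP substitution fallback (not the use-only-available-QPs variant) is shown.

```python
# Sketch of exemplary predictor Rules 1-4 with the SliceQP_Y fallback.
# A neighbor QP passed as None represents an unavailable coding unit.
from statistics import mean, median

def qp_pred(rule, qp_a, qp_b, qp_c, slice_qp):
    # Substitute the slice QP for each missing neighbor QP (one of the
    # two fallbacks described in the text).
    qps = [qp if qp is not None else slice_qp for qp in (qp_a, qp_b, qp_c)]
    if rule == 1:
        return int(median(qps))     # Rule 1: median
    if rule == 2:
        return min(qps)             # Rule 2: minimum
    if rule == 3:
        return max(qps)             # Rule 3: maximum
    if rule == 4:
        return round(mean(qps))     # Rule 4: mean
    raise ValueError("unknown rule")

assert qp_pred(1, 30, 34, 32, slice_qp=28) == 32    # median of A, B, C
assert qp_pred(2, 30, 34, None, slice_qp=28) == 28  # C missing -> SliceQP_Y
```

Since the rule is fixed and known to both sides, the decoder reproduces the same QP_PRED without any signaled index.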
Motion vector prediction in the MPEG-4 AVC Standard shares a similar basic principle, using the median of the neighboring motion vectors to generate a motion vector predictor. The difference between the motion vector and the motion vector predictor is encoded and sent in the bitstream. To unify the prediction processes of motion vectors and QPs, one embodiment uses the same neighboring coding units to predict both the motion vector and the QP when coding blocks in INTER mode.
VCEG "key technology area" (KTA) software (KTA software version KTA 2.6) has been provided as a common platform to integrate new advances in video coding after the MPEG-4AVC standard has been completed. The proposal of using motion vector competition is adopted in KTA. In the motion vector competition scheme, the coding block has a set of motion vector predictor candidates that include motion vectors of spatially or temporally neighboring blocks. The best motion vector predictor is selected from the candidate set based on rate-distortion optimization. If the set has more than one candidate, the index of the motion vector predictor in the set is explicitly transmitted to the decoder. To make the prediction process of both motion vectors and QPs uniform, one embodiment involves using the same neighboring coding units to predict both motion vectors and QPs when coding blocks in INTER mode. Since the index of the motion vector predictor is already sent for motion vector competition, no additional overhead is needed for the QP predictor.
Alternative embodiment 1: QP adjustment at the prediction unit
The coding unit may be as large as 128x128, which translates to very few coding units in a picture. To accurately meet the target bit rate, the QP variation between coding units may need to be large. One solution to smooth the QP variation is to apply the QP adjustment at the prediction unit instead of the coding unit. The QP difference needs to be sent only when the prediction unit is not in skip mode.
Turning to fig. 8, another exemplary quantization parameter encoding process in a video encoder is indicated generally by the reference numeral 800. It is to be understood that method 800 involves QP adjustment at a prediction unit. The method 800 includes a start block 805 that passes control to a loop limit block 810. The loop limit block 810 begins a loop using a variable i ranging from 1 to the number (#) of prediction units, and passes control to a function block 815. The function block 815 forms a QP predictor (QP_PRED) using multiple QPs of previously encoded prediction units, and passes control to a function block 820. The function block 820 sets the QP for each prediction unit to QP_PU, and passes control to a function block 825. The function block 825 encodes delta_QP = QP_PU - QP_PRED, and passes control to a function block 830. The function block 830 encodes prediction unit i if prediction unit i is not in skip mode, and passes control to a loop limit block 835. The loop limit block 835 ends the loop over the prediction units, and passes control to an end block 899. With respect to function block 815, the same prediction units used for motion vector prediction may be used to form the predictor QP_PRED. For example, the prediction units used to form the median motion vector in the MPEG-4 AVC Standard, or the prediction units used for motion vector competition, may be used.
Turning to fig. 9, another exemplary quantization parameter decoding process in a video decoder is indicated generally by the reference numeral 900. It is to be understood that method 900 involves QP adjustment at a prediction unit. The method 900 includes a start block 905 that passes control to a loop limit block 910. The loop limit block 910 begins a loop using a variable i ranging from 1 to the number (#) of prediction units, and passes control to a function block 915. The function block 915 forms a QP predictor (QP_PRED) using multiple QPs of previously decoded prediction units, and passes control to a function block 920. The function block 920 decodes delta_QP, and passes control to a function block 925. The function block 925 sets the QP for each prediction unit to QP_PU = delta_QP + QP_PRED, and passes control to a function block 930. The function block 930 decodes prediction unit i if prediction unit i is not in skip mode, and passes control to a loop limit block 935. The loop limit block 935 ends the loop over the prediction units, and passes control to an end block 999. With respect to function block 915, the same prediction units used for motion vector prediction may be used to form the predictor QP_PRED. For example, the prediction units used to form the median motion vector in the MPEG-4 AVC Standard, or the prediction units used for motion vector competition, may be used.
Alternative embodiment 2: QP adjustment at the transform unit
Similar to alternative embodiment 1, the QP adjustment may be applied at the transform unit. The QP difference needs to be sent only when there are non-zero transform coefficients in the transform unit.
Turning to fig. 10, yet another exemplary quantization parameter encoding process in a video encoder is indicated generally by the reference numeral 1000. It is to be understood that method 1000 involves QP adjustment at a transform unit. The method 1000 includes a start block 1005 that passes control to a loop limit block 1010. The loop limit block 1010 begins a loop using a variable i ranging from 1 to the number (#) of transform units, and passes control to a function block 1015. The function block 1015 forms a QP predictor (QP_PRED) using multiple QPs of previously encoded transform units, and passes control to a function block 1020. The function block 1020 sets the QP for each transform unit to QP_TU, and passes control to a function block 1025. The function block 1025 encodes delta_QP = QP_TU - QP_PRED, and passes control to a function block 1030. The function block 1030 encodes transform unit i if it has non-zero coefficients, and passes control to a loop limit block 1035. The loop limit block 1035 ends the loop over the transform units, and passes control to an end block 1099. With respect to function block 1015, the same transform units used for motion vector prediction may be used to form the predictor QP_PRED. For example, the transform units used to form the median motion vector in the MPEG-4 AVC Standard, or the transform units used for motion vector competition, may be used.
Turning to fig. 11, yet another exemplary quantization parameter decoding process in a video decoder is indicated generally by the reference numeral 1100. It is to be understood that method 1100 involves QP adjustment at a transform unit. The method 1100 includes a start block 1105 that passes control to a loop limit block 1110. The loop limit block 1110 begins a loop using a variable i ranging from 1 to the number (#) of transform units, and passes control to a function block 1115. The function block 1115 forms a QP predictor (QP_PRED) using multiple QPs of previously decoded transform units, and passes control to a function block 1120. The function block 1120 decodes delta_QP, and passes control to a function block 1125. The function block 1125 sets the QP for each transform unit to QP_TU = delta_QP + QP_PRED, and passes control to a function block 1130. The function block 1130 decodes transform unit i if it has non-zero coefficients, and passes control to a loop limit block 1135. The loop limit block 1135 ends the loop over the transform units, and passes control to an end block 1199. With respect to function block 1115, the same transform units used for motion vector prediction may be used to form the predictor QP_PRED. For example, the transform units used to form the median motion vector in the MPEG-4 AVC Standard, or the transform units used for motion vector competition, may be used.
Syntax
The QP adjustment at the transform unit is used as an example to describe how a syntax applying the present principles can be designed. The syntax element TU_delta_QP is used to specify the QP difference between the QP of the current transform unit and the QP predictor. The QP difference may also be specified at a prediction unit or a coding unit. Table 1 shows an exemplary syntax in a transform unit, in accordance with an embodiment of the present principles.
TABLE 1
The semantics of the syntax element TU_delta_QP shown in Table 1 are as follows:
TU_delta_QP specifies the value of the QP difference between the QP of the current transform unit (QP_TU) and the QP predictor (QP_PRED). The QP of the transform unit is derived as QP_TU = QP_PRED + TU_delta_QP. TU_delta_QP is needed only when there are non-zero coefficients in the transform unit (i.e., coded_block_flag is not zero).
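The decoder-side derivation of QP_TU can be sketched as follows, assuming (as stated above) that TU_delta_QP is present only when the transform unit has non-zero coefficients; the function name is hypothetical.

```python
# Minimal sketch of the decoder-side semantics of TU_delta_QP: the
# transform-unit QP is reconstructed only when the unit carries
# non-zero coefficients (coded_block_flag != 0); otherwise no delta
# is signaled and no QP is derived for the unit.
def derive_tu_qp(qp_pred, coded_block_flag, tu_delta_qp=None):
    if coded_block_flag == 0:
        return None                   # nothing coded, no delta sent
    return qp_pred + tu_delta_qp      # QP_TU = QP_PRED + TU_delta_QP

assert derive_tu_qp(30, coded_block_flag=1, tu_delta_qp=-2) == 28
assert derive_tu_qp(30, coded_block_flag=0) is None
```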
A description will now be given of some of the many attendant advantages/features of the present invention, some of which have been mentioned above. For example, one advantage/feature is an apparatus having an encoder that encodes image data of at least a portion of a picture using a quantization parameter predictor for a current quantization parameter to be applied to the image data, the quantization parameter predictor being determined using a plurality of quantization parameters from previously encoded neighboring portions, wherein a difference between the current quantization parameter and the quantization parameter predictor is encoded for signaling to a corresponding decoder.
Another advantage/feature is the apparatus having the encoder as described above, wherein the quantization parameter predictor is implicitly derived based on rules known to both the encoder and the decoder.
Another advantage/feature is the apparatus having the encoder as described above, wherein the quantization parameter predictor is implicitly derived based on a rule known to both the encoder and the decoder as described above, wherein the rule determines or selects the quantization parameter predictor in response to at least one of: the quantization parameter having a minimum value from among the plurality of quantization parameters, the quantization parameter having a maximum value from among the plurality of quantization parameters, a quantization parameter calculated as a median of at least some of the plurality of quantization parameters, and a quantization parameter calculated as an average of at least some of the plurality of quantization parameters.
Yet another advantage/feature is the apparatus having the encoder as described above, wherein the quantization parameter predictor is selected from one or more quantization parameters corresponding to a same portion of the picture used for motion vector prediction, the one or more quantization parameters being among the plurality of quantization parameters.
Also, another advantage/feature is the apparatus having the encoder as described above, wherein the quantization parameter predictor is selected from one or more quantization parameters corresponding to a same portion of the picture used for motion vector prediction, the one or more quantization parameters being among the plurality of quantization parameters as described above, wherein the motion vector predictor is determined using motion vector competition.
Additionally, another advantage/feature is the apparatus having the encoder as described above, wherein the image data is one of a coding unit, a prediction unit, and a transform unit.
Also, another advantage/feature is the apparatus having the encoder as described above, wherein the image data is one of a coding unit, a prediction unit, and a transform unit as described above, wherein the image data is the prediction unit, and only a difference between a current quantization parameter and a quantization parameter prediction value is encoded for signaling to a corresponding decoder when the prediction unit is in a non-skip mode.
Additionally, another advantage/feature is the apparatus having the encoder as described above, wherein the image data is a transform unit and only a difference between a current quantization parameter and a quantization parameter prediction value is encoded for signaling to a corresponding decoder when the transform unit includes a non-zero coefficient.
These and other features and advantages of the present principles may be readily ascertained by one of ordinary skill in the pertinent art based on the teachings herein. It is to be understood that the teachings of the present principles may be implemented in various forms of hardware, software, firmware, special purpose processors, or combinations thereof.
Most preferably, the teachings of the present principles are implemented as a combination of hardware and software. Moreover, the software may be implemented as an application program tangibly embodied on a program storage unit. The application program may be uploaded to, and executed by, a machine comprising any suitable architecture. The machine is preferably implemented on a computer platform having hardware such as one or more central processing units ("CPU"), a random access memory ("RAM"), and input/output ("I/O") interfaces. The computer platform may also include an operating system and microinstruction code. The various processes and functions described herein may either be part of the microinstruction code or part of the application program, or any combination thereof, which may be executed by a CPU. In addition, various other peripheral units may be connected to the computer platform such as an additional data storage unit and a printing unit.
It is to be further understood that, because some of the constituent system components and methods depicted in the accompanying drawings are preferably implemented in software, the actual connections between the system components or the process function blocks may differ depending upon the manner in which the present principles are programmed. Given the teachings herein, one of ordinary skill in the related art will be able to contemplate these and similar implementations or configurations of the present principles.
Although the illustrative embodiments have been described herein with reference to the accompanying drawings, it is to be understood that the present principles are not limited to those precise embodiments, and that various changes and modifications may be effected therein by one of ordinary skill in the pertinent art without departing from the scope or spirit of the present principles. All such changes and modifications are intended to be included within the scope of the present principles as set forth in the appended claims.
Claims (32)
1. An encoding apparatus comprising:
an encoder (300) for encoding image data of at least a portion of a picture using a quantization parameter predictor for a current quantization parameter to be applied to the image data, the quantization parameter predictor being determined using a plurality of quantization parameters from previously encoded neighboring portions,
the encoder signals the difference between the current quantization parameter and the quantization parameter predictor to the corresponding decoder, and
the quantization parameter predictor is an average of the quantization parameters of the previously encoded neighboring portion on the left side and the previously encoded neighboring portion on the upper side of the portion currently being encoded, when both of those neighboring portions are available, and
the quantization parameter predictor is based on a quantization parameter of a current slice if at least one of the neighboring portions is unavailable.
2. The apparatus of claim 1, wherein the quantization parameter predictor is implicitly derived based on rules known to both the encoder and the decoder.
3. The apparatus of claim 2, wherein the rule determines or selects the quantization parameter predictor in response to at least one of: the quantization parameter having a minimum value from among the plurality of quantization parameters, the quantization parameter having a maximum value from among the plurality of quantization parameters, a quantization parameter calculated as a median of at least some of the plurality of quantization parameters, and a quantization parameter calculated as an average of at least some of the plurality of quantization parameters.
4. The apparatus of claim 1, wherein the quantization parameter predictor is selected from one or more quantization parameters corresponding to a same portion of the picture for motion vector prediction, the one or more quantization parameters being among the plurality of quantization parameters.
5. The apparatus of claim 4, wherein the motion vector predictor is determined using motion vector competition.
6. The apparatus of claim 1, wherein the image data is one of a coding unit, a prediction unit, and a transform unit.
7. The apparatus of claim 6, wherein the image data is a prediction unit, and only a difference between a current quantization parameter and a quantization parameter prediction value is encoded for signaling to a corresponding decoder when the prediction unit is in a non-skip mode.
8. The apparatus of claim 1, wherein the image data is a transform unit, and when the transform unit includes non-zero coefficients, only a difference between a current quantization parameter and a quantization parameter predictor is encoded for signaling to a corresponding decoder.
9. A method in a video encoder, comprising:
encoding (530, 830, 1030) image data of at least a portion of a picture using a quantization parameter predictor for a current quantization parameter to be applied to the image data, the quantization parameter predictor being determined using a plurality of quantization parameters from previously encoded neighboring portions (515, 815, 1015);
encoding (525, 825, 1025) a difference between the current quantization parameter and the quantization parameter predictor, signaling the difference to a corresponding decoder,
determining the quantization parameter predictor by calculating an average of the quantization parameters of previously coded neighboring parts on the left side and previously coded neighboring parts on the upper side of the part currently being coded, when the previously coded neighboring parts on the left side and the previously coded neighboring parts on the upper side of the part are available, and
the quantization parameter predictor is based on a quantization parameter of a current slice if at least one of the neighboring portions is unavailable.
10. The method of claim 9, wherein the quantization parameter predictor is implicitly derived based on rules known to both the encoder and the decoder.
11. The method of claim 10, wherein the rule determines or selects the quantization parameter predictor in response to at least one of: the quantization parameter having a minimum value from among the plurality of quantization parameters, the quantization parameter having a maximum value from among the plurality of quantization parameters, a quantization parameter calculated as a median of at least some of the plurality of quantization parameters, and a quantization parameter calculated as an average of at least some of the plurality of quantization parameters.
12. The method of claim 9, wherein the quantization parameter predictor is selected from one or more quantization parameters corresponding to a same portion of the picture used for motion vector prediction, the one or more quantization parameters being among the plurality of quantization parameters (515, 815, 1015).
13. The method of claim 12, wherein the motion vector predictor is determined using motion vector competition (515, 815, 1015).
14. The method of claim 9, wherein the image data is one of a coding unit (500), a prediction unit (800), and a transform unit (1000).
15. The method of claim 14, wherein the image data is a prediction unit, and only a difference between a current quantization parameter and a quantization parameter prediction value is encoded for signaling to a corresponding decoder (830) when the prediction unit is in the non-skip mode.
16. The method of claim 9, wherein the image data is a transform unit, and when the transform unit includes non-zero coefficients, only a difference between a current quantization parameter and a quantization parameter prediction value is encoded for signaling to a corresponding decoder (1030).
17. A decoding apparatus, comprising:
a decoder (400) for decoding image data of at least a portion of a picture using a quantization parameter predictor for a current quantization parameter to be applied to the image data, the quantization parameter predictor being determined using a plurality of quantization parameters from previously encoded neighboring portions,
wherein decoding a difference between a current quantization parameter and a quantization parameter predictor is used to decode the image data, an
Wherein the quantization parameter predictor is determined by finding an average of quantization parameters of previously coded neighboring parts on a left side and previously coded neighboring parts on an upper side of a part currently being coded, when the previously coded neighboring parts on the left side and the previously coded neighboring parts on the upper side are available, and
the quantization parameter predictor is based on a quantization parameter of a current slice if at least one of the neighboring portions is unavailable.
18. The apparatus of claim 17, wherein a quantization parameter predictor is implicitly derived based on rules known to both a decoder and a corresponding encoder.
19. The apparatus of claim 18, wherein the rule determines or selects the quantization parameter predictor in response to at least one of: the quantization parameter having a minimum value from among the plurality of quantization parameters, the quantization parameter having a maximum value from among the plurality of quantization parameters, a quantization parameter calculated as a median of at least some of the plurality of quantization parameters, and a quantization parameter calculated as an average of at least some of the plurality of quantization parameters.
20. The apparatus of claim 17, wherein the quantization parameter predictor is selected from one or more quantization parameters corresponding to a same portion of the picture for motion vector prediction, the one or more quantization parameters being among the plurality of quantization parameters.
21. The apparatus of claim 20, wherein the motion vector predictor is determined using motion vector competition.
22. The apparatus of claim 17, wherein the image data is one of a coding unit, a prediction unit, and a transform unit.
23. The apparatus of claim 22, wherein the image data is a prediction unit, and only a difference between the current quantization parameter and the quantization parameter prediction value is decoded when the prediction unit is in the non-skip mode.
24. The apparatus of claim 17, wherein the image data is a transform unit, and when the transform unit includes a non-zero coefficient, only a difference between the current quantization parameter and the quantization parameter prediction value is decoded.
25. A method in a video decoder, comprising:
decoding (630, 930, 1130) image data of at least a portion of a picture using a quantization parameter predictor for a current quantization parameter to be applied to the image data, the quantization parameter predictor being determined using a plurality of quantization parameters from previously encoded neighboring portions (615, 915, 1115); and
decoding (620, 920, 1120) a difference between the current quantization parameter and the quantization parameter predictor for use in decoding the image data; and
wherein the quantization parameter predictor is determined by averaging the quantization parameters of the previously coded neighboring portion on a left side and the previously coded neighboring portion on an upper side of the portion currently being decoded, when both the left and upper previously coded neighboring portions are available, and
the quantization parameter predictor is based on a quantization parameter of a current slice if at least one of the neighboring portions is unavailable.
26. The method of claim 25, wherein a quantization parameter predictor is implicitly derived based on rules known to both a decoder and a corresponding encoder.
27. The method of claim 26, wherein the rule determines or selects the quantization parameter predictor in response to at least one of: the quantization parameter having a minimum value among the plurality of quantization parameters, the quantization parameter having a maximum value among the plurality of quantization parameters, a quantization parameter calculated as a median of at least some of the plurality of quantization parameters, and a quantization parameter calculated as an average of at least some of the plurality of quantization parameters.
28. The method of claim 25, wherein the quantization parameter predictor is selected from one or more quantization parameters corresponding to a same portion of the picture for motion vector prediction, the one or more quantization parameters being among the plurality of quantization parameters (615, 915, 1115).
29. The method of claim 28, wherein the motion vector predictor is determined using motion vector competition (615, 915, 1115).
30. The method of claim 25, wherein the image data is one of a coding unit (600), a prediction unit (900), and a transform unit (1100).
31. The method of claim 30, wherein the image data is a prediction unit, and only a difference between the current quantization parameter and the quantization parameter predictor is decoded (930) when the prediction unit is in a non-skip mode.
32. The method of claim 25, wherein the image data is a transform unit, and only a difference between the current quantization parameter and the quantization parameter predictor is decoded (1130) when the transform unit includes non-zero coefficients.
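The derivation recited in claims 17 and 25 can be illustrated with a minimal sketch (illustrative only, not part of the claims; the function names and the integer-rounding rule are assumptions):

```python
def qp_predictor(left_qp, above_qp, slice_qp):
    """Derive the quantization parameter predictor as the claims describe:
    average the left and upper neighbors' QPs when both are available,
    otherwise fall back to the current slice's QP."""
    if left_qp is not None and above_qp is not None:
        # Integer average with rounding; the exact rounding convention
        # is an assumption for illustration.
        return (left_qp + above_qp + 1) >> 1
    # At least one neighbor is unavailable: use the slice-level QP.
    return slice_qp


def reconstruct_qp(decoded_delta_qp, left_qp, above_qp, slice_qp):
    """Claim 25: the decoder receives only the difference between the
    current QP and the implicitly derived predictor, then adds it back."""
    return qp_predictor(left_qp, above_qp, slice_qp) + decoded_delta_qp
```

Because the predictor is derived by a rule known to both encoder and decoder (claims 18 and 26), no predictor index needs to be signaled; only the small delta is coded.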
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US61/353,365 | 2010-06-10 | | |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| HK1186609A HK1186609A (en) | 2014-03-14 |
| HK1186609B true HK1186609B (en) | 2018-02-02 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US12526417B2 (en) | | Methods and apparatus for determining quantization parameter predictors from a plurality of neighboring quantization parameters |
| HK1186609B (en) | 2018-02-02 | Methods and apparatus for determining quantization parameter predictors from a plurality of neighboring quantization parameters |
| HK1186609A (en) | 2014-03-14 | Methods and apparatus for determining quantization parameter predictors from a plurality of neighboring quantization parameters |