HK1195981B - Intra pcm (ipcm) and lossless coding mode video deblocking - Google Patents
Description
The present application claims the benefit of U.S. Provisional Application No. 61/549,597, filed October 20, 2011, U.S. Provisional Application No. 61/605,705, filed March 1, 2012, U.S. Provisional Application No. 61/606,277, filed March 2, 2012, U.S. Provisional Application No. 61/624,901, filed April 16, 2012, and U.S. Provisional Application No. 61/641,775, filed May 2, 2012, the entire contents of each of which are incorporated herein by reference.
Technical Field
This disclosure relates to video coding and, more particularly, to coding blocks of video data generated by a video coding process.
Background
Digital video capabilities can be incorporated into a wide range of devices, including digital televisions, digital direct broadcast systems, wireless broadcast systems, Personal Digital Assistants (PDAs), laptop or desktop computers, tablet computers, electronic book readers, digital cameras, digital recording devices, digital media players, video gaming devices, video gaming consoles, cellular or satellite radio telephones, so-called "smart phones," video teleconferencing devices, video streaming devices, and the like. Digital video devices implement video compression techniques, such as those described in the standards defined by MPEG-2, MPEG-4, ITU-T H.263, ITU-T H.264/MPEG-4 Part 10 (Advanced Video Coding (AVC)), the High Efficiency Video Coding (HEVC) standard currently under development, and extensions of these standards. Video devices may transmit, receive, encode, decode, and/or store digital video information more efficiently by implementing these video compression techniques.
Video compression techniques perform spatial (intra-picture) prediction and/or temporal (inter-picture) prediction to reduce or remove redundancy inherent in video sequences. For block-based video coding, a video slice (i.e., a video frame or a portion of a video frame) may be partitioned into video blocks (which may also be referred to as treeblocks), Coding Units (CUs) and/or coding nodes. Video blocks in an intra-coded (I) slice of a picture may be encoded using spatial prediction with respect to reference samples in neighboring blocks in the same picture. Video blocks in an inter-coded (P or B) slice of a picture may use spatial prediction with respect to reference samples in neighboring blocks in the same picture or temporal prediction with respect to reference samples in other reference pictures. A picture may be referred to as a frame and a reference picture may be referred to as a reference frame.
Spatial prediction or temporal prediction results in coding a predictive block for a block. The residual data represents pixel differences between the original block to be coded and the predictive block. An inter-coded block is encoded according to a motion vector that points to a block of reference samples that forms a predictive block and residual data that indicates a difference between the coded block and the predictive block. An intra-coded block is encoded according to an intra-coding mode and residual data. For further compression, the residual data may be transformed from the pixel domain to the transform domain, resulting in residual transform coefficients, which may then be quantized. The quantized transform coefficients, initially arranged in a two-dimensional array, may be scanned in order to generate a one-dimensional vector of transform coefficients. Entropy coding may then be applied to achieve even greater compression.
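The residual relationship described above can be sketched in a few lines. This is an illustrative example only, not the patent's implementation; the function names and sample values are hypothetical.

```python
# Illustrative sketch: residual data is the per-pixel difference between the
# original block to be coded and the predictive block; adding the residual back
# to the prediction recovers the coded block.
def compute_residual(original, predictive):
    """Encoder side: residual block is original minus prediction, element-wise."""
    return [[o - p for o, p in zip(orow, prow)]
            for orow, prow in zip(original, predictive)]

def reconstruct(predictive, residual):
    """Decoder side: prediction plus residual recovers the block."""
    return [[p + r for p, r in zip(prow, rrow)]
            for prow, rrow in zip(predictive, residual)]

original = [[120, 121], [119, 122]]
predicted = [[118, 120], [119, 121]]
res = compute_residual(original, predicted)   # [[2, 1], [0, 1]]
assert reconstruct(predicted, res) == original
```

In practice the residual is then transformed, quantized, scanned, and entropy coded, as the passage above describes.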
Disclosure of Invention
In general, techniques are described for performing deblocking filtering with respect to blocks of video data coded using Intra Pulse Code Modulation (IPCM) coding and/or lossless coding modes. In particular, the techniques of this disclosure may include performing deblocking filtering on one or more blocks of video data that include one or more IPCM coding blocks, lossless coding blocks, and blocks coded using lossy coding techniques or "modes". The techniques described herein may improve the visual quality of one or more of the blocks of video data when coding the blocks, as compared to other techniques.
In particular, the described techniques may improve the visual quality of one or more of the IPCM coding blocks that include reconstructed video data by enabling deblocking filtering for the blocks and performing deblocking filtering in a particular manner. In addition, the techniques may improve the visual quality of one or more of the blocks by disabling deblocking filtering for lossless coding blocks that include the original video data. Moreover, the techniques may also improve the visual quality of one or more of the blocks coded using the lossy coding mode by performing deblocking filtering in a particular manner on blocks (e.g., blocks located adjacent to one or more of the IPCM and the lossless coding blocks). As a result, there may be a relative improvement in visual quality of one or more blocks of video data including blocks coded using IPCM, lossless, and lossy coding modes when using the techniques of this disclosure.
In one embodiment of this disclosure, a method of coding video data comprises: coding a plurality of blocks of video data, wherein at least one block of the plurality of blocks of video data is coded using a coding mode that comprises one of an IPCM coding mode and a lossless coding mode using prediction; assigning a non-zero Quantization Parameter (QP) value to the at least one block coded using the coding mode; and performing deblocking filtering on one or more of the plurality of blocks of video data based on the coding mode used to code the at least one block and the non-zero QP value assigned for the at least one block.
In another example of this disclosure, an apparatus configured to code video data comprises a video coder. In this example, the video coder is configured to: coding a plurality of blocks of video data, wherein the video coder is configured to code at least one block of the plurality of blocks of video data using a coding mode that comprises one of an IPCM coding mode and a lossless coding mode using prediction; assigning a non-zero QP value for the at least one block coded using the coding mode; and performing deblocking filtering on one or more of the plurality of blocks of video data based on the coding mode used to code the at least one block and the non-zero QP value assigned for the at least one block.
In another example of this disclosure, a device configured to code video data comprises: means for coding a plurality of blocks of video data, including means for coding at least one block of the plurality of blocks of video data using a coding mode that includes one of an IPCM coding mode and a lossless coding mode using prediction; means for assigning a non-zero QP value for the at least one block coded using the coding mode; and means for performing deblocking filtering on one or more of the plurality of blocks of video data based on the coding mode used to code the at least one block and the non-zero QP value assigned for the at least one block.
The techniques described in this disclosure may be implemented in hardware, software, firmware, or a combination thereof. If implemented in hardware, the device may be implemented as an integrated circuit, a processor, discrete logic, or any combination thereof. If implemented in software, the software may be executed in one or more processors, such as a microprocessor, an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA), or a Digital Signal Processor (DSP). The software that executes the techniques may be initially stored in a tangible computer-readable medium and loaded and executed in a processor.
Thus, in another example, this disclosure contemplates a computer-readable storage medium storing instructions that, when executed, cause one or more processors to code video data. In this example, the instructions cause the one or more processors to: coding a plurality of blocks of video data, including coding at least one block of the plurality of blocks of video data using a coding mode that includes one of an IPCM coding mode and a lossless coding mode using prediction; assigning a non-zero QP value for the at least one block coded using the coding mode; and performing deblocking filtering on one or more of the plurality of blocks of video data based on the coding mode used to code the at least one block and the non-zero QP value assigned for the at least one block.
The details of one or more examples are set forth in the accompanying drawings and the description below. Other features, objects, and advantages will be apparent from the description and drawings, and from the claims.
Drawings
Fig. 1 is a block diagram illustrating an example of a video encoding and decoding system that may perform techniques for Intra Pulse Code Modulation (IPCM) and lossless coding mode deblocking consistent with the techniques of this disclosure.
Fig. 2 is a block diagram illustrating an example of a video encoder consistent with the techniques of this disclosure, which may perform techniques for IPCM and lossless coding mode deblocking.
Fig. 3 is a block diagram illustrating an example of a video decoder, consistent with the techniques of this disclosure, that may perform techniques for IPCM and lossless coding mode deblocking.
Fig. 4 is a conceptual diagram illustrating an example of deblocking filtering performed on boundaries of two adjacent blocks of video data consistent with the techniques of this disclosure.
Fig. 5 is a conceptual diagram illustrating an example of signaling delta QP values for each of one or more blocks of video data consistent with the techniques of this disclosure.
Fig. 6 is a flow diagram illustrating an example method of calculating a boundary strength value for a deblocking filter consistent with the techniques of this disclosure.
Fig. 7A-7B are conceptual diagrams illustrating examples of IPCM coding mode deblocking consistent with the techniques of this disclosure.
Fig. 8A-8B are conceptual diagrams illustrating examples of lossless coding mode deblocking consistent with the techniques of this disclosure.
Fig. 9-11 are flow diagrams illustrating example methods of IPCM and lossless coding mode deblocking consistent with the techniques of this disclosure.
Detailed Description
In general, techniques are described for performing deblocking filtering with respect to blocks of video data coded using Intra Pulse Code Modulation (IPCM) coding and/or lossless coding modes. In particular, the techniques of this disclosure may include performing deblocking filtering on one or more blocks of video data that include one or more IPCM coding blocks, lossless coding blocks, and blocks coded using so-called "lossy" coding techniques or "modes". The techniques described herein may improve the visual quality of one or more of the blocks of video data when coding the blocks, as compared to other techniques.
As one example, the described techniques may improve the visual quality of one or more IPCM coding blocks that include reconstructed video data by enabling deblocking filtering for the blocks and performing deblocking filtering in a particular manner. For example, the techniques include assigning a non-zero Quantization Parameter (QP) value to an IPCM coding block based on one or more of a signaled QP value including an assigned QP value, a predicted QP value, and a delta QP ("dQP") value representing a difference between the assigned non-zero QP value and the predicted QP value for the IPCM coding block. The techniques further include performing deblocking filtering on the IPCM coding block based on a non-zero QP value assigned for the IPCM coding block.
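The QP relationships named above (assigned QP, predicted QP, and delta QP) can be sketched as follows. The function names are illustrative; the sketch only captures the stated relationship dQP = assigned QP − predicted QP.

```python
# Illustrative sketch: the delta QP ("dQP") value represents the difference
# between the assigned non-zero QP value and the predicted QP value, so an
# encoder may signal only dQP and a decoder may recover the assigned QP.
def delta_qp(assigned_qp, predicted_qp):
    """Encoder side: dQP to signal for the IPCM coded block."""
    return assigned_qp - predicted_qp

def recover_assigned_qp(predicted_qp, dqp):
    """Decoder side: non-zero QP assigned to the IPCM coded block."""
    return predicted_qp + dqp

predicted = 24
dqp = delta_qp(26, predicted)            # encoder signals dQP = 2
assert dqp == 2
assert recover_assigned_qp(predicted, dqp) == 26
```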
As another example, the described techniques may improve the visual quality of one or more lossless coding blocks that include original video data by disabling deblocking filtering for those blocks. For example, the techniques include signaling one or more syntax elements (e.g., 1-bit codes or "flags") that indicate that deblocking filtering is disabled for one or more lossless coding blocks. In some examples, the one or more syntax elements may indicate that deblocking filtering is disabled for all boundaries of one or more lossless coding blocks shared with other adjacent blocks of video data.
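The disabling behavior described above can be sketched as a simple per-boundary check. The dictionary field name is hypothetical; the sketch assumes a 1-bit flag associated with each lossless coded block, as the passage suggests.

```python
# Illustrative sketch: a 1-bit flag signaled for a lossless coded block
# disables deblocking filtering on all boundaries that block shares with
# adjacent blocks, preserving the original (unfiltered) video data.
def boundary_is_filtered(block_p, block_q):
    """Filter a shared boundary only if neither adjacent block disables it."""
    if block_p.get("deblock_disabled_flag") or block_q.get("deblock_disabled_flag"):
        return False
    return True

lossless_block = {"deblock_disabled_flag": 1}
lossy_block = {"deblock_disabled_flag": 0}
assert boundary_is_filtered(lossless_block, lossy_block) is False
assert boundary_is_filtered(lossy_block, lossy_block) is True
```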
As yet another example, the described techniques may also improve the visual quality of one or more blocks of video data that are located adjacent to an IPCM coding block or a lossless coding block and that are coded using a lossy coding mode, by performing deblocking filtering on the lossy blocks in a particular manner. For example, the techniques include performing deblocking filtering on one or more lossy coding blocks based on the non-zero QP values assigned for the neighboring IPCM or lossless coding blocks.
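One common way a deblocking filter derives its parameters on a shared edge, in HEVC-style designs, is from the average of the two adjacent blocks' QP values; the sketch below is illustrative and is not asserted to be this patent's exact derivation.

```python
# Illustrative sketch: filter parameters for the boundary between block P and
# block Q can be derived from an averaged QP, with rounding, so that the
# non-zero QP assigned to a neighboring IPCM block influences filtering of
# the adjacent lossy block's edge.
def edge_qp(qp_p, qp_q):
    """Averaged QP for the shared boundary, rounding up on halves."""
    return (qp_p + qp_q + 1) >> 1

assert edge_qp(26, 32) == 29
assert edge_qp(27, 30) == 29   # (27 + 30 + 1) >> 1
```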
In this way, there may be a relative improvement in visual quality of one or more blocks of video data including blocks coded using IPCM, lossless, and lossy coding modes when using the techniques of this disclosure.
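The overall per-block decision flow described in the preceding paragraphs can be summarized in a small sketch. The block representation and field names are hypothetical; the logic simply restates the three cases above.

```python
# Illustrative summary of the described behavior:
#  - lossless blocks: deblocking disabled (original data preserved)
#  - IPCM blocks: deblocking enabled, driven by an assigned non-zero QP
#  - lossy blocks: deblocking enabled, driven by the block's coded QP
def deblock_decision(block):
    """Return (filter_enabled, qp_used_for_filtering)."""
    if block["mode"] == "lossless":
        return (False, None)
    if block["mode"] == "ipcm":
        return (True, block["assigned_qp"])
    return (True, block["qp"])

assert deblock_decision({"mode": "lossless"}) == (False, None)
assert deblock_decision({"mode": "ipcm", "assigned_qp": 26}) == (True, 26)
assert deblock_decision({"mode": "lossy", "qp": 30}) == (True, 30)
```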
Fig. 1 is a block diagram illustrating an example of a video encoding and decoding system that may perform techniques for IPCM and lossless coding mode deblocking consistent with the techniques of this disclosure. As shown in fig. 1, system 10 includes a source device 12, source device 12 generating encoded video data to be later decoded by a destination device 14. Source device 12 and destination device 14 may comprise any of a wide range of devices, including desktop computers, notebook (i.e., laptop) computers, tablet computers, set-top boxes, telephone handsets (e.g., so-called "smart" phones), so-called "smart" pads, televisions, cameras, display devices, digital media players, video game consoles, video streaming devices, or the like. In some cases, source device 12 and destination device 14 may be equipped for wireless communication.
Destination device 14 may receive encoded video data to be decoded over link 16. Link 16 may comprise any type of media or device capable of moving encoded video data from source device 12 to destination device 14. In one example, link 16 may comprise a communication medium to enable source device 12 to transmit encoded video data directly to destination device 14 in real-time. The encoded video data may be modulated according to a communication standard, such as a wireless communication protocol, and transmitted to destination device 14. The communication medium may comprise any wireless or wired communication medium, such as a Radio Frequency (RF) spectrum or one or more physical transmission lines. The communication medium may form part of a packet-based network (e.g., a local area network, a wide area network, or a global network such as the internet). The communication medium may include routers, switches, base stations, or any other equipment that may be used to facilitate communication from source device 12 to destination device 14.
Alternatively, the encoded data may be output from output interface 22 to storage device 24. Similarly, encoded data may be accessed from storage device 24 through input interface 26. Storage device 24 may include any of a variety of distributed or locally accessed data storage media, such as a hard drive, Blu-ray discs, DVDs, CD-ROMs, flash memory, volatile or non-volatile memory, or any other suitable digital storage media for storing encoded video data. In another example, storage device 24 may correspond to a file server or another intermediate storage device that may hold the encoded video generated by source device 12. Destination device 14 may access the stored video data from storage device 24 via streaming or download. The file server may be any type of server capable of storing encoded video data and transmitting that encoded video data to destination device 14. Example file servers include web servers (e.g., for a website), FTP servers, Network Attached Storage (NAS) devices, or local disk drives. Destination device 14 may access the encoded video data over any standard data connection, including an internet connection. Such a data connection may include a wireless channel (e.g., a Wi-Fi connection), a wired connection (e.g., DSL, cable modem, etc.), or a combination of both, suitable for accessing encoded video data stored on a file server. The transmission of the encoded video data from storage device 24 may be a streaming transmission, a download transmission, or a combination of both.
The techniques of this disclosure are not necessarily limited to wireless applications or settings. The techniques may be applied to video coding in support of any of a variety of multimedia applications, such as over-the-air television broadcasts, cable television transmissions, satellite television transmissions, streaming video transmissions (e.g., via the internet), encoding of digital video for storage on a data storage medium, decoding of digital video stored on a data storage medium, or other applications. In some examples, system 10 may be configured to support one-way or two-way video transmission to support applications such as video streaming, video playback, video broadcasting, and/or video telephony.
In the example of fig. 1, source device 12 includes a video source 18, a video encoder 20, and an output interface 22. In some cases, output interface 22 may include a modulator/demodulator (modem) and/or a transmitter. In source device 12, video source 18 may include a source such as a video capture device (e.g., a video camera), a video archive containing previously captured video, a video feed interface to receive video from a video content provider, and/or a computer graphics system for generating computer graphics data as the source video, or a combination of these sources. As one example, if video source 18 is a video camera, source device 12 and destination device 14 may form so-called camera phones or video phones. In general, however, the techniques described in this disclosure are applicable to video coding generally, and may be applied to wireless and/or wired applications.
The captured, pre-captured, or computer-generated video may be encoded by video encoder 20. The encoded video data may be transmitted directly to destination device 14 via output interface 22 of source device 12. The encoded video data may also (or alternatively) be stored on storage device 24 for subsequent access by destination device 14 or other devices for decoding and/or playback.
Destination device 14 includes input interface 26, video decoder 30, and display device 28. In some cases, input interface 26 may include a receiver and/or a modem. Input interface 26 of destination device 14 receives the encoded video data over link 16 or from storage device 24. Encoded video data communicated over link 16 or provided on storage device 24 may include a variety of syntax elements generated by video encoder 20 for use by a video decoder, such as video decoder 30, in decoding the video data. These syntax elements may be included in encoded video data transmitted over a communication medium, stored on a storage medium, or stored on a file server.
The display device 28 may be integrated with the destination device 14 or external to the destination device 14. In some examples, destination device 14 may include an integrated display device and also be configured to interface with an external display device. In other examples, destination device 14 may be a display device. In general, display device 28 displays the decoded video data to a user, and may comprise any of a variety of display devices, such as a Liquid Crystal Display (LCD), a plasma display, an Organic Light Emitting Diode (OLED) display, or another type of display device.
Video encoder 20 and video decoder 30 may operate according to a video compression standard, such as the High Efficiency Video Coding (HEVC) standard currently being developed by the Joint Collaborative Team on Video Coding (JCT-VC) of the ITU-T Video Coding Experts Group (VCEG) and the ISO/IEC Motion Picture Experts Group (MPEG), and may comply with the HEVC test model (HM). Alternatively, video encoder 20 and video decoder 30 may operate according to other proprietary or industry standards, such as the ITU-T H.264 standard, also known as MPEG-4 Part 10 (Advanced Video Coding (AVC)), or extensions of these standards. However, the techniques of this disclosure are not limited to any particular coding standard. Other examples of video compression standards include MPEG-2 and ITU-T H.263. A recent draft of the HEVC standard, referred to as "HEVC Working Draft 8" or "WD8," is described in document JCTVC-J1003_d7, Bross et al., "High Efficiency Video Coding (HEVC) text specification draft 8," Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG16 WP3 and ISO/IEC JTC1/SC29/WG11, 10th Meeting: Stockholm, SE, 11-20 July 2012, which, as of October 2, 2012, is downloadable from http://phenix.int-evry.fr/jct/doc_end_user/documents/10_Stockholm/wg11/JCTVC-J1003-v8.zip.
Another draft of the HEVC standard, referred to in this disclosure as "HEVC Working Draft 4" or "WD4," is described in document JCTVC-F803_d2, Bross et al., "WD4: Working Draft 4 of High-Efficiency Video Coding," Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG16 WP3 and ISO/IEC JTC1/SC29/WG11, 6th Meeting: Torino, IT, 14-22 July 2011, which, as of October 2, 2012, is downloadable from http://phenix.int-evry.fr/jct/doc_end_user/documents/6_Torino/wg11/JCTVC-F803-v8.zip. Another draft of the HEVC standard, referred to in this disclosure as "HEVC Working Draft 6" or "WD6," is described in document JCTVC-H1003, Bross et al., "High Efficiency Video Coding (HEVC) text specification draft 6," Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG16 WP3 and ISO/IEC JTC1/SC29/WG11, 8th Meeting: San Jose, CA, USA, February 2012, which, as of June 1, 2012, is downloadable from http://phenix.int-evry.fr/jct/doc_end_user/documents/8_San%20Jose/wg11/JCTVC-H1003-v22.zip.
Although not shown in fig. 1, in some aspects, video encoder 20 and video decoder 30 may each be integrated with an audio encoder and decoder, and may include appropriate MUX-DEMUX units or other hardware and software to handle encoding of both audio and video in a common data stream or separate data streams. If applicable, in some examples, the MUX-DEMUX units may conform to the ITU H.223 multiplexer protocol, or other protocols such as the User Datagram Protocol (UDP).
Video encoder 20 and video decoder 30 may each be implemented as any of a variety of suitable encoder and decoder circuits, such as one or more microprocessors, Digital Signal Processors (DSPs), Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs), discrete logic, software, hardware, firmware, or any combination thereof. When the techniques are implemented in part in software, a device may store instructions for the software in a suitable non-transitory computer-readable medium and execute the instructions in hardware using one or more processors to perform the techniques of this disclosure. Each of video encoder 20 and video decoder 30 may be included in one or more encoders or decoders, either of which may be integrated as part of a combined encoder/decoder (CODEC) in the respective device.
HEVC standardization efforts are based on an evolving model of a video coding device, referred to as the HEVC test model (HM). The HM presumes several additional capabilities of video coding devices relative to existing devices according to, e.g., ITU-T H.264/AVC. For example, whereas H.264 provides nine intra-prediction encoding modes, the HM may provide as many as 35 intra-prediction encoding modes.
In general, the working model of the HM describes that a video frame or picture may be divided into a sequence of treeblocks or largest coding units (LCUs) that include both luma and chroma samples. A treeblock has a similar purpose as a macroblock of the H.264 standard. A slice includes a number of consecutive treeblocks in coding order. A video frame or picture may be partitioned into one or more slices. Each treeblock may be split into coding units (CUs) according to a quadtree. For example, a treeblock, as a root node of the quadtree, may be split into four child nodes, and each child node may in turn be a parent node and be split into another four child nodes. A final, unsplit child node, as a leaf node of the quadtree, comprises a coding node, i.e., a coded video block. Syntax data associated with a coded bitstream may define a maximum number of times a treeblock may be split, and may also define a minimum size of the coding nodes.
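The recursive quadtree splitting described above can be sketched as follows. This is an illustrative model only; `should_split` stands in for whatever criterion (e.g., the encoder's rate-distortion decision) determines whether a node is split.

```python
# Illustrative sketch of quadtree partitioning: a treeblock node either becomes
# a coding node (leaf) or splits into four equal child nodes, recursively,
# down to a minimum coding-node size.
def split_quadtree(x, y, size, min_size, should_split):
    """Return the leaf coding nodes as (x, y, size) tuples."""
    if size > min_size and should_split(x, y, size):
        half = size // 2
        leaves = []
        for dx in (0, half):
            for dy in (0, half):
                leaves += split_quadtree(x + dx, y + dy, half, min_size, should_split)
        return leaves
    return [(x, y, size)]

# Split a 64x64 treeblock once: the root splits, its four children do not.
leaves = split_quadtree(0, 0, 64, 8, lambda x, y, s: s == 64)
assert leaves == [(0, 0, 32), (0, 32, 32), (32, 0, 32), (32, 32, 32)]
```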
A CU includes a coding node and a number of prediction units (PUs) and transform units (TUs) associated with the coding node. A size of the CU corresponds to a size of the coding node and must be square in shape. The size of the CU may range from 8×8 pixels up to the size of the treeblock, with a maximum of 64×64 pixels or greater. Each CU may contain one or more PUs and one or more TUs. Syntax data associated with a CU may describe, for example, partitioning of the CU into one or more PUs. Partitioning modes may differ depending on whether the CU is skip- or direct-mode encoded, intra-prediction-mode encoded, or inter-prediction-mode encoded. PUs may be partitioned to be non-square in shape. Syntax data associated with a CU may also describe, for example, partitioning of the CU into one or more TUs according to a quadtree. A TU may be square or non-square in shape.
The HEVC standard allows for a transform according to a TU, which may be different for different CUs. TU sizes are typically set based on the size of PUs within a given CU defined for a partitioned LCU, but this may not always be the case. TUs are typically the same size as a PU, or smaller than a PU. In some examples, residual samples corresponding to a CU may be subdivided into smaller units using a quadtree structure referred to as a "residual quadtree" (RQT). The leaf nodes of the RQT may be referred to as TUs. The pixel difference values associated with the TUs may be transformed to produce quantifiable transform coefficients.
In general, a PU includes data related to a prediction process. For example, when the PU is encoded in intra mode, the PU may include data describing an intra prediction mode of the PU. As another example, when the PU is encoded in inter mode, the PU may include data defining a motion vector for the PU. The data defining the motion vector for the PU may describe, for example, a horizontal component of the motion vector, a vertical component of the motion vector, a resolution of the motion vector (e.g., one-quarter pixel precision or one-eighth pixel precision), a reference picture to which the motion vector points, and/or a reference picture list (e.g., list 0, list 1, or list C) of the motion vector.
In general, TUs are used for the transform process and the quantization process. A given CU with one or more PUs may also include one or more TUs. After prediction, video encoder 20 may calculate residual values corresponding to the PUs. The residual values comprise pixel difference values that may be transformed into transform coefficients that are quantized using TUs and scanned to generate serialized transform coefficients for entropy coding. This disclosure typically uses the term "video block" or simply "block" to refer to a coding node of a CU. In some particular cases, this disclosure may also use the term "video block" to refer to a tree block, i.e., an LCU or CU, that includes a coding node and PUs and TUs.
A video sequence typically comprises a series of video frames or pictures. A group of pictures (GOP) typically includes one or more of a series of video pictures. The GOP may include syntax data describing the number of pictures included in the GOP in a header of the GOP, a header of one or more of the pictures, or elsewhere. Each slice of a picture may include slice syntax data that describes an encoding mode of the respective slice. Video encoder 20 typically operates on video blocks within individual video slices in order to encode the video data. The video block may correspond to a coding node within a CU. Video blocks may have fixed or varying sizes and may differ in size according to a specified coding standard.
As an example, the HM supports prediction in various PU sizes. Assuming that the size of a particular CU is 2N×2N, the HM supports intra prediction in PU sizes of 2N×2N or N×N, and inter prediction in symmetric PU sizes of 2N×2N, 2N×N, N×2N, or N×N. The HM also supports asymmetric partitioning for inter prediction in PU sizes of 2N×nU, 2N×nD, nL×2N, and nR×2N. In asymmetric partitioning, one direction of a CU is not partitioned, while the other direction is partitioned into 25% and 75%. The portion of the CU corresponding to the 25% partition is indicated by an "n" followed by an indication of "U (Up)", "D (Down)", "L (Left)", or "R (Right)". Thus, for example, "2N×nU" refers to a 2N×2N CU that is partitioned horizontally with a 2N×0.5N PU on top and a 2N×1.5N PU on bottom.
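The asymmetric partition sizes named above follow mechanically from the CU size. The sketch below is illustrative (the function name and mode strings are hypothetical labels for the four modes).

```python
# Illustrative sketch: each asymmetric mode splits a 2Nx2N CU into a 25% PU
# and a 75% PU along one direction, with "U/D/L/R" indicating where the
# smaller PU sits.
def amp_partitions(two_n, mode):
    """Return the (width, height) of the two PUs, in raster order."""
    q = two_n // 4   # 25% of the CU along the split direction (0.5N)
    if mode == "2NxnU":
        return [(two_n, q), (two_n, two_n - q)]       # top 2Nx0.5N, bottom 2Nx1.5N
    if mode == "2NxnD":
        return [(two_n, two_n - q), (two_n, q)]
    if mode == "nLx2N":
        return [(q, two_n), (two_n - q, two_n)]
    if mode == "nRx2N":
        return [(two_n - q, two_n), (q, two_n)]
    raise ValueError(mode)

# For a 64x64 CU (2N = 64, so 0.5N = 16 and 1.5N = 48):
assert amp_partitions(64, "2NxnU") == [(64, 16), (64, 48)]
assert amp_partitions(64, "nRx2N") == [(48, 64), (16, 64)]
```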
In this disclosure, "N×N" and "N by N" are used interchangeably to refer to the pixel dimensions of a video block in terms of vertical and horizontal dimensions, e.g., 16×16 pixels or 16 by 16 pixels. In general, a 16×16 block will have 16 pixels in the vertical direction (y = 16) and 16 pixels in the horizontal direction (x = 16). Likewise, an N×N block generally has N pixels in the vertical direction and N pixels in the horizontal direction, where N represents a non-negative integer value. The pixels in a block may be arranged in rows and columns. In addition, a block does not necessarily need to have the same number of pixels in the horizontal direction as in the vertical direction. For example, a block may comprise N×M pixels, where M is not necessarily equal to N.
After intra-predictive or inter-predictive coding using PUs of the CU, video encoder 20 may calculate residual data for the TUs of the CU. The PU may comprise pixel data in a spatial domain, also referred to as a pixel domain, and the TU may comprise coefficients in a transform domain after applying a transform, such as a Discrete Cosine Transform (DCT), an integer transform, a wavelet transform, or a conceptually similar transform, to the residual video data. The residual data may correspond to pixel differences between pixels of the unencoded picture and prediction values corresponding to the PU. Video encoder 20 may form TUs that include the residual data of the CU and then transform the TUs to generate transform coefficients for the CU.
After applying any transform to generate transform coefficients, video encoder 20 may perform quantization of the transform coefficients. Quantization generally refers to the process of: transform coefficients are quantized to possibly reduce the amount of data used to represent the coefficients, providing further compression. The quantization process may reduce the bit depth associated with some or all of the coefficients. For example, an n-bit value may be truncated to an m-bit value during quantization, where n is greater than m.
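The bit-depth reduction described above can be illustrated with a simple scalar quantizer. This is a sketch only: the step-size formula below is the commonly cited approximation in which the quantizer step roughly doubles for every increase of 6 in QP, not the exact normative derivation.

```python
# Illustrative sketch: quantization divides each transform coefficient by a
# QP-derived step size, shrinking the range of values (and hence the bit
# depth) needed to represent them, at the cost of rounding error.
def quantize(coeffs, qp):
    step = 2 ** ((qp - 4) / 6.0)   # approximate QP-to-step-size mapping
    return [round(c / step) for c in coeffs]

def dequantize(levels, qp):
    step = 2 ** ((qp - 4) / 6.0)
    return [round(l * step) for l in levels]

# With qp=28 the step size is 16, so 8-bit-range coefficients collapse to
# small quantized levels.
levels = quantize([100, -52, 7, 0], qp=28)
assert levels == [6, -3, 0, 0]
```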
In some examples, video encoder 20 may utilize a predefined scanning or "scan" order to scan the quantized transform coefficients to generate a serialized vector that may be entropy encoded. In other examples, video encoder 20 may perform adaptive scanning. After scanning the quantized transform coefficients to form a one-dimensional vector, video encoder 20 may entropy encode the one-dimensional vector, e.g., according to Context Adaptive Variable Length Coding (CAVLC), Context Adaptive Binary Arithmetic Coding (CABAC), syntax-based context adaptive binary arithmetic coding (SBAC), Probability Interval Partitioning Entropy (PIPE) coding, or another entropy encoding method. Video encoder 20 may also entropy encode syntax elements associated with the encoded video data for use by video decoder 30 in decoding the video data.
To perform CABAC, video encoder 20 may assign a context within the context model to a symbol to be transmitted. The context may relate to, for example, whether neighboring values of a symbol are zero values. To perform CAVLC, video encoder 20 may select a variable length code for a symbol to be transmitted. Codewords in VLC may be constructed such that relatively shorter codes correspond to more likely symbols and relatively longer codes correspond to less likely symbols. In this way, bit savings may be achieved using VLC over, for example, using equal length codewords for each symbol to be transmitted. The probability determination may be made based on the context assigned to the symbol.
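As a toy illustration of the variable-length-coding idea described above (the codewords here are made up for illustration, not the actual CAVLC code tables):

```python
# A prefix-free code assigning shorter codewords to more likely symbols.
# "a" is assumed most frequent; "d" least frequent.
vlc_table = {"a": "0", "b": "10", "c": "110", "d": "111"}

def vlc_encode(symbols):
    """Concatenate the codewords for a sequence of symbols."""
    return "".join(vlc_table[s] for s in symbols)

# The frequent symbol "a" costs 1 bit; the rare "d" costs 3 bits.
# Four symbols take 6 bits here versus 8 bits with fixed 2-bit codewords:
print(vlc_encode(["a", "a", "a", "d"]))  # -> "000111"
```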
The following is discussed with reference to video encoder 20 and video decoder 30, and their various components, as depicted in fig. 2 and 3, and as described in more detail below. According to some video coding techniques, in the event that video encoder 20 (e.g., using mode select unit 40 of fig. 2) selects an IPCM coding mode to code a particular "current" block of video data based on error results, video encoder 20 (e.g., using IPCM encoding unit 48A of fig. 2) may encode data or "samples" of the current block directly into the bitstream as unprocessed data or samples. More specifically, in some versions of HEVC working draft ("WD") (e.g., version 4, or "WD 4"), the IPCM intra coding mode allows video encoder 20 to directly represent luma and chroma samples of a block of video data as unprocessed data (i.e., coding luma and chroma samples or values without modification or "as is"). Thus, video encoder 20 may encode the current block as an IPCM coding block without compressing the data in the block.
In one example, video encoder 20 may select the IPCM coding mode when the number of bits required to represent a compressed version of the current block (e.g., a version of the current block coded by intra-prediction or inter-prediction) exceeds the number of bits required to send a compressed version of the data in the block. In this case, video encoder 20 (e.g., using IPCM encoding unit 48A) may encode the original uncompressed data or samples for the current block as IPCM samples. In some cases, the original uncompressed data may be filtered by a deblocking filter (e.g., deblocking filter 64 of fig. 2) prior to being encoded as IPCM samples by video encoder 20.
In other examples, video encoder 20 may use intra or inter prediction to generate a compressed version of the current block to be entropy encoded (e.g., using entropy encoding unit 56 of fig. 2), and generate a reconstructed block from the compressed version of the current block for use as a reference picture. If video encoder 20 determines that an encoder pipeline stall may occur at an entropy encoding unit (e.g., entropy encoding unit 56), video encoder 20 may encode reconstructed samples of the reconstructed block as IPCM samples. In the example of fig. 2 described below, the reconstructed block is filtered by a deblocking filter (i.e., deblocking filter 64) before being encoded into IPCM samples by the IPCM encoding unit (i.e., IPCM encoding unit 48A). In other examples, the reconstructed block may be encoded by the IPCM encoding unit without filtering.
When video decoder 30 receives an encoded video bitstream from video encoder 20 that represents a block of video data that includes IPCM samples as unprocessed video data, video decoder 30 (e.g., using IPCM decoding unit 98A of fig. 3) may decode the bitstream to generate the block of video data directly from the IPCM samples. As described above, in some draft versions of HEVC (e.g., WD4), the IPCM intra coding mode allows video encoder 20 to directly represent luma and chroma samples of a block of video data as unprocessed data in the bitstream. Thus, video decoder 30 (e.g., using IPCM decoding unit 98A) may decode the current block, as an IPCM coded block, without decompressing the encoded data of the block.
In one example, the IPCM samples in the bitstream for the current block may be the original uncompressed samples, such that the decoded block is identical to the original block. In this case, the original block generated by video decoder 30 (e.g., using IPCM decoding unit 98A) may be directly output as decoded video. In some cases, the original block generated by video decoder 30 may be filtered by a deblocking filter (e.g., deblocking filter 94 of fig. 3) before being used as a reference picture and output as decoded video.
In another example, the IPCM samples in the bitstream for the current block may be reconstructed samples of a reconstructed version of the current block. In this case, the decoded block may be equivalent to a reconstructed version of the original block, which may contain some distortion compared to the original block. In the example of fig. 3 described below, reconstructed blocks generated by video decoder 30 (e.g., using IPCM decoding unit 98A) may be filtered by a deblocking filter (i.e., deblocking filter 94) before being used as a reference picture and output as decoded video. In other examples, the reconstructed block may be output directly from video decoder 30 (e.g., using IPCM decoding unit 98A) as decoded video without filtering.
Thus, some draft versions of HEVC (e.g., WD4) support the IPCM intra coding mode described above, which allows an encoder (e.g., video encoder 20) to directly represent luma and chroma CU samples of a current block of video data as unprocessed data in a bitstream. As explained previously, there are several possible uses for these IPCM intra coding techniques. As one example, IPCM intra coding may be used as a means for an encoder to ensure that the bit size of a coded representation of a block of video data does not exceed the number of bits required to send uncompressed data of the block. In these cases, the encoder may encode the original samples of data in the current block as IPCM samples. As another example, IPCM intra coding may be used to avoid encoder pipeline stalls. In these cases, the encoder may encode non-original samples (e.g., reconstructed samples) of data in the reconstructed version of the current block as IPCM samples.
In addition, some draft versions of HEVC (e.g., WD4) also support signaling the syntax element "pcm_loop_filter_disable_flag" in a Sequence Parameter Set (SPS) associated with one or more blocks of video data to indicate whether the loop filtering process is disabled for IPCM coded blocks. The loop filtering process may include deblocking filtering, Adaptive Loop Filtering (ALF), and Sample Adaptive Offset (SAO). If the pcm_loop_filter_disable_flag value is equal to true, or "1," both the deblocking and adaptive loop filtering processes are disabled for the samples of an IPCM coded block. Otherwise, when the pcm_loop_filter_disable_flag value is equal to false, or "0," both the deblocking and adaptive loop filtering processes are enabled for the samples of the IPCM coded block.
When the original uncompressed samples of the current block are coded as IPCM samples, the samples are distortion-free. Therefore, in-loop filtering such as deblocking filtering, ALF, and SAO is unnecessary and may be skipped. Conversely, when reconstructed samples of a reconstructed version of the current block are coded as IPCM samples, a video decoder (e.g., video decoder 30) may need to perform in-loop filtering, including deblocking filtering, along the edges of the IPCM block.
Deblocking filters in some draft versions of HEVC (e.g., deblocking filter 64 of video encoder 20 of fig. 2, or deblocking filter 94 of video decoder 30 of fig. 3) may filter certain TU and PU edges of blocks of video data based on results from boundary strength calculations (which are described in more detail below with reference to fig. 6) and deblocking decisions. For example, the deblocking decisions may include whether a deblocking filter is turned on or off, whether the deblocking filter is weak or strong, and the strength of the weak filter, for a given block of video data. The boundary strength calculations and deblocking decisions depend on threshold values "tc" and "β," which in turn depend on a QP value. For example, a deblocking filter may obtain a QP value from a block containing a current edge to be deblocked (i.e., a "luma QP" for a luma edge and a "chroma QP" for a chroma edge). In some draft versions of HEVC (e.g., WD6), deblocking filtering, when applied, filters a so-called "common edge" between two blocks (e.g., edges of certain TUs and/or PUs). According to these draft versions of HEVC, the common edge is filtered based on an average QP value (e.g., "QPave") computed from the QP values of the two blocks.
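A minimal sketch of that average, assuming the rounded form (QP_P + QP_Q + 1) >> 1 that also appears in table 1 of this disclosure:

```python
def edge_qp_average(qp_p, qp_q):
    """Rounded average of the QP values of blocks P and Q, used to
    deblock their common edge (the QPave described above)."""
    return (qp_p + qp_q + 1) >> 1

print(edge_qp_average(32, 28))  # -> 30
# When one block is an IPCM block whose QP is forced to 0, the average
# collapses to roughly half of the neighbor's QP, which this disclosure
# identifies as a source of overly weak filtering:
print(edge_qp_average(0, 30))   # -> 15
```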
As another example, lossless coding mode has been adopted in some draft versions of HEVC (e.g., WD 6). In a lossless coding mode, in one example, the original or "unprocessed" data for a block of video data may be coded without performing the above-described prediction, summation, transform, quantization, and entropy coding steps. In another example, residual data for a block of video data is not quantized by an encoder (e.g., video encoder 20). Thus, in this example, when a decoder (e.g., video decoder 30) adds non-quantized residual data to the prediction data, the resulting video data may be a lossless rendition of the original video data encoded by the encoder. In any case, the lossless coding mode may be used, for example, by an encoder when encoding video data or by a decoder when decoding video data.
In a coded bitstream, setting the syntax element "qpprime_y_zero_transquant_bypass_flag" in an SPS associated with one or more blocks of video data to a value of "1" may specify that, where the luma QP, or "QP'Y," value of a current block of video data equals "0," a lossless coding process will be applied to code the block. In lossless coding mode, the scaling and transform processes and the in-loop filtering processes described above may be skipped.
In some draft versions of HEVC (e.g., WD6), the luma quantization parameter QP'Y is defined as follows:

QP'Y = QPY + QpBdOffsetY    equation 1

where QpBdOffsetY = 6 × bit_depth_luma_minus8.

In this example, if the bit depth is 8 bits, QpBdOffsetY is equal to "0," and if the bit depth is 10 bits, QpBdOffsetY is equal to "12." QPY ranges from "-QpBdOffsetY" to "51," and QP'Y ranges from "0" to "51 + QpBdOffsetY."
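Equation 1 and the ranges above can be checked with a short sketch:

```python
def qp_bd_offset_y(bit_depth_luma):
    # QpBdOffsetY = 6 * bit_depth_luma_minus8
    return 6 * (bit_depth_luma - 8)

def qp_prime_y(qp_y, bit_depth_luma):
    # QP'Y = QPY + QpBdOffsetY (equation 1)
    return qp_y + qp_bd_offset_y(bit_depth_luma)

print(qp_bd_offset_y(8))    # -> 0
print(qp_bd_offset_y(10))   # -> 12
# The QPY range [-QpBdOffsetY, 51] maps onto the QP'Y range
# [0, 51 + QpBdOffsetY]:
print(qp_prime_y(-12, 10))  # -> 0
print(qp_prime_y(51, 10))   # -> 63
```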
According to some draft versions of HEVC (e.g., WD6), the in-loop deblocking filter may skip processing of a current CU or block of video data for which QP'Y is equal to "0," i.e., for which qpprime_y_zero_transquant_bypass_flag is equal to "1." However, where an adjacent CU or block is not losslessly coded (e.g., "QP'Y > 0" for that CU or block), the deblocking filter skips only the left and upper edges of the current, losslessly coded CU, while deblocking filtering may still be performed on the right and lower edges of the current CU, as illustrated in fig. 8A described in more detail below. One potential problem associated with this approach is that the deblocking filter modifies the lossless samples along the right and lower edges of the current block, as shown by the dashed portion of the lossless CU (i.e., block 812) shown in fig. 8A.
In this example, the deblocking filter parameters β and tc may be determined based on the parameter "QPL." The parameter "QPL" is an average of the QPY values of the blocks on both sides of the current edge being deblocked. In the case where one side of the edge is losslessly coded, the QPL value may be calculated using the following expression:
QPL = (-QpBdOffsetY + QPY + 1) >> 1    equation 2
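A sketch of equation 2, under the assumption that the lossless side contributes QPY = -QpBdOffsetY (i.e., QP'Y = 0) to the rounded average:

```python
def qp_l_one_side_lossless(qp_y_lossy, qp_bd_offset_y):
    # Equation 2: the lossless side of the edge contributes
    # QPY = -QpBdOffsetY, so the rounded average becomes:
    return (-qp_bd_offset_y + qp_y_lossy + 1) >> 1

print(qp_l_one_side_lossless(30, 0))   # 8-bit video  -> 15
print(qp_l_one_side_lossless(30, 12))  # 10-bit video -> 9
```

This illustrates the inadequacy the disclosure points out: the lossy side's QP of 30 is pulled down to 15 (or lower at higher bit depths), weakening the filter.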
The various methods described above with respect to performing deblocking filtering on IPCM and lossless coding blocks of video data have several drawbacks.
As one example, in the case of IPCM coded blocks, some draft versions of HEVC (e.g., WD4) specify that the QP value for the block always equals "0." Setting the QP value equal to "0" for each IPCM block effectively disables deblocking filtering of the left and upper edges of the block, regardless of the value of pcm_loop_filter_disable_flag associated with the block. However, in some cases (e.g., when the IPCM block includes reconstructed samples), it may be desirable to perform deblocking filtering on the left and upper edges of the IPCM coded block. Additionally, in some cases, the right and lower edges of the IPCM block may be filtered depending on the types and QP values of neighboring blocks of video data. Furthermore, as previously described, some draft versions of HEVC (e.g., WD6) specify calculating an average of the QP values of two blocks to perform deblocking filtering on the "common edge" between the blocks. Thus, in the case where one block is an IPCM block, the average calculation may yield a value that is roughly half the QP value of the other block (i.e., because the QP value of the IPCM block is equal to "0"). This may result in deblocking filtering that is too weak on common edges, regardless of the value of pcm_loop_filter_disable_flag.
As another example, the techniques described in this disclosure may be used to improve the manner in which the in-loop deblocking filtering process described above filters the boundary edges of a lossless CU or block of video data according to some draft versions of HEVC (e.g., WD6). As one example, it may be undesirable to perform deblocking filtering on the right and lower edges of a lossless CU or block, which may modify lossless samples of the block. As another example, in the case where lossy samples adjacent to a losslessly coded CU or block are to be modified (which is similar to IPCM edge deblocking in the case where pcm_loop_filter_disable_flag is equal to true), as shown in fig. 8B, the QPL value derived using the above-described technique (i.e., equation 2) may be inappropriate.
This disclosure describes several techniques that may, in some cases, reduce or eliminate some of the above disadvantages. In particular, the techniques of this disclosure may provide support for performing deblocking filtering on IPCM coded blocks, lossless coded blocks, and so-called "lossy" coded blocks located adjacent to one or more IPCM or lossless coded blocks.
As one example, the disclosed techniques include assigning, based on a predicted QP value, a non-zero QP value to an IPCM block when deblocking filtering is enabled. For example, the predicted QP value may be a QP value for a quantization group that includes the IPCM block, or a QP value for a neighboring block of video data located adjacent or near the IPCM block. In some cases, the disclosed techniques may only be applicable to IPCM blocks composed of reconstructed samples, since original samples are distortion-free and deblocking filtering is typically not required. In other cases, the techniques may be applied to IPCM blocks composed of either reconstructed or original samples.
As another example, video decoder 30 may implicitly assign a non-zero QP value to an IPCM block based on a known predicted QP value. The predicted QP value may be a QP value for a quantization group that includes the IPCM block, or for a neighboring block of the IPCM block. For example, when the IPCM block has a size less than the minimum CU quantization group size, video decoder 30 may set the assigned non-zero QP value for the IPCM block equal to the QP value for the quantization group that includes the IPCM block. The quantization group may include one or more blocks or CUs of video data that are smaller than the minimum CU quantization group size and all have the same QP value. When the IPCM block has a size greater than or equal to the minimum CU quantization group size, video decoder 30 may set the assigned non-zero QP value for the IPCM block equal to the QP value of a neighboring block of the IPCM block. The neighboring block may be a block of video data located to the left of the IPCM block, or the block that most closely precedes the IPCM block in coding order.
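A hedged sketch of that implicit assignment rule (the parameter names here are illustrative, not taken from the HEVC specification):

```python
def assign_ipcm_qp(ipcm_block_size, min_cu_qg_size,
                   quant_group_qp, left_neighbor_qp):
    """Implicit non-zero QP assignment for an IPCM block, per the
    rule described above: a block smaller than the minimum CU
    quantization group size inherits the quantization group's QP;
    otherwise it inherits a neighboring block's QP."""
    if ipcm_block_size < min_cu_qg_size:
        return quant_group_qp
    return left_neighbor_qp

print(assign_ipcm_qp(8, 16, 27, 31))   # small block  -> 27 (group QP)
print(assign_ipcm_qp(16, 16, 27, 31))  # larger block -> 31 (neighbor QP)
```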
As yet another example, video encoder 20 may assign a non-zero QP value to an IPCM block based on the predicted QP value and explicitly signal the assigned non-zero QP value to video decoder 30. For example, video encoder 20 may signal a dQP value for an IPCM block that represents a difference between an assigned non-zero QP value and a predicted QP value. In this case, video decoder 30 may assign a non-zero QP value to the IPCM block based on the received dQP value and the predicted QP value for the IPCM block. Video decoder 30 may then apply a deblocking filter to samples of the IPCM block based on the assigned non-zero QP value for the IPCM block. In other examples, video encoder 20 may signal the assigned QP value directly to video decoder 30.
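The dQP relationship can be sketched as follows (ignoring the QP wrap-around arithmetic a real codec would apply when coding the difference):

```python
def decoder_assign_qp(predicted_qp, dqp):
    """Decoder-side reconstruction of the assigned non-zero QP from
    the signaled dQP: assigned = predicted + dQP, where dQP is the
    difference the encoder signaled between the assigned non-zero QP
    and the predicted QP."""
    return predicted_qp + dqp

# Encoder assigns QP 30 against a predicted QP of 26, so it signals
# dQP = 4; the decoder recovers the assigned value:
print(decoder_assign_qp(26, 4))  # -> 30
```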
As yet another example, in accordance with the techniques of this disclosure, as illustrated in fig. 8B, the deblocking filter may be turned off for all boundary edges (i.e., the upper, lower, left, and right boundary edges) of a losslessly coded CU or block of video data (e.g., a CU or block for which qpprime_y_zero_transquant_bypass_flag is equal to "1" and QP'Y = 0). For example, the QP'Y values on both sides of the current edge to be deblocked may be checked (i.e., the QP'Y value of the losslessly coded block and the QP'Y value of the neighboring block sharing the current edge to be deblocked), and deblocking may be skipped if at least one such value is equal to "0." Alternatively, the QPY values on both sides of the current edge may be checked, and deblocking may be skipped if at least one such value is equal to "-QpBdOffsetY." To avoid testing QP values for inner edges (e.g., "TU" edges) of a losslessly coded CU or block, the deblocking filter may disable processing of these edges. For example, in some examples, the parameter "bInternalEdge" may be set to false for the entire CU or block.
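The two-sided check described above can be sketched as:

```python
def skip_edge_deblocking(qp_prime_y_p, qp_prime_y_q):
    """Check QP'Y on both sides of the current edge and skip
    deblocking if at least one value equals 0 (i.e., at least one
    side is losslessly coded), per the technique described above."""
    return qp_prime_y_p == 0 or qp_prime_y_q == 0

print(skip_edge_deblocking(0, 34))   # lossless neighbor -> True (skip)
print(skip_edge_deblocking(28, 34))  # both lossy        -> False (deblock)
```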
As another example, according to the techniques of this disclosure, as illustrated in fig. 8B, the deblocking filter may modify lossy block samples located adjacent to a losslessly coded CU or block, while leaving the lossless CU samples unmodified. This may require modifying the manner in which the QPL value used to derive β and tc is calculated, because the QPL value calculated using the techniques described above (e.g., see equation 2 above) may be inadequate for visually integrating a losslessly coded CU or block with surrounding lossy coded areas. One potential solution proposed by this disclosure is to use the maximum of the two QPY,P/Q values to deblock the current edge shared by a losslessly coded block and an adjacent lossy block, as shown in the following expression:
QPL = max(QPY,P, QPY,Q)    equation 3
where P and Q represent the blocks (i.e., the losslessly coded block and the lossy block) on both sides of the current edge (e.g., left and right blocks, or upper and lower blocks). This effectively uses the QPY value of the CU or block on the lossy coded side of an edge located between blocks P and Q, as illustrated in table 1 below.
| Block P in a lossless CU? | Block Q in a lossless CU? | QPL for deblocking |
| Yes | No | QPY,Q |
| No | Yes | QPY,P |
| No | No | (QPY,P + QPY,Q + 1) / 2 |
| Yes | Yes | No edge deblocking |
Table 1: QPL used for deblocking filtering an edge between two blocks, according to the lossless coding modes of block P and block Q
Pseudocode may be used to obtain the proposed "modified QPL derivation" without determining special conditions (e.g., a lossless mode determination may already be available).
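As one possible sketch of such a derivation (illustrative code, not the patent's original pseudocode), assuming per-block lossless-mode flags are available:

```python
def qp_l_for_edge(qp_y_p, qp_y_q, p_lossless, q_lossless):
    """Modified QPL derivation following table 1 / equation 3.
    Returns None when both sides are lossless (no edge deblocking)."""
    if p_lossless and q_lossless:
        return None                    # no deblocking of this edge
    if p_lossless:
        return qp_y_q                  # use the lossy side's QPY
    if q_lossless:
        return qp_y_p                  # use the lossy side's QPY
    return (qp_y_p + qp_y_q + 1) >> 1  # usual rounded average

print(qp_l_for_edge(0, 30, True, False))    # -> 30 (equation 3 behavior)
print(qp_l_for_edge(26, 30, False, False))  # -> 28
print(qp_l_for_edge(0, 0, True, True))      # -> None
```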
In some cases, the techniques proposed in this disclosure smooth the edge discontinuity between lossy and lossless coded regions by way of the proposed QPL derivation for determining deblocking strength. Using other techniques, the boundary discontinuity between a lossless coded region and a lossy coded region may result in a clearly visible boundary. The techniques proposed in this disclosure include determining an appropriate QPL for deblocking filtering, which in some cases may help reduce such boundary discontinuities.
To enable the deblocking behavior described above consistent with the techniques of this disclosure, a one-bit code or "flag" may be signaled in an SPS, Picture Parameter Set (PPS), Adaptation Parameter Set (APS), or slice header. For example, a syntax element "lossless_loop_filter_enable_flag" equal to "1" may be used to enable deblocking filtering of lossy coded samples adjacent to a lossless CU or block edge (as shown in fig. 8B), while the flag equal to "0" may be used to disable deblocking for all boundaries of a lossless CU. Alternatively, the definition of pcm_loop_filter_disable_flag described above with reference to IPCM coded blocks may be extended to also cover losslessly coded CUs or blocks. For example, in the case of pcm_loop_filter_disable_flag equal to true (e.g., equal to "1"), the deblocking behavior depicted in fig. 8B may be applicable to both IPCM boundary edges and lossless CU boundary edges. If pcm_loop_filter_disable_flag is equal to false (e.g., equal to "0"), deblocking on lossless CU boundaries may be fully enabled, while IPCM deblocking may be enabled on both sides of IPCM boundaries.
In another example, if pcm_loop_filter_disable_flag is equal to false, deblocking on lossless CU or block boundaries may be enabled on both sides of the boundary, as in the case of IPCM boundaries. In yet another example, if pcm_loop_filter_disable_flag is equal to false, deblocking of lossless CU or block boundaries and IPCM boundaries may be disabled on both sides of the boundaries, and if pcm_loop_filter_disable_flag is equal to true, deblocking of lossless CU boundaries and IPCM boundaries may be enabled on only one side, as depicted in fig. 8B. The pcm_loop_filter_disable_flag may be renamed to the syntax element "pcm_transquant_loop_filter_disable_flag" to reflect its applicability to both the IPCM coding mode and the lossless coding mode.
Thus, the two values of pcm_loop_filter_disable_flag may correspond to (1) enabling deblocking for both sides and disabling deblocking for both sides, (2) enabling deblocking for both sides and blending, or (3) disabling deblocking for both sides and blending. "Both sides" in this context refers to the two sides of the boundary between a lossy coded CU or block and a losslessly coded CU or block (i.e., one boundary edge inside the losslessly coded CU, and one boundary edge inside the lossy coded CU, as shown in figs. 8A and 8B). "Blending" in this context generally refers to the techniques described herein in which deblocking filtering is performed on the boundary edges inside lossy coded CUs, but deblocking filtering is disabled for the boundary edges inside losslessly coded CUs.
As yet another example, in accordance with the techniques of this disclosure, a QP value or a syntax element "delta_QP" value for the current block may be signaled to video decoder 30, e.g., from video encoder 20, along with the lossless CU data for the block, to control deblocking filtering along boundary edges between the current block and one or more other blocks. According to yet another technique of this disclosure, the QP value used to control deblocking filtering of one or more boundary edges of a losslessly coded CU or block may be predicted from the QPY value or delta_QP value of a lossy coded CU or block. According to other techniques of this disclosure, a constant QP value (e.g., "0," or another value) may be assigned to a lossless CU to control deblocking along its boundary edges.
Thus, in some examples consistent with the techniques of this disclosure, video encoder 20 of source device 12 may be configured to encode certain blocks of video data (e.g., one or more PUs or TUs of a CU). In these embodiments, video decoder 30 of destination device 14 may be configured to receive encoded video data from video encoder 20 and decode the video data. As one example, video encoder 20 and/or video decoder 30 may be configured to code a plurality of blocks of video data, wherein at least one block of the plurality of blocks of video data is coded using a coding mode that is one of an IPCM coding mode and a lossless coding mode. In some examples, the lossless coding mode may correspond to a lossless coding mode that uses prediction. In these examples, video encoder 20 and/or video decoder 30 may perform at least the prediction and summation steps described above to generate residual data for a block of video data. Moreover, in these examples, video encoder 20 and/or video decoder 30 may avoid quantizing the residual data. However, in other examples, the lossless coding mode may correspond to a lossless coding mode that does not use prediction (e.g., where the original or "unprocessed" data of a block of video data is coded without performing the prediction, summation, transform, quantization, and entropy coding steps described above). In any case, in this example, video encoder 20 and/or video decoder 30 may be further configured to assign a non-zero QP value for at least one block coded using the coding mode. Also, in this example, video encoder 20 and/or video decoder 30 may be further configured to perform deblocking filtering on one or more of the plurality of blocks of video data based on a coding mode used to code the at least one block and the assigned non-zero QP value for the at least one block.
In this way, the techniques of this disclosure may enable video encoder 20 and/or video decoder 30 to improve the visual quality of one or more blocks of video data when coding the one or more blocks, as compared to other techniques. In particular, the described techniques may improve the visual quality of one or more of the IPCM coding blocks consisting of reconstructed video data by enabling deblocking filtering for the blocks and performing deblocking filtering in a particular manner. Moreover, the techniques may improve the visual quality of one or more lossless coding blocks that include original video data (e.g., whether coded as original or "unprocessed" video data, or as residual unquantized video data) by disabling deblocking filtering. Moreover, the techniques may also improve the visual quality of one or more of the blocks coded using lossy coding techniques by performing deblocking filtering in a particular manner on blocks (e.g., blocks located adjacent to one or more of the IPCM or lossless coding blocks). As a result, there may be a relative improvement in visual quality of one or more blocks of video data including blocks coded using IPCM, lossless, and lossy coding modes when using the techniques of this disclosure.
Video encoder 20 and video decoder 30 may each be implemented as any of a variety of suitable encoder and decoder circuits, such as one or more microprocessors, DSPs, ASICs, FPGAs, discrete logic, software, hardware, firmware, or any combinations thereof, where appropriate. Each of video encoder 20 and video decoder 30 may be included in one or more encoders or decoders, either of which may be integrated as part of a combined video encoder/decoder (CODEC). An apparatus including video encoder 20 and/or video decoder 30 may comprise an Integrated Circuit (IC), a microprocessor, and/or a wireless communication device such as a cellular telephone.
Fig. 2 is a block diagram illustrating an example of a video encoder consistent with the techniques of this disclosure, which may perform techniques for IPCM and lossless coding mode deblocking. Video encoder 20 may perform intra-coding and inter-coding of video blocks within a video slice. Intra-coding relies on spatial prediction to reduce or remove spatial redundancy of video within a given video frame or picture. Inter-coding relies on temporal prediction to reduce or remove temporal redundancy of video within adjacent frames or pictures of a video sequence. Intra-mode (I-mode) may refer to any of a number of space-based compression modes. An inter mode, such as uni-directional prediction (P-mode) or bi-directional prediction (B-mode), may refer to any of a number of time-based compression modes.
In the example of fig. 2, video encoder 20 includes mode select unit 40, motion estimation unit 42, motion compensation unit 44, intra prediction module 46, IPCM encoding unit 48A, lossless encoding unit 48B, reference picture memory 66, summer 50, transform module 52, quantization unit 54, and entropy encoding unit 56. For video block reconstruction, video encoder 20 also includes an inverse quantization unit 58, an inverse transform module 60, and a summer 62. Deblocking filter 64 is also included to filter block boundaries to remove blocking artifacts from the reconstructed video.
As shown in fig. 2, video encoder 20 receives a current video block within a video slice to be encoded. The slice may be divided into a plurality of video blocks. Mode select unit 40 may select one of the coding modes (intra, inter, IPCM, or lossless) for the current video block based on the error results. If intra or inter mode is selected, mode select unit 40 provides the resulting intra or inter coded block to summer 50 to generate residual block data, and to summer 62 to reconstruct the encoded block for use as a reference picture. Intra-prediction module 46 intra-predictively codes the current video block relative to one or more neighboring blocks within the same frame or slice as the current block to be coded to provide spatial compression. Motion estimation unit 42 and motion compensation unit 44 perform inter-predictive coding on the current video block relative to one or more predictive blocks within one or more reference pictures to provide temporal compression.
In the case of inter-coding, motion estimation unit 42 may be configured to determine the inter-prediction mode for a video slice according to a predetermined pattern for a video sequence. The predetermined pattern may designate video slices in the sequence as P slices, B slices, or GPB slices. Motion estimation unit 42 and motion compensation unit 44 may be highly integrated, but are illustrated separately for conceptual purposes. Motion estimation, performed by motion estimation unit 42, is the process of generating motion vectors, which estimate the motion of video blocks. A motion vector, for example, may indicate the displacement of a PU of a video block within a current video frame or picture relative to a predictive block within a reference picture.
A predictive block is a block that is found to closely match the PU of the video block to be coded in terms of pixel difference, which may be determined by Sum of Absolute Differences (SAD), Sum of Squared Differences (SSD), or other difference metrics. In some examples, video encoder 20 may calculate values for sub-integer pixel positions of reference pictures stored in reference picture memory 66. For example, video encoder 20 may calculate values for quarter-pixel positions, eighth-pixel positions, or other fractional pixel positions of a reference picture. Thus, motion estimation unit 42 may perform a motion search relative to full pixel positions and fractional pixel positions and output a motion vector with fractional pixel precision.
Motion estimation unit 42 calculates a motion vector for a PU of a video block in an inter-coded slice by comparing the location of the PU to the location of a predictive block of a reference picture. The reference picture may be selected from a first reference picture list (list 0) or a second reference picture list (list 1), each of which identifies one or more reference pictures stored in reference picture memory 66. Motion estimation unit 42 sends the calculated motion vectors to entropy encoding unit 56 and motion compensation unit 44.
The motion compensation performed by motion compensation unit 44 may involve extracting or generating a predictive block based on a motion vector determined by motion estimation. Upon receiving the motion vector for the PU of the current video block, motion compensation unit 44 may locate the predictive block to which the motion vector points in one of the reference picture lists. Video encoder 20 forms a residual video block by subtracting the pixel values of the predictive block from the pixel values of the current video block being encoded, forming pixel difference values. The pixel difference values form residual data for the block and may include both luma and chroma difference components. Summer 50 represents one or more components that perform this subtraction operation. Motion compensation unit 44 may also generate syntax elements associated with the video blocks and the video slice for use by video decoder 30 in decoding the video blocks of the video slice.
After motion compensation unit 44 generates the predictive block for the current video block, video encoder 20 forms a residual video block by subtracting the predictive block from the current video block. The residual video data in the residual block may be included in one or more TUs and applied to transform module 52. Transform module 52 transforms the residual video data into residual transform coefficients using a transform, such as a Discrete Cosine Transform (DCT) or a conceptually similar transform. Transform module 52 may convert the residual video data from the pixel domain to a transform domain, such as the frequency domain.
Transform module 52 may send the resulting transform coefficients to quantization unit 54. Quantization unit 54 quantizes the transform coefficients to further reduce the bit rate. The quantization process may reduce the bit depth associated with some or all of the coefficients. The degree of quantization may be modified by adjusting the QP. In some examples, quantization unit 54 may then perform a scan of a matrix including quantized transform coefficients. Alternatively, entropy encoding unit 56 may perform the scanning.
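The relationship between QP and the degree of quantization can be sketched using the conventional H.264/HEVC approximation in which the quantizer step size roughly doubles for every increase of 6 in QP. The scalar quantizer below is an illustrative simplification of what quantization unit 54 does, not the standard's integer arithmetic:

```python
def qstep(qp):
    """Approximate quantizer step size: doubles every 6 QP increments,
    with a step size of 1 at QP 4 (H.264/HEVC convention)."""
    return 2.0 ** ((qp - 4) / 6.0)

def quantize(coeffs, qp):
    """Scalar-quantize transform coefficients to integer levels."""
    return [int(round(c / qstep(qp))) for c in coeffs]

def dequantize(levels, qp):
    """Reverse the quantization scaling (the rounding loss is not recoverable)."""
    return [level * qstep(qp) for level in levels]
```

A higher QP gives a larger step size, fewer distinct levels, and therefore a lower bit rate at the cost of more distortion, which is why the assigned QP later drives the deblocking decisions.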
After quantization, entropy encoding unit 56 entropy encodes the quantized transform coefficients. For example, entropy encoding unit 56 may perform CAVLC, CABAC, or another entropy encoding technique. After entropy encoding by entropy encoding unit 56, the encoded bitstream may be transmitted to video decoder 30 or archived for later transmission or retrieval by video decoder 30. Entropy encoding unit 56 may also entropy encode the motion vectors and other syntax elements of the current video slice being coded.
Inverse quantization unit 58 and inverse transform module 60 apply inverse quantization and inverse transform, respectively, to reconstruct residual blocks in the pixel domain for later use as reference blocks of a reference picture. Motion compensation unit 44 may calculate the reference block by adding the residual block to a predictive block of one of the reference pictures within one of the reference picture lists. Motion compensation unit 44 may also apply one or more interpolation filters to the reconstructed residual block to calculate sub-integer pixel values for use in motion estimation. Summer 62 adds the reconstructed residual block to the motion compensated prediction block produced by motion compensation unit 44 to produce a reference block for storage in reference picture memory 66. The reference block may be used by motion estimation unit 42 and motion compensation unit 44 as a reference block to inter-predict a block in a subsequent video frame or picture.
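The reconstruction path described above can be sketched as follows. For brevity the transform stages are omitted and the residual is treated as already being in the pixel domain; this is an illustrative simplification, not the encoder's actual pipeline.

```python
import numpy as np

def reconstruct(pred, quant_residual_levels, qp, bit_depth=8):
    """Rebuild a pixel block for the reference picture memory:
    dequantize the residual levels, add the motion-compensated prediction,
    and clip the result to the valid pixel range."""
    step = 2.0 ** ((qp - 4) / 6.0)  # approximate quantizer step size
    residual = np.asarray(quant_residual_levels, dtype=np.float64) * step
    recon = np.asarray(pred, dtype=np.float64) + residual
    return np.clip(np.rint(recon), 0, (1 << bit_depth) - 1).astype(np.int64)
```

The clipping step matters: quantization error added back onto the prediction can push samples outside the 8-bit range, and the stored reference must stay within it.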
As described above with reference to fig. 1, video encoder 20 also includes IPCM encoding unit 48A and lossless encoding unit 48B, which may enable video encoder 20 to perform the IPCM and lossless coding techniques attributed to video encoder 20 in this disclosure.
As one example, video encoder 20 may be configured to encode one or more blocks of video data during a video coding process. For example, video encoder 20 may be configured to encode a plurality of blocks of video data, wherein video encoder 20 encodes at least one block of the plurality of blocks of video data using a coding mode that is one of an IPCM coding mode and a lossless coding mode. As previously explained, in some examples, the lossless coding mode may include performing prediction on at least one block to code the block (e.g., along with summing to generate residual data for the at least one block). However, in other examples, a lossless coding mode may be used to code at least one block without performing prediction (e.g., as raw, or "unprocessed" video data).
For example, as previously described, at least one block of the plurality of blocks of video data encoded using the IPCM coding mode may correspond to at least one block that includes reconstructed video data. For example, the reconstructed video data may be generated by video encoder 20 by performing the prediction, summation, transform, and quantization steps described above with reference to video encoder 20 of fig. 1 using blocks of the original video data. By performing the above steps, video encoder 20 may generate blocks of quantized and transformed residual coefficients. Subsequently, video encoder 20 may be configured to perform inverse quantization, inverse transform, prediction, and summation (also described above) on the quantized and transformed residual coefficients to generate blocks of reconstructed video data. Alternatively, as also previously described, the at least one block encoded using the lossless coding mode may correspond to at least one block that includes original video data or residual unquantized video data.
In any case, video encoder 20 may be further configured to assign a non-zero QP value for at least one block encoded using the coding mode. As previously described, video encoder 20 may be configured to assign a non-zero QP value for at least one block using, for example, a predicted QP value for the at least one block, which may be determined using the QP of each of one or more neighboring blocks of video data. Video encoder 20 may also be configured to perform deblocking filtering on one or more of the plurality of blocks of video data based on a coding mode used by video encoder 20 to encode the at least one block and a non-zero QP value assigned to the at least one block.
In some examples, to perform deblocking filtering on one or more of a plurality of blocks of video data based on a coding mode used to code at least one block and an assigned non-zero QP value, video encoder 20 may be configured to perform the following steps. For example, if the coding mode used to code the at least one block is an IPCM coding mode, video encoder 20 may be configured to perform deblocking filtering on the at least one block based on the assigned non-zero QP value. Moreover, if the coding mode used to code the at least one block is a lossless coding mode, video encoder 20 may be configured to perform deblocking filtering on adjacent blocks of the plurality of blocks of video data based on the assigned non-zero QP value. In this example, the neighboring block may be located adjacent to the at least one block and coded using a lossy coding mode.
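The mode-dependent dispatch just described can be sketched as below. The dictionary layout, block names, and the single shared assigned QP are illustrative assumptions; the point is only the three-way rule: filter IPCM blocks with the assigned QP, skip lossless blocks themselves but filter their lossy neighbors with the assigned QP, and filter ordinary lossy blocks with their own QP.

```python
def deblock_plan(blocks, assigned_qp):
    """Decide, per block, whether to deblock and with which QP.

    blocks: dict mapping a block name to {"mode": "ipcm" | "lossless" | "lossy",
            optional "qp" for lossy blocks, optional "neighbors" list}.
    """
    plan = {}
    for name, info in blocks.items():
        mode = info["mode"]
        if mode == "ipcm":
            # IPCM blocks are filtered using the assigned non-zero QP.
            plan[name] = ("filter", assigned_qp)
        elif mode == "lossless":
            # The lossless block itself is not filtered...
            plan[name] = ("skip", None)
            # ...but its lossy neighbors are, using the assigned non-zero QP.
            for nb in info.get("neighbors", []):
                if blocks[nb]["mode"] == "lossy":
                    plan[nb] = ("filter", assigned_qp)
        else:  # lossy
            # setdefault so an assignment made by a lossless neighbor wins.
            plan.setdefault(name, ("filter", info["qp"]))
    return plan
```

Usage: with a lossless block A adjacent to lossy block B, the plan skips A, filters B with the assigned QP, and filters an IPCM block C with the assigned QP as well.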
In some examples, to perform deblocking filtering on each of at least one block and an adjacent block based on an assigned non-zero QP value, video encoder 20 may be configured to select a filter for deblocking filtering based on the assigned non-zero QP value. For example, video encoder 20 may be configured to select a filter using the assigned non-zero QP value such that the filter includes one or more filtering parameters or characteristics that define the manner in which deblocking filtering is performed using the filter. In other examples, to perform deblocking filtering on each of the at least one block and the adjacent block based on the assigned non-zero QP value, video encoder 20 may be configured to determine a filter strength for deblocking filtering based on the assigned non-zero QP value, as described above with reference to deblocking decisions.
In some examples, video encoder 20 may be configured to enable deblocking filtering for one or more of a plurality of blocks of video data prior to performing deblocking filtering for the one or more of the plurality of blocks of video data based on a coding mode used to code at least one block and an assigned non-zero QP value. In other examples, the coding mode may be a lossless coding mode. In these examples, video encoder 20 may be further configured to disable deblocking filtering for at least one block. In these examples, disabling deblocking filtering for at least one block may include not performing deblocking filtering on an inner boundary edge of the at least one block.
In some examples, to assign a non-zero QP value for at least one block, video encoder 20 may be configured to determine the assigned non-zero QP value based on one or more of: (1) a signaled QP value for the at least one block (e.g., wherein the signaled QP value indicates an assigned non-zero QP value); (2) a predicted QP value for the at least one block (e.g., determined using QP values for each of one or more neighboring blocks of video data); and (3) a signaled dQP value for the at least one block (e.g., where the dQP value represents a difference between an assigned non-zero QP value and a predicted QP value). As one example, each of the signaled QP and dQP values may be determined by video encoder 20, where appropriate, and signaled in the bitstream to video decoder 30. As another example, the predicted QP value may be determined by video encoder 20.
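The three sources of the assigned QP value enumerated above might be combined as in the following sketch. The precedence order and the averaging predictor are illustrative assumptions, not the disclosure's specific derivation:

```python
def assigned_qp(signaled_qp=None, predicted_qp=None, signaled_dqp=None):
    """Resolve the non-zero QP assigned to a block from whichever values are
    available: (1) an explicitly signaled QP, (2) a predicted QP plus a
    signaled delta (dQP), or (3) the predicted QP alone."""
    if signaled_qp is not None:
        return signaled_qp
    if predicted_qp is not None and signaled_dqp is not None:
        return predicted_qp + signaled_dqp
    if predicted_qp is not None:
        return predicted_qp
    raise ValueError("no QP information available for this block")

def predict_qp(neighbor_qps):
    """Hypothetical predictor: average the QPs of available neighboring blocks."""
    return round(sum(neighbor_qps) / len(neighbor_qps))
```

Signaling only the dQP (case 2) is cheaper than signaling a full QP whenever the predictor is close, which is the usual motivation for delta-QP coding.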
In other examples, in the case that the coding mode used to code the at least one block is an IPCM coding mode, to assign a non-zero QP value for the at least one block, video encoder 20 may be configured to perform the following steps. For example, when the size of the at least one block is less than the minimum CU quantization group size, video encoder 20 may be configured to set the assigned non-zero QP value to a group QP value for the quantization group that includes the at least one block. In these examples, the quantization group may also include one or more blocks of video data coded using lossy coding modes.
As described above, in some examples, each of the blocks of video data included in a quantization group may have the same group QP value. In these examples, video encoder 20 may be configured to set the assigned non-zero QP value to this common group QP value. However, in other examples, only some blocks of video data (e.g., blocks starting from the first block of the quantization group for which a QP value is signaled as a dQP value) may have the same group QP value. In these examples, video encoder 20 may be configured to set the assigned non-zero QP value to this particular group QP value common to only a subset of the blocks of the quantization group.
Moreover, when the size of the at least one block is greater than or equal to the minimum CU quantization group size, video encoder 20 may be configured to set the assigned non-zero QP value to a QP value of a neighboring block of the plurality of blocks of video data. For example, the neighboring block may be one or more of a block located adjacent to the at least one block and a previously coded block.
In other examples, in a case that the coding mode used to code the at least one block is the IPCM coding mode, to assign a non-zero QP value for the at least one block, video encoder 20 may be configured to set the assigned non-zero QP value to a QP value of a neighboring block in the plurality of blocks of video data when the size of the at least one block is less than the minimum CU quantization group size. In these examples, the neighboring block may be one or more of a block located adjacent to the at least one block and a previously coded block. For example, when the at least one block is a so-called "edge" block (i.e., a block of video data located adjacent to a boundary of a frame of video data that includes the block), a block located adjacent to the at least one block may not exist. In these cases, the neighboring block may be a previously coded block, i.e., a block of video data that occurs prior to the at least one block in the coding order associated with the frame of video data that includes the at least one block and the previously coded block.
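One of the size-dependent IPCM assignment rules discussed above (group QP for blocks smaller than the minimum CU quantization group size, a neighboring block's QP otherwise) might be sketched as follows; the parameter names are illustrative, not terms from the disclosure:

```python
def ipcm_block_qp(block_size, min_qg_size, group_qp, neighbor_qp):
    """Assign a non-zero QP to an IPCM block for deblocking purposes.

    A block smaller than the minimum CU quantization group size belongs to a
    quantization group and inherits that group's QP; a block at or above that
    size takes the QP of a neighboring (or previously coded) block instead.
    """
    if block_size < min_qg_size:
        return group_qp
    return neighbor_qp
```

The alternative embodiment, in which small IPCM blocks also take a neighboring block's QP, would simply return `neighbor_qp` in both branches.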
In some examples, in a case that the coding mode used to code the at least one block is a lossless coding mode, to assign a non-zero QP value for the at least one block, video encoder 20 may be configured to set the assigned non-zero QP value to one of a QP value and a dQP value for a lossy block in the plurality of blocks of video data. In these examples, the dQP value may represent the difference between the QP value for the lossy block and a predicted QP value. Also, in these examples, the lossy block may be a block coded using a lossy coding mode, such as a coding mode that includes performing the prediction, summation, transform, and quantization steps described above, or similar steps.
In other examples, in the case that the coding mode used to code the at least one block is a lossless coding mode, to assign a non-zero QP value for the at least one block, video encoder 20 may be configured to set the assigned non-zero QP value to a constant value.
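The lossless-mode options above (derive the QP from a lossy neighbor, directly or via predicted QP + dQP, or fall back to a constant) might be combined as in this sketch. The fallback constant of 26 is an illustrative assumption, not a value from the disclosure:

```python
DEFAULT_LOSSLESS_DEBLOCK_QP = 26  # illustrative fallback, not specified by the disclosure

def lossless_neighbor_qp(lossy_qp=None, lossy_dqp=None, predicted_qp=None,
                         constant=DEFAULT_LOSSLESS_DEBLOCK_QP):
    """QP used when deblocking the lossy neighbors of a losslessly coded block:
    taken from the lossy block's QP directly, reconstructed as
    predicted QP + dQP, or set to a constant when neither is available."""
    if lossy_qp is not None:
        return lossy_qp
    if lossy_dqp is not None and predicted_qp is not None:
        return predicted_qp + lossy_dqp
    return constant
```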
In some examples, the coding may be encoding. In these examples, to encode the at least one block, video encoder 20 may be configured to signal one of residual unquantized video data and reconstructed video data for the at least one block in the bitstream. Also, in these examples, to assign a non-zero QP value for at least one block, video encoder 20 may be configured to perform one of: the assigned non-zero QP value is signaled in the bitstream, and the dQP value for at least one block is signaled in the bitstream. For example, the dQP value may represent a difference between an assigned non-zero QP value and a predicted QP value for at least one block. In these examples, video encoder 20 may be further configured to signal one or more syntax elements in the bitstream. For example, the one or more syntax elements may indicate that deblocking filtering is enabled for one or more of a plurality of blocks of video data.
In the above example, the one or more syntax elements may be referred to as a "first" one or more syntax elements, particularly if the coding mode used to code the at least one block is a lossless coding mode. In these examples, video encoder 20 may be further configured to signal a second one or more syntax elements in the bitstream. For example, the second one or more syntax elements may indicate that deblocking filtering is disabled for the at least one block.
Thus, as explained above, the techniques of this disclosure may enable video encoder 20 to improve the visual quality of one or more blocks of video data when encoding the one or more blocks, as compared to other techniques. In particular, the described techniques may improve the visual quality of one or more of the IPCM coding blocks consisting of reconstructed video data by enabling deblocking filtering for the blocks and performing deblocking filtering in a particular manner. In addition, the techniques may improve the visual quality of one or more lossless coding blocks that include original video data by disabling deblocking filtering for the blocks. Moreover, the techniques may also improve the visual quality of lossy coding blocks by performing deblocking filtering in a particular manner on one or more blocks coded using lossy coding techniques (e.g., blocks located adjacent to one or more IPCM or lossless coding blocks). As a result, there may be a relative improvement in the visual quality of one or more blocks of video data, including IPCM, lossless, and lossy coding blocks, when using the techniques of this disclosure.
In this manner, video encoder 20 represents an example of a video coder configured to code a plurality of blocks of video data, where the video coder is configured to code at least one block of the plurality of blocks of video data using a coding mode that is one of an IPCM coding mode and a lossless coding mode that uses prediction. Also, in this example, the video coder is further configured to assign a non-zero QP value for the at least one block coded using the coding mode, and perform deblocking filtering on one or more of the plurality of blocks of video data based on the coding mode used to code the at least one block and the assigned non-zero QP value for the at least one block.
Fig. 3 is a block diagram illustrating an example of a video decoder that may perform techniques for IPCM and lossless coding mode deblocking consistent with the techniques of this disclosure. In the example of fig. 3, video decoder 30 includes entropy decoding unit 80, IPCM decoding unit 98A, lossless decoding unit 98B, prediction module 82, inverse quantization unit 88, inverse transform module 90, summer 92, deblocking filter 94, and reference picture memory 96. Prediction module 82 includes a motion compensation unit 84 and an intra-prediction module 86. In some examples, video decoder 30 may perform a decoding pass that is substantially reciprocal to the encoding pass described with respect to video encoder 20 of fig. 2.
In the decoding process, video decoder 30 receives an encoded video bitstream representing video blocks of an encoded video slice and associated syntax elements from video encoder 20. When the video block represented in the bitstream includes compressed video data, entropy decoding unit 80 of video decoder 30 entropy decodes the bitstream to generate quantized coefficients, motion vectors, and other syntax elements. Entropy decoding unit 80 forwards the motion vectors and other syntax elements to prediction module 82. Video decoder 30 may receive syntax elements at the video slice level and/or the video block level.
When a video slice is coded as an intra-coded (I) slice, intra-prediction module 86 of prediction module 82 may generate prediction data for a video block of the current video slice based on the signaled intra-prediction mode and data from previously decoded blocks of the current frame or picture. When the video frame is coded as an inter-coded (i.e., B, P or GPB) slice, motion compensation unit 84 of prediction module 82 generates predictive blocks for the video blocks of the current video slice based on the motion vectors and other syntax elements received from entropy decoding unit 80. The predictive block may be generated from one of the reference pictures within one of the reference picture lists. Video decoder 30 may construct reference frame lists, list 0 and list 1, using default construction techniques based on the reference pictures stored in reference picture memory 96.
Motion compensation unit 84 determines prediction information for video blocks of the current video slice by parsing motion vectors and other syntax elements and uses the prediction information to generate predictive blocks for the current video block being decoded. For example, motion compensation unit 84 uses some received syntax elements to determine construction information for one or more of a prediction mode (e.g., intra-prediction or inter-prediction) used to code video blocks of a video slice, an inter-prediction slice type (e.g., a B slice, a P slice, or a GPB slice), a reference picture list for a slice, a motion vector for each inter-coded video block of a slice, an inter-prediction state for each inter-coded video block of a slice, and other information used to decode video blocks in a current video slice.
Motion compensation unit 84 may also perform interpolation based on the interpolation filters. Motion compensation unit 84 may use interpolation filters as used by video encoder 20 during encoding of video blocks to calculate interpolated values for sub-integer pixels of the reference block. Motion compensation unit 84 may determine the interpolation filters used by video encoder 20 from the received syntax elements and use the interpolation filters to generate predictive blocks.
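The idea of interpolating sub-integer pixel values can be sketched with a 2-tap bilinear half-pel interpolator; this is a deliberately simplified stand-in for the longer separable filters actual codecs use, not the filters video encoder 20 signals:

```python
import numpy as np

def half_pel_interpolate(row):
    """Bilinear half-pel interpolation along one row of integer-position pixels:
    insert the average of each adjacent pair between the original samples,
    doubling the sampling density of the row."""
    row = np.asarray(row, dtype=np.float64)
    halves = (row[:-1] + row[1:]) / 2.0          # half-pel positions
    out = np.empty(row.size + halves.size)
    out[0::2] = row                               # original full-pel samples
    out[1::2] = halves                            # interpolated half-pel samples
    return out
```

The decoder must use the same interpolation filter as the encoder did, which is why motion compensation unit 84 determines the filter from received syntax elements rather than choosing its own.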
Inverse quantization unit 88 inverse quantizes (i.e., dequantizes) the quantized transform coefficients provided in the bitstream and decoded by entropy decoding unit 80. The inverse quantization process may include using a Quantization Parameter (QP) for each video block in the video slice calculated by video encoder 20 to determine the degree of quantization and, likewise, the degree of inverse quantization that should be applied. The inverse transform module 90 applies an inverse transform (e.g., an inverse DCT, an inverse integer transform, or a conceptually similar inverse transform process) to the transform coefficients in order to generate a residual block in the pixel domain.
After motion compensation unit 84 generates the predictive block for the current video block based on the motion vector and other syntax elements, video decoder 30 forms a decoded video block by summing the residual block from inverse transform module 90 and the corresponding predictive block generated by motion compensation unit 84. Summer 92 represents one or more components that perform this summation operation. Deblocking filter 94 filters the decoded blocks in order to remove blockiness artifacts. The decoded video blocks in a given frame or picture are then stored in reference picture memory 96, which stores reference pictures used for subsequent motion compensation. Reference picture memory 96 also stores decoded video for later presentation on a display device (e.g., display device 28 of fig. 1).
As already described above with reference to fig. 1, video decoder 30 also includes IPCM decoding unit 98A and lossless decoding unit 98B, which may enable video decoder 30 to perform the IPCM and lossless coding techniques attributed to video decoder 30 in this disclosure.
As one example, video decoder 30 may be configured to decode one or more blocks of video data during a video coding process. For example, video decoder 30 may be configured to decode a plurality of blocks of video data, wherein at least one block of the plurality of blocks of video data is encoded (i.e., by video encoder 20) using a coding mode that is one of an IPCM coding mode and a lossless coding mode. As previously explained with reference to fig. 1 and 2, in some examples, the lossless coding mode may include performing prediction on at least one block to code the block (e.g., along with summing to generate residual data for the at least one block). However, in other examples, a lossless coding mode may be used to code at least one block without performing prediction (e.g., as raw, or "unprocessed" video data).
As one example, as previously described, at least one block of the plurality of blocks of video data encoded using the IPCM coding mode may correspond to at least one block that includes reconstructed video data. For example, reconstructed video data may be generated by video encoder 20 by performing the prediction, summation, transform, and quantization steps described above with reference to video encoder 20 of fig. 1 and 2 using blocks of original video data. By performing the above steps, video encoder 20 may generate blocks of quantized and transformed residual coefficients. Subsequently, video encoder 20 may be configured to perform inverse quantization, inverse transform, prediction, and summation (also described above) on the quantized and transformed residual coefficients to generate blocks of reconstructed video data. Alternatively, as also previously described, the at least one block encoded using the lossless coding mode may correspond to at least one block that includes original video data or residual unquantized video data.
In any case, video decoder 30 may be further configured to assign a non-zero QP value for at least one block encoded using the coding mode. As previously described, video decoder 30 may be configured to assign a non-zero QP value for at least one block using, for example, a predicted QP value for the at least one block, which may be determined using the QP of each of one or more neighboring blocks of video data. Video decoder 30 may also be configured to perform deblocking filtering on one or more of a plurality of blocks of video data based on a coding mode used to code the at least one block and a non-zero QP value assigned to the at least one block.
In some examples, to perform deblocking filtering on one or more of a plurality of blocks of video data based on a coding mode used to code at least one block and an assigned non-zero QP value, video decoder 30 may be configured to perform the following steps. For example, if the coding mode used to code the at least one block is an IPCM coding mode, video decoder 30 may be configured to perform deblocking filtering on the at least one block based on the assigned non-zero QP value. Moreover, if the coding mode used to code the at least one block is a lossless coding mode, video decoder 30 may be configured to perform deblocking filtering on adjacent blocks of the plurality of blocks of video data based on the assigned non-zero QP value. In this example, the neighboring block may be located adjacent to the at least one block and coded using a lossy coding mode.
In some examples, to perform deblocking filtering on each of at least one block and an adjacent block based on an assigned non-zero QP value, video decoder 30 may be configured to select a filter for deblocking filtering based on the assigned non-zero QP value. For example, video decoder 30 may be configured to select a filter using the assigned non-zero QP value such that the filter includes one or more filtering parameters or characteristics that define the manner in which deblocking filtering is performed using the filter. In other examples, to perform deblocking filtering on each of at least one block and an adjacent block based on an assigned non-zero QP value, video decoder 30 may be configured to determine a filter strength for deblocking filtering based on the assigned non-zero QP value, as described above with reference to deblocking decisions.
In some examples, video decoder 30 may be configured to enable deblocking filtering for one or more of a plurality of blocks of video data prior to performing deblocking filtering for the one or more of the plurality of blocks of video data based on a coding mode used to code at least one block and an assigned non-zero QP value. In other examples, the coding mode may be a lossless coding mode. In these examples, video decoder 30 may be further configured to disable deblocking filtering for at least one block. In these examples, disabling deblocking filtering for at least one block may include not performing deblocking filtering on an inner boundary edge of the at least one block.
In some examples, to assign a non-zero QP value for at least one block, video decoder 30 may be configured to determine the assigned non-zero QP value based on one or more of: (1) a signaled QP value for the at least one block (e.g., wherein the signaled QP value indicates an assigned non-zero QP value); (2) a predicted QP value for the at least one block (e.g., determined using QP values for each of one or more neighboring blocks of video data); and (3) a signaled dQP value for the at least one block (e.g., where the dQP value represents a difference between an assigned non-zero QP value and a predicted QP value). As one example, each of the signaled QP and dQP values may be received by video decoder 30 in a bitstream from video encoder 20, where appropriate. As another example, the predicted QP value may be determined by video decoder 30.
In other examples, in the case that the coding mode used to code the at least one block is an IPCM coding mode, to assign a non-zero QP value for the at least one block, video decoder 30 may be configured to perform the following steps. For example, when the size of the at least one block is less than the minimum CU quantization group size, video decoder 30 may set the assigned non-zero QP value to a group QP value for the quantization group that includes the at least one block. In these examples, the quantization group may also include one or more blocks of video data coded using lossy coding modes. As described above, in some examples, each of the blocks of video data included in a quantization group may have the same group QP value. In these examples, video decoder 30 may be configured to set the assigned non-zero QP value to this common group QP value. However, in other examples, only some blocks of video data (e.g., blocks starting from the first block of the quantization group for which a QP value is signaled as a dQP value) may have the same group QP value. In these examples, video decoder 30 may be configured to set the assigned non-zero QP value to this particular group QP value common to only a subset of the blocks of the quantization group. Moreover, when the size of the at least one block is greater than or equal to the minimum CU quantization group size, video decoder 30 may be configured to set the assigned non-zero QP value to a QP value of a neighboring block of the plurality of blocks of video data. For example, the neighboring block may be one or more of a block located adjacent to the at least one block and a previously coded block.
In other examples, where the coding mode used to code the at least one block is an IPCM coding mode, to assign a non-zero QP value for the at least one block, video decoder 30 may be configured to set the assigned non-zero QP value to a QP value of a neighboring block in the plurality of blocks of video data when the size of the at least one block is less than the minimum CU quantization group size. In these examples, the neighboring block may be one or more of a block located adjacent to the at least one block and a previously coded block. For example, when the at least one block is a so-called "edge" block (i.e., a block of video data located adjacent to a boundary of a frame of video data that includes the block), a block located adjacent to the at least one block may not exist. In these cases, the neighboring block may be a previously coded block, i.e., a block of video data that occurs prior to the at least one block in the coding order associated with the frame of video data that includes the at least one block and the previously coded block.
In some examples, in a case that the coding mode used to code the at least one block is a lossless coding mode, to assign a non-zero QP value for the at least one block, video decoder 30 may be configured to set the assigned non-zero QP value to one of a QP value and a dQP value for a lossy block in the plurality of blocks of video data. In a similar manner as described above, in these examples, the dQP value may represent the difference between the QP value for the lossy block and a predicted QP value. Also, in these examples, the lossy block may be a block coded using a lossy coding mode, such as a coding mode that includes performing the prediction, summation, transform, and quantization steps described above, or similar steps.
In other examples, in order to assign a non-zero QP value for at least one block, video decoder 30 may be configured to set a constant value to the assigned non-zero QP value in the case that the coding mode used to code the at least one block is a lossless coding mode.
In some examples, the coding may be decoding. In these examples, to decode the at least one block, video decoder 30 may be configured to receive one of residual unquantized video data and reconstructed video data for the at least one block in a received bitstream. Also, in these examples, to assign a non-zero QP value for at least one block, video decoder 30 may be configured to perform one of: receiving the assigned non-zero QP value in the received bitstream, and receiving a dQP value for at least one block in the received bitstream. For example, the dQP value may represent a difference between an assigned non-zero QP value and a predicted QP value for at least one block. In examples where video decoder 30 is configured to receive a dQP value for at least one block, video decoder 30 may be further configured to determine an assigned non-zero QP value based on the dQP value and a predicted QP value. Video decoder 30 may be further configured to receive one or more syntax elements in the received bitstream. For example, the one or more syntax elements may indicate that deblocking filtering is enabled for one or more of a plurality of blocks of video data.
In the above example, the one or more syntax elements may be referred to as a "first" one or more syntax elements, particularly if the coding mode used to code the at least one block is a lossless coding mode. In these examples, video decoder 30 may be further configured to receive a "second" one or more syntax elements in the received bitstream. For example, the second one or more syntax elements may indicate that deblocking filtering is disabled for the at least one block.
Thus, as explained above, the techniques of this disclosure may enable video decoder 30 to improve the visual quality of one or more blocks of video data when coding the one or more blocks, as compared to other techniques. In particular, the described techniques may improve the visual quality of one or more IPCM coded blocks that include reconstructed video data by enabling deblocking filtering for the blocks and performing the deblocking filtering in a particular manner. In addition, the techniques may improve the visual quality of one or more losslessly coded blocks that include original video data by disabling deblocking filtering for the blocks. Moreover, the techniques may also improve the visual quality of lossy coded blocks by performing deblocking filtering in a particular manner on one or more blocks coded using a lossy coding mode (e.g., one or more blocks located adjacent to one or more IPCM or losslessly coded blocks). As a result, the visual quality of one or more blocks of video data, including IPCM, lossless, and lossy coded blocks, may be relatively improved when using the techniques of this disclosure.
In this manner, video decoder 30 represents an example of a video coder configured to code a plurality of blocks of video data, where the video coder is configured to code at least one block of the plurality of blocks of video data using a coding mode that is one of an IPCM coding mode and a lossless coding mode that uses prediction. Also, in this example, the video coder is further configured to assign a non-zero QP value for the at least one block coded using the coding mode, and perform deblocking filtering on one or more of the plurality of blocks of video data based on the coding mode used to code the at least one block and the assigned non-zero QP value for the at least one block.
Fig. 4 is a conceptual diagram illustrating an example of deblocking filtering performed on boundaries of two adjacent blocks of video data consistent with the techniques of this disclosure. In the example of fig. 4, block 404 may be a currently coded (or "current") block of video data that includes a left edge to be deblock filtered or "deblocked" along with a corresponding right edge of block 402. In this example, block 402 is an adjacent block of video data located adjacent to block 404 (to the left of block 404 in this example). For example, block 402 may be a previously coded block of video data that was coded prior to block 404. In other examples, an upper edge of block 404 (not shown in the figure) may be deblocked along with respective lower edges of adjacent blocks of video data located above block 404 (also not shown in the figure).
In some HEVC test model versions (e.g., version 4, or "HM4"), an eight-sample edge may be deblocked by a particular deblocking filter. As illustrated in FIG. 4, the deblocking edge region of blocks 402 and 404 includes, in block 404, four rows of pixel values q0,i to q3,i parallel to edge 400, and, in block 402, four rows of pixel values p0,i to p3,i parallel to edge 400, where "i" indicates a row of pixels perpendicular to edge 400. Each of the parallel rows of pixel values includes eight pixel values, e.g., q0,0 to q0,7. In the case of a horizontal edge (not shown in the figure), such as the upper edge of the current block, the naming and numbering may be the same as that of the vertical edge illustrated in FIG. 4 (i.e., edge 400). Furthermore, the pixel values p or q may be pre-deblocking-filter values (i.e., reconstructed pixel values) or deblocking-filtered values.
In some versions of HM, such as HM4, deblocking filters (e.g., deblocking filter 64 of video encoder 20 or deblocking filter 94 of video decoder 30) may filter particular TU and PU edges of a block based on results from boundary strength calculations and deblocking decisions. The deblocking decisions may include whether the deblocking filter is on or off, whether the deblocking filter is weak or strong, and the strength of the weak filter for a given block. The boundary strength calculation and deblocking decisions, described in more detail below with reference to fig. 6, depend on the thresholds tc and β.
In some versions of the HM, the deblocking filter thresholds tc and β may depend on a parameter Q, which is derived from a QP value and a boundary strength ("Bs") for a current block of video data using the following expressions.
If Bs > 2, then TcOffset = 2
If Bs ≤ 2, then TcOffset = 0
For tc: Q = Clip3(0, MAX_QP + 4, QP + TcOffset); MAX_QP = 51
For β: Q = Clip3(0, MAX_QP, QP)
Clip3(first threshold, second threshold, value) = min(second threshold, max(first threshold, value))
The threshold values tc and β may be stored in a table that is accessible based on the parameter Q, which is derived from the QP value for the current block, as described above.
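As an illustration of the parameter derivation above, a minimal Python sketch follows (the lookup tables mapping Q to tc and β are omitted; function names are illustrative, not taken from the HEVC drafts):

```python
def clip3(lo, hi, v):
    # Clip3(x, y, z): clamp z into the range [x, y]
    return min(hi, max(lo, v))

MAX_QP = 51

def q_for_tc(qp, bs):
    # TcOffset is 2 when Bs > 2, otherwise 0
    tc_offset = 2 if bs > 2 else 0
    return clip3(0, MAX_QP + 4, qp + tc_offset)

def q_for_beta(qp):
    return clip3(0, MAX_QP, qp)
```

The resulting Q values would then index the stored tc and β tables.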
The first deblocking decision is whether deblocking filtering is performed on edge 400 of block 404. To make this "on/off" decision, a video coding device (e.g., video encoder 20 and/or video decoder 30) calculates a degree of activity d1 across edge 400 for the pixel values in the third row (i = 2) perpendicular to edge 400 (i.e., row 406). The video coding device also calculates a degree of activity d2 across edge 400 for the pixel values in the sixth row (i = 5) perpendicular to edge 400 (i.e., row 408). These two activity measures provide an indication of activity near edge 400.
The activity measures are then summed and compared to the threshold β. If the summed activity measure is less than the threshold β, the deblocking filter is turned on and applied to the eight-sample deblocking edge region. That is, if activity across edge 400 is high, the deblocking filter is not necessary, because discontinuities across edge 400 will not be visible. However, if activity across edge 400 is low, the deblocking filter should be applied to smooth discontinuities between blocks 402 and 404 at edge 400. The calculation may be performed according to the following expressions:
d1 = |p2,2 - 2·p1,2 + p0,2| + |q2,2 - 2·q1,2 + q0,2|
d2 = |p2,5 - 2·p1,5 + p0,5| + |q2,5 - 2·q1,5 + q0,5|
d = d1 + d2 < β
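The on/off decision above can be sketched in Python (a hedged illustration: p[k][i] and q[k][i] denote the pixel value in the k-th row parallel to the edge at position i along the edge; the array layout and names are assumptions for the sketch):

```python
def activity(p, q, i):
    # Second-difference activity across the edge at position i
    return (abs(p[2][i] - 2 * p[1][i] + p[0][i])
            + abs(q[2][i] - 2 * q[1][i] + q[0][i]))

def deblock_on(p, q, beta):
    # Sum the activity at positions i = 2 and i = 5 and compare to beta
    d = activity(p, q, 2) + activity(p, q, 5)
    return d < beta
```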
the second deblocking decision includes determining whether the deblocking filter is a strong filter or a weak filter. The decision of whether the deblocking filter is strong or weak may include three distinct determinations, including texture/activity determination, gradient determination, and discontinuity determination across the edge 400. In some versions of HM, such as HM4, each of the three determinations must be performed for each row (i ═ 0.., 7) of pixel values that are perpendicular to edge 400. The three determinations may be performed according to the following expressions:
d < (β >> 2);
(|p3,i - p0,i| + |q0,i - q3,i|) < (β >> 3); and
|p0,i - q0,i| < ((5·tc + 1) >> 1).
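A sketch of the per-row strong/weak determination, under the same hypothetical array layout as in the sketch above (all three conditions must hold for the row to select the strong filter):

```python
def use_strong_filter(p, q, i, d, beta, tc):
    # Texture/activity, gradient, and discontinuity checks for row i
    cond1 = d < (beta >> 2)
    cond2 = (abs(p[3][i] - p[0][i]) + abs(q[0][i] - q[3][i])) < (beta >> 3)
    cond3 = abs(p[0][i] - q[0][i]) < ((5 * tc + 1) >> 1)
    return cond1 and cond2 and cond3
```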
The third deblocking decision may include determining a strength of a weak filter when the deblocking filter is a weak filter. In some versions of HM (e.g., HM4), a weak filter applied to edge 400 of block 404 may correct one or two samples on each side of edge 400. In some cases, the weak filter may be applied asymmetrically to correct only one sample on one side of the edge 400 and to correct two samples on the other side of the edge 400.
In some versions of HM (e.g., HM4), the weak filter corrects all p0 and q0 samples to the right and left of edge 400 based on a weak filter strength calculation according to the following equations.
Δ = (9·(q0 - p0) - 3·(q1 - p1) + 8) / 16
Δ = Clip(-tc, tc, Δ); tc is a threshold value that depends on the QP value
p0′ = p0 + Δ
q0′ = q0 - Δ
The weak filter optionally corrects all p1 samples in the second row parallel to edge 400 in neighboring block 402 according to the following equations.
Δp = Clip(-tc/2, tc/2, (((p2 + p0 + 1)/2) - p1 + Δ)/2)
p1′ = p1 + Δp; deblocking of p1 depends on decision conditions
Similarly, the weak filter optionally corrects all q1 samples in the second row parallel to edge 400 in current block 404 according to the following equation.
Δq = Clip(-tc/2, tc/2, (((q2 + q0 + 1)/2) - q1 - Δ)/2)
q1′ = q1 + Δq; deblocking of q1 depends on decision conditions
The pixel values p or q may be pre-deblocking-filter values (i.e., reconstructed pixel values) or deblocking-filtered values. The pixel values p′ and q′ represent the resulting pixel values after deblocking filtering is performed on pixel values p and q, respectively. More specifically, the values q0 and q1 indicate pixel values in a first row and a second row parallel to the edge in current block 404. The values p0 and p1 indicate pixel values in a first row and a second row parallel to the edge in adjacent block 402. The differences q0 - p0 and q1 - p1 indicate the step-like discontinuities between pixel values across edge 400.
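The weak filter equations above can be sketched in Python (floor division stands in for the arithmetic shifts of the working drafts; whether the Δp and Δq corrections are actually applied depends on the decision conditions noted above):

```python
def clip(lo, hi, v):
    return min(hi, max(lo, v))

def weak_filter(p0, p1, p2, q0, q1, q2, tc):
    # Delta for the p0/q0 samples nearest the edge
    delta = (9 * (q0 - p0) - 3 * (q1 - p1) + 8) // 16
    delta = clip(-tc, tc, delta)
    p0_f = p0 + delta
    q0_f = q0 - delta
    # Optional corrections for the second-nearest samples p1 and q1
    dp = clip(-tc // 2, tc // 2, (((p2 + p0 + 1) // 2) - p1 + delta) // 2)
    dq = clip(-tc // 2, tc // 2, (((q2 + q0 + 1) // 2) - q1 - delta) // 2)
    return p0_f, q0_f, p1 + dp, q1 + dq
```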
Fig. 5 is a conceptual diagram illustrating an example of signaling a dQP value for each of one or more blocks of video data consistent with the techniques of this disclosure. Some draft versions of HEVC (e.g., WD6) support LCU-level and sub-LCU-level dQP techniques. For example, some sub-LCU-level dQP methods allow dQP signaling for blocks of video data (i.e., CUs) that are smaller than the LCU size. The purpose of this is to allow finer-granularity visual quality control. According to some techniques, a "QpMinCuSize" parameter may be defined as the minimum CU quantization group size over which a dQP may be signaled. For blocks smaller than the minimum CU quantization group size, all leaf CUs (i.e., blocks of video data) within the quantization group having the minimum CU size may share the same dQP value. Alternatively, according to other techniques, for blocks smaller than the minimum CU quantization group size, only some of the leaf CUs or blocks within the quantization group having the minimum CU size may share the same dQP value. For example, only leaf CUs or blocks of video data starting from the first leaf CU or block for which a dQP value is first signaled for the quantization group may share that dQP value. In any case, for blocks greater than or equal to the minimum CU quantization group size, a dQP value may be signaled for each leaf CU of the LCU quadtree (i.e., each block of video data). A dQP value may be signaled only when a block includes at least one non-zero coefficient, i.e., when the coded block flag ("CBF") syntax element of the block is equal to true or "1". Video decoder 30 may add the signaled dQP value for a block to a QP value predicted from neighboring blocks of video data to generate the QP value for the current block. The neighboring block may be a neighboring block of video data located to the left of the current block, or the previous block of video data closest in coding order to the current block.
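The decoder-side QP reconstruction described above can be sketched as follows (a hedged illustration; the function signatures and group-size handling are assumptions for the sketch, not syntax from the drafts):

```python
def reconstruct_qp(predicted_qp, dqp):
    # Current-block QP = QP predicted from a neighboring block + signaled dQP
    return predicted_qp + dqp

def qp_for_block(block_size, min_group_size, predicted_qp, own_dqp, group_dqp):
    # Blocks at or above the minimum CU quantization group size carry their own
    # dQP; smaller blocks share the dQP of their quantization group
    dqp = own_dqp if block_size >= min_group_size else group_dqp
    return reconstruct_qp(predicted_qp, dqp)
```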
As shown in fig. 5, "LCU0" 500 includes a single block of video data, i.e., "block 0," whose size is larger than the minimum CU quantization group size 502, which may be indicated using the syntax element "QpMinCuSize." As also shown in fig. 5, LCU0 500 is not split into any leaf CUs, such that the quantization group "Q0" associated with LCU0 500 includes only block 0. In the example of fig. 5, block 0 includes at least one non-zero coefficient. In this example, block 0 may be signaled as bitstream 504 for LCU0 500. Also, in this example, bitstream 504 includes coding mode ("M0"), dQP value ("D0"), and coefficient ("C0") components for block 0.
As further illustrated in fig. 5, "LCU1" 506 is split into multiple blocks or CUs of video data according to an LCU quadtree. For example, "block 1" and "block 10" of LCU1 506 each have a size equal to the minimum CU quantization group size 502. On the other hand, "blocks 2-9" each have a size that is less than the minimum CU quantization group size 502. In general, all leaf CUs (i.e., blocks) of video data within the minimum CU quantization group size may share the same QP and dQP values. For example, as shown in fig. 5, quantization group "Q1" includes only block 1, and quantization group "Q4" includes only block 10. However, quantization group "Q2" includes blocks 2-5, and as a result, each of blocks 2-5 has the same QP value. Similarly, quantization group "Q3" includes blocks 6-9, and as a result, each of blocks 6-9 has the same QP value.
As also shown in fig. 5, block 1 includes at least one non-zero coefficient and may be signaled as part of a bitstream 508 for LCU1 506, the portion of bitstream 508 corresponding to Q1 including coding mode ("M1"), dQP value ("D1"), and coefficient ("C1") components for block 1. In the example of fig. 5, block 10 is in "skip mode" or includes all zero-valued coefficients, and may be signaled as part of bitstream 508 for LCU1 506, the portion of bitstream 508 corresponding to Q4 including only the coding mode ("M10") component for block 10. In the same example, each of blocks 2-5 in quantization group Q2 is in skip mode or includes all zero-valued coefficients, and may be signaled as part of bitstream 508 for LCU1 506, the portion of bitstream 508 corresponding to Q2 including only the coding mode ("M2"-"M5") components for blocks 2-5. In this example, each of blocks 6 and 9 in quantization group Q3 is in skip mode or includes all zero-valued coefficients, and each of blocks 7 and 8 in quantization group Q3 includes at least one non-zero coefficient. Blocks 6-9 may be signaled as part of bitstream 508 for LCU1 506, the portion of bitstream 508 corresponding to Q3 including only the coding mode ("M6" and "M9") components for blocks 6 and 9, and including coding mode ("M7" and "M8"), dQP value ("D3"), and coefficient ("C7" and "C8") components for blocks 7 and 8.
Fig. 6 is a flow diagram illustrating an example method of calculating a boundary strength value for a deblocking filter consistent with the techniques of this disclosure. As illustrated in fig. 6, the boundary strength calculation 600 may be based on a coding mode (e.g., an "intra" or "inter" coding mode) of a current block of video data (e.g., block 404 of fig. 4) and an adjacent block of video data (e.g., block 402 of fig. 4) and whether pixel values in a deblock edge region (i.e., a region of the block along a shared edge (e.g., edge 400 of fig. 4) that is being deblock filtered or "deblocked") include non-zero coefficients.
More specifically, performing the boundary strength calculation may include determining whether one of the current block and the adjacent block having the edge to be deblocked is intra coded (602). When one of the current block and the adjacent block is intra coded (602; Y), a CU edge check operation may be performed to determine whether the edge to be deblocked is an outer CU boundary or an inner CU edge (604). If the edge to be deblocked is an outer CU boundary (604; Y), a boundary strength ("Bs") value may be set equal to "4" (610), and if the edge is an inner CU edge (604; N), the Bs value may be set equal to "3" (612). In either case, the Bs value is greater than "2," such that a "TcOffset" value equal to "2" may be applied to the respective QP value (i.e., the QP value for the current block) when identifying the deblocking filter threshold "tc."
When neither the current block nor the neighboring block is intra coded (602; N), a non-zero coefficient check may be performed to determine whether the samples in the deblocking edge region around the edge to be deblocked include non-zero coefficients (606). In the case where the samples include non-zero coefficients (606; Y), the Bs value may be set equal to "2" (614). However, in the case where the samples do not include non-zero coefficients (606; N), an additional check may be performed to determine any differences between the samples in the current block and the neighboring block (608). If the samples in the current block and the neighboring block have some differences (608; Y), the Bs value may be set equal to "1" (616). However, if the samples in the current block and the neighboring block have very small differences (608; N), the Bs value may be set equal to "0" (618). When the Bs value is equal to "0," the deblocking filter is turned off and not applied to the edge of the current block to be deblocked.
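The decision tree of fig. 6 can be summarized in Python (inputs are precomputed booleans; this is an illustration of the flow, not a normative implementation):

```python
def boundary_strength(intra_coded, outer_cu_boundary,
                      nonzero_coeffs, samples_differ):
    # intra_coded: either the current or the adjacent block is intra coded
    if intra_coded:
        return 4 if outer_cu_boundary else 3
    if nonzero_coeffs:
        return 2
    return 1 if samples_differ else 0
```

A Bs value of 0 turns the deblocking filter off for the edge.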
Fig. 7A-7B are conceptual diagrams illustrating examples of IPCM coding mode deblocking consistent with the techniques of this disclosure. Fig. 7A illustrates deblocking filtering performed on a current block 712 of video data coded using an IPCM coding mode (i.e., for which "QP = 0"), as explained above with reference to fig. 1. As shown in fig. 7A, block 712 is deblocked on the right and lower edges shared with lossy (i.e., for which "QP > 0") blocks 706 and 708, respectively. As also shown in fig. 7A, block 712 is not deblocked on the left and upper edges shared with lossy (i.e., for which "QP > 0") blocks 710 and 704. As explained previously, because a zero-valued QP value is associated with block 712, block 712 is deblocked in the manner described above, consistent with the various draft versions of HEVC for IPCM coded blocks.
Fig. 7B also illustrates an exemplary QP inheritance technique used to assign non-zero QP values for IPCM blocks to implement the IPCM coding mode deblocking techniques of this disclosure. The QP inheritance technique described herein may operate in a manner somewhat similar to the dQP method described above with reference to fig. 5.
According to the disclosed techniques, when pcm_loop_filter_disable_flag is equal to false or "0," the loop filtering process is enabled and should be applied to the current IPCM block. To apply the deblocking filter, the disclosed techniques include assigning a non-zero QP value to the IPCM block based on the predicted QP value. For example, video decoder 30 may then apply the deblocking filter to samples of the current IPCM block based on the assigned non-zero QP value for the IPCM block.
As an example, video decoder 30 may implicitly assign a non-zero QP value to an IPCM block based on a known predicted QP value. The predicted QP value may be a QP value for a quantization group that includes the IPCM block, or for a neighboring block of video data located close to the IPCM block. In the example of fig. 7B, when the current IPCM block 716 has a size greater than or equal to the minimum CU quantization group size (e.g., the size of quantization group 720, also shown in fig. 7B), video decoder 30 may set the assigned non-zero QP value ("QP1") for IPCM block 716 equal to the QP value ("QP0") predicted from neighboring block 714, as indicated by the arrows in fig. 7B. In other words, IPCM block or CU1 716 may "inherit" QP0 from the adjacent block or CU0 714 as QP1 of IPCM block or CU1 716. As illustrated in fig. 7B, neighboring block 714 may be a block of video data located to the left of IPCM block 716. Also as shown, neighboring block 714 may be a CU ("CU0") having QP0, as previously described, and a CBF equal to true or "1," where a CBF equal to true or "1" indicates that neighboring block 714 contains non-zero coefficients. In another example, neighboring block 714 may be the closest previous block to IPCM block 716 in coding order. In yet another example, an average QP may be calculated based on multiple neighboring blocks of video data, such as the blocks to the left of (e.g., neighboring block 714 or another block) and above (not shown in the figure) IPCM block 716, and used as the predicted QP value to assign a non-zero QP value for IPCM block 716.
Also, in the example of fig. 7B, when the current IPCM block 726 has a size that is less than the minimum CU quantization group size (e.g., again, the size of quantization group 720), video decoder 30 may set the assigned non-zero QP value ("QP5") for IPCM block 726 equal to the QP value ("QPQG") of the quantization group 720 containing IPCM block 726. As illustrated in FIG. 7B, quantization group 720 includes four blocks, namely blocks 722, 724, 726, and 728, or CU3-CU6, respectively, each smaller than the minimum CU quantization group size, and all of the blocks have the same QP value (i.e., QP3 = QP4 = QP5 = QP6 = QPQG). In other words, IPCM block or CU5 726 may "inherit" QPQG from the quantization group 720 that includes blocks 722, 724, 726, and 728, or CU3-CU6, as QP5 of IPCM block or CU5 726. As previously mentioned, in other examples, only some of CU3-CU6 may share the common QPQG. In these examples, CU5 726 may inherit this QPQG from a subset of quantization group 720 (i.e., from only some of blocks 722, 724, 726, and 728, or CU3-CU6) as QP5 of IPCM block or CU5 726.
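The implicit QP inheritance described for fig. 7B can be sketched as follows (names and signature are hypothetical; this illustrates only the size-based choice between the two inheritance sources):

```python
def assign_ipcm_qp(block_size, min_group_size, neighbor_qp, group_qp):
    # Smaller than the minimum CU quantization group size: inherit the group QP
    # (as in QP5 = QP_QG); otherwise inherit from a neighboring block (QP1 = QP0)
    if block_size < min_group_size:
        return group_qp
    return neighbor_qp
```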
As another example, video encoder 20 may assign a non-zero QP value to an IPCM block based on the predicted QP value and explicitly signal the QP value to video decoder 30. For example, video encoder 20 may signal a dQP value for an IPCM block that represents a difference between the assigned non-zero QP value and the predicted QP value. In this example, video decoder 30 may assign the non-zero QP value to the IPCM block based on the received dQP value for the IPCM block. In this way, video encoder 20 may signal the exact QP value used to encode the samples of the IPCM block. For example, according to the techniques described herein, video encoder 20 may signal the syntax element "cu_qp_delta" in the PU syntax for an IPCM block to indicate the dQP value for the IPCM block with IPCM samples, as illustrated in table 2. Table 3, in turn, illustrates the case of IPCM burst mode operation based on WD6, where multiple cu_qp_delta values are signaled sequentially, one for each IPCM block.
Table 2: PU syntax with added "cu_qp_delta" signaling for IPCM (based on WD4)
Table 3: PU syntax with added "cu_qp_delta" signaling for IPCM burst mode operation (based on WD6)
According to the PU syntax illustrated in table 2, if the current block of video data is indicated as an IPCM block (i.e., "pcm_flag = true"), video decoder 30 may determine whether a dQP value has been signaled for the IPCM block. In this example, if the loop filtering process is enabled (i.e., "pcm_loop_filter_disable_flag = 0"), the dQP method is enabled (i.e., "cu_qp_delta_enabled_flag = 1"), and a dQP value has not yet been coded for the block (i.e., "IsCuQpDeltaCoded = 0"), video decoder 30 may receive the syntax element cu_qp_delta indicating the dQP value for the IPCM block.
In the first example described above, when IPCM block 716 has a size greater than or equal to the minimum CU quantization group size, video encoder 20 may signal a dQP value equal to "0" for IPCM block 716. In this case, video decoder 30 may add the signaled dQP value of "0" to the QP value ("QP0") predicted from neighboring block 714 to determine the QP value ("QP1") for IPCM block 716 (i.e., "QP1 = QP0"). In another example, video encoder 20 may signal a dQP value for IPCM block 716 that is different from "0," and video decoder 30 may determine the QP value ("QP1") for IPCM block 716 by adding the signaled dQP value to the QP value ("QP0") predicted from neighboring block 714 (i.e., "QP1 = QP0 + dQP").
In the second example described above, when IPCM block 726 has a size less than the minimum CU quantization group size, video encoder 20 may signal a dQP value for IPCM block 726 that is equal to the dQP value for the quantization group 720 that includes IPCM block 726. In this case, video decoder 30 may add the signaled dQP value to the QP value ("QP2") predicted from neighboring block 718 to determine the QP value ("QP5") for IPCM block 726 (i.e., "QP5 = QP2 + dQP"). Because the dQP value for IPCM block 726 is the same as the dQP value for all blocks in quantization group 720, video decoder 30 may determine the QP value ("QP5") for IPCM block 726 such that it equals the QP value for the quantization group (i.e., QP3 = QP4 = QP5 = QP6 = QPQG).
In some cases, video encoder 20 may signal only the dQP value for one of the blocks (e.g., one of blocks 722, 724, 726, and 728) in a quantization group (e.g., quantization group 720). The signaled dQP value may be the first coded dQP value, i.e., the dQP value for a block that is not an IPCM block and that includes at least one non-zero coefficient (i.e., for which "CBF = 1"). As an example, a syntax element or flag "IsCuQpDeltaCoded" may be included in the PU syntax to ensure that only the first coded dQP value for a block in a quantization group is signaled to video decoder 30. Video decoder 30 may then set the dQP values for the other blocks in the same quantization group equal to the first coded dQP value.
As described above, some draft versions of HEVC (e.g., WD6) support signaling pcm_loop_filter_disable_flag in the SPS to indicate whether the loop filtering process is enabled for IPCM blocks. In some cases, it may be desirable to indicate whether the loop filtering process is enabled for IPCM blocks with finer granularity. Thus, the techniques of this disclosure further support signaling pcm_loop_filter_disable_flag in any of the PPS, APS, slice header, CU syntax, and PU syntax.
In one example, video encoder 20 may determine whether to apply loop filtering processes, such as deblocking filtering, ALF, and SAO, based on whether the current IPCM block includes original samples or reconstructed samples. As discussed above, original samples are distortion-free and require no in-loop filtering, while reconstructed samples may include some distortion and may benefit from in-loop filtering. In other examples, video encoder 20 may determine whether to apply the loop filtering process to the IPCM block based on other considerations. In accordance with the techniques described herein, video encoder 20 may signal pcm_loop_filter_disable_flag in the PU syntax, as illustrated in table 4 below. In particular, table 4 illustrates the finest available granularity for signaling the loop filtering process.
Table 4: PU syntax with an embedded "pcm_loop_filter_disable_flag" added
As another example, as explained above, some draft versions of HEVC (e.g., WD6) also support a lossless coding mode for CUs or blocks of video data. In some examples, qpprime_y_zero_transquant_bypass_flag signaled in the SPS and equal to "1" may specify that, if the parameter "Qp′Y" (e.g., where Qp′Y = QPY + QpBdOffsetY, where QpBdOffsetY = 6 × bit_depth_luma_minus8) equals "0" for a CU, then the lossless coding process shall be applied. As described above, in lossless coding, the scaling and transform processes and the in-loop filtering processes may be skipped. The lossless coding mode is similar to the case of IPCM blocks containing original samples as described above, with the difference that the prediction method used for lossless coding is not applied to IPCM blocks. For an IPCM block, the Qp′Y value may likewise be equal to "0". If the Qp′Y value is equal to "0" for the IPCM block, then loop filtering (e.g., deblocking, SAO, ALF) may also be disabled on the IPCM samples, as is the case for the lossless coding mode. This has the same effect as signaling pcm_loop_filter_disable_flag equal to true or "1". Thus, if the QP value of the IPCM block is used to control the loop filter behavior, the signaling of pcm_loop_filter_disable_flag may be omitted. Qp′Y equal to "0" is equivalent to pcm_loop_filter_disable_flag equal to true or "1," and Qp′Y greater than "0" is equivalent to pcm_loop_filter_disable_flag equal to false or "0". The deblocking filter may calculate an average or maximum value of the QP values for two blocks, where at least one of the blocks is a losslessly coded or IPCM block.
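The Qp′Y derivation and the flag equivalence described above can be sketched as follows (illustrative only; function names are not from the drafts):

```python
def qp_prime_y(qp_y, bit_depth_luma_minus8):
    # Qp'_Y = QP_Y + QpBdOffset_Y, with QpBdOffset_Y = 6 * bit_depth_luma_minus8
    return qp_y + 6 * bit_depth_luma_minus8

def loop_filter_disabled(qp_prime_y_value):
    # Qp'_Y == 0 behaves like pcm_loop_filter_disable_flag == 1;
    # Qp'_Y > 0 behaves like the flag == 0
    return qp_prime_y_value == 0
```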
Fig. 8A-8B are conceptual diagrams illustrating examples of lossless coding mode deblocking consistent with the techniques of this disclosure. As shown in FIG. 8A, a losslessly coded "current" CU 812 (i.e., for which "Qp′Y = 0") may be surrounded by CUs 804-810 that are not losslessly coded (i.e., for each of which "Qp′Y > 0"). As explained above with reference to fig. 1, in these cases, the deblocking filter may skip processing of the left and upper edges of current CU 812 (e.g., because "Qp′Y = 0" for CU 812), while deblocking filtering is performed on the right and lower edges of current CU 812 (e.g., deblocking filtering performed when the respective CUs 806 and 808 are coded), as illustrated in fig. 8A. As already explained, a potential problem associated with this approach is that the deblocking filter may modify the lossless samples of current CU 812 along the right and lower edges, as shown by the "dashed" portion of lossless CU 812 bounded by these edges in fig. 8A.
As also explained above with reference to fig. 1, the techniques of this disclosure may include disabling deblocking filtering for a losslessly coded CU, such that none of the edges of the CU are deblock filtered, while allowing adjacent "lossy" coded CUs to be deblock filtered. For example, as shown in fig. 8B, a losslessly coded "current" CU 822 (i.e., for which "Qp′Y = 0") included in another plurality of CUs (or blocks of video data) 802 may be surrounded by CUs 814-820 that are not losslessly coded (i.e., for each of which "Qp′Y > 0"). In these cases, the deblocking filter may skip processing of each of the left, upper, right, and lower edges of current CU 822, while allowing deblocking filtering of the respective edges of CUs 814-820, as shown in fig. 8B. Moreover, as also explained above with reference to fig. 1, the disclosed techniques may further include assigning a non-zero QP value to CU 822 in order to perform deblocking filtering on the respective edges of CUs 814-820.
Fig. 9-11 are flow diagrams illustrating example methods of IPCM and lossless coding mode deblocking consistent with the techniques of this disclosure. The techniques of fig. 9-11 may be performed by generally any processing unit or processor, whether implemented as hardware, software, firmware, or a combination thereof, and when implemented as software or firmware, the corresponding hardware may be provided to execute the instructions of the software or firmware. For purposes of example, the techniques of fig. 9-11 are described with respect to video encoder 20 (fig. 1 and 2) and/or video decoder 30 (fig. 1 and 3), although it is understood that other devices may be configured to perform similar techniques. Further, the steps illustrated in fig. 9-11 may be performed in a different order or in parallel, and additional steps may be added and certain steps omitted, without departing from the techniques of this disclosure.
Specifically, fig. 9 illustrates an example method of IPCM coding mode and lossless coding mode deblocking or "deblocking filtering" generally in the context of coding (i.e., encoding and/or decoding). Furthermore, fig. 10 and 11 illustrate example methods of IPCM coding mode and lossless coding mode deblocking, respectively, in the context of decoding and encoding.
As one example, as previously described, video encoder 20 and/or video decoder 30 may code (i.e., encode and/or decode) one or more blocks of video data during a video coding process. For example, the one or more blocks may be one or more PUs, TUs, or CUs, also as previously described. In this example, initially, video encoder 20 and/or video decoder 30 may code a plurality of blocks of video data, wherein at least one block of the plurality of blocks of video data is coded using a coding mode that is one of an IPCM coding mode and a lossless coding mode (900). As previously described, the at least one block coded using the IPCM coding mode may correspond to a block of reconstructed video data. For example, reconstructed video data may be generated by video encoder 20 by performing the prediction, summation, transform, and quantization steps described above with reference to fig. 1 and 2 using blocks of the original video data. By performing the above steps using blocks of original video data, video encoder 20 may generate blocks of transformed and quantized residual coefficients. Subsequently, video encoder 20 may perform inverse quantization, inverse transform, prediction, and summation (also described above) using the transformed and quantized residual coefficients to generate blocks of reconstructed video data. Alternatively, as also previously described, the at least one block coded using the lossless coding mode may correspond to a block of residual (e.g., generated using prediction) unquantized video data or original video data.
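The lossy round trip described above (transform and quantization, followed by inverse quantization and inverse transform) is what makes reconstructed video data differ from the original. A toy uniform scalar quantizer with a purely illustrative step size shows the effect; the real HEVC process uses transform matrices and QP-dependent scaling, which are not modeled here.

```python
# Toy stand-in for the quantization/inverse-quantization steps: the
# reconstruction approximates, but need not equal, the original residual.
def quantize(residual, step):
    return [round(r / step) for r in residual]

def dequantize(levels, step):
    return [level * step for level in levels]

residual = [7, -3, 12, 0]
recon = dequantize(quantize(residual, step=4), step=4)
# recon is [8, -4, 12, 0]: close to, but not identical with, the input
```

A lossless coding mode skips this quantization entirely, which is why its samples must not subsequently be altered by a deblocking filter.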
Video encoder 20 and/or video decoder 30 may further assign a non-zero QP value for at least one block coded using the coding mode (902). For example, as will be described in more detail below, video encoder 20 and/or video decoder 30 may assign a non-zero QP value for at least one block using any of a variety of methods. These methods may include determining the assigned non-zero QP value based on one or more of: (1) a signaled QP value for the at least one block (e.g., that directly indicates an assigned non-zero QP value); (2) a predicted QP value for the at least one block (e.g., a QP value for each of one or more neighboring blocks of the at least one block); and (3) a signaled dQP value for the at least one block (e.g., representing a difference between an assigned non-zero QP value and a predicted QP value).
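The three listed derivations can be sketched as below. The function and parameter names are hypothetical; only the three information sources, (1) a signaled QP, (2) a predicted QP, and (3) a signaled dQP, come from the text.

```python
def derive_assigned_qp(signaled_qp=None, predicted_qp=None, dqp=None):
    """Derive the assigned non-zero QP from whichever source is available:
    (1) a directly signaled QP value, (3) a signaled dQP added to the
    predicted QP, or (2) the predicted QP alone."""
    if signaled_qp is not None:
        return signaled_qp
    if dqp is not None:
        return predicted_qp + dqp
    return predicted_qp
```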
Video encoder 20 and/or video decoder 30 may be further configured to perform deblocking filtering on one or more of the plurality of blocks of video data based on a coding mode used to code the at least one block and the assigned non-zero QP value for the at least one block (904). For example, as will also be described in more detail below, video encoder 20 and/or video decoder 30 may perform deblocking filtering on at least one block itself or one or more adjacent blocks of a plurality of blocks of video data (located adjacent to the at least one block). In this example, the one or more neighboring blocks may be coded using a lossy coding mode.
In particular, in some examples, to perform deblocking filtering on one or more of a plurality of blocks of video data based on a coding mode used to code at least one block and an assigned non-zero QP value, video encoder 20 and/or video decoder 30 may perform the following steps. As one example, where the coding mode used to code the at least one block is an IPCM coding mode, video encoder 20 and/or video decoder 30 may perform deblocking filtering on the at least one block based on the assigned non-zero QP value. As will be described in more detail below, video encoder 20 and/or video decoder 30 may also perform deblocking filtering on one or more other blocks of the plurality of blocks of video data based on the assigned non-zero QP values. As another example, where the coding mode used to code the at least one block is a lossless coding mode, video encoder 20 and/or video decoder 30 may perform deblocking filtering on one or more neighboring blocks in the plurality of blocks of video data based on the assigned non-zero QP values, while avoiding performing deblocking filtering on the at least one block itself. In this example, each of the one or more neighboring blocks may be located adjacent to the at least one block and coded using a lossy coding mode. For example, each of the one or more neighboring blocks may be a block of quantized and transformed residual coefficients generated by performing the prediction, summation, transform, and quantization steps described above with reference to fig. 1 and 2 using a block of original video data.
In the above example, to perform deblocking filtering on each of the at least one block and the adjacent block based on the assigned non-zero QP value, video encoder 20 and/or video decoder 30 may select a filter for deblocking filtering based on the assigned non-zero QP value. For example, video encoder 20 and/or video decoder 30 may be configured to select a filter using the assigned non-zero QP value such that the filter includes one or more filtering parameters or characteristics that define the manner in which deblocking filtering is performed using the filter. In other examples, to perform deblocking filtering on each of at least one block and an adjacent block based on an assigned non-zero QP value, video encoder 20 and/or video decoder 30 may be configured to determine a filter strength for deblocking filtering based on the assigned non-zero QP value, as described above with reference to deblocking decisions.
As one example, where the at least one block is coded using an IPCM coding mode, video encoder 20 and/or video decoder 30 may perform deblocking filtering on the at least one block and one or more neighboring blocks of a plurality of blocks of video data. In this example, each of the neighboring blocks may be located adjacent to the at least one block and coded using a lossy coding mode. For example, each of the neighboring blocks may be a block of quantized and transformed residual coefficients generated by performing the prediction, summation, transform, and quantization steps described above with reference to fig. 1 and 2 using a block of original video data.
In this example, video encoder 20 and/or video decoder 30 may perform deblocking filtering on one or more of the boundaries shared by the at least one block and the adjacent block. In particular, to perform deblocking filtering for a given boundary shared by the at least one block and a particular one of the neighboring blocks, video encoder 20 and/or video decoder 30 may determine a filter strength using an average of the non-zero QP value assigned for the at least one block and the QP value of the neighboring block. Thus, in accordance with the techniques of this disclosure, video encoder 20 and/or video decoder 30 may be configured to determine the filter strength at least in part using the assigned non-zero QP value instead of the previously described default "zero value" QP value for the at least one block. Subsequently, video encoder 20 and/or video decoder 30 may perform deblocking filtering on the boundary based on the determined filter strength. In this example, to perform deblocking filtering on the boundary, video encoder 20 and/or video decoder 30 may filter the inner boundary edges (e.g., one or more coefficients within each block located near the boundary shared by the two blocks) of both the at least one block and the adjacent block. In this manner, in some cases, assigning a non-zero QP value for at least one block and determining a filter strength to perform deblocking filtering based at least in part on the assigned non-zero QP value may improve visual quality of the at least one block and the neighboring block, as compared to other techniques.
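The averaging step described above can be sketched as follows. The rounding convention is the common `(a + b + 1) >> 1` form; HEVC derives its deblocking thresholds from normative tables indexed by an average of this kind, and those tables are not reproduced here.

```python
def boundary_qp(qp_p, qp_q):
    """Average the two blocks' QP values across a shared boundary,
    rounding up on ties."""
    return (qp_p + qp_q + 1) >> 1
```

With the previously described default "zero value" QP for an IPCM block, `boundary_qp(0, 32)` yields 16, roughly halving the effective filter strength at that boundary; assigning a representative non-zero QP (e.g., 28) keeps the average, and thus the filter strength, near the neighborhood's actual quantization level.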
As another example, as previously described, where the at least one block is coded using a lossless coding mode, video encoder 20 and/or video decoder 30 may perform deblocking filtering on one or more neighboring blocks of a plurality of blocks of video data while avoiding performing deblocking filtering on the at least one block itself. In a similar manner as described above, video encoder 20 and/or video decoder 30 may perform deblocking filtering on one or more of the boundaries shared by the at least one block and the adjacent block. For example, to perform deblocking filtering for a given boundary shared by the at least one block and a particular one of the neighboring blocks, video encoder 20 and/or video decoder 30 may again use an average of the non-zero QP value assigned for the at least one block and the QP value of the neighboring block to determine a filter strength. In accordance with the techniques of this disclosure, video encoder 20 and/or video decoder 30 may again determine the filter strength using, at least in part, the assigned non-zero QP value. Subsequently, video encoder 20 and/or video decoder 30 may perform deblocking filtering on the boundary based on the determined filter strength.
However, in contrast to the above-described example in which at least one block is coded using the IPCM coding mode, in this example, to perform deblocking filtering on the boundary, video encoder 20 and/or video decoder 30 may only filter the inner boundary edge of neighboring blocks (e.g., one or more coefficients within neighboring blocks located near the boundary shared by the two blocks). In other words, in this example, the inner boundary edge of the at least one block itself will remain unaffected by the deblocking filtering. In this manner, in some cases, assigning a non-zero QP value for at least one block and determining filter coefficients to perform deblocking filtering based at least in part on the assigned non-zero QP value may improve visual quality of the neighboring block compared to other techniques.
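One-sided filtering of a lossless/lossy boundary can be sketched as below. The 3-tap smoothing is illustrative only (the normative filter differs); the property the text requires is that the lossless side is returned unmodified.

```python
def filter_one_sided(p, q):
    """p: samples of the lossless block nearest the boundary (unchanged).
    q: samples of the lossy neighbor nearest the boundary (smoothed).
    Only the neighbor's inner boundary edge is modified."""
    q_filtered = list(q)
    # adjust only the lossy sample adjacent to the boundary, pulling it
    # toward a weighted average across the boundary
    q_filtered[0] = (p[-1] + 2 * q[0] + q[1] + 2) >> 2
    return list(p), q_filtered
```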
In some examples, video encoder 20 and/or video decoder 30 may further enable deblocking filtering for one or more of a plurality of blocks of video data prior to performing deblocking filtering on the one or more of the plurality of blocks of video data based on a coding mode used to code the at least one block and the assigned non-zero QP value. As one example, video encoder 20 may signal one or more syntax elements (e.g., a 1-bit code, or "flag") in the bitstream, e.g., to be received by video decoder 30 or stored in storage device 24. As another example, video decoder 30 may receive, in the bitstream, one or more syntax elements signaled, e.g., by video encoder 20 or storage device 24. In any of these examples, the one or more syntax elements may indicate that deblocking filtering is enabled for one or more of a plurality of blocks of video data.
In other examples, video encoder 20 and/or video decoder 30 may disable deblocking filtering for at least one block, particularly if the coding mode is a lossless coding mode. In these examples, to disable deblocking filtering, video encoder 20 and/or video decoder 30 may refrain from performing deblocking filtering on an inner boundary edge of at least one block. For example, in a manner similar to that described above with reference to one or more syntax elements (indicating that deblocking filtering is enabled for one or more of a plurality of blocks of video data), video encoder 20 may signal in the bitstream and/or video decoder 30 may receive one or more syntax elements (e.g., a 1-bit code, or flag) in the bitstream. However, in this example, the one or more syntax elements may indicate that deblocking filtering is disabled for the at least one block.
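The enable/disable signaling described in this and the preceding paragraph can be sketched as a pair of flags. The flag names are invented for this sketch; in a real bitstream these would be entropy-coded syntax elements carried at their respective levels.

```python
def deblocking_applies(enable_flag: int, block_disable_flag: int) -> bool:
    """Deblocking runs for a block only when it is enabled for the set of
    blocks (first one or more syntax elements) and not disabled for the
    block itself (second syntax element, e.g., for a lossless block)."""
    return bool(enable_flag) and not bool(block_disable_flag)
```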
In some examples, to assign a non-zero QP value to at least one block, video encoder 20 and/or video decoder 30 may determine the assigned non-zero QP value based on one or more of: (1) a QP value signaled for the at least one block, wherein the signaled QP value indicates an assigned non-zero QP value; (2) a QP value predicted for the at least one block; and (3) a dQP value signaled for the at least one block, wherein the dQP value represents a difference between an assigned non-zero QP value and a predicted QP value.
As one example, in the case that the coding mode used to code the at least one block is an IPCM coding mode, to assign a non-zero QP value to the at least one block, video encoder 20 and/or video decoder 30 may perform the following steps. As one example, when the size of the at least one block is less than the smallest coding unit quantization group size, video encoder 20 and/or video decoder 30 may set a group QP value (e.g., at least one group QP value for a quantization group that includes the at least one block) to the assigned non-zero QP value. In this example, the quantization group may also include one or more blocks of video data coded using lossy coding modes.
As described above, in some examples, each of the blocks of video data included in a quantization group may have the same group QP value. In these examples, video encoder 20 and/or video decoder 30 may set this common group QP value to the assigned non-zero QP value. However, in other examples, only some of the blocks of video data of the quantization group (e.g., the blocks starting from the first block for which a QP value of the quantization group is signaled as a dQP value) may have the same group QP value. In these examples, video encoder 20 and/or video decoder 30 may set this particular group QP value, common to only a subset of the blocks of the quantization group, to the assigned non-zero QP value. In this way, when the size of the at least one block is less than the smallest coding unit quantization group size, video encoder 20 and/or video decoder 30 may set at least one group QP value for the quantization group that includes the at least one block to the assigned non-zero QP value.
As another example, when the size of at least one block is greater than or equal to the smallest coding unit quantization group size, video encoder 20 and/or video decoder 30 may set QP values for neighboring blocks of the plurality of blocks of video data to the assigned non-zero QP value. In this example, the neighboring blocks may be one or more of blocks located adjacent to the at least one block and previously coded blocks.
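The two IPCM branches described in this and the preceding paragraphs can be sketched as below. The size comparison comes from the text; the helper name and scalar arguments are hypothetical simplifications (a real implementation would locate the quantization group and a specific neighboring or previously coded block).

```python
def assign_ipcm_qp(block_size, min_qg_size, group_qp, neighbor_qp):
    """Choose the QP assigned to an IPCM block based on its size relative
    to the smallest coding unit quantization group size."""
    if block_size < min_qg_size:
        # block belongs to a quantization group shared with lossy blocks:
        # adopt the group QP value
        return group_qp
    # block spans at least one quantization group: adopt the QP of a
    # neighboring (or previously coded) block
    return neighbor_qp
```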
In another example, in the case that the coding mode used to code the at least one block is an IPCM coding mode, to assign a non-zero QP value to the at least one block, video encoder 20 and/or video decoder 30 may perform the following steps. For example, when the size of at least one block is less than the smallest coding unit quantization group size, video encoder 20 and/or video decoder 30 may set QP values for neighboring blocks of the plurality of blocks of video data to the assigned non-zero QP value. In this example, the neighboring blocks again may be one or more of blocks located adjacent to the at least one block and previously coded blocks.
In other examples, to assign a non-zero QP value to at least one block, video encoder 20 and/or video decoder 30 may set one of a QP value and a dQP value for a lossy block of a plurality of blocks of video data to the assigned non-zero QP value in the case that the coding mode used to code the at least one block is a lossless coding mode. In this example, the dQP value may represent the difference between the QP value for the lossy block and the predicted QP value. Also, in this example, the lossy block may be a block coded using a lossy coding mode.
In other examples, in the case that the coding mode used to code the at least one block is a lossless coding mode, instead of determining the assigned non-zero QP value using the techniques described above, to assign a non-zero QP value to the at least one block, video encoder 20 and/or video decoder 30 may set the assigned non-zero QP value to a constant value.
In this manner, in some examples, video encoder 20 and/or video decoder 30 may code a plurality of blocks of video data, wherein at least one block of the plurality of blocks of video data is coded using an IPCM coding mode, assign a non-zero QP value to the at least one block, and perform deblocking filtering on one or more of the plurality of blocks of video data based on the assigned non-zero QP value for the at least one block.
Alternatively, in other examples, video encoder 20 and/or video decoder 30 may code a plurality of blocks of video data, wherein at least one block of the plurality of blocks of video data is coded using a lossless coding mode using prediction, assign a non-zero QP value to the at least one block, and perform deblocking filtering for one or more blocks of the plurality of blocks of video data other than the at least one block based on the assigned non-zero QP value for the at least one block. In these examples, video encoder 20 and/or video decoder 30 may further refrain from performing deblocking filtering on the at least one block.
As another example, video decoder 30 may receive one of residual unquantized video data and reconstructed video data for a block of a plurality of blocks of video data in a received bitstream. In this example, the block may be coded using a coding mode that is one of an IPCM coding mode and a lossless coding mode (1000). Also, in this example, the lossless coding mode may correspond to a lossless coding mode that uses prediction, as previously described. Video decoder 30 may further receive one of the assigned non-zero QP value and dQP value for the block in the received bitstream. For example, the dQP value may represent the difference between the assigned non-zero QP value and the predicted QP value for the block (1002).
In some examples, particularly where video decoder 30 receives a dQP value, video decoder 30 may still further determine a predicted QP value (1004), and determine an assigned non-zero QP value based on the dQP value and the predicted QP value (1006). Video decoder 30 may also perform deblocking filtering on one or more of the plurality of blocks of video data based on the coding mode used to code the block and the assigned non-zero QP value (1008).
In the above example, video decoder 30 may further receive, in the received bitstream, a first one or more syntax elements (e.g., one or more 1-bit codes, which may be referred to as "flags") indicating that deblocking filtering is enabled for one or more of the plurality of blocks of video data (1010). Also, in this example, video decoder 30 may still further receive, in the received bitstream, a second one or more syntax elements (e.g., again, one or more "flags") that indicate that deblocking filtering is disabled for the block (1012).
As yet another example, video encoder 20 may determine a non-zero QP value assigned for a block of a plurality of blocks of video data. In this example, the block may be coded using a coding mode that is one of an IPCM coding mode and a lossless coding mode (1100). Also, in this example, the lossless coding mode may again correspond to a lossless coding mode that uses prediction. Video encoder 20 may further perform deblocking filtering on one or more of the plurality of blocks of video data based on the coding mode used to code the block and the assigned non-zero QP value (1102). Video encoder 20 may further signal one of the residual unquantized video data and the reconstructed video data for the block in the bitstream (1104). In some examples, video encoder 20 may also determine a predicted QP value for the block (1106).
Video encoder 20 may also signal one of the assigned non-zero QP value and dQP value for the block in the bitstream. In this example, the dQP value may represent the difference (1108) between the assigned non-zero QP value and the predicted QP value described above with reference to step (1106).
In the above example, video encoder 20 may further signal, in the bitstream, a first one or more syntax elements (e.g., one or more "flags") indicating that deblocking filtering is enabled for one or more of the plurality of blocks of video data (1110). Also, in this example, video encoder 20 may further signal a second one or more syntax elements (e.g., again, one or more "flags") in the bitstream that indicate that deblocking filtering is disabled for the block (1112).
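The dQP round trip across the encoder steps (1104)-(1108) and the decoder steps (1004)-(1006) can be sketched as below. All names are hypothetical, and the dict stands in for entropy-coded bitstream syntax; the point is only that both sides derive the same prediction, so the assigned non-zero QP survives signaling exactly.

```python
def signal_dqp(assigned_qp, predicted_qp):
    """Encoder side, step (1108): dQP is the difference between the
    assigned non-zero QP value and the predicted QP value."""
    return {"dqp": assigned_qp - predicted_qp}

def recover_qp(syntax, predicted_qp):
    """Decoder side, step (1006): add the received dQP back onto the
    decoder's own prediction."""
    return predicted_qp + syntax["dqp"]
```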
In this manner, the methods of each of fig. 9-11 represent examples of methods for coding video data, including: coding a plurality of blocks of video data, wherein at least one block of the plurality of blocks of video data is coded using a coding mode that comprises one of an IPCM coding mode and a lossless coding mode using prediction; assigning a non-zero QP value for the at least one block coded using the coding mode; and performing deblocking filtering on one or more of the plurality of blocks of video data based on the coding mode used to code the at least one block and the non-zero QP value assigned for the at least one block.
In one or more examples, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or program code on a computer-readable medium and executed by a hardware-based processing unit. The computer-readable medium may include a computer-readable storage medium, which may correspond to a tangible or non-transitory medium, such as a data storage medium, or a communication medium including any medium that facilitates transfer of a computer program from one place to another (e.g., according to a communication protocol). In this manner, a computer-readable medium may generally correspond to (1) a tangible computer-readable storage medium that is not transitory, or (2) a communication medium such as a signal or carrier wave. A data storage medium may be any available medium that can be accessed by one or more computers or one or more processors to retrieve instructions, program code and/or data structures for implementation of the techniques described herein. The computer program product may include a computer-readable medium.
By way of example, and not limitation, such computer-readable storage media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, flash memory, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if the instructions are transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, Digital Subscriber Line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. It should be understood, however, that computer-readable storage media and data storage media do not include connections, carrier waves, signals, or other transitory media, but instead refer to non-transitory, tangible storage media. Disk and disc, as used herein, includes Compact Disc (CD), laser disc, optical disc, Digital Versatile Disc (DVD), floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
The instructions may be executed by one or more processors, such as one or more general purpose microprocessors, DSPs, ASICs, FPGAs, or other equivalent integrated or discrete logic circuitry. Thus, as used herein, the term "processor" may refer to any of the foregoing structure or any other structure suitable for implementation of the techniques described in this disclosure. In addition, in some aspects, the functionality described herein may be provided within dedicated hardware and/or software modules configured for encoding and decoding, or incorporated in a combined codec. Also, the techniques may be fully implemented in one or more circuits or logic elements.
The techniques of this disclosure may be implemented in a variety of devices or apparatuses, including a wireless handset, an IC, or a set of ICs (e.g., a chip set). Various components, modules, or units are described in this disclosure to emphasize functional aspects of devices configured to perform the disclosed techniques, and do not necessarily require realization by different hardware components, modules, or units. Instead, the various units may be combined in a codec hardware unit, as described above, or provided by a collection of interoperative hardware units (including one or more processors as described above) in conjunction with appropriate software and/or firmware.
Various examples have been described. These and other examples are within the scope of the following claims.
Claims (65)
1. A method of coding video data, comprising:
coding a plurality of blocks of video data, wherein at least one block of the plurality of blocks of video data is coded using an Intra Pulse Code Modulation (IPCM) coding mode;
assigning a non-zero Quantization Parameter (QP) value to the at least one block coded using the IPCM coding mode; and
performing deblocking filtering on one or more of the plurality of blocks of video data based on the IPCM coding mode used to code the at least one block and the non-zero QP value assigned for the at least one block.
2. The method of claim 1, wherein performing the deblocking filtering on the one or more of the plurality of blocks of video data based on the IPCM coding mode and the assigned non-zero QP value comprises:
performing the deblocking filtering on the at least one block based on the assigned non-zero QP value.
3. The method of claim 2, wherein performing the deblocking filtering on the at least one block based on the assigned non-zero QP value comprises selecting a filter for the deblocking filtering based on the assigned non-zero QP value.
4. The method of claim 2, wherein performing the deblocking filtering on the at least one block based on the assigned non-zero QP value comprises determining a filter strength for the deblocking filtering based on the assigned non-zero QP value.
5. The method of claim 1, further comprising, prior to performing the deblocking filtering on the one or more of the plurality of blocks of video data based on the IPCM coding mode and the assigned non-zero QP value, enabling the deblocking filtering for the one or more of the plurality of blocks of video data.
6. The method of claim 1, wherein assigning the non-zero QP value for the at least one block comprises determining the assigned non-zero QP value based on one or more of:
a signaled QP value for the at least one block, wherein the signaled QP value indicates the non-zero QP value assigned;
a predicted QP value for the at least one block; or
A signaled delta QP value for the at least one block, wherein the delta QP value represents a difference between the assigned non-zero QP value and the predicted QP value.
7. The method of claim 1, wherein assigning the non-zero QP value for the at least one block comprises:
setting at least one group QP value for a quantization group including the at least one block to the assigned non-zero QP value when a size of the at least one block is less than a minimum coding unit quantization group size, wherein the quantization group also includes one or more blocks of video data coded using a lossy coding mode; and
setting a QP value for a neighboring block of the plurality of blocks of video data to the assigned non-zero QP value when the size of the at least one block is greater than or equal to the smallest coding unit quantization group size, the neighboring block comprising one or more of a block located adjacent to the at least one block and a previously coded block.
8. The method of claim 1, wherein assigning the non-zero QP value for the at least one block comprises:
setting a QP value for a neighboring block of the plurality of blocks of video data to the assigned non-zero QP value when a size of the at least one block is less than a smallest coding unit quantization group size, the neighboring block comprising one or more of a block located adjacent to the at least one block and a previously coded block.
9. The method of claim 1, wherein coding comprises decoding, and wherein
Decoding the at least one block comprises receiving one of residual unquantized video data and reconstructed video data for the at least one block in a received bitstream, and
assigning the non-zero QP value for the at least one block comprises one of: receiving the assigned non-zero QP value in the received bitstream, and receiving a delta QP value for the at least one block in the received bitstream and determining the assigned non-zero QP value based on the delta QP value and a predicted QP value, the delta QP value representing a difference between the assigned non-zero QP value and the predicted QP value for the at least one block,
the method further comprises receiving, in the received bitstream, one or more syntax elements indicating that the deblocking filtering is enabled for the one or more of the plurality of blocks of video data.
10. The method of claim 1, wherein coding comprises encoding, and wherein
Encoding the at least one block comprises signaling one of residual unquantized video data and reconstructed video data for the at least one block in a bitstream, and
assigning the non-zero QP value for the at least one block comprises one of: signaling the assigned non-zero QP value in the bitstream, and signaling a delta QP value for the at least one block in the bitstream, the delta QP value representing a difference between the assigned non-zero QP value and a predicted QP value for the at least one block,
the method further comprises signaling, in the bitstream, one or more syntax elements indicating that the deblocking filtering is enabled for the one or more of the plurality of blocks of video data.
11. An apparatus configured to code video data, the apparatus comprising:
a video data memory configured to store the video data; and
a video coder comprising one or more processors configured to:
coding a plurality of blocks of the video data, wherein the video coder is configured to code at least one block of the plurality of blocks of video data using an Intra Pulse Code Modulation (IPCM) coding mode;
assigning a non-zero Quantization Parameter (QP) value to the at least one block coded using the IPCM coding mode; and
performing deblocking filtering on one or more of the plurality of blocks of video data based on the IPCM coding mode used to code the at least one block and the non-zero QP value assigned for the at least one block.
12. The apparatus of claim 11, wherein to perform the deblocking filtering on the one or more of the plurality of blocks of video data based on the IPCM coding mode and the assigned non-zero QP value, the video coder is configured to:
performing the deblocking filtering on the at least one block based on the assigned non-zero QP value.
13. The apparatus of claim 12, wherein to perform the deblocking filtering on the at least one block based on the assigned non-zero QP value, the video coder is configured to perform one or more of:
select a filter for the deblocking filtering based on the assigned non-zero QP value; and
determining a filter strength for the deblocking filtering based on the assigned non-zero QP value.
14. The apparatus of claim 11, wherein the video coder is further configured to enable the deblocking filtering for the one or more of the plurality of blocks of video data prior to performing the deblocking filtering on the one or more of the plurality of blocks of video data based on the IPCM coding mode and the assigned non-zero QP value.
15. The apparatus of claim 11, wherein to assign the non-zero QP value for the at least one block, the video coder is configured to determine the assigned non-zero QP value based on one or more of:
a signaled QP value for the at least one block, wherein the signaled QP value indicates the assigned non-zero QP value;
a predicted QP value for the at least one block; or
a signaled delta QP value for the at least one block, wherein the delta QP value represents a difference between the assigned non-zero QP value and the predicted QP value.
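The delta-QP relationship in claim 15 is plain arithmetic: the signaled delta is the difference between the assigned and predicted QP values, so the assigned value is recovered by addition. A minimal sketch (the function name is ours):

```python
def reconstruct_qp(predicted_qp: int, delta_qp: int) -> int:
    """Recover the assigned non-zero QP from the predicted QP and the
    signaled delta, where delta_qp = assigned_qp - predicted_qp."""
    return predicted_qp + delta_qp
```

For example, a predicted QP of 26 and a signaled delta of 4 yield an assigned QP of 30.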
16. The apparatus of claim 11, wherein to assign the non-zero QP value for the at least one block, the video coder is configured to:
set at least one group QP value for a quantization group including the at least one block to the assigned non-zero QP value when a size of the at least one block is less than a minimum coding unit quantization group size, wherein the quantization group also includes one or more blocks of video data coded using a lossy coding mode; and
set a QP value for a neighboring block of the plurality of blocks of video data to the assigned non-zero QP value when the size of the at least one block is greater than or equal to the minimum coding unit quantization group size, the neighboring block comprising one or more of a block located adjacent to the at least one block and a previously coded block.
17. The apparatus of claim 11, wherein to assign the non-zero QP value for the at least one block, the video coder is configured to:
set a QP value for a neighboring block of the plurality of blocks of video data to the assigned non-zero QP value when a size of the at least one block is less than a smallest coding unit quantization group size, the neighboring block comprising one or more of a block located adjacent to the at least one block and a previously coded block.
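Claims 16 and 17 branch on the IPCM block's size relative to the minimum coding-unit quantization-group size. That decision can be sketched as below; the function and return labels are our own naming, not claim language:

```python
def qp_source(block_size: int, min_qg_size: int) -> str:
    """Decide where an IPCM block's assigned non-zero QP is recorded
    (claims 16-17): a block smaller than the minimum coding-unit
    quantization-group size shares its quantization group's QP, while
    a block at or above that size takes a neighboring (adjacent or
    previously coded) block's QP."""
    if block_size < min_qg_size:
        return "quantization_group"
    return "neighboring_block"
```

Usage: with a minimum quantization-group size of 8, a 4x4 IPCM block falls in the quantization-group branch, while an 8x8 or larger block falls in the neighboring-block branch.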
18. The apparatus of claim 11, wherein to code the plurality of blocks of video data including the at least one block, the video coder is configured to decode the plurality of video blocks including the at least one block, and wherein
to decode the at least one block, the video coder is configured to receive one of residual unquantized video data and reconstructed video data for the at least one block in a received bitstream, and
to assign the non-zero QP value for the at least one block, the video coder is configured to perform one of: receiving the assigned non-zero QP value in the received bitstream, and receiving a delta QP value for the at least one block in the received bitstream and determining the assigned non-zero QP value based on the delta QP value and a predicted QP value, the delta QP value representing a difference between the assigned non-zero QP value and the predicted QP value for the at least one block,
wherein the video coder is further configured to receive, in the received bitstream, one or more syntax elements indicating that the deblocking filtering is enabled for the one or more of the plurality of blocks of video data.
19. The apparatus of claim 11, wherein to code the plurality of blocks of video data including the at least one block, the video coder is configured to encode the plurality of video blocks including the at least one block, and wherein
to encode the at least one block, the video coder is configured to signal one of residual unquantized video data and reconstructed video data for the at least one block in a bitstream, and
to assign the non-zero QP value for the at least one block, the video coder is configured to perform one of: signaling the assigned non-zero QP value in the bitstream, and signaling a delta QP value for the at least one block in the bitstream, the delta QP value representing a difference between the assigned non-zero QP value and a predicted QP value for the at least one block,
wherein the video coder is further configured to signal, in the bitstream, one or more syntax elements indicating that the deblocking filtering is enabled for the one or more of the plurality of blocks of video data.
20. The apparatus of claim 11, wherein the apparatus comprises at least one of:
an integrated circuit;
a microprocessor; and
a wireless communication device comprising the video coder.
21. A device configured to code video data, the device comprising:
means for coding a plurality of blocks of video data, including means for coding at least one block of the plurality of blocks of video data using an Intra Pulse Code Modulation (IPCM) coding mode;
means for assigning a non-zero Quantization Parameter (QP) value to the at least one block coded using the IPCM coding mode; and
means for performing deblocking filtering on one or more of the plurality of blocks of video data based on the IPCM coding mode used to code the at least one block and the non-zero QP value assigned for the at least one block.
22. The device of claim 21, wherein the means for performing the deblocking filtering on the one or more of the plurality of blocks of video data based on the IPCM coding mode and the assigned non-zero QP value comprises:
means for performing the deblocking filtering on the at least one block based on the assigned non-zero QP value.
23. The device of claim 22, wherein the means for performing the deblocking filtering on the at least one block based on the assigned non-zero QP value comprises one or more of:
means for selecting a filter for the deblocking filtering based on the assigned non-zero QP value; and
means for determining a filter strength for the deblocking filtering based on the assigned non-zero QP value.
24. The device of claim 21, wherein the means for assigning the non-zero QP value for the at least one block comprises means for determining the assigned non-zero QP value based on one or more of:
a signaled QP value for the at least one block, wherein the signaled QP value indicates the assigned non-zero QP value;
a predicted QP value for the at least one block; or
a signaled delta QP value for the at least one block, wherein the delta QP value represents a difference between the assigned non-zero QP value and the predicted QP value.
25. The device of claim 21, wherein the means for assigning the non-zero QP value for the at least one block comprises:
means for setting at least one group QP value for a quantization group including the at least one block to the assigned non-zero QP value when a size of the at least one block is less than a minimum coding unit quantization group size, wherein the quantization group also includes one or more blocks of video data coded using a lossy coding mode; and
means for setting a QP value for a neighboring block of the plurality of blocks of video data to the assigned non-zero QP value when the size of the at least one block is greater than or equal to the minimum coding unit quantization group size, the neighboring block comprising one or more of a block located adjacent to the at least one block and a previously coded block.
26. The device of claim 21, wherein the means for assigning the non-zero QP value for the at least one block comprises:
means for setting a QP value for a neighboring block of the plurality of blocks of video data to the assigned non-zero QP value when a size of the at least one block is less than a smallest coding unit quantization group size, the neighboring block comprising one or more of a block located adjacent to the at least one block and a previously coded block.
27. The device of claim 21, wherein coding comprises decoding, and wherein
the means for decoding the at least one block comprises means for receiving one of residual unquantized video data and reconstructed video data for the at least one block in a received bitstream, and
the means for assigning the non-zero QP value for the at least one block comprises one of: means for receiving the assigned non-zero QP value in the received bitstream, and means for receiving a delta QP value for the at least one block in the received bitstream and determining the assigned non-zero QP value based on the delta QP value and a predicted QP value, the delta QP value representing a difference between the assigned non-zero QP value and the predicted QP value for the at least one block,
the device further comprises means for receiving, in the received bitstream, one or more syntax elements indicating that the deblocking filtering is enabled for the one or more of the plurality of blocks of video data.
28. The device of claim 21, wherein coding comprises encoding, and wherein
the means for encoding the at least one block comprises means for signaling one of residual unquantized video data and reconstructed video data for the at least one block in a bitstream, and
the means for assigning the non-zero QP value for the at least one block comprises one of: means for signaling the assigned non-zero QP value in the bitstream, and means for signaling a delta QP value for the at least one block in the bitstream, the delta QP value representing a difference between the assigned non-zero QP value and a predicted QP value for the at least one block,
the device further comprises means for signaling, in the bitstream, one or more syntax elements indicating that the deblocking filtering is enabled for the one or more of the plurality of blocks of video data.
29. A method of coding video data, comprising:
coding a plurality of blocks of video data, wherein at least one block of the plurality of blocks of video data is coded using an Intra Pulse Code Modulation (IPCM) coding mode;
assigning a non-zero Quantization Parameter (QP) value to the at least one block; and
performing deblocking filtering on one or more of the plurality of blocks of video data based on the non-zero QP value assigned for the at least one block.
30. A method of coding video data, comprising:
coding a plurality of blocks of video data, wherein at least one block of the plurality of blocks of video data is coded using a lossless coding mode that uses prediction;
assigning a non-zero Quantization Parameter (QP) value to the at least one block;
performing deblocking filtering on one or more blocks of the plurality of blocks of video data other than the at least one block based on the non-zero QP value assigned for the at least one block; and
refraining from performing deblocking filtering on the at least one block.
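Claim 30 applies deblocking to the lossy blocks around a lossless-coded block while refraining from filtering the lossless block itself. A toy selection of which blocks receive filtering (the block representation is our own, for illustration only):

```python
def blocks_to_deblock(blocks: list) -> list:
    """Per claim 30: deblocking is performed on blocks other than those
    coded with the predictive lossless mode, and is withheld from the
    lossless blocks themselves. Each block is a (block_id, mode) pair,
    where mode is 'lossless' or 'lossy'."""
    return [block_id for block_id, mode in blocks if mode != "lossless"]
```

The losslessly coded samples are exact by construction, so filtering them would reintroduce distortion; only their lossy neighbors benefit from smoothing at the shared edge.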
31. The method of claim 30, wherein performing the deblocking filtering on the one or more of the plurality of blocks of video data based on the lossless coding mode and the assigned non-zero QP value comprises:
performing the deblocking filtering on an adjacent block of the plurality of blocks of video data that is located adjacent to the at least one block and that is coded using a lossy coding mode based on the assigned non-zero QP value.
32. The method of claim 31, wherein performing the deblocking filtering on each of the at least one block and the adjacent block based on the assigned non-zero QP value comprises selecting a filter for the deblocking filtering based on the assigned non-zero QP value.
33. The method of claim 31 or 32, wherein performing the deblocking filtering on each of the at least one block and the adjacent block based on the assigned non-zero QP value comprises determining a filter strength for the deblocking filtering based on the assigned non-zero QP value.
34. The method of claim 31 or 32, further comprising, prior to performing the deblocking filtering on the one or more of the plurality of blocks of video data based on the coding mode used to code the at least one block and the assigned non-zero QP value, enabling the deblocking filtering for the one or more of the plurality of blocks of video data.
35. The method of claim 31, further comprising disabling deblocking filtering for the at least one block, including not performing the deblocking filtering for an inner boundary edge of the at least one block.
36. The method of any one of claims 31, 32, and 35, wherein assigning the non-zero QP value for the at least one block comprises determining the assigned non-zero QP value based on one or more of:
a signaled QP value for the at least one block, wherein the signaled QP value indicates the assigned non-zero QP value;
a predicted QP value for the at least one block; and
a signaled delta QP value for the at least one block, wherein the delta QP value represents a difference between the assigned non-zero QP value and the predicted QP value.
37. The method of any one of claims 31, 32, and 35, wherein assigning the non-zero QP value for the at least one block comprises setting one of a QP value and a delta QP value for a lossy block of the plurality of blocks of video data to the assigned non-zero QP value, wherein the delta QP value represents a difference between the QP value and a predicted QP value for the lossy block, the lossy block comprising a block coded using a lossy coding mode.
38. The method of any one of claims 31, 32, and 35, wherein assigning the non-zero QP value for the at least one block comprises setting the assigned non-zero QP value to a constant value.
39. The method of any one of claims 31, 32, and 35, wherein coding comprises decoding, and wherein
decoding the at least one block comprises receiving one of residual unquantized video data and reconstructed video data for the at least one block in a received bitstream, and
assigning the non-zero QP value for the at least one block comprises one of: receiving the assigned non-zero QP value in the received bitstream, and receiving a delta QP value for the at least one block in the received bitstream and determining the assigned non-zero QP value based on the delta QP value and a predicted QP value, the delta QP value representing a difference between the assigned non-zero QP value and the predicted QP value for the at least one block,
the method further comprises receiving, in the received bitstream, one or more syntax elements indicating that the deblocking filtering is enabled for the one or more of the plurality of blocks of video data.
40. The method of claim 39, wherein the one or more syntax elements comprise a first one or more syntax elements, the method further comprising receiving, in the received bitstream, a second one or more syntax elements indicating that the deblocking filtering is disabled for the at least one block.
41. The method of any one of claims 31, 32, and 35, wherein coding comprises encoding, and wherein
encoding the at least one block comprises signaling one of residual unquantized video data and reconstructed video data for the at least one block in a bitstream, and
assigning the non-zero QP value for the at least one block comprises one of: signaling the assigned non-zero QP value in the bitstream, and signaling a delta QP value for the at least one block in the bitstream, the delta QP value representing a difference between the assigned non-zero QP value and a predicted QP value for the at least one block,
the method further comprises signaling, in the bitstream, one or more syntax elements indicating that the deblocking filtering is enabled for the one or more of the plurality of blocks of video data.
42. The method of claim 41, wherein the one or more syntax elements comprise a first one or more syntax elements, the method further comprising signaling, in the bitstream, a second one or more syntax elements indicating that the deblocking filtering is disabled for the at least one block.
43. A method of decoding video data, comprising:
decoding a plurality of blocks of video data, wherein at least one block of the plurality of blocks of video data is coded using an Intra Pulse Code Modulation (IPCM) coding mode;
assigning a non-zero Quantization Parameter (QP) value to the at least one block coded using the IPCM coding mode, wherein assigning the non-zero QP value to the at least one block comprises:
setting at least one group QP value for a quantization group including the at least one block to the assigned non-zero QP value when a size of the at least one block is less than a minimum coding unit quantization group size, wherein the quantization group also includes one or more blocks of video data coded using a lossy coding mode; and
setting a QP value for a neighboring block of the plurality of blocks of video data to the assigned non-zero QP value when the size of the at least one block is greater than or equal to the minimum coding unit quantization group size, the neighboring block comprising one or more of a block located adjacent to the at least one block and a previously coded block; and
performing deblocking filtering on one or more of the plurality of blocks of video data based on the IPCM coding mode used to code the at least one block and the non-zero QP value assigned for the at least one block.
44. A method of decoding video data, comprising:
decoding a plurality of blocks of video data, wherein at least a first block of the plurality of blocks of video data is coded using a lossless coding mode that uses prediction, and wherein at least a second block of the plurality of blocks of video data is coded using a lossy coding mode;
setting one of a non-zero Quantization Parameter (QP) value and a delta QP value for the second block to an assigned non-zero QP value for the first block coded using the lossless coding mode, wherein the delta QP value represents a difference between the non-zero QP value and a predicted QP value for the second block; and
performing deblocking filtering on the second block of video data based on the lossless coding mode used to code the first block and the non-zero QP value assigned for the second block.
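In claim 44 the lossless-coded first block carries no quantization of its own, so the QP that drives deblocking of the lossy second block comes from that block's own signaled QP, or is reconstructed from its delta QP and a predicted QP. A sketch under our own naming:

```python
def second_block_qp(signaled_qp, delta_qp, predicted_qp):
    """Resolve the non-zero QP for the lossy second block of claim 44:
    either the QP is signaled directly, or it is reconstructed from a
    signaled delta (delta_qp = qp - predicted_qp) and a predicted QP."""
    if signaled_qp is not None:
        return signaled_qp
    return predicted_qp + delta_qp
```

Usage: a directly signaled QP of 32 is used as-is, while a delta of 3 against a predicted QP of 27 resolves to 30.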
45. An apparatus configured to code video data, the apparatus comprising a video coder configured to:
coding a plurality of blocks of video data, wherein the video coder is configured to code at least one block of the plurality of blocks of video data using a lossless coding mode that uses prediction;
assigning a non-zero Quantization Parameter (QP) value to the at least one block coded using the lossless coding mode; and
performing deblocking filtering on one or more of the plurality of blocks of video data based on the coding mode used to code the at least one block and the assigned non-zero QP value for the at least one block.
46. The apparatus of claim 45, wherein to perform the deblocking filtering on the one or more of the plurality of blocks of video data based on the coding mode used to code the at least one block and the assigned non-zero QP value, the video coder is configured to:
performing the deblocking filtering on an adjacent block of the plurality of blocks of video data that is located adjacent to the at least one block and that is coded using a lossy coding mode based on the assigned non-zero QP value.
47. The apparatus of claim 46, wherein to perform the deblocking filtering on each of the at least one block and the adjacent block based on the assigned non-zero QP value, the video coder is configured to perform one or more of:
select a filter for the deblocking filtering based on the assigned non-zero QP value; and
determine a filter strength for the deblocking filtering based on the assigned non-zero QP value.
48. The apparatus of any of claims 45 to 47, wherein the video coder is further configured to enable the deblocking filtering for the one or more of the plurality of blocks of video data prior to performing the deblocking filtering on the one or more of the plurality of blocks of video data based on the coding mode used to code the at least one block and the assigned non-zero QP value.
49. The apparatus of claim 45, wherein the video coder is further configured to disable deblocking filtering for the at least one block, including the video coder not performing the deblocking filtering for inner boundary edges of the at least one block.
50. The apparatus of any one of claims 45-47 and 49, wherein to assign the non-zero QP value for the at least one block, the video coder is configured to determine the assigned non-zero QP value based on one or more of:
a signaled QP value for the at least one block, wherein the signaled QP value indicates the assigned non-zero QP value;
a predicted QP value for the at least one block; and
a signaled delta QP value for the at least one block, wherein the delta QP value represents a difference between the assigned non-zero QP value and the predicted QP value.
51. The apparatus of any one of claims 45 to 47 and 49, wherein to assign the non-zero QP value for the at least one block, the video coder is configured to set one of a QP value and a delta QP value for a lossy block of the plurality of blocks of video data to the assigned non-zero QP value, wherein the delta QP value represents a difference between the QP value and a predicted QP value for the lossy block, the lossy block comprising a block coded using a lossy coding mode.
52. The apparatus of any one of claims 45-47 and 49, wherein to assign the non-zero QP value for the at least one block, the video coder is configured to set the assigned non-zero QP value to a constant value.
53. The apparatus of any one of claims 45-47 and 49, wherein to code the plurality of blocks of video data including the at least one block, the video coder is configured to decode the plurality of video blocks including the at least one block, and wherein
to decode the at least one block, the video coder is configured to receive one of residual unquantized video data and reconstructed video data for the at least one block in a received bitstream, and
to assign the non-zero QP value for the at least one block, the video coder is configured to perform one of: receiving the assigned non-zero QP value in the received bitstream, and receiving a delta QP value for the at least one block in the received bitstream and determining the assigned non-zero QP value based on the delta QP value and a predicted QP value, the delta QP value representing a difference between the assigned non-zero QP value and the predicted QP value for the at least one block,
wherein the video coder is further configured to receive, in the received bitstream, one or more syntax elements indicating that the deblocking filtering is enabled for the one or more of the plurality of blocks of video data.
54. The apparatus of claim 53, wherein the one or more syntax elements comprise a first one or more syntax elements, and wherein the video coder is further configured to receive, in the received bitstream, a second one or more syntax elements indicating that the deblocking filtering is disabled for the at least one block.
55. The apparatus of any one of claims 45-47 and 49, wherein to code the plurality of blocks of video data including the at least one block, the video coder is configured to encode the plurality of video blocks including the at least one block, and wherein
to encode the at least one block, the video coder is configured to signal one of residual unquantized video data and reconstructed video data for the at least one block in a bitstream, and
to assign the non-zero QP value for the at least one block, the video coder is configured to perform one of: signaling the assigned non-zero QP value in the bitstream, and signaling a delta QP value for the at least one block in the bitstream, the delta QP value representing a difference between the assigned non-zero QP value and a predicted QP value for the at least one block,
wherein the video coder is further configured to signal, in the bitstream, one or more syntax elements indicating that the deblocking filtering is enabled for the one or more of the plurality of blocks of video data.
56. The apparatus of claim 55, wherein the one or more syntax elements comprise a first one or more syntax elements, and wherein the video coder is further configured to signal, in the bitstream, a second one or more syntax elements indicating that the deblocking filtering is disabled for the at least one block.
57. The apparatus of any one of claims 45-47 and 49, wherein the apparatus comprises at least one of:
an integrated circuit;
a microprocessor; and
a wireless communication device comprising the video coder.
58. A device configured to code video data, the device comprising:
means for coding a plurality of blocks of video data, including means for coding at least one block of the plurality of blocks of video data using a lossless coding mode that uses prediction;
means for assigning a non-zero Quantization Parameter (QP) value to the at least one block coded using the lossless coding mode; and
means for performing deblocking filtering on one or more of the plurality of blocks of video data based on the coding mode used to code the at least one block and the non-zero QP value assigned to the at least one block.
59. The device of claim 58, wherein the means for performing the deblocking filtering on the one or more of the plurality of blocks of video data based on the coding mode used to code the at least one block and the assigned non-zero QP value comprises:
means for performing the deblocking filtering on an adjacent block of the plurality of blocks of video data that is located adjacent to the at least one block and that is coded using a lossy coding mode based on the assigned non-zero QP value.
60. The device of claim 59, wherein the means for performing the deblocking filtering on each of the at least one block and the adjacent block based on the assigned non-zero QP value comprises one or more of:
means for selecting a filter for the deblocking filtering based on the assigned non-zero QP value; and
means for determining a filter strength for the deblocking filtering based on the assigned non-zero QP value.
61. The device of any one of claims 58 to 60, wherein the means for assigning the non-zero QP value for the at least one block comprises means for determining the assigned non-zero QP value based on one or more of:
a signaled QP value for the at least one block, wherein the signaled QP value indicates the assigned non-zero QP value;
a predicted QP value for the at least one block; and
a signaled delta QP value for the at least one block, wherein the delta QP value represents a difference between the assigned non-zero QP value and the predicted QP value.
62. The device of any one of claims 58 to 60, wherein the means for assigning the non-zero QP value for the at least one block comprises means for setting one of a QP value and a delta QP value for a lossy block of the plurality of blocks of video data to the assigned non-zero QP value, wherein the delta QP value represents a difference between the QP value and a predicted QP value for the lossy block, the lossy block comprising a block coded using a lossy coding mode.
63. The device of any one of claims 58 to 60, wherein the means for assigning the non-zero QP value for the at least one block comprises means for setting the assigned non-zero QP value to a constant value.
64. The device of any one of claims 58 to 60, wherein coding comprises decoding, and wherein
the means for decoding the at least one block comprises means for receiving one of residual unquantized video data and reconstructed video data for the at least one block in a received bitstream, and
the means for assigning the non-zero QP value for the at least one block comprises one of: means for receiving the assigned non-zero QP value in the received bitstream, and means for receiving a delta QP value for the at least one block in the received bitstream and determining the assigned non-zero QP value based on the delta QP value and a predicted QP value, the delta QP value representing a difference between the assigned non-zero QP value and the predicted QP value for the at least one block,
the device further comprises means for receiving, in the received bitstream, one or more syntax elements indicating that the deblocking filtering is enabled for the one or more of the plurality of blocks of video data.
65. The device of any one of claims 58 to 60, wherein coding comprises encoding, and wherein
the means for encoding the at least one block comprises means for signaling one of residual unquantized video data and reconstructed video data for the at least one block in a bitstream, and
the means for assigning the non-zero QP value for the at least one block comprises one of: means for signaling the assigned non-zero QP value in the bitstream, and means for signaling a delta QP value for the at least one block in the bitstream, the delta QP value representing a difference between the assigned non-zero QP value and a predicted QP value for the at least one block,
the device further comprises means for signaling, in the bitstream, one or more syntax elements indicating that the deblocking filtering is enabled for the one or more of the plurality of blocks of video data.
Applications Claiming Priority (6)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US61/549,597 | 2011-10-20 | | |
| US61/605,705 | 2012-03-01 | | |
| US61/606,277 | 2012-03-02 | | |
| US61/624,901 | 2012-04-16 | | |
| US61/641,775 | 2012-05-02 | | |
| US13/655,009 | 2012-10-18 | | |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| HK1195981A (en) | 2014-11-28 |
| HK1195981B (en) | 2018-06-22 |