
HK1254970B - System and methods for calculating distortion in display stream compression (dsc) - Google Patents

System and methods for calculating distortion in display stream compression (DSC)

Info

Publication number
HK1254970B
Authority
HK
Hong Kong
Prior art keywords
block
video
color space
color
video blocks
Prior art date
Application number
HK18114077.7A
Other languages
Chinese (zh)
Other versions
HK1254970A1 (en)
Inventor
Vijayaraghavan Thirumalai
Natan Haim Jacobson
Rajan Laxman Joshi
Original Assignee
Qualcomm Incorporated
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US15/398,567 (US10448024B2)
Application filed by Qualcomm Incorporated
Publication of HK1254970A1
Publication of HK1254970B


Description

System and method for computing distortion in Display Stream Compression (DSC)
Technical Field
The present disclosure relates to the field of video coding and compression, and in particular, to video compression, such as Display Stream Compression (DSC), for transmission over a display connection.
Background
Digital video capabilities can be incorporated into a wide range of displays, including digital televisions, Personal Digital Assistants (PDAs), laptop computers, desktop monitors, digital cameras, digital recording devices, digital media players, video gaming devices, video gaming consoles, cellular or satellite radio telephones, video teleconferencing devices, and the like. A display link is used to connect the display to the appropriate source device. The bandwidth requirement of a display link is proportional to the resolution of the display, and therefore, high-resolution displays require large-bandwidth display links. Some display links do not have the bandwidth to support high-resolution displays. Video compression may be used to reduce bandwidth requirements so that lower-bandwidth display links may be used to provide digital video to high-resolution displays.
Other techniques have attempted to utilize image compression of pixel data. However, such schemes are sometimes not visually lossless or can be difficult and expensive to implement in conventional display devices.
The Video Electronics Standards Association (VESA) has developed Display Stream Compression (DSC) as a standard for display link video compression. A display link video compression technique, such as DSC, should provide, among other things, visually lossless picture quality (i.e., pictures having a quality level such that users cannot tell that compression is active). It should also provide a solution that is simple and inexpensive to implement in real time with conventional hardware.
Disclosure of Invention
The systems, methods, and devices of the present disclosure each have several innovative aspects, no single one of which is solely responsible for the desirable attributes disclosed herein.
One innovation includes an apparatus for coding video data. The apparatus may include a memory for storing the video data and information regarding a plurality of coding modes, the video data comprising a plurality of video blocks. The apparatus may also include a hardware processor operatively coupled to the memory. The processor may be configured to: select one of a plurality of color spaces for a video block of the plurality of video blocks; apply a color transform to each video block of the plurality of video blocks that is not in the selected color space, verifying that all of the video blocks of the plurality of video blocks are in the selected color space; and determine a distortion value for each of the plurality of video blocks based on the selected color space.
For some embodiments, the apparatus may be configured to: determining an initial color space for each video block of the plurality of video blocks, the initial color space being the color space for each video block prior to applying the color transform; determining which of the plurality of coding modes are compatible with the initial color space; and encoding the video block of the plurality of video blocks with the compatible coding mode to provide an encoded block.
For some embodiments, the apparatus may be configured to: determining which of the plurality of coding modes are incompatible with the initial color space, the initial color space being the color space of the video blocks prior to applying the color transform; applying the color transform to the initial color space to provide a compatible color block; and encoding the compatible color block with the coding mode that is not compatible with the initial color space to provide an encoded block.
In some embodiments, the apparatus may be configured to calculate a residual block from the video block and the encoded block, the residual block indicating a difference between the video block and the encoded block.
In some embodiments, determining the distortion value comprises determining a distortion value for the residual block.
In some embodiments, the selected color space comprises a luma-chroma color space and wherein determining the distortion value comprises normalizing various chroma components of the luma-chroma color space.
In some embodiments, the video block comprises a number of color planes, and wherein determining the distortion value for the video block comprises at least one of: a sum of absolute differences of the respective color planes of the plurality of color planes, and a sum of squared errors of the respective color planes of the plurality of color planes.
In some embodiments, the color transform is based on a transform matrix defined by a number of columns indicative of a number of color planes of the selected color space, and wherein the hardware processor is further configured to determine a plurality of weight values based on a Euclidean norm of one of the number of columns.
In some embodiments, the distortion value for the transformed video block is based on at least one of: a sum of absolute differences of each color plane of the plurality of color planes, wherein each color plane is multiplied by a corresponding weight value of the plurality of weight values; and a sum of squared errors for each of the plurality of color planes, wherein each color plane is multiplied by the corresponding one of the plurality of weights.
In some embodiments, the selected color space is at least one of a luma-chroma color space and an RGB color space.
In some embodiments, determining the distortion value further comprises determining a coding mode of the plurality of coding modes based on: (i) the distortion value for each of the plurality of video blocks, (ii) a lambda value, and (iii) a bitstream rate at which the video block is conveyed.
In some embodiments, each video block of the plurality of video blocks indicates a single video block that has been encoded using each coding mode of the plurality of coding modes.
Drawings
Fig. 1A is a block diagram illustrating an example video encoding and decoding system that may utilize techniques in accordance with aspects described in this disclosure.
Fig. 1B is a block diagram illustrating another example video encoding and decoding system that may perform techniques in accordance with aspects described in this disclosure.
Fig. 2 is a block diagram illustrating an example of a video encoder that may implement techniques in accordance with aspects described in this disclosure.
Fig. 3 is a block diagram illustrating an example implementation of a distortion circuit.
Fig. 4 is a block diagram illustrating an alternative implementation of a distortion circuit.
Fig. 5 is a block diagram illustrating an example of a video decoder that may implement techniques in accordance with aspects described in this disclosure.
Fig. 6 is a flow diagram illustrating an exemplary method for determining an encoding mode.
Detailed Description
Disclosed herein are DSC coders that provide fixed-rate and visually lossless compression. The coder is designed in a block- or slice-based approach (e.g., where the block size is P × Q), and may be implemented with one or more of a number of coding modes. For example, the available coding options for each block include transform mode (e.g., DCT, Hadamard), block prediction mode, Differential Pulse Code Modulation (DPCM) mode, pattern mode, midpoint prediction (MPP) mode, and/or midpoint prediction fallback (MPPF) mode. Several coding modes may be used in a coder to compress different types of content or images. For example, text images may be compressed via pattern mode, while natural images may be compressed via transform mode.
Although certain embodiments are described herein in the context of the DSC standard, one of ordinary skill in the art will appreciate that the systems and methods disclosed herein may be applied to any suitable video coding standard. For example, embodiments disclosed herein may be applicable to one or more of the following standards: International Telecommunication Union (ITU) Telecommunication Standardization Sector (ITU-T) H.261, International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) Moving Picture Experts Group-1 (MPEG-1) Visual, ITU-T H.262 or ISO/IEC MPEG-2 Visual, ITU-T H.263, ISO/IEC MPEG-4 Visual, ITU-T H.264 (also known as ISO/IEC MPEG-4 AVC), High Efficiency Video Coding (HEVC), and any extensions of such standards. Also, the techniques described in this disclosure may become part of standards developed in the future. In other words, the techniques described in this disclosure may be applicable to previously developed video coding standards, video coding standards currently being developed, and upcoming video coding standards.
In DSC coders according to certain aspects, the rate-distortion ("RD") performance of each mode may be evaluated in multiple color spaces (e.g., any luma-chroma representation, such as YCoCg or YCbCr), or in RGB or CMYK color spaces.
According to certain aspects, the techniques described in this disclosure provide various methods to calculate distortion for the coding modes, e.g., where the modes are evaluated in different color spaces. For example, the distortion for all coding modes may be calculated in the same color space by applying the appropriate color transform. A color transform may be applied to a residual block (also referred to herein as an error block), where the residual block represents the difference between the original video block and the reconstructed video block (also referred to herein as an encoded block); or a color transform may be applied to both the original block and the reconstructed block prior to computing the residual.
Video coding standard
Digital images, such as video images, TV images, still images, or images generated by a video recorder or computer, may contain pixels or samples arranged in horizontal and vertical lines. The number of pixels in a single image is typically in the tens of thousands. Each pixel typically contains both luma and chroma information. Without compression, the absolute amount of information to be communicated from the image encoder to the image decoder would render real-time image transmission impractical. To reduce the amount of information to be transmitted, several different compression methods have been developed, such as the JPEG, MPEG, and H.263 standards.
Video coding standards include ITU-T H.261, ISO/IEC MPEG-1 Visual, ITU-T H.262 or ISO/IEC MPEG-2 Visual, ITU-T H.263, ISO/IEC MPEG-4 Visual, ITU-T H.264 (also known as ISO/IEC MPEG-4 AVC), and HEVC, including extensions of such standards.
In addition, a video coding standard, namely DSC, has been developed by VESA. The DSC standard is a video compression standard that can compress video for transmission over display links. As the resolution of displays increases, the bandwidth needed to drive them with video data increases correspondingly. Some display links may not have the bandwidth to transmit all of the video data to displays of such resolutions. Accordingly, the DSC standard specifies an interoperable, visually lossless compression standard for display links.
The DSC standard differs from other video coding standards (e.g., H.264 and HEVC). DSC includes intra-frame compression but not inter-frame compression, meaning that temporal information may not be used by the DSC standard in coding the video data. By contrast, other video coding standards may employ inter-frame compression in their video coding techniques. Advanced DSC is being developed, for example, to provide compression ratios of 4:1 or higher. A compression ratio of 4:1 or higher may be useful for mobile devices, e.g., for high-resolution displays such as 4K.
Slices in DSC
As mentioned above, a slice generally refers to a spatially separate region in an image or frame that can be decoded independently, without using information from the remaining regions in the image or frame. Each image or video frame may be encoded in a single slice or in several slices. In DSC, the target bits assigned to encode each slice may be substantially fixed. This may differ for partial slices, which occur when the image height is not evenly divisible by the slice height. For example, an image of size 1280 × 720 with a slice height of 108 would have 6 slices of height 108 and one partial slice of height 72 (= 720 − 6 × 108).
Advanced DSC slice sizes may be specified using variables or parameters slice width x slice height, where slice width and slice height are configurable. The slice height may be configured to a desired value, e.g., 16, 32, 108, etc. The slice width may be configured using a parameter N that determines the number of slices in a line and assumes that the number of pixels per line in each slice is equal, e.g., the slice width is equal to the image width/N. The image width may be a variable or parameter representing the width of the image.
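As an illustration, the slice arithmetic above can be sketched in a few lines of Python; the function and parameter names are illustrative assumptions, not identifiers from the DSC specification:

```python
def slice_geometry(image_width, image_height, slice_height, num_slices_per_line):
    """Return (slice_width, full_slice_rows, partial_slice_height)."""
    # slice width = image width / N, assuming equal pixels per line in each slice
    slice_width = image_width // num_slices_per_line
    # number of full-height slice rows that fit vertically
    full_slice_rows = image_height // slice_height
    # a partial slice occurs when slice_height does not evenly divide image_height
    partial_slice_height = image_height - full_slice_rows * slice_height
    return slice_width, full_slice_rows, partial_slice_height

# Example from the text: a 1280x720 image with slice height 108
print(slice_geometry(1280, 720, 108, 1))  # -> (1280, 6, 72): 6 full slices, one partial slice of height 72
```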
Video decoding system
Various aspects of the novel systems, devices, and methods are described more fully below with reference to the accompanying drawings. This disclosure may, however, be embodied in many different forms and should not be construed as limited to any specific structure or function presented throughout this disclosure. Rather, these aspects are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art. Based on the teachings herein one skilled in the art should appreciate that the scope of the present disclosure is intended to cover any aspect of the novel systems, apparatuses, and methods disclosed herein, whether implemented independently of or in combination with any other aspect of the present disclosure. For example, an apparatus may be implemented using any number of the aspects set forth herein or a method may be practiced using any number of the aspects set forth herein. Additionally, the scope of the present disclosure is intended to cover such an apparatus or method practiced using other structure, functionality, or structure and functionality in addition to or other than the various aspects of the present disclosure set forth herein. It is to be understood that any aspect disclosed herein may be embodied by one or more elements of a claim.
Although specific aspects are described herein, many variations and permutations of these aspects are within the scope of the present disclosure. Although some benefits and advantages of the preferred aspects are mentioned, the scope of the present disclosure is not intended to be limited to a particular benefit, use, or objective. Rather, aspects of the present disclosure are intended to be broadly applicable to different wireless technologies, system configurations, networks, and transmission protocols, some of which are illustrated by way of example in the figures and in the following description of the preferred aspects. The embodiments and drawings are merely illustrative of the disclosure rather than limiting, the scope of the disclosure being defined by the appended claims and equivalents thereof.
The accompanying drawings illustrate examples. Elements indicated by reference numerals in the attached drawings correspond to elements indicated by the same reference numerals in the following description. In this disclosure, elements having names that begin with ordinal words (e.g., "first," "second," "third," etc.) do not necessarily imply a particular order to the elements. Rather, these ordinal words are used only to refer to different elements of the same or similar type.
Fig. 1A is a block diagram illustrating an example video coding system 10 that may utilize techniques in accordance with aspects described in this disclosure. As used herein, the term "video coder" or "coder" generally refers to both video encoders and video decoders. In this disclosure, the terms "video coding" or "coding" may generally refer to video encoding and video decoding. In addition to video encoders and video decoders, the aspects described in this application are extendable to other related devices, such as transcoders (e.g., devices that can decode one bitstream and re-encode another bitstream) and middleboxes (e.g., devices that can modify, transform, and/or otherwise manipulate a bitstream).
As shown in fig. 1A, video coding system 10 includes a source device 12 that generates source video data 13 and encoded video data 16 to be later decoded by a destination device 14. In the example of fig. 1A, source device 12 and destination device 14 constitute separate devices. It should be noted, however, that source device 12 and destination device 14 may be on or part of the same device, as shown in the example of fig. 1B.
Referring again to fig. 1A, source device 12 and destination device 14 may each comprise any of a wide range of devices, including desktop computers, notebook (e.g., laptop) computers, tablet computers, set-top boxes, telephone handsets (e.g., so-called "smart" phones and so-called "smart" pads), televisions, cameras, display devices, digital media players, video game consoles, on-board computers, video streaming devices, video devices (e.g., goggles and/or wearable computers) that may be worn by (or removably attached to) an entity (e.g., a human, an animal, and/or another control device), devices or apparatuses that may be consumed, ingested, or placed within an entity, and/or the like. In various embodiments, source device 12 and destination device 14 may be equipped for wireless communication.
Destination device 14 may receive encoded video data 16 to be decoded via link 17. Link 17 may comprise any type of medium or device capable of moving encoded video data 16 from source device 12 to destination device 14. In the example of fig. 1A, link 17 may comprise a communication medium that enables source device 12 to transmit encoded video data 16 to destination device 14 in real time. Encoded video data 16 may be modulated according to a communication standard, such as a wireless communication protocol, and transmitted to destination device 14. The communication medium may include any wireless or wired communication medium, such as a Radio Frequency (RF) spectrum or one or more physical transmission lines. The communication medium may form part of a packet-based network, such as a local area network, a wide area network, or a global network such as the internet. The communication medium may include routers, switches, base stations, or any other equipment that may be useful to facilitate communication from source device 12 to destination device 14.
In the example of fig. 1A, source device 12 includes video source 18, video encoder 20, and output interface 22. In some cases, output interface 22 may include a modulator/demodulator (modem) and/or a transmitter. In source device 12, video source 18 may include, for example, a video capture device (e.g., a video camera), a video archive containing previously captured video, a video feed interface to receive video from a video content provider, and/or a computer graphics system for generating computer graphics data as the source video, or a combination of such sources. As one example, if video source 18 is a video camera, source device 12 and destination device 14 may form so-called "camera phones" or "video phones," as illustrated in the example of fig. 1B. Video source 18 may output the captured, pre-captured, or computer-generated video to video encoder 20 as a bitstream of source video data 13. However, the techniques described in this disclosure may be applicable to video coding in general, and may be applied to wireless and/or wired applications. For example, video source 18 may generate source video data 13 and output source video data 13 via a connection between video source 18 and video encoder 20. The connection may include any suitable wired connection (e.g., Universal Serial Bus (USB), FireWire, Thunderbolt, Light Peak, Digital Video Interface (DVI), High-Definition Multimedia Interface (HDMI), Video Graphics Array (VGA), etc.). The connection may also include any suitable wireless connection (e.g., Bluetooth, Wi-Fi, 3G, 4G, LTE-Advanced, 5G, etc.).
Source video data 13 may be received and encoded by video encoder 20. Encoded video data 16 may be transmitted to destination device 14 via output interface 22 of source device 12. Encoded video data 16 may also (or alternatively) be stored onto a storage device (not shown) for later access by destination device 14 or other devices for decoding and/or playback. The video encoder 20 illustrated in fig. 1A and 1B may comprise the video encoder 20 illustrated in fig. 2 or any other video encoder described herein.
In the example of fig. 1A, destination device 14 includes input interface 28, video decoder 30, and display device 32. In some cases, input interface 28 may include a receiver and/or a modem. Input interface 28 of destination device 14 may receive encoded video data 16 over link 17 and/or from a storage device. Encoded video data 16 communicated over link 17, or provided on a storage device, may include a variety of syntax elements generated by video encoder 20 for use by a video decoder, such as video decoder 30, in decoding the video data. Such syntax elements may be included with encoded video data 16 transmitted on a communication medium, stored on a storage medium, or stored on a file server. Video decoder 30 illustrated in fig. 1A and 1B may comprise video decoder 30 illustrated in fig. 5 or any other video decoder described herein.
The display device 32 may be integrated with the destination device 14 or external to the destination device 14. In some examples, destination device 14 may include an integrated display device and also be configured to interface with an external display device. In other examples, destination device 14 may be a display device. In general, display device 32 displays the decoded video data to a user, and may comprise any of a variety of display devices, such as a Liquid Crystal Display (LCD), a plasma display, an Organic Light Emitting Diode (OLED) display, or another type of display device.
In a related aspect, fig. 1B shows an example video coding system 10' in which source device 12 and destination device 14 are on device 11 or are part of device 11. The device 11 may be a telephone handset, such as a "smart" telephone or the like. Device 11 may include a processor/controller device 13 (optionally present) in operative communication with source device 12 and destination device 14. Video coding system 10' of fig. 1B and its components are otherwise similar to video coding system 10 and its components of fig. 1A.
Video encoder 20 and video decoder 30 may operate according to a video compression standard, such as DSC. Alternatively, video encoder 20 and video decoder 30 may operate according to other proprietary or industry standards, such as the ITU-T H.264 standard (alternatively referred to as MPEG-4, Part 10, AVC), HEVC, or extensions of such standards. However, the techniques of this disclosure are not limited to any particular coding standard. Other examples of video compression standards include MPEG-2 and ITU-T H.263.
Although not shown in the examples of fig. 1A and 1B, video encoder 20 and video decoder 30 may each be integrated with an audio encoder and decoder and may include appropriate MUX-DEMUX units, or other hardware and software, to handle encoding of both audio and video in a common data stream or separate data streams. In some examples, the MUX-DEMUX unit may conform to the ITU h.223 multiplexer protocol or other protocols, such as the User Datagram Protocol (UDP), as applicable.
Video encoder 20 and video decoder 30 may each be implemented as any of a variety of suitable encoder circuits, such as one or more microprocessors, Digital Signal Processors (DSPs), Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs), discrete logic, software, hardware, firmware or any combinations thereof. When the techniques are implemented in part in software, a device may store instructions for the software in a suitable non-transitory computer-readable medium and execute the instructions in hardware using one or more processors to perform the techniques of this disclosure. Each of video encoder 20 and video decoder 30 may be included in one or more encoders or decoders, either of which may be integrated as part of a combined encoder/decoder in the respective device.
Video coding process
As mentioned briefly above, video encoder 20 encodes source video data 13. Source video data 13 may include one or more pictures. Each of the pictures is a still image forming part of the video. In some cases, a picture may be referred to as a video "frame." When video encoder 20 encodes source video data 13, video encoder 20 may generate a bitstream. The bitstream may include a series of bits that form a coded representation of the video data. The bitstream may include coded pictures and associated data. A coded picture is a coded representation of a picture.
To generate the bitstream, video encoder 20 may perform an encoding operation on each picture in the video data. When video encoder 20 performs an encoding operation on a picture, video encoder 20 may generate a series of coded pictures and associated data. The associated data may include a set of coding parameters, such as a Quantization Parameter (QP). Quantization may introduce loss into the signal, and the amount of loss may be controlled by the QP determined by rate controller 120. Rate controller 120 is discussed in more detail in connection with fig. 2. Rather than storing the quantization step size for each QP, a scaling matrix may be specified as a function of the QP. The quantization step size for each QP may be derived from the scaling matrix, and the derived value is not necessarily a power of two.
To generate a coded picture, video encoder 20 may partition the picture into equally sized video blocks. The video blocks may be a two-dimensional array of samples. The coding parameters may define coding options (e.g., coding modes) for each block of video data. Coding options may be selected in order to achieve a desired rate-distortion performance.
In some examples, video encoder 20 may partition a picture into multiple slices. Each of the slices may include spatially distinct regions in an image (e.g., a frame) that may be independently decoded without information from the remaining regions in the image or frame. Each image or video frame may be encoded in a single slice or each image or video frame may be encoded in several slices. In DSC, the target bits assigned to encode each slice may be substantially fixed. As part of performing encoding operations on the picture, video encoder 20 may perform encoding operations on slices of the picture. When video encoder 20 performs an encoding operation on a slice, video encoder 20 may generate encoded data associated with the slice. The encoded data associated with a slice may be referred to as a "coded slice".
DSC video encoder
Fig. 2 is a block diagram illustrating an example of a video encoder 20 that may implement techniques in accordance with aspects described in this disclosure. Video encoder 20 may be configured to perform some or all of the techniques of this disclosure. In some examples, the techniques described in this disclosure may be shared among various components of video encoder 20. In some examples, additionally or alternatively, a processor (not shown) may be configured to perform some or all of the techniques described in this disclosure.
For purposes of explanation, this disclosure describes video encoder 20 in the context of DSC coding. However, the techniques of this disclosure may be applicable to other coding standards or methods.
In the example of fig. 2, video encoder 20 includes a plurality of functional components. Functional components of video encoder 20 include color space converter 105, buffer 110, flatness detector 115, rate controller 120, predictor, quantizer and reconstructor (PQR) component 125, distortion circuit 188, line buffer 130, indexed color history 135, entropy coder 140, substream multiplexer 145, and rate buffer 150. In other examples, video encoder 20 may include more, fewer, or different functional components.
Color space converter 105 may convert an input color space of source video data 13 to the color space used in a particular coding implementation. For example, the color space of source video data 13 may be the red-green-blue (RGB) color space, while coding may be implemented in the luminance Y, chrominance green Cg, and chrominance orange Co (YCgCo) color space. Color space conversion may be performed by methods including shifts and additions to the video data. It should be noted that input video data in other color spaces may be processed, and conversions to other color spaces may also be performed. In some implementations, the video data may bypass color space converter 105 if the color space of the input video data is already in the correct format for the particular coding mode. For example, if the input color space is RGB, the video data may bypass color space converter 105 for coding by the midpoint prediction mode, which may encode the video data in either an RGB or a luma-chroma representation.
In a related aspect, video encoder 20 may include buffer 110, line buffer 130, and/or rate buffer 150. For example, buffer 110 may save the color-space-converted video data before other portions of video encoder 20 use the video data. In another example, video data may be stored in the RGB color space and color space conversion may be performed as needed, as more bits may be required for the color space converted data.
Rate buffer 150 may serve as part of the rate control mechanism in video encoder 20, which will be described in more detail below in connection with rate controller 120. The number of bits consumed to encode each block can vary substantially based on the nature of the block. Rate buffer 150 may smooth the rate variations in the compressed video. In some embodiments, a Constant Bit Rate (CBR) buffer model is used, in which bits are taken out of the buffer at a constant bit rate. In the CBR buffer model, rate buffer 150 may overflow if video encoder 20 adds too many bits to the bitstream. On the other hand, video encoder 20 must add enough bits to prevent underflow of rate buffer 150.
On the video decoder side, bits may be added to rate buffer 155 of video decoder 30 at a constant bit rate (see fig. 5 described in more detail below), and video decoder 30 may remove a variable amount of bits for each block. To ensure proper decoding, rate buffer 155 of video decoder 30 should not "underflow" or "overflow" during decoding of the compressed bitstream.
In some embodiments, the Buffer Fullness (BF) may be defined based on a value BufferCurrentSize representing the number of bits currently in the buffer and BufferMaxSize representing the size of the rate buffer 150 (i.e., the maximum number of bits that may be stored in the rate buffer 150 at any point in time). Equation 1 below may be used to calculate BF:
BF=((BufferCurrentSize*100)/BufferMaxSize) (1)
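As a minimal sketch of equation 1, assuming an integer (fixed-point) implementation:

```python
def buffer_fullness(buffer_current_size: int, buffer_max_size: int) -> int:
    # Equation 1: BF as a percentage of the rate buffer capacity
    return (buffer_current_size * 100) // buffer_max_size

print(buffer_fullness(6144, 8192))  # -> 75 (buffer is 75% full)
```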
the flatness detector 115 may detect a change from a complex (i.e., non-flat) area in the video data to a flat (i.e., simple or uniform) area in the video data. The terms "complex" and "flat" will be used herein to generally refer to the difficulty with which video encoder 20 encodes respective regions of video data. Thus, the term complex as used herein generally describes regions of video data that are complex to encode for video encoder 20, and may, for example, include textured video data, high spatial frequencies, and/or other features that are complex to encode. The term flat, as used herein, generally describes a region of video data that is simple to encode for video encoder 20, and may, for example, include gradual gradients in video data, low spatial frequencies, and/or other features that are simple to encode. The transition between the complex region and the flat region may be used by video encoder 20 to reduce quantization artifacts in encoded video data 16. In particular, the rate controller 120 and PQR component 125 may reduce such quantization artifacts when transitions from complex regions to flat regions are identified.
Rate controller 120 determines a set of coding parameters, such as a QP. The QP may be adjusted by rate controller 120 based on the buffer fullness of rate buffer 150 and the image activity of the video data in order to maximize picture quality at a target bit rate while ensuring that rate buffer 150 does not overflow or underflow. Rate controller 120 also selects a particular coding option (e.g., a particular mode) for each block of the video data in order to achieve optimal rate-distortion performance. Rate controller 120 minimizes the distortion of the reconstructed image subject to the bit-rate constraint (i.e., the overall actual coding rate fits within the target bit rate). Thus, one purpose of rate controller 120 is to determine a set of coding parameters (e.g., QPs), coding modes, etc., that satisfy instantaneous and average constraints on the rate while maximizing rate-distortion performance. PQR component 125 may select a coding mode for each block from among a plurality of candidate coding modes based on rate control techniques. The rate control techniques may involve utilizing a buffer model, and a design consideration of the codec may include ensuring that rate buffer 150 is not in a state of underflow (e.g., fewer than zero bits in the buffer) or overflow (e.g., the buffer size has increased past a set/defined maximum size). In one embodiment, rate controller 120 may be designed to select the best coding mode for each block based on the trade-off between rate and distortion (e.g., the lowest-cost coding option in terms of the cost D + λ · R). Here, the parameter R refers to the bit rate of the current block, which may be the total number of bits for the current block transmitted between encoder 20 and decoder 30; the parameter D refers to the distortion of the current block, which may be the difference between the original block and the reconstructed block (or encoded block). The parameter D may be calculated in several different ways, e.g., as a Sum of Absolute Differences (SAD) between the original and reconstructed blocks (e.g., equations 4, 6, 8, 10, and 12), a sum of squared errors (e.g., equations 5, 7, 9, 11, and 13), and so on. The parameter λ, or λ value, is a Lagrangian parameter that mediates the trade-off between the parameters R and D. It should be noted that the Lagrangian parameter λ may be calculated in various ways, and the selected method of calculating λ may vary depending on the context and application. For example, the Lagrangian parameter λ may be calculated based on a number of factors, such as the state of the rate buffers (150, 155) (i.e., buffer fullness), the first-line or non-first-line condition of the block, and so on. Even for non-first-line conditions of the slice, the spatial prediction mode may be selected for multiple types of image content.
PQR component 125 may perform at least three encoding operations of video encoder 20. First, PQR component 125 may perform prediction in a number of different modes. One example prediction mode is a modified version of median adaptive prediction. Median adaptive prediction is implemented by the lossless JPEG standard (JPEG-LS). The modified version of median adaptive prediction that may be performed by PQR component 125 may allow three consecutive sample values to be predicted in parallel. Another example prediction mode is block prediction. In block prediction, samples are predicted from previously reconstructed pixels in the line above, or to the left in the same line. In some embodiments, video encoder 20 and video decoder 30 may both perform an identical search on the reconstructed pixels to determine the block prediction usages, and thus no bits need to be sent in the block prediction mode. In other embodiments, video encoder 20 may perform the search and signal the block prediction vectors in the bitstream, such that video decoder 30 need not perform a separate search. A midpoint prediction mode may also be implemented, in which samples are predicted using the midpoint of the component range. The midpoint prediction mode may enable bounding of the number of bits needed for the compressed video even in worst-case samples. PQR component 125 may be configured to predict (e.g., encode or decode) a block of video data (or any other prediction unit) by performing the methods and techniques of this disclosure.
The PQR component 125 also performs quantization. For example, quantization may be performed via a power-of-2 quantizer, which may be implemented using a shifter. It should be noted that other quantization techniques may be implemented in place of the power-of-2 quantizer. The quantization performed by the PQR component 125 may be based on the QP determined by rate controller 120. Finally, the PQR component 125 also performs reconstruction, which includes adding the inverse-quantized residue to the predicted value and ensuring that the result does not fall outside the valid range of sample values. Herein, the term "residue" may be used interchangeably with "residual."
It should be noted that the above example methods of prediction, quantization, and reconstruction performed by the PQR component 125 are merely illustrative, and other methods may be implemented. It should also be noted that the PQR component 125 may include subcomponents for performing prediction, quantization, and/or reconstruction. It should further be noted that the prediction, quantization, and/or reconstruction may be performed by several separate encoder components in place of the PQR component 125.
Still referring to fig. 2, the PQR component 125 may include a distortion circuit 188. The distortion circuit may correspond to a computing device for executing instructions related to the functions described below. The distortion circuit 188 may include a processor (e.g., a video processing unit or a general-purpose processing unit) and a memory collectively configured to manage the communication and execution of tasks. The distortion circuit 188 may receive an input of video data, the video data having a plurality of color spaces. For example, the color space of the input video data may be an RGB or RCT color space, or a luma-chroma representation such as YCbCr, YCoCg, or lossless YCoCg-R. The distortion circuit 188 may calculate the distortion of several coding modes when applied to the input video data. The distortion circuit 188 may determine, according to the calculated distortion and a cost function, the best coding mode to be used on a particular slice or block of the input video data, and provide this information to the PQR component 125. The cost function controls the rate-distortion trade-off of the coder. For example, a coding mode that produces the relatively lowest distortion may cause the buffer to overflow if its rate is too high. Alternatively, a relatively low rate may be acceptable, but at a cost to the quality of the image. Thus, the distortion circuit 188 provides the advantage of using rate control techniques to determine the best coding mode for each block or slice of received image data, such that image quality and buffer rate are maintained at acceptable levels.
Line buffer 130 holds the output from PQR component 125 so that PQR component 125 and indexed color history 135 can use the buffered video data. Indexed color history 135 stores the most recently used pixel values. These recently used pixel values may be referenced directly by video encoder 20 via a dedicated syntax.
The entropy encoder 140 encodes the prediction residual and any other data received from the PQR component 125 (e.g., the index identified by the PQR component 125) based on the indexed color history 135 and the flatness transitions identified by the flatness detector 115. In some examples, the entropy encoder 140 may encode three samples per substream encoder per clock. The substream multiplexer 145 may multiplex the bitstreams based on a header-free packet multiplexing scheme. This allows video decoder 30 to run three entropy decoders in parallel, facilitating three pixels per clock decoding. The sub-stream multiplexer 145 may optimize the packet order so that the packets may be efficiently decoded by the video decoder 30. It should be noted that different entropy coding methods may be implemented that may facilitate decoding of powers of 2 pixels per clock (e.g., 2 pixels/clock or 4 pixels/clock).
Calculation of distortion
In some embodiments, distortion circuit 188 of video encoder 20 may calculate the distortion for all coding modes in the same color space. For example, the distortion circuit may calculate the distortion for all coding modes in the same color space by applying the appropriate color transform. Suitable color transforms may refer to the various color transforms disclosed above. Examples of color transforms include converting an input RGB signal to a luma-chroma representation and converting a luma-chroma representation to an RGB signal. In one implementation, the distortion circuit 188 may perform a color transform on a set of residual blocks 340a through 340n, where the residual blocks 340a through 340n represent the differences between the original blocks (310, 315) and the reconstructed blocks 330 (or encoded blocks). For example, an original block (310, 315) may be one block of an input frame that has been partitioned into blocks or slices prior to encoding. A reconstructed block 330 may represent one of the original blocks, in one of a number of different color spaces, encoded using one of multiple coding modes 325. In another implementation, the distortion circuit 188 may perform the color transform on both the original blocks (310, 315) and the reconstructed blocks 330 prior to computing the residual blocks 340a through 340n.
Fig. 3 illustrates an example implementation of the distortion circuit 188 of fig. 2. The distortion circuit includes a plurality of functional components. The functional components of the distortion circuit include a block encoder 320, difference computations 335a through 335n components, and distortion computations 345a through 345n components. In other examples, distortion circuit 188 may include more, fewer, or different functional components.
Still referring to fig. 3, the distortion circuit 188 may receive the source video data 13 from the buffer 110, as well as the video data output from the color space converter 105. When the format of the source video data 13 is the RGB color space, the color space converter 105 may use a linear color transform to decorrelate the data. The color space converter 105 may use various color transforms, for example, transforms that convert RGB to a luma-chroma representation (e.g., YCbCr, YCoCg, or RCT as used in JPEG), including both the lossy (YCoCg) and lossless (YCoCg-R) versions of the RGB-to-YCoCg transform. In one implementation, color space converter 105 uses a reversible version of the color transform (e.g., YCoCg-R) such that the color transform does not introduce any loss. A reversible transform may require additional data bits for the chroma components. For example, for 8-bit RGB, the luma component or channel requires 8 bits, and each of the chroma components (Co and Cg) requires 9 bits. The YCoCg-R forward color transform may be given as:
Co = R − B
t = B + (Co >> 1)
Cg = G − t
Y = t + (Cg >> 1) (2)
The inverse color transform of YCoCg-R may be given as:
t = Y − (Cg >> 1)
G = Cg + t
B = t − (Co >> 1)
R = B + Co (3)
Ignoring the rounding in the lifting steps, equation 2 corresponds to the linear forward transform Y = R/4 + G/2 + B/4, Co = R − B, Cg = −R/2 + G − B/2, and equation 3 corresponds to the linear inverse transform R = Y + Co/2 − Cg/2, G = Y + Cg/2, B = Y − Co/2 − Cg/2; the matrix forms of these linear transforms are used below to derive distortion weights.
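As a brief sketch, the lifting steps of equations 2 and 3 can be implemented directly in integer arithmetic, and the round trip is exact, which is what makes YCoCg-R lossless; Python's >> provides the arithmetic (floor) shift assumed here:

```python
def rgb_to_ycocg_r(r: int, g: int, b: int):
    # Equation 2: forward YCoCg-R lifting steps
    co = r - b
    t = b + (co >> 1)
    cg = g - t
    y = t + (cg >> 1)
    return y, co, cg

def ycocg_r_to_rgb(y: int, co: int, cg: int):
    # Equation 3: inverse YCoCg-R lifting steps
    t = y - (cg >> 1)
    g = cg + t
    b = t - (co >> 1)
    r = b + co
    return r, g, b

# For 8-bit RGB, Y fits in 8 bits while Co and Cg each need 9 bits.
assert ycocg_r_to_rgb(*rgb_to_ycocg_r(200, 35, 120)) == (200, 35, 120)
```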
in the example equations above, the RGB and YCoCg color spaces each contain three color planes (i.e., R, G and B; or Y, Co and Cg). In video encoder 20, the rate-distortion ("RD") performance of each mode may be evaluated in either the YCoCg or RGB color spaces. For example, video encoder 20 may use pattern, MPP, and MPP fallback modes to evaluate RD performance in the RGB color space, while RD performance in the luma-chroma color space may use other modes. Source video data 13 received from video encoder 20 and color transformed data received from color space converter 105 may both be partitioned into blocks or slices. In one embodiment, source video data 13 may be split at any point (e.g., at video source 18) prior to being received by distortion circuit 188. In another embodiment, the distortion circuit 188 may split the source video data 13 to generate the RGB block 310 and the YCoCg block 315.
Still referring to fig. 3, the distortion circuit 188 may also include a block encoder 320. The block encoder 320 may include a processor (e.g., a video processing unit or a general-purpose processing unit) and a memory collectively configured to store instructions and perform tasks. The block encoder 320 may apply a number of coding modes 325 (also referred to herein as "mode 1", "mode 2", or "mode n") to each block based on the color space of each block. For example, the coding modes 325 for each block (310, 315) may include a transform mode (e.g., DCT, Hadamard), a block prediction mode, a Differential Pulse Code Modulation (DPCM) mode, a pattern mode, a midpoint prediction (MPP) mode, and/or a midpoint prediction fallback (MPPF) mode. The block encoder 320 may receive the RGB blocks 310 and the YCoCg blocks 315 and encode the blocks with any one of the number of coding modes 325. In one embodiment, block encoder 320 encodes each received block with all coding modes appropriate for the color space associated with that block. The block encoder 320 may output a number of reconstructed blocks 330, the reconstructed blocks 330 representing one of the received blocks (310, 315) encoded using the number of modes. For example, block 1 of the RGB blocks 310 may be encoded using the midpoint prediction mode and the transform mode from the coding modes 325. The block encoder 320 may then output two blocks corresponding to block 1, one encoded by the midpoint prediction mode and the other by the transform mode, each encoded block being an encoded representation of block 1 of the RGB blocks 310. The block encoder 320 generates the number of reconstructed blocks 330 so that the distortion circuit 188 may calculate, for each mode, the difference between the received blocks (i.e., the RGB blocks 310 and the YCoCg blocks 315) and the corresponding reconstructed blocks 330.
Still referring to fig. 3, the distortion circuit 188 may further include difference computation 335a through 335n components. The difference computation 335a through 335n components may include a processor (e.g., a video processing unit or a general-purpose processing unit) and a memory collectively configured to store instructions and perform tasks. The difference computation 335a through 335n components may compute the difference between each reconstructed block 330 and its corresponding original block (310, 315). For example, block encoder 320 may encode block 1 of the RGB blocks 310 using the midpoint prediction mode and the transform mode from the coding modes 325, and output two blocks corresponding to block 1, one encoded by the midpoint prediction mode and the other by the transform mode. The difference calculation 335a module may calculate the difference between block 1 of the RGB blocks 310 and the corresponding encoded block of mode 1 among the reconstructed blocks 330 (i.e., the block encoded by the midpoint prediction mode). The difference calculation 335b module may calculate the difference between block 1 of the RGB blocks 310 and the corresponding encoded block of mode 2 among the reconstructed blocks 330 (i.e., the block encoded by the transform mode). The difference computations 335a through 335n may generate the residual blocks 340a through 340n, where the residual blocks 340a through 340n represent the differences between the RGB blocks 310 and the YCoCg blocks 315 and their corresponding reconstructed blocks 330.
Still referring to fig. 3, distortion circuit 188 may perform distortion calculations 345a through 345n. Distortion calculations 345a through 345n may calculate the distortion for each residual block 340a through 340n. Distortion calculations 345a through 345n may include color space transform functions that convert the received residual blocks 340a through 340n to a uniform color space prior to calculating the distortion of the residual blocks 340a through 340n. Distortion circuit 188 may determine the best mode for a particular block based on the calculated distortions and output the block encoded by the best mode to PQR component 125. For example, if the source video data 13 input into the distortion circuit 188 is in the RGB color space, the block encoder 320 may encode block 1 in the RGB color space 310 using the midpoint prediction mode, thereby generating one encoded version of block 1 in the RGB color space. However, certain coding modes of the plurality of coding modes 325 may only encode video blocks in the luma-chroma color space. Thus, color space converter 105 may convert the color space of source video data 13 from the RGB color space to a luma-chroma representation (e.g., YCoCg). The block encoder 320 may encode block 1 in the YCoCg color space 315 with both the transform and pattern modes, thereby producing two encoded versions of block 1 in the YCoCg color space. The difference computations 335a through 335n may produce residual blocks 340a through 340n for block 1 in each mode. The distortion calculations 345a through 345n may perform a color space transform function on the residual blocks 340a through 340n in the RGB color space, or on the residual blocks 340a through 340n in the YCoCg color space, so that the distortion for each mode used on block 1 may be calculated in the same color space, as sketched below.
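The following Python sketch illustrates this idea under stated assumptions: residuals produced in the RGB color space are brought into YCoCg with the linearized forward matrix of equation 2, so that all candidate modes are compared in one space; the residual values are invented for illustration:

```python
import numpy as np

# Linearized RGB -> YCoCg residual transform (an illustrative assumption;
# a real coder would apply its own exact transform).
FWD = np.array([[0.25, 0.5, 0.25],    # Y  row
                [1.00, 0.0, -1.00],   # Co row
                [-0.50, 1.0, -0.50]]) # Cg row

def rgb_residual_to_ycocg(res):
    """res: (num_pixels, 3) array of RGB residuals -> YCoCg residuals."""
    return res @ FWD.T

def sad(res):
    return np.abs(res).sum()

# Invented residuals for two candidate modes, one evaluated in each space
rgb_residual = np.array([[3.0, -1.0, 2.0], [0.0, 4.0, -2.0]])
ycocg_residual = np.array([[1.0, 2.0, -1.0], [0.0, -3.0, 2.0]])

# Compare both modes in the same (YCoCg) color space
print(sad(rgb_residual_to_ycocg(rgb_residual)), sad(ycocg_residual))
```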
In one example, for all modes, the distortion circuit 188 may perform the distortion calculations 345a through 345n in either the RGB color space or the luma-chroma color space, where the distortion calculations include SAD (sum of absolute differences) or SSE (sum of squared errors). For example, when the YCoCg-R color space is used to calculate the distortion, the distortion of the chroma components may be normalized to account for their one extra bit. For example, YCoCg-R may use 8 bits for the luma component and 9 bits for each of the chroma components. The SAD in the YCoCg color space can be calculated as in equation 4:
SAD_YCoCg = SAD(Y) + (SAD(Co) + SAD(Cg) + offset) >> 1 (4)
wherein:
SAD(Y): the sum of absolute differences of the luma components of the block,
SAD(Co): the sum of absolute differences of the Co chrominance components of the block,
SAD(Cg): the sum of absolute differences of the Cg chrominance components of the block, and
offset: an optional value that can be used to round to the nearest integer; for example, the offset can be a value of 0 or 1.
It should be noted that the luma component (Y), or luma plane, and the chroma components (Co, Cg), or chroma planes, represent the luma and chroma values of each pixel in the block or slice being analyzed. For example, applying equation 4 to a block containing 16 pixels will result in calculating the SAD over each of the 16 sample luma values, the 16 sample Co values, and the 16 sample Cg values. In the resulting SAD_YCoCg value, the chroma terms are shifted to the right by 1 to effectively normalize the chroma components, accounting for the one extra bit in each component.
When SSE is used as a metric to calculate distortion, equation 5 may be used:
SSE_YCoCg = SSE(Y) + (SSE(Co) + SSE(Cg) + offset) >> 1 (5)
wherein:
SSE(Y): the sum of squared errors of the luma components of the block,
SSE(Co): the sum of squared errors of the Co chrominance components of the block,
SSE(Cg): the sum of squared errors of the Cg chrominance components of the block, and
offset: an optional value that can be used to round to the nearest integer; for example, the offset can be a value of 0 or 1.
It should be noted that the luma component (Y), or luma plane, and the chroma components (Co, Cg), or chroma planes, represent the luma and chroma values of each pixel in the block or slice being analyzed. For example, applying equation 5 to a block containing 16 pixels will result in calculating the SSE over each of the 16 sample luma values, the 16 sample Co values, and the 16 sample Cg values. In the resulting SSE_YCoCg value, the chroma terms are shifted to the right by 1 to effectively normalize the chroma components, accounting for the one extra bit in each component.
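A short Python sketch of equations 4 and 5, with illustrative 4 × 4 residual planes (the values are invented; only the arithmetic is taken from the equations):

```python
import numpy as np

def sad_ycocg(res_y, res_co, res_cg, offset=1):
    sad = lambda r: int(np.abs(r).sum())
    # Equation 4: SAD(Y) + (SAD(Co) + SAD(Cg) + offset) >> 1
    return sad(res_y) + ((sad(res_co) + sad(res_cg) + offset) >> 1)

def sse_ycocg(res_y, res_co, res_cg, offset=1):
    sse = lambda r: int((r.astype(np.int64) ** 2).sum())
    # Equation 5: SSE(Y) + (SSE(Co) + SSE(Cg) + offset) >> 1
    return sse(res_y) + ((sse(res_co) + sse(res_cg) + offset) >> 1)

# 16-pixel (4x4) residual planes with illustrative values
rng = np.random.default_rng(0)
y, co, cg = (rng.integers(-4, 5, (4, 4)) for _ in range(3))
print(sad_ycocg(y, co, cg), sse_ycocg(y, co, cg))
```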
Alternatively, the distortion circuit 188 may apply weights to each color plane in the YCoCg color space to account for human contrast sensitivity. Because human vision may be more sensitive to luma than to chroma, the distortion circuit 188 may apply a greater weight to the luma component and a lesser weight to the chroma components. For example, the SAD may be calculated as follows:
SAD_YCoCg = W_Y*SAD(Y) + W_Co*SAD(Co) + W_Cg*SAD(Cg) (6)
wherein W_Y, W_Co, and W_Cg are weights applied to the respective luma and chroma components. Similar visual weights may be used when SSE is used as a distortion metric:
SSE_YCoCg = W_Y*SSE(Y) + W_Co*SSE(Co) + W_Cg*SSE(Cg) (7)
for example, instead of applying a color transform to the residual blocks 340 a-340 n in distortion calculations 345 a-345 n, the distortion circuit 188 may derive weights from the color transform matrices of equations 2 and 3, and the distortion in the respective luma and chroma components may be weighted to calculate the distortion. This approach avoids the computation of the color transform performed in the distortion calculations 345 a-345 n, thereby simplifying the process. The distortion circuit 188 may be based on the column norm (e.g., l) of each of the three column values in the transform matrix2Norm (euclidean norm)) to compute the weight of each component. For example, when calculating the distortion of the coding mode in the RGB color space, the distortion of the coding mode operating in the YCoCg color space is calculated using the transform matrix of equation 3 in either of equations 8 and 9 as follows:
here, the weightL representing a column in a reverse transformation matrix (YCoCg to RGB)2And (4) norm.
SSEYCoCg=3SSEY+0.5SSECo+0.75SSECg (9)
Here, the weight (3,0.5,0.75) is expressed in the reverse transformation matrix (YCoCg to RGB)The square of the l2 norm of the corresponding column. Furthermore, fixed-point calculations may be used to calculate distortion, rather than floating-point calculations. For example, the weightCan be expressed by 8 bit fractional accuracy as
Alternatively, when the YCoCg color space is set as the color space for calculating distortion, weights may be derived based on the columns of the forward transform matrix to weight R, G and B distortion. For example, the SAD may be calculated as:
SAD_RGB = W_R*SAD(R) + W_G*SAD(G) + W_B*SAD(B) (10)
wherein W_R, W_G, and W_B are weights applied to the respective R, G, and B components. Similar visual weights may be used when SSE is used as a distortion metric:
SSE_RGB = W_R*SSE(R) + W_G*SSE(G) + W_B*SSE(B) (11)
it should be noted that the R component (R), G component (G), and B component (B) represent the red, green, and blue values for each pixel in the block or slice being analyzed. For example, applying equations 10 and 11 to a block containing 16 pixels will result in calculating the SAD and SSE for each of the red values of 16 samples, the green values of 16 samples, and the blue values of 16 samples. May be based on the column norm (e.g., l) of each of the three column values in the forward transform matrix2Norm (euclidean norm)) to compute the weight of each component. For example, when calculating the distortion of the coding mode in the YCoCg color space, the distortion of the coding mode operating in the RGB color space may be calculated using the forward transform matrix of equation 2 in either of equations 12 and 13 as follows:
SAD_RGB = √0.375*SAD(R) + √0.5*SAD(G) + √0.375*SAD(B)

where the weights (√0.375, √0.5, √0.375) represent the l2 norms of the corresponding columns in the forward transform matrix (RGB to YCoCg).
SSE_RGB = 0.375*SSE(R) + 0.5*SSE(G) + 0.375*SSE(B)

Here, the weights (0.375, 0.5, 0.375) represent the squares of the l2 norms of the corresponding columns in the forward transform matrix (RGB to YCoCg). Furthermore, fixed-point calculations may be used to calculate distortion rather than floating-point calculations.
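The fixed-point evaluation mentioned above may be sketched as follows; the 8-bit fractional format and helper names are illustrative assumptions rather than the disclosed implementation:

FRAC_BITS = 8

def to_fixed(w, frac_bits=FRAC_BITS):
    # Represent a fractional weight with 8-bit fractional precision.
    return round(w * (1 << frac_bits))   # e.g., 0.5 -> 128 (i.e., 128/256)

W_SSE_RGB_FX = [to_fixed(w) for w in (0.375, 0.5, 0.375)]   # [96, 128, 96]

def weighted_sse_rgb_fixed(sse_r, sse_g, sse_b):
    acc = (W_SSE_RGB_FX[0] * sse_r
           + W_SSE_RGB_FX[1] * sse_g
           + W_SSE_RGB_FX[2] * sse_b)
    return acc >> FRAC_BITS   # scale back to integer distortion units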
The techniques and methods described above are not limited to the RGB-to-YCoCg lossless color transform; they may be applied to any linear color transform, e.g., YCbCr or lossy YCoCg transforms. In this way, the techniques may use the same color space to calculate distortion for the various coding modes (e.g., all coding modes). Using the same color space makes the computation more efficient and improves performance. Depending on the embodiment, the examples and embodiments described in this disclosure may be implemented alone or in combination, certain features of the examples and embodiments may be omitted or changed, and other features may be added.
Fig. 4 illustrates an alternative embodiment of the distortion circuit 188 that is substantially similar to fig. 3. In this embodiment, the color transform and distortion calculations 345a-345n may be applied to both the original and reconstructed blocks prior to the generation of the difference calculations 335a-335n and residual blocks 340a-340n. All other functional blocks of the distortion circuit 188 in fig. 4 operate in a manner similar to the corresponding functional blocks of the distortion circuit 188 of fig. 3. It should be noted that although fig. 4 illustrates the difference calculations 335a-335n being made based on the outputs of the distortion calculations 345a-345n, alternative embodiments may make the difference calculations in a manner similar to fig. 3.
DSC video decoder
Fig. 5 is a block diagram illustrating an example of a video decoder 30 that may implement techniques in accordance with aspects described in this disclosure. Video decoder 30 may be configured to perform some or all of the techniques of this disclosure. In some examples, the techniques described in this disclosure may be shared among various components of video decoder 30. In some examples, additionally or alternatively, a processor (not shown) may be configured to perform some or all of the techniques described in this disclosure.
For purposes of explanation, this disclosure describes video decoder 30 in the context of DSC coding. However, the techniques of this disclosure may be applicable to other coding standards or methods.
In the example of fig. 5, video decoder 30 includes a plurality of functional components. Functional components of video decoder 30 include rate buffer 155, substream demultiplexer 160, entropy decoder 165, rate controller 170, predictor, quantizer and reconstructor (PQR) component 175, indexed color history 180, line buffer 185, and color space converter 190. The illustrated components of video decoder 30 are similar to the corresponding components described above in connection with video encoder 20 in fig. 2. Thus, each of the components of video decoder 30 may operate in a similar manner to the corresponding components of video encoder 20 described above.
Still referring to fig. 5, rate buffer 155 of video decoder 30 may be part of the physical memory used to store compressed video data received from input interface 28 of fig. 1B. Rate buffer 155 may receive compressed video data at a certain bit rate and output a compressed video stream at a certain constant bit rate. To ensure proper decoding, rate buffer 155 of video decoder 30 should not "underflow" or "overflow" during decoding of the compressed bitstream. In some embodiments, buffer fullness (BF) may be defined based on a value BufferCurrentSize representing the number of bits currently in the buffer and BufferMaxSize representing the size of rate buffer 155 (i.e., the maximum number of bits that may be stored in rate buffer 155 at any point in time), as mentioned in equation 1 above. Rate buffer 155 may smooth rate variations in the compressed video. Rate buffer 155 may serve as part of a rate control mechanism in video decoder 30, which is described in more detail below in conjunction with rate controller 170.
BF may be calculated in other ways, and the selected method of BF calculation may vary depending on the context and application. In another example, BF may be normalized to the range 0 to 1 by dividing BF by 100. The normalized BF value may then be used to calculate the lambda (λ) value. The buffer-fullness-based λ value may be calculated as a function of the normalized fullness with tunable parameters {Λ, a1, b1, c1, d1}, where x ∈ [0,1] is calculated as x = BF/100 and BF is expressed as a percentage (e.g., the percentage of occupied bits in the buffer).
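For illustration only, the sketch below normalizes the buffer fullness and evaluates a hypothetical smooth function of x; the text above specifies only that λ depends on x through the tunable parameters {Λ, a1, b1, c1, d1}, so the cubic form used here is an assumption:

def normalized_fullness(buffer_current_size, buffer_max_size):
    bf_percent = 100.0 * buffer_current_size / buffer_max_size   # BF as a percentage
    return bf_percent / 100.0                                    # x in [0, 1]

def buffer_lambda(x, big_lambda=1.0, a1=1.0, b1=1.0, c1=1.0, d1=0.1):
    # Hypothetical cubic-in-x curve standing in for the tunable function.
    return big_lambda * (a1 * x ** 3 + b1 * x ** 2 + c1 * x + d1)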
Still referring to fig. 5, the substream demultiplexer 160 may comprise an integrated circuit device that receives compressed video data from the rate buffer 155 and outputs the data using a number of output lines connected to the entropy decoder 165, the active output line being determined by a select input. The substream demultiplexer 160 may be arranged to divide the received compressed video data into one or more substream bitstreams for transmission over one or more channels. The one or more bitstreams may be output to one or more entropy decoders 165 for decoding. The substream demultiplexer 160 may serve as a complementary device for demultiplexing the multiplexed data output from the substream multiplexer 145 of video encoder 20.
Still referring to fig. 5, entropy decoder 165 may include electronic circuitry, such as a video processing unit or a general purpose processing unit. Entropy decoder 165 may receive compressed video data from the substream demultiplexer 160 and may parse the compressed video data to obtain syntax elements from the bitstream, entropy decoding any entropy-encoded syntax elements. The received compressed video data may include coded slice data. As part of decoding the bitstream, entropy decoder 165 may extract and entropy decode syntax elements from the coded slice data. Each of the coded slices may include a slice header and slice data, and the slice header may contain syntax elements that pertain to the slice. Entropy decoder 165 may deliver motion vectors and other syntax elements to PQR component 175. Video decoder 30 may receive syntax elements at the video slice level and/or the video block level. Entropy decoder 165 may serve as a complementary means for decoding data encoded by entropy encoder 140 of video encoder 20. PQR component 175 may generate decoded video data based on the syntax elements extracted from the bitstream.
Still referring to fig. 5, rate controller 170 may include electronic circuitry, such as a video processing unit or a general purpose processing unit. Rate controller 170 may receive the entropy-decoded bitstream as an input from entropy decoder 165 and determines a set of coding parameters (e.g., a QP). The QP may be adjusted by rate controller 170 based on the buffer fullness of rate buffer 155 and the image activity of the video data in order to maximize picture quality at the target bit rate while ensuring that rate buffer 155 does not overflow or underflow. Rate controller 170 also selects a particular coding option (e.g., a particular mode) for each block of video data in order to achieve optimal rate-distortion performance. Rate controller 170 minimizes the distortion of the reconstructed image subject to the bit-rate constraint (i.e., the overall actual coding rate fits within the target bit rate). In other words, the rate controller prevents buffer failure by keeping the coded bit rate of the blocks within the available bit budget.
Still referring to FIG. 5, indexed color history 180 may comprise electronic circuitry, such as a video processing unit or a general purpose processing unit, any of which comprises memory. Indexed color history 180 may receive a bitstream of compressed video from one or more entropy decoders 165 and may also receive data from PQR component 175. Indexed color history 180 may store recently used pixel values. These recently used pixel values may be referenced directly by the PQR component 175 via a dedicated syntax. Advantages of using indexed color history 180 include managing the colors of digital images to speed up display updates and data transfers.
Still referring to FIG. 5, the line buffer 185 may comprise an electronic circuit, such as a memory device implemented on an integrated circuit. Line buffer 185 stores the output from PQR component 175 so that PQR component 175 and indexed color history 180 can use the buffered video data.
Still referring to fig. 5, the PQR component 175 may comprise electronic circuitry, such as a video processing unit or a general purpose processing unit. PQR component 175 may perform at least three decoding operations of video decoder 30. For example, PQR component 175 may perform prediction in several different modes. If a video slice is coded as an intra-coded slice, PQR component 175 may generate prediction data for the video blocks of the current video slice based on the signaled intra-prediction mode and data from previously decoded blocks of the current frame or picture. If a video slice is coded as an inter-coded slice, PQR component 175 may generate predictive blocks for the video blocks of the current video slice based on the motion vectors and other syntax elements received from entropy decoder 165. The prediction process may provide the resulting intra- or inter-coded block to a summer or reconstructor to generate residual block data and reconstruct the decoded block.
The PQR component 175 also performs inverse quantization. The residual block may be determined via inverse quantization. For example, the inverse quantization process de-quantizes the quantized transform coefficients provided in the bitstream and decoded by entropy decoder 165. The inverse quantization process may include using a quantization parameter calculated by video encoder 20 for each video block in the video slice to determine the degree of quantization and, likewise, the degree of inverse quantization that should be applied. PQR component 175 may include an inverse transform process that applies an inverse transform (e.g., an inverse DCT, an inverse integer transform, or a conceptually similar inverse transform process) to the transform coefficients in order to generate residual blocks in the pixel domain. PQR component 175 may be used as a complementary device for inverse quantizing the data output from PQR component 125 of video encoder 20.
The PQR component 175 also performs reconstruction. The PQR component 175 may reconstruct the residual block in the pixel domain for later use as a reference block. For example, in a luma-chroma representation, the reconstructor may use the residual values from the luma, Cb, and Cr transform blocks associated with the Transform Units (TUs) of a Coding Unit (CU), together with the predictive luma, Cb, and Cr blocks of the Prediction Units (PUs) of the CU (i.e., intra-prediction data or inter-prediction data, as applicable), to reconstruct the luma, Cb, and Cr coding blocks of the CU. For example, the reconstructor of PQR component 175 may add samples of the luma, Cb, and Cr transform blocks to corresponding samples of the predictive luma, Cb, and Cr blocks to reconstruct the luma, Cb, and Cr coding blocks of the CU.
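A minimal sketch of this reconstruction step, assuming one color plane of integer samples and a clip to the valid sample range (names are illustrative):

def reconstruct_plane(pred, resid, bit_depth=8):
    # recon = clip(pred + residual) for each sample of the plane.
    max_val = (1 << bit_depth) - 1
    return [min(max(p + r, 0), max_val) for p, r in zip(pred, resid)]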
Referring again to FIG. 5, color space converter 190 may comprise an electronic circuit, such as a video processing unit or a general purpose processing unit. Color space converter 190 may convert the color space used in the coding implementation to the color space used in the display implementation. For example, the color space received by color space converter 190 may be the luma (Y), chroma green (Cg), and chroma orange (Co) color space (YCgCo) used by the coding implementation, while the display implementation may use the red-green-blue (RGB) color space. The color space conversion may be performed by various methods, including shifting and adding operations on the video data, as mentioned in equations 2 and 3 above. It should be noted that input video data in other color spaces may be processed, and that conversions to other color spaces may also be performed.
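Assuming equations 2 and 3 correspond to the lossless shift-and-add YCoCg-R form commonly associated with DSC-style codecs (an assumption; the exact matrices are defined earlier in this document), the decoder-side conversion may be sketched as:

def ycocg_r_to_rgb(y, co, cg):
    # Lossless YCoCg-R inverse transform using only shifts and adds.
    t = y - (cg >> 1)
    g = cg + t
    b = t - (co >> 1)
    r = b + co
    return r, g, b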
In a related aspect, video decoder 30 may include rate buffer 155 and/or line buffer 185. For example, rate buffer 155 may save the color-space converted video data before other portions of video decoder 30 use the video data. In another example, video data may be stored in a luma-chroma color space and color space conversion may be performed as needed, as more bits may be required for the color space converted data.
FIG. 6 is a flow diagram illustrating an example of a process 400 for determining a coding mode for a first video block (310, 315). At block 405, distortion circuit 188 of video encoder 20 receives the first video block (310, 315) in at least one color space. In some implementations, the first video block (310, 315) can be received in several different color spaces. For example, the first video block (310, 315) may be received in an RGB color space and in a luma-chroma color space.
Still referring to fig. 6, at block 410, block encoder 320 of distortion circuit 188 generates a plurality of reconstructed video blocks 330. The plurality of reconstructed video blocks 330 represent the first video block (310, 315) encoded using the plurality of coding modes 325; in other words, each of the plurality of reconstructed video blocks 330 is a copy of the first video block (310, 315) that has been reconstructed using one of a number of coding modes. In some implementations, block encoder 320 may encode the first video block (310, 315) using a number of coding modes that are color-space compatible with the first video block (310, 315). For example, the block encoder 320 may encode the first video block in the RGB color space 310 using a midpoint prediction mode. The block encoder 320 may also encode the first video block in the YCoCg color space 315 using a transform mode. In this example, block encoder 320 generates a plurality of reconstructed video blocks 330 that are represented in different color spaces and that indicate the first video block (310, 315).
Still referring to fig. 6, at block 415, the distortion circuit 188 selects one of a plurality of color spaces. In one implementation, distortion circuit 188 may determine, from the plurality of reconstructed video blocks 330, the number of reconstructed blocks in the RGB color space and the number of reconstructed blocks in a luma-chroma representation. The distortion circuit 188 may reduce the computation at block 420 by selecting the color space that represents the larger portion of the reconstructed video blocks 330. In another implementation, the user may select the color space, or the distortion circuit 188 may be preprogrammed to select a particular color space.
Still referring to FIG. 6, at block 420, distortion circuit 188 applies a color transform to each reconstructed video block of the plurality of reconstructed video blocks 330 that is not in the selected color space. The color transform may use the color transform matrices of equations 2 and 3, where each matrix includes a number of columns equal to the number of color planes in the color space. In one implementation, distortion circuit 188 applies the color transform to a number of residual blocks 340a-340n, where each residual block represents the difference between the first video block (310, 315) and one of the plurality of reconstructed video blocks 330. In another implementation, distortion circuit 188 applies the color transform to both the first video block (310, 315) and each of the plurality of reconstructed video blocks 330 prior to computing the residual blocks 340a-340n.
Still referring to fig. 6, at block 425, the distortion circuit 188 determines a distortion value for each of the plurality of residual blocks 340a-340n. In another implementation, distortion circuit 188 determines a distortion value for each of the plurality of reconstructed video blocks 330. In either implementation, the distortion circuit 188 may calculate the distortion value in either the RGB color space or a luma-chroma color space, where the distortion value may be the SAD or SSE of (i) each of the reconstructed blocks 330 or (ii) each of the residual blocks 340a-340n. In another implementation, when the selected color space is a luma-chroma color space, the distortion circuit 188 may normalize the calculated distortion value to account for the additional bit in the chroma components. For example, the distortion circuit 188 may shift the distortion value calculated by SAD or SSE to the right by 1. In yet another implementation, the distortion circuit 188 may apply weighting values to the SAD and SSE calculations in the luma-chroma color space. For example, the distortion circuit 188 may calculate a weight for each color plane in the luma-chroma color space based on the column norm (Euclidean norm) of the corresponding column in the color transform matrix.
Still referring to fig. 6, at block 430, the distortion circuit 188 determines the best coding mode of the plurality of coding modes 325 based on a cost function that considers both the bit rate and the distortion value. In one implementation, distortion circuit 188 determines the coding mode using the cost function C = D + λ*R, which trades off bit rate against distortion. Here, the parameter R refers to the bit rate of the first video block (310, 315), which may be the total number of bits transmitted between encoder 20 and decoder 30 for the first video block (310, 315). The parameter D refers to the distortion of the first video block (310, 315). The parameter λ is a Lagrangian parameter that controls the trade-off between R and D. It should be noted that the Lagrangian parameter λ may be calculated in various ways, and the selected method of λ calculation may vary depending on the context and application. For example, the video encoder may calculate the Lagrangian parameter λ based on a number of factors, such as the state of the rate buffer (150, 155), the characteristics of the first video block (310, 315), and so on.
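A minimal sketch of this rate-distortion mode selection, with hypothetical mode names and measured (D, R) values for one block:

def best_mode(candidates, lam):
    # candidates: iterable of (mode_name, distortion_D, rate_bits_R);
    # pick the mode minimizing cost C = D + lam * R.
    return min(candidates, key=lambda c: c[1] + lam * c[2])[0]

modes = [("midpoint", 120.0, 96), ("transform", 80.0, 160), ("pattern", 60.0, 256)]
assert best_mode(modes, lam=0.5) == "transform"   # 80 + 0.5*160 = 160 is the minimum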
Still referring to fig. 6, at block 435, video encoder 20 communicates a first encoded video block to the destination device, the first encoded video block representing the first video block (310, 315) encoded using the determined best coding mode.
Other considerations
It should be noted that aspects of the present disclosure have been described from the perspective of an encoder (e.g., video encoder 20 in fig. 2). However, those skilled in the art will appreciate that operations inverse to those described above may be applied to decode the generated bitstream by, for example, video decoder 30 in fig. 5.
Information and signals disclosed herein may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.
The various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
Thus, the techniques described herein may be implemented in hardware, software, firmware, or any combination thereof. Such techniques may be implemented in any of a variety of devices, such as a general purpose computer, a wireless communication device handset, or an integrated circuit device having multiple uses. Any features described as devices or components may be implemented together in an integrated logic device or separately as discrete but interoperable logic devices. If implemented in software, the techniques may be realized at least in part by a computer-readable data storage medium comprising program code comprising instructions that, when executed, perform one or more of the methods described above. The computer-readable data storage medium may form part of a computer program product, which may include packaging materials. The computer-readable medium may include memory or data storage media such as Random Access Memory (RAM), e.g., Synchronous Dynamic Random Access Memory (SDRAM), read-only memory (ROM), non-volatile random access memory (NVRAM), electrically erasable programmable read-only memory (EEPROM), FLASH memory, magnetic or optical data storage media, and the like. The techniques additionally, or alternatively, may be realized at least in part by a computer-readable communication medium that carries or communicates program code, such as a propagated signal or wave, in the form of instructions or data structures and that can be accessed, read, and/or executed by a computer.
The program code may be executed by a processor, which may include one or more processors, such as one or more Digital Signal Processors (DSPs), general purpose microprocessors, Application Specific Integrated Circuits (ASICs), field programmable logic arrays (FPGAs), or other equivalent integrated or discrete logic circuitry. Such a processor may be configured to perform any of the techniques described in this disclosure. A general purpose processor may be a microprocessor; but, in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. Thus, the term "processor," as used herein may refer to any of the foregoing structure, any combination of the foregoing structure, or any other structure or device suitable for implementation of the techniques described herein. Additionally, in some aspects, the functionality described herein may be provided within dedicated software or hardware configured for encoding and decoding, or incorporated in a combined video encoder-decoder (codec). Also, the techniques may be fully implemented in one or more circuits or logic elements.
The techniques of this disclosure may be implemented in a wide variety of devices or apparatuses, including a wireless handset, an Integrated Circuit (IC), or a set of ICs (e.g., a chipset). Various components or units are described in this disclosure to emphasize functional aspects of devices configured to perform the disclosed techniques, but do not necessarily require realization by different hardware units. Instead, as described above, the various units may be combined in a codec hardware unit or provided by a collection of interoperative hardware units, including one or more processors as described above, along with suitable software and/or firmware.
While the foregoing has been described in connection with various embodiments, features or elements from one embodiment may be combined with other embodiments without departing from the teachings of the present disclosure. However, the combination of features between the respective embodiments is not necessarily limited thereto. Various embodiments of the present disclosure have been described. These and other embodiments are within the scope of the following claims.

Claims (19)

1. An apparatus for coding video data, comprising:
a memory for storing the video data and information regarding a plurality of coding modes, the video data comprising a plurality of video blocks; and
a hardware processor operatively coupled to the memory and configured to:
generating a plurality of reconstructed video blocks representing one video block of the plurality of video blocks encoded based on a plurality of color spaces of the video block using the plurality of coding modes,
selecting one of the plurality of color spaces for a reconstructed video block of the plurality of reconstructed video blocks,
applying a color transform to each reconstructed video block of the plurality of reconstructed video blocks that is not in the selected color space and verifying that all of the reconstructed video blocks of the plurality of reconstructed video blocks are in the selected color space,
determining a distortion value for each of the plurality of reconstructed video blocks based on the selected color space,
determining an optimal coding mode from the plurality of coding modes based on the respective distortion value for each of the plurality of reconstructed video blocks, and
encoding the video block using the determined optimal coding mode.
2. The apparatus of claim 1, wherein the hardware processor is further configured to:
determining an initial color space for each video block of the plurality of video blocks, the initial color space being the color space for each video block prior to applying the color transform;
determining which of the plurality of coding modes are compatible with the initial color space; and
encoding the video block of the plurality of video blocks with the compatible coding mode to provide an encoded block.
3. The apparatus of claim 1, wherein the hardware processor is further configured to:
determining which of the plurality of coding modes are not compatible with an initial color space, the initial color space being the color space of each video block prior to applying the color transform;
applying the color transform to the initial color space to provide a compatible color block; and
encoding the compatible color block with the coding mode that is not compatible with the initial color space to provide an encoded block.
4. The apparatus of claim 1, wherein the hardware processor is further configured to calculate a residual block from the video block and the reconstructed video block, the residual block indicating a difference between the video block and the reconstructed video block.
5. The apparatus of claim 4, wherein determining the distortion value comprises determining the distortion value for the residual block.
6. The apparatus of claim 1, wherein the selected color space comprises a luma-chroma color space and wherein determining the distortion value comprises normalizing various chroma components of the luma-chroma color space.
7. The apparatus of claim 1, wherein the video block comprises a number of color planes, and wherein determining the distortion value for the reconstructed video block comprises at least one of:
a sum of absolute differences of each of the plurality of color planes, an
A sum of squared errors for each of the plurality of color planes.
8. The apparatus of claim 1, wherein the color transform is based on a transform matrix defined by a number of columns indicating a number of color planes of the selected color space, and wherein the hardware processor is further configured to determine weight values based on a Euclidean norm of one of the number of columns.
9. The apparatus of claim 8, wherein the distortion value for the transformed reconstructed video block is based on at least one of:
a sum of absolute differences of each color plane of the plurality of color planes, wherein each color plane is multiplied by a corresponding weight value of the plurality of weight values, an
A sum of squared errors of each color plane of the plurality of color planes, wherein each color plane is multiplied by the corresponding weight value of the plurality of weight values.
10. The apparatus of claim 1, wherein the selected color space is in at least one of a luma-chroma color space and an RGB color space.
11. The apparatus of claim 1, wherein determining a distortion value further comprises determining a coding mode of the plurality of coding modes based on: (i) the distortion value for each of the plurality of reconstructed video blocks, (ii) a lambda value, and (iii) a bitstream rate at which the one video block is conveyed.
12. A method of coding video data, comprising:
generating a plurality of reconstructed video blocks representing one of a plurality of video blocks that are encoded based on a plurality of color spaces of the video blocks using a plurality of coding modes,
selecting one of the plurality of color spaces for a reconstructed video block of the plurality of reconstructed video blocks;
applying a color transform to each reconstructed video block of the plurality of reconstructed video blocks that is not in the selected color space and verifying that all of the reconstructed video blocks of the plurality of reconstructed video blocks are in the selected color space,
determining a distortion value for each of the plurality of reconstructed video blocks based on the selected color space,
determining an optimal coding mode from the plurality of coding modes based on the respective distortion value for each of the plurality of reconstructed video blocks, and
encoding the video block using the determined optimal coding mode.
13. The method of claim 12, further comprising:
determining an initial color space for each video block of the plurality of video blocks, the initial color space being the color space for each video block prior to applying the color transform;
determining which of a plurality of coding modes are compatible with the initial color space; and
encoding the video block of the plurality of video blocks with a compatible coding mode to provide an encoded block.
14. The method of claim 12, further comprising:
determining which of a plurality of coding modes are not compatible with an initial color space, the initial color space being the color space of each video block prior to applying the color transform;
applying the color transform to the initial color space to provide a compatible color block; and
the compatible color block is encoded with a coding mode that is not compatible with the initial color space to provide an encoded block.
15. The method of claim 12, further comprising calculating a residual block from the video block and the reconstructed video block, the residual block indicating a difference between the video block and the reconstructed video block.
16. The method of claim 12, wherein determining a distortion value further comprises determining a coding mode of the plurality of coding modes based on: (i) the distortion value for each of the plurality of reconstructed video blocks, (ii) a lambda value, and (iii) a bitstream rate at which the one video block is conveyed.
17. A non-transitory computer-readable medium comprising instructions that, when executed by a device, cause the device to:
generating a plurality of reconstructed video blocks representing one of a plurality of video blocks that are encoded based on a plurality of color spaces of the video blocks using a plurality of coding modes,
selecting one of the plurality of color spaces for a reconstructed video block of the plurality of reconstructed video blocks, applying a color transform to each reconstructed video block of the plurality of reconstructed video blocks that is not in the selected color space, and verifying that all of the reconstructed video blocks of the plurality of reconstructed video blocks are in the selected color space,
determining a distortion value for each of the plurality of reconstructed video blocks based on the selected color space,
determining an optimal coding mode from the plurality of coding modes based on the respective distortion value for each of the plurality of reconstructed video blocks, and
encoding the video block using the determined optimal coding mode.
18. The non-transitory computer-readable medium of claim 17, further comprising:
determining an initial color space for each video block of the plurality of video blocks, the initial color space being the color space for each video block prior to applying the color transform;
determining which of a plurality of coding modes are compatible with the initial color space; and
encoding the video block of the plurality of video blocks with a compatible coding mode to provide an encoded block.
19. The non-transitory computer-readable medium of claim 17, further comprising:
determining which of a plurality of coding modes are not compatible with an initial color space, the initial color space being the color space of each video block prior to applying the color transform;
applying the color transform to the initial color space to provide a compatible color block; and
the compatible color block is encoded with a coding mode that is not compatible with the initial color space to provide an encoded block.