
HK1241179A1 - Dynamic range adjustment for high dynamic range and wide color gamut video coding - Google Patents

Dynamic range adjustment for high dynamic range and wide color gamut video coding Download PDF

Info

Publication number
HK1241179A1
HK1241179A1 (application HK18100339.0A)
Authority
HK
Hong Kong
Prior art keywords
video data
color
video
dynamic range
range adjustment
Prior art date
Application number
HK18100339.0A
Other languages
Chinese (zh)
Inventor
Dmytro Rusanovskyy
Done Bugdayci Sansli
Joel Sole Rojas
Marta Karczewicz
Sungwon Lee
Adarsh Krishna Ramasubramonian
Original Assignee
Qualcomm Incorporated
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qualcomm Incorporated
Publication of HK1241179A1


Description

Dynamic range adjustment for high dynamic range and wide color gamut video coding
This application claims the benefit of U.S. Provisional Application No. 62/149,446, filed April 17, 2015, the entire contents of which are incorporated herein by reference.
Technical Field
The present invention relates to video processing.
Background
Digital video capabilities can be incorporated into a wide variety of devices, including digital televisions, digital direct broadcast systems, wireless broadcast systems, Personal Digital Assistants (PDAs), laptop or desktop computers, tablet computers, electronic book readers, digital cameras, digital recording devices, digital media players, video gaming devices, video gaming consoles, cellular or satellite radio telephones, so-called "smart phones," video teleconferencing devices, video streaming devices, and the like. Digital video devices implement video coding techniques, such as those described in the standards defined by MPEG-2, MPEG-4, ITU-T H.263, ITU-T H.264/MPEG-4 Part 10 Advanced Video Coding (AVC), and ITU-T H.265 (also known as High Efficiency Video Coding (HEVC)), and extensions of such standards. Video devices may transmit, receive, encode, decode, and/or store digital video information more efficiently by implementing such video coding techniques.
Video coding techniques include spatial (intra-picture) prediction and/or temporal (inter-picture) prediction to reduce or remove redundancy inherent in a video sequence. For block-based video coding, a video slice (e.g., a video frame or a portion of a video frame) may be partitioned into video blocks, which may also be referred to as treeblocks, Coding Units (CUs), and/or coding nodes. Video blocks in an intra-coded (I) slice of a picture are encoded using spatial prediction with respect to reference samples of neighboring blocks in the same picture. Video blocks in inter-coded (P or B) slices of a picture may use spatial prediction with respect to reference samples of neighboring blocks in the same picture or temporal prediction with respect to reference samples in other reference pictures. A picture may be referred to as a frame and a reference picture may be referred to as a reference frame.
Spatial or temporal prediction results in a predictive block for the block to be coded. The residual data represents pixel differences between the original block to be coded and the predictive block. An inter-coded block is encoded according to a motion vector that points to a block of reference samples forming the predictive block, and residual data indicating the difference between the coded block and the predictive block. An intra-coded block is encoded according to an intra-coding mode and the residual data. For further compression, the residual data may be transformed from the pixel domain to a transform domain, producing residual transform coefficients, which may then be quantized. The quantized transform coefficients, initially arranged in a two-dimensional array, may be scanned in order to generate a one-dimensional vector of transform coefficients, and entropy coding may be applied to achieve a greater degree of compression.
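As a rough illustration of the hybrid coding steps described above, the following sketch forms a residual block, transforms it with an orthonormal DCT, quantizes the coefficients, and scans them into a one-dimensional vector. This is a simplified, non-conformant example: the 2-D DCT, the flat quantization step q_step, and the row-by-row scan are assumptions for illustration and do not reproduce HEVC's actual transforms, quantizer, or scan orders.

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis of size n x n.
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n).reshape(1, -1)
    m = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    m[0, :] = np.sqrt(1.0 / n)
    return m

def encode_block(original, prediction, q_step=16.0):
    """Residual -> 2-D transform -> quantization -> scan, for one square block."""
    residual = original.astype(np.float64) - prediction.astype(np.float64)
    d = dct_matrix(residual.shape[0])
    coeffs = d @ residual @ d.T                  # residual transform coefficients
    quantized = np.round(coeffs / q_step)        # lossy uniform quantization
    return quantized.flatten()                   # 1-D vector for entropy coding
```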
The total number of color values that can be captured, decoded, and displayed may be defined by a color gamut. Color gamut refers to the range of colors that a device can capture (e.g., a camera) or reproduce (e.g., a display). In general, the color gamut differs from device to device. For video coding, a predefined color gamut of video data may be used, such that each device in the video coding process may be configured to process pixel values in the same color gamut. Some color gamuts are defined with a greater range of colors than color gamuts traditionally used for video coding. Such a color gamut with a larger range of colors may be referred to as a Wide Color Gamut (WCG).
Another aspect of video data is dynamic range. Dynamic range is typically defined as the ratio between the minimum and maximum luminance (e.g., brightness) of a video signal. The dynamic range of video data commonly used in the past is considered to be Standard Dynamic Range (SDR). Other example specifications for video data define color data having a larger ratio between minimum and maximum luminance. Such video data may be considered to have High Dynamic Range (HDR).
Disclosure of Invention
The disclosure relates to processing video data, including processing video data to conform to an HDR/WCG color container. As will be explained in more detail below, the techniques of this disclosure apply Dynamic Range Adjustment (DRA) parameters to video data in order to make better use of an HDR/WCG color container. The techniques of this disclosure may also include signaling syntax elements that allow a video decoder or video post-processing device to reverse the DRA techniques of this disclosure to reconstruct the original or native color container of the video data.
In one example of the present invention, a method of processing video data comprises: receiving video data relating to a first color container, the video data relating to the first color container being defined by a first color gamut and a first color space; deriving one or more dynamic range adjustment parameters, the dynamic range adjustment parameters being based on characteristics of the video data relating to the first color container; and performing dynamic range adjustment on the video data according to the one or more dynamic range adjustment parameters.
In another example of this disclosure, there is provided an apparatus configured to process video data, the apparatus comprising: a memory configured to store the video data; and one or more processors configured to: receiving the video data relating to a first color container, the video data relating to the first color container being defined by a first color gamut and a first color space; deriving one or more dynamic range adjustment parameters, the dynamic range adjustment parameters being based on characteristics of the video data relating to the first color container; and performing dynamic range adjustment on the video data according to the one or more dynamic range adjustment parameters.
In another example of this disclosure, an apparatus configured to process video data comprises: means for receiving video data relating to a first color container, the video data relating to the first color container being defined by a first color gamut and a first color space; means for deriving one or more dynamic range adjustment parameters, the dynamic range adjustment parameters being based on characteristics of the video data relating to the first color container; and means for performing dynamic range adjustment on the video data according to the one or more dynamic range adjustment parameters.
In another example, this disclosure describes a computer-readable storage medium storing instructions that, when executed, cause one or more processors to: receiving video data relating to a first color container, the video data relating to the first color container being defined by a first color gamut and a first color space; deriving one or more dynamic range adjustment parameters, the dynamic range adjustment parameters being based on characteristics of the video data relating to the first color container; and performing dynamic range adjustment on the video data according to the one or more dynamic range adjustment parameters.
The details of one or more examples are set forth in the accompanying drawings and the description below. Other features, objects, and advantages will be apparent from the description and drawings, and from the claims.
Drawings
FIG. 1 is a block diagram illustrating an example video encoding and decoding system configured to implement the techniques of this disclosure.
Fig. 2 is a conceptual diagram illustrating an overview of HDR data.
Fig. 3 is a conceptual diagram illustrating an example color gamut.
FIG. 4 is a flow chart illustrating an example of HDR/WCG representation conversion.
FIG. 5 is a flow chart illustrating an example of HDR/WCG inverse conversion.
Fig. 6 is a conceptual diagram illustrating an example of an electro-optical transfer function (EOTF) for video data conversion from perceptually uniform code levels to linear luminance, including SDR and HDR.
Fig. 7A and 7B are conceptual diagrams illustrating the visualization of color distribution in two example gamuts.
FIG. 8 is a block diagram illustrating an example HDR/WCG conversion apparatus operating in accordance with the techniques of this disclosure.
FIG. 9 is a block diagram illustrating an example HDR/WCG inverse conversion apparatus in accordance with the techniques of this disclosure.
FIG. 10 is a block diagram illustrating an example of a video encoder that may implement the techniques of this disclosure.
FIG. 11 is a block diagram illustrating an example of a video decoder that may implement the techniques of this disclosure.
FIG. 12 is a flow diagram illustrating an example HDR/WCG conversion process in accordance with the techniques of this disclosure.
FIG. 13 is a flow diagram illustrating an example HDR/WCG inverse conversion process in accordance with the techniques of this disclosure.
Detailed Description
The present invention relates to processing and/or coding of video data having a High Dynamic Range (HDR) and Wide Color Gamut (WCG) representation. More specifically, the techniques of this disclosure include signaling and related operations applied to video data in a particular color space to enable more efficient compression of HDR and WCG video data. The techniques and devices described herein may improve compression efficiency for hybrid video coding systems (e.g., h.265/HEVC, h.264/AVC, etc.) used to code HDR and WCG video data.
Video coding standards, including hybrid video coding standards, include ITU-T H.261, ISO/IEC MPEG-1 Visual, ITU-T H.262 or ISO/IEC MPEG-2 Visual, ITU-T H.263, ISO/IEC MPEG-4 Visual, and ITU-T H.264 (also known as ISO/IEC MPEG-4 AVC), including its Scalable Video Coding (SVC) and Multiview Video Coding (MVC) extensions. The Joint Collaboration Team on Video Coding (JCT-VC) of the ITU-T Video Coding Experts Group (VCEG) and the ISO/IEC Moving Picture Experts Group (MPEG) has completed the design of a new video coding standard, namely High Efficiency Video Coding (HEVC, also known as H.265). An HEVC draft specification, referred to as HEVC Working Draft 10 (WD10), is Bross et al., "High Efficiency Video Coding (HEVC) text specification draft 10 (for FDIS & Last Call)," Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG16 WP3 and ISO/IEC JTC1/SC29/WG11, 12th Meeting: Geneva, CH, 14-23 January 2013, JCTVC-L1003v34, available from http://phenix.int-evry.fr/jct/doc_end_user/documents/12_Geneva/wg11/JCTVC-L1003-v34.zip. The finalized HEVC standard is referred to as HEVC version 1.
The HEVC Defect Report, Wang et al., "High Efficiency Video Coding (HEVC) Defect Report," Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG16 WP3 and ISO/IEC JTC1/SC29/WG11, 14th Meeting: Vienna, AT, 25 July-2 August 2013, JCTVC-N1003v1, is available from http://phenix.int-evry.fr/jct/doc_end_user/documents/14_Vienna/wg11/JCTVC-N1003-v1.zip. The finalized HEVC standard document is published as ITU-T H.265, Series H: Audiovisual and Multimedia Systems, Infrastructure of audiovisual services - Coding of moving video, High efficiency video coding (Telecommunication Standardization Sector of the International Telecommunication Union (ITU), April 2013), and another version of the finalized HEVC standard was published in October 2014. A copy of the H.265/HEVC specification text can be downloaded from http://www.itu.int/rec/T-REC-H.265-201504-I/en.
FIG. 1 is a block diagram illustrating an example video encoding and decoding system 10 that may use the techniques of this disclosure. As shown in fig. 1, system 10 includes a source device 12 that provides encoded video data to be later decoded by a destination device 14. In particular, source device 12 provides video data to destination device 14 via computer-readable medium 16. Source device 12 and destination device 14 may comprise a wide range of devices, including desktop computers, notebook (i.e., laptop) computers, tablet computers, set-top boxes, telephone handsets such as so-called "smart" phones, so-called "smart" tablets, televisions, cameras, display devices, digital media players, video game consoles, video streaming devices, or the like. In some cases, source device 12 and destination device 14 may be equipped for wireless communication.
Destination device 14 may receive encoded video data to be decoded via computer-readable medium 16. Computer-readable medium 16 may comprise any type of medium or device capable of moving encoded video data from source device 12 to destination device 14. In one example, computer-readable medium 16 may comprise a communication medium to enable source device 12 to transmit encoded video data directly to destination device 14 in real-time. The encoded video data may be modulated according to a communication standard, such as a wired or wireless communication protocol, and transmitted to destination device 14. The communication medium may comprise any wireless or wired communication medium, such as a Radio Frequency (RF) spectrum or one or more physical transmission lines. The communication medium may form part of a packet network, such as a local area network, a wide area network, or a global network (e.g., the internet). The communication medium may include a router, switch, base station, or any other apparatus that may be used to facilitate communication from source device 12 to destination device 14.
In other examples, computer-readable medium 16 may comprise a non-transitory storage medium, such as a hard disk, flash drive, compact disc, digital video disc, blu-ray disc, or other computer-readable medium. In some examples, a network server (not shown) may receive encoded video data from source device 12 and provide the encoded video data to destination device 14, e.g., via network transmission. Similarly, a computing device of a media production facility (e.g., a disc stamping facility) may receive encoded video data from source device 12 and produce a disc containing the encoded video data. Thus, in various examples, computer-readable medium 16 may be understood to include one or more computer-readable media in various forms.
In some examples, the encoded data may be output from output interface 22 to a storage device. Similarly, encoded data may be accessed from a storage device through an input interface. The storage device may include any of a variety of distributed or locally accessed data storage media such as a hard drive, blu-ray discs, DVDs, CD-ROMs, flash memory, volatile or non-volatile memory, or any other suitable digital storage medium for storing encoded video data. In another example, the storage device may correspond to a file server or another intermediate storage device that may hold the encoded video generated by source device 12. Destination device 14 may access the saved video data from storage via streaming or download. The file server may be any type of server capable of storing encoded video data and transmitting the encoded video data to destination device 14. Example file servers include web servers (e.g., for a website), FTP servers, Network Attached Storage (NAS) devices, or local disk drives. Destination device 14 may access the encoded video data over any standard data connection, including an internet connection. Such a standard data connection may include a wireless channel (e.g., a Wi-Fi connection), a wired connection (e.g., DSL, cable modem, etc.), or a combination of both, suitable for accessing encoded video data stored on a file server. The transmission of the encoded video data from the storage device may be a streaming transmission, a download transmission, or a combination thereof.
The techniques of this disclosure are not necessarily limited to wireless applications or settings. The techniques may be applied to video coding and support any of a variety of multimedia applications, such as over-the-air protocol television broadcasting, cable television transmissions, satellite television transmissions, internet streaming video transmissions (e.g., dynamic adaptive HTTP streaming (DASH)), encoding digital video onto a data storage medium, decoding digital video stored on a data storage medium, or other applications. In some examples, system 10 may be configured to support one-way or two-way video transmission to support applications such as video streaming, video playback, video broadcasting, and/or video telephony.
In the example of fig. 1, source device 12 includes a video source 18, a video encoder 20, and an output interface 22. Destination device 14 includes an input interface 28, a Dynamic Range Adjustment (DRA) unit 19, a video decoder 30, and a display device 32. In accordance with this disclosure, DRA unit 19 of source device 12 may be configured to implement the techniques of this disclosure, including signaling and related operations applied to video data in a particular color space to enable more efficient compression of HDR and WCG video data. In some examples, DRA unit 19 may be separate from video encoder 20. In other examples, DRA unit 19 may be part of video encoder 20. In other examples, the source device and destination device may include other components or arrangements. For example, source device 12 may receive video data from an external video source 18, such as an external camera. Likewise, destination device 14 may interface with an external display device, rather than including an integrated display device.
The illustrated system 10 of fig. 1 is merely one example. The techniques for processing HDR and WCG video data may be performed by any digital video encoding and/or video decoding device. Furthermore, the techniques of this disclosure may also be performed by a video pre-processor and/or a video post-processor. The video preprocessor may be any device configured to process video data prior to encoding (e.g., prior to HEVC encoding). The video post-processor may be any device configured to process video data after decoding (e.g., after HEVC decoding). Source device 12 and destination device 14 are merely examples of such coding devices, where source device 12 generates coded video data for transmission to destination device 14. In some examples, devices 12, 14 may operate in a substantially symmetric manner such that each of devices 12, 14 includes video encoding and decoding components, as well as video pre-and video post-processors (e.g., DRA unit 19 and inverse DRA unit 31, respectively). Thus, system 10 may support one-way or two-way video propagation between video devices 12, 14, for example, for video streaming, video playback, video broadcasting, or video telephony.
Video source 18 of source device 12 may include a video capture device, such as a video camera, a video archive containing previously captured video, and/or a video feed interface to receive video from a video content provider. As another alternative, video source 18 may generate computer graphics-based data as the source video, or a combination of live video, archived video, and computer-generated video. In some cases, if video source 18 is a camera, source device 12 and destination device 14 may form so-called camera phones or video phones. However, as mentioned above, the techniques described in this disclosure may be applicable to video coding and video processing in general, and may be applied to wireless and/or wired applications. In each case, the captured, pre-captured, or computer-generated video may be encoded by video encoder 20. The encoded video information may then be output by output interface 22 onto computer-readable medium 16.
Input interface 28 of destination device 14 receives information from computer-readable medium 16. The information of computer-readable medium 16 may include syntax information defined by video encoder 20 that is also used by video decoder 30, including syntax elements that describe characteristics and/or processing of blocks and other coded units, such as groups of pictures (GOPs). Display device 32 displays the decoded video data to a user, and may comprise any of a variety of display devices, such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), a plasma display, an Organic Light Emitting Diode (OLED) display, or another type of display device.
Video encoder 20 and video decoder 30 may each be implemented as any of a variety of suitable encoder circuits, such as one or more microprocessors, Digital Signal Processors (DSPs), Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs), discrete logic, software, hardware, firmware or any combinations thereof. When the techniques are implemented in part in software, a device may store instructions for the software in a suitable non-transitory computer-readable medium and execute the instructions in hardware using one or more processors to perform the techniques of this disclosure. Each of video encoder 20 and video decoder 30 may be included in one or more encoders or decoders, either of which may be integrated as part of a combined encoder/decoder (codec) in the respective device.
DRA unit 19 and inverse DRA unit 31 may each be implemented as any of a variety of suitable encoder circuits, such as one or more microprocessors, DSPs, ASICs, FPGAs, discrete logic, software, hardware, firmware, or any combinations thereof. When the techniques are implemented in part in software, a device may store instructions for the software in a suitable non-transitory computer-readable medium and execute the instructions in hardware using one or more processors to perform the techniques of this disclosure.
In some examples, video encoder 20 and video decoder 30 operate according to video compression standards such as ISO/IEC MPEG-4 Visual and ITU-T H.264 (also known as ISO/IEC MPEG-4 AVC), including its Scalable Video Coding (SVC) extension, Multiview Video Coding (MVC) extension, and MVC-based three-dimensional video (3DV) extension. In some cases, any bitstream conforming to MVC-based 3DV always contains a sub-bitstream that is compliant with an MVC profile (e.g., the stereo high profile). Furthermore, there are ongoing efforts to generate a 3DV coding extension of H.264/AVC, namely AVC-based 3DV. Other examples of video coding standards include ITU-T H.261, ISO/IEC MPEG-1 Visual, ITU-T H.262 or ISO/IEC MPEG-2 Visual, ITU-T H.263, ISO/IEC MPEG-4 Visual, and ITU-T H.264 (ISO/IEC MPEG-4 AVC). In other examples, video encoder 20 and video decoder 30 may be configured to operate according to the HEVC standard.
As will be explained in more detail below, DRA unit 19 and inverse DRA unit 31 may be configured to implement the techniques of this disclosure. In some examples, DRA unit 19 and/or inverse DRA unit 31 may be configured to receive video data relating to a first color container, wherein the first color container is defined by a first color gamut and a first color space, derive one or more dynamic range adjustment parameters, and perform dynamic range adjustment on the video data according to the one or more dynamic range adjustment parameters, the dynamic range adjustment parameters being based on characteristics of the video data.
DRA unit 19 and inverse DRA unit 31 may each be implemented as any of a variety of suitable encoder circuits, such as one or more microprocessors, Digital Signal Processors (DSPs), Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs), discrete logic, software, hardware, firmware or any combinations thereof. When the techniques are implemented in part in software, a device may store instructions for the software in a suitable non-transitory computer-readable medium and execute the instructions in hardware using one or more processors to perform the techniques of this disclosure. As discussed above, DRA unit 19 and inverse DRA unit 31 may be devices that are independent of video encoder 20 and video decoder 30, respectively. In other examples, DRA unit 19 may be integrated with video encoder 20 in a single device, and inverse DRA unit 31 may be integrated with video decoder 30 in a single device.
In HEVC and other video coding standards, a video sequence typically includes a series of pictures. Pictures may also be referred to as "frames." A picture may include three sample arrays, denoted SL, SCb, and SCr. SL is a two-dimensional array (i.e., block) of luma samples. SCb is a two-dimensional array of Cb chroma samples. SCr is a two-dimensional array of Cr chroma samples. Chroma samples may also be referred to herein as "chroma" samples. In other cases, a picture may be monochrome and may include only an array of luma samples.
Video encoder 20 may generate a set of Coding Tree Units (CTUs). Each of the CTUs may include a coding tree block of luma samples, two corresponding coding tree blocks of chroma samples, and syntax structures used to code the samples of the coding tree blocks. In a monochrome picture or a picture having three separate color planes, a CTU may comprise a single coding tree block and syntax structures used to code the samples of the coding tree block. The coding tree block may be an NxN sample block. A CTU may also be referred to as a "treeblock" or a "Largest Coding Unit (LCU)". The CTUs of HEVC may be substantially similar to macroblocks of other video coding standards (e.g., h.264/AVC). However, the CTU is not necessarily limited to a particular size and may include one or more Coding Units (CUs). A slice may include an integer number of CTUs ordered consecutively in a raster scan.
This disclosure may use the terms "video unit" or "video block" to refer to one or more blocks of samples and syntax structures used to code the samples of the one or more blocks of samples. Example types of video units may include CTUs, CUs, PUs, Transform Units (TUs), or macroblocks of other video coding standards, macroblock partitions, and so forth in HEVC.
To generate a coded CTU, video encoder 20 may perform quadtree partitioning on a coding tree block of the CTU in a recursive manner to divide the coding tree block into coding blocks, hence the name "coding tree unit". The coded block is a block of NxN samples. A CU may include a coding block of luma samples and two corresponding coding blocks of chroma samples of a picture having a luma sample array, a Cb sample array, and a Cr sample array, as well as syntax structures for coding samples of the coding blocks. In a monochrome picture or a picture having three separate color planes, a CU may comprise a single coding block and syntax structures used to code the samples of the coding block.
Video encoder 20 may partition the coding block of the CU into one or more prediction blocks. A prediction block may be a rectangular (i.e., square or non-square) block of samples to which the same prediction is applied. A Prediction Unit (PU) of a CU may include a prediction block of luma samples of a picture, two corresponding prediction blocks of chroma samples of the picture, and syntax structures used to predict the prediction block samples. In a monochrome picture or a picture with three separate color planes, a PU may include a single prediction block and syntax structures used to predict prediction block samples. Video encoder 20 may generate predictive luma, Cb, and Cr blocks for the luma prediction block, the Cb prediction block, and the Cr prediction block for each PU of the CU.
Video encoder 20 may use intra prediction or inter prediction to generate the predictive blocks for the PU. If video encoder 20 generates the predictive blocks of the PU using intra prediction, video encoder 20 may generate the predictive blocks of the PU based on decoded samples of a picture associated with the PU.
If video encoder 20 generates the predictive block for the PU using inter prediction, video encoder 20 may generate the predictive block for the PU based on decoded samples of one or more pictures other than the picture associated with the PU. The inter prediction may be unidirectional inter prediction (i.e., unidirectional prediction) or bidirectional inter prediction (i.e., bidirectional prediction). To perform uni-directional prediction or bi-directional prediction, video encoder 20 may generate a first reference picture list (RefPicList0) and a second reference picture list (RefPicList1) for the current slice.
Each of the reference picture lists may include one or more reference pictures. When using uni-directional prediction, video encoder 20 may search for reference pictures in either or both of RefPicList0 and RefPicList1 to determine a reference position within the reference picture. Further, when using uni-prediction, the video encoder may generate the predictive sample block for the PU based at least in part on the samples corresponding to the reference location. Moreover, when using uni-directional prediction, video encoder 20 may generate a single motion vector that indicates a spatial displacement between a prediction block and a reference position for the PU. To indicate a spatial displacement between the prediction block and the reference location of the PU, the motion vector may include a horizontal component that specifies a horizontal displacement between the prediction block and the reference location of the PU, and may include a vertical component that specifies a vertical displacement between the prediction block and the reference location of the PU.
When a PU is encoded using bi-prediction, video encoder 20 may determine a first reference location in a reference picture in RefPicList0 and a second reference location in a reference picture in RefPicList1. Video encoder 20 may then generate the predictive blocks for the PU based at least in part on the samples corresponding to the first and second reference locations. Moreover, when encoding the PU using bi-prediction, video encoder 20 may generate a first motion vector that indicates a spatial displacement between a sample block of the PU and the first reference location, and a second motion vector that indicates a spatial displacement between a prediction block of the PU and the second reference location.
After video encoder 20 generates predictive luma, Cb, and Cr blocks for one or more PUs of the CU, video encoder 20 may generate a luma residual block for the CU. Each sample in the luma residual block of the CU indicates a difference between a luma sample in one of the predictive luma blocks of the CU and a corresponding sample in the original luma coding block of the CU. In addition, video encoder 20 may generate a Cb residual block for the CU. Each sample in the Cb residual block of the CU may indicate a difference between the Cb sample in one of the predictive Cb blocks of the CU and the corresponding sample in the original Cb coding block of the CU. Video encoder 20 may also generate a Cr residual block for the CU. Each sample in the Cr residual block of the CU may indicate a difference between a Cr sample in one of the predictive Cr blocks of the CU and a corresponding sample in the original Cr coding block of the CU.
Further, video encoder 20 may use quadtree partitioning to decompose the luma, Cb, and Cr residual blocks of the CU into one or more luma, Cb, and Cr transform blocks. The transform block may be a rectangular block of samples to which the same transform is applied. A Transform Unit (TU) of a CU may include a transform block of luma samples, two corresponding transform blocks of chroma samples, and syntax structures used to transform the transform block samples. In a monochrome picture or a picture with three separate color planes, a TU may comprise a single transform block, as well as syntax structures used to transform the transform block samples. Thus, each TU of a CU may be associated with a luma transform block, a Cb transform block, and a Cr transform block. A luma transform block associated with a TU may be a sub-block of a luma residual block of a CU. The Cb transform block may be a sub-block of a Cb residual block of the CU. The Cr transform block may be a sub-block of a Cr residual block of the CU.
Video encoder 20 may apply one or more transforms to a luma transform block of a TU to generate a luma coefficient block for the TU. The coefficient block may be a two-dimensional array of transform coefficients. The transform coefficients may be scalars. Video encoder 20 may apply one or more transforms to a Cb transform block of a TU to generate a Cb coefficient block of the TU. Video encoder 20 may apply one or more transforms to a Cr transform block of a TU to generate a Cr coefficient block of the TU.
After generating the coefficient blocks (e.g., luma coefficient blocks, Cb coefficient blocks, or Cr coefficient blocks), video encoder 20 may quantize the coefficient blocks. Quantization generally refers to the process of quantizing transform coefficients to possibly reduce the amount of data used to represent the transform coefficients to provide further compression. Furthermore, video encoder 20 may inverse quantize the transform coefficients and apply inverse transforms to the transform coefficients to reconstruct the transform blocks of the TUs of the CU of the picture. Video encoder 20 may reconstruct the coding blocks of the CU using the reconstructed transform blocks of the TUs of the CU and the predictive blocks of the PUs of the CU. By reconstructing the coding blocks of each CU of a picture, video encoder 20 may reconstruct the picture. Video encoder 20 may store the reconstructed pictures in a Decoded Picture Buffer (DPB). Video encoder 20 may use the reconstructed pictures in the DPB for inter prediction and intra prediction.
After video encoder 20 quantizes the coefficient block, video encoder 20 may entropy encode syntax elements indicating the quantized transform coefficients. For example, video encoder 20 may perform Context Adaptive Binary Arithmetic Coding (CABAC) on syntax elements that indicate quantized transform coefficients. Video encoder 20 may output the entropy-encoded syntax elements in a bitstream.
Video encoder 20 may output a bitstream that includes a sequence of bits that forms a representation of coded pictures and associated data. The bitstream may include a sequence of Network Abstraction Layer (NAL) units. Each of the NAL units includes a NAL unit header and encapsulates a Raw Byte Sequence Payload (RBSP). The NAL unit header may include a syntax element indicating a NAL unit type code. The NAL unit type code specified by the NAL unit header of a NAL unit indicates the type of the NAL unit. An RBSP may be a syntax structure containing an integer number of bytes encapsulated within a NAL unit. In some cases, the RBSP contains zero bits.
Different types of NAL units may encapsulate different types of RBSPs. For example, a first type of NAL unit may encapsulate RBSPs for a Picture Parameter Set (PPS), a second type of NAL unit may encapsulate RBSPs for a coded slice, a third type of NAL unit may encapsulate RBSPs for Supplemental Enhancement Information (SEI), and so on. A PPS is a syntax structure that may contain syntax elements applicable to zero or more complete coded pictures. The NAL units that encapsulate the RBSP of the video coding data (as opposed to the RBSP of the parameter set and SEI message) may be referred to as Video Coding Layer (VCL) NAL units. NAL units that encapsulate a coded slice may be referred to herein as coded slice NAL units. The RBSPs of a coded slice may include a slice header and slice data.
Video decoder 30 may receive a bitstream. In addition, video decoder 30 may parse the bitstream to decode the syntax elements from the bitstream. Video decoder 30 may reconstruct pictures of the video data based at least in part on syntax elements decoded from the bitstream. The process of reconstructing the video data may generally be reciprocal to the process performed by video encoder 20. For example, video decoder 30 may use the motion vectors of the PUs to determine predictive blocks for the PUs of the current CU. Video decoder 30 may generate the predictive blocks for the PU using one or more motion vectors of the PU.
In addition, video decoder 30 may inverse quantize coefficient blocks associated with TUs of the current CU. Video decoder 30 may perform inverse transforms on the coefficient blocks to reconstruct transform blocks associated with TUs of the current CU. Video decoder 30 may reconstruct the coding blocks of the current CU by adding samples of predictive sample blocks of PUs of the current CU to corresponding samples of transform blocks of TUs of the current CU. By reconstructing the coding blocks of each CU of a picture, video decoder 30 may reconstruct the picture. Video decoder 30 may store the decoded pictures in a decoded picture buffer for output and/or for decoding other pictures.
Next generation video applications are expected to operate with video data representing a scene with HDR and WCG that has been captured. The parameters of dynamic range and color gamut used are two independent attributes of video content, and their specifications for digital television and multimedia service purposes are defined by several international standards. For example, ITU-R rec.bt.709 "Parameter values for the HDTV standards for production and international program exchange" define parameters for HDTV (high definition television), such as Standard Dynamic Range (SDR) and standard color gamut, and ITU-R rec.bt.2020 "Parameter values for the ultra-high definition television systems for production and international program exchange" specifies UHDTV (ultra high definition television) parameters, such as HDR and WCG. There are also other Standard Development Organization (SDO) files that specify dynamic range and color gamut attributes in other systems, for example, DCI-P3 color gamut is defined in SMPTE-231-2 (society of motion picture and television engineers), and some parameters of HDR are defined in SMPTE-2084. A brief description of the dynamic range and color gamut of video data is provided below.
Dynamic range is typically defined as the ratio between the minimum and maximum luminance (e.g., brightness) of a video signal. Dynamic range may also be measured in terms of "f-stops," where one f-stop corresponds to a doubling of the dynamic range of the signal. In MPEG's definition, HDR content is content that features brightness variations of more than 16 f-stops. In some definitions, levels between 10 and 16 f-stops are considered an intermediate dynamic range, but in other definitions such levels are considered HDR. In some examples of this disclosure, HDR video content may be any video content having a higher dynamic range than conventionally used video content having a standard dynamic range (e.g., video content as specified by ITU-R Rec. BT.709).
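Since one f-stop corresponds to a doubling of the signal range, the dynamic range in f-stops is simply the base-2 logarithm of the ratio between maximum and minimum luminance. The following sketch computes it; the example luminance values are illustrative assumptions only.

```python
import math

def dynamic_range_f_stops(min_luminance, max_luminance):
    """Dynamic range in f-stops; one f-stop is a doubling of the range."""
    return math.log2(max_luminance / min_luminance)

# An SDR-like range of 0.1 to 100 nits gives roughly 10 f-stops,
# while an HDR-like range of 0.005 to 10,000 nits gives roughly 21 f-stops.
print(dynamic_range_f_stops(0.1, 100.0))      # ~10.0
print(dynamic_range_f_stops(0.005, 10000.0))  # ~20.9
```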
The Human Visual System (HVS) is capable of perceiving a much larger dynamic range than that of SDR content and HDR content. However, the HVS includes an adaptation mechanism that narrows the dynamic range of the HVS to a so-called simultaneous range. The width of the simultaneous range may depend on the current lighting conditions (e.g., current brightness). A visualization of the dynamic range provided by the SDR of HDTV, the expected HDR of UHDTV, and the HVS dynamic range is shown in FIG. 2.
Current video applications and services are regulated by ITU-R Rec. BT.709 and provide SDR, typically supporting a range of brightness (e.g., luminance) of about 0.1 to 100 candelas (cd) per m2 (often referred to as "nits"), leading to less than 10 f-stops. Some examples of next generation video services are expected to provide a dynamic range of up to 16 f-stops. Although detailed specifications for such content are currently under development, some initial parameters have been specified in SMPTE-2084 and ITU-R Rec. BT.2020.
Another aspect of a more realistic video experience, besides HDR, is the color dimension. The color dimension is typically defined by the color gamut. FIG. 3 is a conceptual diagram showing the SDR color gamut (triangle 100, based on the BT.709 color primaries) and the wider color gamut for UHDTV (triangle 102, based on the BT.2020 color primaries). FIG. 3 also depicts the so-called spectrum locus (delimited by the tongue-shaped area 104), representing the limits of natural colors. Moving from BT.709 (triangle 100) to BT.2020 (triangle 102), the color primaries aim to provide UHDTV services with about 70% more colors, as illustrated in FIG. 3. D65 specifies an example white color for the BT.709 and/or BT.2020 specifications.
Examples of color gamut specifications for the DCI-P3, BT.709, and BT.2020 color spaces are shown in Table 1.
TABLE 1 color gamut parameters
As can be seen in Table 1, a color gamut may be defined by the x and y values of a white point and by the x and y values of the primary colors (e.g., red (R), green (G), and blue (B)). The x and y values are chromaticity coordinates derived from the CIE 1931 color space, which defines the links between pure colors (e.g., in terms of wavelengths) and how the human eye perceives such colors.
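As a small illustration of how a gamut can be described by primary and white-point chromaticities, the sketch below defines a simple container for the x/y coordinates; the structure and naming are assumptions, and the BT.709/D65 values shown are the commonly published chromaticity coordinates.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class ColorGamut:
    # CIE 1931 xy chromaticity coordinates of the primaries and the white point.
    red: Tuple[float, float]
    green: Tuple[float, float]
    blue: Tuple[float, float]
    white: Tuple[float, float]

# BT.709 primaries with a D65 white point (published chromaticity values).
BT709 = ColorGamut(red=(0.640, 0.330),
                   green=(0.300, 0.600),
                   blue=(0.150, 0.060),
                   white=(0.3127, 0.3290))
```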
HDR/WCG video data is typically acquired and stored at a very high precision per component (even floating point), with a 4:4:4 chroma sub-sampling format and a very wide color space (e.g., CIE XYZ). This representation targets high precision and is (almost) mathematically lossless. However, such a format for storing HDR/WCG video data may include a lot of redundancy and may not be optimal for compression purposes. A lower precision format based on HVS assumptions is typically used for state-of-the-art video applications.
As shown in fig. 4, one example of a video data format conversion process for compression purposes includes three main processes. The technique of fig. 4 may be performed by source device 12. The linear RGB data 110 may be HDR/WCG video data and may be stored in floating point representation. The linear RGB data 110 may be compressed using a nonlinear Transfer Function (TF)112 for dynamic range compression. The transfer function 112 may compress the linear RGB data 110 using any number of non-linear transfer functions (e.g., PQ TF defined in SMPTE-2084). In some examples, color conversion process 114 converts the compressed data into a more compact or robust color space (e.g., a YUV or YCrCb color space), which is more suitable for compression by a hybrid video encoder. This data is then quantized using a floating-point to integer representation quantization unit 116 to produce converted HDR' data 118. In this example, the HDR' data 118 is represented in integers. The currently used format for HDR' data is more suitable for compression by a hybrid video encoder, such as video encoder 20 applying HEVC technology. The order of the processes depicted in fig. 4 is given as an example, and may be varied in other applications. For example, the color conversion may precede the TF process. In addition, additional processing (e.g., spatial sub-sampling) may be applied to the color components.
The inverse transform at the decoder side is depicted in fig. 5. The technique of fig. 5 may be performed by destination device 14. Converted HDR' data 120 may be obtained at destination device 14 by decoding the video data using a hybrid video decoder (e.g., video decoder 30 applying HEVC techniques). The HDR' data 120 may then be inverse quantized by an inverse quantization unit 122. An inverse color conversion process 124 may then be applied to the inverse quantized HDR' data. The inverse color conversion process 124 may be the inverse of the color conversion process 114. For example, the inverse color conversion process 124 may convert HDR' data from the YCrCb format back to the RGB format. Next, an inverse transfer function 126 may be applied to the data to reverse the dynamic range compressed by the transfer function 112, thereby regenerating linear RGB data 128.
The techniques depicted in FIG. 4 will now be discussed in more detail. In general, a transfer function is applied to data (e.g., HDR/WCG video data) to compress the dynamic range of the data. Such compression allows the data to be represented with fewer bits. In one example, the transfer function may be a one-dimensional (1D) non-linear function and may reflect the inverse of the electro-optical transfer function (EOTF) of the end-user display, e.g., as specified for SDR in Rec. BT.709. In another example, the transfer function may approximate the HVS perception of luminance changes, e.g., the PQ transfer function specified for HDR in SMPTE-2084. The inverse process of the OETF (opto-electronic transfer function) is the EOTF, which maps code levels back to luminance. FIG. 6 shows several examples of non-linear transfer functions used to compress the dynamic range of particular color containers. The transfer functions may also be applied to each R, G, and B component separately.
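As one concrete example of the transfer functions mentioned here, the following sketch implements the SMPTE ST 2084 (PQ) inverse EOTF and EOTF for a single scalar value. The normalization to 10,000 nits follows the PQ definition; the simplified scalar form is an assumption for illustration and is not the exact processing chain of FIG. 4.

```python
# SMPTE ST 2084 (PQ) constants.
M1 = 2610.0 / 16384.0
M2 = 2523.0 / 4096.0 * 128.0
C1 = 3424.0 / 4096.0
C2 = 2413.0 / 4096.0 * 32.0
C3 = 2392.0 / 4096.0 * 32.0

def pq_inverse_eotf(luminance_nits):
    """Map absolute linear luminance (0..10000 nits) to a normalized PQ code value in [0, 1]."""
    y = max(0.0, min(luminance_nits / 10000.0, 1.0))
    return ((C1 + C2 * y ** M1) / (1.0 + C3 * y ** M1)) ** M2

def pq_eotf(code_value):
    """Map a normalized PQ code value in [0, 1] back to absolute linear luminance in nits."""
    p = code_value ** (1.0 / M2)
    return 10000.0 * (max(p - C1, 0.0) / (C2 - C3 * p)) ** (1.0 / M1)
```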
In the context of this disclosure, the term "signal value" or "color value" may be used to describe a luminance level corresponding to the value of a particular color component (e.g., R, G, B or Y) of an image element. The signal values typically represent linear light levels (luminance values). The term "code level" or "digital code value" may refer to a digital representation of an image signal value. Typically, such digital representations are used to represent nonlinear signal values. The EOTF represents the relationship between the value of a non-linear signal provided to a display device (e.g., display device 32) and the value of a linear color generated by the display device.
RGB data is typically used as the input color space, since RGB is the type of data that is typically produced by image capture sensors. However, the RGB color space has high redundancy among its components and is not optimal for compact representation. To achieve a more compact and more robust representation, the RGB components are typically converted (e.g., a color transform is performed) to a less correlated color space that is more suitable for compression, such as YCbCr. The YCbCr color space separates brightness, in the form of luma (Y), and color information (CrCb) into different, less correlated components. In this context, a robust representation may refer to a color space featuring higher levels of error resilience when compressed at a constrained bit rate.
After the color transform, the input data in the target color space may still be represented at a high bit depth (e.g., floating point accuracy). The high bit depth data may be converted to a target bit depth using, for example, a quantization process. Certain studies show that 10 to 12 bits of accuracy, in combination with the PQ transfer function, is sufficient to provide HDR data of 16 f-stops with distortion below the Just Noticeable Difference (JND). In general, a JND is the amount by which something (e.g., video data) must change in order for a difference to be noticeable (e.g., by the HVS). Data represented with 10-bit accuracy can be further coded with most of the state-of-the-art video coding solutions. This quantization is an element of lossy coding and is a source of inaccuracy introduced to the converted data.
It is anticipated that next generation HDR/WCG video applications will operate with video data captured with different parameters for HDR and CG. Examples of different configurations may be the capture of HDR video content with a peak brightness of up to 1000 nits, or of up to 10,000 nits. Examples of different color gamuts may include BT.709, BT.2020, and P3 (specified by SMPTE), among others.
It is also expected that a single color space (e.g., a target color container) incorporating all other currently used color gamuts will be used in the future. One example of such a target color container is bt.2020. Supporting a single target color container would significantly simplify the standardization, implementation, and deployment of HDR/WCG systems, since a decoder (e.g., video decoder 30) should support a reduced number of operating points (e.g., number of color containers, color spaces, color conversion algorithms, etc.) and/or a reduced number of required algorithms.
In one example of such a system, content captured in a native color gamut (e.g., P3 or bt.709) different from a target color container (e.g., bt.2020) may be converted to the target container prior to processing (e.g., prior to video encoding). The following are several examples of such conversions:
RGB conversion from BT.709 to the BT.2020 color container:
○ R2020 = 0.627404078626*R709 + 0.329282097415*G709 + 0.043313797587*B709
○ G2020 = 0.069097233123*R709 + 0.919541035593*G709 + 0.011361189924*B709
○ B2020 = 0.016391587664*R709 + 0.088013255546*G709 + 0.895595009604*B709     (1)
RGB conversion from P3 to the BT.2020 color container:
○ R2020 = 0.753832826496*RP3 + 0.198597635641*GP3 + 0.047569409186*BP3
○ G2020 = 0.045744636411*RP3 + 0.941777687331*GP3 + 0.012478735611*BP3
○ B2020 = -0.001210377285*RP3 + 0.017601107390*GP3 + 0.983608137835*BP3     (2)
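A minimal sketch of applying the 3x3 gamut-conversion matrix of equation (1) to an array of linear RGB samples is shown below; numpy is assumed, the matrix rows are the coefficients printed above, and no clipping is performed at this floating-point stage.

```python
import numpy as np

# Coefficients of equation (1): linear RGB BT.709 -> linear RGB BT.2020.
BT709_TO_BT2020 = np.array([
    [0.627404078626, 0.329282097415, 0.043313797587],
    [0.069097233123, 0.919541035593, 0.011361189924],
    [0.016391587664, 0.088013255546, 0.895595009604],
])

def convert_gamut(rgb, matrix=BT709_TO_BT2020):
    """Apply a 3x3 primaries-conversion matrix to an (..., 3) array of linear RGB samples."""
    return rgb @ matrix.T
```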
During this conversion, the dynamic range of the signal captured in the P3 or BT.709 color gamut is reduced in the BT.2020 representation. Since the data is represented in floating point accuracy, there is no loss; however, when combined with color conversion (e.g., the conversion from RGB to YCrCb shown in equation (3) below) and quantization (an example of which is shown in equation (4) below), the reduction in dynamic range leads to increased quantization error for the input data.
○ Y' = 0.2627*R' + 0.6780*G' + 0.0593*B'
○ Cb = (B' - Y') / 1.8814
○ Cr = (R' - Y') / 1.4746     (3)
○ DY' = Round((1 << (BitDepthY - 8)) * (219*Y' + 16))
○ DCb = Round((1 << (BitDepthCb - 8)) * (224*Cb + 128))
○ DCr = Round((1 << (BitDepthCr - 8)) * (224*Cr + 128))     (4)
In equation (4), DY' is the quantized Y' component, DCb is the quantized Cb component, and DCr is the quantized Cr component. The operator << denotes a bitwise left shift. BitDepthY, BitDepthCb, and BitDepthCr are the desired bit depths of the respective quantized components.
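The following sketch restates equations (3) and (4) for a single pixel. The luma and chroma weights are those shown above; the fixed 10-bit depths and the use of Python's round() in place of the Round() operator are assumptions for illustration.

```python
def rgb_to_ycbcr_bt2020(r, g, b):
    """Equation (3): non-constant-luminance Y'CbCr from nonlinear R'G'B' samples."""
    y = 0.2627 * r + 0.6780 * g + 0.0593 * b
    cb = (b - y) / 1.8814
    cr = (r - y) / 1.4746
    return y, cb, cr

def quantize_ycbcr(y, cb, cr, bit_depth_y=10, bit_depth_cb=10, bit_depth_cr=10):
    """Equation (4): limited-range fixed-point quantization of the Y'CbCr components."""
    d_y = round((1 << (bit_depth_y - 8)) * (219.0 * y + 16.0))
    d_cb = round((1 << (bit_depth_cb - 8)) * (224.0 * cb + 128.0))
    d_cr = round((1 << (bit_depth_cr - 8)) * (224.0 * cr + 128.0))
    return d_y, d_cb, d_cr
```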
Additionally, in real-world coding systems, coding a signal with a reduced dynamic range may lead to a significant loss of accuracy for the coded chroma components, and would be observed by a viewer as coding artifacts, such as color mismatch and/or color bleeding.
Problems may also arise when the gamut of the content is the same as the gamut of the target color container, but the content does not fully occupy the gamut of the entire color container (e.g., in some frames or for one component). This situation is visualized in fig. 7A and 7B, where the colors of the HDR sequence are depicted in the xy color plane. Fig. 7A shows the colors of the "Tibul" test sequence captured in the native bt.709 color space (triangle 150). However, the colors of the test sequence (shown as dots) do not occupy the full color gamut of bt.709. In fig. 7A and 7B, bt.2020 triangle 152 represents the bt.2020 color gamut. Fig. 7B shows the colors of a "Bikes" HDR test sequence with a P3 native color gamut (triangle 154). As can be seen in fig. 7B, the colors do not occupy the entire range of the native gamut in the xy color plane (triangle 154).
To solve the problems described above, the following techniques may be considered. One example technique involves HDR coding in the native color space. In such techniques, an HDR video coding system would support all currently known types of color gamuts, and would allow extension of the video coding standard to support future color gamuts. Such support is not limited to supporting different color conversion transforms (e.g., RGB to YCbCr) and their inverse transforms, but also requires transfer functions adjusted to each of the color gamuts. Supporting all such tools would be complex and expensive.
Another example technique includes a color-gamut-aware video codec. In such techniques, a hypothetical video encoder is configured to estimate the native color gamut of the input signal and adjust the coding parameters (e.g., the quantization parameters for the coded chroma components) to reduce any distortion caused by the reduced dynamic range. However, such techniques would not be able to recover the loss of accuracy that may occur due to the quantization performed in equation (4) above, since all of the input data is provided to a typical codec at integer precision.
In view of the foregoing, the present invention proposes techniques, methods, and devices that perform Dynamic Range Adjustment (DRA) to compensate for the dynamic range changes introduced by color gamut conversion in the HDR signal representation. Dynamic range adjustment may help prevent and/or reduce any distortion caused by gamut conversion, including color mismatch, color bleeding, and the like. In one or more examples of this disclosure, DRA is applied to the values of each color component of the target color space (e.g., YCbCr) before quantization at the encoder side (e.g., by source device 12) and after inverse quantization at the decoder side (e.g., by destination device 14).
FIG. 8 is a block diagram illustrating an example HDR/WCG conversion apparatus operating in accordance with the techniques of this disclosure. In fig. 8, a solid line represents a data flow, and a dotted line represents a control signal. DRA unit 19 of source device 12 may perform the techniques of this disclosure. As discussed above, DRA unit 19 may be a device separate from video encoder 20. In other examples, DRA unit 19 may be incorporated into the same device as video encoder 20.
As shown in fig. 8, RGB native CG video data 200 is input to DRA unit 19. In the context of video pre-processing by DRA unit 19, RGB native CG video data 200 is defined by an input color container. The input color container specifies both the color gamut of video data 200 (e.g., BT.709, BT.2020, P3, etc.) and the color space of video data 200 (e.g., RGB, XYZ, YCrCb, YUV, etc.). In one example of this disclosure, DRA unit 19 may be configured to convert both the color gamut and the color space of RGB native CG video data 200 into a target color container of HDR' data 216. Like the input color container, the target color container may define both a color gamut and a color space. In one example of this disclosure, RGB native CG video data 200 may be HDR/WCG video, and may have a BT.2020 or P3 color gamut (or any WCG) and be in the RGB color space. In another example, RGB native CG video data 200 may be SDR video and may have the BT.709 color gamut. In one example, the target color container of HDR' data 216 may be configured for HDR/WCG video (e.g., the BT.2020 color gamut) and may use a color space that is better suited to video encoding (e.g., YCrCb).
In one example of this disclosure, CG converter 202 may be configured to convert the color gamut of RGB native CG video data 200 from the color gamut of an input color container (e.g., a first color container) to the color gamut of a target color container (e.g., a second color container). As one example, CG converter 202 may convert RGB native CG video data 200 from a bt.709 color representation to a bt.2020 color representation, an example of which is shown below.
The conversion of RGB BT.709 samples (R709, G709, B709) to RGB BT.2020 samples (R2020, G2020, B2020) can be implemented with a two-step conversion: a conversion to the XYZ representation first, followed by a conversion from XYZ to RGB BT.2020 using the appropriate conversion matrices.
X=0.412391*R709+0.357584*G709+0.180481*B709
Y=0.212639*R709+0.715169*G709+0.072192*B709(5)
Z=0.019331*R709+0.119195*G709+0.950532*B709
From XYZ to R2020G2020B2020(BT.2020) conversion
R2020=clipRGB(1.716651*X-0.355671*Y-0.253366*Z)
G2020=clipRGB(-0.666684*X+1.616481*Y+0.015768*Z) (6)
B2020=clipRGB(0.017640*X-0.042771*Y+0.942103*Z)
Similarly, the single steps and recommended method are as follows:
R2020=clipRGB(0.627404078626*R709+0.329282097415*G709+0.043313797587*B709)
G2020=clipRGB(0.069097233123*R709+0.919541035593*G709+0.011361189924*B709) (7)
B2020=clipRGB(0.016391587664*R709+0.088013255546*G709+0.895595009604*B709)
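The single-step conversion in equation (7) can be sketched in code as follows. This is a minimal illustration only: the function name, the NumPy dependency, and the assumption of linear RGB input normalized to [0, 1] are not part of this disclosure.

```python
import numpy as np

# Single-step BT.709 -> BT.2020 conversion matrix from equation (7).
M_709_TO_2020 = np.array([
    [0.627404078626, 0.329282097415, 0.043313797587],
    [0.069097233123, 0.919541035593, 0.011361189924],
    [0.016391587664, 0.088013255546, 0.895595009604],
])

def convert_709_to_2020(rgb709):
    """Convert linear RGB BT.709 samples (shape [..., 3], values in [0, 1])
    to linear RGB BT.2020, clipping the result to the valid range
    (the clipRGB operation in equations (6) and (7))."""
    rgb2020 = rgb709 @ M_709_TO_2020.T
    return np.clip(rgb2020, 0.0, 1.0)

# Example: a saturated BT.709 red maps well inside the BT.2020 gamut.
print(convert_709_to_2020(np.array([1.0, 0.0, 0.0])))
```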
the resulting video data after CG conversion is shown as RGB target CG video data 204 in fig. 8. In other examples of the invention, the gamut of the input color container and the gamut of the output color container may be the same. In such examples, CG converter 202 need not perform any conversion on RGB native CG video data 200.
Next, the transfer function unit 206 compresses the dynamic range of the RGB target CG video data 204. Transfer function unit 206 may be configured to apply a transfer function to compress the dynamic range in the same manner as discussed above with reference to fig. 4. The color conversion unit 208 converts the RGB target CG color data 204 from the color space of the input color container (e.g., RGB) to the color space of the target color container (e.g., YCrCb). As explained above with reference to fig. 4, color conversion unit 208 converts the compressed data into a more compact or robust color space (e.g., YUV or YCrCb color space), which is more suitable for compression by a hybrid video encoder (e.g., video encoder 20).
Adjustment unit 210 is configured to perform Dynamic Range Adjustment (DRA) of the color-converted video data according to the DRA parameters derived by DRA parameter estimation unit 212. In general, after CG conversion by CG converter 202 and dynamic range compression by transfer function unit 206, the actual color values of the resulting video data may not use all of the available codewords (e.g., the unique bit sequences representing each color) allocated for the color gamut of a particular target color container. That is, in some cases, the conversion of RGB native CG video data 200 from the input color container to the output color container may over-compress the color values (e.g., Cr and Cb) of the video data, such that the resulting compressed video data does not efficiently use all possible color representations. As explained above, coding a signal whose color values occupy a reduced range may result in a significant loss of accuracy for the coded chroma components, and this loss would be observed by a viewer as coding artifacts, such as color mismatch and/or color bleeding.
Adjustment unit 210 may be configured to apply DRA parameters to the color components (e.g., YCrCb) of the video data (e.g., RGB target CG video data 204) after dynamic range compression and color conversion, to enable full use of the codewords available for a particular target color container. Adjustment unit 210 may apply the DRA parameters to the video data at the pixel level. In general, the DRA parameters define a function that expands the codewords used to represent the actual video data to as many of the codewords available for the target color container as possible.
In one example of the present invention, the DRA parameters include scale and offset values that are applied to the components of the video data. In general, the lower the dynamic range of the values of a color component of the video data, the larger the scale factor that may be used. The offset parameter is used to center the values of the color component on the center of the available codewords of the target color container. For example, if the target color container includes 1024 codewords per color component, an offset value may be selected such that the center value is moved to codeword 512 (i.e., the codeword in the middle of the codeword range).
In one example, adjustment unit 210 applies the DRA parameters to the video data in the target color space (e.g., YCrCb) as follows:
-Y”=scale1*Y'+offset1
-Cb”=scale2*Cb'+offset2 (8)
-Cr”=scale3*Cr'+offset3
where the signal components Y', Cb' and Cr' are the signals resulting from the RGB-to-YCbCr conversion (for example, equation 3). It should be noted that Y', Cb', and Cr' may also be the video signals decoded by video decoder 30. Y", Cb", and Cr" are the color components of the video signal after the DRA parameters are applied to each color component. As can be seen in the example above, each color component is associated with different scale and offset parameters. For example, scale1 and offset1 are used for the Y' component, scale2 and offset2 are used for the Cb' component, and scale3 and offset3 are used for the Cr' component. It should be understood that this is only an example. In other examples, the same scale and offset values may be used for each color component.
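The per-component mapping of equation (8), and its inverse used at the decoder side, can be sketched as follows. The function names and the dictionary layout are illustrative assumptions; the example scale values are the ones listed later in this description for the bt.709-to-bt.2020 case.

```python
import numpy as np

def apply_dra(y, cb, cr, params):
    """Apply the linear DRA of equation (8) to Y'CbCr components.
    params maps each component name to a (scale, offset) pair."""
    y2  = params["Y"][0]  * y  + params["Y"][1]
    cb2 = params["Cb"][0] * cb + params["Cb"][1]
    cr2 = params["Cr"][0] * cr + params["Cr"][1]
    return y2, cb2, cr2

def invert_dra(y2, cb2, cr2, params):
    """Inverse mapping used at the decoder side: (value - offset) / scale."""
    y  = (y2  - params["Y"][1])  / params["Y"][0]
    cb = (cb2 - params["Cb"][1]) / params["Cb"][0]
    cr = (cr2 - params["Cr"][1]) / params["Cr"][0]
    return y, cb, cr

# Illustrative parameters only: identity for luma, expansion for chroma.
dra_params = {"Y": (1.0, 0.0), "Cb": (1.0698, 0.0), "Cr": (2.1735, 0.0)}
print(apply_dra(0.5, -0.1, 0.2, dra_params))
```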
In other examples, each color component may be associated with multiple scale and offset parameters. For example, the actual distribution of chroma values for the Cr or Cb color component may be different in different portions of the codeword range. As one example, more unique codewords may be used above the center codeword (e.g., codeword 512) than below it. In such examples, adjustment unit 210 may be configured to apply one set of scale and offset parameters for chroma values above the center codeword (e.g., having values greater than the center codeword) and a different set of scale and offset parameters for chroma values below the center codeword (e.g., having values less than the center codeword).
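A sketch of this piecewise variant is given below. The split point, the parameter values, and the choice to scale about the center codeword (one reasonable way to keep the two pieces continuous) are assumptions for illustration, not details specified by this disclosure.

```python
import numpy as np

def apply_piecewise_dra(chroma, center, upper_params, lower_params):
    """Apply different (scale, offset) pairs above and below a center codeword.
    chroma: array of chroma samples; center: split codeword (e.g., 512)."""
    scale_u, offset_u = upper_params
    scale_l, offset_l = lower_params
    above = chroma > center
    out = np.empty_like(chroma, dtype=np.float64)
    # Scale about the center so the two pieces meet at the split point.
    out[above]  = scale_u * (chroma[above]  - center) + center + offset_u
    out[~above] = scale_l * (chroma[~above] - center) + center + offset_l
    return out

# Example with a 10-bit container: expand values above 512 more than below.
cb = np.array([100, 400, 512, 700, 900], dtype=np.float64)
print(apply_piecewise_dra(cb, 512, (1.2, 0.0), (1.05, 0.0)))
```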
As can be seen in the above example, the adjustment unit 210 applies the scale and offset DRA parameters as a linear function. As such, the adjustment unit 210 does not have to apply DRA parameters in the target color space after color conversion by the color conversion unit 208. This is because color conversion is itself a linear process. As such, in other examples, adjustment unit 210 may apply DRA parameters to video data in a native color space (e.g., RGB) prior to any color conversion process. In this example, color conversion unit 208 will apply color conversion after adjustment unit 210 applies the DRA parameters.
In another example of this disclosure, adjustment unit 210 may apply the DRA parameters in a target color space or a native color space as follows:
-Y”=(scale1*(Y'-offsetY)+offset1)+offsetY;
-Cb”=scale2*Cb'+offset2 (9)
-Cr”=scale3*Cr'+offset3
in this example, the parameters scale1, scale2, scale3, offset1, offset2, and offset3 are the same as described above. The parameter offsetY reflects the brightness of the signal and may be equal to the mean value of Y'.
In another example of the present disclosure, adjustment unit 210 may be configured to apply DRA parameters in a color space other than the native color space or the target color space. In general, adjustment unit 210 may be configured to apply DRA parameters as follows:
-X'=scale1*X+offset1;
-Y'=scale2*Y+offset2 (10)
-Z'=scale3*Z+offset3
where signal components X, Y and Z are signal components in a color space (e.g., RGB or intermediate color space) different from the target color space.
In other examples of this disclosure, adjustment unit 210 is configured to apply a linear transfer function to the video data to perform the DRA. Such a transfer function is different from the transfer function used by transfer function unit 206 to compress the dynamic range. Similar to the scale and offset terms defined above, the transfer function applied by adjustment unit 210 may be used to expand the color values and center them within the codewords available in the target color container. An example of applying a transfer function to perform DRA is shown below:
-Y”=TF2(Y')
-Cb”=TF2(Cb')
-Cr”=TF2(Cr')
the term TF2 denotes the transfer function applied by adjustment unit 210.
In another example of the disclosure, adjustment unit 210 may be configured to apply the DRA parameters jointly with the color conversion performed by color conversion unit 208, in a single process. That is, the linear functions of adjustment unit 210 and color conversion unit 208 may be combined into combined functions f1 and f2, where f1 and f2 are combinations of the RGB-to-YCbCr conversion matrices and the DRA scaling factors.
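A minimal sketch of folding the DRA scale factors into the color conversion is shown below. The particular RGB-to-YCbCr matrix (non-constant-luminance bt.2020 coefficients) and the scale/offset values are assumptions chosen for illustration, not values mandated by this disclosure.

```python
import numpy as np

# Assumed RGB -> Y'CbCr matrix (BT.2020 non-constant-luminance, illustrative).
M_RGB_TO_YCBCR = np.array([
    [ 0.2627,   0.6780,   0.0593 ],
    [-0.13963, -0.36037,  0.5    ],
    [ 0.5,     -0.45979, -0.04021],
])

dra_scales  = np.array([1.0, 1.0698, 2.1735])   # scale1..scale3 (illustrative)
dra_offsets = np.array([0.0, 0.0, 0.0])         # offset1..offset3

# Folding DRA into color conversion: one combined matrix, applied in one step.
M_COMBINED = np.diag(dra_scales) @ M_RGB_TO_YCBCR

def rgb_to_dra_ycbcr(rgb):
    """Single-step color conversion plus DRA (one matrix multiply and offset)."""
    return rgb @ M_COMBINED.T + dra_offsets

print(rgb_to_dra_ycbcr(np.array([0.5, 0.25, 0.75])))
```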
in another example of this disclosure, after applying the DRA parameters, adjustment unit 210 may be configured to perform a clipping process to prevent the video data from having values outside of the range of codewords specified for the particular target color container. In some cases, the scaling and offset parameters applied by adjustment unit 210 may cause some color component values to exceed allowable codeword ranges. In this case, the adjustment unit 210 may be configured to clip component values that exceed the range to a maximum value within the range.
DRA parameter estimation unit 212 may determine the DRA parameters that are applied by adjustment unit 210. How often DRA parameter estimation unit 212 updates the DRA parameters is flexible. For example, DRA parameter estimation unit 212 may update the DRA parameters at a temporal level. That is, new DRA parameters may be determined for a group of pictures (GOP) or for a single picture (frame). In this example, RGB native CG video data 200 may be a GOP or a single picture. In other examples, DRA parameter estimation unit 212 may update the DRA parameters at a spatial level (e.g., at a slice, tile, or block level). In this context, a block of video data may be a macroblock, a coding tree unit (CTU), a coding unit, or a block of any other size and shape. A block may be square, rectangular, or any other shape. Accordingly, the DRA parameters may be used for more efficient temporal and spatial prediction and coding.
In one example of the present invention, DRA parameter estimation unit 212 may derive DRA parameters based on a correspondence of a native color gamut of RGB native CG video data 200 and a color gamut of a target color container. For example, given a particular native color gamut (e.g., bt.709) and the color gamut of a target color container (e.g., bt.2020), DRA parameter estimation unit 212 may use a set of predefined rules to determine the scaling and offset values.
For example, assume that the native color gamut and the gamut of the target color container are defined in the form of primary color coordinates and white point coordinates in xy space. One example of such information for bt.709 and bt.2020 is shown in table 2 below.
TABLE 2 - RGB color space parameters

Color space   White point (xW, yW)   Primary colors (x, y)
BT.709        (0.3127, 0.3290)       R (0.64, 0.33), G (0.30, 0.60), B (0.15, 0.06)
BT.2020       (0.3127, 0.3290)       R (0.708, 0.292), G (0.170, 0.797), B (0.131, 0.046)
In one example, bt.2020 is the gamut of the target color container and bt.709 is the gamut of the native color container. In this example, the adjustment unit 210 applies the DRA parameters to the YCbCr target color space. DRA parameter estimation unit 212 may be configured to estimate DRA parameters and forward them to adjustment unit 210 as follows:
scale1=1; offset1=0;
scale2=1.0698; offset2=0;
scale3=2.1735; offset3=0;
as another example, where bt.2020 is the target color gamut and P3 is the native color gamut, and the DRA is applied in the YCbCr target color space, DRA parameter estimation unit 212 may be configured to estimate DRA parameters as follows:
scale1=1; offset1=0;
scale2=1.0068; offset2=0;
scale3=1.7913; offset3=0;
in the above example, given a particular native color gamut and a particular target color gamut, DRA parameter estimation unit 212 may be configured to determine the scale and offset values listed above by consulting a lookup table that indicates the DRA parameters to be used. In other examples, DRA parameter estimation unit 212 may be configured to calculate the DRA parameters from the primary color and white point values of the native and target color gamuts (e.g., as shown in table 2).
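One way the lookup-table approach mentioned above could be organized is sketched below; the dictionary keys and the stored values (the example scale/offset sets listed above) are illustrative, and a real implementation might instead index the table by signaled syntax elements.

```python
# Hypothetical lookup table keyed by (native gamut, target gamut).
# Each entry lists (scale, offset) for the Y', Cb', and Cr' components,
# using the example values given above for the YCbCr target color space.
DRA_LUT = {
    ("BT.709", "BT.2020"): ((1.0, 0.0), (1.0698, 0.0), (2.1735, 0.0)),
    ("P3",     "BT.2020"): ((1.0, 0.0), (1.0068, 0.0), (1.7913, 0.0)),
}

def lookup_dra_params(native_gamut, target_gamut):
    """Return per-component (scale, offset) pairs, or identity scaling if the
    gamut pair is not present in the table."""
    identity = ((1.0, 0.0), (1.0, 0.0), (1.0, 0.0))
    return DRA_LUT.get((native_gamut, target_gamut), identity)

print(lookup_dra_params("BT.709", "BT.2020"))
```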
For example, consider that the primary color coordinates (xXt, yXt) specify the target (T) color container, where X represents the R, G, and B color components, arranged as a matrix primeT whose rows correspond to the R, G, and B primaries and whose columns correspond to the x and y coordinates:

primeT = [ xRt yRt; xGt yGt; xBt yBt ]
and the primary color coordinates (xXn, yXn) specify the native (N) gamut, where X represents the R, G, and B color components:

primeN = [ xRn yRn; xGn yGn; xBn yBn ]
The white point coordinates of the two gamuts are equal: whiteP = (xW, yW). DRA parameter estimation unit 212 may derive the scale2 and scale3 parameters of the DRA from the distances between the primary color coordinates and the white point. An example of such an estimation is given below:
rdT=sqrt((primeT(1,1)-whiteP(1,1))^2+(primeT(1,2)-whiteP(1,2))^2)
gdT=sqrt((primeT(2,1)-whiteP(1,1))^2+(primeT(2,2)-whiteP(1,2))^2)
bdT=sqrt((primeT(3,1)-whiteP(1,1))^2+(primeT(3,2)-whiteP(1,2))^2)
rdN=sqrt((primeN(1,1)-whiteP(1,1))^2+(primeN(1,2)-whiteP(1,2))^2)
gdN=sqrt((primeN(2,1)-whiteP(1,1))^2+(primeN(2,2)-whiteP(1,2))^2)
bdN=sqrt((primeN(3,1)-whiteP(1,1))^2+(primeN(3,2)-whiteP(1,2))^2)
scale2=bdT/bdN
scale3=sqrt((rdT/rdN)^2+(gdT/gdN)^2)
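The derivation above can be sketched as follows, using the primeT/primeN layout described earlier and the bt.709 and bt.2020 primaries and D65 white point from table 2; the helper names are illustrative.

```python
import numpy as np

# Primary coordinates in xy (rows: R, G, B), from Table 2.
PRIME_BT2020 = np.array([[0.708, 0.292], [0.170, 0.797], [0.131, 0.046]])
PRIME_BT709  = np.array([[0.640, 0.330], [0.300, 0.600], [0.150, 0.060]])
WHITE_D65    = np.array([0.3127, 0.3290])

def derive_chroma_scales(prime_t, prime_n, white):
    """Derive scale2 and scale3 from the distances between each primary and
    the white point, following the equations above."""
    d_t = np.linalg.norm(prime_t - white, axis=1)   # rdT, gdT, bdT
    d_n = np.linalg.norm(prime_n - white, axis=1)   # rdN, gdN, bdN
    scale2 = d_t[2] / d_n[2]                        # bdT / bdN
    scale3 = np.hypot(d_t[0] / d_n[0], d_t[1] / d_n[1])
    return scale2, scale3

# Reproduces roughly scale2 = 1.0698 and scale3 = 2.1735 for BT.709 -> BT.2020.
print(derive_chroma_scales(PRIME_BT2020, PRIME_BT709, WHITE_D65))
```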
in some examples, DRA parameter estimation unit 212 may be configured to estimate DRA parameters by determining the primary color coordinates from the actual distribution of color values in RGB native CG video data 200 (rather than from the predefined primary color values of the native color gamut). That is, DRA parameter estimation unit 212 may be configured to analyze the actual colors present in RGB native CG video data 200 and to calculate the DRA parameters using the primary color values and white point determined from such analysis in the functions described above. Approximations of some of the parameters defined above may also be used as DRA parameters to simplify the calculation. For example, scale3 = 2.1735 may be approximated as scale3 = 2, which allows for easier implementation in some architectures.
In other examples of the present disclosure, DRA parameter estimation unit 212 may be configured to determine DRA parameters based not only on the color gamut of the target color container, but also on the target color space. The actual value distribution of the component values may differ from color space to color space. For example, the chromaticity value distribution may be different for a YCbCr color space with constant luminance as compared to a YCbCr color space with non-constant luminance. The DRA parameter estimation unit 212 may determine DRA parameters using the color distributions of the different color spaces.
In other examples of this disclosure, DRA parameter estimation unit 212 may be configured to derive the values of the DRA parameters so as to minimize certain cost functions associated with pre-processing and/or encoding the video data. As one example, DRA parameter estimation unit 212 may be configured to estimate DRA parameters that minimize the quantization error introduced by quantization unit 214 (e.g., see equation (4) above). DRA parameter estimation unit 212 may minimize such errors by performing quantization error tests on video data to which different sets of DRA parameters have been applied. DRA parameter estimation unit 212 may then select the set of DRA parameters that yields the lowest quantization error.
In another example, DRA parameter estimation unit 212 may select DRA parameters that minimize a cost function associated with both the DRA performed by adjustment unit 210 and the video encoding performed by video encoder 20. For example, DRA parameter estimation unit 212 may execute a DRA and encode video data with a plurality of different sets of DRA parameters. DRA parameter estimation unit 212 may then calculate a cost function for each DRA parameter set by forming a weighted sum of the bit rates resulting from DRA and video encoding and the distortion introduced by these two lossy processes. DRA parameter estimation unit 212 may then select a DRA parameter set that minimizes the cost function.
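A simplified sketch of such a search over candidate parameter sets follows; the candidate list, the Lagrangian weighting factor, and the encode callback are placeholders, since this disclosure does not prescribe a particular cost function.

```python
def select_dra_params(candidates, encode_fn, lagrange_lambda=0.5):
    """Pick the candidate DRA parameter set minimizing a weighted
    rate-distortion cost J = D + lambda * R.

    candidates: iterable of DRA parameter sets.
    encode_fn: callback that applies DRA plus encoding and returns
               (distortion, bitrate) for a given parameter set.
    """
    best_params, best_cost = None, float("inf")
    for params in candidates:
        distortion, bitrate = encode_fn(params)
        cost = distortion + lagrange_lambda * bitrate
        if cost < best_cost:
            best_params, best_cost = params, cost
    return best_params

# Toy usage with a fake encoder model: a larger chroma scale lowers distortion
# but costs slightly more bits.
fake_encode = lambda p: (1.0 / p, 0.8 * p)
print(select_dra_params([1.0, 1.5, 2.0, 2.5], fake_encode))
```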
In each of the DRA parameter estimation techniques above, DRA parameter estimation unit 212 may use information about each component separately to determine the DRA parameters for that component. In other examples, DRA parameter estimation unit 212 may determine DRA parameters using cross-component information. For example, the DRA parameters derived for the Cr component may be used to derive the DRA parameters for the Cb component.
In addition to deriving DRA parameters, DRA parameter estimation unit 212 may be configured to signal DRA parameters in an encoded bitstream. DRA parameter estimation unit 212 may signal one or more syntax elements that directly indicate DRA parameters, or may be configured to provide the one or more syntax elements to video encoder 20 for signaling. Such syntax elements for the parameters may be signaled in the bitstream such that video decoder 30 and/or inverse DRA unit 31 may perform an inverse of the process of DRA unit 19, reconstructing the video data in its native color container. Example techniques for signaling DRA parameters are discussed below.
In one example, DRA parameter estimation unit 212 may signal one or more syntax elements in an encoded video bitstream as metadata, in a Supplemental Enhancement Information (SEI) message, in Video Usability Information (VUI), in a Video Parameter Set (VPS), in a Sequence Parameter Set (SPS), in a picture parameter set, in a slice header, in a CTU header, or in any other syntax structure suitable for indicating DRA parameters for a size of video data (e.g., GOP, picture, block, macroblock, CTU, etc.).
In some examples, the one or more syntax elements explicitly indicate DRA parameters. For example, the one or more syntax elements may be various scaling and offset values for the DRA. In other examples, the one or more syntax elements may be one or more indices into a lookup table that includes scaling and offset values for the DRA. In yet another example, the one or more syntax elements may be an index into a lookup table that specifies a linear transfer function for the DRA.
In other examples, DRA parameters are not explicitly signaled; rather, both DRA unit 19 and inverse DRA unit 31 are configured to derive the DRA parameters using the same predefined process, based on the same information and/or characteristics of the video data that are discernible from the bitstream. As one example, the encoded bitstream may indicate to inverse DRA unit 31 the native color container of the video data and the target color container of the encoded video data. Inverse DRA unit 31 may then be configured to derive the DRA parameters from such information using the same processes as defined above. In some examples, one or more syntax elements that identify the native and target color containers are provided in a syntax structure. Such syntax elements may explicitly indicate a color container, or may be indices into a lookup table. In another example, DRA unit 19 may be configured to signal one or more syntax elements indicating the xy values of the primaries and the white point of a particular color container. In another example, DRA unit 19 may be configured to signal one or more syntax elements indicating the xy values of the primaries and the white point of the actual color values in the video data (the content primaries and the content white point), based on the analysis performed by DRA parameter estimation unit 212.
As one example, it is possible to signal the primaries of the minimum color gamut containing the colors in the content, and, at video decoder 30 and/or inverse DRA unit 31, to derive the DRA parameters using both the container primaries and the content primaries. In one example, the content primaries may be signaled using the x and y components of R, G, and B, as described above. In another example, a content primary may be signaled as a ratio between two sets of known primaries. For example, the content primaries may be signaled as linear positions between the bt.709 primaries and the bt.2020 primaries: xr_content = alfar*xr_bt709 + (1 - alfar)*xr_bt2020 (and similarly with alfag and alfab for the G and B components), where the parameter alfar specifies the ratio between the two known primary sets. In some examples, video encoder 20 and/or video decoder 30 may use the signaled and/or derived DRA parameters to facilitate weighted-prediction-based techniques for coding of HDR/WCG video data, as discussed below.
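A sketch of recovering content primaries from the signaled ratios described above is shown here, under the assumed convention that alfa = 1 reproduces the bt.709 primary and alfa = 0 the bt.2020 primary; the helper name and the example alfa values are illustrative.

```python
import numpy as np

PRIME_BT709  = np.array([[0.640, 0.330], [0.300, 0.600], [0.150, 0.060]])  # R, G, B
PRIME_BT2020 = np.array([[0.708, 0.292], [0.170, 0.797], [0.131, 0.046]])

def content_primaries_from_alfa(alfa):
    """Interpolate content primaries between BT.709 and BT.2020:
    prime_content = alfa * prime_bt709 + (1 - alfa) * prime_bt2020,
    with one alfa value per component (alfaR, alfaG, alfaB)."""
    alfa = np.asarray(alfa, dtype=np.float64).reshape(3, 1)
    return alfa * PRIME_BT709 + (1.0 - alfa) * PRIME_BT2020

# Content whose red primary is halfway between the two gamuts, with green and
# blue close to BT.709.
print(content_primaries_from_alfa([0.5, 0.9, 0.9]))
```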
In a video coding scheme using weighted prediction, the samples of a currently coded picture Sc are predicted (for uni-directional prediction) from the samples Sr of a reference picture by means of a weight (Wwp) and an offset (Owp), producing predicted samples Sp:

Sp = Sr*Wwp + Owp
In some instances where DRA is used, the DRA may use different parameters for the current picture and for the reference picture (i.e., {scale1_cur, offset1_cur} for the current picture and {scale1_ref, offset1_ref} for the reference picture) to process the reference samples and the samples of the currently coded picture. In such embodiments, the parameters for weighted prediction may be derived from the DRA parameters, for example:

Wwp = scale1_cur / scale1_ref
Owp = offset1_cur - offset1_ref
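The derivation of weighted-prediction parameters from per-picture DRA parameters can be sketched as follows; the tuple layout and the toy values are illustrative assumptions.

```python
def weighted_prediction_from_dra(dra_cur, dra_ref):
    """Derive the weighted-prediction weight and offset from the luma DRA
    parameters of the current and reference pictures, per the equations above.
    dra_cur, dra_ref: (scale1, offset1) tuples."""
    scale_cur, offset_cur = dra_cur
    scale_ref, offset_ref = dra_ref
    w_wp = scale_cur / scale_ref
    o_wp = offset_cur - offset_ref
    return w_wp, o_wp

def predict_sample(s_ref, w_wp, o_wp):
    """Uni-directional weighted prediction: Sp = Sr * Wwp + Owp."""
    return s_ref * w_wp + o_wp

w, o = weighted_prediction_from_dra((1.2, 10.0), (1.0, 4.0))
print(w, o, predict_sample(500.0, w, o))
```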
after adjustment unit 210 applies the DRA parameters, DRA unit 19 may then quantize the video data using quantization unit 214. The quantization unit 214 may operate in the same manner as described above with reference to fig. 4. After quantization, the video data is now adjusted in the target color space and target color gamut of the target color container of the HDR' data 316. The HDR' data 316 may then be sent to the video encoder 20 for compression.
FIG. 9 is a block diagram illustrating an example HDR/WCG inverse conversion apparatus in accordance with the techniques of this disclosure. As shown in fig. 9, inverse DRA unit 31 may be configured to apply an inverse of the techniques performed by DRA unit 19 of fig. 8. In other examples, the techniques of inverse DRA unit 31 may be incorporated in video decoder 30 and performed by video decoder 30.
In one example, video decoder 30 may be configured to decode video data encoded by video encoder 20. The decoded video data (HDR' data 316 in the target color container) is then forwarded to the inverse DRA unit 31. The inverse quantization unit 314 performs an inverse quantization process on the HDR' data 316 to reverse the quantization process performed by the quantization unit 214 of fig. 8.
Video decoder 30 may also be configured to decode any of the one or more syntax elements generated by DRA parameter estimation unit 212 of fig. 8 and send them to DRA parameter derivation unit 312 of inverse DRA unit 31. As described above, DRA parameter derivation unit 312 may be configured to determine the DRA parameters based on the one or more syntax elements. In some examples, the one or more syntax elements explicitly indicate the DRA parameters. In other examples, DRA parameter derivation unit 312 is configured to derive the DRA parameters using the same techniques as used by DRA parameter estimation unit 212 of fig. 8.
The parameters derived by the DRA parameter derivation unit 312 are sent to the inverse adjustment unit 310. The inverse adjustment unit 310 performs an inverse of the linear DRA adjustment performed by the adjustment unit 210 using the DRA parameters. Inverse adjustment unit 310 may apply the inverse of any of the adjustment techniques described above for adjustment unit 210. In addition, as with adjustment unit 210, inverse adjustment unit 310 may apply the inverse DRA before or after any inverse color conversion. As such, inverse adjustment unit 310 may apply DRA parameters on the video data in the target color container or the native color container.
An inverse color conversion unit 308 converts the video data from a target color space (e.g., YCbCr) to a native color space (e.g., RGB). Inverse transfer function 306 then applies the inverse of the transfer function applied by transfer function 206 to decompress the dynamic range of the video data. The resulting video data (RGB target CG 304) is still in the target color domain, but is now in the native dynamic range and native color space. Next, the inverse CG converter 302 converts the RGB target CG 304 into a native color gamut to reconstruct the RGB native CG 300.
In some examples, inverse DRA unit 31 may employ additional post-processing techniques. Application of the DRA may place the video data outside of its actual native color gamut. The quantization steps performed by quantization unit 214 and inverse quantization unit 314, and the up-sampling and down-sampling techniques performed by adjustment unit 210 and inverse adjustment unit 310, may cause the resulting color values in the native color container to fall outside the native color gamut. When the native color gamut is known (or, as described above, the actual minimum content primaries, in the case where they are signaled), then, as post-processing for the DRA, additional processes may be applied to RGB native CG video data 304 to transform the color values (e.g., RGB, or Cb and Cr) back into the desired color gamut. In other examples, such post-processing may be applied after quantization or after the DRA is applied.
Fig. 10 is a block diagram illustrating an example of a video encoder 20 that may implement the techniques of this disclosure. Video encoder 20 may perform intra and inter coding of video blocks within video slices in the target color container that have been processed by DRA unit 19. Intra-coding relies on spatial prediction to reduce or remove spatial redundancy in video within a given video frame or picture. Inter-coding relies on temporal prediction to reduce or remove temporal redundancy in video within adjacent frames or pictures of a video sequence. Intra-mode (I-mode) may refer to any of a number of spatial-based coding modes. Inter modes, such as uni-directional prediction (P-mode) or bi-directional prediction (B-mode), may refer to any of several temporally based coding modes.
As shown in fig. 10, video encoder 20 receives a current video block within a video frame to be encoded. In the example of fig. 10, video encoder 20 includes mode select unit 40, video data memory 41, decoded picture buffer 64, summer 50, transform processing unit 52, quantization unit 54, and entropy encoding unit 56. Mode select unit 40, in turn, includes motion compensation unit 44, motion estimation unit 42, intra-prediction processing unit 46, and partition unit 48. For video block reconstruction, video encoder 20 also includes inverse quantization unit 58, inverse transform processing unit 60, and summer 62. A deblocking filter (not shown in fig. 10) may also be included to filter block boundaries in order to remove blockiness artifacts from the reconstructed video. If desired, the deblocking filter would typically filter the output of summer 62. In addition to the deblocking filter, additional filters (in-loop or post-loop) may also be used. Such filters are not shown for simplicity, but, if desired, may filter the output of summer 50 (as an in-loop filter).
Video data memory 41 may store video data to be encoded by components of video encoder 20. The video data stored in video data memory 41 may be obtained, for example, from video source 18. Decoded picture buffer 64 may be a reference picture memory that stores reference video data for use by video encoder 20 when encoding video data, e.g., in intra or inter coding modes. Video data memory 41 and decoded picture buffer 64 may be formed from any of a variety of memory devices, such as Dynamic Random Access Memory (DRAM), including synchronous DRAM (sdram), magnetoresistive ram (mram), resistive ram (rram), or other types of memory devices. Video data memory 41 and decoded picture buffer 64 may be provided by the same memory device or separate memory devices. In various examples, video data memory 41 may be on-chip with other components of video encoder 20, or off-chip with respect to those components.
During the encoding process, video encoder 20 receives a video frame or slice to be coded. The frame or slice may be divided into a plurality of video blocks. Motion estimation unit 42 and motion compensation unit 44 may perform inter-predictive coding of the received video block relative to one or more blocks in one or more reference frames to provide temporal prediction. Intra-prediction processing unit 46 may alternatively perform intra-predictive coding of the received video block relative to one or more neighboring blocks in the same frame or slice as the block to be coded to provide spatial prediction. Video encoder 20 may perform, for example, multiple coding passes to select an appropriate coding mode for each block of video data.
Furthermore, partition unit 48 may partition the block of video data into sub-blocks based on an evaluation of the aforementioned partitioning scheme in the aforementioned coding pass. For example, partition unit 48 may initially partition a frame or slice into LCUs and partition each of the LCUs into sub-CUs based on a rate-distortion analysis (e.g., rate-distortion optimization). Mode select unit 40 may further generate a quadtree data structure indicating the partitioning of the LCU into sub-CUs. Leaf-node CUs of a quadtree may include one or more PUs and one or more TUs.
Mode select unit 40 may select one of the coding modes (intra or inter), e.g., based on the error results, and provide the resulting intra or inter coded block to summer 50 to generate residual block data and to summer 62 to reconstruct the encoded block for use as a reference frame. Mode select unit 40 also provides syntax elements (e.g., motion vectors, intra-mode indicators, partition information, and other such syntax information) to entropy encoding unit 56.
Motion estimation unit 42 and motion compensation unit 44 may be highly integrated, but are illustrated separately for conceptual purposes. Motion estimation performed by motion estimation unit 42 is the process of generating motion vectors, which estimate the motion of video blocks. For example, a motion vector may indicate the displacement of a PU of a video block within a current video frame or picture with respect to a predictive block within a reference frame (or other coded unit) relative to a current block being coded within the current frame (or other coded unit). Predictive blocks are blocks that are found to closely match the block to be coded in terms of pixel differences, which may be determined by Sum of Absolute Differences (SAD), Sum of Squared Differences (SSD), or other difference metrics. In some examples, video encoder 20 may calculate values for sub-integer pixel positions of reference pictures stored in decoded picture buffer 64. For example, video encoder 20 may interpolate values for a quarter-pixel position, an eighth-pixel position, or other fractional-pixel positions of a reference picture. Thus, motion estimation unit 42 may perform a motion search with respect to the full pixel positions and fractional pixel positions and output motion vectors with fractional pixel precision.
Motion estimation unit 42 calculates motion vectors for PUs of video blocks in inter-coded slices by comparing the locations of the PUs to the locations of predictive blocks of the reference picture. The reference picture may be selected from a first reference picture list (list 0) or a second reference picture list (list 1), each of which identifies one or more reference pictures stored in decoded picture buffer 64. Motion estimation unit 42 sends the calculated motion vectors to entropy encoding unit 56 and motion compensation unit 44.
The motion compensation performed by motion compensation unit 44 may involve extracting or generating a predictive block based on the motion vectors determined by motion estimation unit 42. Again, in some examples, motion estimation unit 42 and motion compensation unit 44 may be functionally integrated. Upon receiving the motion vector for the PU of the current video block, motion compensation unit 44 may locate, in one of the reference picture lists, the predictive block to which the motion vector points. As discussed below, summer 50 forms a residual video block by subtracting pixel values of the predictive block from pixel values of the current video block being coded, forming pixel difference values. In general, motion estimation unit 42 performs motion estimation with respect to the luma component, and motion compensation unit 44 uses motion vectors calculated based on the luma component for both the chroma and luma components. Mode select unit 40 may also generate syntax elements associated with the video blocks and the video slice for use by video decoder 30 in decoding the video blocks of the video slice.
As described above, intra-prediction processing unit 46 may intra-predict the current block as an alternative to the inter-prediction performed by motion estimation unit 42 and motion compensation unit 44. In particular, intra-prediction processing unit 46 may determine that the intra-prediction mode is used to encode the current block. In some examples, intra-prediction processing unit 46 may encode the current block using various intra-prediction modes, e.g., during independent encoding passes, and intra-prediction processing unit 46 (or mode selection unit 40, in some examples) may select an appropriate intra-prediction mode to use from the tested modes.
For example, intra-prediction processing unit 46 may calculate rate-distortion values for various tested intra-prediction modes using rate-distortion analysis and select the intra-prediction mode having the best rate-distortion characteristics among the tested modes. Rate-distortion analysis generally determines the amount of distortion (or error) between an encoded block and an original, unencoded block that was encoded to produce the encoded block, as well as the bit rate (that is, the number of bits) used to produce the encoded block. Intra-prediction processing unit 46 may calculate ratios from the distortion and rates for various encoded blocks to determine which intra-prediction mode exhibits the best rate-distortion value for the block.
Upon selecting the intra-prediction mode for the block, intra-prediction unit 46 may provide information indicating the selected intra-prediction mode for the block to entropy encoding unit 56. Entropy encoding unit 56 may encode information indicating the selected intra-prediction mode. Video encoder 20 may include configuration data in the transmitted bitstream, which may include a plurality of intra-prediction mode index tables and a plurality of modified intra-prediction mode index tables (also referred to as codeword mapping tables), definitions of contexts used for encoding various blocks, and an indication of the most probable intra-prediction mode, intra-prediction mode index table, and modified intra-prediction mode index table for each of the contexts.
Video encoder 20 forms a residual video block by subtracting the prediction data from mode select unit 40 from the original video block being coded. Summer 50 represents one or more components that perform this subtraction operation. Transform processing unit 52 applies a transform, such as a Discrete Cosine Transform (DCT) or a conceptually similar transform, to the residual block, producing a video block that includes residual transform coefficient values. Transform processing unit 52 may perform other transforms that are conceptually similar to DCT. Wavelet transforms, integer transforms, sub-band transforms, or other types of transforms may also be used. In any case, transform processing unit 52 applies the transform to the residual block, producing a block of residual transform coefficients. The transform may convert the residual information from a pixel value domain to a transform domain (e.g., frequency domain). Transform processing unit 52 may send the resulting transform coefficients to quantization unit 54.
Quantization unit 54 quantizes the transform coefficients to further reduce the bit rate. The quantization process may reduce the bit depth associated with some or all of the coefficients. The degree of quantization may be modified by adjusting the quantization parameter. In some examples, quantization unit 54 may then perform a scan of a matrix including quantized transform coefficients. Alternatively, entropy encoding unit 56 may perform the scan.
After quantization, entropy encoding unit 56 entropy codes the quantized transform coefficients. For example, entropy encoding unit 56 may perform Context Adaptive Variable Length Coding (CAVLC), Context Adaptive Binary Arithmetic Coding (CABAC), syntax-based context adaptive binary arithmetic coding (SBAC), Probability Interval Partition Entropy (PIPE) coding, or another entropy coding technique. In the case of context-based entropy coding, the context may be based on neighboring blocks. After entropy coding by entropy encoding unit 56, the encoded bitstream may be transmitted to another device (e.g., video decoder 30), or archived for later transmission or retrieval.
Inverse quantization unit 58 and inverse transform processing unit 60 apply inverse quantization and inverse transform, respectively, to reconstruct the residual block in the pixel domain, e.g., for later use as a reference block. Motion compensation unit 44 may calculate a reference block by adding the residual block to a predictive block of one of the frames of decoded picture buffer 64. Motion compensation unit 44 may also apply one or more interpolation filters to the reconstructed residual block to calculate sub-integer pixel values for motion estimation. Summer 62 adds the reconstructed residual block to the motion compensated prediction block produced by motion compensation unit 44, producing a reconstructed video block for storage in decoded picture buffer 64. Motion estimation unit 42 and motion compensation unit 44 may use the reconstructed video block as a reference block to inter-code a block in a subsequent video frame.
Fig. 11 is a block diagram illustrating an example of a video decoder 30 that may implement the techniques of this disclosure. In particular, as described above, video decoder 30 may decode the video data into a target color container, which inverse DRA unit 31 may then process. In the example of fig. 11, video decoder 30 includes an entropy decoding unit 70, a video data memory 71, a motion compensation unit 72, an intra prediction processing unit 74, an inverse quantization unit 76, an inverse transform processing unit 78, a decoded picture buffer 82, and a summer 80. Video decoder 30 may, in some examples, perform a decoding pass that is substantially reciprocal to the encoding pass described with respect to video encoder 20 (fig. 10). Motion compensation unit 72 may generate prediction data based on the motion vectors received from entropy decoding unit 70, while intra-prediction processing unit 74 may generate prediction data based on the intra-prediction mode indicator received from entropy decoding unit 70.
Video data memory 71 may store video data, such as an encoded video bitstream, to be decoded by components of video decoder 30. The video data stored in the video data memory 71 may be obtained, for example, from the computer-readable medium 16 (e.g., from a local video source, such as a camera), via wired or wireless network communication of video data, or by accessing a physical data storage medium. Video data memory 71 may form a Coded Picture Buffer (CPB) that stores encoded video data from an encoded video bitstream. Decoded picture buffer 82 may be a reference picture memory that stores reference video data for use by video decoder 30 in decoding video data, e.g., in intra or inter coding modes. Video data memory 71 and decoded picture buffer 82 may be formed from any of a variety of memory devices, such as Dynamic Random Access Memory (DRAM), including synchronous DRAM (sdram), magnetoresistive ram (mram), resistive ram (rram), or other types of memory devices. Video data memory 71 and decoded picture buffer 82 may be provided by the same memory device or separate memory devices. In various examples, video data memory 71 may be on-chip with other components of video decoder 30, or off-chip relative to those components.
During the decoding process, video decoder 30 receives an encoded video bitstream representing video blocks of an encoded video slice and associated syntax elements from video encoder 20. Entropy decoding unit 70 of video decoder 30 entropy decodes the bitstream to generate quantized coefficients, motion vectors, or intra-prediction mode indicators, among other syntax elements. Entropy decoding unit 70 forwards the motion vectors, as well as other syntax elements, to motion compensation unit 72. Video decoder 30 may receive syntax elements at the video slice level and/or the video block level.
When a video slice is coded as an intra-coded (I) slice, intra-prediction processing unit 74 may generate prediction data for the video block of the current video slice based on the signaled intra-prediction mode and data from previously decoded blocks of the current frame or picture. When a video frame is coded as an inter-coded (i.e., B or P) slice, motion compensation unit 72 generates a predictive block for the video block of the current video slice based on the motion vectors and other syntax elements received from entropy decoding unit 70. The predictive block may be generated from one of the reference pictures within one of the reference picture lists. Video decoder 30 may construct reference picture lists (list 0 and list1) using default construction techniques based on the reference pictures stored in decoded picture buffer 82. Motion compensation unit 72 determines prediction information for the video blocks of the current video slice by parsing the motion vectors and other syntax elements and uses the prediction information to generate predictive blocks for the current video block being decoded. For example, motion compensation unit 72 uses some of the received syntax elements to determine a prediction mode (e.g., intra or inter prediction) used to code the video blocks of the video slice, an inter-prediction slice type (e.g., B-slice or P-slice), construction information for one or more of the reference picture lists of the slice, a motion vector for each inter-coded video block of the slice, an inter-prediction state for each inter-coded video block of the slice, and other information used to decode the video blocks in the current video slice.
The motion compensation unit 72 may also perform interpolation based on the interpolation filter. Motion compensation unit 72 may calculate interpolated values for sub-integer pixels of the reference block using interpolation filters used by video encoder 20 during encoding of the video block. In this case, motion compensation unit 72 may determine the interpolation filter used by video encoder 20 from the received syntax elements and generate the predictive block using the interpolation filter.
Inverse quantization unit 76 inverse quantizes, i.e., dequantizes, the quantized transform coefficients provided in the bitstream and decoded by entropy decoding unit 70. The inverse quantization process may include use of a quantization parameter QPY calculated by video decoder 30 for each video block in the video slice to determine a degree of quantization and, likewise, a degree of inverse quantization that should be applied. Inverse transform processing unit 78 applies an inverse transform (e.g., an inverse DCT, an inverse integer transform, or a conceptually similar inverse transform process) to the transform coefficients in order to produce residual blocks in the pixel domain.
After motion compensation unit 72 generates the predictive block for the current video block based on the motion vector and other syntax elements, video decoder 30 forms a decoded video block by summing the residual block from inverse transform processing unit 78 with the corresponding predictive block generated by motion compensation unit 72. Summer 80 represents one or more components that perform this summation operation. Optionally, a deblocking filter may also be applied to filter the decoded blocks in order to remove blocking artifacts. Other loop filters (in or after the coding loop) may also be used to smooth pixel transitions or otherwise improve video quality. The decoded video blocks in a given frame or picture are then stored in decoded picture buffer 82, decoded picture buffer 82 storing reference pictures for subsequent motion compensation. Decoded picture buffer 82 also stores decoded video for later presentation on a display device (e.g., display device 32 of fig. 1).
FIG. 12 is a flow diagram illustrating an example HDR/WCG conversion process in accordance with the techniques of this disclosure. Source device 12 of fig. 1 (including one or more of DRA unit 19 and/or video encoder 20) may perform the techniques of fig. 12.
In one example of the present disclosure, source device 12 may be configured to: receiving video data relating to a first color container, the video data relating to the first color container being defined by a first color gamut and a first color space (1200); deriving one or more dynamic range adjustment parameters, the dynamic range adjustment parameters being based on characteristics of video data relating to a first color container (1210); and performing dynamic range adjustment (1220) on the video data according to the one or more dynamic range adjustment parameters. In the example of fig. 12, the video data is input video data prior to video encoding, where the first color container is a native color container, and where the second color container is a target color container. In one example, the video data is one of: a group of pictures of video data, a picture of video data, a macroblock of video data, a block of video data, or a coding unit of video data.
In one example of the present invention, the characteristic of the video data includes a first color gamut. In one example, source device 12 is configured to derive one or more dynamic range adjustment parameters based on a correspondence of a first color gamut of a first color container and a second color gamut of a second color container, the second color container defined by the second color gamut and a second color space.
In another example of this disclosure, source device 12 is configured to signal, in the encoded video bitstream, one or more syntax elements indicating the first color gamut and the second color container in accordance with one or more of: metadata, supplemental enhancement information messages, video availability information, video parameter sets, sequence parameter sets, picture parameters, slice headers, or CTU headers.
In another example of this disclosure, source device 12 is configured to signal, in the encoded video bitstream, one or more syntax elements explicitly indicating the dynamic range adjustment parameters in accordance with one or more of: metadata, supplemental enhancement information messages, video availability information, video parameter sets, sequence parameter sets, picture parameters, slice headers, or CTU headers.
In another example of this disclosure, the characteristics of the video data include luminance information, and source device 12 is configured to derive one or more dynamic range adjustment parameters based on the luminance information of the video data. In another example of this disclosure, the characteristic of the video data includes a color value, and source device 12 is configured to derive one or more dynamic range adjustment parameters based on the color value of the video data.
In another example of this disclosure, source device 12 is configured to derive the one or more dynamic range adjustment parameters by minimizing one of a quantization error associated with quantizing the video data or a cost function associated with encoding the video data.
In another example of this disclosure, the one or more dynamic range adjustment parameters include a scale and an offset for each color component of the video data, and source device 12 is further configured to adjust each color component of the video data according to a function of the scale and offset for each respective color component.
In another example of the disclosure, the one or more dynamic range parameters include a first transfer function, and source device 12 is further configured to apply the first transfer function to the video data.
FIG. 13 is a flow diagram illustrating an example HDR/WCG inverse conversion process in accordance with the techniques of this disclosure. Destination device 14 of fig. 1 (including one or more of inverse DRA unit 31 and/or video decoder 30) may perform the techniques of fig. 13.
In one example of the present disclosure, destination device 14 may be configured to: receiving video data relating to a first color container, the video data relating to the first color container being defined by a first color gamut and a first color space (1300); deriving one or more dynamic range adjustment parameters, the dynamic range adjustment parameters being based on characteristics of video data relating to a first color container (1310); and performing dynamic range adjustment (1320) on the video data according to the one or more dynamic range adjustment parameters. In the example of fig. 13, the video data is decoded video data, where the first color container is a target color container, and where the second color container is a native color container. In one example, the video data is one of: a group of pictures of video data, a picture of video data, a macroblock of video data, a block of video data, or a coding unit of video data.
In one example of this disclosure, the characteristics of the video data include a first color gamut, and destination device 14 may be configured to derive one or more dynamic range adjustment parameters based on a correspondence of a first color gamut of a first color container and a second color gamut of a second color container, the second color container being defined by the second color gamut and a second color space.
In another example of this disclosure, destination device 14 may be configured to receive one or more syntax elements indicating the first color gamut and the second color container, and derive one or more dynamic range adjustment parameters based on the received one or more syntax elements. In another example of this disclosure, destination device 14 may be configured to derive parameters for weighted prediction from one or more dynamic range adjustment parameters for the currently coded picture and the reference picture. In another example of this disclosure, destination device 14 may be configured to receive one or more syntax elements explicitly indicating dynamic range adjustment parameters.
In another example of this disclosure, the characteristics of the video data include luminance information, and destination device 14 is configured to derive one or more dynamic range adjustment parameters based on the luminance information of the video data. In another example of this disclosure, the characteristic of the video data includes a color value, and destination device 14 is configured to derive one or more dynamic range adjustment parameters based on the color value of the video data.
In another example of this disclosure, the one or more dynamic range adjustment parameters include a scale and an offset for each color component of the video data, and destination device 14 is further configured to adjust each color component of the video data according to a function of the scale and offset for each respective color component.
In another example of the disclosure, the one or more dynamic range parameters include a first transfer function, which destination device 14 is further configured to apply to the video data.
For purposes of illustration, certain aspects of the disclosure have been described with respect to extensions of the HEVC standard. However, the techniques described in this disclosure may be useful for other video coding processes, including other standard or proprietary video coding processes that have not yet been developed.
As described in this disclosure, a video coder may refer to a video encoder or a video decoder. Similarly, a video coding unit may refer to a video encoder or a video decoder. Likewise, video coding may refer to video encoding or video decoding, where applicable.
It should be recognized that depending on the example, certain acts or events of any of the techniques described herein may be performed in a different order, may be added, merged, or omitted altogether (e.g., not all described acts or events are necessarily required to practice the techniques). Further, in some examples, acts or events may be performed concurrently, e.g., through multi-threaded processing, interrupt processing, or multiple processors, rather than sequentially.
In one or more examples, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium and executed by a hardware-based processing unit. Computer-readable media may include computer-readable storage media, which corresponds to tangible media, such as data storage media or communication media, including any medium that facilitates transfer of a computer program from one place to another, such as in accordance with a communication protocol. In this manner, a computer-readable medium may generally correspond to (1) a non-transitory tangible computer-readable storage medium, or (2) a communication medium, such as a signal or carrier wave. A data storage medium may be any available medium that can be accessed by one or more computers or one or more processors to retrieve instructions, code and/or data structures for implementing the techniques described in this disclosure. The computer program product may include a computer-readable medium.
By way of example, and not limitation, such computer-readable storage media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, flash memory, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if the instructions are transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, Digital Subscriber Line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. It should be understood, however, that the computer-readable storage media and data storage media do not include connections, carrier waves, signals, or other transitory media, but are instead directed to tangible storage media that are not transitory. Disk and disc, as used herein, includes Compact Disc (CD), laser disc, optical disc, Digital Versatile Disc (DVD), floppy disk and blu-ray disc where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
The instructions may be executed by one or more processors, such as one or more Digital Signal Processors (DSPs), general purpose microprocessors, Application Specific Integrated Circuits (ASICs), field programmable logic arrays (FPGAs), or other equivalent integrated or discrete logic circuitry. Thus, the term "processor," as used herein may refer to any of the foregoing structure or any other structure suitable for implementation of the techniques described herein. In addition, in some aspects, the functionality described herein may be provided within dedicated hardware and/or software modules configured for encoding and decoding, or incorporated in a combined codec. Also, the techniques may be fully implemented in one or more circuits or logic elements.
The techniques of this disclosure may be implemented in a variety of devices or apparatuses, including a wireless handset, an Integrated Circuit (IC), or a set of ICs (e.g., a chipset). Various components, modules, or units are described in this disclosure to emphasize functional aspects of devices configured to perform the disclosed techniques, but do not necessarily require realization by different hardware units. Indeed, as described above, the various units may be combined in a codec hardware unit, in conjunction with suitable software and/or firmware, or provided by a collection of interoperative hardware units, including one or more processors as described above.
Various examples are described. These and other examples are within the scope of the following claims.

Claims (46)

1. A method of processing video data, the method comprising:
receiving video data relating to a first color container, the video data relating to the first color container being defined by a first color gamut and a first color space;
deriving one or more dynamic range adjustment parameters, the dynamic range adjustment parameters being based on characteristics of the video data relating to the first color container; and
performing dynamic range adjustment on the video data according to the one or more dynamic range adjustment parameters.
2. The method of claim 1, wherein the characteristics of the video data include the first color gamut, the method further comprising:
deriving the one or more dynamic range adjustment parameters based on a correspondence of the first color gamut of the first color container to a second color gamut of a second color container, the second color container being defined by the second color gamut and a second color space.
3. The method of claim 2, wherein the video data is video data input prior to video encoding, wherein the first color container is a native color container, and wherein the second color container is a target color container.
4. The method of claim 3, further comprising:
signaling, in an encoded video bitstream, one or more syntax elements indicating the first color gamut and the second color container in accordance with one or more of: metadata, supplemental enhancement information messages, video usability information, video parameter sets, sequence parameter sets, picture parameter sets, slice headers, or CTU headers.
5. The method of claim 2, wherein the video data is decoded video data, wherein the first color container is a target color container, and wherein the second color container is a native color container.
6. The method of claim 5, further comprising:
receiving one or more syntax elements indicating the first color gamut and the second color container; and
deriving the one or more dynamic range adjustment parameters based on the received one or more syntax elements.
7. The method of claim 6, further comprising:
deriving parameters for weighted prediction from the one or more dynamic range adjustment parameters for a currently coded picture and a reference picture.
8. The method of claim 2, further comprising:
signaling, in an encoded video bitstream, one or more syntax elements explicitly indicating the dynamic range adjustment parameters in accordance with one or more of: metadata, supplemental enhancement information messages, video usability information, video parameter sets, sequence parameter sets, picture parameter sets, slice headers, or CTU headers.
9. The method of claim 2, wherein deriving the one or more dynamic range adjustment parameters comprises:
receiving one or more syntax elements explicitly indicating the dynamic range adjustment parameters.
10. The method of claim 1, wherein the characteristics of the video data include luminance information, the method further comprising:
deriving the one or more dynamic range adjustment parameters based on the luminance information of the video data.
11. The method of claim 1, wherein the characteristics of the video data include color values, the method further comprising:
deriving the one or more dynamic range adjustment parameters based on the color values of the video data.
12. The method of claim 1, further comprising:
deriving the one or more dynamic range adjustment parameters by minimizing one of a quantization error associated with quantizing the video data or a cost function associated with encoding the video data.
13. The method of claim 1, wherein the one or more dynamic range adjustment parameters include a scale and an offset for each color component of the video data, the method further comprising:
adjusting each color component of the video data according to a function of the scale and the offset for each respective color component.
14. The method of claim 1, wherein the one or more dynamic range adjustment parameters include a first transfer function, the method further comprising:
applying the first transfer function to the video data.
15. The method of claim 1, wherein the video data is one of: a group of pictures of video data, a picture of video data, a macroblock of video data, a block of video data, or a coding unit of video data.
16. An apparatus configured to process video data, the apparatus comprising:
a memory configured to store the video data; and
one or more processors configured to:
receive the video data relating to a first color container, the video data relating to the first color container being defined by a first color gamut and a first color space;
derive one or more dynamic range adjustment parameters, the dynamic range adjustment parameters being based on characteristics of the video data relating to the first color container; and
perform dynamic range adjustment on the video data according to the one or more dynamic range adjustment parameters.
17. The apparatus of claim 16, wherein the characteristics of the video data include the first color gamut, and wherein the one or more processors are further configured to:
derive the one or more dynamic range adjustment parameters based on a correspondence of the first color gamut of the first color container to a second color gamut of a second color container, the second color container being defined by the second color gamut and a second color space.
18. The apparatus of claim 17, wherein the video data is video data input prior to video encoding, wherein the first color container is a native color container, and wherein the second color container is a target color container.
19. The apparatus of claim 18, wherein the one or more processors are further configured to:
signal, in an encoded video bitstream, one or more syntax elements indicating the first color gamut and the second color container in accordance with one or more of: metadata, supplemental enhancement information messages, video usability information, video parameter sets, sequence parameter sets, picture parameter sets, slice headers, or CTU headers.
20. The apparatus of claim 17, wherein the video data is decoded video data, wherein the first color container is a target color container, and wherein the second color container is a native color container.
21. The apparatus of claim 20, wherein the one or more processors are further configured to:
receive one or more syntax elements indicating the first color gamut and the second color container; and
derive the one or more dynamic range adjustment parameters based on the received one or more syntax elements.
22. The apparatus of claim 21, wherein the one or more processors are further configured to:
derive parameters for weighted prediction from the one or more dynamic range adjustment parameters for a currently coded picture and a reference picture.
23. The apparatus of claim 17, wherein the one or more processors are further configured to:
signal, in an encoded video bitstream, one or more syntax elements explicitly indicating the dynamic range adjustment parameters in accordance with one or more of: metadata, supplemental enhancement information messages, video usability information, video parameter sets, sequence parameter sets, picture parameter sets, slice headers, or CTU headers.
24. The apparatus of claim 17, wherein the one or more processors are further configured to:
receive one or more syntax elements explicitly indicating the dynamic range adjustment parameters.
25. The apparatus of claim 16, wherein the characteristics of the video data include luminance information, and wherein the one or more processors are further configured to:
derive the one or more dynamic range adjustment parameters based on the luminance information of the video data.
26. The apparatus of claim 16, wherein the characteristics of the video data include color values, and wherein the one or more processors are further configured to:
derive the one or more dynamic range adjustment parameters based on the color values of the video data.
27. The apparatus of claim 16, wherein the one or more processors are further configured to:
derive the one or more dynamic range adjustment parameters by minimizing one of a quantization error associated with quantizing the video data or a cost function associated with encoding the video data.
28. The apparatus of claim 16, wherein the one or more dynamic range adjustment parameters include a scale and an offset for each color component of the video data, and wherein the one or more processors are further configured to:
adjust each color component of the video data according to a function of the scale and the offset for each respective color component.
29. The apparatus of claim 16, wherein the one or more dynamic range adjustment parameters include a first transfer function, and wherein the one or more processors are further configured to:
apply the first transfer function to the video data.
30. The apparatus of claim 16, wherein the video data is one of: a group of pictures of video data, a picture of video data, a macroblock of video data, a block of video data, or a coding unit of video data.
31. An apparatus configured to process video data, the apparatus comprising:
means for receiving video data relating to a first color container, the video data relating to the first color container being defined by a first color gamut and a first color space;
means for deriving one or more dynamic range adjustment parameters, the dynamic range adjustment parameters being based on characteristics of the video data relating to the first color container; and
means for performing dynamic range adjustment on the video data according to the one or more dynamic range adjustment parameters.
32. The apparatus of claim 31, wherein the characteristics of the video data include the first color gamut, the apparatus further comprising:
means for deriving the one or more dynamic range adjustment parameters based on a correspondence of the first color gamut of the first color container to a second color gamut of a second color container, the second color container being defined by the second color gamut and a second color space.
33. The apparatus of claim 32, wherein the video data is video data input prior to video encoding, wherein the first color container is a native color container, and wherein the second color container is a target color container.
34. The apparatus of claim 33, further comprising:
means for signaling, in an encoded video bitstream, one or more syntax elements indicating the first color gamut and the second color container in accordance with one or more of: metadata, supplemental enhancement information messages, video usability information, video parameter sets, sequence parameter sets, picture parameter sets, slice headers, or CTU headers.
35. The apparatus of claim 32, wherein the video data is decoded video data, wherein the first color container is a target color container, and wherein the second color container is a native color container.
36. The apparatus of claim 35, further comprising:
means for receiving one or more syntax elements indicating the first color gamut and the second color container; and
means for deriving the one or more dynamic range adjustment parameters based on the received one or more syntax elements.
37. The apparatus of claim 36, further comprising:
means for deriving parameters for weighted prediction from the one or more dynamic range adjustment parameters for a currently coded picture and a reference picture.
38. The apparatus of claim 32, further comprising:
means for signaling, in an encoded video bitstream, one or more syntax elements explicitly indicating the dynamic range adjustment parameters in accordance with one or more of: metadata, supplemental enhancement information messages, video usability information, video parameter sets, sequence parameter sets, picture parameter sets, slice headers, or CTU headers.
39. The apparatus of claim 32, wherein the means for deriving the one or more dynamic range adjustment parameters comprises:
means for receiving one or more syntax elements explicitly indicating the dynamic range adjustment parameters.
40. The apparatus of claim 31, wherein the characteristics of the video data include luminance information, the apparatus further comprising:
means for deriving the one or more dynamic range adjustment parameters based on the luminance information of the video data.
41. The apparatus of claim 31, wherein the characteristics of the video data include color values, the apparatus further comprising:
means for deriving the one or more dynamic range adjustment parameters based on the color values of the video data.
42. The apparatus of claim 31, further comprising:
means for deriving the one or more dynamic range adjustment parameters by minimizing one of a quantization error associated with quantizing the video data or a cost function associated with encoding the video data.
43. The apparatus of claim 31, wherein the one or more dynamic range adjustment parameters include a scale and an offset for each color component of the video data, the apparatus further comprising:
means for adjusting each color component of the video data according to a function of the scale and the offset for each respective color component.
44. The apparatus of claim 31, wherein the one or more dynamic range adjustment parameters include a first transfer function, the apparatus further comprising:
means for applying the first transfer function to the video data.
45. The apparatus of claim 31, wherein the video data is one of: a group of pictures of video data, a picture of video data, a macroblock of video data, a block of video data, or a coding unit of video data.
46. A computer-readable storage medium storing instructions that, when executed, cause one or more processors to:
receive video data relating to a first color container, the video data relating to the first color container being defined by a first color gamut and a first color space;
derive one or more dynamic range adjustment parameters, the dynamic range adjustment parameters being based on characteristics of the video data relating to the first color container; and
perform dynamic range adjustment on the video data according to the one or more dynamic range adjustment parameters.
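
The three steps recited in claim 1 (receiving video data associated with a first color container, deriving dynamic range adjustment parameters from a characteristic of that data, and performing the adjustment) can be pictured as a short pipeline. The sketch below is a minimal illustration under assumptions the claim does not dictate: the characteristic used is simply the observed value range of each color component, the adjustment is a linear scale and offset, and every function and variable name is made up for the example.

```python
import numpy as np

def derive_dra_parameters(component, target_min=0.0, target_max=1.0):
    # Characteristic of the video data (an assumption for this sketch):
    # the observed minimum and maximum of the color component.
    observed_min = float(component.min())
    observed_max = float(component.max())
    span = max(observed_max - observed_min, 1e-9)  # guard against flat content
    scale = (target_max - target_min) / span
    offset = target_min - scale * observed_min
    return scale, offset

def apply_dra(component, scale, offset):
    # Dynamic range adjustment of one color component.
    return scale * component + offset

# Video data associated with a "first color container": three components in [0, 1).
frame = {name: np.random.rand(256, 256) for name in ("Y", "Cb", "Cr")}
params = {name: derive_dra_parameters(comp) for name, comp in frame.items()}
adjusted = {name: apply_dra(comp, *params[name]) for name, comp in frame.items()}
```

Deriving the parameters from the data itself, rather than fixing them in advance, is what allows the adjusted signal to occupy the representable range more fully before quantization and encoding.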
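
Claims 7 and 22 derive weighted-prediction parameters from the dynamic range adjustment parameters of a currently coded picture and a reference picture. One plausible reading, assuming a linear scale-and-offset adjustment applied per picture (an assumption of this sketch rather than a statement of the claimed method), is that the prediction weight compensates the ratio of the two scales and the prediction offset compensates the remaining shift:

```python
def weighted_prediction_params(scale_cur, offset_cur, scale_ref, offset_ref):
    # If both pictures were remapped as y = scale * x + offset from a common
    # original domain, a reference-domain sample y_ref predicts the
    # current-domain sample as weight * y_ref + wp_offset.
    weight = scale_cur / scale_ref
    wp_offset = offset_cur - weight * offset_ref
    return weight, wp_offset

# Example: the current picture was stretched more aggressively than its reference.
w, o = weighted_prediction_params(scale_cur=1.2, offset_cur=0.05,
                                  scale_ref=1.0, offset_ref=0.0)
y_ref = 0.5
y_pred = w * y_ref + o  # 1.2 * 0.5 + 0.05 = 0.65
```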
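
Claims 12, 27 and 42 allow the parameters to be derived by minimizing a quantization error or a coding cost function. The quantization-error branch can be pictured as a brute-force search over candidate scales, keeping the one whose quantize and de-quantize round trip gives the smallest mean-squared error. The uniform quantizer, the fixed bit depth, and the candidate grid below are illustrative assumptions only:

```python
import numpy as np

def quantization_mse(component, scale, bit_depth=10):
    # Mean-squared error after scaling, quantizing to bit_depth bits, and undoing the scale.
    max_code = (1 << bit_depth) - 1
    scaled = np.clip(component * scale, 0.0, 1.0)
    codes = np.round(scaled * max_code)
    reconstructed = (codes / max_code) / scale
    return float(np.mean((component - reconstructed) ** 2))

def derive_scale_by_min_quant_error(component, candidates):
    errors = [quantization_mse(component, s) for s in candidates]
    return float(candidates[int(np.argmin(errors))])

samples = 0.25 * np.random.rand(10000)  # content confined to a narrow range
best = derive_scale_by_min_quant_error(samples, np.linspace(0.5, 4.0, 36))
# Larger scales win here: they spread the narrow content over more code values.
```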
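
Claims 13, 28 and 43 recite a scale and an offset for each color component and an adjustment that is a function of the two. In the simplest linear reading, assumed here purely for illustration since the claims do not fix the form of that function, the forward adjustment is out = scale * in + offset with a clip to the working range, and the inverse is applied to decoded samples:

```python
import numpy as np

def dra_forward(component, scale, offset, lo=0.0, hi=1.0):
    # Forward adjustment of one color component, clipped to the working range.
    return np.clip(scale * component + offset, lo, hi)

def dra_inverse(component, scale, offset):
    # Inverse adjustment (exact wherever the forward pass did not clip).
    return (component - offset) / scale

# Each color component carries its own (scale, offset) pair.
params = {"Y": (0.9, 0.05), "Cb": (0.8, 0.10), "Cr": (0.8, 0.10)}
frame = {name: np.random.rand(4, 4) for name in params}
adjusted = {n: dra_forward(frame[n], *params[n]) for n in params}
restored = {n: dra_inverse(adjusted[n], *params[n]) for n in params}  # ~= frame
```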
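
Claims 14, 29 and 44 recite applying a first transfer function to the video data. A widely used transfer function in high dynamic range video is the SMPTE ST 2084 perceptual quantizer (PQ); the sketch below applies its inverse EOTF, which maps absolute linear luminance to an approximately perceptually uniform code value. PQ is offered only as a familiar example, not as the particular function intended by the claims.

```python
import numpy as np

# SMPTE ST 2084 (PQ) constants.
M1 = 2610 / 16384          # 0.1593017578125
M2 = 2523 / 4096 * 128     # 78.84375
C1 = 3424 / 4096           # 0.8359375
C2 = 2413 / 4096 * 32      # 18.8515625
C3 = 2392 / 4096 * 32      # 18.6875

def pq_inverse_eotf(linear_nits):
    # Map absolute linear luminance (cd/m^2, up to 10000) to a PQ code value in [0, 1].
    y = np.clip(np.asarray(linear_nits, dtype=np.float64) / 10000.0, 0.0, 1.0)
    y_m1 = np.power(y, M1)
    return np.power((C1 + C2 * y_m1) / (1.0 + C3 * y_m1), M2)

code = pq_inverse_eotf(100.0)  # ~0.51: a typical SDR peak sits near mid-range on the PQ scale
```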