
TWI878392B - Flexible signaling of QP offset for adaptive color transform in video coding


Info

Publication number
TWI878392B
TWI878392B (application TW109141561A)
Authority
TW
Taiwan
Prior art keywords
block
act
chroma
offset
residue
Prior art date
Application number
TW109141561A
Other languages
Chinese (zh)
Other versions
TW202127874A (en)
Inventor
黃翰
陳俊啟
艾達希克里斯南 拉瑪蘇拉莫尼安
瓦迪姆 賽萊金
錢偉榮
吉爾特 范德奧維拉
瑪塔 卡克基維克茲
Original Assignee
Qualcomm Incorporated
Priority date
Filing date
Publication date
Application filed by Qualcomm Incorporated
Publication of TW202127874A
Application granted
Publication of TWI878392B

Classifications

    • H04N19/105: Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
    • H04N19/12: Selection from among a plurality of transforms or standards, e.g. selection between discrete cosine transform [DCT] and sub-band transform or selection between H.263 and H.264
    • H04N19/124: Quantisation
    • H04N19/176: Adaptive coding characterised by the coding unit, the unit being an image region that is a block, e.g. a macroblock
    • H04N19/186: Adaptive coding characterised by the coding unit, the unit being a colour or a chrominance component
    • H04N19/61: Transform coding in combination with predictive coding
    • H04N19/70: Syntax aspects related to video coding, e.g. related to compression standards

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Discrete Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

A video decoder can be configured to determine that a block of the video data is encoded using an adaptive color transform (ACT); determine that the block is encoded in a joint chroma mode, wherein for the joint chroma mode a single chroma residual block is encoded for a first chroma component of the block and a second chroma component of the block; determine a quantization parameter (QP) for the block; determine an ACT quantization parameter (QP) offset for the block based on the block being encoded using the ACT and encoded in the joint chroma mode; and determine an ACT QP for the block based on the QP and the ACT QP offset.

Description

Flexible signaling of QP offset for adaptive color transform in video coding

This application claims the benefit of U.S. Provisional Patent Application No. 62/940,728, filed November 26, 2019, and U.S. Provisional Patent Application No. 62/954,318, filed December 27, 2019, the entire contents of both of which are hereby incorporated by reference.

This disclosure relates to video encoding and video decoding.

Digital video capabilities can be incorporated into a wide variety of devices, including digital televisions, digital direct broadcast systems, wireless broadcast systems, personal digital assistants (PDAs), laptop or desktop computers, tablet computers, e-book readers, digital cameras, digital recording devices, digital media players, video gaming devices, video game consoles, cellular or satellite radio telephones (so-called "smart phones"), video teleconferencing devices, video streaming devices, and the like. Digital video devices implement video coding techniques, such as those described in the standards defined by MPEG-2, MPEG-4, ITU-T H.263, ITU-T H.264/MPEG-4 Part 10 (Advanced Video Coding (AVC)), ITU-T H.265/High Efficiency Video Coding (HEVC), and extensions of such standards. By implementing such video coding techniques, video devices can transmit, receive, encode, decode, and/or store digital video information more efficiently.

Video coding techniques include spatial (intra-picture) prediction and/or temporal (inter-picture) prediction to reduce or remove redundancy inherent in video sequences. For block-based video coding, a video slice (e.g., a video picture or a portion of a video picture) may be partitioned into video blocks, which may also be referred to as coding tree units (CTUs), coding units (CUs), and/or coding nodes. Video blocks in an intra-coded (I) slice of a picture are encoded using spatial prediction with respect to reference samples in neighboring blocks in the same picture. Video blocks in an inter-coded (P or B) slice of a picture may use spatial prediction with respect to reference samples in neighboring blocks in the same picture or temporal prediction with respect to reference samples in other reference pictures. A picture may be referred to as a frame, and a reference picture may be referred to as a reference frame.

This disclosure describes techniques for coding blocks of video data using both an adaptive color transform (ACT) and a joint chroma mode, which may provide improved coding efficiency compared to existing techniques for using ACT in conjunction with a joint chroma mode. As will be explained in more detail below, when ACT is used, a video encoder and a video decoder apply an offset to a quantization parameter (QP) value to determine an ACT QP value. The video encoder and the video decoder then use the ACT QP value to quantize and dequantize transform coefficients. This disclosure describes techniques for determining the ACT QP offset that may improve overall video coding efficiency for coding scenarios that use ACT in conjunction with a joint chroma mode. More specifically, by determining an ACT QP offset for a block based on the block being coded using ACT and coded in the joint chroma mode, and determining an ACT QP for the block based on the QP and the ACT QP offset, the techniques of this disclosure may improve the overall coding quality of video data in coding scenarios that use both ACT and the joint chroma mode.

According to one example, a method of decoding video data includes: determining that a block of the video data is encoded using an adaptive color transform (ACT); determining that the block is encoded in a joint chroma mode, wherein for the joint chroma mode a single chroma residual block is encoded for a first chroma component of the block and a second chroma component of the block; determining a quantization parameter (QP) for the block; determining an ACT quantization parameter (QP) offset for the block based on the block being encoded using the ACT and encoded in the joint chroma mode; determining an ACT QP for the block based on the QP and the ACT QP offset; determining the single chroma residual block based on the ACT QP for the block; determining, from the single chroma residual block, a first chroma residual block for the first chroma component, wherein the first chroma residual block is in a first color space; determining, from the single chroma residual block, a second chroma residual block for the second chroma component, wherein the second chroma residual block is in the first color space; performing an inverse ACT on the first chroma residual block to convert the first chroma residual block to a second color space; and performing the inverse ACT on the second chroma residual block to convert the second chroma residual block to the second color space.
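To make the order of these decoder-side operations concrete, the following Python sketch walks through the same steps for one block. It is illustrative only: the codec-specific operations (inverse quantization, inverse transform, joint chroma derivation, and inverse ACT) are passed in as assumed callables rather than an actual codec API.

```python
def decode_joint_chroma_with_act(joint_coeffs, qp, act_qp_offset_joint,
                                 inverse_quantize, inverse_transform,
                                 derive_joint_chroma, inverse_act):
    """Sketch of the decoder-side steps for a block coded with ACT and joint chroma mode.

    Only the ordering of the steps is fixed here; the callables stand in for
    the codec's own implementations.
    """
    # The ACT QP offset applies because the block is coded with ACT *and*
    # in joint chroma mode; the ACT QP combines it with the block QP.
    act_qp = qp + act_qp_offset_joint

    # Decode the single jointly coded chroma residual block using the ACT QP.
    joint_residual = inverse_transform(inverse_quantize(joint_coeffs, act_qp))

    # Derive one residual block per chroma component; both are still in the
    # first (ACT) color space at this point.
    first_residual, second_residual = derive_joint_chroma(joint_residual)

    # The inverse ACT converts each chroma residual block to the second color space.
    return inverse_act(first_residual), inverse_act(second_residual)
```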

According to one example, a device for decoding video data includes a memory configured to store video data and one or more processors implemented in circuitry and configured to: determine that a block of the video data is encoded using an adaptive color transform (ACT); determine that the block is encoded in a joint chroma mode, wherein for the joint chroma mode a single chroma residual block is encoded for a first chroma component of the block and a second chroma component of the block; determine a quantization parameter (QP) for the block; determine an ACT quantization parameter (QP) offset for the block based on the block being encoded using the ACT and encoded in the joint chroma mode; determine an ACT QP for the block based on the QP and the ACT QP offset; determine the single chroma residual block based on the ACT QP for the block; determine, from the single chroma residual block, a first chroma residual block for the first chroma component, wherein the first chroma residual block is in a first color space; determine, from the single chroma residual block, a second chroma residual block for the second chroma component, wherein the second chroma residual block is in the first color space; perform an inverse ACT on the first chroma residual block to convert the first chroma residual block to a second color space; and perform the inverse ACT on the second chroma residual block to convert the second chroma residual block to the second color space.

According to another example, an apparatus for decoding video data includes: means for determining that a block of the video data is encoded using an adaptive color transform (ACT); means for determining that the block is encoded in a joint chroma mode, wherein for the joint chroma mode a single chroma residual block is encoded for a first chroma component of the block and a second chroma component of the block; means for determining a quantization parameter (QP) for the block; means for determining an ACT quantization parameter (QP) offset for the block based on the block being encoded using the ACT and encoded in the joint chroma mode; means for determining an ACT QP for the block based on the QP and the ACT QP offset; means for determining the single chroma residual block based on the ACT QP for the block; means for determining, from the single chroma residual block, a first chroma residual block for the first chroma component, wherein the first chroma residual block is in a first color space; means for determining, from the single chroma residual block, a second chroma residual block for the second chroma component, wherein the second chroma residual block is in the first color space; means for performing an inverse ACT on the first chroma residual block to convert the first chroma residual block to a second color space; and means for performing the inverse ACT on the second chroma residual block to convert the second chroma residual block to the second color space.

According to another example, a computer-readable storage medium stores instructions that, when executed by one or more processors, cause the one or more processors to: determine that a block of video data is encoded using an adaptive color transform (ACT); determine that the block is encoded in a joint chroma mode, wherein for the joint chroma mode a single chroma residual block is encoded for a first chroma component of the block and a second chroma component of the block; determine a quantization parameter (QP) for the block; determine an ACT quantization parameter (QP) offset for the block based on the block being encoded using the ACT and encoded in the joint chroma mode; determine an ACT QP for the block based on the QP and the ACT QP offset; determine the single chroma residual block based on the ACT QP for the block; determine, from the single chroma residual block, a first chroma residual block for the first chroma component, wherein the first chroma residual block is in a first color space; determine, from the single chroma residual block, a second chroma residual block for the second chroma component, wherein the second chroma residual block is in the first color space; perform an inverse ACT on the first chroma residual block to convert the first chroma residual block to a second color space; and perform the inverse ACT on the second chroma residual block to convert the second chroma residual block to the second color space.

According to another example, a method of encoding video data includes: determining a first chroma residual block for a first chroma component of a block of video data; determining a second chroma residual block for a second chroma component of the block of video data, wherein the first chroma residual block and the second chroma residual block are in a first color space; determining that the block of the video data is encoded using an adaptive color transform (ACT); performing the ACT on the first chroma residual block to convert the first chroma residual block to a second color space; performing an inverse ACT on the second chroma residual block to convert the second chroma residual block to the second color space; determining that the block of the video data is encoded in a joint chroma mode, wherein for the joint chroma mode a single chroma residual block is encoded for the first chroma component of the block and the second chroma component of the block; determining the single chroma residual block based on the converted first chroma residual block and the converted second chroma residual block; determining a quantization parameter (QP) for the block; determining an ACT quantization parameter (QP) offset for the block based on the block being encoded using the ACT and encoded in the joint chroma mode; determining an ACT QP for the block based on the QP and the ACT QP offset; and quantizing the single chroma residual block based on the ACT QP for the block.
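The corresponding encoder-side ordering can be sketched the same way. As before, the codec-specific operations are passed in as assumed callables, and this simplified sketch applies the forward color transform to both chroma residual blocks before forming the joint residual.

```python
def encode_joint_chroma_with_act(first_residual, second_residual, qp, act_qp_offset_joint,
                                 forward_act, derive_joint_residual, transform, quantize):
    """Sketch of the encoder-side steps for a block coded with ACT and joint chroma mode."""
    # Convert both chroma residual blocks from the first color space to the
    # second (ACT) color space.
    first_converted = forward_act(first_residual)
    second_converted = forward_act(second_residual)

    # Joint chroma mode: a single chroma residual block is coded for both
    # chroma components.
    joint_residual = derive_joint_residual(first_converted, second_converted)

    # The ACT QP offset applies because the block uses ACT *and* joint chroma mode.
    act_qp = qp + act_qp_offset_joint

    # Transform and quantize the single chroma residual block with the ACT QP.
    return quantize(transform(joint_residual), act_qp)
```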

According to another example, a device for encoding video data includes a memory configured to store video data and one or more processors implemented in circuitry and configured to: determine a first chroma residual block for a first chroma component of a block of video data; determine a second chroma residual block for a second chroma component of the block of video data, wherein the first chroma residual block and the second chroma residual block are in a first color space; determine that the block of the video data is encoded using an adaptive color transform (ACT); perform the ACT on the first chroma residual block to convert the first chroma residual block to a second color space; perform an inverse ACT on the second chroma residual block to convert the second chroma residual block to the second color space; determine that the block of the video data is encoded in a joint chroma mode, wherein for the joint chroma mode a single chroma residual block is encoded for the first chroma component of the block and the second chroma component of the block; determine the single chroma residual block based on the converted first chroma residual block and the converted second chroma residual block; determine a quantization parameter (QP) for the block; determine an ACT quantization parameter (QP) offset for the block based on the block being encoded using the ACT and encoded in the joint chroma mode; determine an ACT QP for the block based on the QP and the ACT QP offset; and quantize the single chroma residual block based on the ACT QP for the block.

According to another example, an apparatus for encoding video data includes: means for determining a first chroma residual block for a first chroma component of a block of video data; means for determining a second chroma residual block for a second chroma component of the block of video data, wherein the first chroma residual block and the second chroma residual block are in a first color space; means for determining that the block of the video data is encoded using an adaptive color transform (ACT); means for performing the ACT on the first chroma residual block to convert the first chroma residual block to a second color space; means for performing an inverse ACT on the second chroma residual block to convert the second chroma residual block to the second color space; means for determining that the block of the video data is encoded in a joint chroma mode, wherein for the joint chroma mode a single chroma residual block is encoded for the first chroma component of the block and the second chroma component of the block; means for determining the single chroma residual block based on the converted first chroma residual block and the converted second chroma residual block; means for determining a quantization parameter (QP) for the block; means for determining an ACT quantization parameter (QP) offset for the block based on the block being encoded using the ACT and encoded in the joint chroma mode; means for determining an ACT QP for the block based on the QP and the ACT QP offset; and means for quantizing the single chroma residual block based on the ACT QP for the block.

According to another example, a computer-readable storage medium stores instructions that, when executed by one or more processors, cause the one or more processors to: determine a first chroma residual block for a first chroma component of a block of video data; determine a second chroma residual block for a second chroma component of the block of video data, wherein the first chroma residual block and the second chroma residual block are in a first color space; determine that the block of the video data is encoded using an adaptive color transform (ACT); perform the ACT on the first chroma residual block to convert the first chroma residual block to a second color space; perform an inverse ACT on the second chroma residual block to convert the second chroma residual block to the second color space; determine that the block of the video data is encoded in a joint chroma mode, wherein for the joint chroma mode a single chroma residual block is encoded for the first chroma component of the block and the second chroma component of the block; determine the single chroma residual block based on the converted first chroma residual block and the converted second chroma residual block; determine a quantization parameter (QP) for the block; determine an ACT quantization parameter (QP) offset for the block based on the block being encoded using the ACT and encoded in the joint chroma mode; determine an ACT QP for the block based on the QP and the ACT QP offset; and quantize the single chroma residual block based on the ACT QP for the block.

The details of one or more examples of this disclosure are set forth in the accompanying drawings and the description below. Other features, objects, and advantages of this disclosure will be apparent from the description, drawings, and claims.

Video coding (e.g., video encoding and/or video decoding) typically involves predicting a block of video data either from an already coded block of video data in the same picture (e.g., intra prediction) or from an already coded block of video data in a different picture (e.g., inter prediction). In some instances, the video encoder also calculates residual data by comparing the prediction block to the original block. The residual data thus represents the difference between the prediction block and the original block. To reduce the number of bits needed to signal the residual data, the video encoder transforms and quantizes the residual data and signals the transformed and quantized residual data in the encoded bit stream. The compression achieved by the transform and quantization processes may be lossy, meaning that they may introduce distortion into the decoded video data. The amount of quantization is controlled by a quantization parameter (QP). In some instances, prior to transform and quantization, the video encoder may also apply an adaptive color transform (ACT) to the residual data to convert the residual data from a first color space to a second color space. For example, ACT may be used in coding scenarios where the residual data can be coded more efficiently in the second color space than in the first color space.
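As a rough illustration of where the ACT and the QP fit in this pipeline, the sketch below converts an RGB residual sample to a YCgCo-style representation and shows the usual rule of thumb relating QP to quantization step size. The transform shown is a commonly cited ACT choice and the step-size formula is an approximation; neither is presented as the normative design.

```python
def forward_act_ycgco(r, g, b):
    """Convert one RGB residual sample to a YCgCo-style residual triplet.

    A commonly used ACT choice; a particular standard may use a slightly
    different (e.g., lossless YCgCo-R) variant.
    """
    y = (r + 2 * g + b) / 4.0
    cg = (2 * g - r - b) / 4.0
    co = (r - b) / 2.0
    return y, cg, co


def approx_quant_step(qp):
    # Rule of thumb in HEVC/VVC-style codecs: the quantization step size
    # roughly doubles for every increase of 6 in QP, so a negative ACT QP
    # offset (e.g., -5) substantially reduces the step size.
    return 2.0 ** ((qp - 4) / 6.0)
```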

A video decoder performs inverse quantization, an inverse transform, and an inverse ACT to decode the residual data, and then adds the decoded residual data to the prediction block to produce a reconstructed video block that matches the original video block more closely than the prediction block alone. Due to the loss introduced by the transform and quantization of the residual data, this first reconstructed block may have distortion or artifacts. One common type of artifact or distortion is referred to as blockiness, in which the boundaries of the blocks used to code the video data are visible.
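The reconstruction described above can be summarized as follows. The sketch assumes NumPy-style arrays and takes the inverse transform and inverse ACT as callables, since their exact form depends on the codec; the step-size formula is the same approximation used earlier.

```python
import numpy as np

def reconstruct_block(prediction, quantized_coeffs, qp, inverse_transform, inverse_act):
    """Sketch of decoder reconstruction: dequantize, inverse transform, inverse ACT, add prediction."""
    # Inverse quantization, using the approximate step size implied by the QP.
    step = 2.0 ** ((qp - 4) / 6.0)
    coeffs = quantized_coeffs.astype(np.float64) * step

    # Inverse transform back to residual samples, then inverse ACT back to
    # the original color space.
    residual = inverse_act(inverse_transform(coeffs))

    # Reconstruction is the prediction block plus the decoded residual.
    return prediction + residual
```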

To further improve the quality of the decoded video, the video decoder may perform one or more filtering operations on the reconstructed video block. Examples of these filtering operations include deblocking filtering, sample adaptive offset (SAO) filtering, and adaptive loop filtering (ALF). The parameters for these filtering operations may be determined by the video encoder and explicitly signaled in the encoded video bit stream, or they may be implicitly determined by the video decoder without being explicitly signaled in the encoded video bit stream.

As will be explained in greater detail below, video data is typically coded as a block of luma samples and two corresponding blocks of chroma samples. Video data may be coded in a joint chroma mode (also referred to as a joint CbCr mode), in which the video encoder encodes a single chroma residual block for the two corresponding blocks of chroma residual samples, and the video decoder then derives the two corresponding blocks of chroma residual samples from the single chroma residual block.
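One simple way to derive the two chroma residual blocks from the jointly coded block is sketched below. The modes, weights, and sign handling are loosely modeled on the joint CbCr idea and are assumptions used for illustration; a real codec defines its own derivation rules.

```python
import numpy as np

def derive_chroma_residuals(joint_residual: np.ndarray, mode: int, c_sign: int = -1):
    """Derive Cb and Cr residual blocks from a single jointly coded residual block."""
    if mode == 1:
        # Cb carries the joint residual; Cr is a halved, sign-adjusted copy.
        cb, cr = joint_residual, c_sign * joint_residual // 2
    elif mode == 2:
        # Both components share the joint residual up to a sign.
        cb, cr = joint_residual, c_sign * joint_residual
    else:
        # Cr carries the joint residual; Cb is the halved copy.
        cb, cr = c_sign * joint_residual // 2, joint_residual
    return cb, cr
```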

This disclosure describes techniques for coding blocks of video data using both ACT and a joint chroma mode (e.g., a joint CbCr mode), which may provide improved coding efficiency compared to existing techniques for using ACT in conjunction with a joint chroma mode. As will be explained in more detail below, when ACT is used, a video encoder and a video decoder apply an offset to a QP value to determine an ACT QP value. The video encoder and the video decoder then use the ACT QP value to quantize and dequantize transform coefficients. This disclosure describes techniques for determining improved ACT QP offsets that may improve overall video coding efficiency for coding scenarios that use ACT in conjunction with a joint chroma mode. More specifically, by determining an ACT QP offset for a block based on the block being coded using ACT and coded in the joint chroma mode, and determining an ACT QP for the block based on the QP and the ACT QP offset, the techniques of this disclosure may improve the overall coding quality of video data in coding scenarios that use both ACT and the joint chroma mode.
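A minimal sketch of the offset selection this disclosure is concerned with is shown below. The numeric offsets in the table are hypothetical placeholders used only to illustrate that a jointly coded chroma block may be given its own ACT QP offset; they are not values taken from any specification.

```python
# Hypothetical per-component ACT QP offsets; only the selection logic matters here.
ACT_QP_OFFSETS = {
    "Y": -5,
    "Cb": -5,
    "Cr": -3,
    "CbCr_joint": -4,  # separate offset when ACT and joint chroma mode are combined
}

def derive_act_qp(base_qp, component, uses_act, joint_chroma_mode):
    """Return the QP actually used for (de)quantization of this component."""
    if not uses_act:
        return base_qp
    key = "CbCr_joint" if (joint_chroma_mode and component in ("Cb", "Cr")) else component
    return base_qp + ACT_QP_OFFSETS[key]
```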

FIG. 1 is a block diagram illustrating an example video encoding and decoding system 100 that may perform the techniques of this disclosure. In general, the techniques of this disclosure are directed to coding (encoding and/or decoding) video data. In general, video data includes any data for processing video. Thus, video data may include raw, unencoded video, encoded video, decoded (e.g., reconstructed) video, and video metadata, such as signaling data.

As shown in FIG. 1, in this example, system 100 includes a source device 102 that provides encoded video data to be decoded and displayed by a destination device 116. In particular, source device 102 provides the video data to destination device 116 via a computer-readable medium 110. Source device 102 and destination device 116 may comprise any of a wide range of devices, including desktop computers, notebook (i.e., laptop) computers, mobile devices, tablet computers, set-top boxes, telephone handsets such as smart phones, televisions, cameras, display devices, digital media players, video game consoles, video streaming devices, broadcast receiver devices, or the like. In some cases, source device 102 and destination device 116 may be equipped for wireless communication and may thus be referred to as wireless communication devices.

In the example of FIG. 1, source device 102 includes video source 104, memory 106, video encoder 200, and output interface 108. Destination device 116 includes input interface 122, video decoder 300, memory 120, and display device 118. In accordance with this disclosure, video encoder 200 of source device 102 and video decoder 300 of destination device 116 may be configured to apply the techniques for flexible signaling of a QP offset for ACT. Thus, source device 102 represents an example of a video encoding device, while destination device 116 represents an example of a video decoding device. In other examples, a source device and a destination device may include other components or arrangements. For example, source device 102 may receive video data from an external video source, such as an external camera. Likewise, destination device 116 may interface with an external display device rather than include an integrated display device.

System 100 as shown in FIG. 1 is merely one example. In general, any digital video encoding and/or decoding device may perform the techniques for flexible signaling of a QP offset for ACT. Source device 102 and destination device 116 are merely examples of such coding devices, in which source device 102 generates encoded video data for transmission to destination device 116. This disclosure refers to a "coding" device as a device that performs coding (encoding and/or decoding) of data. Thus, video encoder 200 and video decoder 300 represent examples of coding devices, in particular, a video encoder and a video decoder, respectively. In some examples, source device 102 and destination device 116 may operate in a substantially symmetrical manner such that each of source device 102 and destination device 116 includes video encoding and decoding components. Hence, system 100 may support one-way or two-way video transmission between source device 102 and destination device 116, e.g., for video streaming, video playback, video broadcasting, or video telephony.

In general, video source 104 represents a source of video data (i.e., raw, unencoded video data) and provides a sequential series of pictures (also referred to as "frames") of the video data to video encoder 200, which encodes the data for the pictures. Video source 104 of source device 102 may include a video capture device, such as a video camera, a video archive containing previously captured raw video, and/or a video feed interface for receiving video from a video content provider. As a further alternative, video source 104 may generate computer graphics-based data as the source video, or a combination of live video, archived video, and computer-generated video. In each case, video encoder 200 may encode the captured, pre-captured, or computer-generated video data. Video encoder 200 may rearrange the pictures from the received order (sometimes referred to as "display order") into a coding order for coding. Video encoder 200 may generate a bit stream including the encoded video data. Source device 102 may then output the encoded video data via output interface 108 onto computer-readable medium 110 for reception and/or retrieval by, for example, input interface 122 of destination device 116.

Memory 106 of source device 102 and memory 120 of destination device 116 represent general purpose memories. In some examples, memories 106, 120 may store raw video data, e.g., raw video from video source 104 and raw, decoded video data from video decoder 300. Additionally or alternatively, memories 106, 120 may store software instructions executable by, for example, video encoder 200 and video decoder 300, respectively. Although memory 106 and memory 120 are shown separately from video encoder 200 and video decoder 300 in this example, it should be understood that video encoder 200 and video decoder 300 may also include internal memories for functionally similar or equivalent purposes. Furthermore, memories 106, 120 may store encoded video data, e.g., output from video encoder 200 and input to video decoder 300. In some examples, portions of memories 106, 120 may be allocated as one or more video buffers, e.g., to store raw, decoded, and/or encoded video data.

Computer-readable medium 110 may represent any type of medium or device capable of transporting the encoded video data from source device 102 to destination device 116. In one example, computer-readable medium 110 represents a communication medium that enables source device 102 to transmit encoded video data directly to destination device 116 in real time, e.g., via a radio frequency network or a computer-based network. Output interface 108 may modulate a transmission signal including the encoded video data, and input interface 122 may demodulate the received transmission signal, according to a communication standard, such as a wireless communication protocol. The communication medium may comprise any wireless or wired communication medium, such as a radio frequency (RF) spectrum or one or more physical transmission lines. The communication medium may form part of a packet-based network, such as a local area network, a wide area network, or a global network such as the Internet. The communication medium may include routers, switches, base stations, or any other equipment that may be useful for facilitating communication from source device 102 to destination device 116.

In some examples, source device 102 may output encoded data from output interface 108 to storage device 112. Similarly, destination device 116 may access encoded data from storage device 112 via input interface 122. Storage device 112 may include any of a variety of distributed or locally accessed data storage media, such as a hard drive, Blu-ray discs, DVDs, CD-ROMs, flash memory, volatile or non-volatile memory, or any other suitable digital storage media for storing encoded video data.

In some examples, source device 102 may output encoded video data to file server 114 or another intermediate storage device that may store the encoded video data generated by source device 102. Destination device 116 may access the stored video data from file server 114 via streaming or download.

File server 114 may be any type of server device capable of storing encoded video data and transmitting that encoded video data to destination device 116. File server 114 may represent a web server (e.g., for a website), a server configured to provide a file transfer protocol service (such as the File Transfer Protocol (FTP) or the File Delivery over Unidirectional Transport (FLUTE) protocol), a content delivery network (CDN) device, a hypertext transfer protocol (HTTP) server, a Multimedia Broadcast Multicast Service (MBMS) or enhanced MBMS (eMBMS) server, and/or a network attached storage (NAS) device. File server 114 may, additionally or alternatively, implement one or more HTTP streaming protocols, such as Dynamic Adaptive Streaming over HTTP (DASH), HTTP Live Streaming (HLS), Real Time Streaming Protocol (RTSP), HTTP Dynamic Streaming, or the like.

Destination device 116 may access encoded video data from file server 114 through any standard data connection, including an Internet connection. This may include a wireless channel (e.g., a Wi-Fi connection), a wired connection (e.g., digital subscriber line (DSL), cable modem, etc.), or a combination of both that is suitable for accessing the encoded video data stored on file server 114. Input interface 122 may be configured to operate according to any one or more of the various protocols discussed above for retrieving or receiving media data from file server 114, or other such protocols for retrieving media data.

Output interface 108 and input interface 122 may represent wireless transmitters/receivers, modems, wired networking components (e.g., Ethernet cards), wireless communication components that operate according to any of a variety of IEEE 802.11 standards, or other physical components. In examples where output interface 108 and input interface 122 comprise wireless components, output interface 108 and input interface 122 may be configured to transfer data, such as encoded video data, according to a cellular communication standard, such as 4G, 4G-LTE (Long-Term Evolution), LTE Advanced, 5G, or the like. In some examples where output interface 108 comprises a wireless transmitter, output interface 108 and input interface 122 may be configured to transfer data, such as encoded video data, according to other wireless standards, such as an IEEE 802.11 specification, an IEEE 802.15 specification (e.g., ZigBee™), a Bluetooth™ standard, or the like. In some examples, source device 102 and/or destination device 116 may include respective system-on-a-chip (SoC) devices. For example, source device 102 may include an SoC device to perform the functionality attributed to video encoder 200 and/or output interface 108, and destination device 116 may include an SoC device to perform the functionality attributed to video decoder 300 and/or input interface 122.

The techniques of this disclosure may be applied to video coding in support of any of a variety of multimedia applications, such as over-the-air television broadcasts, cable television transmissions, satellite television transmissions, Internet streaming video transmissions (such as dynamic adaptive streaming over HTTP (DASH)), digital video that is encoded onto a data storage medium, decoding of digital video stored on a data storage medium, or other applications.

Input interface 122 of destination device 116 receives an encoded video bit stream from computer-readable medium 110 (e.g., a communication medium, storage device 112, file server 114, or the like). The encoded video bit stream may include signaling information defined by video encoder 200, which is also used by video decoder 300, such as syntax elements having values that describe characteristics and/or processing of video blocks or other coded units (e.g., slices, pictures, groups of pictures, sequences, or the like). Display device 118 displays decoded pictures of the decoded video data to a user. Display device 118 may represent any of a variety of display devices, such as a liquid crystal display (LCD), a plasma display, an organic light emitting diode (OLED) display, or another type of display device.

Although not shown in FIG. 1, in some examples, video encoder 200 and video decoder 300 may each be integrated with an audio encoder and/or an audio decoder, and may include appropriate MUX-DEMUX units, or other hardware and/or software, to handle multiplexed streams including both audio and video in a common data stream. If applicable, the MUX-DEMUX units may conform to the ITU H.223 multiplexer protocol or other protocols, such as the user datagram protocol (UDP).

Video encoder 200 and video decoder 300 each may be implemented as any of a variety of suitable encoder and/or decoder circuitry, such as one or more microprocessors, digital signal processors (DSPs), application-specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), discrete logic, software, hardware, firmware, or any combinations thereof. When the techniques are implemented partially in software, a device may store instructions for the software in a suitable, non-transitory computer-readable medium and execute the instructions in hardware using one or more processors to perform the techniques of this disclosure. Each of video encoder 200 and video decoder 300 may be included in one or more encoders or decoders, either of which may be integrated as part of a combined encoder/decoder (CODEC) in a respective device. A device including video encoder 200 and/or video decoder 300 may comprise an integrated circuit, a microprocessor, and/or a wireless communication device, such as a cellular telephone.

Video encoder 200 and video decoder 300 may operate according to a video coding standard, such as ITU-T H.265 (also referred to as High Efficiency Video Coding (HEVC)) or extensions thereto, such as the multi-view and/or scalable video coding extensions. Alternatively, video encoder 200 and video decoder 300 may operate according to other proprietary or industry standards, such as ITU-T H.266, also referred to as Versatile Video Coding (VVC). A draft of the VVC standard is described in Bross et al., "Versatile Video Coding (Draft 7)," Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 16th Meeting: Geneva, CH, 1-11 October 2019, JVET-P2001-v14 (hereinafter "VVC Draft 7"). Another draft of the VVC standard is described in Bross et al., "Versatile Video Coding (Draft 10)," Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 18th Meeting: by teleconference, 22 June-1 July 2020, JVET-S2001-v17 (hereinafter "VVC Draft 10"). The techniques of this disclosure, however, are not limited to any particular coding standard.

In general, video encoder 200 and video decoder 300 may perform block-based coding of pictures. The term "block" generally refers to a structure that includes data to be processed (e.g., encoded, decoded, or otherwise used in the encoding and/or decoding process). For example, a block may include a two-dimensional matrix of samples of luminance and/or chrominance data. In general, video encoder 200 and video decoder 300 may code video data represented in a YUV (e.g., Y, Cb, Cr) format. That is, rather than coding red, green, and blue (RGB) data for samples of a picture, video encoder 200 and video decoder 300 may code luminance and chrominance components, where the chrominance components may include both red-hue and blue-hue chrominance components. In some examples, video encoder 200 converts received RGB-formatted data to a YUV representation prior to encoding, and video decoder 300 converts the YUV representation to the RGB format. Alternatively, pre- and post-processing units (not shown) may perform these conversions.
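For reference, one common form of the RGB-to-YCbCr conversion mentioned above is sketched below using full-range BT.601 coefficients; an actual pre-processing unit may use a different matrix (e.g., BT.709) and typically also subsamples the chroma components.

```python
def rgb_to_ycbcr(r, g, b):
    """Convert one 8-bit RGB sample to full-range Y, Cb, Cr (BT.601 coefficients)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128.0
    cr = 0.5 * r - 0.418688 * g - 0.081312 * b + 128.0
    return y, cb, cr
```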

In general, this disclosure may refer to coding (e.g., encoding and decoding) of pictures to include the process of encoding or decoding data for the pictures. Similarly, this disclosure may refer to coding of blocks of a picture to include the process of encoding or decoding data for the blocks, e.g., prediction and/or residual coding. An encoded video bit stream generally includes a series of values for syntax elements representing coding decisions (e.g., coding modes) and the partitioning of pictures into blocks. Thus, references to coding a picture or a block should generally be understood as coding values for the syntax elements that form the picture or block.

HEVC定義了各種區塊,包括編碼單元(CU)、預測單元(PU)和變換單元(TU)。根據HEVC,視訊編碼裝置(coder)(諸如視訊編碼器200)根據四叉樹結構來將編碼樹單元(CTU)分割為CU。亦即,視訊編碼裝置將CTU和CU分割為四個相等的、不重疊的正方形,並且四叉樹的每個節點具有零個或四個子節點。沒有子節點的節點可以被稱為「葉節點」,並且此種葉節點的CU可以包括一或多個PU及/或一或多個TU。視訊編碼裝置可以進一步分割PU和TU。例如,在HEVC中,殘差四叉樹(RQT)表示對TU的分區。在HEVC中,PU表示訊框間預測資料,而TU表示殘差資料。經訊框內預測的CU包括訊框內預測資訊,諸如訊框內模式指示。HEVC defines various blocks, including coding units (CUs), prediction units (PUs), and transform units (TUs). According to HEVC, a video coding device (coder) (such as the video coder 200) divides a coding tree unit (CTU) into CUs according to a quadtree structure. That is, the video coding device divides the CTU and the CU into four equal, non-overlapping squares, and each node of the quadtree has zero or four child nodes. A node without child nodes may be referred to as a "leaf node", and the CU of such a leaf node may include one or more PUs and/or one or more TUs. The video coding device may further divide the PUs and TUs. For example, in HEVC, a residual quadtree (RQT) represents the partitioning of a TU. In HEVC, PU represents inter-frame prediction data, and TU represents residual data. An intra-frame predicted CU includes intra-frame prediction information, such as intra-frame mode indication.

作為另一實例,視訊編碼器200和視訊解碼器300可以被配置為根據VVC進行操作。根據VVC,視訊編碼裝置(諸如視訊編碼器200)將圖片分割為複數個編碼樹單元(CTU)。視訊編碼器200可以根據樹結構(諸如四叉樹-二叉樹(QTBT)結構或多類型樹(MTT)結構)分割CTU。QTBT結構去除了多種分割類型的概念,諸如在HEVC的CU、PU和TU之間的分隔。QTBT結構包括兩個級別:根據四叉樹分割而被分割的第一級別、以及根據二叉樹分割而被分割的第二級別。QTBT結構的根節點對應於CTU。二叉樹的葉節點對應於編碼單元(CU)。As another example, the video encoder 200 and the video decoder 300 may be configured to operate according to VVC. According to VVC, a video encoding device (such as the video encoder 200) partitions a picture into a plurality of coding tree units (CTUs). The video encoder 200 may partition the CTU according to a tree structure (such as a quadtree-binary tree (QTBT) structure or a multi-type tree (MTT) structure). The QTBT structure removes the concept of multiple partition types, such as the separation between CU, PU, and TU of HEVC. The QTBT structure includes two levels: a first level partitioned according to quadtree partitioning, and a second level partitioned according to binary tree partitioning. The root node of the QTBT structure corresponds to the CTU. The leaf nodes of the binary tree correspond to the coding units (CUs).

在MTT分割結構中,可以使用四叉樹(QT)分割、二叉樹(BT)分割以及一或多個類型的三叉樹(TT)(亦被稱為三元樹(TT))分割來對區塊進行分割。三叉樹或三元樹分割是其中區塊被分為三個子區塊的分割。在一些實例中,三叉樹或三元樹分割將區塊劃分為三個子區塊,而不通過中心劃分原始區塊。MTT中的分割類型(例如,QT、BT和TT)可以是對稱的或不對稱的。In the MTT partitioning structure, blocks can be partitioned using quadtree (QT) partitioning, binary tree (BT) partitioning, and one or more types of ternary tree (TT) (also known as ternary tree (TT)) partitioning. Trinary tree or ternary tree partitioning is a partitioning in which a block is divided into three sub-blocks. In some examples, the ternary tree or ternary tree partitioning divides the block into three sub-blocks without dividing the original block through the center. The partitioning types in MTT (e.g., QT, BT, and TT) can be symmetric or asymmetric.

在一些實例中,視訊編碼器200和視訊解碼器300可以使用單個QTBT或MTT結構來表示亮度分量和色度分量中的每一者,而在其他實例中,視訊編碼器200和視訊解碼器300可以使用兩個或更多個QTBT或MTT結構,諸如用於亮度分量的一個QTBT/MTT結構以及用於兩個色度分量的另一個QTBT/MTT結構(或者用於相應色度分量的兩個QTBT/MTT結構)。In some examples, the video encoder 200 and the video decoder 300 may use a single QTBT or MTT structure to represent each of the luma component and the chroma components, while in other examples, the video encoder 200 and the video decoder 300 may use two or more QTBT or MTT structures, such as one QTBT/MTT structure for the luma component and another QTBT/MTT structure for the two chroma components (or two QTBT/MTT structures for the corresponding chroma components).

視訊編碼器200和視訊解碼器300可以被配置為使用每HEVC的四叉樹分割、QTBT分割、MTT分割、或其他分割結構。為了解釋的目的,關於QTBT分割提供了本揭示內容的技術的描述。然而,應當理解的是,本揭示內容的技術亦可以應用於被配置為使用四叉樹分割或者亦使用其他類型的分割的視訊編碼裝置。The video encoder 200 and the video decoder 300 may be configured to use quadtree partitioning, QTBT partitioning, MTT partitioning, or other partitioning structures per HEVC. For purposes of explanation, a description of the techniques of the present disclosure is provided with respect to QTBT partitioning. However, it should be understood that the techniques of the present disclosure may also be applied to video encoding devices configured to use quadtree partitioning or other types of partitioning.

在一些實例中,CTU包括亮度取樣的編碼樹區塊(CTB)、具有三個取樣陣列的圖片的色度取樣的兩個對應的CTB、或者單色圖片或使用三個單獨的色彩平面和用於對取樣進行編碼的語法結構來編碼的圖片的取樣的CTB。CTB可以是取樣的NxN區塊(針對N的某個值),使得將分量劃分為CTB是一種分割。分量是來自以4:2:0、4:2:2或4:4:4的色彩格式組成圖片的三個陣列(一個亮度和兩個色度)之一的陣列或單個取樣,或者是以單色格式組成圖片的陣列或陣列的單個取樣。在一些實例中,編碼區塊是取樣的M×N區塊(針對M和N的某些值),使得將CTB劃分成編碼區塊是一種分割。In some examples, a CTU includes a coding tree block (CTB) of luma samples, two corresponding CTBs of chroma samples for a picture with three sample arrays, or a CTB of samples for a monochrome picture or a picture encoded using three separate color planes and a syntax structure for encoding the samples. A CTB can be an NxN block of samples (for some value of N) such that the division of components into CTBs is a partitioning. A component is an array or a single sample from one of the three arrays (one luma and two chroma) that make up a picture in 4:2:0, 4:2:2, or 4:4:4 color format, or an array or a single sample of an array that makes up a picture in monochrome format. In some examples, a coding block is an MxN block of samples (for some values of M and N), so that dividing the CTB into coding blocks is a partitioning.

可以以各種方式在圖片中對區塊(例如,CTU或CU)進行群組。作為一個實例,磚塊可以代表圖片中的特定瓦片(tile)內的CTU行的矩形區域。瓦片可以是圖片中的特定瓦片列和特定瓦片行內的CTU的矩形區域。瓦片列代表CTU的矩形區域,其具有等於圖片的高度的高度以及由語法元素(例如,諸如在圖片參數集中)指定的寬度。瓦片行代表CTU的矩形區域,其具有由語法元素指定的高度(例如,諸如在圖片參數集中)以及等於圖片的寬度的寬度。Blocks (e.g., CTUs or CUs) can be grouped in a picture in various ways. As an example, a tile can represent a rectangular area of a CTU row within a particular tile in a picture. A tile can be a rectangular area of a CTU within a particular tile column and a particular tile row in a picture. A tile column represents a rectangular area of a CTU with a height equal to the height of the picture and a width specified by a syntax element (e.g., such as in a picture parameter set). A tile row represents a rectangular area of a CTU with a height specified by a syntax element (e.g., such as in a picture parameter set) and a width equal to the width of the picture.

在一些實例中,可以將瓦片分割為多個磚塊,每個磚塊可以包括瓦片內的一或多個CTU行。沒有被分割為多個磚塊的瓦片亦可以被稱為磚塊。然而,作為瓦片的真實子集的磚塊可以不被稱為瓦片。In some examples, a tile may be partitioned into multiple bricks, each of which may include one or more CTU rows within the tile. Tiles that are not partitioned into multiple bricks may also be referred to as bricks. However, bricks that are true subsets of tiles may not be referred to as tiles.

圖片中的磚塊亦可以以切片來排列。切片可以是圖片的整數個磚塊,其可以唯一地被包含在單個網路抽象層(NAL)單元中。在一些實例中,切片包括多個完整的瓦片或者僅包括一個瓦片的完整磚塊的連續序列。The tiles in a picture can also be arranged in slices. A slice can be an integer number of tiles of a picture that can be uniquely contained in a single Network Abstraction Layer (NAL) unit. In some examples, a slice includes multiple complete tiles or a contiguous sequence of complete tiles of just one tile.

本揭示內容可以互換地使用「NxN」和「N乘N」來代表區塊(諸如CU或其他視訊區塊)在垂直和水平維度方面的取樣大小,例如,16x16個取樣或16乘16個取樣。通常,16x16 CU在垂直方向上將具有16個取樣(y=16),並且在水平方向上將具有16個取樣(x=16)。同樣地,NxN CU通常在垂直方向上具有N個取樣,並且在水平方向上具有N個取樣,其中N表示非負整數值。CU中的取樣可以按行和列來排列。此外,CU不一定需要在水平方向上具有與在垂直方向上相同的數量的取樣。例如,CU可以包括NxM個取樣,其中M不一定等於N。The present disclosure may use "NxN" and "N times N" interchangeably to represent the sample size of a block (such as a CU or other video block) in the vertical and horizontal dimensions, for example, 16x16 samples or 16 times 16 samples. Typically, a 16x16 CU will have 16 samples in the vertical direction (y=16) and 16 samples in the horizontal direction (x=16). Similarly, an NxN CU typically has N samples in the vertical direction and N samples in the horizontal direction, where N represents a non-negative integer value. The samples in a CU can be arranged in rows and columns. In addition, a CU does not necessarily need to have the same number of samples in the horizontal direction as in the vertical direction. For example, a CU may include NxM samples, where M is not necessarily equal to N.

視訊編碼器200對用於CU的表示預測及/或殘差資訊以及其他資訊的視訊資料進行編碼。預測資訊指示將如何預測CU以便形成用於CU的預測區塊。殘差資訊通常表示在編碼之前的CU的取樣與預測區塊之間的逐取樣差。The video encoder 200 encodes video data representing prediction and/or residual information and other information for a CU. The prediction information indicates how the CU will be predicted in order to form a prediction block for the CU. The residual information typically represents the sample-by-sample difference between the samples of the CU before encoding and the prediction block.

為了預測CU,視訊編碼器200通常可以經由訊框間預測或訊框內預測來形成用於CU的預測區塊。訊框間預測通常代表根據先前編碼的圖片的資料來預測CU,而訊框內預測通常代表根據同一圖片的先前編碼的資料來預測CU。為了執行訊框間預測,視訊編碼器200可以使用一或多個運動向量來產生預測區塊。視訊編碼器200通常可以執行運動搜尋,以辨識例如在CU與參考區塊之間的差異方面與CU緊密匹配的參考區塊。視訊編碼器200可以使用絕對差之和(SAD)、平方差之和(SSD)、平均絕對差(MAD)、均方差(MSD)、或其他此種差計算來計算差度量,以決定參考區塊是否與當前CU緊密匹配。在一些實例中,視訊編碼器200可以使用單向預測或雙向預測來預測當前CU。To predict a CU, the video encoder 200 may typically form a prediction block for the CU via inter-frame prediction or intra-frame prediction. Inter-frame prediction typically means predicting the CU based on data from a previously encoded picture, while intra-frame prediction typically means predicting the CU based on previously encoded data from the same picture. To perform inter-frame prediction, the video encoder 200 may use one or more motion vectors to generate the prediction block. The video encoder 200 may typically perform a motion search to identify a reference block that closely matches the CU, for example, in terms of the difference between the CU and the reference block. The video encoder 200 may calculate a difference metric using the sum of absolute differences (SAD), the sum of squared differences (SSD), the mean absolute difference (MAD), the mean square difference (MSD), or other such difference calculations to determine whether the reference block closely matches the current CU. In some examples, the video encoder 200 may predict the current CU using unidirectional prediction or bidirectional prediction.

VVC的一些實例亦提供仿射運動補償模式,其可以被認為是訊框間預測模式。在仿射運動補償模式下,視訊編碼器200可以決定表示非平移運動(諸如放大或縮小、旋轉、透視運動或其他不規則的運動類型)的兩個或更多個運動向量。Some implementations of VVC also provide an affine motion compensation mode, which can be considered an inter-frame prediction mode. In the affine motion compensation mode, the video encoder 200 can determine two or more motion vectors representing non-translational motion (such as zooming in or out, rotation, perspective motion, or other irregular motion types).

為了執行訊框內預測,視訊編碼器200可以選擇訊框內預測模式來產生預測區塊。VVC的一些實例提供六十七種訊框內預測模式,包括各種方向性模式、以及平面模式和DC模式。通常,視訊編碼器200選擇訊框內預測模式,訊框內預測模式描述要根據其來預測當前區塊(例如,CU的區塊)的取樣的、當前區塊的相鄰取樣。假定視訊編碼器200以光柵掃瞄次序(從左到右、從上到下)對CTU和CU進行編碼,則此種取樣通常可以是在與當前區塊相同的圖片中在當前區塊的上方、左上方或左側。To perform intra-frame prediction, the video encoder 200 can select an intra-frame prediction mode to generate a prediction block. Some examples of VVC provide sixty-seven intra-frame prediction modes, including various directional modes, as well as planar modes and DC modes. Typically, the video encoder 200 selects an intra-frame prediction mode that describes the adjacent samples of the current block according to which the samples of the current block (e.g., a block of CU) are to be predicted. Assuming that the video encoder 200 encodes CTUs and CUs in raster scan order (from left to right, from top to bottom), such samples can typically be above, above left, or to the left of the current block in the same picture as the current block.

視訊編碼器200對表示用於當前區塊的預測模式的資料進行編碼。例如,對於訊框間預測模式,視訊編碼器200可以對表示使用各種可用訊框間預測模式中的哪一種的資料以及用於對應模式的運動資訊進行編碼。對於單向或雙向訊框間預測,例如,視訊編碼器200可以使用先進運動向量預測(AMVP)或合併模式來對運動向量進行編碼。視訊編碼器200可以使用類似的模式來對用於仿射運動補償模式的運動向量進行編碼。The video encoder 200 encodes data indicating a prediction mode for the current block. For example, for an inter-frame prediction mode, the video encoder 200 may encode data indicating which of various available inter-frame prediction modes to use and motion information for the corresponding mode. For unidirectional or bidirectional inter-frame prediction, for example, the video encoder 200 may encode motion vectors using advanced motion vector prediction (AMVP) or merge mode. The video encoder 200 may use a similar mode to encode motion vectors for an affine motion compensation mode.

在諸如對區塊的訊框內預測或訊框間預測之類的預測之後,視訊編碼器200可以計算用於該區塊的殘差資料。殘差資料(諸如殘差區塊)表示在區塊與用於該區塊的預測區塊之間的逐取樣差,該預測區塊是使用對應的預測模式來形成的。視訊編碼器200可以將一或多個變換應用於殘差區塊,以在變換域中而非在取樣域中產生經變換的資料。例如,視訊編碼器200可以將離散餘弦變換(DCT)、整數變換、小波變換或概念上類似的變換應用於殘差視訊資料。另外,視訊編碼器200可以在第一變換之後應用二次變換,諸如模式相關的不可分離二次變換(MDNSST)、信號相關變換、Karhunen-Loeve變換(KLT)等。視訊編碼器200在應用一或多個變換之後產生變換係數。After a prediction, such as intra-frame prediction or inter-frame prediction for a block, the video encoder 200 may calculate the residual data for the block. The residual data, such as a residual block, represents the sample-by-sample difference between the block and a predicted block for the block, which is formed using a corresponding prediction mode. The video encoder 200 may apply one or more transforms to the residual block to produce transformed data in a transform domain rather than in a sample domain. For example, the video encoder 200 may apply a discrete cosine transform (DCT), an integer transform, a wavelet transform, or a conceptually similar transform to the residual video data. Additionally, the video encoder 200 may apply a secondary transform after the first transform, such as a mode-dependent non-separable secondary transform (MDNSST), a signal-dependent transform, a Karhunen-Loeve transform (KLT), etc. The video encoder 200 generates transform coefficients after applying one or more transforms.

如前述,在任何變換以產生變換係數之後,視訊編碼器200可以執行對變換係數的量化。量化通常代表如下的過程:在該過程中,對變換係數進行量化以可能減少用於表示變換係數的資料量,從而提供進一步的壓縮。藉由執行量化過程,視訊編碼器200可以減小與一些或所有變換係數相關聯的位元深度。例如,視訊編碼器200可以在量化期間將n 位元的值向下捨入為m 位元的值,其中n 大於m 。在一些實例中,為了執行量化,視訊編碼器200可以執行對要被量化的值的按位右移。As previously described, after any transformation to produce transform coefficients, the video encoder 200 may perform quantization of the transform coefficients. Quantization generally refers to a process in which the transform coefficients are quantized to possibly reduce the amount of data used to represent the transform coefficients, thereby providing further compression. By performing the quantization process, the video encoder 200 may reduce the bit depth associated with some or all transform coefficients. For example, the video encoder 200 may round down an n -bit value to an m -bit value during quantization, where n is greater than m . In some examples, to perform quantization, the video encoder 200 may perform a bitwise right shift of the value to be quantized.
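As a simple illustration of the rounding behaviour described above (a minimal sketch only; the exact scaling and rounding used by a real encoder are defined by the codec specification), the following shows how an n-bit value can be reduced to an m-bit value with a bitwise right shift:

# Minimal illustration of reducing an n-bit value to an m-bit value by a
# bitwise right shift (rounding down for non-negative values).
def round_down_to_m_bits(value: int, n: int, m: int) -> int:
    # Keep only the m most significant of the n bits, discarding the rest.
    assert 0 <= value < (1 << n) and m <= n
    return value >> (n - m)

# Example: the 10-bit value 723 reduced to 8 bits becomes 723 >> 2 = 180.
print(round_down_to_m_bits(723, 10, 8))  # 180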

在量化之後,視訊編碼器200可以掃瞄變換係數,從而從包括經量化的變換係數的二維矩陣產生一維向量。可以將掃瞄設計為將較高能量(並且因此較低頻率)的變換係數放在向量的前面,並且將較低能量(並且因此較高頻率)的變換係數放在向量的後面。在一些實例中,視訊編碼器200可以利用預定義的掃瞄次序來掃瞄經量化的變換係數以產生經序列化的向量,並且隨後對向量的經量化的變換係數進行熵編碼。在其他實例中,視訊編碼器200可以執行自我調整掃瞄。在掃瞄經量化的變換係數以形成一維向量之後,視訊編碼器200可以例如根據上下文自我調整二進位算術編碼(CABAC)來對一維向量進行熵編碼。視訊編碼器200亦可以對用於描述與經編碼的視訊資料相關聯的中繼資料的語法元素的值進行熵編碼,以供視訊解碼器300在對視訊資料進行解碼時使用。After quantization, the video encoder 200 can scan the transform coefficients to generate a one-dimensional vector from a two-dimensional matrix including the quantized transform coefficients. The scan can be designed to place the transform coefficients of higher energy (and therefore lower frequency) at the front of the vector and the transform coefficients of lower energy (and therefore higher frequency) at the back of the vector. In some examples, the video encoder 200 can scan the quantized transform coefficients using a predefined scan order to generate a serialized vector, and then entropy encode the quantized transform coefficients of the vector. In other examples, the video encoder 200 can perform self-adjusting scanning. After scanning the quantized transform coefficients to form a one-dimensional vector, the video encoder 200 may entropy encode the one-dimensional vector, for example, according to context-adaptive binary arithmetic coding (CABAC). The video encoder 200 may also entropy encode the values of syntax elements used to describe metadata associated with the encoded video data for use by the video decoder 300 when decoding the video data.

為了執行CABAC,視訊編碼器200可以將上下文模型內的上下文指派給要被發送的符號。上下文可以涉及例如符號的相鄰值是否為零值。概率決定可以是基於被指派給符號的上下文的。To perform CABAC, the video encoder 200 may assign a context within a context model to a symbol to be sent. The context may relate to, for example, whether the neighboring values of the symbol are zero values. The probability decision may be based on the context assigned to the symbol.

視訊編碼器200亦可以例如在圖片標頭、區塊標頭、切片標頭中為視訊解碼器300產生語法資料(諸如基於區塊的語法資料、基於圖片的語法資料和基於序列的語法資料)、或其他語法資料(諸如序列參數集(SPS)、圖片參數集(PPS)或視訊參數集(VPS))。同樣地,視訊解碼器300可以對此種語法資料進行解碼以決定如何解碼對應的視訊資料。The video encoder 200 may also generate syntax data (such as block-based syntax data, picture-based syntax data, and sequence-based syntax data) for the video decoder 300, for example, in a picture header, a block header, a slice header, or other syntax data (such as a sequence parameter set (SPS), a picture parameter set (PPS), or a video parameter set (VPS)). Similarly, the video decoder 300 may decode such syntax data to determine how to decode the corresponding video data.

以此種方式,視訊編碼器200可以產生位元串流,其包括經編碼的視訊資料,例如,描述將圖片分割為區塊(例如,CU)以及用於該等區塊的預測及/或殘差資訊的語法元素。最終,視訊解碼器300可以接收位元串流並且對經編碼的視訊資料進行解碼。In this way, the video encoder 200 can generate a bit stream that includes coded video data, such as syntax elements describing the partitioning of a picture into blocks (e.g., CUs) and prediction and/or residual information for the blocks. Finally, the video decoder 300 can receive the bit stream and decode the coded video data.

通常,視訊解碼器300執行與由視訊編碼器200執行的過程相反的過程,以對位元串流的經編碼的視訊資料進行解碼。例如,視訊解碼器300可以使用CABAC,以與視訊編碼器200的CABAC編碼過程基本上類似的、但是相反的方式來對用於位元元流的語法元素的值進行解碼。語法元素可以定義用於將圖片分割為CTU、以及根據對應的分割結構(諸如QTBT結構)對每個CTU進行分割以定義CTU的CU的分割資訊。語法元素亦可以定義用於視訊資料的區塊(例如,CU)的預測和殘差資訊。In general, the video decoder 300 performs a process that is the reverse of the process performed by the video encoder 200 to decode the encoded video data of the bitstream. For example, the video decoder 300 may use CABAC to decode the values of syntax elements for the bitstream in a manner substantially similar to, but reversed from, the CABAC encoding process of the video encoder 200. The syntax elements may define partitioning information for partitioning a picture into CTUs and partitioning each CTU according to a corresponding partitioning structure (e.g., a QTBT structure) to define CUs of the CTU. The syntax elements may also define prediction and residual information for a block (e.g., a CU) of video data.

殘差資訊可以由例如經量化的變換係數來表示。視訊解碼器300可以對區塊的經量化的變換係數進行逆量化和逆變換以重現用於該區塊的殘差區塊。視訊解碼器300使用經信號通知的預測模式(訊框內預測或訊框間預測)和相關的預測資訊(例如,用於訊框間預測的運動資訊)來形成用於該區塊的預測區塊。視訊解碼器300隨後可以對預測區塊和殘差區塊(在逐個取樣的基礎上)進行組合以重現原始區塊。視訊解碼器300可以執行額外處理,諸如執行去區塊過程以減少沿著區塊的邊界的視覺偽影。The residual information may be represented by, for example, quantized transform coefficients. The video decoder 300 may inverse quantize and inverse transform the quantized transform coefficients of the block to reproduce a residual block for the block. The video decoder 300 uses the signaled prediction mode (intra-frame prediction or inter-frame prediction) and associated prediction information (e.g., motion information for inter-frame prediction) to form a prediction block for the block. The video decoder 300 may then combine the prediction block and the residual block (on a sample-by-sample basis) to reproduce the original block. The video decoder 300 may perform additional processing, such as performing a deblocking process to reduce visual artifacts along block boundaries.

概括而言,本揭示內容可能涉及「用信號通知」某些資訊(諸如語法元素)。術語「用信號通知」通常可以代表對用於語法元素的值及/或用於對經編碼的視訊資料進行解碼的其他資料的通訊。亦即,視訊編碼器200可以在位元串流中用信號通知用於語法元素的值。通常,用信號通知代表在位元串流中產生值。如前述,源設備102可以基本上即時地或不是即時地(諸如可能在將語法元素儲存到儲存設備112以供目的地設備116稍後取回時發生)將位元串流傳輸到目的地設備116。In general, the present disclosure may involve "signaling" certain information (such as syntax elements). The term "signaling" can generally refer to the communication of values for syntax elements and/or other data used to decode the encoded video data. That is, the video encoder 200 can signal values for syntax elements in a bit stream. Generally, signaling refers to generating values in the bit stream. As mentioned above, the source device 102 can transmit the bit stream to the destination device 116 substantially in real time or not in real time (such as may occur when storing syntax elements to the storage device 112 for later retrieval by the destination device 116).

圖2A和圖2B是示出示例四叉樹二叉樹(QTBT)結構130以及對應的編碼樹單元(CTU)132的概念圖。實線表示四叉樹分離,而虛線指示二叉樹分離。在二叉樹的每個分離(亦即,非葉)節點中,用信號通知一個旗標以指示使用哪種分離類型(亦即,水平或垂直),其中在該實例中,0指示水平分離,而1指示垂直分離。對於四叉樹分離,由於四叉樹節點將區塊水平地並且垂直地分離為具有相等大小的4個子區塊,因此無需指示分離類型。因此,視訊編碼器200可以對以下各項進行編碼,而視訊解碼器300可以對以下各項進行解碼:用於QTBT結構130的區域樹級別(亦即,實線)的語法元素(諸如分離資訊)、以及用於QTBT結構130的預測樹級別(亦即,虛線)的語法元素(諸如分離資訊)。視訊編碼器200可以對用於由QTBT結構130的終端葉節點表示的CU的視訊資料(諸如預測和變換資料)進行編碼,而視訊解碼器300可以對視訊資料進行解碼。可以使用單樹分割或雙樹分割來對CTU進行分割。在單樹分割的情況下,CTU的色度分量和CTU的亮度分量具有相同的分割結構。在雙樹分割的情況下,CTU的色度分量和CTU的亮度分量可以具有不同的分割結構。2A and 2B are conceptual diagrams showing an example quadtree binary tree (QTBT) structure 130 and a corresponding coding tree unit (CTU) 132. Solid lines represent quadtree separations, while dashed lines indicate binary tree separations. In each separation (i.e., non-leaf) node of the binary tree, a flag is signaled to indicate which separation type (i.e., horizontal or vertical) is used, where in this example, 0 indicates horizontal separation and 1 indicates vertical separation. For quadtree separations, since the quadtree node separates the block horizontally and vertically into 4 sub-blocks of equal size, there is no need to indicate the separation type. Thus, the video encoder 200 may encode, and the video decoder 300 may decode, syntax elements (such as separation information) for the region tree level (i.e., solid line) of the QTBT structure 130, and syntax elements (such as separation information) for the prediction tree level (i.e., dashed line) of the QTBT structure 130. The video encoder 200 may encode, and the video decoder 300 may decode, video data (such as prediction and transform data) for a CU represented by a terminal leaf node of the QTBT structure 130. The CTU may be partitioned using single-tree partitioning or dual-tree partitioning. In the case of single-tree partitioning, the chrominance components of the CTU and the luma components of the CTU have the same partitioning structure. In the case of dual-tree partitioning, the chrominance components of the CTU and the luma components of the CTU may have different partitioning structures.

通常,圖2B的CTU 132可以與定義與QTBT結構130的處於第一和第二級別的節點相對應的區塊的大小的參數相關聯。該等參數可以包括CTU大小(表示取樣中的CTU 132的大小)、最小四叉樹大小(MinQTSize,其表示最小允許四叉樹葉節點大小)、最大二叉樹大小(MaxBTSize,其表示最大允許二叉樹根節點大小)、最大二叉樹深度(MaxBTDepth,其表示最大允許二叉樹深度)、以及最小二叉樹大小(MinBTSize,其表示最小允許二叉樹葉節點大小)。2B may be associated with parameters defining the size of blocks corresponding to nodes at the first and second levels of the QTBT structure 130. The parameters may include a CTU size (indicating the size of the CTU 132 in a sample), a minimum quadtree size (MinQTSize, indicating the minimum allowed quadtree leaf node size), a maximum binary tree size (MaxBTSize, indicating the maximum allowed binary tree root node size), a maximum binary tree depth (MaxBTDepth, indicating the maximum allowed binary tree depth), and a minimum binary tree size (MinBTSize, indicating the minimum allowed binary tree leaf node size).

QTBT結構的與CTU相對應的根節點可以在QTBT結構的第一級別處具有四個子節點,每個子節點可以是根據四叉樹分割來分割的。亦即,第一級別的節點是葉節點(沒有子節點)或者具有四個子節點。QTBT結構130的實例將此種節點表示為包括具有實線分支的父節點和子節點。若第一級別的節點不大於最大允許二叉樹根節點大小(MaxBTSize),則可以藉由相應的二叉樹進一步對該等節點進行分割。可以對一個節點的二叉樹分離進行反覆運算,直到從分離產生的節點達到最小允許二叉樹葉節點大小(MinBTSize)或最大允許二叉樹深度(MaxBTDepth)。QTBT結構130的實例將此種節點表示為具有虛線分支。二叉樹葉節點被稱為編碼單元(CU),其用於預測(例如,圖片內或圖片間預測)和變換,而不進行任何進一步分割。如上所論述的,CU亦可以被稱為「視訊區塊」或「區塊」。The root node of the QTBT structure corresponding to the CTU can have four child nodes at the first level of the QTBT structure, and each child node can be split according to the quadtree partitioning. That is, the nodes at the first level are leaf nodes (without child nodes) or have four child nodes. The instance of the QTBT structure 130 represents such a node as including a parent node and child nodes with solid line branches. If the nodes at the first level are not larger than the maximum allowed binary tree root node size (MaxBTSize), the nodes can be further partitioned by the corresponding binary tree. The binary tree separation of a node can be repeatedly calculated until the node resulting from the separation reaches the minimum allowed binary tree leaf node size (MinBTSize) or the maximum allowed binary tree depth (MaxBTDepth). The example of QTBT structure 130 represents such nodes as having dashed branches. The binary leaf nodes are called coding units (CUs), which are used for prediction (e.g., intra-picture or inter-picture prediction) and transformation without any further partitioning. As discussed above, a CU may also be referred to as a "video block" or "block".

在QTBT分割結構的一個實例中,CTU大小被設置為128x128(亮度取樣和兩個對應的64x64色度取樣),MinQTSize被設置為16x16,MaxBTSize被設置為64x64,MinBTSize(對於寬度和高度兩者)被設置為4,並且MaxBTDepth被設置為4。首先對CTU應用四叉樹分割以產生四叉樹葉節點。四叉樹葉節點可以具有從16x16(亦即,MinQTSize)到128x128(亦即,CTU大小)的大小。若四叉樹葉節點為128x128,則由於該大小超過MaxBTSize(亦即,在該實例中為64x64),因此葉四叉樹節點將不被二叉樹進一步分離。否則,四叉樹葉節點將被二叉樹進一步分割。因此,四叉樹葉節點亦是用於二叉樹的根節點,並且具有為0的二叉樹深度。當二叉樹深度達到MaxBTDepth(在該實例中為4)時,不允許進一步分離。具有等於MinBTSize(在該實例中為4)的寬度的二叉樹節點意味著不允許針對該二叉樹節點進行進一步的垂直分離(亦即,對寬度的劃分)。類似地,具有等於MinBTSize的高度的二叉樹節點意味著不允許針對該二叉樹節點進行進一步的水平分離(亦即,對高度的劃分)。如前述,二叉樹的葉節點被稱為CU,並且根據預測和變換而被進一步處理,而無需進一步分割。In an example of a QTBT partitioning structure, the CTU size is set to 128x128 (luminance sample and two corresponding 64x64 chroma samples), MinQTSize is set to 16x16, MaxBTSize is set to 64x64, MinBTSize (for both width and height) is set to 4, and MaxBTDepth is set to 4. Quadtree partitioning is first applied to the CTU to produce quadtree leaf nodes. Quadtree leaf nodes can have sizes from 16x16 (i.e., MinQTSize) to 128x128 (i.e., CTU size). If the quadtree leaf node is 128x128, then since the size exceeds MaxBTSize (i.e., 64x64 in this example), the leaf quadtree node will not be further separated by the binary tree. Otherwise, the quadtree leaf node will be further split by the binary tree. Therefore, the quadtree leaf node is also the root node for the binary tree and has a binary tree depth of 0. When the binary tree depth reaches MaxBTDepth (4 in this example), no further separation is allowed. A binary tree node with a width equal to MinBTSize (4 in this example) means that no further vertical separation (i.e., division of width) is allowed for the binary tree node. Similarly, a binary tree node with a height equal to MinBTSize means that no further horizontal separation (i.e., division of height) is allowed for the binary tree node. As mentioned above, the leaf nodes of the binary tree are called CUs and are further processed based on prediction and transformation without further partitioning.

在HEVC螢幕內容編碼(SCC)擴展中,採用ACT來將預測殘差從一個色彩空間自我調整地轉換到第二色彩空間,諸如YCgCo空間。藉由用信號通知一個ACT旗標,可以自我調整地選擇兩個色彩空間。例如,等於一的旗標可以指示殘差是在YCgCo空間中編碼的。否則,等於0的旗標可以指示殘差是在原始色彩空間中編碼的。在VVC中採用了類似的技術,其中在殘差域中執行色彩空間轉換。具體地說,在用於將殘差從YCgCo域轉換回原始域的逆變換之後,引入了關於圖3和圖4更詳細地描述的一個額外的解碼單元(即逆ACT單元)。In the HEVC Screen Content Coding (SCC) extension, ACT is used to self-adjust the predicted residue from one color space to a second color space, such as YCgCo space. By signaling an ACT flag, the two color spaces can be selected self-adjustingly. For example, a flag equal to one may indicate that the residue is encoded in the YCgCo space. Otherwise, a flag equal to 0 may indicate that the residue is encoded in the original color space. A similar technique is used in VVC, where the color space conversion is performed in the residue domain. Specifically, after the inverse transform for converting the residue from the YCgCo domain back to the original domain, an additional decoding unit (i.e., an inverse ACT unit) is introduced, which is described in more detail with respect to Figures 3 and 4.

The forward and inverse YCgCo colour transform matrices are as follows:
[Equation: forward and inverse YCgCo colour transform matrices]
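For reference, the sketch below implements the widely used (lossy) YCgCo forward and inverse transform pair on a per-sample basis. It is written under the assumption of the conventional floating-point YCgCo coefficients and is an illustration of the general transform only, not a verbatim copy of the fixed-point matrices defined in the HEVC SCC or VVC specifications.

# Sketch of the standard (lossy) YCgCo colour transform pair, assuming the
# conventional coefficients; real codecs use fixed-point integer variants.
def forward_ycgco(r: float, g: float, b: float):
    y  =  0.25 * r + 0.5 * g + 0.25 * b
    cg = -0.25 * r + 0.5 * g - 0.25 * b
    co =  0.5  * r            - 0.5 * b
    return y, cg, co

def inverse_ycgco(y: float, cg: float, co: float):
    g = y + cg
    r = y - cg + co
    b = y - cg - co
    return r, g, b

# Round-trip check on an arbitrary residual triple.
r, g, b = 30.0, -12.0, 5.0
assert inverse_ycgco(*forward_ycgco(r, g, b)) == (r, g, b)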

另外,為了補償殘留信號在色彩變換之前和之後的動態範圍變化,將(-5, -5, -3)的QP調整應用於變換殘差。亦即,可以針對利用ACT編碼的區塊來調整用於量化組的QP。採用使得在視訊編碼器200處應用的ACT可以被視訊解碼器300反向的方式來實現ACT。為了補償殘留信號在色彩變換之前和之後的動態範圍變化,可以藉由向不同的色彩分量添加QP偏移來將QP調整應用於變換殘差。亦即,在第二色彩空間中執行量化或逆量化之前,修改在第一色彩空間中使用的QP。可以將QP偏移作為高級語法來用信號通知。In addition, in order to compensate for the dynamic range changes of the residual signal before and after the color change, a QP adjustment of (-5, -5, -3) is applied to the transform residue. That is, the QP used for the quantization group can be adjusted for the block encoded with ACT. ACT is implemented in a manner such that the ACT applied at the video encoder 200 can be reversed by the video decoder 300. In order to compensate for the dynamic range changes of the residual signal before and after the color change, the QP adjustment can be applied to the transform residue by adding QP offsets to different color components. That is, the QP used in the first color space is modified before quantization or inverse quantization is performed in the second color space. The QP offset can be signaled as a high-level syntax.

In HEVC, the syntax element residual_adaptive_colour_transform_enabled_flag is signaled as part of the PPS to indicate whether ACT is enabled. If residual_adaptive_colour_transform_enabled_flag is true, the syntax elements pps_act_y_qp_offset_plus5, pps_act_cb_qp_offset_plus5, and pps_act_cr_qp_offset_plus3 for the QP offsets for ACT are signaled as part of the PPS. When residual_adaptive_colour_transform_enabled_flag is true, pps_slice_act_qp_offsets_present_flag is also signaled, to indicate whether slice-level QP offsets for ACT are present in the slice header. If pps_slice_act_qp_offsets_present_flag is true, the syntax elements slice_act_y_qp_offset, slice_act_cb_qp_offset, and slice_act_cr_qp_offset are signaled in the slice header. The semantics of the QP offsets for ACT at the PPS and slice header are as follows:
pps_act_y_qp_offset_plus5, pps_act_cb_qp_offset_plus5, and pps_act_cr_qp_offset_plus3 are used to determine the offsets that are applied to the quantization parameter values qP derived in clause 8.6.2 for the luma, Cb, and Cr components, respectively, when tu_residual_act_flag[ xTbY ][ yTbY ] is equal to 1. When not present, the values of pps_act_y_qp_offset_plus5, pps_act_cb_qp_offset_plus5, and pps_act_cr_qp_offset_plus3 are inferred to be equal to 0.
The variable PpsActQpOffsetY is set equal to pps_act_y_qp_offset_plus5 - 5.
The variable PpsActQpOffsetCb is set equal to pps_act_cb_qp_offset_plus5 - 5.
The variable PpsActQpOffsetCr is set equal to pps_act_cr_qp_offset_plus3 - 3.
slice_act_y_qp_offset, slice_act_cb_qp_offset, and slice_act_cr_qp_offset specify offsets to the quantization parameter values qP derived in clause 8.6.2 for the luma, Cb, and Cr components, respectively. The values of slice_act_y_qp_offset, slice_act_cb_qp_offset, and slice_act_cr_qp_offset shall be in the range of -12 to +12, inclusive. When not present, the values of slice_act_y_qp_offset, slice_act_cb_qp_offset, and slice_act_cr_qp_offset are inferred to be equal to 0. The value of PpsActQpOffsetY + slice_act_y_qp_offset shall be in the range of -12 to +12, inclusive. The value of PpsActQpOffsetCb + slice_act_cb_qp_offset shall be in the range of -12 to +12, inclusive. The value of PpsActQpOffsetCr + slice_act_cr_qp_offset shall be in the range of -12 to +12, inclusive.
If ACT is applied to a block, the QP for the luma block is derived by adding PpsActQpOffsetY + slice_act_y_qp_offset, the QP for the Cb block is derived by adding PpsActQpOffsetCb + slice_act_cb_qp_offset, and the QP for the Cr block is derived by adding PpsActQpOffsetCr + slice_act_cr_qp_offset.
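A minimal sketch of how the per-component ACT QP offsets could be derived from the PPS- and slice-level syntax elements, following the semantics above; the function and variable names are illustrative only.

# Illustrative derivation of the per-component ACT QP offsets from the
# PPS-level *_plus5 / *_plus3 syntax elements and the slice-level offsets.
def derive_act_qp_offsets(pps_act_y_qp_offset_plus5=0,
                          pps_act_cb_qp_offset_plus5=0,
                          pps_act_cr_qp_offset_plus3=0,
                          slice_act_y_qp_offset=0,
                          slice_act_cb_qp_offset=0,
                          slice_act_cr_qp_offset=0):
    pps_act_qp_offset_y  = pps_act_y_qp_offset_plus5 - 5
    pps_act_qp_offset_cb = pps_act_cb_qp_offset_plus5 - 5
    pps_act_qp_offset_cr = pps_act_cr_qp_offset_plus3 - 3
    offsets = (pps_act_qp_offset_y  + slice_act_y_qp_offset,
               pps_act_qp_offset_cb + slice_act_cb_qp_offset,
               pps_act_qp_offset_cr + slice_act_cr_qp_offset)
    # Bitstream conformance: each combined offset must lie in [-12, +12].
    assert all(-12 <= o <= 12 for o in offsets)
    return offsets

# With all syntax elements absent (inferred to be 0), the combined offsets
# reduce to the default ACT adjustment (-5, -5, -3).
print(derive_act_qp_offsets())  # (-5, -5, -3)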

根據本揭示內容的技術,視訊編碼器200和視訊解碼器300可以被配置為執行QP偏移的靈活訊號傳遞。According to the techniques of the present disclosure, video encoder 200 and video decoder 300 may be configured to perform flexible signaling of QP offsets.

根據一種技術,在切片標頭中可以存在用於ACT的QP偏移訊號傳遞。旗標pps_slice_act_qp_offsets_present_flag可以用於控制在切片標頭處是否存在用於ACT的QP偏移。當啟用ACT時,可以在圖片參數集處用信號通知pps_slice_act_qp_offsets_present_flag。然而,若當前切片使用一種以上的區塊分割樹結構,則禁用(跳過)在切片標頭處用於ACT的QP偏移訊號傳遞。According to one technique, there may be QP offset signaling for ACT in a slice header. The flag pps_slice_act_qp_offsets_present_flag may be used to control whether there is QP offset for ACT at the slice header. When ACT is enabled, pps_slice_act_qp_offsets_present_flag may be signaled at the picture parameter set. However, if the current slice uses more than one block partitioning tree structure, QP offset signaling for ACT at the slice header is disabled (skipped).

In the case of VVC, where qtbtt_dual_tree_intra_flag is signaled to indicate whether the I slices in a sequence use the dual-tree block partitioning structure, the QP offset signaling for ACT at the slice header is as follows:

if( pps_slice_act_qp_offsets_present_flag  &&  !( slice_type = = I  &&  qtbtt_dual_tree_intra_flag ) ) {
    slice_act_y_qp_offset      se(v)
    slice_act_cb_qp_offset     se(v)
    slice_act_cr_qp_offset     se(v)
}

The syntax elements slice_act_y_qp_offset, slice_act_cb_qp_offset, and slice_act_cr_qp_offset for the ACT QP offsets are signaled only if both of the following are true:
1) pps_slice_act_qp_offsets_present_flag is true, meaning, for example, that signaling of the ACT QP offsets at the slice level is indicated at the PPS level.
2) slice_type is not I, or qtbtt_dual_tree_intra_flag is false, meaning, for example, that the slice is not an intra-predicted slice or that dual-tree partitioning is not enabled.
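The parsing condition above can be summarized with a small sketch; the names follow the VVC-style syntax elements discussed here, and the helper itself is hypothetical.

# Sketch of the condition under which the slice-level ACT QP offsets are
# parsed, per the two requirements listed above.
def slice_act_qp_offsets_present(pps_slice_act_qp_offsets_present_flag: bool,
                                 slice_type: str,
                                 qtbtt_dual_tree_intra_flag: bool) -> bool:
    # Both conditions must hold for the offsets to be signaled.
    return (pps_slice_act_qp_offsets_present_flag
            and not (slice_type == "I" and qtbtt_dual_tree_intra_flag))

# An intra (I) slice with dual-tree partitioning enabled skips the signaling,
# even when the PPS-level flag is set.
print(slice_act_qp_offsets_present(True, "I", True))   # False
print(slice_act_qp_offsets_present(True, "B", True))   # True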

According to some techniques of this disclosure, the video encoder 200 and the video decoder 300 may jointly signal the QP offsets for ACT for the colour components at the slice header, for example as follows:

Joint QP offset for the Y and Cb components:
if( pps_slice_act_qp_offsets_present_flag  &&  !( slice_type = = I  &&  qtbtt_dual_tree_intra_flag ) ) {
    slice_act_y_cb_qp_offset    se(v)
    slice_act_cr_qp_offset      se(v)
}

Joint QP offset for the Y and Cr components:
if( pps_slice_act_qp_offsets_present_flag  &&  !( slice_type = = I  &&  qtbtt_dual_tree_intra_flag ) ) {
    slice_act_y_cr_qp_offset    se(v)
    slice_act_cb_qp_offset      se(v)
}

Joint QP offset for the Cb and Cr components:
if( pps_slice_act_qp_offsets_present_flag  &&  !( slice_type = = I  &&  qtbtt_dual_tree_intra_flag ) ) {
    slice_act_y_qp_offset       se(v)
    slice_act_cb_cr_qp_offset   se(v)
}

Joint QP offset for all colour components:
if( pps_slice_act_qp_offsets_present_flag  &&  !( slice_type = = I  &&  qtbtt_dual_tree_intra_flag ) ) {
    slice_act_qp_offset         se(v)
}

根據本揭示內容的一些技術,切片標頭中的用於ACT的QP偏移訊號傳遞可以不相對於VVC草案7而進行修改,但是可以被約束,使得若當前切片使用一種以上的區塊分割樹結構,例如,若當前切片使用單樹分割和雙樹分割兩者,則QP偏移為零。According to some techniques of the present disclosure, the QP offset signaling for ACT in the slice header may not be modified relative to VVC Draft 7, but may be constrained such that the QP offset is zero if the current slice uses more than one block partitioning tree structure, for example, if the current slice uses both single-tree partitioning and double-tree partitioning.

根據本揭示內容的一些技術,在無損編碼的情況下(例如,對於其中在HEVC中變換旁路旗標為1或者其中在VVC中QP=4的編碼場景),不用信號通知任何用於ACT的QP偏移。具體地說,當對CU進行無損編碼時,並不針對每個CU皆使用ACT。當對CU進行無損編碼時,在位元串流中可以不存在下文介紹的CU級別旗標和編碼單元的量化組(QGCU)級別旗標。According to some techniques of the present disclosure, in the case of lossless coding (e.g., for coding scenarios where the transform bypass flag is 1 in HEVC or where QP=4 in VVC), no QP offset for ACT is signaled. Specifically, when a CU is losslessly coded, ACT is not used for each CU. When a CU is losslessly coded, the CU level flag and the quantization group of coding units (QGCU) level flag described below may not be present in the bitstream.

根據本揭示內容的技術,視訊編碼器200和視訊解碼器300可以被配置為在QGCU級別對用於ACT的啟用旗標進行編碼和解碼。According to the techniques of the present disclosure, the video encoder 200 and the video decoder 300 may be configured to encode and decode an enable flag for an ACT at the QGCU level.

According to some techniques of this disclosure, whether ACT is enabled or disabled may be signaled at the QGCU level, meaning that ACT may be applied on a QGCU basis. Once ACT is applied, the transformed residual coefficients (when a transform is performed), the residual samples (when transform skip is performed), and the palette samples (such as the palette colours and escape samples when palette mode is used) within the QGCU may all be coded in the colour-transform domain. Furthermore, a CU-level flag for enabling ACT may not be needed. According to some techniques of this disclosure, because ACT is a QGCU-level coding tool, there is no CU-level flag for switching ACT on/off, whereas according to other techniques of this disclosure the CU-level flag is still present, in order to retain the flexibility of switching ACT on/off at a finer granularity (such as the CU or TU level).

According to the techniques of this disclosure, the video encoder 200 and the video decoder 300 may be configured to perform QP clipping for ACT.

To ensure that the QP values used for transformed residual samples, transform-skip residuals, and palette coding never go out of range, some techniques of this disclosure include clipping the resulting QP values after they have been adjusted for ACT. Without loss of generality, let Δy, Δcb, and Δcr denote the QP adjustment values for the three colour components (i.e., the slice-header QP offset plus the picture-level QP offset when ACT is enabled for the current residual samples, and 0 otherwise):
QP'y  = Clip3( 0, QPmax + QpBdOffset, QPy  + QpBdOffset + Δy ),
QP'cb = Clip3( 0, QPmax + QpBdOffset, QPcb + QpBdOffset + Δcb ),
QP'cr = Clip3( 0, QPmax + QpBdOffset, QPcr + QpBdOffset + Δcr ),
where QPmax is the maximum QP value supported by the video coding standard (e.g., 51 for HEVC and 63 for VVC), QpBdOffset = 6 * (internal bit depth - 8), and the function Clip3( a, b, c ) clips the value of c to the range from a to b, inclusive.
Note that, according to some techniques of this disclosure, for video codecs that do not support flexible QP signaling for ACT, the corresponding values of Δy, Δcb, and Δcr are predetermined. In that case, the QP offset values may be configured as follows:
Δy  = -5,
Δcb = -5,
Δcr = -3.
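A minimal sketch of the clipping described above, assuming the fixed default offsets (-5, -5, -3) when flexible signaling is not available; the helper names are illustrative.

# Sketch of the ACT QP clipping described above.  Clip3(a, b, c) clips c to
# the range [a, b]; QpBdOffset = 6 * (internal bit depth - 8).
def clip3(a, b, c):
    return max(a, min(b, c))

def act_clipped_qps(qp_y, qp_cb, qp_cr,
                    delta=(-5, -5, -3),      # (dy, dcb, dcr), default ACT offsets
                    qp_max=63,               # 63 for VVC, 51 for HEVC
                    internal_bit_depth=10):
    qp_bd_offset = 6 * (internal_bit_depth - 8)
    dy, dcb, dcr = delta
    return (clip3(0, qp_max + qp_bd_offset, qp_y  + qp_bd_offset + dy),
            clip3(0, qp_max + qp_bd_offset, qp_cb + qp_bd_offset + dcb),
            clip3(0, qp_max + qp_bd_offset, qp_cr + qp_bd_offset + dcr))

# Example: with an 8-bit internal depth and base QP 0, the clipping keeps the
# adjusted QPs from going negative.
print(act_clipped_qps(0, 0, 0, internal_bit_depth=8))  # (0, 0, 0)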

According to some techniques of this disclosure, QP clipping may be combined with QGCU-level signaling, as described above with respect to the ACT enable flag at the QGCU level. The delta QP range (i.e., the difference between the original QP and the minimum allowed QP) may be adjusted based on the value of Δy. Since the minimum value of the base QP is 0, the minimum value of QPy may be derived as follows:
QPy_min + 6 * (internal bit depth - 8) + Δy = 0,
and therefore:
QPy_min = -6 * (internal bit depth - 8) - Δy.
The delta QP between the original QP (i.e., QPy) and the minimum allowed QP (i.e., QPy_min) can therefore be derived as:
ΔQP = QPy_min - QPy = -6 * (internal bit depth - 8) - Δy - QPy.

According to the techniques of this disclosure, the video encoder 200 and the video decoder 300 may be configured to perform QP clipping for ACT when transform coding is skipped.

According to some techniques of this disclosure, when transform coding is not used, the minimum QP value must not go as low as 0, in order to prevent signal expansion. The QP values derived above (i.e., QP'y, QP'cb, QP'cr) may be further adjusted as follows:
QP'y  = Max( QP'y,  M + 6 * (internal bit depth - input bit depth) ),
QP'cb = Max( QP'cb, M + 6 * (internal bit depth - input bit depth) ),
QP'cr = Max( QP'cr, M + 6 * (internal bit depth - input bit depth) ),
or, more precisely, in a self-contained form:
QP'y  = Clip3( M + 6 * (internal bit depth - input bit depth), QPmax + QpBdOffset, QPy  + QpBdOffset + Δy ),
QP'cb = Clip3( M + 6 * (internal bit depth - input bit depth), QPmax + QpBdOffset, QPcb + QpBdOffset + Δcb ),
QP'cr = Clip3( M + 6 * (internal bit depth - input bit depth), QPmax + QpBdOffset, QPcr + QpBdOffset + Δcr ),
where M is the QP value that corresponds to a quantization step size equal to (or closest to) 1 in the video codec (e.g., 4 in VVC).
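For the transform-skip case, the lower clipping bound becomes M + 6 * (internal bit depth - input bit depth) rather than 0. A sketch, assuming M = 4 as in VVC; the helper name is illustrative.

# Sketch of the transform-skip variant of the ACT QP clipping: the lower
# clipping bound is raised from 0 to M + 6 * (internalBD - inputBD).
def act_clipped_qp_transform_skip(qp, delta, qp_max=63,
                                  internal_bit_depth=10, input_bit_depth=8,
                                  m=4):
    qp_bd_offset = 6 * (internal_bit_depth - 8)
    lower = m + 6 * (internal_bit_depth - input_bit_depth)
    return max(lower, min(qp_max + qp_bd_offset, qp + qp_bd_offset + delta))

# Example: a luma QP of 4 with the default -5 offset is held at the
# transform-skip floor of 4 + 12 = 16 for 10-bit internal / 8-bit input video.
print(act_clipped_qp_transform_skip(4, -5))  # 16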

要注意的是,該等QP值(亦即,QP’y、QP’cb、QP’cr)亦應用於調色板編碼CU。Note that these QP values (i.e., QP’y, QP’cb, QP’cr) also apply to palette coded CUs.

According to some techniques of this disclosure, QP clipping may be combined with QGCU-level signaling, as introduced above with respect to the ACT enable flag at the QGCU level. The delta QP range (i.e., the difference between the original QP and the minimum allowed QP) may be adjusted based on the value of Δy. Since the minimum value of the base QP is M (e.g., 4), the minimum value of QPy may be derived as follows:
QPy_min + 6 * (internal bit depth - 8) + Δy = M + 6 * (internal bit depth - input bit depth),
and therefore:
QPy_min = M - 6 * (input bit depth - 8) - Δy.

The delta QP between the original QP (i.e., QPy) and the minimum allowed QP (i.e., QPy_min) can therefore be derived as:
ΔQP = QPy_min - QPy = M - 6 * (input bit depth - 8) - Δy - QPy.
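As a worked example of the derivation above (assuming M = 4, an 8-bit input, an arbitrary original QP of 22, and the default luma offset Δy = -5):

# Worked example of the minimum allowed luma QP and the resulting delta QP
# for the transform-skip case, using M = 4, 8-bit input and dy = -5.
m, input_bit_depth, dy = 4, 8, -5
qp_y_min = m - 6 * (input_bit_depth - 8) - dy      # 4 - 0 + 5 = 9
qp_y = 22                                          # an arbitrary original QP
delta_qp = qp_y_min - qp_y                         # 9 - 22 = -13
print(qp_y_min, delta_qp)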

根據本揭示內容的技術,視訊編碼器200和視訊解碼器300可以被配置為執行用於雙樹區塊分割的ACT。According to the techniques of the present disclosure, the video encoder 200 and the video decoder 300 may be configured to perform ACT for dual-tree block partitioning.

According to some techniques of this disclosure, ACT may still be applied when dual-tree block partitioning is used. When dual-tree block partitioning is enabled, the colour components C1 and C2 may be coded and reconstructed separately from C0. Under the assumption that the C0 component is coded and reconstructed earlier than the other components of the same pixel, the forward colour transform described above can be reformulated in terms of the reconstructed signal of C0:
[Equation: forward colour transform reformulated using the reconstructed C0 signal]
The coding loop then signals only the quantized signals of C0 and of the two transformed components.

In the backward colour transform, shown below, the decoding loop has the reconstructed values (i.e., the reconstructed C0 and the two reconstructed transformed components), and therefore the backward transform cannot be applied directly, because after decoding only the reconstructed C0 is known, while the transformed-domain counterpart of C0 is not:
[Equation: backward colour transform]

This formula can be reformulated by moving the corresponding terms to either side of the equation, as follows:
[Equation: backward colour transform rearranged so that the reconstructed C0 term appears on the known side]

In some examples, C0 (and its reconstructed signal) may not be available to be coded jointly with C1 and C2. When this happens, the value 0 is assigned in place of the reconstructed C0 when performing the colour conversion for the other two colour components. Therefore, the forward and backward colour transforms can be reformulated accordingly:
[Equations: forward and backward colour transforms with the reconstructed C0 set equal to 0, and their compact forms]

Note that when pps_slice_act_qp_offsets_present_flag is enabled, each CU may have an enable flag for ACT. In addition, the branching condition shown in italics in the syntax table above for the signaling of the QP offsets may be redefined as follows:

if( pps_slice_act_qp_offsets_present_flag ) {
    if( slice_type  !=  I  ||  !qtbtt_dual_tree_intra_flag )
        slice_act_y_qp_offset   se(v)
    slice_act_cb_qp_offset      se(v)
    slice_act_cr_qp_offset      se(v)
}

Furthermore, according to some techniques of this disclosure, the QP offsets for ACT may be signaled jointly per colour component at the slice header, for example as follows:

Joint QP offset for the Y and Cb components:
if( pps_slice_act_qp_offsets_present_flag ) {
    slice_act_y_cb_qp_offset    se(v)
    slice_act_cr_qp_offset      se(v)
}

Joint QP offset for the Y and Cr components:
if( pps_slice_act_qp_offsets_present_flag ) {
    slice_act_y_cr_qp_offset    se(v)
    slice_act_cb_qp_offset      se(v)
}

Joint QP offset for the Cb and Cr components:
if( pps_slice_act_qp_offsets_present_flag ) {
    if( slice_type  !=  I  ||  !qtbtt_dual_tree_intra_flag )
        slice_act_y_qp_offset   se(v)
    slice_act_cb_cr_qp_offset   se(v)
}

Joint QP offset for all colour components:
if( pps_slice_act_qp_offsets_present_flag ) {
    slice_act_qp_offset         se(v)
}

根據本揭示內容的技術,視訊編碼器200和視訊解碼器300可以被配置為將單獨的QP偏移用於聯合CbCr模式。亦即,視訊編碼器200和視訊解碼器300可以被配置為基於區塊是使用ACT而編碼的並且是以聯合色度模式(例如,聯合CbCr模式)而編碼的來決定用於該區塊的ACT QP偏移。例如,視訊編碼器200和視訊解碼器300可以儲存ACT QP偏移集合,其中該集合包括用於視訊資料的亮度殘差分量的第一ACT QP偏移、用於視訊資料的第一色度殘差分量的第二ACT QP偏移、用於視訊資料的第二色度殘差分量的第三ACT QP偏移、以及用於經聯合編碼的色度殘差分量的第四ACT QP偏移。第四ACT QP偏移可以不同於第二ACT QP偏移和第三ACT QP偏移中的一者或兩者。According to the techniques of the present disclosure, the video encoder 200 and the video decoder 300 may be configured to use a separate QP offset for the joint-CbCr mode. That is, the video encoder 200 and the video decoder 300 may be configured to determine an ACT QP offset for a block based on whether the block is encoded using ACT and is encoded in a joint-chroma mode (e.g., joint-CbCr mode). For example, the video encoder 200 and the video decoder 300 may store a set of ACT QP offsets, where the set includes a first ACT QP offset for a luma residue component of the video data, a second ACT QP offset for a first chroma residue component of the video data, a third ACT QP offset for a second chroma residue component of the video data, and a fourth ACT QP offset for a jointly encoded chroma residue component. The fourth ACT QP offset may be different from one or both of the second ACT QP offset and the third ACT QP offset.

VVC草案7包括聯合CbCr模式,其中僅對一個色度殘差區塊進行編碼,其被表示為CbCr殘差。在視訊解碼器300處,在CbCr殘差被重構之後,根據所選擇的聯合CbCr模式來推導Cb和Cr殘差。在聯合CbCr模式之一(在VVC中表示為模式2)中,Cr殘差被設置為與CbCr殘差相同,並且Cb殘差被設置為Cb=Csign*Cr,其中Csign可以是1或-1,這取決於聯合CbCr模式。若聯合CbCr模式2用於編碼單元,則可以應用針對CbCr殘差指定的單獨QP偏移。VVC draft 7 includes a joint CbCr mode, in which only one chroma residue block is encoded, which is denoted as CbCr residue. At the video decoder 300, after the CbCr residue is reconstructed, the Cb and Cr residues are derived according to the selected joint CbCr mode. In one of the joint CbCr modes (denoted as mode 2 in VVC), the Cr residue is set to be the same as the CbCr residue, and the Cb residue is set to Cb=Csign*Cr, where Csign can be 1 or -1, depending on the joint CbCr mode. If joint CbCr mode 2 is used for a coding unit, a separate QP offset specified for the CbCr residue can be applied.

根據本揭示內容的技術,視訊編碼器200和視訊解碼器300可以被配置為:若將聯合CbCr模式2應用於ACT區塊以進行殘差編碼,則使用單獨的ACT QP偏移。因此,整體而言,可以存在四個ACT QP偏移,一個用於亮度,一個用於Cb,一個用於Cr,以及一個用於CbCr。According to the techniques of the present disclosure, the video encoder 200 and the video decoder 300 can be configured to use separate ACT QP offsets if joint CbCr mode 2 is applied to the ACT block for residual coding. Therefore, overall, there can be four ACT QP offsets, one for luma, one for Cb, one for Cr, and one for CbCr.
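A sketch of how the four ACT QP offsets described above might be selected per residual block; the offset values and helper names are purely illustrative and are not the values mandated by any specification.

# Illustrative selection among the four ACT QP offsets (luma, Cb, Cr, and the
# joint offset used when joint-CbCr mode 2 codes a single chroma residual).
ACT_QP_OFFSETS = {"Y": -5, "Cb": -5, "Cr": -3, "CbCr": -4}  # example values only

def act_qp_offset_for_block(component: str, joint_cbcr_mode: int) -> int:
    if component in ("Cb", "Cr") and joint_cbcr_mode == 2:
        # A single, jointly coded chroma residual uses its own offset.
        return ACT_QP_OFFSETS["CbCr"]
    return ACT_QP_OFFSETS[component]

print(act_qp_offset_for_block("Cb", joint_cbcr_mode=2))  # -4
print(act_qp_offset_for_block("Cb", joint_cbcr_mode=0))  # -5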

In some examples, the separate ACT QP offset for joint CbCr mode 2 may be fixed to an integer value. In some examples, the separate ACT QP offset for the joint CbCr mode may be signaled in the same way as the other ACT QP offsets. For example, pps_act_cb_cr_qp_offset_plus5 may be signaled in the picture parameter set in the same way as pps_act_cb_qp_offset_plus5, and slice_act_cb_cr_qp_offset may be signaled in the slice header in the same way as slice_act_cr_qp_offset.

在一些實例中,只有在SPS處啟用聯合CbCr模式時,才可以用信號通知用於聯合CbCr模式的單獨的ACT QP偏移(pps_act_cb_cr_qp_offset_plus5和slice_act_cb_cr_qp_offset)。In some examples, separate ACT QP offsets for joint-CbCr mode (pps_act_cb_cr_qp_offset_plus5 and slice_act_cb_cr_qp_offset) may be signaled only when joint-CbCr mode is enabled at the SPS.
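A small sketch of the conditional presence described above; the SPS flag name follows VVC-style conventions, and the parsing helper itself is hypothetical.

# Sketch: the joint-CbCr ACT QP offset syntax elements are parsed only when
# ACT is enabled and the joint-CbCr tool is enabled in the SPS.
def parse_act_cbcr_offsets(act_enabled: bool, sps_joint_cbcr_enabled_flag: bool):
    if act_enabled and sps_joint_cbcr_enabled_flag:
        # pps_act_cb_cr_qp_offset_plus5 / slice_act_cb_cr_qp_offset would be
        # read from the bitstream here; placeholder values stand in below.
        return {"pps_act_cb_cr_qp_offset_plus5": 0,
                "slice_act_cb_cr_qp_offset": 0}
    # Not present: the offsets are not signaled for this configuration.
    return None

print(parse_act_cbcr_offsets(True, False))  # None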

在一些實例中,可以始終用信號通知用於聯合CbCr模式的單獨的ACT QP偏移,即使在SPS處未啟用聯合CbCr模式。In some examples, a separate ACT QP offset for joint-CbCr mode may always be signaled even if joint-CbCr mode is not enabled at the SPS.

為了實現上述各種技術,視訊編碼器200可以被配置為:決定用於視訊資料的區塊的第一色度分量的第一色度殘差區塊;決定用於視訊資料的區塊的第二色度分量的第二色度殘差區塊,其中第一色度殘差區塊和第二色度殘差區塊在第一色彩空間中;決定視訊資料的該區塊是使用自我調整色彩變換(ACT)而編碼的;對第一色度殘差區塊執行ACT,以將第一色度殘差區塊轉換到第二色彩空間;對第二色度殘差區塊執行逆ACT,以將第二色度殘差區塊轉換到第二色彩空間;決定視訊資料的區塊是以聯合色度模式而編碼的,其中對於聯合色度模式,單個色度殘差區塊是針對該區塊的第一色度分量和該區塊的第二色度分量來編碼的;基於經轉換的第一色度殘差區塊和經轉換的第二色度殘差區塊來決定單個色度殘差區塊;決定用於該區塊的QP;基於該區塊是使用ACT而編碼的並且是以聯合色度模式而編碼的來決定用於該區塊的ACT QP偏移;基於QP和ACT QP偏移來決定用於該區塊的ACT QP;及基於用於該區塊的ACT QP來對單個色度殘差區塊進行量化。To implement the various techniques described above, the video encoder 200 may be configured to: determine a first chroma residue block for a first chroma component of a block of video data; determine a second chroma residue block for a second chroma component of a block of video data, wherein the first chroma residue block and the second chroma residue block are in a first color space; determine that the block of video data is encoded using an adaptive color transform (ACT); perform the ACT on the first chroma residue block to convert the first chroma residue block to a second color space; perform the ACT on the second chroma residue block; The method comprises performing an inverse ACT to convert the second chroma residue block to a second color space; determining that the block of video data is encoded in a joint chroma mode, wherein for the joint chroma mode, a single chroma residue block is encoded for a first chroma component of the block and a second chroma component of the block; determining a single chroma residue block based on the converted first chroma residue block and the converted second chroma residue block; determining a QP for the block; determining an ACT for the block based on the block being encoded using ACT and being encoded in the joint chroma mode QP offset; determining an ACT QP for the block based on the QP and the ACT QP offset; and quantizing a single chroma residue block based on the ACT QP for the block.

為了實現上述各種技術,視訊解碼器300可以被配置為:決定視訊資料的區塊是使用ACT而編碼的;決定該區塊是以聯合色度模式而編碼的,其中對於聯合色度模式,單個色度殘差區塊是針對該區塊的第一色度分量和該區塊的第二色度分量來編碼的;決定用於該區塊的QP;基於該區塊是使用ACT而編碼的並且是以聯合色度模式而編碼的來決定用於該區塊的ACT QP偏移;基於QP和ACT QP偏移來決定用於該區塊的ACT QP;基於用於該區塊的ACT QP來決定單個色度殘差區塊;根據單個色度殘差區塊來決定用於第一色度分量的第一色度殘差區塊,其中第一色度殘差區塊在第一色彩空間中;根據單個色度殘差區塊來決定用於第二色度分量的第二色度殘差區塊,其中第二色度殘差區塊在第一色彩空間中;對第一色度殘差區塊執行逆ACT,以將第一色度殘差區塊轉換到第二色彩空間;及對第二色度殘差區塊執行逆ACT,以將第二色度殘差區塊轉換到第二色彩空間。To implement the various techniques described above, the video decoder 300 may be configured to: determine that a block of video data is encoded using ACT; determine that the block is encoded in a joint chroma mode where a single chroma residue block is encoded for a first chroma component of the block and a second chroma component of the block; determine a QP for the block; determine an ACT QP offset for the block based on the block being encoded using ACT and being encoded in a joint chroma mode; determine an ACT QP for the block based on the QP and the ACT QP offset; determine an ACT QP for the block based on the ACT QP for the block; A method for determining a single chroma residue block based on a QP; determining a first chroma residue block for a first chroma component based on the single chroma residue block, wherein the first chroma residue block is in a first color space; determining a second chroma residue block for a second chroma component based on the single chroma residue block, wherein the second chroma residue block is in the first color space; performing an inverse ACT on the first chroma residue block to convert the first chroma residue block to the second color space; and performing an inverse ACT on the second chroma residue block to convert the second chroma residue block to the second color space.
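A high-level sketch of the decoder-side flow described above for a block coded with ACT and a joint chroma mode; all helper names, the nominal step-size formula, and the placeholder handling of the inverse ACT are illustrative assumptions, not the normative decoding process.

# High-level sketch: one jointly coded chroma residual block is dequantized
# with the ACT QP, the Cb/Cr residuals are derived from it, and the inverse
# ACT would then map the residuals back to the original colour space.
def decode_act_joint_chroma_block(joint_residual_q, qp, act_qp_offset_cbcr,
                                  c_sign=1):
    act_qp = qp + act_qp_offset_cbcr                  # QP used for dequantization
    step = 2 ** ((act_qp - 4) / 6.0)                  # nominal step size
    joint_residual = [level * step for level in joint_residual_q]
    cr = joint_residual                               # e.g., joint-CbCr mode 2
    cb = [c_sign * v for v in cr]
    # The inverse ACT (second colour space -> first colour space) would be
    # applied to the luma and chroma residual blocks at this point.
    return cb, cr

cb, cr = decode_act_joint_chroma_block([2, -1, 0, 3], qp=27, act_qp_offset_cbcr=-5)
print(cb, cr)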

FIG. 3 is a block diagram illustrating an example video encoder 200 that may perform the techniques of this disclosure. FIG. 3 is provided for purposes of explanation and should not be considered limiting of the techniques as broadly exemplified and described in this disclosure. For purposes of explanation, this disclosure describes the video encoder 200 in the context of video coding standards such as the HEVC video coding standard and the H.266 video coding standard in development. However, the techniques of this disclosure are not limited to these video coding standards and are generally applicable to video encoding and decoding.

In the example of FIG. 3, the video encoder 200 includes a video data memory 230, a mode selection unit 202, a residual generation unit 204, an ACT unit 205, a transform processing unit 206, a quantization unit 208, an inverse quantization unit 210, an inverse transform processing unit 212, an inverse ACT unit 213, a reconstruction unit 214, a filter unit 216, a decoded picture buffer (DPB) 218, and an entropy encoding unit 220. Any or all of the video data memory 230, the mode selection unit 202, the residual generation unit 204, the transform processing unit 206, the quantization unit 208, the inverse quantization unit 210, the inverse transform processing unit 212, the reconstruction unit 214, the filter unit 216, the DPB 218, and the entropy encoding unit 220 may be implemented in one or more processors or in processing circuitry. For instance, the units of the video encoder 200 may be implemented as one or more circuits or logic elements, as part of a hardware circuit, or as part of a processor, an ASIC, or an FPGA. Moreover, the video encoder 200 may include additional or alternative processors or processing circuitry to perform these and other functions.

The video data memory 230 may store video data to be encoded by the components of the video encoder 200. The video encoder 200 may receive the video data stored in the video data memory 230 from, for example, the video source 104 (FIG. 1). The DPB 218 may act as a reference picture memory that stores reference video data for use when the video encoder 200 predicts subsequent video data. The video data memory 230 and the DPB 218 may be formed by any of a variety of memory devices, such as dynamic random access memory (DRAM), including synchronous DRAM (SDRAM), magnetoresistive RAM (MRAM), resistive RAM (RRAM), or other types of memory devices. The video data memory 230 and the DPB 218 may be provided by the same memory device or separate memory devices. In various examples, the video data memory 230 may be on-chip with the other components of the video encoder 200, as illustrated, or off-chip relative to those components.

In this disclosure, references to the video data memory 230 should not be interpreted as being limited to memory internal to the video encoder 200 (unless specifically described as such), or to memory external to the video encoder 200 (unless specifically described as such). Rather, references to the video data memory 230 should be understood as a reference memory that stores video data that the video encoder 200 receives for encoding (e.g., video data for a current block that is to be encoded). The memory 106 of FIG. 1 may also provide temporary storage of outputs from the various units of the video encoder 200.

The various units of FIG. 3 are illustrated to assist with understanding the operations performed by the video encoder 200. The units may be implemented as fixed-function circuits, programmable circuits, or a combination thereof. Fixed-function circuits refer to circuits that provide particular functionality and are preset on the operations that can be performed. Programmable circuits refer to circuits that can be programmed to perform various tasks and provide flexible functionality in the operations that can be performed. For instance, programmable circuits may execute software or firmware that causes the programmable circuits to operate in the manner defined by the instructions of the software or firmware. Fixed-function circuits may execute software instructions (e.g., to receive parameters or output parameters), but the types of operations that the fixed-function circuits perform are generally immutable. In some examples, one or more of the units may be distinct circuit blocks (fixed-function or programmable), and in some examples, one or more of the units may be integrated circuits.

The video encoder 200 may include arithmetic logic units (ALUs), elementary function units (EFUs), digital circuits, analog circuits, and/or programmable cores formed from programmable circuits. In examples where the operations of the video encoder 200 are performed using software executed by the programmable circuits, the memory 106 (FIG. 1) may store the instructions (e.g., object code) of the software that the video encoder 200 receives and executes, or another memory within the video encoder 200 (not shown) may store such instructions.

The video data memory 230 is configured to store received video data. The video encoder 200 may retrieve a picture of the video data from the video data memory 230 and provide the video data to the residual generation unit 204 and the mode selection unit 202. The video data in the video data memory 230 may be raw video data that is to be encoded.

The mode selection unit 202 includes a motion estimation unit 222, a motion compensation unit 224, and an intra-prediction unit 226. The mode selection unit 202 may include additional functional units that perform video prediction in accordance with other prediction modes. As examples, the mode selection unit 202 may include a palette unit, an intra-block copy unit (which may be part of the motion estimation unit 222 and/or the motion compensation unit 224), an affine unit, a linear model (LM) unit, or the like.

The mode selection unit 202 generally coordinates multiple encoding passes to test combinations of encoding parameters and the resulting rate-distortion values for such combinations. The encoding parameters may include partitioning of CTUs into CUs, prediction modes for the CUs, transform types for residual data of the CUs, quantization parameters for residual data of the CUs, and so on. The mode selection unit 202 may ultimately select the combination of encoding parameters having rate-distortion values that are better than those of the other tested combinations.

The video encoder 200 may partition a picture retrieved from the video data memory 230 into a series of CTUs and encapsulate one or more CTUs within a slice. The mode selection unit 202 may partition the CTUs of the picture in accordance with a tree structure, such as the QTBT structure or the quad-tree structure of HEVC described above. As described above, the video encoder 200 may form one or more CUs by partitioning a CTU according to the tree structure. Such a CU may also generally be referred to as a "video block" or "block."

In general, the mode selection unit 202 also controls its components (e.g., the motion estimation unit 222, the motion compensation unit 224, and the intra-prediction unit 226) to generate a prediction block for a current block (e.g., a current CU, or in HEVC, the overlapping portion of a PU and a TU). For inter prediction of the current block, the motion estimation unit 222 may perform a motion search to identify one or more closely matching reference blocks in one or more reference pictures (e.g., one or more previously coded pictures stored in the DPB 218). In particular, the motion estimation unit 222 may calculate a value representative of how similar a potential reference block is to the current block, for example according to the sum of absolute differences (SAD), sum of squared differences (SSD), mean absolute difference (MAD), mean squared difference (MSD), or the like. The motion estimation unit 222 may generally perform these calculations using sample-by-sample differences between the current block and the reference block being considered. The motion estimation unit 222 may identify the reference block having the lowest value resulting from these calculations, which indicates the reference block that most closely matches the current block.
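As a rough illustration of the block-matching cost described above, the following C++ sketch computes the SAD between a current block and one candidate reference block. The buffer layout, strides, and function name are assumptions made for illustration; an encoder would evaluate such a cost for many candidate positions and keep the minimum.

```cpp
#include <cstdint>
#include <cstdlib>

// Sum of absolute differences between a current block and a candidate
// reference block, both stored row by row with their own strides.
// A lower SAD indicates a closer match for motion estimation.
int64_t BlockSad(const uint8_t* cur, int curStride,
                 const uint8_t* ref, int refStride,
                 int width, int height) {
  int64_t sad = 0;
  for (int y = 0; y < height; ++y) {
    for (int x = 0; x < width; ++x) {
      sad += std::abs(static_cast<int>(cur[y * curStride + x]) -
                      static_cast<int>(ref[y * refStride + x]));
    }
  }
  return sad;
}
```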

The motion estimation unit 222 may form one or more motion vectors (MVs) that define the position of a reference block in a reference picture relative to the position of the current block in the current picture. The motion estimation unit 222 may then provide the motion vectors to the motion compensation unit 224. For example, for unidirectional inter prediction, the motion estimation unit 222 may provide a single motion vector, whereas for bidirectional inter prediction, the motion estimation unit 222 may provide two motion vectors. The motion compensation unit 224 may then use the motion vectors to generate the prediction block. For example, the motion compensation unit 224 may use a motion vector to retrieve data of the reference block. As another example, if a motion vector has fractional sample precision, the motion compensation unit 224 may interpolate values for the prediction block according to one or more interpolation filters. Moreover, for bidirectional inter prediction, the motion compensation unit 224 may retrieve data for the two reference blocks identified by the respective motion vectors and combine the retrieved data, for example through sample-by-sample averaging or weighted averaging.

As another example, for intra prediction, or intra-prediction coding, the intra-prediction unit 226 may generate the prediction block from samples neighboring the current block. For example, for directional modes, the intra-prediction unit 226 may generally mathematically combine values of neighboring samples and populate these calculated values in the defined direction across the current block to produce the prediction block. As another example, for the DC mode, the intra-prediction unit 226 may calculate an average of the neighboring samples of the current block and generate the prediction block to include this resulting average for each sample of the prediction block.
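As a minimal sketch of the DC mode just described, the following fragment averages the reconstructed samples above and to the left of an NxN block and fills the prediction block with that average. The buffer layout, rounding, and function name are illustrative assumptions rather than the normative DC derivation of any particular standard.

```cpp
#include <cstdint>
#include <vector>

// DC intra prediction: fill an NxN prediction block with the rounded
// average of the N samples above the block and the N samples to its left.
std::vector<uint8_t> PredictDc(const uint8_t* above, const uint8_t* left, int n) {
  int sum = 0;
  for (int i = 0; i < n; ++i) {
    sum += above[i];
    sum += left[i];
  }
  const uint8_t dc = static_cast<uint8_t>((sum + n) / (2 * n));  // rounded average
  return std::vector<uint8_t>(static_cast<size_t>(n) * n, dc);
}
```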

The mode selection unit 202 provides the prediction block to the residual generation unit 204. The residual generation unit 204 receives a raw, uncoded version of the current block from the video data memory 230 and the prediction block from the mode selection unit 202. The residual generation unit 204 calculates sample-by-sample differences between the current block and the prediction block. The resulting sample-by-sample differences define a residual block for the current block. In some examples in which the video data is coded in a joint chroma mode, the residual generation unit 204 may determine a single chroma residual block from two separate chroma residual blocks. In some examples, the residual generation unit 204 may be formed using one or more subtractor circuits that perform binary subtraction.

In examples where the mode selection unit 202 partitions a CU into PUs, each PU may be associated with a luma prediction unit and corresponding chroma prediction units. The video encoder 200 and the video decoder 300 may support PUs of various sizes. As noted above, the size of a CU may refer to the size of the luma coding block of the CU, and the size of a PU may refer to the size of the luma prediction unit of the PU. Assuming that the size of a particular CU is 2Nx2N, the video encoder 200 may support PU sizes of 2Nx2N or NxN for intra prediction, and symmetric PU sizes of 2Nx2N, 2NxN, Nx2N, NxN, or similar for inter prediction. The video encoder 200 and the video decoder 300 may also support asymmetric partitioning for PU sizes of 2NxnU, 2NxnD, nLx2N, and nRx2N for inter prediction.

In examples where the mode selection unit 202 does not further partition a CU into PUs, each CU may be associated with a luma coding block and corresponding chroma coding blocks. As above, the size of a CU may refer to the size of the luma coding block of the CU. The video encoder 200 and the video decoder 300 may support CU sizes of 2Nx2N, 2NxN, or Nx2N.

For other video coding techniques, such as intra-block copy mode coding, affine mode coding, and linear model (LM) mode coding, to name a few examples, the mode selection unit 202 generates a prediction block for the current block being encoded via the respective units associated with those coding techniques. In some examples, such as palette mode coding, the mode selection unit 202 may not generate a prediction block, and instead generates syntax elements indicating the manner in which to reconstruct the block based on a selected palette. In such modes, the mode selection unit 202 may provide these syntax elements to the entropy encoding unit 220 to be encoded.

As described above, the residual generation unit 204 receives the video data for the current block and the corresponding prediction block. The residual generation unit 204 then generates a residual block for the current block. To generate the residual block, the residual generation unit 204 calculates sample-by-sample differences between the prediction block and the current block. In scenarios where ACT is enabled, the ACT unit 205 may perform ACT on the residual block to convert the residual block from a first color space to a second color space. In scenarios where ACT is not enabled, the ACT unit 205 may act as a pass-through unit that does not alter the residual block output by the residual generation unit 204.
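This description does not spell out the transform itself; as one commonly cited example, the sketch below applies a lifting-based YCgCo-style forward transform to a triplet of co-located residual samples. The coefficient structure, the (c0, c1, c2) component ordering, and the function name are assumptions for illustration and may differ from the transform used by a given codec.

```cpp
#include <cstdint>

// Lifting-based YCgCo-style forward transform applied to one triplet of
// co-located residual samples (c0, c1, c2), e.g., (G, B, R) residuals.
// The result (y, cg, co) is the residual triplet in the second color space.
void ForwardActSample(int32_t c0, int32_t c1, int32_t c2,
                      int32_t& y, int32_t& cg, int32_t& co) {
  co = c2 - c1;                      // chroma-like difference component
  const int32_t t = c1 + (co >> 1);  // arithmetic right shift assumed
  cg = c0 - t;
  y = t + (cg >> 1);
}
```

A matching inverse for this lifting structure appears later in the discussion of the decoder's inverse ACT.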

The transform processing unit 206 applies one or more transforms to the residual block to generate a block of transform coefficients (referred to herein as a "transform coefficient block"). The transform processing unit 206 may apply various transforms to a residual block to form the transform coefficient block. For example, the transform processing unit 206 may apply a discrete cosine transform (DCT), a directional transform, a Karhunen-Loeve transform (KLT), or a conceptually similar transform to the residual block. In some examples, the transform processing unit 206 may perform multiple transforms on a residual block, for example a primary transform and a secondary transform, such as a rotational transform. In some examples, the transform processing unit 206 does not apply transforms to the residual block.

The quantization unit 208 may quantize the transform coefficients in a transform coefficient block to produce a quantized transform coefficient block. The quantization unit 208 may quantize the transform coefficients of a transform coefficient block according to a QP value associated with the current block. The video encoder 200 (e.g., via the mode selection unit 202) may adjust the degree of quantization applied to the transform coefficient blocks associated with the current block by adjusting the QP value associated with the CU.
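In HEVC/VVC-style codecs, the quantization step size grows roughly by a factor of two for every increase of 6 in QP (approximately Qstep = 2^((QP-4)/6)). The following simplified scalar quantizer is a conceptual sketch of that relationship only; real codecs use integer multiplier/shift tables, per-frequency scaling, and rounding offsets, and the function names here are assumptions.

```cpp
#include <cmath>
#include <cstdint>

// Approximate HEVC/VVC-style step size: doubles every 6 QP units.
double QStep(int qp) { return std::pow(2.0, (qp - 4) / 6.0); }

// Quantize a single transform coefficient with round-to-nearest.
int32_t QuantizeCoeff(int32_t coeff, int qp) {
  const double q = coeff / QStep(qp);
  return static_cast<int32_t>(q >= 0 ? q + 0.5 : q - 0.5);
}

// Reconstruct (dequantize) a quantized level.
int32_t DequantizeCoeff(int32_t level, int qp) {
  return static_cast<int32_t>(std::lround(level * QStep(qp)));
}
```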

For a block of video data coded in the joint chroma mode and using ACT, the quantization unit 208 may determine an ACT QP offset for the block based on the block being coded using ACT and being coded in the joint chroma mode, and determine an ACT QP for the block based on the QP value and the ACT QP offset. Accordingly, for a block of video data coded in the joint chroma mode and using ACT, the quantization unit 208 may quantize the block using the ACT QP value rather than the QP value. Quantization may introduce loss of information, and therefore the quantized transform coefficients may have lower precision than the original transform coefficients produced by the transform processing unit 206.
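A minimal sketch of the ACT QP derivation described here, assuming the offset is simply added to the block QP and the result is kept within a valid QP range; the clipping bounds and the function name are illustrative assumptions rather than a normative derivation.

```cpp
#include <algorithm>

// Derive the QP actually used for (de)quantization when ACT is applied.
// actQpOffset is selected per component/mode (see the offset-selection
// sketch later in this description); here it is only added to the block QP
// and clipped to an assumed valid range.
int DeriveActQp(int blockQp, int actQpOffset, int minQp = 0, int maxQp = 63) {
  return std::clamp(blockQp + actQpOffset, minQp, maxQp);
}
```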

The inverse quantization unit 210 and the inverse transform processing unit 212 may apply inverse quantization and inverse transforms, respectively, to a quantized transform coefficient block to reconstruct a residual block from the transform coefficient block. For a block of video data coded in the joint chroma mode and using ACT, the inverse quantization unit 210 may determine an ACT QP offset for the block based on the block being coded using ACT and being coded in the joint chroma mode, and determine an ACT QP for the block based on the QP value and the ACT QP offset. Accordingly, for a block of video data coded in the joint chroma mode and using ACT, the inverse quantization unit 210 may inverse quantize the block using the ACT QP value rather than the QP value.

In scenarios where ACT is enabled, the inverse ACT unit 213 may perform inverse ACT on the reconstructed residual block to convert the residual block from the second color space back to the first color space. In scenarios where ACT is not enabled, the inverse ACT unit 213 may act as a pass-through unit that does not alter the reconstructed residual block output by the inverse transform processing unit 212.

The reconstruction unit 214 may generate a reconstructed block corresponding to the current block (albeit potentially with some degree of distortion) based on the reconstructed residual block and the prediction block generated by the mode selection unit 202. For example, the reconstruction unit 214 may add samples of the reconstructed residual block to corresponding samples of the prediction block generated by the mode selection unit 202 to produce the reconstructed block.

The filter unit 216 may perform one or more filter operations on reconstructed blocks. For example, the filter unit 216 may perform deblocking operations to reduce blockiness artifacts along the edges of CUs. In some examples, the operations of the filter unit 216 may be skipped.

The video encoder 200 stores reconstructed blocks in the DPB 218. For instance, in examples where the operations of the filter unit 216 are not needed, the reconstruction unit 214 may store reconstructed blocks in the DPB 218. In examples where the operations of the filter unit 216 are needed, the filter unit 216 may store the filtered reconstructed blocks in the DPB 218. The motion estimation unit 222 and the motion compensation unit 224 may retrieve a reference picture, formed from the reconstructed (and potentially filtered) blocks, from the DPB 218 to inter-predict blocks of subsequently encoded pictures. In addition, the intra-prediction unit 226 may use reconstructed blocks of the current picture in the DPB 218 to intra-predict other blocks in the current picture.

In general, the entropy encoding unit 220 may entropy encode syntax elements received from other functional components of the video encoder 200. For example, the entropy encoding unit 220 may entropy encode the quantized transform coefficient blocks from the quantization unit 208. As another example, the entropy encoding unit 220 may entropy encode prediction syntax elements (e.g., motion information for inter prediction or intra-mode information for intra prediction) from the mode selection unit 202. The entropy encoding unit 220 may perform one or more entropy encoding operations on the syntax elements, which are another example of video data, to generate entropy-encoded data. For example, the entropy encoding unit 220 may perform a context-adaptive variable length coding (CAVLC) operation, a CABAC operation, a variable-to-variable (V2V) length coding operation, a syntax-based context-adaptive binary arithmetic coding (SBAC) operation, a probability interval partitioning entropy (PIPE) coding operation, an Exponential-Golomb coding operation, or another type of entropy encoding operation on the data. In some examples, the entropy encoding unit 220 may operate in a bypass mode in which syntax elements are not entropy encoded.

The video encoder 200 may output a bitstream that includes the entropy-encoded syntax elements needed to reconstruct blocks of a slice or picture. In particular, the entropy encoding unit 220 may output the bitstream.

The operations described above are described with respect to a block. Such description should be understood as operations for a luma coding block and/or chroma coding blocks. As described above, in some examples, the luma coding block and the chroma coding blocks are the luma and chroma components of a CU. In some examples, the luma coding block and the chroma coding blocks are the luma and chroma components of a PU.

In some examples, operations performed with respect to a luma coding block need not be repeated for the chroma coding blocks. As one example, the operations to identify a motion vector (MV) and a reference picture for a luma coding block need not be repeated to identify an MV and a reference picture for the chroma blocks. Rather, the MV for the luma coding block may be scaled to determine the MV for the chroma blocks, and the reference picture may be the same. As another example, the intra-prediction process may be the same for the luma coding block and the chroma coding blocks.

The video encoder 200 represents an example of a device configured to encode video data, the device including: a memory configured to store video data; and one or more processing units implemented in circuitry and configured to: determine that one or more QP offset values are to be included in a slice header; responsive to determining that the one or more QP offset values are to be included in the slice header, generate a flag having a first value for inclusion in a parameter set, where the first value of the flag indicates that the one or more QP offset values are included in the slice header and a second value of the flag indicates that the one or more QP offset values are not included in the slice header; responsive to determining that the one or more QP offset values are to be included in the slice header, generate the one or more QP offset values for inclusion in the slice header; and perform an adaptive color transform on residual data based on the one or more QP offset values to determine color-transformed residual data. The video encoder 200 may additionally or alternatively be configured to: determine whether to enable or disable the adaptive color transform for a quantization group of coding units (QGCU); responsive to determining that the adaptive color transform is enabled for the QGCU, generate a flag indicating whether the adaptive color transform is enabled or disabled for the QGCU for inclusion in the video data; and process sample values of the QGCU in the color transform domain.

FIG. 4 is a block diagram illustrating an example video decoder 300 that may perform the techniques of this disclosure. FIG. 4 is provided for purposes of explanation and is not limiting of the techniques as broadly exemplified and described in this disclosure. For purposes of explanation, this disclosure describes the video decoder 300 according to the techniques of JEM, VVC, and HEVC. However, the techniques of this disclosure may be performed by video coding devices configured for other video coding standards.

In the example of FIG. 4, the video decoder 300 includes a coded picture buffer (CPB) memory 320, an entropy decoding unit 302, a prediction processing unit 304, an inverse quantization unit 306, an inverse transform processing unit 308, an inverse ACT unit 309, a reconstruction unit 310, a filter unit 312, and a decoded picture buffer (DPB) 314. Any or all of the CPB memory 320, the entropy decoding unit 302, the prediction processing unit 304, the inverse quantization unit 306, the inverse transform processing unit 308, the reconstruction unit 310, the filter unit 312, and the DPB 314 may be implemented in one or more processors or in processing circuitry. For instance, the units of the video decoder 300 may be implemented as one or more circuits or logic elements, as part of a hardware circuit, or as part of a processor, an ASIC, or an FPGA. Moreover, the video decoder 300 may include additional or alternative processors or processing circuitry to perform these and other functions.

The prediction processing unit 304 includes a motion compensation unit 316 and an intra-prediction unit 318. The prediction processing unit 304 may include additional units that perform prediction in accordance with other prediction modes. As examples, the prediction processing unit 304 may include a palette unit, an intra-block copy unit (which may form part of the motion compensation unit 316), an affine unit, a linear model (LM) unit, or the like. In other examples, the video decoder 300 may include more, fewer, or different functional components.

The CPB memory 320 may store video data, such as an encoded video bitstream, to be decoded by the components of the video decoder 300. The video data stored in the CPB memory 320 may be obtained, for example, from the computer-readable medium 110 (FIG. 1). The CPB memory 320 may include a CPB that stores encoded video data (e.g., syntax elements) from an encoded video bitstream. Also, the CPB memory 320 may store video data other than the syntax elements of a coded picture, such as temporary data representing outputs from the various units of the video decoder 300. The DPB 314 generally stores decoded pictures, which the video decoder 300 may output and/or use as reference video data when decoding subsequent data or pictures of the encoded video bitstream. The CPB memory 320 and the DPB 314 may be formed by any of a variety of memory devices, such as DRAM, including SDRAM, MRAM, RRAM, or other types of memory devices. The CPB memory 320 and the DPB 314 may be provided by the same memory device or separate memory devices. In various examples, the CPB memory 320 may be on-chip with the other components of the video decoder 300, or off-chip relative to those components.

Additionally or alternatively, in some examples, the video decoder 300 may retrieve coded video data from the memory 120 (FIG. 1). That is, the memory 120 may store data with the CPB memory 320 as discussed above. Likewise, when some or all of the functionality of the video decoder 300 is implemented in software to be executed by the processing circuitry of the video decoder 300, the memory 120 may store instructions to be executed by the video decoder 300.

The various units shown in FIG. 4 are illustrated to assist with understanding the operations performed by the video decoder 300. The units may be implemented as fixed-function circuits, programmable circuits, or a combination thereof. Similar to FIG. 3, fixed-function circuits refer to circuits that provide particular functionality and are preset on the operations that can be performed. Programmable circuits refer to circuits that can be programmed to perform various tasks and provide flexible functionality in the operations that can be performed. For instance, programmable circuits may execute software or firmware that causes the programmable circuits to operate in the manner defined by the instructions of the software or firmware. Fixed-function circuits may execute software instructions (e.g., to receive parameters or output parameters), but the types of operations that the fixed-function circuits perform are generally immutable. In some examples, one or more of the units may be distinct circuit blocks (fixed-function or programmable), and in some examples, one or more of the units may be integrated circuits.

The video decoder 300 may include ALUs, EFUs, digital circuits, analog circuits, and/or programmable cores formed from programmable circuits. In examples where the operations of the video decoder 300 are performed by software executing on the programmable circuits, on-chip or off-chip memory may store the instructions (e.g., object code) of the software that the video decoder 300 receives and executes.

The entropy decoding unit 302 may receive encoded video data from the CPB and entropy decode the video data to reproduce syntax elements. The prediction processing unit 304, the inverse quantization unit 306, the inverse transform processing unit 308, the reconstruction unit 310, and the filter unit 312 may generate decoded video data based on the syntax elements extracted from the bitstream.

In general, the video decoder 300 reconstructs a picture on a block-by-block basis. The video decoder 300 may perform a reconstruction operation on each block individually (where the block currently being reconstructed, that is, decoded, may be referred to as the "current block").

The entropy decoding unit 302 may entropy decode syntax elements defining the quantized transform coefficients of a quantized transform coefficient block, as well as transform information such as a QP and/or transform mode indications. The inverse quantization unit 306 may use the QP associated with the quantized transform coefficient block to determine a degree of quantization and, likewise, the degree of inverse quantization for the inverse quantization unit 306 to apply. For a block of video data coded in the joint chroma mode and using ACT, the inverse quantization unit 306 may determine an ACT QP offset for the block based on the block being coded using ACT and being coded in the joint chroma mode, and determine an ACT QP for the block based on the QP value and the ACT QP offset. Accordingly, for a block of video data coded in the joint chroma mode and using ACT, the inverse quantization unit 306 may inverse quantize the block using the ACT QP value rather than the QP value. The inverse quantization unit 306 may, for example, perform a bitwise left-shift operation to inverse quantize the quantized transform coefficients. The inverse quantization unit 306 may thereby form a transform coefficient block including transform coefficients.

After the inverse quantization unit 306 forms the transform coefficient block, the inverse transform processing unit 308 may apply one or more inverse transforms to the transform coefficient block to generate a residual block associated with the current block. For example, the inverse transform processing unit 308 may apply an inverse DCT, an inverse integer transform, an inverse Karhunen-Loeve transform (KLT), an inverse rotational transform, an inverse directional transform, or another inverse transform to the transform coefficient block.

In scenarios where ACT is enabled, the inverse ACT unit 309 may perform inverse ACT on the residual block to convert the residual block from the second color space back to the first color space. In scenarios where ACT is not enabled, the inverse ACT unit 309 may act as a pass-through unit that does not alter the residual block output by the inverse transform processing unit 308.

Furthermore, the prediction processing unit 304 generates a prediction block according to prediction information syntax elements that were entropy decoded by the entropy decoding unit 302. For example, if the prediction information syntax elements indicate that the current block is inter-predicted, the motion compensation unit 316 may generate the prediction block. In this case, the prediction information syntax elements may indicate a reference picture in the DPB 314 from which to retrieve a reference block, as well as a motion vector identifying the position of the reference block in the reference picture relative to the position of the current block in the current picture. The motion compensation unit 316 may generally perform the inter-prediction process in a manner that is substantially similar to that described with respect to the motion compensation unit 224 (FIG. 3).

As another example, if the prediction information syntax elements indicate that the current block is intra-predicted, the intra-prediction unit 318 may generate the prediction block according to the intra-prediction mode indicated by the prediction information syntax elements. Again, the intra-prediction unit 318 may generally perform the intra-prediction process in a manner that is substantially similar to that described with respect to the intra-prediction unit 226 (FIG. 3). The intra-prediction unit 318 may retrieve data of neighboring samples of the current block from the DPB 314.

The reconstruction unit 310 may reconstruct the current block using the prediction block and the residual block. For example, the reconstruction unit 310 may add samples of the residual block to corresponding samples of the prediction block to reconstruct the current block.

The filter unit 312 may perform one or more filter operations on reconstructed blocks. For example, the filter unit 312 may perform deblocking operations to reduce blockiness artifacts along the edges of the reconstructed blocks. The operations of the filter unit 312 are not necessarily performed in all examples.

The video decoder 300 may store the reconstructed blocks in the DPB 314. For example, in examples where the operations of the filter unit 312 are not performed, the reconstruction unit 310 may store reconstructed blocks in the DPB 314. In examples where the operations of the filter unit 312 are performed, the filter unit 312 may store the filtered reconstructed blocks in the DPB 314. As discussed above, the DPB 314 may provide reference information, such as samples of the current picture for intra prediction and previously decoded pictures for subsequent motion compensation, to the prediction processing unit 304. Moreover, the video decoder 300 may output decoded pictures (e.g., decoded video) from the DPB 314 for subsequent presentation on a display device, such as the display device 118 of FIG. 1.

In this manner, the video decoder 300 represents an example of a video decoding device including: a memory configured to store video data; and one or more processing units implemented in circuitry and configured to: receive a flag in a parameter set, where a first value of the flag indicates that one or more QP offset values are included in a slice header and a second value of the flag indicates that the one or more QP offset values are not included in the slice header; responsive to determining that the flag has the first value, receive the one or more QP offset values in the slice header; and perform an adaptive color transform on residual data based on the one or more QP offset values. The video decoder 300 may additionally or alternatively be configured to: receive a flag at the level of a quantization group of coding units (QGCU), the flag indicating whether the adaptive color transform is enabled or disabled for the QGCU; and responsive to determining that the flag indicates that the adaptive color transform is enabled for the QGCU, process sample values of the QGCU in the color transform domain.
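A minimal sketch of the conditional parsing behavior just described, assuming a hypothetical bit-reader interface (BitReader, readFlag, readSignedExpGolomb); the struct names, the number of offsets parsed, and the use of signed Exp-Golomb coding are illustrative assumptions, not the normative syntax.

```cpp
#include <array>
#include <deque>

// Toy stand-in for a bitstream reader: values are assumed to be pre-decoded
// and queued, since real flag/Exp-Golomb parsing is outside this sketch.
struct BitReader {
  std::deque<int> values;
  bool readFlag() { int v = values.front(); values.pop_front(); return v != 0; }
  int readSignedExpGolomb() { int v = values.front(); values.pop_front(); return v; }
};

struct ActQpOffsets {
  bool presentInSliceHeader = false;
  std::array<int, 3> offsets = {0, 0, 0};  // e.g., offsets for Y, Cb, Cr
};

// Read the parameter-set flag, then read the slice-header ACT QP offsets
// only when the flag indicates that they are present.
ActQpOffsets ParseActQpOffsets(BitReader& paramSet, BitReader& sliceHeader) {
  ActQpOffsets result;
  result.presentInSliceHeader = paramSet.readFlag();
  if (result.presentInSliceHeader) {
    for (int& o : result.offsets) o = sliceHeader.readSignedExpGolomb();
  }
  return result;
}
```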

FIG. 5 is a flowchart illustrating an example method for encoding a current block. The current block may comprise a current CU. Although described with respect to the video encoder 200 (FIGS. 1 and 3), it should be understood that other devices may be configured to perform a method similar to that of FIG. 5.

In this example, the video encoder 200 initially predicts the current block (350). For example, the video encoder 200 may form a prediction block for the current block. The video encoder 200 may then calculate a residual block for the current block (352). To calculate the residual block, the video encoder 200 may calculate the difference between the original, uncoded block and the prediction block for the current block. For some blocks, the video encoder 200 may also calculate the residual block by performing ACT, as described above. The video encoder 200 may then transform and quantize the coefficients of the residual block (354). Next, the video encoder 200 may scan the quantized transform coefficients of the residual block (356). During or after the scan, the video encoder 200 may entropy encode the transform coefficients (358). For example, the video encoder 200 may encode the transform coefficients using CAVLC or CABAC. The video encoder 200 may then output the entropy-encoded data of the block (360).

FIG. 6 is a flowchart illustrating an example method for decoding a current block of video data. The current block may comprise a current CU. Although described with respect to the video decoder 300 (FIGS. 1 and 4), it should be understood that other devices may be configured to perform a method similar to that of FIG. 6.

The video decoder 300 may receive entropy-encoded data for the current block, such as entropy-encoded prediction information and entropy-encoded data for the coefficients of a residual block corresponding to the current block (370). The video decoder 300 may entropy decode the entropy-encoded data to determine prediction information for the current block and to reproduce the coefficients of the residual block (372). The video decoder 300 may predict the current block (374), for example using an intra- or inter-prediction mode as indicated by the prediction information for the current block, to calculate a prediction block for the current block. The video decoder 300 may then inverse scan the reproduced coefficients (376) to create a block of quantized transform coefficients. The video decoder 300 may then inverse quantize and inverse transform the transform coefficients to produce a residual block (378). For some blocks, the video decoder may also perform ACT as described above to produce the residual block. Finally, the video decoder 300 may decode the current block by combining the prediction block and the residual block (380).

FIG. 7 is a flowchart illustrating an example method for decoding a current block of video data. The current block may comprise a current CU. Although described with respect to the video decoder 300 (FIGS. 1 and 4), it should be understood that other devices may be configured to perform a method similar to that of FIG. 7.

The video decoder 300 determines that a block of video data is coded using ACT (400). For example, the video decoder 300 may determine that the block is coded using ACT by receiving a CU-level flag indicating that ACT is enabled for the block of video data.

The video decoder 300 determines that the block is coded in a joint chroma mode (402). As described above, for the joint chroma mode, a single chroma residual block may be coded for a first chroma component of the block and a second chroma component of the block. The video decoder 300 may determine that the block is coded in the joint chroma mode by, for example, receiving a CU-level syntax element indicating that the joint chroma mode is enabled for the block.

The video decoder 300 determines a QP for the block (404). For example, the video decoder 300 may determine the QP for the block at a quantization-group level. A quantization group may have the same size as the block, or may be larger or smaller than the block, such that the QP for the block may be one of multiple QPs for the block or may apply to multiple blocks.

The video decoder 300 determines an ACT QP offset for the block based on the block being coded using ACT and being coded in the joint chroma mode (406). For example, the video decoder 300 may store a set of ACT QP offsets, where the set includes a first ACT QP offset for a luma residual component of the video data, a second ACT QP offset for a first chroma residual component of the video data, a third ACT QP offset for a second chroma residual component of the video data, and a fourth ACT QP offset for jointly coded chroma residual components. To determine the ACT QP offset for the block based on the block being coded using ACT and being coded in the joint chroma mode, the video decoder 300 may be configured to set, responsive to the block being coded using ACT and being coded in the joint chroma mode, the value of the ACT QP offset to the value of the fourth ACT QP offset. To determine the ACT QP offset for the block based on the block being coded using ACT and being coded in the joint chroma mode, the video decoder 300 may be configured to set the ACT QP offset to a fixed integer value. In this context, fixed may mean, for example, that the ACT QP offset is defined in the codec executed by the video decoder 300.
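The following sketch illustrates one way the per-component offset selection just described could look, assuming the set of four offsets is stored in a small struct; the concrete offset values shown in the usage comment are illustrative assumptions, not values taken from this description.

```cpp
// A stored set of ACT QP offsets: one for luma, one for each chroma
// component, and one for jointly coded chroma residuals.
struct ActQpOffsetSet {
  int luma;
  int cb;
  int cr;
  int jointCbCr;
};

enum class ResidualComponent { kLuma, kCb, kCr };

// Select the ACT QP offset for a block coded with ACT. When the block is
// coded in joint chroma mode, the dedicated joint offset is used regardless
// of which chroma component is being processed.
int SelectActQpOffset(const ActQpOffsetSet& offsets,
                      ResidualComponent comp,
                      bool jointChromaMode) {
  if (jointChromaMode && comp != ResidualComponent::kLuma) {
    return offsets.jointCbCr;
  }
  switch (comp) {
    case ResidualComponent::kLuma: return offsets.luma;
    case ResidualComponent::kCb:   return offsets.cb;
    default:                       return offsets.cr;
  }
}

// Example usage (the offset values here are purely illustrative):
//   ActQpOffsetSet offsets{-5, -5, -3, -4};
//   int actQpOffset = SelectActQpOffset(offsets, ResidualComponent::kCb, true);
```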

The video decoder 300 determines an ACT QP for the block based on the QP and the ACT QP offset (408). The video decoder 300 determines the single chroma residual block based on the ACT QP for the block (410). That is, the video decoder 300 may inverse quantize a block of quantized transform coefficients to determine the single chroma residual block.

The video decoder 300 determines a first chroma residual block for the first chroma component from the single chroma residual block (412). The video decoder 300 determines a second chroma residual block for the second chroma component from the single chroma residual block (414). The first chroma residual block and the second chroma residual block may be in a first color space, such as the YCgCo color space.

To determine the first chroma residual block for the first chroma component from the single chroma residual block, the video decoder 300 may, for example, set sample values of the first chroma residual block equal to the values of the corresponding samples in the single chroma residual block. To determine the second chroma residual block for the second chroma component from the single chroma residual block, the video decoder 300 may set sample values of the second chroma residual block equal to the values of the corresponding samples in the first chroma residual block multiplied by negative one.
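A minimal sketch of this derivation, assuming the residuals are stored as flat arrays of equal size; the sign convention follows the description above (the second block is the negation of the first), and any per-block sign flag a real codec might signal is omitted.

```cpp
#include <cstdint>
#include <vector>

// Derive the two chroma residual blocks from the single jointly coded
// residual: the first block copies it, the second block negates it.
void DeriveJointChromaResiduals(const std::vector<int16_t>& jointResidual,
                                std::vector<int16_t>& firstChroma,
                                std::vector<int16_t>& secondChroma) {
  firstChroma.resize(jointResidual.size());
  secondChroma.resize(jointResidual.size());
  for (size_t i = 0; i < jointResidual.size(); ++i) {
    firstChroma[i] = jointResidual[i];
    secondChroma[i] = static_cast<int16_t>(-jointResidual[i]);
  }
}
```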

The video decoder 300 performs inverse ACT on the first chroma residual block to convert the first chroma residual block to a second color space (416). The video decoder 300 performs inverse ACT on the second chroma residual block to convert the second chroma residual block to the second color space (418). The video decoder 300 may add the converted first chroma residual block to a first prediction chroma block to determine a first reconstructed chroma block, add the converted second chroma residual block to a second prediction chroma block to determine a second reconstructed chroma block, and output the first reconstructed chroma block and the second reconstructed chroma block.
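Mirroring the earlier forward-transform sketch, the fragment below shows the matching lifting-based inverse that brings one (Y, Cg, Co) residual triplet back to the original color space. The exact transform used by a given codec may differ, so the coefficient structure and component ordering are assumptions for illustration.

```cpp
#include <cstdint>

// Inverse of the lifting-based YCgCo-style transform from the earlier sketch:
// converts one (y, cg, co) residual triplet back to (c0, c1, c2) in the
// original color space. Arithmetic right shift of negative values is assumed.
void InverseActSample(int32_t y, int32_t cg, int32_t co,
                      int32_t& c0, int32_t& c1, int32_t& c2) {
  const int32_t t = y - (cg >> 1);
  c0 = cg + t;          // e.g., G residual
  c1 = t - (co >> 1);   // e.g., B residual
  c2 = co + c1;         // e.g., R residual
}
```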

The video decoder 300 may also determine that a second block of video data is coded using ACT; determine that the second block is not coded in the joint chroma mode; determine a QP for the second block; determine, based on the second block being coded using ACT and not being coded in the joint chroma mode, a second ACT QP offset for a first chroma component of the second block; and determine, based on the second block being coded using ACT and not being coded in the joint chroma mode, a third ACT QP offset for a second chroma component of the second block, where at least one of the second ACT QP offset and the third ACT QP offset is different from the first ACT QP offset.

FIG. 8 is a flowchart illustrating an example method for encoding a current block. The current block may comprise a current CU. Although described with respect to the video encoder 200 (FIGS. 1 and 3), it should be understood that other devices may be configured to perform a method similar to that of FIG. 8.

The video encoder 200 determines a first chroma residual block for a first chroma component of a block of video data (420). The video encoder 200 determines a second chroma residual block for a second chroma component of the block of video data, where the first chroma residual block and the second chroma residual block are in a first color space (422). The video encoder 200 determines that the block of video data is coded using ACT (424). The video encoder 200 performs ACT on the first chroma residual block to convert the first chroma residual block to a second color space (426). The video encoder 200 performs inverse ACT on the second chroma residual block to convert the second chroma residual block to the second color space (428). For example, the second color space may be the YCgCo color space. The video encoder 200 determines that the block of video data is coded in a joint chroma mode (430). In the joint chroma mode, the video encoder 200 encodes a single chroma residual block for the first chroma component of the block and the second chroma component of the block. The video encoder 200 determines the single chroma residual block based on the converted first chroma residual block and the converted second chroma residual block (432). The video encoder 200 determines a QP for the block (434).
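The description above does not spell out how the single chroma residual is formed from the two converted chroma residuals. As one common approach, used for example in VVC-style joint chroma coding, the sketch below averages the first residual with the negated second residual; this is an assumption for illustration and not necessarily the derivation intended here.

```cpp
#include <cstdint>
#include <vector>

// One possible encoder-side derivation of the single (joint) chroma residual
// from the two color-transformed chroma residuals, matching a decoder-side
// reconstruction where the second block is the negation of the first:
// joint = (first - second) / 2.
std::vector<int16_t> DeriveSingleChromaResidual(
    const std::vector<int16_t>& firstChroma,
    const std::vector<int16_t>& secondChroma) {
  std::vector<int16_t> joint(firstChroma.size());
  for (size_t i = 0; i < firstChroma.size(); ++i) {
    joint[i] = static_cast<int16_t>((firstChroma[i] - secondChroma[i]) / 2);
  }
  return joint;
}
```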

The video encoder 200 determines an ACT QP offset for the block based on the block being encoded using ACT and being encoded in joint chroma mode (436). The video encoder 200 may store a set of ACT QP offsets, where the set of ACT QP offsets includes a first ACT QP offset for luma residual components of the video data, a second ACT QP offset for first chroma residual components of the video data, a third ACT QP offset for second chroma residual components of the video data, and a fourth ACT QP offset for jointly coded chroma residual components. To determine the ACT QP offset for the block based on the block being encoded using ACT and being encoded in joint chroma mode, the video encoder 200 may, in response to the block being encoded using ACT and being encoded in joint chroma mode, set the value of the ACT QP offset to the value of the fourth ACT QP offset. Alternatively, to determine the ACT QP offset for the block based on the block being encoded using ACT and being encoded in joint chroma mode, the video encoder 200 may set the ACT QP offset to a fixed integer value.
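
As a minimal sketch of the four-entry offset set described above and of the selection rule for a joint-chroma coded ACT block, the following C++ fragment may be considered. The numeric offset values are placeholders only, not values mandated by any specification, and the names are illustrative.

// Four-entry ACT QP offset set and the selection rule for an ACT-coded block.
struct ActQpOffsets {
  int luma      = -5;  // placeholder value
  int chromaCb  = -5;  // placeholder value
  int chromaCr  = -3;  // placeholder value
  int jointCbCr = -4;  // placeholder value
};

int selectActQpOffset(const ActQpOffsets& offsets, bool blockUsesAct,
                      bool jointChromaMode, int component /* 0=Y, 1=Cb, 2=Cr */) {
  if (!blockUsesAct) return 0;                    // no ACT adjustment
  if (jointChromaMode) return offsets.jointCbCr;  // fourth offset, step (436)
  if (component == 0) return offsets.luma;
  return component == 1 ? offsets.chromaCb : offsets.chromaCr;
}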

The video encoder 200 determines an ACT QP for the block based on the QP and the ACT QP offset (438). The video encoder 200 quantizes the single chroma residual block based on the ACT QP for the block (440). The video encoder 200 may then transform the quantized single chroma residual block to produce transform coefficients and output syntax elements identifying the transform coefficients.
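
A simple, non-normative sketch of steps (438)-(440) follows. It assumes the conventional exponential QP-to-step-size mapping of block-based codecs (the step size roughly doubles every 6 QP); rounding offsets and level clipping are deliberately omitted, and the function name is an assumption.

#include <cmath>
#include <vector>

// Derive the ACT QP from the block QP and the ACT QP offset, then scalar-quantize.
std::vector<int> quantizeWithActQp(const std::vector<int>& residual, int blockQp,
                                   int actQpOffset) {
  const int actQp = blockQp + actQpOffset;                  // step (438)
  const double stepSize = std::pow(2.0, (actQp - 4) / 6.0); // assumed QP-to-step mapping
  std::vector<int> levels(residual.size());
  for (size_t i = 0; i < residual.size(); ++i) {
    levels[i] = static_cast<int>(std::lround(residual[i] / stepSize));  // step (440)
  }
  return levels;
}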

The following clauses describe example devices and processes in accordance with the video encoder 200, the video decoder 300, and the techniques discussed above.

Clause 1: A method of decoding video data includes: receiving a flag in a parameter set, wherein a first value of the flag indicates that one or more quantization parameter (QP) offset values are included in a slice header, and a second value of the flag indicates that the one or more QP offset values are not included in the slice header; in response to determining that the flag has the first value, receiving the one or more QP offset values in the slice header; and performing an adaptive color transform on residual data based on the one or more QP offset values.
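
As a non-normative sketch of the parsing behavior in clause 1, the following C++ fragment may be considered. The reader interface and the syntax element names (readFlag, readSignedExpGolomb, pps_act_qp_offsets_in_sh_flag, and the slice-header offset names) are hypothetical and chosen for illustration only.

struct SliceActQpOffsets { int y = 0, cb = 0, cr = 0, cbcr = 0; bool present = false; };

template <typename BitReader>
SliceActQpOffsets parseSliceActQpOffsets(BitReader& pps, BitReader& sliceHeader) {
  SliceActQpOffsets offsets;
  // Flag in the parameter set: the first value (1) indicates the QP offsets follow
  // in the slice header, the second value (0) indicates they do not.
  const bool offsetsInSliceHeader = pps.readFlag("pps_act_qp_offsets_in_sh_flag");
  if (offsetsInSliceHeader) {
    offsets.present = true;
    offsets.y    = sliceHeader.readSignedExpGolomb("sh_act_y_qp_offset");
    offsets.cb   = sliceHeader.readSignedExpGolomb("sh_act_cb_qp_offset");
    offsets.cr   = sliceHeader.readSignedExpGolomb("sh_act_cr_qp_offset");
    offsets.cbcr = sliceHeader.readSignedExpGolomb("sh_act_cbcr_qp_offset");
  }
  return offsets;
}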

Clause 2: The method of clause 1, further including: receiving the flag in the parameter set in response to determining that adaptive color transform is enabled.

Clause 3: The method of clause 1 or 2, further including: receiving the one or more QP offset values in the slice header further in response to determining that a slice type of a slice having the slice header is not an I slice.

Clause 4: The method of clause 1 or 2, further including: receiving the one or more QP offset values in the slice header further in response to determining that a slice having the slice header does not use dual-tree block partitioning.

Clause 5: The method of any of clauses 1-4, wherein the parameter set is a picture parameter set.

Clause 6: The method of any of clauses 1-5, further including: determining values of quantized transform coefficients; inverse quantizing the values of the quantized transform coefficients to determine values of dequantized transform coefficients; and inverse transforming the dequantized transform coefficients to determine the residual data.
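
A short, non-normative sketch of clause 6 is given below. The same assumed QP-to-step-size mapping as in the earlier quantization sketch is used, and the inverse transform is left as a caller-supplied callback because the clause does not fix a particular transform; all names are illustrative.

#include <cmath>
#include <vector>

template <typename InverseTransformFn>
std::vector<int> dequantizeAndInverseTransform(const std::vector<int>& levels, int qp,
                                               InverseTransformFn inverseTransform) {
  const double stepSize = std::pow(2.0, (qp - 4) / 6.0);  // assumed mapping
  std::vector<int> coeffs(levels.size());
  for (size_t i = 0; i < levels.size(); ++i) {
    coeffs[i] = static_cast<int>(std::lround(levels[i] * stepSize));  // dequantize
  }
  return inverseTransform(coeffs);  // residual data
}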

Clause 7: The method of any of clauses 1-5, wherein the residual data comprises transform-skip residual data.

Clause 8: A method of decoding video data includes: receiving a flag at a quantization group of coding units (QGCU) level, the flag indicating whether adaptive color transform is enabled or disabled for the QGCU; and in response to determining that the flag indicates that adaptive color transform is enabled for the QGCU, processing sample values of the QGCU in the color-transform domain.

Clause 9: A method of decoding video data includes: determining that a residual block of the video data is encoded in joint CbCr mode; receiving a joint CbCr offset value; and performing an adaptive color transform on the residual data based on the joint CbCr offset value.

Clause 10: The method of clause 8, further including any one or combination of clauses 1-8.

Clause 11: A device for coding video data, the device including one or more means for performing the method of any of clauses 1-10.

Clause 12: The device of clause 11, wherein the one or more means comprise one or more processors implemented in circuitry.

Clause 13: The device of any of clauses 11 and 12, further including: a memory to store the video data.

Clause 14: The device of any of clauses 11-13, further including: a display configured to display decoded video data.

Clause 15: The device of any of clauses 11-14, wherein the device comprises one or more of a camera, a computer, a mobile device, a broadcast receiver device, or a set-top box.

Clause 16: The device of any of clauses 6-15, wherein the device comprises a video decoder.

Clause 17: A method of encoding video data includes: determining that one or more quantization parameter (QP) offset values are included in a slice header; in response to determining that the one or more QP offset values are included in the slice header, generating a flag having a first value for inclusion in a parameter set, wherein the first value of the flag indicates that the one or more QP offset values are included in the slice header, and a second value of the flag indicates that the one or more QP offset values are not included in the slice header; in response to determining that the one or more QP offset values are included in the slice header, generating the one or more QP offset values for inclusion in the slice header; and performing an adaptive color transform on residual data based on the one or more QP offset values to determine color-transformed residual data.

Clause 18: The method of clause 17, further including: generating the flag for inclusion in the parameter set in response to determining that adaptive color transform is enabled.

Clause 19: The method of clause 17 or 18, further including: generating the one or more QP offset values for inclusion in the slice header further in response to determining that a slice type of a slice having the slice header is not an I slice.

Clause 20: The method of clause 17 or 18, further including: generating the one or more QP offset values for inclusion in the slice header further in response to determining that a slice having the slice header does not use dual-tree block partitioning.

Clause 21: The method of any of clauses 17-20, wherein the parameter set is a picture parameter set.

Clause 22: The method of any of clauses 17-21, further including: transforming the color-transformed residual data to determine transform coefficients; quantizing the transform coefficients; and signaling the quantized transform coefficients in the video data.

Clause 23: The method of any of clauses 17-22, further including: signaling the color-transformed residual data in the video data.

Clause 24: A method of encoding video data includes: determining whether adaptive color transform is enabled or disabled for a quantization group of coding units (QGCU); in response to determining that adaptive color transform is enabled for the QGCU, generating a flag indicating whether adaptive color transform is enabled or disabled for the QGCU for inclusion in the video data; and processing sample values of the QGCU in the color-transform domain.

Clause 25: The method of clause 24, further including any one or combination of clauses 16-22.

Clause 26: A device for encoding video data, the device including one or more means for performing the method of any of clauses 17-25.

Clause 27: The device of clause 26, wherein the one or more means comprise one or more processors implemented in circuitry.

Clause 28: The device of any of clauses 26 and 27, further including: a memory to store the video data.

Clause 29: The device of any of clauses 26-28, wherein the device comprises one or more of a camera, a computer, a mobile device, a broadcast receiver device, or a set-top box.

Clause 30: The device of any of clauses 26-29, wherein the device comprises a video encoder.

Clause 31: A computer-readable storage medium having instructions stored thereon that, when executed, cause one or more processors to perform the method of any of clauses 1-10 or 17-25.

It is to be recognized that, depending on the example, certain acts or events of any of the techniques described herein may be performed in a different sequence, may be added, merged, or omitted altogether (e.g., not all described acts or events are necessary for the practice of the techniques). Moreover, in certain examples, acts or events may be performed concurrently, e.g., through multi-threaded processing, interrupt processing, or multiple processors, rather than sequentially.

In one or more examples, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over a computer-readable medium as one or more instructions or code and executed by a hardware-based processing unit. Computer-readable media may include computer-readable storage media, which corresponds to a tangible medium such as data storage media, or communication media, including any medium that facilitates transfer of a computer program from one place to another, e.g., according to a communication protocol. In this manner, computer-readable media generally may correspond to (1) tangible computer-readable storage media, which is non-transitory, or (2) a communication medium such as a signal or carrier wave. Data storage media may be any available media that can be accessed by one or more computers or one or more processors to retrieve instructions, code, and/or data structures for implementation of the techniques described in this disclosure. A computer program product may include a computer-readable medium.

By way of example, and not limitation, such computer-readable storage media can include RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, flash memory, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if instructions are transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. It should be understood, however, that computer-readable storage media and data storage media do not include connections, carrier waves, signals, or other transitory media, but are instead directed to non-transitory, tangible storage media. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.

Instructions may be executed by one or more processors, such as one or more digital signal processors (DSPs), general purpose microprocessors, application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), or other equivalent integrated or discrete logic circuitry. Accordingly, the terms "processor" and "processing circuitry," as used herein, may refer to any of the foregoing structures or any other structure suitable for implementation of the techniques described herein. In addition, in some aspects, the functionality described herein may be provided within dedicated hardware and/or software modules configured for encoding and decoding, or incorporated in a combined codec. Also, the techniques could be fully implemented in one or more circuits or logic elements.

The techniques of this disclosure may be implemented in a wide variety of devices or apparatuses, including a wireless handset, an integrated circuit (IC), or a set of ICs (e.g., a chip set). Various components, modules, or units are described in this disclosure to emphasize functional aspects of devices configured to perform the disclosed techniques, but do not necessarily require realization by different hardware units. Rather, as described above, various units may be combined in a codec hardware unit or provided by a collection of interoperative hardware units, including one or more processors as described above, in conjunction with suitable software and/or firmware.

Various examples have been described. These and other examples are within the scope of the following claims.

100: Video encoding and decoding system 102: Source device 104: Video source 106: Memory 108: Output interface 110: Computer-readable medium 112: Storage device 114: File server 116: Destination device 118: Display device 120: Memory 122: Input interface 130: Quadtree binary tree (QTBT) structure 132: Coding tree unit (CTU) 200: Video encoder 202: Mode selection unit 204: Residual generation unit 205: ACT unit 206: Transform processing unit 208: Quantization unit 210: Inverse quantization unit 212: Inverse transform processing unit 213: Inverse ACT unit 214: Reconstruction unit 216: Filter unit 218: Decoded picture buffer (DPB) 220: Entropy encoding unit 222: Motion estimation unit 224: Motion compensation unit 226: Intra-prediction unit 230: Video data memory 300: Video decoder 302: Entropy decoding unit 304: Prediction processing unit 306: Inverse quantization unit 308: Inverse transform processing unit 309: Inverse ACT unit 310: Reconstruction unit 312: Filter unit 314: Decoded picture buffer (DPB) 316: Motion compensation unit 318: Intra-prediction unit 320: Coded picture buffer (CPB) memory 350: Process 352: Process 354: Process 356: Process 358: Process 360: Process 370: Process 372: Process 374: Process 376: Process 378: Process 380: Process 400: Process 402: Process 404: Process 406: Process 408: Process 410: Process 412: Process 414: Process 416: Process 418: Process 420: Process 422: Process 424: Process 426: Process 428: Process 430: Process 432: Process 434: Process 436: Process 438: Process 440: Process

FIG. 1 is a block diagram illustrating an example video encoding and decoding system that may perform the techniques of this disclosure.

FIGS. 2A and 2B are conceptual diagrams illustrating an example quadtree binary tree (QTBT) structure and a corresponding coding tree unit (CTU).

FIG. 3 is a block diagram illustrating an example video encoder that may perform the techniques of this disclosure.

FIG. 4 is a block diagram illustrating an example video decoder that may perform the techniques of this disclosure.

FIG. 5 is a flowchart illustrating a process for encoding video data.

FIG. 6 is a flowchart illustrating a process for decoding video data.

FIG. 7 is a flowchart illustrating a process for decoding video data.

FIG. 8 is a flowchart illustrating a process for encoding video data.

Domestic deposit information (listed by depository institution, date, and number): None. Foreign deposit information (listed by depository country, institution, date, and number): None.

400: Process
402: Process
404: Process
406: Process
408: Process
410: Process
412: Process
414: Process
416: Process
418: Process

Claims (29)

1. A method of decoding video data, the method comprising: determining that a block of the video data is encoded using an adaptive color transform (ACT); determining that the block is encoded in a joint chroma mode, wherein for the joint chroma mode a single chroma residual block is encoded for a first chroma component of the block and a second chroma component of the block; determining a first chroma residual block for the first chroma component, wherein values of the first chroma residual block are equal to values of the single chroma residual block; determining a sign value for a second chroma residual block for the second chroma component, wherein the sign value is equal to one of 1 or -1; determining the second chroma residual block, wherein values of the second chroma residual block are equal to the sign value multiplied by the values of the single chroma residual block; determining, based on a slice header QP offset and a picture-level QP offset, a quantization parameter (QP) for a second color space for the block; determining an ACT QP offset for the block, wherein determining the ACT QP offset for the block comprises determining the ACT QP offset to be one ACT QP offset in a set of ACT QP offsets, wherein the set of ACT QP offsets comprises an ACT QP offset for luma residual components of the video data, an ACT QP offset for first chroma residual components of the video data, an ACT QP offset for second chroma residual components of the video data, and an ACT QP offset for jointly coded chroma residual components together with a sign value of -1, wherein the ACT QP offset for the jointly coded residual components is different from at least one of the ACT QP offset for the first chroma residual components of the video data or the ACT QP offset for the second chroma residual components of the video data, and wherein determining the ACT QP offset comprises, in response to the block being encoded using the ACT and being encoded in the joint chroma mode, determining the ACT QP offset to be the ACT QP offset for the jointly coded chroma residual components, wherein the ACT QP offset is determined separately from the slice header QP offset and the picture-level QP offset; determining, based on the QP and the determined ACT QP offset, an ACT QP for a first color space for the block, wherein the first color space is different from the second color space; determining the single chroma residual block based on the ACT QP for the block; determining, from the single chroma residual block, a first chroma residual block for the first chroma component, wherein the first chroma residual block is in the first color space; determining, from the single chroma residual block, a second chroma residual block for the second chroma component, wherein the second chroma residual block is in the first color space; performing an inverse ACT on the first chroma residual block to convert the first chroma residual block to the second color space; and performing the inverse ACT on the second chroma residual block to convert the second chroma residual block to the second color space.

2. The method of claim 1, wherein determining the ACT QP offset for the block based on the block being encoded using the ACT and being encoded in the joint chroma mode comprises: determining the ACT QP offset to be a fixed integer value.

3. The method of claim 1, wherein the first color space comprises a YCgCo color space.

4. The method of claim 1, further comprising: adding the converted first chroma residual block to a first predicted chroma block to determine a first reconstructed chroma block; adding the converted second chroma residual block to a second predicted chroma block to determine a second reconstructed chroma block; and outputting the first reconstructed chroma block and the second reconstructed chroma block.

5. The method of claim 1, further comprising: determining that a second block of the video data is encoded using the ACT; determining that the second block is not encoded in the joint chroma mode; determining a QP for the second block; determining, based on the second block being encoded using the ACT and not being encoded in the joint chroma mode, a second ACT QP offset for a first chroma component of the second block, wherein the second ACT QP offset is determined to be the ACT QP offset for the first chroma residual components of the video data; and determining, based on the second block being encoded using the ACT and not being encoded in the joint chroma mode, a third ACT QP offset for a second chroma component of the second block, wherein the third ACT QP offset is determined to be the ACT QP offset for the second chroma residual components of the video data, and wherein at least one of the second ACT QP offset and the third ACT QP offset is different from the first ACT QP offset.

6. The method of claim 1, wherein determining the first chroma residual block for the first chroma component from the single chroma residual block comprises: setting sample values of the first chroma residual block equal to values of corresponding samples in the single chroma residual block.

7. The method of claim 6, wherein determining the second chroma residual block for the second chroma component from the single chroma residual block comprises: setting sample values of the second chroma residual block equal to values of corresponding samples in the first chroma residual block.

8. The method of claim 6, wherein determining the second chroma residual block for the second chroma component from the single chroma residual block comprises: setting sample values of the second chroma residual block equal to values of corresponding samples in the first chroma residual block multiplied by negative one.

9. The method of claim 1, wherein determining the single chroma residual block based on the ACT QP for the block comprises: receiving a set of transform coefficients; performing an inverse quantization operation on the set of transform coefficients to determine a set of dequantized transform coefficients, wherein an amount of dequantization for the inverse quantization operation is controlled by the ACT QP; and inverse transforming the set of dequantized transform coefficients to determine the single chroma residual block.

10. A method of encoding video data, the method comprising: determining a first chroma residual block for a first chroma component of a block of video data; determining a second chroma residual block for a second chroma component of the block of video data, wherein the first chroma residual block and the second chroma residual block are in a first color space; determining that the block of the video data is encoded using an adaptive color transform (ACT); performing the ACT on the first chroma residual block to convert the first chroma residual block to a second color space; performing the ACT on the second chroma residual block to convert the second chroma residual block to the second color space; determining that the block of the video data is encoded in a joint chroma mode, wherein for the joint chroma mode a single chroma residual block is encoded for the first chroma component of the block and the second chroma component of the block; determining the single chroma residual block based on the converted first chroma residual block and the converted second chroma residual block; determining a sign value for a second chroma residual block for the second chroma component, wherein the sign value is equal to one of 1 or -1, and values of the second chroma residual block are equal to the sign value multiplied by the values of the single chroma residual block; determining a quantization parameter (QP) for the first color space for the block; signaling the QP using a slice header QP offset and a picture-level QP offset; determining an ACT QP offset for the second color space for the block, wherein determining the ACT QP offset for the second color space for the block comprises determining the ACT QP offset to be one ACT QP offset in a set of ACT QP offsets, wherein the set of ACT QP offsets comprises an ACT QP offset for luma residual components of the video data, an ACT QP offset for first chroma residual components of the video data, an ACT QP offset for second chroma residual components of the video data, and an ACT QP offset for jointly coded chroma residual components together with a sign value of -1, wherein the ACT QP offset for the jointly coded residual components is different from at least one of the ACT QP offset for the first chroma residual components of the video data or the ACT QP offset for the second chroma residual components of the video data, and wherein determining the ACT QP offset comprises, in response to the block being encoded using the ACT and being encoded in the joint chroma mode, determining the ACT QP offset to be the ACT QP offset for the jointly coded chroma residual components, wherein the ACT QP offset is determined separately from the slice header QP offset and the picture-level QP offset; determining an ACT QP for the block based on the QP and the determined ACT QP offset; and quantizing the single chroma residual block based on the ACT QP for the block.

11. The method of claim 10, wherein determining the ACT QP offset for the block based on the block being encoded using the ACT and being encoded in the joint chroma mode comprises: determining the ACT QP offset to be a fixed integer value.

12. The method of claim 10, wherein the second color space comprises a YCgCo color space.

13. A device for decoding video data, the device comprising: a memory configured to store video data; and one or more processors implemented in circuitry and configured to: determine that a block of the video data is encoded using an adaptive color transform (ACT); determine that the block is encoded in a joint chroma mode, wherein for the joint chroma mode a single chroma residual block is encoded for a first chroma component of the block and a second chroma component of the block; determine a first chroma residual block for the first chroma component, wherein values of the first chroma residual block are equal to values of the single chroma residual block; determine a sign value for a second chroma residual block for the second chroma component, wherein the sign value is equal to one of 1 or -1; determine the second chroma residual block, wherein values of the second chroma residual block are equal to the sign value multiplied by the values of the single chroma residual block; determine, based on a slice header QP offset and a picture-level QP offset, a quantization parameter (QP) for a second color space for the block; determine an ACT QP offset for the block to be one ACT QP offset in a set of ACT QP offsets, wherein the set of ACT QP offsets comprises an ACT QP offset for luma residual components of the video data, an ACT QP offset for first chroma residual components of the video data, an ACT QP offset for second chroma residual components of the video data, and an ACT QP offset for jointly coded chroma residual components together with a sign value of -1, wherein the ACT QP offset for the jointly coded residual components is different from at least one of the ACT QP offset for the first chroma residual components of the video data or the ACT QP offset for the second chroma residual components of the video data, and wherein, to determine the ACT QP offset, the one or more processors are further configured to, in response to the block being encoded using the ACT and being encoded in the joint chroma mode, determine the ACT QP offset to be the ACT QP offset for the jointly coded chroma residual components, wherein the ACT QP offset is determined separately from the slice header QP offset and the picture-level QP offset; determine, based on the QP and the determined ACT QP offset, an ACT QP for a first color space for the block, wherein the first color space is different from the second color space; determine the single chroma residual block based on the ACT QP for the block; determine, from the single chroma residual block, a first chroma residual block for the first chroma component, wherein the first chroma residual block is in the first color space; determine, from the single chroma residual block, a second chroma residual block for the second chroma component, wherein the second chroma residual block is in the first color space; perform an inverse ACT on the first chroma residual block to convert the first chroma residual block to the second color space; and perform the inverse ACT on the second chroma residual block to convert the second chroma residual block to the second color space.

14. The device of claim 13, wherein, to determine the ACT QP offset for the block based on the block being encoded using the ACT and being encoded in the joint chroma mode, the one or more processors are further configured to: determine the ACT QP offset to be a fixed integer value.

15. The device of claim 13, wherein the first color space comprises a YCgCo color space.

16. The device of claim 13, wherein the one or more processors are further configured to: add the converted first chroma residual block to a first predicted chroma block to determine a first reconstructed chroma block; add the converted second chroma residual block to a second predicted chroma block to determine a second reconstructed chroma block; and output the first reconstructed chroma block and the second reconstructed chroma block.

17. The device of claim 13, wherein the one or more processors are further configured to: determine that a second block of the video data is encoded using the ACT; determine that the second block is not encoded in the joint chroma mode; determine a QP for the second block; determine, based on the second block being encoded using the ACT and not being encoded in the joint chroma mode, a second ACT QP offset for a first chroma component of the second block, wherein the second ACT QP offset is determined to be the ACT QP offset for the first chroma residual components of the video data; and determine, based on the second block being encoded using the ACT and not being encoded in the joint chroma mode, a third ACT QP offset for a second chroma component of the second block, wherein the third ACT QP offset is determined to be the ACT QP offset for the second chroma residual components of the video data, and wherein at least one of the second ACT QP offset and the third ACT QP offset is different from the first ACT QP offset.

18. The device of claim 13, wherein, to determine the first chroma residual block for the first chroma component from the single chroma residual block, the one or more processors are further configured to: set sample values of the first chroma residual block equal to values of corresponding samples in the single chroma residual block.

19. The device of claim 18, wherein, to determine the second chroma residual block for the second chroma component from the single chroma residual block, the one or more processors are further configured to: set sample values of the second chroma residual block equal to values of corresponding samples in the first chroma residual block.

20. The device of claim 18, wherein, to determine the second chroma residual block for the second chroma component from the single chroma residual block, the one or more processors are further configured to: set sample values of the second chroma residual block equal to values of corresponding samples in the first chroma residual block multiplied by negative one.

21. The device of claim 13, wherein, to determine the single chroma residual block based on the ACT QP for the block, the one or more processors are further configured to: receive a set of transform coefficients; perform an inverse quantization operation on the set of transform coefficients to determine a set of dequantized transform coefficients, wherein an amount of dequantization for the inverse quantization operation is controlled by the ACT QP; and inverse transform the set of dequantized transform coefficients to determine the single chroma residual block.

22. The device of claim 13, wherein the device comprises a wireless communication device, the wireless communication device further comprising a receiver configured to receive encoded video data.

23. The device of claim 22, wherein the wireless communication device comprises a telephone handset, and wherein the receiver is configured to demodulate, according to a wireless communication standard, a signal comprising the encoded video data.

24. The device of claim 13, further comprising: a display configured to display decoded video data.

25. The device of claim 13, wherein the device comprises one or more of a camera, a computer, a mobile device, a broadcast receiver device, or a set-top box.

26. A device for encoding video data, the device comprising: a memory configured to store video data; and one or more processors implemented in circuitry and configured to: determine a first chroma residual block for a first chroma component of a block of video data; determine a second chroma residual block for a second chroma component of the block of video data, wherein the first chroma residual block and the second chroma residual block are in a first color space; determine that the block of the video data is encoded using an adaptive color transform (ACT); perform the ACT on the first chroma residual block to convert the first chroma residual block to a second color space; perform the ACT on the second chroma residual block to convert the second chroma residual block to the second color space; determine that the block of the video data is encoded in a joint chroma mode, wherein for the joint chroma mode a single chroma residual block is encoded for the first chroma component of the block and the second chroma component of the block; determine the single chroma residual block based on the converted first chroma residual block and the converted second chroma residual block; determine a sign value for a second chroma residual block for the second chroma component, wherein the sign value is equal to one of 1 or -1, and values of the second chroma residual block are equal to the sign value multiplied by the values of the single chroma residual block; determine a quantization parameter (QP) for the first color space for the block; signal the QP using a slice header QP offset and a picture-level QP offset; determine an ACT QP offset for the second color space for the block to be one ACT QP offset in a set of ACT QP offsets, wherein the set of ACT QP offsets comprises an ACT QP offset for luma residual components of the video data, an ACT QP offset for first chroma residual components of the video data, an ACT QP offset for second chroma residual components of the video data, and an ACT QP offset for jointly coded chroma residual components together with a sign value of -1, wherein the ACT QP offset for the jointly coded residual components is different from at least one of the ACT QP offset for the first chroma residual components of the video data or the ACT QP offset for the second chroma residual components of the video data, and wherein, to determine the ACT QP offset, the one or more processors are further configured to, in response to the block being encoded using the ACT and being encoded in the joint chroma mode, determine the ACT QP offset to be the ACT QP offset for the jointly coded chroma residual components, wherein the ACT QP offset is determined separately from the slice header QP offset and the picture-level QP offset; determine an ACT QP for the block based on the QP and the ACT QP offset; and quantize the single chroma residual block based on the ACT QP for the block.

27. The device of claim 26, wherein, to determine the ACT QP offset for the block based on the block being encoded using the ACT and being encoded in the joint chroma mode, the one or more processors are further configured to: determine the ACT QP offset to be a fixed integer value.

28. The device of claim 26, wherein the second color space comprises a YCgCo color space.

29. The device of claim 26, wherein the device comprises: a camera configured to capture the video data.
TW109141561A 2019-11-26 2020-11-26 Flexible signaling of qp offset for adaptive color transform in video coding TWI878392B (en)

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
US201962940728P 2019-11-26 2019-11-26
US62/940,728 2019-11-26
US201962954318P 2019-12-27 2019-12-27
US62/954,318 2019-12-27
US17/103,415 2020-11-24
US17/103,415 US20210160481A1 (en) 2019-11-26 2020-11-24 Flexible signaling of qp offset for adaptive color transform in video coding

Publications (2)

Publication Number Publication Date
TW202127874A TW202127874A (en) 2021-07-16
TWI878392B true TWI878392B (en) 2025-04-01

Family

ID=75974501

Family Applications (1)

Application Number Title Priority Date Filing Date
TW109141561A TWI878392B (en) 2019-11-26 2020-11-26 Flexible signaling of qp offset for adaptive color transform in video coding

Country Status (5)

Country Link
US (1) US20210160481A1 (en)
EP (1) EP4066490A1 (en)
CN (1) CN114930821A (en)
TW (1) TWI878392B (en)
WO (1) WO2021108547A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
SG11202109517RA (en) 2019-03-12 2021-09-29 Tencent America LLC Method and apparatus for color transform in vvc
WO2023171940A1 (en) * 2022-03-08 2023-09-14 현대자동차주식회사 Method and apparatus for video coding, using adaptive chroma conversion

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP4366308A3 (en) * 2013-06-28 2024-07-10 Velos Media International Limited Methods and devices for emulating low-fidelity coding in a high-fidelity coder
US9883184B2 (en) * 2014-10-07 2018-01-30 Qualcomm Incorporated QP derivation and offset for adaptive color transform in video coding

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160261864A1 (en) * 2014-10-06 2016-09-08 Telefonaktiebolaget L M Ericsson (Publ) Coding and deriving quantization parameters

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Online document: Benjamin Bross et al., "Versatile Video Coding (Draft 7)," Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 16th Meeting: Geneva, CH, 1-11 Oct. 2019, https://jvet-experts.org/doc_end_user/documents/16_Geneva/wg11/JVET-P2001-v14.zip *

Also Published As

Publication number Publication date
CN114930821A (en) 2022-08-19
TW202127874A (en) 2021-07-16
WO2021108547A1 (en) 2021-06-03
US20210160481A1 (en) 2021-05-27
EP4066490A1 (en) 2022-10-05
