
WO2025078149A1 - Adaptive bif strength based on dbf strength - Google Patents


Info

Publication number
WO2025078149A1
Authority
WO
WIPO (PCT)
Prior art keywords
block
strength
bif
video
filter
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/EP2024/076806
Other languages
French (fr)
Inventor
Ismail MARZUKI
Charles BONNINEAU
Frederic Lefebvre
Saurabh PURI
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
InterDigital CE Patent Holdings SAS
Original Assignee
InterDigital CE Patent Holdings SAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by InterDigital CE Patent Holdings SAS filed Critical InterDigital CE Patent Holdings SAS
Publication of WO2025078149A1

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 — Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/117 — Adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding; filters, e.g. for pre-processing or post-processing
    • H04N 19/139 — Adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding; analysis of motion vectors, e.g. their magnitude, direction, variance or reliability
    • H04N 19/159 — Adaptive coding characterised by an assigned coding mode; prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
    • H04N 19/176 — Adaptive coding characterised by the coding unit, the unit being an image region, the region being a block, e.g. a macroblock
    • H04N 19/70 — Characterised by syntax aspects related to video coding, e.g. related to compression standards
    • H04N 19/82 — Details of filtering operations specially adapted for video compression, involving filtering within a prediction loop
    • H04N 19/86 — Pre-processing or post-processing specially adapted for video compression, involving reduction of coding artifacts, e.g. of blockiness

Definitions

  • The present embodiments generally relate to video compression.
  • The present embodiments relate to a method and an apparatus for encoding or decoding an image or a video. More particularly, the present embodiments relate to improving coding modes of a video compression system that uses a template-based cost.
  • BACKGROUND: To achieve high compression efficiency, image and video coding schemes usually employ prediction and transform to leverage spatial and temporal redundancy in the video content.
  • Intra or inter prediction is used to exploit the intra or inter picture correlation; the differences between the original block and the predicted block, often denoted as prediction errors or prediction residuals, are then transformed, quantized, and entropy coded.
  • In inter prediction, the motion vectors used in motion compensation are often predicted from a motion vector predictor.
  • To reconstruct the video, the compressed data are decoded by inverse processes corresponding to the entropy coding, quantization, transform, and prediction.
  • the method comprises obtaining a reconstructed block of the video, determining a filtering strength of a bilateral filter based on at least one parameter used for encoding the block, and responsive to the determining of the filtering strength, applying the bilateral filter to the block.
  • an apparatus for encoding or decoding a block of a video comprises one or more processors operable to obtain a reconstructed block of the video, determine a filtering strength of a bilateral filter based on at least one parameter used for encoding the block, and responsive to the determining of the filtering strength, apply the bilateral filter to the block.
  • a method for encoding a video is provided.
  • One or more embodiments also provide a computer program comprising instructions which when executed by one or more processors cause the one or more processors to perform any one of the methods for encoding or decoding a video according to any of the embodiments described herein.
  • One or more of the present embodiments also provide a non-transitory computer readable medium and/or a computer readable storage medium having stored thereon instructions for encoding or decoding a video according to the methods described herein.
  • One or more embodiments also provide a computer readable storage medium having stored thereon a bitstream generated according to the methods described herein.
  • One or more embodiments also provide a method and apparatus for transmitting or receiving the bitstream generated according to the methods described above.
  • FIG. 1 illustrates a block diagram of a system within which aspects of the present embodiments may be implemented.
  • FIG.2 illustrates a block diagram of an embodiment of a video encoder within which aspects of the present embodiments may be implemented.
  • FIG.3 illustrates a block diagram of an embodiment of a video decoder within which aspects of the present embodiments may be implemented.
  • FIG.4 illustrates an example of an 8x8 TU block and the filter aperture for the sample located at (1,1).
  • FIG.5 illustrates an example of a coefficient look-up-table used to obtain the weights of the filter.
  • FIG.6 illustrates neighboring samples used in bilateral filter.
  • FIG.7 illustrates an example of windows covering two samples used in weight determination for BIF.
  • FIG.8 illustrates an example of samples used in a weighted sum for BIF.
  • FIG.9 illustrates an example of applying BIF and SAO using samples from a deblocking stage as input. Both create an offset, and these are added to the input sample and clipped.
  • FIG. 10 illustrates an example of the naming convention for samples surrounding the center sample I_c.
  • FIG.11 illustrates an example of a filtering stage of BIF from chroma components.
  • FIG.12 illustrates an example of in-loop filtering in ECM 9.0.
  • FIG.13 illustrates an example of horizontal and vertical block boundaries on an 8x8 grid.
  • FIG.14 illustrates an example of an HEVC deblocking decision workflow.
  • FIG. 15 illustrates an example of sample positions of p_{i,k} and q_{i,k} in the case of horizontal and vertical block boundaries.
  • FIG. 16 illustrates an example of vertical and horizontal block boundaries on a 4x4 grid, a 32x32 CU with PUs on an 8x8 grid, and a vertical boundary that may require long-tap deblocking.
  • FIG. 17 illustrates an example of a four-sample long vertical boundary segment formed by blocks P and Q; in VVC, deblocking decisions are based on lines #0 and #3.
  • FIG. 18 illustrates an example of stronger deblocking for luma when samples at either side of a boundary belong to a large block (width ≥ 32 and height ≥ 32).
  • FIG.19 illustrates an example of a flowchart for encoding or decoding at least one block of a video according to an embodiment.
  • FIG. 20 illustrates an example of a flowchart for encoding at least one block of a video according to another embodiment.
  • FIG. 21 illustrates an example of a flowchart for decoding at least one block of a video according to another embodiment.
  • FIG. 22 illustrates an example of adaptive BIF strength rules for different prediction modes according to an embodiment.
  • FIG. 23 illustrates an example of adaptive BIF strength rules for intra mode according to an embodiment.
  • FIG. 24 illustrates an example of adaptive BIF strength rules for inter mode according to an embodiment.
  • FIG. 25 illustrates a block diagram of a system within which aspects of the present embodiments may be implemented, according to another embodiment.
  • FIG. 26 shows two remote devices communicating over a communication network in accordance with an example of the present principles.
  • FIG.27 shows the syntax of a signal in accordance with an example of the present principles.
  • At least one of the aspects generally relates to video encoding and decoding, and at least one other aspect generally relates to transmitting a bitstream generated or encoded.
  • These and other aspects can be implemented as a method, an apparatus, a computer readable storage medium having stored thereon instructions for encoding or decoding video data according to any of the methods described, and/or a computer readable storage medium having stored thereon a bitstream generated according to any of the methods described.
  • the terms “reconstructed” and “decoded” may be used interchangeably, the terms “pixel” and “sample” may be used interchangeably, the terms “image,” “picture” and “frame” may be used interchangeably.
  • each of the methods comprises one or more steps or actions for achieving the described method. Unless a specific order of steps or actions is required for proper operation of the method, the order and/or use of specific steps and/or actions may be modified or combined. Additionally, terms such as “first”, “second”, etc. may be used in various embodiments to modify an element, component, step, operation, etc., such as, for example, a “first decoding” and a “second decoding”. Use of such terms does not imply an ordering to the modified operations unless specifically required. So, in this example, the first decoding need not be performed before the second decoding, and may occur, for example, before, during, or in an overlapping time period with the second decoding.
  • FIG. 1 illustrates a block diagram of an example of a system in which various aspects and embodiments can be implemented.
  • System 100 may be embodied as a device including the various components described below and is configured to perform one or more of the aspects described in this application. Examples of such devices, include, but are not limited to, various electronic devices such as personal computers, laptop computers, smartphones, tablet computers, digital multimedia set top boxes, digital television receivers, personal video recording systems, connected home appliances, and servers.
  • Elements of system 100 may be embodied in a single integrated circuit, multiple ICs, and/or discrete components.
  • the processing and encoder/decoder elements of system 100 are distributed across multiple ICs and/or discrete components.
  • the system 100 is communicatively coupled to other systems, or to other electronic devices, via, for example, a communications bus or through dedicated input and/or output ports.
  • the system 100 is configured to implement one or more of the aspects described in this application.
  • the system 100 includes at least one processor 110 configured to execute instructions loaded therein for implementing, for example, the various aspects described in this application.
  • Processor 110 may include embedded memory, input output interface, and various other circuitries as known in the art.
  • the system 100 includes at least one memory 120 (e.g., a volatile memory device, and/or a non-volatile memory device).
  • System 100 includes a storage device 140, which may include non-volatile memory and/or volatile memory, including, but not limited to, EEPROM, ROM, PROM, RAM, DRAM, SRAM, flash, magnetic disk drive, and/or optical disk drive.
  • the storage device 140 may include an internal storage device, an attached storage device, and/or a network accessible storage device, as non-limiting examples.
  • System 100 includes an encoder/decoder module 130 configured, for example, to process data to provide an encoded video or decoded video, and the encoder/decoder module 130 may include its own processor and memory.
  • the encoder/decoder module 130 represents module(s) that may be included in a device to perform the encoding and/or decoding functions. As is known, a device may include one or both of the encoding and decoding modules. Additionally, encoder/decoder module 130 may be implemented as a separate element of system 100 or may be incorporated within processor 110 as a combination of hardware and software as known to those skilled in the art. Program code to be loaded onto processor 110 or encoder/decoder 130 to perform the various aspects described in this application may be stored in storage device 140 and subsequently loaded onto memory 120 for execution by processor 110.
  • processor 110 may store one or more of various items during the performance of the processes described in this application.
  • Such stored items may include, but are not limited to, the input video, the decoded video or portions of the decoded video, the bitstream, matrices, variables, and intermediate or final results from the processing of equations, formulas, operations, and operational logic.
  • memory inside of the processor 110 and/or the encoder/decoder module 130 is used to store instructions and to provide working memory for processing that is needed during encoding or decoding.
  • a memory external to the processing device (for example, the processing device may be either the processor 110 or the encoder/decoder module 130) is used for one or more of these functions.
  • the external memory may be the memory 120 and/or the storage device 140, for example, a dynamic volatile memory and/or a non-volatile flash memory.
  • an external non-volatile flash memory is used to store the operating system of a television.
  • a fast external dynamic volatile memory such as a RAM is used as working memory for video coding and decoding operations, such as for MPEG-2, HEVC (HEVC refers to High Efficiency Video Coding, also known as H.265 and MPEG-H Part 2), or VVC (Versatile Video Coding also known as H.266, standard developed by JVET, the Joint Video Experts Team).
  • the input to the elements of system 100 may be provided through various input devices as indicated in block 105.
  • Such input devices include, but are not limited to, (i) a radio frequency (RF) portion that receives an RF signal transmitted, for example, over the air by a broadcaster, (ii) a Component (COMP) input terminal (or a set of COMP input terminals), (iii) a Universal Serial Bus (USB) input terminal, and/or (iv) a High Definition Multimedia Interface (HDMI) input terminal.
  • the input devices of block 105 have associated respective input processing elements as known in the art.
  • the RF portion may be associated with elements suitable for (i) selecting a desired frequency (also referred to as selecting a signal, or band-limiting a signal to a band of frequencies), (ii) down converting the selected signal, (iii) band-limiting again to a narrower band of frequencies to select (for example) a signal frequency band which can be referred to as a channel in certain embodiments, (iv) demodulating the down converted and band-limited signal, (v) performing error correction, and (vi) demultiplexing to select the desired stream of data packets.
  • the demodulated, error corrected, and demultiplexed stream is provided to various processing elements, including, for example, processor 110, and encoder/decoder 130 operating in combination with the memory and storage elements to process the data stream as necessary for presentation on an output device.
  • Various elements of system 100 may be provided within an integrated housing, Within the integrated housing, the various elements may be interconnected and transmit data therebetween using suitable connection arrangement 115, for example, an internal bus as known in the art, including the I2C bus, wiring, and printed circuit boards.
  • the system 100 includes communication interface 150 that enables communication with other devices via communication channel 190.
  • the communication interface 150 may include, but is not limited to, a transceiver configured to transmit and to receive data over communication channel 190.
  • Other embodiments provide streamed data to the system 100 using a set-top box that delivers the data over the HDMI connection of the input block 105. Still other embodiments provide streamed data to the system 100 using the RF connection of the input block 105. As indicated above, various embodiments provide data in a non-streaming manner. Additionally, various embodiments use wireless networks other than Wi-Fi, for example a cellular network or a Bluetooth network.
  • the system 100 may provide an output signal to various output devices, including a display 165, speakers 175, and other peripheral devices 185.
  • The display 165 of various embodiments includes one or more of, for example, a touchscreen display, an organic light-emitting diode (OLED) display, a curved display, and/or a foldable display.
  • the display 165 can be for a television, a tablet, a laptop, a cell phone (mobile phone), or other devices.
  • the display 165 can also be integrated with other components (for example, as in a smart phone), or separate (for example, an external monitor for a laptop).
  • The other peripheral devices 185 include, in various examples of embodiments, one or more of a stand-alone digital video disc (or digital versatile disc) (DVD, for both terms) player, a stereo system, and/or a lighting system.
  • Various embodiments use one or more peripheral devices 185 that provide a function based on the output of the system 100. For example, a disk player performs the function of playing the output of the system 100.
  • control signals are communicated between the system 100 and the display 165, speakers 175, or other peripheral devices 185 using signaling such as AV.Link, CEC, or other communications protocols that enable device-to-device control with or without user intervention.
  • the output devices may be communicatively coupled to system 100 via dedicated connections through respective interfaces 160, 170, and 180. Alternatively, the output devices may be connected to system 100 using the communications channel 190 via the communications interface 150.
  • the display 165 and speakers 175 may be integrated in a single unit with the other components of system 100 in an electronic device, for example, a television.
  • The display interface 160 includes a display driver, for example, a timing controller (T-Con) chip.
  • the memory 120 can be of any type appropriate to the technical environment and can be implemented using any appropriate data storage technology, such as optical memory devices, magnetic memory devices, semiconductor-based memory devices, fixed memory, and removable memory, as non-limiting examples.
  • the processor 110 can be of any type appropriate to the technical environment, and can encompass one or more of microprocessors, general purpose computers, special purpose computers, and processors based on a multi-core architecture, as non-limiting examples.
  • FIG. 2 illustrates an example of a block-based hybrid video encoder 200. Variations of this encoder 200 are contemplated, but the encoder 200 is described below for purposes of clarity without describing all expected variations.
  • FIG. 2 also illustrates an encoder in which improvements are made to the HEVC standard or the VVC standard (Versatile Video Coding, Standard ITU-T H.266, ISO/IEC 23090-3, 2020), or an encoder employing technologies similar to HEVC or VVC, such as the ECM encoder under development by JVET (Joint Video Exploration Team).
  • A CTU (Coding Tree Unit) refers to a group of blocks, a group of units, or a group of coding units (CUs).
  • A CTU may itself be considered as a block or a unit.
  • An example of partitioning using CTUs and CUs is illustrated in FIG. 15.
  • In VVC, as in HEVC, a picture is partitioned into multiple non-overlapping CTUs.
  • The CTU size in VVC can be set up to 128×128 or 256×256 in units of luma samples, while in HEVC it can be set up to 64×64.
  • a recursive Quad-tree (QT) split can be applied to each CTU, resulting in one or multiple CUs, all having square shapes.
  • rectangular CUs are supported together with square CUs.
  • a binary tree (BT) split and a ternary tree (TT) split are also adopted in VVC.
  • FIG. 16 illustrates some examples of partitioning of a CTU. Further splitting of the obtained CUs is also possible.
  • Each unit is encoded using, for example, either an intra or inter mode.
  • the encoder can bypass both transform and quantization, i.e., the residual is coded directly without the application of the transform or quantization processes.
  • the encoder decodes (reconstructs) an encoded block to provide a reference for further predictions.
  • the quantized transform coefficients are de-quantized (240) and inverse transformed (250) to decode prediction residuals.
  • In-loop filters (265) are applied to the reconstructed picture to perform, for example, deblocking/SAO (Sample Adaptive Offset) filtering to reduce encoding artifacts.
  • the filtered image is stored at a reference picture buffer (280).
  • FIG.3 illustrates a block diagram of a video decoder 300.
  • a bitstream is decoded by the decoder elements as described below.
  • Video decoder 300 generally performs a decoding pass reciprocal to the encoding pass as described in FIG.2.
  • the encoder 200 also generally performs video decoding as part of encoding video data.
  • the input of the decoder includes a video bitstream, which can be generated by video encoder 200.
  • the bitstream is first entropy decoded (330) to obtain transform coefficients, motion vectors, and other coded information.
  • the picture partition information indicates how the picture is partitioned.
  • the decoder may therefore divide (335) the picture according to the decoded picture partitioning information.
  • the transform coefficients are de- quantized (340) and inverse transformed (350) to decode the prediction residuals. Combining (355) the decoded prediction residuals and the predicted block, an image block is reconstructed.
  • the predicted block can be obtained (370) from intra prediction (360) or motion-compensated prediction (i.e., inter prediction) (375).
  • the decoder may blend (373) the intra prediction result and inter prediction result, or blend results from multiple intra/inter prediction methods. Before motion compensation, the motion field may be refined (372) by using already available reference pictures. In-loop filters (365) are applied to the reconstructed image. The filtered image is stored at a reference picture buffer (380).
  • The decoded picture can further go through post-decoding processing (385), for example, an inverse color transform (e.g., conversion from YCbCr 4:2:0 to RGB 4:4:4), an inverse remapping performing the inverse of the remapping process performed in the pre-encoding processing (201), or re-sizing of the reconstructed pictures (e.g., up-scaling).
  • post-decoding processing can use metadata derived in the pre-encoding processing and signaled in the bitstream.
  • embodiments provide for adapting a bilateral filter strength of a Bilateral Filter (BIF) according to coding information used for encoding the block of the video.
  • BIF strength is determined based on a deblocking filter strength used for the block.
  • Any one of the embodiments described herein can be implemented for instance in an in-loop filter module of a video encoder or video decoder.
  • the embodiments described herein can be implemented in the in-loop module 265 of the video encoder 200 or the in-loop filter module 365 of the video decoder 300.
  • Performing quantization in the transform domain is a technique known for better preserving information in images and video compared to quantizing in the pixel domain.
  • Some embodiments provide for using an adaptive BIF strength for various contents and characteristics of blocks in a video.
  • Rules are provided for determining the BIF strength based on coding information, such as the deblocking filter boundary strength, the size/shape of the block, the presence or absence of non-zero coefficients, QP (quantization parameter) values, etc. Such rules may provide better coding performance and alleviate ringing artifacts around edges.
  • The bilateral filter is derived from Gaussian filters and can be described as follows: each sample in the reconstructed picture is replaced by a weighted average of itself and its neighbors. The weights are calculated based on the distance from the center sample as well as the difference in sample values.
  • A sample located at (i, j) is filtered using its neighboring sample at (k, l).
  • The weight ω(i, j, k, l) assigned to sample (k, l) for filtering sample (i, j) is defined as:

    ω(i, j, k, l) = exp( −((i − k)² + (j − l)²) / (2σ_d²) − (I(i, j) − I(k, l))² / (2σ_r²) )

    where I(i, j) and I(k, l) are the original reconstructed intensity values of samples (i, j) and (k, l), respectively, σ_d is the spatial parameter, and σ_r is the range parameter.
  • The property (or strength) of the bilateral filter is controlled by these two parameters.
  • The goal of the LUT is to pre-calculate the weights of the bilateral filter, so that the filtered pixel I_F(i, j) can be calculated as (Eq. 7):

    I_F(i, j) = ( Σ_{k,l} I(k, l) · ω(i, j, k, l) ) / ( Σ_{k,l} ω(i, j, k, l) )
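As an illustration of Eq. (7), a minimal C++ sketch of the direct form follows; the 3×3 aperture and the caller-provided σ_d and σ_r values are example choices for this sketch, not values taken from the embodiments.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Minimal sketch of the direct bilateral filter of Eq. (7) on a row-major
// grayscale image. sigmaD / sigmaR and the 3x3 aperture are illustrative.
double bilateralAt(const std::vector<double>& img, int width, int height,
                   int i, int j, double sigmaD, double sigmaR) {
    const int r = 1;  // 3x3 window around the center sample
    double num = 0.0, den = 0.0;
    for (int k = std::max(0, i - r); k <= std::min(height - 1, i + r); ++k) {
        for (int l = std::max(0, j - r); l <= std::min(width - 1, j + r); ++l) {
            const double spatial =
                ((i - k) * (i - k) + (j - l) * (j - l)) / (2.0 * sigmaD * sigmaD);
            const double dI = img[i * width + j] - img[k * width + l];
            const double range = dI * dI / (2.0 * sigmaR * sigmaR);
            const double w = std::exp(-spatial - range);  // omega(i, j, k, l)
            num += w * img[k * width + l];
            den += w;
        }
    }
    return num / den;  // weighted average replaces the center sample
}
```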
  • The filtering process of (Eq. 7) is rewritten as:

    I′_{0,0} = I_{0,0} + Σ_{k=1}^{K} W_k(|I_{k,0} − I_{0,0}|) · (I_{k,0} − I_{0,0})

    where I_{0,0} is the intensity of the current sample, I′_{0,0} is the modified intensity of the current sample, and I_{k,0} and W_k(·) are the intensity and weighting parameter of the k-th neighboring sample, respectively.
  • An example of one current sample and its four neighboring samples (i.e., K = 4), in the shape of a plus sign, is depicted in FIG. 6.
  • I_{k,m} and I_{0,m} represent the m-th sample value within the windows centered at I_{k,0} and I_{0,0}, respectively.
  • the window size is set to 3 ⁇ 3.
  • An example of two windows covering I_{2,0} and I_{0,0} is depicted in FIG. 7.
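The rewritten offset form above can be sketched as follows; the weight model weightOf() and the final normalisation shift are illustrative assumptions standing in for the W_k(·) lookup of the embodiments.

```cpp
#include <array>
#include <cstdlib>

// Sketch of the offset form of the rewritten filtering with K = 4
// plus-shaped neighbours: the filtered sample is the current sample plus
// a weighted sum of neighbour differences. The decay model and the >> 8
// fixed-point normalisation are illustrative, not normative.
int bifFilteredSample(int i00, const std::array<int, 4>& neighbours) {
    auto weightOf = [](int absDiff) {
        int w = 64 - 4 * absDiff;   // weight decays with |I_k0 - I_00|
        return w > 0 ? w : 0;
    };
    int acc = 0;
    for (int ik0 : neighbours) {
        int diff = ik0 - i00;
        acc += weightOf(std::abs(diff)) * diff;  // W_k(|dI|) * dI
    }
    return i00 + (acc >> 8);        // illustrative fixed-point scaling
}
```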
  • A spatial filter strength adjustment based on the CU area size for the bilateral filter is proposed in JVET-K0231.
  • Depending on the CU area size, the spatial parameter can take one of four predefined values.
  • a combination of the bilateral filter with the sample adaptive offset (SAO) loop filter is provided.
  • the filter is carried out in the sample adaptive offset (SAO) loop-filter stage, as shown in FIG.9.
  • Both the bilateral filter (BIF) and SAO use samples from the deblocking filtering as input. Each filter creates an offset per sample, and these offsets are added to the input sample and then clipped.
  • ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ( ⁇ ⁇ ⁇ + ⁇ ⁇ ⁇ ⁇ + ⁇ ⁇ ⁇ ⁇ ⁇ + ⁇ ⁇ ⁇ ⁇ ⁇ ) .
  • The calculation of the BIF offset ΔI_BIF is modified by: 1. a TU scale factor that depends on the TU shape/size; 2. a TU scale factor that depends on the mean absolute difference (MAD) of the TU; 3. interpolation of the BIF LUTs.
  • The TU size parameter is defined as min(width_TU, height_TU).
  • In JVET-AE0044, the base LUT size is preserved, but the content of the base LUT is changed, and the calculation of the BIF scale factor (which depends on the TU width, height, and QP) is also changed.
  • the number of cut off bits is decreased from 3 to 2.
  • Let LUT_{w,h} be a 2D 8×8 lookup table with non-negative 8-bit integer values, and
  • let LUT_MAD be a 1D 16-entry lookup table with non-negative 8-bit integer values.
  • These scale factors are defined as LUT_{w,h}(log2 width_TU, log2 height_TU) and LUT_MAD(min(MAD_TU >> 4, 15)).
  • The MAD of an (h × w)-size TU, with the channel samples denoted by S, is the mean absolute difference of the samples from their mean value, MAD_TU = (1/(h·w)) Σ_m |S_m − mean(S)|. In total, four 64-byte tables LUT_{w,h} and four 16-byte tables LUT_MAD are introduced (for the luma/chroma components and for intra/inter prediction).
  • The TU scale factor is constant for all samples of the same channel inside one TU.
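The following sketch illustrates how the two lookup indices described above could be computed; the LUT contents themselves are defined in JVET-AE0044 and are not reproduced here.

```cpp
#include <algorithm>
#include <cstdlib>
#include <utility>
#include <vector>

// Sketch of the TU statistics driving the scale-factor lookups: the MAD
// index is min(MAD_TU >> 4, 15) into the 16-entry LUT_MAD, and the shape
// indices are the log2 TU dimensions into the 8x8 LUT_{w,h}. Assumes a
// non-empty TU with power-of-two dimensions.
int madLutIndex(const std::vector<int>& tuSamples) {
    long long sum = 0;
    for (int s : tuSamples) sum += s;
    const int mean =
        static_cast<int>(sum / static_cast<long long>(tuSamples.size()));
    long long absSum = 0;
    for (int s : tuSamples) absSum += std::abs(s - mean);
    const int mad =
        static_cast<int>(absSum / static_cast<long long>(tuSamples.size()));
    return std::min(mad >> 4, 15);  // index into the 16-entry LUT_MAD
}

std::pair<int, int> shapeLutIndices(int widthTU, int heightTU) {
    auto log2i = [](int v) { int n = 0; while (v > 1) { v >>= 1; ++n; } return n; };
    return {log2i(widthTU), log2i(heightTU)};  // indices into the 8x8 LUT_{w,h}
}
```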
  • the number of arithmetic operations per sample and the size of memory required to store the LUTs in the BIF are compared against ECM 9 (M.Coban, F.Le Leannec, R-L. Liao, K.Naser, J.Ström, L.Zhang, "Algorithm description of Enhanced Compression Model 9 (ECM 9),” document JVET-AD2025, 30th Meeting, April 2023) in Table 2 below.
  • the value obtained in the modified version of Equation (10) before right bit-shifting belongs to the interval [-15509, 15765], and so all arithmetic can be done in 15-bit signed integers. Table 2.
  • VVC essentially allows larger blocks than HEVC, and the DBF in VVC basically extends the HEVC DBF design to address the artifacts remaining in large smooth areas.
  • the processing order of the deblocking filter is defined as horizontal filtering for vertical edges for the entire picture first, followed by vertical filtering for horizontal edges.
  • The filter strength of the deblocking filter in VVC is controlled by the variables β and t_C, which are derived from the averaged quantization parameter (qP) of the two adjacent coding blocks.
  • In VVC, the maximum QP was changed from 51 to 63, and it is desired to reflect the corresponding change in the deblocking t_C table, which derives the values of the deblocking parameter t_C based on the block QP.
  • Luma deblocking filter: HEVC uses an 8x8 deblocking grid for both luma and chroma.
  • FIG. 15 depicts an illustration of four-pixel long horizontal and vertical boundaries on the 8x8 grid. Due to flexible block sizes in VVC, the luma deblocking is applied on a 4x4 sample grid for boundaries between CUs and TUs, and on an 8x8 grid for boundaries between PUs inside CUs, as shown in FIG. 16, to handle blocking artifacts from rectangular transform shapes.
  • A parallel-friendly luma deblocking filter process on a 4x4 grid is achieved by restricting the number of deblocked samples to one on each side of a vertical luma boundary where one side has a width of 4 or less, and to one on each side of a horizontal luma boundary where one side has a height of 4 or less.
  • The VVC deblocking filter decides whether to use long-tap filters and determines the appropriate filter lengths. This step is carried out to ensure that no spatial dependency exists with the adjacent vertical or horizontal block boundaries. After determining the deblocking filter lengths, the boundary strength bS is derived.
  • For the deblocking filter, a sample belongs to a large block when the width is larger than or equal to 32 for a vertical edge, or when the height is larger than or equal to 32 for a horizontal edge.
  • In that case, the spatial activity decision for the long-tap (stronger) deblocking filter applies.
  • Otherwise, the decision process for the short-tap (weak) deblocking filter applies, as shown in FIG. 18.
  • The decision process is similar to that of HEVC, except that further thresholds are introduced that ultimately still depend on the QP.
  • The short-tap deblocking filters are almost identical to the HEVC deblocking filters, the only difference being the introduction of position-dependent clipping to control the differences between the filtered values and the sample values before filtering.
  • the long-tap deblocking filter is designed to preserve inclined surfaces and linear signals across the block boundaries.
  • Chroma deblocking filter: chroma deblocking is performed on an 8x8 sample grid on boundaries of both CUs and TUs. Compared to luma, the deblocking lengths are limited to 3+3, 1+3 or 1+1. Minor bS adaptations are introduced to account for VVC coding modes, such as BDPCM and CIIP.
  • The chroma strong filters are used on both sides of the block boundary. The chroma strong filter is selected when the block sizes on both sides of the chroma edge are greater than or equal to 8 (in units of chroma samples) and the following three conditions are satisfied; the first one concerns the decision of boundary strength as well as large-block detection.
  • The boundary strength bS is derived from the following conditions (checked in decreasing priority order); the three rightmost columns give the bS applied to the Y, U and V components:

| Priority | Condition | Y | U | V |
|---|---|---|---|---|
| 6 | At least one of the adjacent blocks is coded with intra or CIIP mode | 2 | 2 | 2 |
| 5 | At least one of the adjacent blocks has non-zero transform coefficients | 1 | 1 | 1 |
| 4 | One of the adjacent blocks is coded in IBC prediction mode and the other is coded in inter prediction mode | 1 | 1 | 1 |
| 3 | Absolute difference between the motion vectors that belong to the adjacent blocks is greater than or equal to one half luma sample | 1 | 0 | 0 |
| 2 | Reference pictures the two adjacent blocks refer to are different | 1 | 0 | 0 |
| 1 | Otherwise | 0 | 0 | 0 |
  • For luma, only block boundaries with bS values equal to 1 or 2 are filtered. For chroma, deblocking is performed when bS is equal to 2, or when bS is equal to 1 and a large block boundary is detected.
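The bS derivation summarised in the table above can be sketched as follows; the input flags are assumed to be available from the decoded parameters of the two blocks adjacent to the edge.

```cpp
// Sketch of the boundary-strength derivation of the table above, returning
// the {Y, U, V} strengths for an edge between blocks P and Q.
struct EdgeBlockInfo {
    bool intraOrCiip;     // block coded with intra or CIIP mode
    bool nonZeroCoeffs;   // block has non-zero transform coefficients
    bool ibc;             // block coded in IBC prediction mode
    bool inter;           // block coded in inter prediction mode
};

struct BoundaryStrength { int y, u, v; };

BoundaryStrength deriveBs(const EdgeBlockInfo& p, const EdgeBlockInfo& q,
                          bool mvDiffAtLeastHalfLumaSample,
                          bool differentReferencePictures) {
    if (p.intraOrCiip || q.intraOrCiip)           return {2, 2, 2};
    if (p.nonZeroCoeffs || q.nonZeroCoeffs)       return {1, 1, 1};
    if ((p.ibc && q.inter) || (q.ibc && p.inter)) return {1, 1, 1};
    if (mvDiffAtLeastHalfLumaSample)              return {1, 0, 0};
    if (differentReferencePictures)               return {1, 0, 0};
    return {0, 0, 0};
}
```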
  • The BIF is used for an entire frame or picture of a video regardless of the contents and characteristics of the blocks in the picture.
  • videos in general have different contents and characteristics.
  • Some embodiments provided herein allow for an adaptive filtering strength for these various contents and characteristics. This may further increase the BIF performance compared to applying the same strength everywhere.
  • Recent contributions have attempted to improve BIF performance by proposing a control of the BIF scaling factor and designing a look-up table for the approximation of the weights used in the filtering, as detailed in JVET-AE0044 for example.
  • The scaling factor proposed in JVET-AE0044 is determined by the size of a TU block and the MAD of the TU block.
  • FIG.19 illustrates an example of a method 1900 for encoding or decoding at least one block of a picture in a video according to an embodiment.
  • Method 1900 can be used in an in-loop filter module of the encoder 200 or the decoder 300.
  • a reconstructed block is obtained, for example as an output of the deblocking filter stage.
  • A BIF strength for the block is determined based on at least one parameter used for encoding the block. For example, the BIF strength is determined among the three values 0, 1 and 2, where 0 indicates half filtering strength, 1 indicates full filtering strength, and 2 indicates double filtering strength.
  • FIG.20 illustrates an example of a method 2000 for encoding at least one block of a video according to another embodiment.
  • the usage of the adaptive BIF strength is enabled or disabled for a given picture. For example, at 2010, enabling or disabling the usage of the adaptive BIF strength is signaled in a bitstream for the picture to encode. For example, an indicator is signaled in a PPS (Picture Parameter Set) referenced by the picture.
  • PPS Picture Parameter Set
  • the indicator can be signaled at a slice level of the picture.
  • the indicator is the same as the one signaling the BIF strength for the picture in the PPS but an additional value is used, for example 3, that indicates that adaptive BIF strength is used for blocks of the picture.
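A minimal sketch of interpreting such an indicator on the decoder side follows; the mapping reflects the text (0–2 for fixed strengths, the extra value 3 for adaptive), while the syntax element name and its entropy coding are assumptions.

```cpp
// Sketch of interpreting the PPS-level BIF strength indicator described in
// the text: 0..2 select a fixed half/full/double strength, and the extra
// value (e.g., 3) enables the adaptive per-block derivation.
enum class BifStrengthMode { Half = 0, Full = 1, Double = 2, Adaptive = 3 };

BifStrengthMode interpretBifIndicator(unsigned decodedValue) {
    switch (decodedValue) {
        case 0:  return BifStrengthMode::Half;
        case 1:  return BifStrengthMode::Full;
        case 2:  return BifStrengthMode::Double;
        default: return BifStrengthMode::Adaptive;  // e.g., value 3
    }
}
```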
  • blocks of the picture are encoded, for example as described in reference to FIG.2.
  • a reconstructed version of the block is obtained, for example after the deblocking filtering stage.
  • the BIF strength is determined for the block depending on the value of the indicator signaling the usage or not of adaptive BIF strength for blocks of the picture.
  • FIG.21 illustrates an example of a method 2100 for decoding at least one block of a video according to another embodiment.
  • the usage of the adaptive BIF strength is enabled or disabled for a given picture. For example, at 2110, enabling or disabling the usage of the adaptive BIF strength is decoded from a bitstream for the picture to decode.
  • an indicator is decoded from a PPS (Picture Parameter Set) referenced by the picture.
  • the indicator can be signaled at a slice level of the picture.
  • the indicator is the same as the one signaling the BIF strength for the picture, but an additional value is used, for example 3, that indicates that adaptive BIF strength is used for blocks of the picture.
  • blocks of the picture are decoded, for example as described in reference to FIG.3. For a given block that is decoded, a reconstructed version of the block is obtained, for example after the deblocking filtering stage.
  • the BIF strength is determined for the reconstructed block.
  • If the value of the indicator is 0, 1 or 2, the BIF strength for the reconstructed block is set to the value of the indicator.
  • Otherwise, adaptive BIF strength is applied for the reconstructed block: the BIF strength is determined according to one of the embodiments described above or further below, and at 2150, the BIF is applied to the reconstructed block based on the determined BIF strength.
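The per-block decision just described can be sketched as follows; deriveAdaptiveBifStrength() is a placeholder for the rules of FIGs. 22-24, which are not modelled here.

```cpp
// Sketch of the per-block decision of method 2100: a fixed indicator value
// (0..2) is used directly as the BIF strength; the adaptive value triggers
// the derivation from coding information, stubbed out below.
int deriveAdaptiveBifStrength(/* const CodingInfo& info */) {
    return 1;  // placeholder: full strength when no rule is modelled
}

int bifStrengthForBlock(int indicator) {
    if (indicator >= 0 && indicator <= 2)
        return indicator;               // fixed half/full/double strength
    return deriveAdaptiveBifStrength(); // indicator == 3: adaptive path
}
```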
  • Some embodiments described herein aim at improving Bilateral Filtering (BIF) for both luma and chroma components by adaptively determining the filtering strength of the BIF based on available coding information that can reflect the contents and characteristics of a block.
  • Such coding information can be the deblocking filter boundary strength, the size/shape of the block, the presence or absence of non-zero coefficients in the block, QP values, and so forth.
  • an adaptive BIF strength can be adjusted as 0 (as half filtering strength), 1 (as full filtering strength), or 2 (as double filtering strength).
  • adaptive BIF strength is applicable for different coding modes, such as intra prediction mode, inter prediction mode, and for the IBC and palette prediction modes.
  • If the block is encoded in an inter mode, method 2400 of FIG. 24 is applied to determine the BIF strength for the block. If the block is not encoded in an inter mode, then at 2250, the BIF strength is set to 1 for the block and the BIF filtering is applied using a BIF strength of 1.
  • Table 4 below provides an example of the reasoning behind the adaptive BIF strength decisions for a block. Table 4 illustrates, for a given piece of coding information (first column) used for a block, the BIF strength status that can be derived from condition 1 or 2 of that coding information. In other words, for a given piece of coding information, the BIF strength for the block (BIF strength status) is derived from the values (Condition 1, Condition 2) of the coding information.
  • Both the DBF Bs of the horizontal and vertical edges of the block (in raster order) may be taken into account when determining the BIF filtering strength.
  • A single Bs can be obtained by averaging the Bs of the two edges, by taking the highest Bs value among the two, or, in case the block belongs to a rectangular CU, by taking the horizontal Bs for a horizontal CU and the vertical Bs for a vertical CU.
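The variants listed above for reducing the two edge strengths to a single Bs can be sketched as follows; the rounding in the average and the wide-versus-tall test are illustrative choices.

```cpp
#include <algorithm>

// Sketch of the three variants for reducing the horizontal and vertical
// deblocking strengths of a block to one value for the BIF rule.
int combinedBsAverage(int bsHor, int bsVer) {
    return (bsHor + bsVer + 1) >> 1;   // rounded average of the two edges
}

int combinedBsMax(int bsHor, int bsVer) {
    return std::max(bsHor, bsVer);     // highest of the two strengths
}

int combinedBsByShape(int bsHor, int bsVer, int cuWidth, int cuHeight) {
    // Horizontal (wide) CU -> horizontal edge Bs; vertical CU -> vertical.
    return (cuWidth >= cuHeight) ? bsHor : bsVer;
}
```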
  • FIG.23 illustrates an example of a method 2300 for adaptive BIF strength rules for a block encoded in an intra mode according to an embodiment.
  • Available coding information, such as the deblocking filter boundary strength, the block size/shape, and the non-zero coefficient flag, or other coding information that can distinguish the content and characteristics of blocks, can be used.
  • FIG.23 provides an example of an embodiment that rules the utilization of BIF strength.
  • The rules are applied for both luma and chroma components; however, in some variants, different rules can be defined for luma and chroma components, or adaptive BIF strength can be defined only for luma while a default BIF strength is used for chroma.
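Since the exact conditions of FIG. 23 are not reproduced in the text, the following decision tree is only a hypothetical illustration combining the coding information listed above; every branch and threshold below is an assumption.

```cpp
// Hypothetical sketch in the spirit of FIG. 23 for intra-coded blocks,
// combining deblocking bS, block size, and the coded-coefficient flag.
// All branches and thresholds are illustrative, not from the figure.
int intraBifStrength(int bs, int minBlockDim, bool hasNonZeroCoeffs) {
    if (bs == 2 && hasNonZeroCoeffs) return 2;  // busy, strongly deblocked block
    if (bs == 0 && minBlockDim >= 32) return 0; // smooth large block
    return 1;                                   // default: full strength
}
```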
  • FIG.24 illustrates an example of a method 2400 for adaptive BIF strength rules for a block encoded in an inter mode according to an embodiment.
  • Available coding information, such as the deblocking filter boundary strength, the block size/shape, the QP value, and the absolute difference between the motion vectors, or other coding information that can distinguish the content and characteristics of blocks, can be used.
  • FIG. 24 provides an example of an embodiment that rules the utilization of the BIF strength when the block is in an inter mode. It should be noted that the rules are applied for both luma and chroma components; however, in some variants, different rules can be defined for luma and chroma components, or adaptive BIF strength can be defined only for luma while a default BIF strength is used for chroma.
  • BIF is applied with a strength of 0. Otherwise, at 2480, it is checked whether the block size is smaller than or equal to a given value (8 in this example), and whether the absolute difference between the motion vectors that belong to the adjacent blocks is greater than or equal to one half luma sample. If yes, then at 2490, the BIF is applied to the block with a strength of 1. Otherwise, at 2491, the BIF is applied to the block with a strength of 2.
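The tail of this decision flow can be sketched as follows; only steps 2480-2491 are modelled, with the threshold of 8 taken from the example in the text, and the earlier branches leading to strength 0 left out.

```cpp
// Partial sketch of the inter-mode decision around steps 2480-2491 of
// FIG. 24: a small block (minimum dimension <= 8, the example value) with
// a motion difference of at least half a luma sample keeps full strength;
// otherwise the strength is doubled.
int interBifStrengthTail(int minBlockDim, bool mvDiffAtLeastHalfLumaSample) {
    if (minBlockDim <= 8 && mvDiffAtLeastHalfLumaSample)
        return 1;  // step 2490: full filtering strength
    return 2;      // step 2491: double filtering strength
}
```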
  • the BIF Scaling Factor is adjusted using Adaptive BIF Strength described above. The BIF offset in Equation (10) above is further adjusted by taking into account the adaptive BIF Strength decision.
  • Equation (10) can be rewritten as Equation (11) below, in which the BIF offset is additionally scaled by a term corresponding to the BIF strength used for the block.
  • the BIF strength can be determined adaptively according to one of the embodiments described herein or can be a default BIF strength such as the one defined for the picture at the PPS level.
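A sketch of the Equation (11) adjustment under stated assumptions: the fixed-point TU scaling (shift by 6) and the shift-based half/double mapping are illustrative stand-ins, not the normative Equation (11).

```cpp
// Sketch of adjusting the BIF offset by the adaptive strength decision:
// the Equation (10)-style offset is scaled, then halved, kept, or doubled
// according to the strength. Both the shift amounts are assumptions.
int bifOffsetWithStrength(int baseOffset, int tuScaleFactor, int bifStrength) {
    int offset = (baseOffset * tuScaleFactor) >> 6;  // Eq. (10)-style scaling
    if (bifStrength == 0) return offset >> 1;        // half strength
    if (bifStrength == 2) return offset << 1;        // double strength
    return offset;                                   // full strength
}
```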
  • Adaptive BIF strength signaling: in the current ECM version, the BIF strength is indicated at the PPS level by setting it to 0, 1, or 2 for half, full, and double filtering strength, respectively.
  • In the same manner, an additional value, for example 3, is added to the BIF strength options to indicate the usage of the adaptive BIF strength, and is signaled at the PPS level.
  • FIG. 25 illustrates a block diagram of a system within which aspects of the present embodiments may be implemented, according to another embodiment.
  • FIG. 25 shows one embodiment of an apparatus 2500 for encoding or decoding a video according to any one of the embodiments described herein.
  • The apparatus comprises a processor 2510 that can be interconnected to a memory 2520 through at least one port. Both the processor 2510 and the memory 2520 can also have one or more additional interconnections to external connections.
  • The processor 2510 is also configured to obtain a reconstructed block of an image, determine a filtering strength of a bilateral filter based on at least one parameter used for encoding the block, and, responsive to the determination of the filtering strength, apply the bilateral filter to the block, according to any one of the embodiments described herein.
  • the processor 2510 uses a computer program product comprising code instructions that implements any one of embodiments described herein.
  • The device A comprises a processor in relation with memory RAM and ROM which are configured to implement a method for encoding a video, as described in relation with FIGs. 1-25.
  • The device B comprises a processor in relation with memory RAM and ROM which are configured to implement a method for decoding a video, as described in relation with FIGs. 1-25.
  • the network is a broadcast network, adapted to broadcast/transmit a coded video from device A to decoding devices including the device B.
  • FIG. 27 shows an example of the syntax of a signal transmitted over a packet-based transmission protocol.
  • Each transmitted packet P comprises a header H and a payload PAYLOAD.
  • the payload PAYLOAD may comprise video data encoded according to any one of the embodiments described above.
  • the payload can also comprise any signaling as described above.
  • Various implementations involve decoding.
  • “Decoding”, as used in this application, can encompass all or part of the processes performed, for example, on a received encoded sequence in order to produce a final output suitable for display.
  • such processes include one or more of the processes typically performed by a decoder, for example, entropy decoding, inverse quantization, inverse transformation, and differential decoding.
  • such processes also, or alternatively, include processes performed by a decoder of various implementations described in this application, for example, entropy decoding a sequence of binary symbols to reconstruct image or video data.
  • decoding refers only to entropy decoding
  • decoding refers only to differential decoding
  • decoding refers to a combination of entropy decoding and differential decoding
  • decoding refers to the whole reconstructing picture process including entropy decoding.
  • encoding can encompass all or part of the processes performed, for example, on an input video sequence in order to produce an encoded bitstream.
  • processes include one or more of the processes typically performed by an encoder, for example, partitioning, differential encoding, transformation, quantization, and entropy encoding.
  • processes also, or alternatively, include processes performed by an encoder of various implementations described in this application, for example, determining re-sampling filter coefficients, re- sampling a decoded picture.
  • encoding refers only to entropy encoding
  • encoding refers only to differential encoding
  • encoding refers to a combination of differential encoding and entropy encoding.
  • This information can be packaged or arranged in a variety of manners, including for example manners common in video standards such as putting the information into an SPS, a PPS, a NAL unit, a header (for example, a NAL unit header, picture header or a slice header), or an SEI message.
  • Other manners are also available, including for example manners common for system level or application level standards such as putting the information into one or more of the following: a. SDP (session description protocol), a format for describing multimedia communication sessions for the purposes of session announcement and session invitation, for example as described in RFCs and used in conjunction with RTP (Real-time Transport Protocol) transmission.
  • When a figure is presented as a flow diagram, it should be understood that it also provides a block diagram of a corresponding apparatus. Similarly, when a figure is presented as a block diagram, it should be understood that it also provides a flow diagram of a corresponding method/process.
  • Some embodiments refer to rate distortion optimization.
  • the balance or trade-off between the rate and distortion is usually considered, often given the constraints of computational complexity.
  • the rate distortion optimization is usually formulated as minimizing a rate distortion function, which is a weighted sum of the rate and of the distortion. There are different approaches to solve the rate distortion optimization problem.
  • the approaches may be based on an extensive testing of all encoding options, including all considered modes or coding parameters values, with a complete evaluation of their coding cost and related distortion of the reconstructed signal after coding and decoding.
  • Faster approaches may also be used, to save encoding complexity, in particular with computation of an approximated distortion based on the prediction or the prediction residual signal, not the reconstructed one.
  • Mix of these two approaches can also be used, such as by using an approximated distortion for only some of the possible encoding options, and a complete distortion for other encoding options.
  • Other approaches only evaluate a subset of the possible encoding options.
  • implementations and aspects described herein can be implemented in, for example, a method or a process, an apparatus, a software program, a data stream, or a signal. Even if only discussed in the context of a single form of implementation (for example, discussed only as a method), the implementation of features discussed can also be implemented in other forms (for example, an apparatus or program).
  • An apparatus can be implemented in, for example, appropriate hardware, software, and firmware.
  • a processor which refers to processing devices in general, including, for example, a computer, a microprocessor, an integrated circuit, or a programmable logic device.
  • Processors also include communication devices, such as, for example, computers, cell phones, portable/personal digital assistants ("PDAs"), and other devices that facilitate communication of information between end-users.
  • This application may refer to “determining” various pieces of information. Determining the information can include one or more of, for example, estimating the information, calculating the information, predicting the information, or retrieving the information from memory. Further, this application may refer to “accessing” various pieces of information. Accessing the information can include one or more of, for example, receiving the information, retrieving the information (for example, from memory), storing the information, moving the information, copying the information, calculating the information, determining the information, predicting the information, or estimating the information.
  • This application may refer to “receiving” various pieces of information.
  • Receiving is, as with “accessing”, intended to be a broad term.
  • Receiving the information can include one or more of, for example, accessing the information, or retrieving the information (for example, from memory).
  • “Receiving” is typically involved, in one way or another, during operations such as, for example, storing the information, processing the information, transmitting the information, moving the information, copying the information, erasing the information, calculating the information, determining the information, predicting the information, or estimating the information.
  • Any of the following “and/or” and “at least one of”, for example, in the cases of “A/B”, “A and/or B” and “at least one of A and B”, is intended to encompass the selection of the first listed option (A) only, or the selection of the second listed option (B) only, or the selection of both options (A and B).
  • Such phrasing is intended to encompass the selection of the first listed option (A) only, or the selection of the second listed option (B) only, or the selection of the third listed option (C) only, or the selection of the first and the second listed options (A and B) only, or the selection of the first and third listed options (A and C) only, or the selection of the second and third listed options (B and C) only, or the selection of all three options (A and B and C).
  • This may be extended, as is clear to one of ordinary skill in this and related arts, for as many items as are listed.
  • The word “signal” refers to, among other things, indicating something to a corresponding decoder.
  • The same parameter is used at both the encoder side and the decoder side.
  • An encoder can transmit (explicit signaling) a particular parameter to the decoder so that the decoder can use the same particular parameter.
  • Signaling can be used without transmitting (implicit signaling) to simply allow the decoder to know and select the particular parameter. By avoiding transmission of any actual functions, a bit savings is realized in various embodiments. It is to be appreciated that signaling can be accomplished in a variety of ways.
  • One or more syntax elements, flags, and so forth are used to signal information to a corresponding decoder in various embodiments. While the preceding relates to the verb form of the word “signal”, the word “signal” can also be used herein as a noun.
  • Implementations can produce a variety of signals formatted to carry information that can be, for example, stored or transmitted.
  • The information can include, for example, instructions for performing a method, or data produced by one of the described implementations.
  • A signal can be formatted to carry the bitstream of a described embodiment.
  • Such a signal can be formatted, for example, as an electromagnetic wave (for example, using a radio frequency portion of spectrum) or as a baseband signal.
  • The formatting can include, for example, encoding a data stream and modulating a carrier with the encoded data stream.
  • The information that the signal carries can be, for example, analog or digital information.
  • The signal can be transmitted over a variety of different wired or wireless links, as is known.
  • The signal can be stored on a processor-readable medium.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

A method and an apparatus for encoding or decoding a video are provided wherein a reconstructed block of a video is obtained during encoding or decoding of the video. A filtering strength of a bilateral filter is determined for the block based on at least one parameter used for encoding the block. Responsive to the determining of the filtering strength, the bilateral filter is applied to the block.

Description

ADAPTIVE BIF STRENGTH BASED ON DBF STRENGTH

This application claims priority to European Application No. 23306765.1, filed on 11 October 2023, which is incorporated herein by reference in its entirety.

TECHNICAL FIELD

The present embodiments generally relate to video compression. The present embodiments relate to a method and an apparatus for encoding or decoding an image or a video. More particularly, the present embodiments relate to improving coding modes of a video compression system that uses a template-based cost.

BACKGROUND

To achieve high compression efficiency, image and video coding schemes usually employ prediction and transform to leverage spatial and temporal redundancy in the video content. Generally, intra or inter prediction is used to exploit the intra or inter picture correlation, then the differences between the original block and the predicted block, often denoted as prediction errors or prediction residuals, are transformed, quantized, and entropy coded. In inter prediction, motion vectors used in motion compensation are often predicted from a motion vector predictor. To reconstruct the video, the compressed data are decoded by inverse processes corresponding to the entropy coding, quantization, transform, and prediction.

SUMMARY

According to an aspect, a method for encoding or decoding a block of a video is provided. The method comprises obtaining a reconstructed block of the video, determining a filtering strength of a bilateral filter based on at least one parameter used for encoding the block, and responsive to the determining of the filtering strength, applying the bilateral filter to the block. According to another aspect, an apparatus for encoding or decoding a block of a video is provided. The apparatus comprises one or more processors operable to obtain a reconstructed block of the video, determine a filtering strength of a bilateral filter based on at least one parameter used for encoding the block, and responsive to the determining of the filtering strength, apply the bilateral filter to the block. According to another aspect, a method for encoding a video is provided. The method comprises signaling an indicator indicating whether an adaptive BIF strength is enabled for blocks of a picture of the video. According to another aspect, a method for decoding a video is provided. The method comprises decoding an indicator indicating whether an adaptive BIF strength is enabled for blocks of a picture of the video. According to another aspect, an apparatus for encoding a video is provided. The apparatus comprises one or more processors operable to signal an indicator indicating whether an adaptive BIF strength is enabled for blocks of a picture of the video. According to another aspect, an apparatus for decoding a video is provided. The apparatus comprises one or more processors operable to decode an indicator indicating whether an adaptive BIF strength is enabled for blocks of a picture of the video. Further embodiments that can be used alone or in combination are described herein. One or more embodiments also provide a computer program comprising instructions which, when executed by one or more processors, cause the one or more processors to perform any one of the methods for encoding or decoding a video according to any of the embodiments described herein.
One or more of the present embodiments also provide a non-transitory computer readable medium and/or a computer readable storage medium having stored thereon instructions for encoding or decoding a video according to the methods described herein. One or more embodiments also provide a computer readable storage medium having stored thereon a bitstream generated according to the methods described herein. One or more embodiments also provide a method and apparatus for transmitting or receiving the bitstream generated according to the methods described above.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 illustrates a block diagram of a system within which aspects of the present embodiments may be implemented.
FIG. 2 illustrates a block diagram of an embodiment of a video encoder within which aspects of the present embodiments may be implemented.
FIG. 3 illustrates a block diagram of an embodiment of a video decoder within which aspects of the present embodiments may be implemented.
FIG. 4 illustrates an example of an 8x8 TU block and the filter aperture for the sample located at (1,1).
FIG. 5 illustrates an example of a coefficient look-up table used to obtain the weights of the filter.
FIG. 6 illustrates neighboring samples used in the bilateral filter.
FIG. 7 illustrates an example of windows covering two samples used in weight determination for BIF.
FIG. 8 illustrates an example of samples used in a weighted sum for BIF.
FIG. 9 illustrates an example of applying BIF and SAO using samples from a deblocking stage as input. Both create an offset, and these are added to the input sample and clipped.
FIG. 10 illustrates an example of the naming convention for samples surrounding the center sample I_C.
FIG. 11 illustrates an example of a filtering stage of BIF for chroma components.
FIG. 12 illustrates an example of in-loop filtering in ECM 9.0.
FIG. 13 illustrates an example of horizontal and vertical block boundaries on an 8x8 grid.
FIG. 14 illustrates an example of an HEVC deblocking decision workflow.
FIG. 15 illustrates an example of the sample positions of p_{i,k} and q_{i,k} in the case of horizontal and vertical block boundaries.
FIG. 16 illustrates an example of vertical and horizontal block boundaries on a 4x4 grid, a 32x32 CU with PUs on an 8x8 grid, and a vertical boundary that may require long-tap deblocking.
FIG. 17 illustrates an example of a four-sample long vertical boundary segment formed by blocks P and Q; VVC deblocking decisions are based on lines #0 and #3.
FIG. 18 illustrates an example of stronger deblocking for luma when samples at either side of a boundary belong to a large block (width ≥ 32 and height ≥ 32).
FIG. 19 illustrates an example of a flowchart for encoding or decoding at least one block of a video according to an embodiment.
FIG. 20 illustrates an example of a flowchart for encoding at least one block of a video according to another embodiment.
FIG. 21 illustrates an example of a flowchart for decoding at least one block of a video according to another embodiment.
FIG. 22 illustrates an example of adaptive BIF strength rules for different prediction modes according to an embodiment.
FIG. 23 illustrates an example of adaptive BIF strength rules for intra mode according to an embodiment.
FIG. 24 illustrates an example of adaptive BIF strength rules for inter mode according to an embodiment.
FIG. 25 illustrates a block diagram of a system within which aspects of the present embodiments may be implemented, according to another embodiment.
FIG. 26 shows two remote devices communicating over a communication network in accordance with an example of the present principles.
FIG. 27 shows the syntax of a signal in accordance with an example of the present principles.

DETAILED DESCRIPTION

This application describes a variety of aspects, including tools, features, embodiments, models, approaches, etc. Many of these aspects are described with specificity and, at least to show the individual characteristics, are often described in a manner that may sound limiting. However, this is for purposes of clarity in description, and does not limit the application or scope of those aspects. Indeed, all of the different aspects can be combined and interchanged to provide further aspects. Moreover, the aspects can be combined and interchanged with aspects described in earlier filings as well. The aspects described and contemplated in this application can be implemented in many different forms. FIGs. 1, 2 and 3 below provide some embodiments, but other embodiments are contemplated, and the discussion of FIGs. 1, 2 and 3 does not limit the breadth of the implementations. At least one of the aspects generally relates to video encoding and decoding, and at least one other aspect generally relates to transmitting a bitstream generated or encoded. These and other aspects can be implemented as a method, an apparatus, a computer readable storage medium having stored thereon instructions for encoding or decoding video data according to any of the methods described, and/or a computer readable storage medium having stored thereon a bitstream generated according to any of the methods described. In the present application, the terms “reconstructed” and “decoded” may be used interchangeably, the terms “pixel” and “sample” may be used interchangeably, and the terms “image,” “picture” and “frame” may be used interchangeably. Various methods are described herein, and each of the methods comprises one or more steps or actions for achieving the described method. Unless a specific order of steps or actions is required for proper operation of the method, the order and/or use of specific steps and/or actions may be modified or combined. Additionally, terms such as “first”, “second”, etc. may be used in various embodiments to modify an element, component, step, operation, etc., such as, for example, a “first decoding” and a “second decoding”. Use of such terms does not imply an ordering to the modified operations unless specifically required. So, in this example, the first decoding need not be performed before the second decoding, and may occur, for example, before, during, or in an overlapping time period with the second decoding. The present aspects are not limited to VVC or HEVC, and can be applied, for example, to other standards and recommendations, whether pre-existing or future-developed, and extensions of any such standards and recommendations (including VVC and HEVC). Unless indicated otherwise, or technically precluded, the aspects described in this application can be used individually or in combination. FIG. 1 illustrates a block diagram of an example of a system in which various aspects and embodiments can be implemented. System 100 may be embodied as a device including the various components described below and is configured to perform one or more of the aspects described in this application.
Examples of such devices include, but are not limited to, various electronic devices such as personal computers, laptop computers, smartphones, tablet computers, digital multimedia set top boxes, digital television receivers, personal video recording systems, connected home appliances, and servers. Elements of system 100, singly or in combination, may be embodied in a single integrated circuit, multiple ICs, and/or discrete components. For example, in at least one embodiment, the processing and encoder/decoder elements of system 100 are distributed across multiple ICs and/or discrete components. In various embodiments, the system 100 is communicatively coupled to other systems, or to other electronic devices, via, for example, a communications bus or through dedicated input and/or output ports. In various embodiments, the system 100 is configured to implement one or more of the aspects described in this application. The system 100 includes at least one processor 110 configured to execute instructions loaded therein for implementing, for example, the various aspects described in this application. Processor 110 may include embedded memory, an input output interface, and various other circuitries as known in the art. The system 100 includes at least one memory 120 (e.g., a volatile memory device, and/or a non-volatile memory device). System 100 includes a storage device 140, which may include non-volatile memory and/or volatile memory, including, but not limited to, EEPROM, ROM, PROM, RAM, DRAM, SRAM, flash, magnetic disk drive, and/or optical disk drive. The storage device 140 may include an internal storage device, an attached storage device, and/or a network accessible storage device, as non-limiting examples. System 100 includes an encoder/decoder module 130 configured, for example, to process data to provide an encoded video or decoded video, and the encoder/decoder module 130 may include its own processor and memory. The encoder/decoder module 130 represents module(s) that may be included in a device to perform the encoding and/or decoding functions. As is known, a device may include one or both of the encoding and decoding modules. Additionally, encoder/decoder module 130 may be implemented as a separate element of system 100 or may be incorporated within processor 110 as a combination of hardware and software as known to those skilled in the art. Program code to be loaded onto processor 110 or encoder/decoder 130 to perform the various aspects described in this application may be stored in storage device 140 and subsequently loaded onto memory 120 for execution by processor 110. In accordance with various embodiments, one or more of processor 110, memory 120, storage device 140, and encoder/decoder module 130 may store one or more of various items during the performance of the processes described in this application. Such stored items may include, but are not limited to, the input video, the decoded video or portions of the decoded video, the bitstream, matrices, variables, and intermediate or final results from the processing of equations, formulas, operations, and operational logic. In some embodiments, memory inside of the processor 110 and/or the encoder/decoder module 130 is used to store instructions and to provide working memory for processing that is needed during encoding or decoding.
In other embodiments, however, a memory external to the processing device (for example, the processing device may be either the processor 110 or the encoder/decoder module 130) is used for one or more of these functions. The external memory may be the memory 120 and/or the storage device 140, for example, a dynamic volatile memory and/or a non-volatile flash memory. In several embodiments, an external non-volatile flash memory is used to store the operating system of a television. In at least one embodiment, a fast external dynamic volatile memory such as a RAM is used as working memory for video coding and decoding operations, such as for MPEG-2, HEVC (HEVC refers to High Efficiency Video Coding, also known as H.265 and MPEG-H Part 2), or VVC (Versatile Video Coding also known as H.266, standard developed by JVET, the Joint Video Experts Team). The input to the elements of system 100 may be provided through various input devices as indicated in block 105. Such input devices include, but are not limited to, (i) a radio frequency (RF) portion that receives an RF signal transmitted, for example, over the air by a broadcaster, (ii) a Component (COMP) input terminal (or a set of COMP input terminals), (iii) a Universal Serial Bus (USB) input terminal, and/or (iv) a High Definition Multimedia Interface (HDMI) input terminal. Other examples, not shown in FIG.1, include composite video. In various embodiments, the input devices of block 105 have associated respective input processing elements as known in the art. For example, the RF portion may be associated with elements suitable for (i) selecting a desired frequency (also referred to as selecting a signal, or band-limiting a signal to a band of frequencies), (ii) down converting the selected signal, (iii) band-limiting again to a narrower band of frequencies to select (for example) a signal frequency band which can be referred to as a channel in certain embodiments, (iv) demodulating the down converted and band-limited signal, (v) performing error correction, and (vi) demultiplexing to select the desired stream of data packets. The RF portion of various embodiments includes one or more elements to perform these functions, for example, frequency selectors, signal selectors, band-limiters, channel selectors, filters, downconverters, demodulators, error correctors, and demultiplexers. The RF portion may include a tuner that performs various of these functions, including, for example, down converting the received signal to a lower frequency (for example, an intermediate frequency or a near-baseband frequency) or to baseband. In one set-top box embodiment, the RF portion and its associated input processing element receives an RF signal transmitted over a wired (for example, cable) medium, and performs frequency selection by filtering, down converting, and filtering again to a desired frequency band. Various embodiments rearrange the order of the above-described (and other) elements, remove some of these elements, and/or add other elements performing similar or different functions. Adding elements may include inserting elements in between existing elements, for example, inserting amplifiers and an analog-to-digital converter. In various embodiments, the RF portion includes an antenna. Additionally, the USB and/or HDMI terminals may include respective interface processors for connecting system 100 to other electronic devices across USB and/or HDMI connections. 
It is to be understood that various aspects of input processing, for example, Reed-Solomon error correction, may be implemented, for example, within a separate input processing IC or within processor 110 as necessary. Similarly, aspects of USB or HDMI interface processing may be implemented within separate interface ICs or within processor 110 as necessary. The demodulated, error corrected, and demultiplexed stream is provided to various processing elements, including, for example, processor 110, and encoder/decoder 130 operating in combination with the memory and storage elements to process the data stream as necessary for presentation on an output device. Various elements of system 100 may be provided within an integrated housing. Within the integrated housing, the various elements may be interconnected and transmit data therebetween using a suitable connection arrangement 115, for example, an internal bus as known in the art, including the I2C bus, wiring, and printed circuit boards. The system 100 includes communication interface 150 that enables communication with other devices via communication channel 190. The communication interface 150 may include, but is not limited to, a transceiver configured to transmit and to receive data over communication channel 190. The communication interface 150 may include, but is not limited to, a modem or network card, and the communication channel 190 may be implemented, for example, within a wired and/or a wireless medium. Data is streamed to the system 100, in various embodiments, using a Wi-Fi network such as IEEE 802.11 (IEEE refers to the Institute of Electrical and Electronics Engineers). The Wi-Fi signal of these embodiments is received over the communications channel 190 and the communications interface 150 which are adapted for Wi-Fi communications. The communications channel 190 of these embodiments is typically connected to an access point or router that provides access to outside networks including the Internet for allowing streaming applications and other over-the-top communications. Other embodiments provide streamed data to the system 100 using a set-top box that delivers the data over the HDMI connection of the input block 105. Still other embodiments provide streamed data to the system 100 using the RF connection of the input block 105. As indicated above, various embodiments provide data in a non-streaming manner. Additionally, various embodiments use wireless networks other than Wi-Fi, for example a cellular network or a Bluetooth network. The system 100 may provide an output signal to various output devices, including a display 165, speakers 175, and other peripheral devices 185. The display 165 of various embodiments includes one or more of, for example, a touchscreen display, an organic light-emitting diode (OLED) display, a curved display, and/or a foldable display. The display 165 can be for a television, a tablet, a laptop, a cell phone (mobile phone), or other devices. The display 165 can also be integrated with other components (for example, as in a smart phone), or separate (for example, an external monitor for a laptop). The other peripheral devices 185 include, in various examples of embodiments, one or more of a stand-alone digital video disc (or digital versatile disc) (DVD, for both terms), a disk player, a stereo system, and/or a lighting system. Various embodiments use one or more peripheral devices 185 that provide a function based on the output of the system 100.
For example, a disk player performs the function of playing the output of the system 100. In various embodiments, control signals are communicated between the system 100 and the display 165, speakers 175, or other peripheral devices 185 using signaling such as AV.Link, CEC, or other communications protocols that enable device-to-device control with or without user intervention. The output devices may be communicatively coupled to system 100 via dedicated connections through respective interfaces 160, 170, and 180. Alternatively, the output devices may be connected to system 100 using the communications channel 190 via the communications interface 150. The display 165 and speakers 175 may be integrated in a single unit with the other components of system 100 in an electronic device, for example, a television. In various embodiments, the display interface 160 includes a display driver, for example, a timing controller (T Con) chip. The display 165 and speakers 175 may alternatively be separate from one or more of the other components, for example, if the RF portion of input 105 is part of a separate set-top box. In various embodiments in which the display 165 and speakers 175 are external components, the output signal may be provided via dedicated output connections, including, for example, HDMI ports, USB ports, or COMP outputs. The embodiments can be carried out by computer software implemented by the processor 110 or by hardware, or by a combination of hardware and software. As a non-limiting example, the embodiments can be implemented by one or more integrated circuits. The memory 120 can be of any type appropriate to the technical environment and can be implemented using any appropriate data storage technology, such as optical memory devices, magnetic memory devices, semiconductor-based memory devices, fixed memory, and removable memory, as non-limiting examples. The processor 110 can be of any type appropriate to the technical environment, and can encompass one or more of microprocessors, general purpose computers, special purpose computers, and processors based on a multi-core architecture, as non-limiting examples. FIG. 2 illustrates an example of a block-based hybrid video encoder 200. Variations of this encoder 200 are contemplated, but the encoder 200 is described below for purposes of clarity without describing all expected variations. In some embodiments, FIG. 2 also illustrates an encoder in which improvements are made to the HEVC standard or a VVC standard (Versatile Video Coding, Standard ITU-T H.266, ISO/IEC 23090-3, 2020), or an encoder employing technologies similar to HEVC or VVC, such as the ECM encoder under development by JVET (Joint Video Exploration Team). Before being encoded, the video sequence may go through pre-encoding processing (201), for example, applying a color transform to the input color picture (e.g., conversion from RGB 4:4:4 to YCbCr 4:2:0), or performing a remapping of the input picture components in order to get a signal distribution more resilient to compression (for instance using a histogram equalization of color components), or re-sizing the picture (ex: down-scaling). Metadata can be associated with the pre-processing and attached to the bitstream. In the encoder 200, a picture is encoded by the encoder elements as described below. The picture to be encoded is partitioned (202) and processed in units of, for example, CUs (Coding Units) or blocks.
In the disclosure, different expressions may be used to refer to such a unit or block resulting from a partitioning of the picture. Such wording may be coding unit or CU, coding block or CB, luminance CB, or block. A CTU (Coding Tree Unit) refers to a group of blocks or group of units or group of coding units (CUs). In some embodiments, a CTU may be considered as a block, or as a unit in itself. An example of a partitioning using CTUs and CUs is illustrated on FIG. 15. For example, in VVC, as in HEVC, a picture is partitioned into multiple non-overlapping CTUs. A CTU size in VVC can be set up to 128 × 128 or 256 × 256 in units of luma samples, while in HEVC, it can be set up to 64 × 64. In HEVC, a recursive Quad-tree (QT) split can be applied to each CTU, resulting in one or multiple CUs, all having square shapes. In VVC, rectangular CUs are supported together with square CUs. A binary tree (BT) split and a ternary tree (TT) split are also adopted in VVC. FIG. 16 illustrates some examples of partitioning of a CTU. Further splitting of the obtained CUs is also possible. Each unit is encoded using, for example, either an intra or inter mode. When a unit is encoded in an intra mode, it performs intra prediction (260). In an inter mode, motion estimation (275) and compensation (270) are performed. The encoder decides (205) which one of the intra mode or inter mode to use for encoding the unit, and indicates the intra/inter decision by, for example, a prediction mode flag. The encoder may also blend (263) the intra prediction result and the inter prediction result, or blend results from different intra/inter prediction methods. Prediction residuals are calculated, for example, by subtracting (210) the predicted block from the original image block. The motion refinement module (272) uses already available reference pictures in order to refine the motion field of a block without reference to the original block. A motion field for a region can be considered as a collection of motion vectors for all pixels within the region. If the motion vectors are sub-block-based, the motion field can also be represented as the collection of all sub-block motion vectors in the region (all pixels within a sub-block have the same motion vector, and the motion vectors may vary from sub-block to sub-block). If a single motion vector is used for the region, the motion field for the region can also be represented by the single motion vector (same motion vectors for all pixels in the region). The prediction residuals are then transformed (225) and quantized (230). The quantized transform coefficients, as well as motion vectors and other syntax elements, are entropy coded (245) to output a bitstream. The encoder can skip the transform and apply quantization directly to the non-transformed residual signal. The encoder can bypass both transform and quantization, i.e., the residual is coded directly without the application of the transform or quantization processes. The encoder decodes (reconstructs) an encoded block to provide a reference for further predictions. The quantized transform coefficients are de-quantized (240) and inverse transformed (250) to decode the prediction residuals. Combining (255) the decoded prediction residuals and the predicted block, an image block is reconstructed. In-loop filters (265) are applied to the reconstructed picture to perform, for example, deblocking/SAO (Sample Adaptive Offset) filtering to reduce encoding artifacts. The filtered image is stored in a reference picture buffer (280).
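By way of illustration only, the following Python sketch mirrors the reconstruction path described above (the reference numerals refer to FIG. 2). It is a schematic under simplifying assumptions, not the actual VVC/ECM processing: a flat scalar quantizer with an assumed step q_step stands in for the transform and quantization pair, and entropy coding is omitted.

import numpy as np

def encode_block_schematic(original, prediction, q_step=8):
    # Prediction residual (210), computed on 10-bit samples.
    residual = original.astype(np.int32) - prediction.astype(np.int32)
    # Transform + quantization (225, 230), modeled here as uniform scalar quantization.
    quantized = np.round(residual / q_step).astype(np.int32)
    # The encoder also runs the decoding path (240, 250, 255) to build its own references.
    dequantized = quantized * q_step
    reconstructed = np.clip(prediction.astype(np.int32) + dequantized, 0, 1023)
    # 'reconstructed' is then passed to the in-loop filters (265).
    return quantized, reconstructed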
FIG. 3 illustrates a block diagram of a video decoder 300. In the decoder 300, a bitstream is decoded by the decoder elements as described below. Video decoder 300 generally performs a decoding pass reciprocal to the encoding pass as described in FIG. 2. The encoder 200 also generally performs video decoding as part of encoding video data. In particular, the input of the decoder includes a video bitstream, which can be generated by video encoder 200. The bitstream is first entropy decoded (330) to obtain transform coefficients, motion vectors, and other coded information. The picture partition information indicates how the picture is partitioned. The decoder may therefore divide (335) the picture according to the decoded picture partitioning information. The transform coefficients are de-quantized (340) and inverse transformed (350) to decode the prediction residuals. Combining (355) the decoded prediction residuals and the predicted block, an image block is reconstructed. The predicted block can be obtained (370) from intra prediction (360) or motion-compensated prediction (i.e., inter prediction) (375). The decoder may blend (373) the intra prediction result and the inter prediction result, or blend results from multiple intra/inter prediction methods. Before motion compensation, the motion field may be refined (372) by using already available reference pictures. In-loop filters (365) are applied to the reconstructed image. The filtered image is stored in a reference picture buffer (380). The decoded picture can further go through post-decoding processing (385), for example, an inverse color transform (e.g. conversion from YCbCr 4:2:0 to RGB 4:4:4), or an inverse remapping performing the inverse of the remapping process performed in the pre-encoding processing (201), or re-sizing the reconstructed pictures (ex: up-scaling). The post-decoding processing can use metadata derived in the pre-encoding processing and signaled in the bitstream. Some of the embodiments described herein relate to in-loop filtering. The embodiments provided herein aim at improving the reconstructed samples of a block of a video being encoded or decoded. More particularly, embodiments provide for adapting the filtering strength of a Bilateral Filter (BIF) according to coding information used for encoding the block of the video. In some embodiments, the BIF strength is determined based on a deblocking filter strength used for the block. Any one of the embodiments described herein can be implemented, for instance, in an in-loop filter module of a video encoder or video decoder. For instance, the embodiments described herein can be implemented in the in-loop filter module 265 of the video encoder 200 or the in-loop filter module 365 of the video decoder 300. Performing quantization in the transform domain is a technique known for better preserving information in images and video compared to quantizing in the pixel domain. However, it is also known that quantized transform blocks may produce ringing artifacts around edges, both in still images and in moving objects in videos. Applying a bilateral filter (BIF) may significantly reduce ringing artifacts. BIF has been introduced in the JVET standardization group, for example in the contribution JVET-D0069. In JVET-D0069, BIF is applied on decoded sample values directly after the inverse transform. In subsequent contributions, BIF is performed on reconstructed samples after the deblocking filtering, with a larger filter, and with an optimized LUT (an approximation of the weights of the filter).
Furthermore, BIF can be operated with different filtering strengths, such as 0 (half filtering strength), 1 (full filtering strength), or 2 (double filtering strength). In the current ECM, the BIF strength is set to 1 and is indicated in the PPS, so that BIF is applied with the same strength for the entire frame of a video. As video contents and their characteristics are in general different, having the same filtering strength for an entire picture may not be optimal. Certain blocks may not need to be filtered by BIF because they are already smoothed by a strong deblocking filtering, and there are also conditions where BIF can work more optimally on blocks that the deblocking filter has skipped. Some embodiments provide for using an adaptive BIF strength for the various contents and characteristics of blocks in a video. In some embodiments, rules are provided for determining the BIF strength based on coding information, such as the deblocking filter boundary strength, the size/shape of the block, the presence or absence of non-zero coefficients, QP (quantization parameter) values, etc. Such rules may provide better coding performance and alleviate ringing artifact problems around edges. In JVET-D0069, at a high level, the bilateral filter is derived from Gaussian filters and can be described as follows: each sample in the reconstructed picture is replaced by a weighted average of itself and its neighbors. The weights are calculated based on the distance from the center sample as well as the difference in sample values. Because the filter is in the shape of a small plus sign as shown in FIG. 4, all of the distances are 0 or 1. A sample located at (i, j) is filtered using its neighboring sample (k, l). The weight ω(i, j, k, l) assigned to sample (k, l) for filtering sample (i, j) is defined as:

ω(i, j, k, l) = e^( −((i − k)² + (j − l)²)/(2σ_d²) − (I(i, j) − I(k, l))²/(2σ_r²) )   (Eq. 1)

I(i, j) and I(k, l) are the original reconstructed intensity values of samples (i, j) and (k, l), respectively. σ_d is the spatial parameter, and σ_r is the range parameter. The property (or strength) of the bilateral filter is controlled by these two parameters. Samples located closer to the sample to be filtered, and samples having a smaller intensity difference to the sample to be filtered, will have a larger weight than samples further away and with a larger intensity difference. In JVET-D0069, σ_d is based on the transform unit size (Eq. 2), and σ_r is based on the QP used for the current block (Eq. 3):

σ_d = 0.92 − min(TU block width, TU block height)/40   (Eq. 2)

σ_r = (QP − 17)/2   (Eq. 3)

In JVET-D0069, the bilateral filter is applied to each TU block directly after the inverse transform in both the encoder and the decoder. As a result of this, subsequent intra-coded blocks will predict from the sample values that have been filtered with the bilateral filter. This also makes it possible to include the bilateral filter operation in the rate-distortion decisions in the encoder, and this is how the filter has been implemented in the JEM in JVET-D0069. In JVET-D0069, each sample in the transform unit is filtered using its direct neighboring samples only. The filter has a plus-sign-shaped filter aperture centered at the sample to be filtered. The output filtered sample value I_F(i, j) is calculated as:

I_F(i, j) = Σ_{k,l} I(k, l) · ω(i, j, k, l) / Σ_{k,l} ω(i, j, k, l)   (Eq. 4)
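To make the filtering of Equations 1 to 4 concrete, a minimal floating-point Python sketch is given below. It assumes 10-bit intra-coded samples and a TU of size at most 16×16, and uses Eq. 2 and Eq. 3 for the two parameters; it illustrates the plus-shaped filter only and is not the integerized, LUT-based implementation used in practice.

import numpy as np

def bilateral_filter_plus(block, qp):
    # Plus-shaped bilateral filter following Eq. 1 to Eq. 4 (intra case).
    h, w = block.shape
    src = block.astype(np.float64)
    if qp <= 17:
        return src.copy()                  # the filter is only used for QP 18 and higher
    sigma_d = 0.92 - min(w, h) / 40.0      # Eq. 2
    sigma_r = (qp - 17) / 2.0              # Eq. 3
    out = src.copy()
    for i in range(h):
        for j in range(w):
            num = den = 0.0
            for di, dj in ((0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)):
                k, l = i + di, j + dj
                if 0 <= k < h and 0 <= l < w:
                    d2 = di * di + dj * dj         # 0 for the center, 1 for neighbors
                    r2 = (src[i, j] - src[k, l]) ** 2
                    wgt = np.exp(-d2 / (2 * sigma_d ** 2) - r2 / (2 * sigma_r ** 2))  # Eq. 1
                    num += src[k, l] * wgt
                    den += wgt
            out[i, j] = num / den                  # Eq. 4
    return out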
For TU sizes larger than 16 × 16, the block is treated as several 16 × 16 blocks using TU block width = TU block height = 16 in Equation 2. Also, rectangular blocks are treated as several instances of square blocks. In order to reduce the number of calculations, the bilateral filter has been implemented using a look-up table (LUT) storing all weights for a particular QP in a two-dimensional array (FIG. 5). The LUT uses the intensity difference between the sample to be filtered and the reference sample as the index of the LUT in one dimension, and the TU size as the index in the other dimension. For efficient storage of the LUT, weights are rounded to 8-bit precision. The bilateral filter is a five-tap filter in the shape of a plus sign. The strength of the filter is based only on the TU size and QP. In JVET-E0032, the filter strength is lower for blocks using inter prediction. Inter-predicted blocks typically have less residual than intra-predicted blocks, and therefore it makes sense to filter the reconstruction of inter-predicted blocks less. The filter strength for intra-predicted blocks is set as before, but for inter-predicted blocks the following spatial weight is used:

σ_d = 0.72 − min(TU block width, TU block height)/40   (Eq. 5)

In JVET-F0034, the size of the look-up table (LUT) for the bilateral filter, described in JVET-E0032, is reduced. The goal of the LUT is to pre-calculate the weights of the bilateral filter:

ω(i, j, k, l) = e^( −((i − k)² + (j − l)²)/(2σ_d²) − (I(i, j) − I(k, l))²/(2σ_r²) )   (Eq. 6)

so that the filtered pixel I_F(i, j) can be calculated as:

I_F(i, j) = Σ_{k,l} I(k, l) · ω(i, j, k, l) / Σ_{k,l} ω(i, j, k, l)   (Eq. 7)
For the center weight, i.e., the weight for the center pixel, k = i and l = j. Hence (i − k) = (j − l) = 0 and I(k, l) = I(i, j), which means that the center weight

ω(i, j, i, j) = e^0

is always 1.0. For all other weights, (i − k)² + (j − l)² = 1, since only the 4-neighbors of a pixel are included in the filtering (a “plus-shaped” filter kernel). Therefore

ω(I) = e^( −1/(2σ_d²) ) · e^( −(I − I_C)²/(2σ_r²) )

where I_C is the intensity of the center pixel. Since the center weight is always 1.0, no LUT is needed for it. For the other weights, it is possible to use a 3D LUT indexed over the following dimensions:
Modes: the variable σ_d can take 6 different values depending upon the TU size and type of block; 3 for intra blocks (4×4, 8×8, and 16×16 blocks) and 3 for inter blocks (4×4, 8×8, and 16×16 blocks). (Rectangular blocks use the smaller dimension.)
QPs: the variable σ_r is calculated from the QP value, and the filter is only turned on for QP 18 and higher: for QP 17 and lower, σ_r becomes too small to change the filtered value. Therefore, this dimension can take 34 different values (18 through 51).
Absolute intensity difference: the value I − I_C can take 1024 different values for 10-bit luma values.
Each weight is stored using an unsigned short. Thus, a brute-force implementation would need 6 × 34 × 1024 × 2 = 417792 bytes of LUT memory. JVET-F0096 investigates how to remove the division and replaces it with a multiplication and a lookup table (LUT). To keep the size of the LUT as small as possible, from Eq. 7 the numerator and the denominator of the division should be as small as possible. To reduce the numerator, the filtering equation is rewritten using differences. To reduce the denominator, the large center weight value for inter blocks of size 16×16 and larger is avoided by turning the filter off for these blocks. As is explained in JVET-F0034, since the filter (plus-shaped) only touches the center pixel and its 4-neighbors, this equation (Eq. 7) can be written as

I_F = (ω_C · I_C + ω_L · I_L + ω_R · I_R + ω_A · I_A + ω_B · I_B) / (ω_C + ω_L + ω_R + ω_A + ω_B)   (Eq. 8)

where I_C is the intensity of the center pixel, and I_L, I_R, I_A and I_B are the intensities of the left, right, above and below pixels, respectively. Likewise, ω_C is the weight for the center pixel, and ω_L, ω_R, ω_A and ω_B are the corresponding weights for the neighboring pixels. The numerator can become relatively big. Fortunately, it is possible to rewrite Equation 8 as

I_F = I_C + (ω_L · ΔI_L + ω_R · ΔI_R + ω_A · ΔI_A + ω_B · ΔI_B) / (ω_C + ω_L + ω_R + ω_A + ω_B)   (Eq. 9)

where ΔI_L = I_L − I_C and ΔI_R = I_R − I_C, etc. When a large difference in intensity ΔI occurs, the bilateral filter will choose a small weight ω. This means that the product ΔI · ω will always be small, since either ΔI is small, or otherwise ω is small. The denominator in Equation 9 is equal to ω_C + ω_L + ω_R + ω_A + ω_B. The maximum value for the non-center weights ω_L, ω_R, ω_A and ω_B is 31 in test 2 of JVET-F0034. The center weight can take the following values:

Table 1. Center weight values based on the prediction mode and the TB length in JEM.
min(TUwidth, TUheight)    4     8     16
intra                     65    81    196
inter                     113   196   4079

In JVET-G0076, a simplification is provided for the bilateral filter. For each block, the inverse quantized transform coefficients are examined, and if they contain only one non-zero coefficient that is at the DC position, then bilateral filtering is skipped for the associated reconstructed block. In JVET-J0021, a new filtering process is provided for BIF. The bilateral filter is always applied to luma blocks with non-zero transform coefficients and a slice quantization parameter larger than 17; therefore, there is no need to signal the usage of the bilateral filter. The bilateral filter, if applied, is performed on decoded samples right after the inverse transform. In addition, the filter parameters, i.e., the weights, are explicitly derived from the coded information. The filtering process (Eq. 7) is rewritten as:

I'_{0,0} = I_{0,0} + Σ_{k=1..K} W_k(|I_{k,0} − I_{0,0}|) × (I_{k,0} − I_{0,0})
where I_{0,0} is the intensity of the current sample, I'_{0,0} is the modified intensity of the current sample, and I_{k,0} and W_k(∙) are the intensity and weighting parameter of the k-th neighboring sample, respectively. An example of one current sample and its four neighboring samples (i.e., K = 4), in the shape of a plus sign, is depicted in FIG. 6. More specifically, the weight W_k(x) associated with the k-th neighboring sample is defined as follows:

W_k(x) = Distance_k × Range_k(x)

wherein Distance_k is a spatial weight depending on the position of the k-th neighboring sample and Range_k(x) is a range weight decreasing with the sample difference x (their detailed definitions are given in JVET-J0021), and σ is dependent on the coding mode and the coding block sizes. To further improve the coding performance, for inter-coded blocks, the intensity difference between the current sample and one of its neighboring samples is replaced by a representative intensity difference between two windows covering the current sample and the neighboring sample. Therefore, the equation of the filtering process is revised to:

I'_{0,0} = I_{0,0} + Σ_{k=1..K} W_k( (1/M) Σ_m |I_{k,m} − I_{0,m}| ) × (I_{k,0} − I_{0,0})

wherein I_{k,m} and I_{0,m} represent the m-th sample value within the windows centered at I_{k,0} and I_{0,0}, respectively. In JVET-J0021, the window size is set to 3×3. An example of the two windows covering I_{2,0} and I_{0,0} is depicted in FIG. 7. A spatial filter strength adjustment based on the CU area size for the bilateral filter is proposed in JVET-K0231. Based on JVET-J0021 and JVET-F0096, some methods focus on reducing the size of the LUT used to approximate the numerator and denominator in the following equation:
I_F = I_C + (ω_L · ΔI_L + ω_R · ΔI_R + ω_A · ΔI_A + ω_B · ΔI_B) / (ω_C + ω_L + ω_R + ω_A + ω_B)
where I_F is the filtered sample, I_C is the intensity of the center sample (the sample to be filtered), and I_L, I_R, I_A and I_B are the intensities of the samples to the left, to the right, above and below, respectively, as illustrated on FIG. 8. The delta values are differences against the center sample: ΔI_L = I_L − I_C and ΔI_R = I_R − I_C, etc. The weights are calculated as
ω_X = e^( −1/(2σ_d²) ) · e^( −(ΔI_X)²/(2σ_r²) )
where X can be L, R, A or B. In JVET-O0548, JVET-V0094, and JVET-P0073, a combination of the bilateral filter with the sample adaptive offset (SAO) loop filter is provided. As detailed in JVET-V0094, the filter is carried out in the sample adaptive offset (SAO) loop-filter stage, as shown in FIG. 9. Both the bilateral filter (BIF) and SAO use samples from the deblocking filtering as input. Each filter creates an offset per sample, and these offsets are added to the input sample and then clipped. In detail, the output sample I_OUT is obtained as

I_OUT = clip3( I_C + ΔI_BIF + ΔI_SAO )

where I_C is the input sample from deblocking, ΔI_BIF is the offset from the bilateral filter and ΔI_SAO is the offset from SAO. It is reported that a diamond 5x5 filter kernel is used together with 26 tables of 16 entries each, and that it is operating in the same loop-filter stage as SAO, as illustrated on FIG. 10, where I_C is the center sample and the samples surrounding it are denoted A, B, L and R, which stand for above, below, left and right, and where NW, NE, SW, SE stand for north-west, etc. Likewise, AA stands for above-above, BB for below-below, etc. In JVET-W0098, in the same way as BIF-luma, the BIF-chroma is also performed in parallel with the SAO process, as shown in FIG. 11. The BIF-chroma and SAO use as input the same chroma samples that are produced by the deblocking filter and generate two offsets per chroma sample in parallel. These two offsets are both added to the input chroma sample to obtain a sum, which is then clipped to form the final output chroma sample value. The BIF-chroma provides an on/off control mechanism at the CTU level and the slice level. Since the CCSAO contribution JVET-V0153, the in-loop filtering flow and BIF's position are as illustrated on FIG. 12. For each output sample, three offsets are calculated and clipped:

I_OUT = clip( I_DBF + ΔI_BIF + ΔI_CCSAO + ΔI_SAO ).

In JVET-AE0044, the calculation of BIF's offset (ΔI_BIF) is modified by: 1. a TU scale factor k_TU^{shape} that depends on the TU shape size; 2. a TU scale factor k_TU^{MAD} that depends on the mean absolute difference (MAD) of the TU; 3. an interpolation of the BIF LUTs. The BIF offset equals a sum of 12 offsets (FIG. 10):
ΔI_BIF = ( k_TU · Σ_{(x,y)} m_{BIF,x,y,QP}(ΔI_{x,y}) ) ≫ n   (Eq. 10)
where the sum is over the 12 neighbor positions (x, y) of FIG. 10, n is the number of cut-off bits (n = 5 in the ECM), and m_{BIF,x,y,QP} is based on a 26x16 LUT of 8-bit integers denoted by LUT_{base,QP} (for 26 QPs from 17 to 42 and 16 intervals of sample difference). For the innermost positions, i.e., {x, y} = {−1,0}, {1,0}, {0,−1} or {0,1}, the LUT is used directly, i.e., m_{BIF,x,y,QP}(Δ) = LUT_{base,clip(QP,17,42)}( min{ (Δ + 4) ≫ 3, 15 } ). For the remaining positions, the LUT output is right-shifted: m_{BIF,x,y,QP}(Δ) = LUT_{base,clip(QP,17,42)}( min{ (Δ + 4) ≫ 3, 15 } ) ≫ 1. Let k_TU = min(width_TU, height_TU). In JVET-AE0044, the base LUT size is preserved, but the content of the base LUT is changed and the calculation of m_{BIF,x,y,QP} is also changed. The method of JVET-AE0044 uses three scale factors (c_{1,0}, c_{1,1} and c_{2,0}) to pre-compute three LUTs for the three different neighbor distances (1, √2, and 2), i.e.:

LUT_{x,y,QP}(i) = c_{|x|,|y|} · LUT_{base,clip(QP,17,42)}(i)

and an averaging linear interpolation for half of the values of the cut-off least significant bits, i.e.: m_{BIF,x,y,QP}(Δ) = (e_1 + e_2 + 1) ≫ 1, where e_1 and e_2 are successive entries of LUT_{x,y,QP}. For chroma, the number of cut-off bits is decreased from 3 to 2. In the method of JVET-AE0044, the number of cut-off bits n in Equation (10) is increased from 5 to 8, and k_TU = k_TU^{shape} + k_TU^{MAD}, where k_TU^{shape} is based on the TU's shape sizes and k_TU^{MAD} is based on the mean absolute difference (MAD) of the TU. Both k_TU^{shape} and k_TU^{MAD} are calculated using LUTs. More precisely, let LUT_{w,h} be a 2D 8 × 8 lookup table with non-negative 8-bit integer values, and let LUT_MAD be a 1D 16-entry lookup table with non-negative 8-bit integer values. Then these scale factors are defined as follows:

k_TU^{shape} = LUT_{w,h}( log2(width_TU), log2(height_TU) ),
k_TU^{MAD} = LUT_MAD( min( MAD_TU ≫ 4, 15 ) ).

The MAD of an (h × w)-size TU with channel samples denoted by s_{i,j} is defined as follows:

MAD_TU = (1/(h·w)) · Σ_{i,j} | s_{i,j} − s̄ |,  where s̄ is the mean of the TU samples.
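As an illustration of the scale-factor derivation just described, a short Python sketch follows. The LUT contents below are placeholders (the actual LUT_{w,h} and LUT_MAD values, per component and per intra/inter prediction, are specified in JVET-AE0044), and the TU dimensions are assumed to be powers of two.

import numpy as np

LUT_WH = np.full((8, 8), 16, dtype=np.int32)   # placeholder 8x8 LUT_{w,h}
LUT_MAD = np.full(16, 16, dtype=np.int32)      # placeholder 16-entry LUT_MAD

def tu_scale_factor(tu_samples):
    # k_TU = k_TU^shape + k_TU^MAD for one TU (sketch).
    h, w = tu_samples.shape
    # MAD of the TU samples around their mean, as defined above.
    mad = int(np.mean(np.abs(tu_samples - np.mean(tu_samples))))
    k_shape = int(LUT_WH[int(np.log2(w)), int(np.log2(h))])
    k_mad = int(LUT_MAD[min(mad >> 4, 15)])
    return k_shape + k_mad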
In total, four 64-byte tables LUT_{w,h} and four 16-byte tables LUT_MAD are introduced (for the luma/chroma components and for intra/inter prediction). Note that k_TU is a constant for all samples of the same channel inside one TU. The number of arithmetic operations per sample and the size of the memory required to store the LUTs in the BIF are compared against ECM 9 (M. Coban, F. Le Leannec, R-L. Liao, K. Naser, J. Ström, L. Zhang, "Algorithm description of Enhanced Compression Model 9 (ECM 9)," document JVET-AD2025, 30th Meeting, April 2023) in Table 2 below. The value obtained in the modified version of Equation (10) before right bit-shifting belongs to the interval [−15509, 15765], and so all arithmetic can be done in 15-bit signed integers.

Table 2. Comparison of the complexity of the BIF of JVET-AE0044 against the ECM 9.0 BIF version.
              Bit width   Summations     Multiplications   LUT lookups    LUTs memory
              (maximal)   (per sample)   (per sample)      (per sample)   (bytes)
ECM 9.0       12          18             0                 6              832
JVET-AE0044   15          25             1                 12             2816

De-Blocking Filter (DBF)

A thorough description of the HEVC DBF can be found in Andrey Norkin et al., “HEVC Deblocking Filter”, IEEE Transactions on Circuits and Systems for Video Technology, Vol. 22, No. 12, December 2012. In HEVC, filtering decisions are made separately for each boundary of four-sample length that lies on the grid dividing the picture into blocks of 8x8 samples (see FIG. 13). Only boundaries which are either Prediction Unit (PU) or Transform Unit (TU) boundaries are subject to deblocking. Deblocking is performed on a four-sample part of a block boundary when all of the following criteria are met: the block boundary is a PU or TU boundary, the boundary strength bS is greater than zero, and the variation of the signal on both sides of the boundary is below a given threshold. When certain additional conditions hold, a strong filter is applied instead of the normal deblocking filter. An example of the decision workflow is depicted in FIG. 14. The following subsections provide more detailed explanations of the different conditions tested, as well as the specific deblocking implementations in the different modes. Similarly, the deblocking filter process in VVC is applied on CU boundaries, transform subblock boundaries, and prediction subblock boundaries (Versatile Video Coding, Standard ITU-T H.266, ISO/IEC 23090-3, 2020). VVC essentially allows larger blocks than HEVC, and the DBF in VVC basically extends the HEVC DBF design to address the artifacts remaining in large smooth areas. As done in HEVC, the processing order of the deblocking filter is defined as horizontal filtering for vertical edges for the entire picture first, followed by vertical filtering for horizontal edges. This specific order enables either multiple horizontal filtering or vertical filtering processes to be applied in parallel threads, or can still be implemented on a CTB-by-CTB basis with only a small processing latency. Some modifications of the DBF have been introduced in VVC, such as a filter strength of the deblocking filter dependent on the averaged luma level of the reconstructed samples, a deblocking t_C table extension and adaptation to 10-bit video, 4x4 grid deblocking for luma, a stronger deblocking filter for luma, a stronger deblocking filter for chroma, a deblocking filter for subblock boundaries, and a deblocking decision adapted to smaller differences in motion.
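The HEVC-style decision workflow of FIG. 14 can be summarized by the following Python sketch. It is deliberately simplified: the real process evaluates the activity measure on two lines of the four-sample segment and applies additional t_C-based checks before selecting the strong filter.

def deblock_decision(is_pu_or_tu_boundary, bs, p, q, beta):
    # p and q hold at least four samples on each side of the boundary,
    # with p[0] and q[0] adjacent to it.
    if not is_pu_or_tu_boundary or bs == 0:
        return "off"
    # Variation of the signal on both sides of the boundary (second derivative).
    d = abs(p[2] - 2 * p[1] + p[0]) + abs(q[2] - 2 * q[1] + q[0])
    if d >= beta:
        return "off"            # textured area: the visible edge is likely content
    # A flatness check (simplified) selects the strong filter on smooth areas.
    if abs(p[3] - p[0]) + abs(q[0] - q[3]) < (beta >> 3):
        return "strong"
    return "normal"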
In the DBF, if the boundary strength is greater than zero, additional conditions are checked for Luma and Chroma to determine whether the deblocking filtering should be applied to the block boundary with a stronger or a weaker filtering. As done in HEVC, the filter strength of the deblocking filter in VVC is controlled by the variables β and t_C, which are derived from the averaged quantization parameter (QP) of the two adjacent coding blocks. In VVC, the maximum QP was changed from 51 to 63, and it is desired to reflect the corresponding change in the deblocking t_C table, which derives the values of the deblocking parameter t_C based on the block QP. The t_C table was also adapted to 10-bit video instead of 8-bit video, as was the case for HEVC, to accommodate the extension of the QP range and 10-bit video in VVC. Furthermore, the deblocking filter process for Luma can also adaptively adjust the filtering strength of the deblocking filter by adding an offset (called qpOffset) to the QP according to the averaged Luma level of the reconstructed samples. This additional refinement is used to compensate the nonlinear transfer function, such as the Electro-Optical Transfer Function (EOTF), in the linear light domain. The mapping function between qpOffset and the luma level is signalled in the SPS and should be derived according to the transfer characteristics of the contents, since the transfer functions vary among video formats.

Luma Deblocking Filter

HEVC uses an 8x8 deblocking grid for both luma and chroma. FIG. 15 depicts an illustration of four-pixel long horizontal and vertical boundaries on the 8x8 grid. Due to the flexible block sizes in VVC, the luma deblocking is applied on a 4x4 sample grid for boundaries between CUs and TUs, and on an 8x8 grid for boundaries between PUs inside CUs, as shown in FIG. 16, to handle blocking artifacts from rectangular transform shapes. A parallel-friendly luma deblocking filter process on a 4x4 grid is achieved by restricting the number of samples to be deblocked to 1 sample on each side of a vertical luma boundary where one side has a width of 4 or less, or to 1 sample on each side of a horizontal luma boundary where one side has a height of 4 or less. Before deriving the boundary strength bS and performing filtering decisions based on spatial activity, the VVC deblocking filter decides whether to use long-tap filters and determines the appropriate filter lengths. This step is carried out to ensure that no spatial dependency exists with the adjacent vertical or horizontal block boundaries. After determining the deblocking filter lengths, the boundary strength bS is derived. Compared to HEVC, the bS derivation is modified to accommodate VVC coding tools, such as block-level differential pulse code modulation (BDPCM), combined intra-inter prediction (CIIP), intra block copy (IBC), high motion vector accuracy, etc. FIG. 17 illustrates a four-sample long vertical boundary segment formed by blocks P and Q; VVC deblocking decisions are based on lines 0 and 3. In a next step, decisions based on spatial activity are made for the cases of non-zero bS. If the deblocking filter samples belong to a large block, defined as a width larger than or equal to 32 for a vertical edge and a height larger than or equal to 32 for a horizontal edge, the spatial activity decision for the long-tap deblocking filter or stronger filter applies. Otherwise, the decision process for the short-tap deblocking filter or weak filter applies, as shown in FIG. 18.
In both cases, the decision process is similar to that of HEVC, except that further thresholds are introduced, which ultimately still depend on the QP. The short-tap deblocking filters are almost identical to the HEVC deblocking filters, the only difference consisting in the introduction of position-dependent clipping to control the differences between the filtered values and the sample values before filtering. The long-tap deblocking filter is designed to preserve inclined surfaces and linear signals across the block boundaries.

Chroma deblocking filter

The Chroma deblocking is performed on an 8x8 sample grid on the boundaries of both CUs and TUs. Compared to Luma, the deblocking lengths are limited to 3+3, 1+3 or 1+1. Minor bS adaptations are introduced to account for VVC coding modes, such as BDPCM and CIIP. The chroma strong filters are used on both sides of the block boundary. Here, the chroma strong filter is selected when both sides of the chroma edge are greater than or equal to 8 (in units of chroma samples) and the following decision with three conditions is satisfied. The first one concerns the decision on the boundary strength as well as large blocks. The second and third conditions are basically the same as for the HEVC luma decision, namely the on/off decision and the strong filter decision, respectively. If one of these three conditions is satisfied, then the remaining conditions with lower priorities are skipped.

Boundary strength (bS) and edge-level adaptability

The bS can take one of the following three possible values: 0, 1, and 2. The definition of bS in VVC is shown in Table 3.

Table 3: Definition of bS values for the boundary between two neighboring blocks (per component Y / U / V, by decreasing priority).
  • Priority 6 (at least one of the adjacent blocks is coded with intra or CIIP mode): bS = 2 / 2 / 2.
  • Priority 5 (at least one of the adjacent blocks has non-zero transform coefficients): bS = 1 / 1 / 1.
  • Priority 4 (one of the adjacent blocks is coded in IBC prediction mode and the other is coded in inter prediction mode): bS = 1 / 1 / 1.
  • Priority 3 (the absolute difference between the motion vectors that belong to the adjacent blocks is greater than or equal to one half luma sample): bS = 1 / 0 / 0.
  • Priority 2 (the reference pictures the two adjacent blocks refer to are different): bS = 1 / 0 / 0.
  • Priority 1 (otherwise): bS = 0 / 0 / 0.

For Luma, only block boundaries with bS values equal to 1 or 2 are filtered. For Chroma, deblocking is performed when bS is equal to 2, or when bS is equal to 1 and a large block boundary is detected. With the current development of ECM, other rules on bS for IntraGPM have been added: when at least one of the adjacent blocks is coded with a non-inter prediction mode or IntraGPM, bS for Luma and both chroma components is set to 2. In the current ECM, the BIF filtering is designed to filter the reconstructed samples obtained after the deblocking filter is applied, based on only one filtering strength out of three: 0 (half filtering strength), 1 (full filtering strength), or 2 (double filtering strength). The filtering strength is configured at the encoder side for the whole picture and signaled to the decoder at the PPS level. Therefore, the same BIF strength is used for an entire frame or picture of a video regardless of the contents and characteristics of the blocks in the picture. However, videos in general have different contents and characteristics. Some embodiments provided herein allow for an adaptive filtering strength for these various contents and characteristics.
In the current ECM, the BIF filtering is designed to filter the reconstructed samples obtained after the deblocking filter is applied, using only one of three possible filtering strengths: 0 (half filtering strength), 1 (full filtering strength), or 2 (double filtering strength). The filtering strength is configured at the encoder side for the whole picture and signaled to the decoder at the PPS level. Therefore, the BIF uses the same strength for an entire frame or picture of a video, regardless of the contents and characteristics of the blocks in the picture. However, videos in general have different contents and characteristics. Some embodiments provided herein allow for an adaptive filtering strength for these various contents and characteristics. This may be more beneficial for further increasing the BIF performance than generalizing with the same strength.

Recent contributions have attempted to improve BIF performance by proposing a control of the BIF scaling factor and designing a look-up table (LUT) for the approximation of the weights used in the filtering, as detailed in JVET-AE0044 for example. The scaling factor proposed in JVET-AE0044 is determined by the size of a TU block and the MAD of the TU block. This demonstrates that when the contents and characteristics of a TU block change, the BIF scaling factor changes too. The proposed LUT also changes the content of the original LUT in ECM. However, the same BIF filtering strength is kept for all blocks of a picture. The filtering strength controls how smooth the filtering output becomes, which in turn determines the overall filtering performance. Some embodiments provide for adaptively deciding the BIF filtering strength based on the characteristics of a block, for example a strength of the deblocking filter used for the block.

FIG. 19 illustrates an example of a method 1900 for encoding or decoding at least one block of a picture in a video according to an embodiment. For example, method 1900 is used in an in-loop filters module of the encoder 200 or the decoder 300. At 1910, a reconstructed block is obtained, for example as an output of the deblocking filter stage. At 1920, a BIF strength for the block is determined based on at least one parameter used for encoding the block. For example, the BIF strength is determined among the three values 0, 1 and 2, where 0 indicates half filtering strength, 1 indicates full filtering strength, and 2 indicates double filtering strength. For example, the at least one parameter is one of a deblocking filter boundary strength determined for the block, a size or shape of the block, a value of a flag indicating whether the block has at least one non-zero transform coefficient, a quantization parameter, an absolute difference between motion vectors that belong to blocks adjacent to the block, or a coding mode of the block. In some variants, determining the BIF strength can be based on a combination of two or more of the above parameters. Further variants for determining the BIF strength are provided below. Depending on the conditions or a given decision scheme, an adaptive BIF strength can be applied to the block. In this way, all the blocks of a picture are not filtered using a same BIF strength; rather, the BIF strength can be adapted depending on the coding conditions of the block. At 1930, the BIF filtering is applied to the block using the determined BIF strength.
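A minimal sketch of the strength semantics used at 1920/1930 follows. Here base_offset stands for a per-sample BIF offset at full strength; the actual ECM offset derivation is not reproduced.

```python
# A minimal sketch of the half/full/double strength semantics (0/1/2).

def scale_bif_offset(base_offset: int, bif_strength: int) -> int:
    if bif_strength == 0:       # half filtering strength
        return base_offset >> 1
    if bif_strength == 1:       # full filtering strength
        return base_offset
    return base_offset << 1     # double filtering strength (2)

# Example: a full-strength offset of +4 becomes +2 (half) or +8 (double).
print([scale_bif_offset(4, s) for s in (0, 1, 2)])  # [2, 4, 8]
```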
FIG. 20 illustrates an example of a method 2000 for encoding at least one block of a video according to another embodiment. In this embodiment, the usage of the adaptive BIF strength is enabled or disabled for a given picture. For example, at 2010, enabling or disabling the usage of the adaptive BIF strength is signaled in a bitstream for the picture to encode. For example, an indicator is signaled in a PPS (Picture Parameter Set) referenced by the picture. In another example, the indicator can be signaled at a slice level of the picture. In a variant, the indicator is the same as the one signaling the BIF strength for the picture in the PPS, but an additional value is used, for example 3, that indicates that adaptive BIF strength is used for blocks of the picture. At 2020, blocks of the picture are encoded, for example as described in reference to FIG. 2. At 2030, for a given block that is encoded, a reconstructed version of the block is obtained, for example after the deblocking filtering stage. At 2040, depending on the value of the indicator signaling the usage or not of adaptive BIF strength for blocks of the picture, the BIF strength is determined for the block. When the value of the indicator is 0, 1 or 2, the BIF strength for the block is set to the value of the indicator. When the value of the indicator is 3, adaptive BIF strength is applied for the block. The BIF strength for the block is determined according to one of the embodiments described above or further below, and at 2050, the BIF is applied to the block based on the determined BIF strength.

FIG. 21 illustrates an example of a method 2100 for decoding at least one block of a video according to another embodiment. As for the embodiment described with FIG. 20, the usage of the adaptive BIF strength is enabled or disabled for a given picture. For example, at 2110, enabling or disabling the usage of the adaptive BIF strength is decoded from a bitstream for the picture to decode. For example, an indicator is decoded from a PPS (Picture Parameter Set) referenced by the picture. In another example, the indicator can be signaled at a slice level of the picture. In a variant, the indicator is the same as the one signaling the BIF strength for the picture, but an additional value is used, for example 3, that indicates that adaptive BIF strength is used for blocks of the picture. At 2120, blocks of the picture are decoded, for example as described in reference to FIG. 3. For a given block that is decoded, a reconstructed version of the block is obtained, for example after the deblocking filtering stage. At 2140, depending on the value of the indicator signaling the usage or not of adaptive BIF strength for blocks of the picture, the BIF strength is determined for the reconstructed block. When the value of the indicator is 0, 1 or 2, the BIF strength for the reconstructed block is set to the value of the indicator. When the value of the indicator is 3, adaptive BIF strength is applied for the reconstructed block. The BIF strength for the reconstructed block is determined according to one of the embodiments described above or further below, and at 2150, the BIF is applied to the reconstructed block based on the determined BIF strength.
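A minimal decoder-side sketch of this indicator handling follows. The per-block rule adaptive_bif_strength is a hypothetical placeholder for the decision schemes detailed below; the value 3 is the example value from the text.

```python
# A minimal sketch of the indicator handling of methods 2000/2100.

def bif_strength_for_block(indicator: int, dbf_bs: int) -> int:
    if indicator in (0, 1, 2):
        return indicator                      # fixed strength for the picture
    if indicator == 3:
        return adaptive_bif_strength(dbf_bs)  # per-block adaptive decision
    raise ValueError("unexpected BIF strength indicator")

def adaptive_bif_strength(dbf_bs: int) -> int:
    # Placeholder rule in the spirit of Table 4 below: filter fully when
    # the block was not deblocked, weaken the filter otherwise.
    return 1 if dbf_bs == 0 else 0

print(bif_strength_for_block(2, dbf_bs=0))  # 2: indicator fixes the strength
print(bif_strength_for_block(3, dbf_bs=0))  # 1: adaptive decision per block
```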
Some embodiments described herein aim at improving bilateral filtering (BIF) for both luma and chroma components by adaptively determining the filtering strength of the BIF based on available coding information that reflects the contents and characteristics of a block. As explained above, such coding information can be the deblocking filter boundary strength, the size/shape of the block, the presence or absence of non-zero coefficients in the block, QP values, and so forth. Based on this coding information, an adaptive BIF strength can be adjusted to 0 (half filtering strength), 1 (full filtering strength), or 2 (double filtering strength). Furthermore, in some embodiments, adaptive BIF strength is applicable to different coding modes, such as intra prediction mode, inter prediction mode, and the IBC and palette prediction modes.

FIG. 22 illustrates a method 2200 for enabling adaptive BIF strength for a block based on a coding mode of the block. It is assumed here that adaptive BIF strength is allowed for the blocks of the picture, for example because it is enabled by default for the picture or based on the indicator as explained with FIG. 20 and 21. For a block to encode or decode, at 2210, it is checked whether the block is encoded using an intra mode. If yes, then at 2220, adaptive BIF strength is applied for the block; for example, method 2300 of FIG. 23 is applied to determine the BIF strength for the block. If the block is not encoded using an intra mode, then at 2230, it is checked whether the block is encoded in an inter mode. If yes, then at 2240, adaptive BIF strength is applied for the block; for example, method 2400 of FIG. 24 is applied to determine the BIF strength for the block. If the block is not encoded in an inter mode, then at 2250, the BIF strength is set to 1 for the block and the BIF filtering is applied using the BIF strength of 1.

Table 4 below provides an example of the reasoning behind the adaptive BIF strength decisions for a block. For a given piece of coding information (first column) used for a block, Table 4 illustrates the BIF strength status that can be derived from Condition 1 or Condition 2 on that coding information. In other words, for a given piece of coding information, the BIF strength for the block (BIF strength status) is derived from the values (Condition 1, Condition 2) of the coding information.

| Coding information | Condition 1 | BIF strength status (Condition 1 satisfied) | Condition 2 | BIF strength status (Condition 2 satisfied) |
|---|---|---|---|---|
| DBF Bs | Not filtered (Bs=0) | Apply filter or double filter | Filtered with Bs=1 or Bs=2 | No filter or half filter |
| Non-zero transform coefficients flag | True | Apply filter | False | No filter |
| Shape size of block | Small block | Apply filter or double filter | Large block | No filter or half filter |
| QP value of block | High QP value | Apply filter or double filter | Low QP value | No filter or half filter |
| Absolute difference between the motion vectors that belong to the adjacent blocks | Smaller than one half luma sample | Apply filter | Greater than or equal to one half luma sample | No filter |

Table 4

In Table 4, when the coding information for a block satisfies Condition 1, it makes sense to further filter the block with BIF. But when the coding information satisfies Condition 2, applying BIF to the block may not be favorable, so BIF can be disabled or weakened. Note that these Condition 1 and Condition 2 scenarios are exercised for both luma and chroma components. For instance, if a block has been filtered by DBF with Bs=1 or Bs=2, then applying another filter with BIF on top of the DBF-filtered block can be skipped, or a weak BIF can be applied. Conversely, for a block for which DBF is skipped or Bs=0, it is reasonable to filter the block using BIF. It should be noted that in DBF a horizontal Bs and a vertical Bs are determined for a block. In the embodiments provided herein, both DBF Bs of the horizontal and vertical edges of the block, in raster order, may be taken into account when determining the BIF filtering strength. For example, when the horizontal DBF Bs and the vertical DBF Bs are different for the block, a single Bs can be obtained by averaging both edge Bs values, or by checking only one of the edges, for example taking the highest of the two Bs values or, in case the block belongs to a rectangular CU, taking the horizontal Bs for a horizontal CU and the vertical Bs for a vertical CU, as illustrated in the sketch below.
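A minimal sketch of method 2200 and of one of the Bs combination options follows. All names are hypothetical; intra_rules and inter_rules stand for methods 2300 and 2400, which are sketched further below.

```python
# A minimal sketch of method 2200 and of combining horizontal/vertical Bs.

def combine_bs(bs_horizontal: int, bs_vertical: int) -> int:
    # One option from the text: keep the highest of the two edge strengths.
    # Alternatives: average them, or pick the edge matching the CU shape.
    return max(bs_horizontal, bs_vertical)

def method_2200(coding_mode: str, block, intra_rules, inter_rules) -> int:
    if coding_mode == "intra":      # 2210 -> 2220
        return intra_rules(block)
    if coding_mode == "inter":      # 2230 -> 2240
        return inter_rules(block)
    return 1                        # 2250: full strength for other modes

# Example with trivial stand-in rules:
print(method_2200("intra", None, lambda b: 2, lambda b: 0))    # 2
print(method_2200("palette", None, lambda b: 2, lambda b: 0))  # 1
print(combine_bs(1, 2))                                        # 2
```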
In another case, if a block has a high QP value, which in general results in some blocking artifacts, applying BIF to the block is more reasonable than when the block has a low QP value, and so forth. Note that half and double filter in Table 4 refer to the filter strength types of BIF.

FIG. 23 illustrates an example of a method 2300 of adaptive BIF strength rules for a block encoded in an intra mode according to an embodiment. Available coding information, such as the deblocking filter boundary strength, the block size/shape, and the non-zero coefficient flag, or other coding information that can distinguish the content and characteristics of blocks, can be used. FIG. 23 provides an example of an embodiment that rules the utilization of the BIF strength. It should be noted that the rules are applied for both luma and chroma components; however, in some variants, different rules can be defined for luma and chroma components, or adaptive BIF strength can be defined only for luma while a default BIF strength is used for chroma. At 2310, it is checked whether the block has not been filtered by the deblocking filter (DBF Bs = 0). If this is the case (the answer is no at "DBF Bs >= 1?"), then at 2320, BIF is applied to the block with full filtering strength (BIF strength = 1). Otherwise, further conditions are checked to decide whether to apply the BIF or not. For example, at 2330, it is checked whether the flag indicating non-zero transform coefficients is true for the block; if so, at 2340, BIF is applied to the block with full filtering strength (BIF strength = 1). Otherwise, at 2350, it is checked whether the block shape size is larger than a given value (for example 16) and DBF Bs equals 2. If yes, then at 2360, BIF is applied to the block with strength 0. Otherwise, at 2370, it is checked whether the block shape size is smaller than or equal to the given value and DBF Bs equals 2. If yes, then at 2380, BIF is applied to the block with strength 1. Otherwise, at 2390, BIF is applied to the block with strength 2.

FIG. 24 illustrates an example of a method 2400 of adaptive BIF strength rules for a block encoded in an inter mode according to an embodiment. Available coding information, such as the deblocking filter boundary strength, the block size/shape, the QP value, and the absolute difference between the motion vectors, or other coding information that can distinguish the content and characteristics of blocks, can be used. FIG. 24 provides an example of an embodiment that rules the utilization of the BIF strength when the block is in an inter mode. It should be noted that the rules are applied for both luma and chroma components; however, in some variants, different rules can be defined for luma and chroma components, or adaptive BIF strength can be defined only for luma while a default BIF strength is used for chroma. At 2410, it is checked whether the block has not been filtered by the deblocking filter (DBF Bs = 0). If so, at 2420, BIF is applied to the block with full filtering strength (BIF strength = 1). Otherwise, further conditions are checked to decide whether to apply the BIF or not. For example, the following conditions are checked. At 2430, it is checked whether the QP value of the block is larger than a given QP value. If yes, then at 2450, BIF is applied to the block with strength 1.
Otherwise, at 2460, it is checked whether the block shape size is larger than or equal to a given value, for example 8, and the absolute difference between the motion vectors that belong to the adjacent blocks is smaller than one half luma sample. If yes, then at 2470, BIF is applied with strength 0. Otherwise, at 2480, it is checked whether the block shape size is smaller than or equal to the given value (8 in this example) and the absolute difference between the motion vectors that belong to the adjacent blocks is greater than or equal to one half luma sample. If yes, then at 2490, BIF is applied to the block with strength 1. Otherwise, at 2491, BIF is applied to the block with strength 2. Both decision processes are sketched below.
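A consolidated sketch of methods 2300 and 2400 follows. The size thresholds (16 and 8) are the example values from the text, while the Block fields and the inter-mode QP threshold are illustrative assumptions.

```python
# A minimal sketch of the intra-mode rules (method 2300, FIG. 23) and the
# inter-mode rules (method 2400, FIG. 24) described above.
from dataclasses import dataclass

@dataclass
class Block:
    dbf_bs: int           # deblocking boundary strength for the block
    nonzero_coeffs: bool  # non-zero transform coefficients flag
    size: int             # block shape size used in the comparisons
    qp: int               # block quantization parameter
    mv_abs_diff: float    # |MV difference| vs adjacent blocks, in luma samples

def intra_rules(b: Block) -> int:
    """Method 2300: BIF strength for an intra-coded block."""
    if b.dbf_bs == 0:                      # 2310 -> 2320
        return 1
    if b.nonzero_coeffs:                   # 2330 -> 2340
        return 1
    if b.size > 16 and b.dbf_bs == 2:      # 2350 -> 2360
        return 0
    if b.size <= 16 and b.dbf_bs == 2:     # 2370 -> 2380
        return 1
    return 2                               # 2390

def inter_rules(b: Block, qp_threshold: int = 37) -> int:
    """Method 2400: BIF strength for an inter-coded block. The QP
    threshold is an assumption; the text only says 'a given QP value'."""
    if b.dbf_bs == 0:                      # 2410 -> 2420
        return 1
    if b.qp > qp_threshold:                # 2430 -> 2450
        return 1
    if b.size >= 8 and b.mv_abs_diff < 0.5:    # 2460 -> 2470
        return 0
    if b.size <= 8 and b.mv_abs_diff >= 0.5:   # 2480 -> 2490
        return 1
    return 2                               # 2491

# Example: an intra block with Bs=2 and size 32 gets half strength (0).
print(intra_rules(Block(2, False, 32, 32, 0.0)))  # 0
```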
In another embodiment, which can be combined with the other embodiments described herein, the BIF scaling factor is adjusted using the adaptive BIF strength described above. The BIF offset in Equation (10) above is further adjusted by taking into account the adaptive BIF strength decision. Thus, Equation (10) above can be rewritten as Equation (11) below:

ΔI_BIF = (msum + bif_round_add) >> bif_round_shift    (11)
where bif_round_add is a rounding offset, with bif_round_add = 32 >> bifStrength and bif_round_shift = 6 − bifStrength, >> being a binary shift operation. The bifStrength corresponds to the BIF strength used for the block. In this embodiment, the BIF strength can be determined adaptively according to one of the embodiments described herein, or can be a default BIF strength such as the one defined for the picture at the PPS level.

Adaptive BIF strength signaling

In the current ECM version, the BIF strength used by the BIF is indicated at the PPS level by setting the BIF strength to 0, 1, or 2 for the half, full, and double filter, respectively. As explained with FIG. 20 and 21, in the same manner, an additional value is added to the BIF strength options to indicate the usage of the adaptive BIF strength; it is for example set to 3 and signaled at the PPS level. In another alternative, all BIF strength indication flags for luma and chroma can be removed from the PPS signaling. This lets both the encoder and the decoder determine the adaptive BIF strength without any signaling.

FIG. 25 illustrates a block diagram of a system within which aspects of the present embodiments may be implemented, according to another embodiment. FIG. 25 shows one embodiment of an apparatus 2500 for encoding or decoding a video according to any one of the embodiments described herein. The apparatus comprises a processor 2510 and can be interconnected to a memory 2520 through at least one port. Both the processor 2510 and the memory 2520 can also have one or more additional interconnections to external connections. The processor 2510 is also configured to obtain a reconstructed block of an image, determine a filtering strength of a bilateral filter based on at least one parameter used for encoding the block, and, responsive to the determination of the filtering strength, apply the bilateral filter to the block, according to any one of the embodiments described herein. For instance, the processor 2510 uses a computer program product comprising code instructions that implements any one of the embodiments described herein.

In an embodiment, illustrated in FIG. 26, in a transmission context between two remote devices A and B over a communication network NET, the device A comprises a processor in relation with memory RAM and ROM which are configured to implement a method for encoding a video, as described with FIG. 1-25, and the device B comprises a processor in relation with memory RAM and ROM which are configured to implement a method for decoding a video as described in relation with FIG. 1-25. In accordance with an example, the network is a broadcast network, adapted to broadcast/transmit a coded video from device A to decoding devices including the device B.

FIG. 27 shows an example of the syntax of a signal transmitted over a packet-based transmission protocol. Each transmitted packet P comprises a header H and a payload PAYLOAD. In some embodiments, the payload PAYLOAD may comprise video data encoded according to any one of the embodiments described above. The payload can also comprise any signaling as described above.

Various implementations involve decoding. "Decoding", as used in this application, can encompass all or part of the processes performed, for example, on a received encoded sequence in order to produce a final output suitable for display.
In various embodiments, such processes include one or more of the processes typically performed by a decoder, for example, entropy decoding, inverse quantization, inverse transformation, and differential decoding. In various embodiments, such processes also, or alternatively, include processes performed by a decoder of various implementations described in this application, for example, entropy decoding a sequence of binary symbols to reconstruct image or video data. As further examples, in one embodiment "decoding" refers only to entropy decoding, in another embodiment "decoding" refers only to differential decoding, in another embodiment "decoding" refers to a combination of entropy decoding and differential decoding, and in another embodiment "decoding" refers to the whole picture reconstruction process including entropy decoding. Whether the phrase "decoding process" is intended to refer specifically to a subset of operations or generally to the broader decoding process will be clear based on the context of the specific descriptions and is believed to be well understood by those skilled in the art.

Various implementations involve encoding. In an analogous way to the above discussion about "decoding", "encoding" as used in this application can encompass all or part of the processes performed, for example, on an input video sequence in order to produce an encoded bitstream. In various embodiments, such processes include one or more of the processes typically performed by an encoder, for example, partitioning, differential encoding, transformation, quantization, and entropy encoding. In various embodiments, such processes also, or alternatively, include processes performed by an encoder of various implementations described in this application, for example, determining re-sampling filter coefficients and re-sampling a decoded picture. As further examples, in one embodiment "encoding" refers only to entropy encoding, in another embodiment "encoding" refers only to differential encoding, and in another embodiment "encoding" refers to a combination of differential encoding and entropy encoding. Whether the phrase "encoding process" is intended to refer specifically to a subset of operations or generally to the broader encoding process will be clear based on the context of the specific descriptions and is believed to be well understood by those skilled in the art.

Note that the syntax elements, as used herein, are descriptive terms. As such, they do not preclude the use of other syntax element names. This disclosure has described various pieces of information, such as for example syntax, that can be transmitted or stored. This information can be packaged or arranged in a variety of manners, including for example manners common in video standards such as putting the information into an SPS, a PPS, a NAL unit, a header (for example, a NAL unit header, picture header or a slice header), or an SEI message. Other manners are also available, including for example manners common for system level or application level standards such as putting the information into one or more of the following:
a. SDP (session description protocol), a format for describing multimedia communication sessions for the purposes of session announcement and session invitation, for example as described in RFCs and used in conjunction with RTP (Real-time Transport Protocol) transmission.
b. DASH MPD (Media Presentation Description) Descriptors, for example as used in DASH and transmitted over HTTP; a Descriptor is associated with a Representation or collection of Representations to provide additional characteristics to the content Representation.
c. RTP header extensions, for example as used during RTP streaming.
d. ISO Base Media File Format, for example as used in OMAF, using boxes, which are object-oriented building blocks defined by a unique type identifier and length, also known as 'atoms' in some specifications.
e. HLS (HTTP Live Streaming) manifests transmitted over HTTP. A manifest can be associated, for example, with a version or collection of versions of a content to provide characteristics of the version or collection of versions.

When a figure is presented as a flow diagram, it should be understood that it also provides a block diagram of a corresponding apparatus. Similarly, when a figure is presented as a block diagram, it should be understood that it also provides a flow diagram of a corresponding method/process.

Some embodiments refer to rate distortion optimization. In particular, during the encoding process, the balance or trade-off between the rate and distortion is usually considered, often given the constraints of computational complexity. The rate distortion optimization is usually formulated as minimizing a rate distortion function, which is a weighted sum of the rate and of the distortion. There are different approaches to solve the rate distortion optimization problem. For example, the approaches may be based on an extensive testing of all encoding options, including all considered modes or coding parameter values, with a complete evaluation of their coding cost and the related distortion of the reconstructed signal after coding and decoding. Faster approaches may also be used to save encoding complexity, in particular with computation of an approximated distortion based on the prediction or the prediction residual signal, not the reconstructed one. A mix of these two approaches can also be used, such as by using an approximated distortion for only some of the possible encoding options, and a complete distortion for other encoding options. Other approaches only evaluate a subset of the possible encoding options. More generally, many approaches employ any of a variety of techniques to perform the optimization, but the optimization is not necessarily a complete evaluation of both the coding cost and the related distortion.

The implementations and aspects described herein can be implemented in, for example, a method or a process, an apparatus, a software program, a data stream, or a signal. Even if only discussed in the context of a single form of implementation (for example, discussed only as a method), the implementation of features discussed can also be implemented in other forms (for example, an apparatus or program). An apparatus can be implemented in, for example, appropriate hardware, software, and firmware. The methods can be implemented in, for example, a processor, which refers to processing devices in general, including, for example, a computer, a microprocessor, an integrated circuit, or a programmable logic device. Processors also include communication devices, such as, for example, computers, cell phones, portable/personal digital assistants ("PDAs"), and other devices that facilitate communication of information between end-users.
Reference to “one embodiment” or “an embodiment” or “one implementation” or “an implementation”, as well as other variations thereof, means that a particular feature, structure, characteristic, and so forth described in connection with the embodiment is included in at least one embodiment. Thus, the appearances of the phrase “in one embodiment” or “in an embodiment” or “in one implementation” or “in an implementation”, as well as any other variations, appearing in various places throughout this application are not necessarily all referring to the same embodiment.

Additionally, this application may refer to “determining” various pieces of information. Determining the information can include one or more of, for example, estimating the information, calculating the information, predicting the information, or retrieving the information from memory.

Further, this application may refer to “accessing” various pieces of information. Accessing the information can include one or more of, for example, receiving the information, retrieving the information (for example, from memory), storing the information, moving the information, copying the information, calculating the information, determining the information, predicting the information, or estimating the information.

Additionally, this application may refer to “receiving” various pieces of information. Receiving is, as with “accessing”, intended to be a broad term. Receiving the information can include one or more of, for example, accessing the information, or retrieving the information (for example, from memory). Further, “receiving” is typically involved, in one way or another, during operations such as, for example, storing the information, processing the information, transmitting the information, moving the information, copying the information, erasing the information, calculating the information, determining the information, predicting the information, or estimating the information.

It is to be appreciated that the use of any of the following
“/”,
“and/or”, and “at least one of”, for example, in the cases of “A/B”, “A and/or B” and “at least one of A and B”, is intended to encompass the selection of the first listed option (A) only, or the selection of the second listed option (B) only, or the selection of both options (A and B). As a further example, in the cases of “A, B, and/or C” and “at least one of A, B, and C”, such phrasing is intended to encompass the selection of the first listed option (A) only, or the selection of the second listed option (B) only, or the selection of the third listed option (C) only, or the selection of the first and the second listed options (A and B) only, or the selection of the first and third listed options (A and C) only, or the selection of the second and third listed options (B and C) only, or the selection of all three options (A and B and C). This may be extended, as is clear to one of ordinary skill in this and related arts, for as many items as are listed.

Also, as used herein, the word “signal” refers to, among other things, indicating something to a corresponding decoder. In this way, in an embodiment the same parameter is used at both the encoder side and the decoder side. Thus, for example, an encoder can transmit (explicit signaling) a particular parameter to the decoder so that the decoder can use the same particular parameter. Conversely, if the decoder already has the particular parameter as well as others, then signaling can be used without transmitting (implicit signaling) to simply allow the decoder to know and select the particular parameter. By avoiding transmission of any actual functions, a bit savings is realized in various embodiments. It is to be appreciated that signaling can be accomplished in a variety of ways. For example, one or more syntax elements, flags, and so forth are used to signal information to a corresponding decoder in various embodiments. While the preceding relates to the verb form of the word “signal”, the word “signal” can also be used herein as a noun.

As will be evident to one of ordinary skill in the art, implementations can produce a variety of signals formatted to carry information that can be, for example, stored or transmitted. The information can include, for example, instructions for performing a method, or data produced by one of the described implementations. For example, a signal can be formatted to carry the bitstream of a described embodiment. Such a signal can be formatted, for example, as an electromagnetic wave (for example, using a radio frequency portion of spectrum) or as a baseband signal. The formatting can include, for example, encoding a data stream and modulating a carrier with the encoded data stream. The information that the signal carries can be, for example, analog or digital information. The signal can be transmitted over a variety of different wired or wireless links, as is known. The signal can be stored on a processor-readable medium.

A number of embodiments have been described above. Features of these embodiments can be provided alone or in any combination, across various claim categories and types.

Claims

CLAIMS

1. A method, comprising: Obtaining a reconstructed block of a video, Determining a filtering strength of a bilateral filter based on at least one parameter used for encoding the block, Responsive to the determining of the filtering strength, applying the bilateral filter to the block, wherein applying the bilateral filter to the block comprises determining an offset for at least one sample of the block based on the determined filtering strength.

2. An apparatus comprising one or more processors configured to: Obtain a reconstructed block of a video, Determine a filtering strength of a bilateral filter based on at least one parameter used for encoding the block, Responsive to the determination of the filtering strength, apply the bilateral filter to the block, wherein applying the bilateral filter to the block comprises determining an offset for at least one sample of the block based on the determined filtering strength.

3. The method of claim 1 or the apparatus of claim 2, wherein the at least one parameter used for encoding the block comprises at least one of: a deblocking filter boundary strength determined for the block, a size or shape of the block, a value of a flag indicating whether the block has at least one non-zero transform coefficient, a quantization parameter, an absolute difference between motion vectors that belong to blocks adjacent to the block, a coding mode of the block.

4. The method of claim 1 or 3 or the apparatus of claim 2 or 3, wherein determining a filtering strength of a bilateral filter based on at least one parameter used for encoding the block is responsive to a determination that the block is encoded in a given coding mode.

5. The method or the apparatus of claim 4, wherein the given coding mode is an intra mode or an inter mode.

6. The method or the apparatus of claim 4 or 5, wherein responsive to a determination that the block is not encoded in the given coding mode, the filtering strength of the bilateral filter is set to 1.

7. The method of any one of claims 1 or 3-6 or the apparatus of any one of claims 2-6, wherein determining a filtering strength of a bilateral filter based on at least one parameter used for encoding the block is responsive to a determination that adaptive bilateral filtering strength is enabled for a picture to which the block belongs.

8. The method or the apparatus of claim 7, wherein the determination that adaptive bilateral filtering strength is enabled is based on an indicator signaled at a Sequence Parameter Set level, or at a Picture Parameter Set level, or at a Slice level.

9. The method of any one of claims 1 or 3-8 or the apparatus of any one of claims 2-8, wherein determining a filtering strength of a bilateral filter based on at least one parameter used for encoding the block depends on a component of the block.

10. The method of any one of claims 1 or 3-9 or the apparatus of any one of claims 2-9, wherein determining the offset uses a scale factor that is based on the determined filtering strength.

11. The method of any one of claims 1 or 3-10 or the apparatus of any one of claims 2-10, wherein determining the offset uses a rounding offset that is based on the determined filtering strength.

12. The method of any one of claims 1 or 3-11 or the apparatus of any one of claims 2-11, wherein determining the offset uses a binary shift operation that is based on the determined filtering strength.
13. The method of any one of claims 1 or 3-12 or the apparatus of any one of claims 2-12, wherein the reconstructed video block is obtained by an encoding of the block, followed by a decoding of the encoded block.

14. The method of any one of claims 1 or 3-12 or the apparatus of any one of claims 2-12, wherein the reconstructed block is obtained by a decoding of the block.

15. The method of any one of claims 1 or 3-14 or the apparatus of any one of claims 2-14, wherein the reconstructed block is obtained as an output of an inverse transform applied when decoding the block.

16. The method of any one of claims 1 or 3-14 or the apparatus of any one of claims 2-14, wherein the reconstructed block is obtained as an output of a deblocking filter applied when decoding the block.

17. A signal comprising coded data representative of a video, wherein the signal further comprises an indicator indicating whether or not adaptive bilateral filtering strength is enabled for a picture of the video.

18. A computer program product including instructions for causing one or more processors to carry out the method of any of claims 1 or 3-16.

19. A non-transitory computer readable medium storing executable program instructions to cause a computer executing the program instructions to perform a method according to any of claims 1 or 3-16.

20. A device comprising: an apparatus according to any one of claims 2-16; and at least one of (i) an antenna configured to receive or transmit a signal, the signal including data representative of the video, (ii) a band limiter configured to limit the signal to a band of frequencies that includes the data representative of the video, or (iii) a display configured to display the video.

21. A device according to claim 20, wherein the device comprises at least one of a television, a cell phone, a tablet, a set-top box.
PCT/EP2024/076806 2023-10-11 2024-09-24 Adaptive bif strength based on dbf strength Pending WO2025078149A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP23306765 2023-10-11
EP23306765.1 2023-10-11

Publications (1)

Publication Number Publication Date
WO2025078149A1 true WO2025078149A1 (en) 2025-04-17

Family

ID=88600370

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2024/076806 Pending WO2025078149A1 (en) 2023-10-11 2024-09-24 Adaptive bif strength based on dbf strength

Country Status (1)

Country Link
WO (1) WO2025078149A1 (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120044989A1 (en) * 2010-08-20 2012-02-23 Ahuja Nilesh A Techniques for identifying block artifacts
US20220070455A1 (en) * 2019-05-11 2022-03-03 Beijing Bytedance Network Technology Co., Ltd. Boundary strength determination for deblocking filters in video processing
WO2022268184A1 (en) * 2021-06-25 2022-12-29 Beijing Bytedance Network Technology Co., Ltd. Adaptive Bilateral Filter In Video Coding

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ANDREY NORKIN ET AL.: "HEVC Deblocking Filter", IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, vol. 22, no. 12, December 2012 (2012-12-01), XP011487156, DOI: 10.1109/TCSVT.2012.2223053
M. COBAN, F. LE LEANNEC, R.-L. LIAO, K. NASER, J. STRÖM, L. ZHANG: "Algorithm description of Enhanced Compression Model 9 (ECM 9)", JVET-AD2025, 30TH MEETING, April 2023 (2023-04-01)


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 24776576

Country of ref document: EP

Kind code of ref document: A1