WO2018028615A1 - Methods and apparatuses of predictor-based partition in a video processing system - Google Patents
Methods and apparatuses of predictor-based partition in a video processing system
- Publication number
- WO2018028615A1 (PCT/CN2017/096715)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- current block
- block
- predicted
- current
- regions
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
- H04N19/51—Motion estimation or motion compensation
- H04N19/513—Processing of motion vectors
- H04N19/517—Processing of motion vectors by encoding
- H04N19/52—Processing of motion vectors by encoding by predictive encoding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/119—Adaptive subdivision aspects, e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/136—Incoming video signal characteristics or properties
- H04N19/14—Coding unit complexity, e.g. amount of activity or edge presence estimation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
- H04N19/176—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/70—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/90—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
- H04N19/96—Tree coding, e.g. quad-tree coding
Definitions
- the present invention relates to video data processing methods and apparatuses for video encoding or video decoding.
- the present invention relates to video data processing methods and apparatuses encode or decode video data by splitting blocks according to predictor-based partition.
- the High-Efficiency Video Coding (HEVC) standard is the latest video coding standard developed by the Joint Collaborative Team on Video Coding (JCT-VC), a group of video coding experts from ITU-T Study Group 16 (VCEG) and ISO/IEC JTC 1 (MPEG).
- the HEVC standard relies on a block-based coding structure which divides each slice into multiple Coding Tree Units (CTUs) .
- the CTUs in a slice are processed according to a raster scan order.
- Each CTU is further recursively divided into one or more Coding Units (CUs) according to a quadtree partitioning method to adapt to various local characteristics.
- the CU size is restricted to be greater than or equal to a minimum allowed CU size, which is specified in the Sequence Parameter Set (SPS).
- An example of the quadtree block partitioning structure for a CTU is illustrated in Fig. 1, where the solid lines indicate CU boundaries in the CTU 100.
- each CU is either coded by Inter picture prediction or Intra picture prediction.
- each CU is subject to further split into one or more Prediction Units (PUs) according to a PU partition type for prediction.
- Fig. 2 shows eight PU partition types defined in the HEVC standard.
- Each CU is split into one, two, or four PUs according to one of the eight PU partition types shown in Fig. 2.
- the PU works as a basic representative block for sharing prediction information, as the same prediction process is applied to all pixels in the PU and prediction-relevant information is conveyed to the decoder on a PU basis.
- after prediction, each CU is further divided into one or more Transform Units (TUs); the dashed lines in Fig. 1 indicate TU boundaries in the CTU 100.
- the TU is a basic representative block for applying transform and quantization on the residual signal. For each TU, a transform matrix having the same size as the TU is applied to the residual signal to generate the transform coefficients, and these transform coefficients are quantized and conveyed to the decoder on a TU basis.
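The transform-and-quantization step described above can be sketched in Python. This is a minimal illustration, not the normative HEVC transform: the orthonormal DCT-II basis and the uniform quantization step `qstep` are assumptions for the sketch, and the function names are hypothetical.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis of size n x n (an illustrative choice)."""
    k = np.arange(n)
    basis = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    basis[0] *= 1 / np.sqrt(2)
    return basis * np.sqrt(2 / n)

def transform_and_quantize(residual, qstep):
    """Apply an NxN transform matrix matching the TU size, then quantize."""
    t = dct_matrix(residual.shape[0])
    coeffs = t @ residual @ t.T  # 2-D separable transform of the residual
    return np.round(coeffs / qstep).astype(int)
```

For a constant residual, all energy lands in the DC coefficient, which is what makes the transform useful before quantization.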
- the terms Coding Tree Block (CTB), Coding Block (CB), Prediction Block (PB), and Transform Block (TB) are defined to specify the two-dimensional sample array of one color component associated with the CTU, CU, PU, and TU, respectively.
- a CTU consists of one luminance (luma) CTB, two chrominance (chroma) CTBs, and its associated syntax elements.
- the same quadtree block partitioning structure is generally applied to both luma and chroma components unless a minimum size for chroma block is reached.
- An alternative partitioning method is called binary tree block partitioning, where a block is recursively split into two smaller blocks.
- a simplest and most efficient binary tree partitioning method only allows symmetrical horizontal splitting and symmetrical vertical splitting.
- a flag indicates whether the block is split into two smaller blocks; if the flag is true, another syntax element is signaled to indicate which splitting type is used.
- for an MxN block, the two smaller blocks each have size MxN/2 if symmetrical horizontal splitting is used, or M/2xN if symmetrical vertical splitting is used.
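The child-block sizes of the symmetric binary split can be captured in a tiny sketch; `binary_split` is a hypothetical helper, not part of any codec API.

```python
def binary_split(width, height, horizontal):
    """Return the (width, height) of the two child blocks of an MxN block.

    horizontal=True halves the height (MxN/2 children); horizontal=False
    halves the width (M/2xN children), per the symmetric split types above.
    """
    if horizontal:
        return (width, height // 2), (width, height // 2)
    return (width // 2, height), (width // 2, height)
```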
- although the binary tree partitioning method supports more partition shapes and is thus more flexible than the quadtree partitioning method, the coding complexity and signaling overhead increase because the best partition shape must be selected among all possible partition shapes.
- Quad-Tree-Binary-Tree (QTBT) structure combines a quadtree partitioning method with a binary tree partitioning method, which balances the coding efficiency and the coding complexity of the two partitioning methods.
- An exemplary QTBT structure is shown in Fig. 3A, where a large block such as a CTU is firstly partitioned by a quadtree partitioning method then a binary tree partitioning method.
- Fig. 3A illustrates an example of block partitioning structure according to the QTBT partitioning method and
- Fig. 3B illustrates a coding tree diagram for the QTBT block partitioning structure shown in Fig. 3A.
- the solid lines in Figs 3A and 3B indicate quadtree splitting while the dotted lines indicate binary tree splitting.
- at each splitting (i.e., non-leaf) node of the binary tree structure, one flag indicates which splitting type (symmetric horizontal splitting or symmetric vertical splitting) is used, where 0 indicates horizontal splitting and 1 indicates vertical splitting.
- the QTBT partitioning method may be used to split a slice into CTUs, a CTU into CUs, a CU into PUs, or a CU into TUs.
- it is possible to simplify the partitioning process by omitting the splits from CU to PU and from CU to TU, as the leaf nodes of a binary tree block partitioning structure are the basic representative blocks for both prediction and transform coding.
- the QTBT structure shown in Fig. 3A splits the large block, a CTU, into multiple smaller blocks, CUs, and these smaller blocks are processed by prediction and transform coding without further splitting.
- the QTBT partition method is applied individually to luma and chroma components for I slices, which means a luma CTB has its own QTBT-structured block partitioning and the two corresponding chroma CTBs have another QTBT-structured block partitioning; in another embodiment, each of the two chroma CTBs may have its own individual QTBT-structured block partitioning.
- the QTBT partition method is applied simultaneously to both the luma and chroma components for P and B slices.
- another partitioning method, called the triple tree partitioning method, is used to capture objects located in the block center, which the quadtree and binary tree partitioning methods cannot isolate because they always split along the block center.
- Two exemplary triple tree partition types include horizontal center-side triple tree partitioning and vertical center-side triple tree partitioning.
- the triple tree partitioning method may provide the capability to localize small objects along block boundaries more quickly, by allowing one-quarter partitioning vertically or horizontally.
- Methods and apparatuses of processing video data in a video coding system encode or decode a current block in a current picture by splitting the current block according to a predictor-based partition method.
- the video coding system receives input data associated with the current block, determines a first reference block for the current block, and splits the current block into partitions according to predicted textures of the first reference block. Each partition in the current block is separately predicted or compensated to generate predicted regions or compensated regions for the current block.
- the current block is encoded according to the predicted regions and original data of the current block or the current block is decoded by reconstructing the current block according to the compensated regions of the current block.
- An embodiment of the current block is predicted or compensated according to a prediction mode selected by a mode syntax.
- the mode syntax may be signaled at a current-block-level for the current block or the mode syntax may be signaled at a partition-level for each partition in the current block.
- the mode syntax may be signaled at CU-level or PU level when the predictor-based partition method is applied to split a CU into PUs.
- the first reference block for splitting the current block is also used to predict one partition of the current block.
- a first compensation region syntax is signaled to determine which partition of the current block is predicted by the first reference block.
- the first reference block is only used to split the current block into multiple partitions.
- the first reference block may be determined according to a first motion vector (MV) or a first Intra prediction mode, and the first MV may be coded using Advanced Motion Vector Prediction (AMVP) mode or Merge mode.
- a second reference block is determined for predicting one partition of the current block.
- the second reference block may be determined according to a second MV or a second Intra prediction mode, and the second MV may be coded using AMVP mode or Merge mode.
- the current block is split by applying a region partition method to the first reference block.
- examples of the region partition method include applying an edge detection filter to the first reference block to find a dominant edge, applying a K-means partition method to split the current block according to pixel intensities of the first reference block, and applying an optical flow method to partition the current block according to pixel-based motions of the first reference block. If there is more than one partition result, a second syntax can be signaled to determine which partition result is used.
- some embodiments of the video coding system process a boundary of the predicted regions or compensated regions to reduce artifacts at the boundary by changing pixel values at the boundary of the predicted regions or compensated regions. If the current block is Inter predicted, the current block is divided into NxN sub-blocks for reference MV storing. Some embodiments of reference MV storing store a reference MV for each sub-block according to a predefined reference MV storing position. One or more stored reference MVs of the current block are referenced by another block in the current picture or by a block in another picture. In one embodiment, the reference MV for each sub-block is stored further according to a first compensation region position flag; for example, the first compensation region position flag indicates whether the first reference block is used to predict a region covering a top-left pixel of the current block.
- aspects of the disclosure further provide an apparatus for the video coding system encoding or decoding video data according to a predictor-based partition method.
- the apparatus receives input data associated with a current block in a current picture, determines a first reference block, splits the current block into multiple partitions according to predicted textures of the first reference block, separately predicts or compensates each partition in the current block to generate predicted regions or compensated regions, and encodes the current block according to the predicted regions or decodes the current block according to the compensated regions.
- aspects of the disclosure further provide a non-transitory computer readable medium storing program instructions for causing a processing circuit of an apparatus to perform video coding process according to a predictor-based partition method.
- Fig. 1 illustrates an exemplary coding tree for splitting a Coding Tree Unit (CTU) into Coding Units (CUs) and splitting each CU into one or more Transform Units (TUs) according to the HEVC standard.
- Fig. 2 illustrates eight different Prediction Unit (PU) partition types splitting a CU into one or more PUs according to the HEVC standard.
- Fig. 3A illustrates an exemplary block partitioning structure of a Quad-Tree-Binary-Tree (QTBT) partitioning method.
- Fig. 3B illustrates a coding tree structure corresponding to the block partitioning structure of Fig. 3A.
- Fig. 4 illustrates an example of CU partitions according to a quadtree partitioning method for a circular object.
- Fig. 5A illustrates an example of determining one dominant edge according to the predicted textures of a reference block.
- Fig. 5B illustrates Region-A, covering a top-left pixel of the current block, divided by the dominant edge determined in Fig. 5A.
- Fig. 5C illustrates Region-B of the current block divided by the dominant edge determined in Fig. 5A.
- Fig. 6 is a flowchart illustrating a video processing method with predictor-based partition according to an embodiment of the present invention.
- Fig. 7A shows exemplary predefined reference MV storing position with a 45-degree partition.
- Fig. 7B shows exemplary predefined reference MV storing position with a 135-degree partition.
- Fig. 8 illustrates an exemplary system block diagram for a video encoding system incorporating the video data processing method according to embodiments of the present invention.
- Fig. 9 illustrates an exemplary system block diagram for a video decoding system incorporating the video data processing method according to embodiments of the present invention.
- CU boundaries typically depend on object boundaries of the moving objects, which means smaller CUs are used to encode the object boundaries of the moving objects.
- although various block partitioning methods have been proposed to split a video picture into blocks for video coding, the resulting blocks are square or rectangular. Square and rectangular shapes are not the best shapes for predicting the boundaries of most moving objects, so these block partitioning methods split regions covering the boundaries into many small blocks to better fit the boundaries of the moving objects.
- Fig. 4 illustrates an example of CU partitions split according to the quadtree block partitioning method for a circular object.
- the circular object in Fig. 4 is a moving object which has a different motion from its background.
- Smaller CUs and PU partitions are used to encode the texture of the object boundary as shown in Fig. 4.
- although the Merge mode may be used to reduce the syntax overhead of motion information, many syntax elements such as the Merge flags still have to be signaled for the finer-granularity partitions.
- other partitioning methods such as QTBT and triple tree partitioning methods offer greater flexibility in block partitioning, however, these partitioning methods still split the blocks with straight lines to produce rectangular blocks.
- embodiments of the present invention provide a partitioning method capable of splitting a block with one or more curved lines which better fit the object boundaries.
- Predictor-based Partition. Embodiments of the present invention derive block partitions of a current block based on a predictor-based partition method.
- the predictor-based partition method splits the current block according to predicted textures of a reference block.
- the reference block may be an Inter predicted predictor block determined by a motion vector, or an Intra predicted predictor block determined by an Intra prediction mode.
- the predictor-based partition method is applied to split a current block, such as a current Coding Unit (CU) , by signaling a first motion vector to derive a first reference block for the current CU.
- the current CU is first split into two or more partitions, such as Prediction Units (PUs) , according to predicted textures of the first reference block.
- the first reference block is partitioned into multiple regions and the current CU is split into PUs according to the partitioning of the first reference block.
- an example of the predefined region partition method includes applying an edge detection filter to the predicted textures of the first reference block to determine one or more dominant edges in the first reference block.
- Fig. 5A illustrates an example of determining a dominant edge in a first reference block.
- the dominant edge of the first reference block divides the current block into two partitions as shown in Fig. 5B and Fig. 5C.
- Fig. 5B illustrates Region-A of the current block covering a top-left pixel of the current block
- Fig. 5C illustrates Region-B of the current block.
- Each of Region-A and Region-B is predicted or compensated separately, where both partitions may be Inter predicted or Intra predicted, and it is also possible for one partition to be Inter predicted while another partition to be Intra predicted.
- one partition is predicted by a first reference block and another partition is predicted by a second reference block to generate a first predicted region or a first compensated region and a second predicted region or a second compensated region respectively.
- the first reference block is located by a first MV or derived by a first Intra prediction mode
- a second reference block is located by a second MV or derived by a second Intra prediction mode.
- the first reference block used to determine the partition boundary of the current block may be used to predict or compensate one or none of the partitions in the current block.
- the first reference block is only used to split the current block, in another example, the first reference block is also used to predict a predefined region or a selected region of the current block.
- the first reference block is always used to split the current block and predict the partition covering a top-left pixel of the current block; in another example of using the first reference block to predict a selected region, one flag is signaled to indicate whether a partition covering the top-left pixel or any pre-defined pixel of the current block is predicted by the first reference block.
- the flag indicates whether a first predicted region predicted by the first reference block covers the pre-defined pixel such as the top-left pixel of the current block.
- a first reference block located by a first MV is used to determine a partition boundary for splitting a current block as shown in Fig. 5A, and a syntax (e.g., a first_compensation_region_position_flag) is used to indicate whether a first compensated region derived by the first reference block is region-A in Fig. 5B or region-B in Fig. 5C.
- the flag first_compensation_region_position_flag equal to 1 means the first compensation region covers the top-left pixel of the current block, while the flag equal to 0 means the first compensation region does not cover the top-left pixel of the current block.
- Region-A in Fig. 5B is predicted by the first reference block while Region-B in Fig. 5C is predicted by a second reference block if the flag equals to 1;
- Region-B in Fig. 5C is predicted by the first reference block and Region-A in Fig. 5B is predicted by the second reference block if the flag equals to 0.
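The flag semantics of the two examples above can be summarized in a small sketch; `assign_predictors` is a hypothetical helper, and the region and reference-block labels are illustrative strings, not syntax from the disclosure.

```python
def assign_predictors(flag):
    """Map first_compensation_region_position_flag to a predictor assignment.

    flag == 1: the first reference block predicts Region-A (the partition
    covering the top-left pixel) and the second predicts Region-B;
    flag == 0: the assignment is swapped.
    """
    if flag == 1:
        return {"Region-A": "first_reference", "Region-B": "second_reference"}
    return {"Region-A": "second_reference", "Region-B": "first_reference"}
```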
- in some embodiments, more than one reference block is used to split the current block into multiple partitions. For example, a first reference block is used to split the current block into two partitions, then a second reference block is used to further split one of the two partitions into two smaller partitions, or the second reference block is used to further split the current block into four or more partitions.
- Fig. 6 is a flowchart illustrating a video processing method with predictor-based partition according to an embodiment of the present invention.
- a current picture is first partitioned into blocks according to a partitioning method and each resulting block is further partitioned based on an embodiment of the predictor-based partition method.
- a video encoder or a video decoder receives input data associated with a current block in a current picture.
- a first reference block is determined for the current block in step S604.
- the first reference block is located according to a first motion vector (MV) or the first reference block is derived according to a first Intra prediction mode.
- the current block is split into two or more partitions according to predicted texture of the first reference block in step S606.
- Each partition of the current block is separately predicted or compensated to generate predicted regions or compensated regions in step S608.
- the partitions are separately predicted or compensated by multiple reference blocks located by multiple motion vectors.
- the video encoder encodes the current block according to the predicted regions and original data of the current block; or the video decoder decodes the current block by reconstructing the current block according to the compensated regions of the current block.
- Region Partition Methods. One embodiment partitions a current block by applying an edge detection filter to predicted textures of a reference block.
- the Sobel edge detector or Canny edge detector is used to locate one or more dominant edges that can split the current block into two or more partitions.
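A dominant-edge search of this kind can be sketched with a Sobel filter in pure NumPy so the example stays self-contained. This is an illustrative sketch, not the patent's normative method; the threshold and the helper names are assumptions.

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def gradient_magnitude(block):
    """Sobel gradient magnitude at each interior pixel of the block."""
    h, w = block.shape
    mag = np.zeros((h, w))
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = block[y - 1:y + 2, x - 1:x + 2]
            gx = (patch * SOBEL_X).sum()
            gy = (patch * SOBEL_Y).sum()
            mag[y, x] = np.hypot(gx, gy)
    return mag

def dominant_edge_mask(block, threshold):
    """Boolean mask marking pixels on the strongest edges."""
    return gradient_magnitude(block) >= threshold
```

A step edge in the reference block's predicted texture produces a band of high-magnitude pixels along the object boundary, which can then serve as the partition boundary of the current block.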
- a K-means partition method is applied to the reference block to split the current block.
- the K-means partition method divides the reference block into irregularly shaped spatial partitions based on K-means clustering of pixel intensities of the reference block.
- the K-means clustering aims to partition the pixel intensities of the reference block into K clusters by minimizing a total intra-cluster variation, in which pixel intensities within a cluster are as similar as possible, whereas pixel intensities from different clusters are as dissimilar as possible.
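The intensity clustering described above can be sketched as a minimal 1-D K-means over the reference block's pixel intensities; the cluster labels then induce the irregularly shaped partition. The initialization and iteration count are illustrative choices, and the function name is hypothetical.

```python
import numpy as np

def kmeans_partition(block, k=2, iters=20):
    """Cluster pixel intensities into k clusters; return a label map."""
    pixels = block.ravel().astype(float)
    # Initialize centroids spread across the intensity range (an assumption).
    centers = np.linspace(pixels.min(), pixels.max(), k)
    for _ in range(iters):
        # Assign each pixel to its nearest centroid.
        labels = np.abs(pixels[:, None] - centers[None, :]).argmin(axis=1)
        # Update each centroid to the mean intensity of its cluster.
        for c in range(k):
            if (labels == c).any():
                centers[c] = pixels[labels == c].mean()
    return labels.reshape(block.shape)
```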
- Another embodiment of the region partition method uses optical flow to determine pixel-based motions within the reference block.
- the reference block can be divided into multiple regions according to the pixel-based motions of the reference block, where pixels with similar motions belong to the same region, and the current block is split into partitions according to the divided regions of the reference block.
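The motion-based grouping above can be sketched as follows, assuming a per-pixel motion field has already been produced by some optical-flow estimator (the estimator itself is outside this snippet); the median-based grouping rule and the threshold are illustrative assumptions.

```python
import numpy as np

def partition_by_motion(flow, threshold=1.0):
    """flow: (H, W, 2) per-pixel motion; returns a binary region mask.

    Pixels whose motion is close to the block's median motion form
    region 0; the remaining pixels form region 1.
    """
    median = np.median(flow.reshape(-1, 2), axis=0)
    dist = np.linalg.norm(flow - median, axis=2)
    return (dist > threshold).astype(int)
```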
- the region partition method might produce more than one partition result for the current block, for example, by finding two or more dominant edges, each of which can split the current block into two or more partitions. If more than one partition result is generated, one syntax is signaled to indicate which partition result (for example, which dominant edge) is used to code the current block.
- the predicted regions or compensated regions of the current block are further processed to reduce or remove the artifact at the region boundary of the predicted regions or compensated regions.
- Pixel values at the region boundary of the compensated regions may be modified to reduce the artifact at the boundary.
- An example of the region boundary processing blends the region boundary by applying overlapped motion compensation or overlapped intra prediction.
- a predefined range of pixels at the region boundary is predicted by averaging or weighting the predicted pixels of the two predicted regions or two compensated regions.
- the predefined range of pixels at the region boundary may be two or four pixels.
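The boundary blending described above can be sketched as follows; the 2-pixel blend range and the equal 0.5/0.5 weights are illustrative assumptions rather than values mandated by the disclosure, and the helper name is hypothetical.

```python
import numpy as np

def blend_boundary(pred_a, pred_b, mask, blend_range=2):
    """Blend two predicted regions near their shared boundary.

    mask: 1 where pred_a is used, 0 where pred_b is used.
    """
    out = np.where(mask == 1, pred_a, pred_b).astype(float)
    h, w = mask.shape
    for y in range(h):
        for x in range(w):
            # A pixel is near the boundary if any neighbor within
            # blend_range belongs to the other region.
            y0, y1 = max(0, y - blend_range), min(h, y + blend_range + 1)
            x0, x1 = max(0, x - blend_range), min(w, x + blend_range + 1)
            if (mask[y0:y1, x0:x1] != mask[y, x]).any():
                out[y, x] = 0.5 * pred_a[y, x] + 0.5 * pred_b[y, x]
    return out
```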
- Mode Signaling and MV Coding One or more prediction modes for a current block can be selected by one or more mode syntaxes.
- the mode syntax can be signaled in current-block-level (e.g. CU-level) or partition-level (e.g. PU-level) .
- all PUs in a current CU are coded using the same prediction mode when a mode syntax is signaled in CU-level, and the PUs in the current CU may be coded using different prediction modes when two or more mode syntaxes for the current CU are signaled in either CU-level or PU-level.
- Some embodiments of the predictor-based partition method first select one or more prediction modes for a current block, obtain a first reference block according to a predefined mode or a selected prediction mode, and determine region partitioning for the current block according to predicted texture of the first reference block. Each of the partitions in the current block is then separately predicted or compensated according to a corresponding selected prediction mode. Some other embodiments of the predictor-based partition method first partition a current block into multiple partitions according to a first reference block, then select one or more prediction modes for predicting or compensating the partitions in the current block.
- the current block is a current CU
- the partitions in the current CU are PUs
- the prediction mode is signaled in PU level.
- All the partitions in the current block may be restricted to be predicted or compensated using the same prediction mode according to one embodiment.
- the prediction mode for the current block is Inter prediction
- two or more partitions split from the current block are predicted or compensated by reference blocks pointed to by motion vectors
- the prediction mode for the current block is Intra prediction
- two or more partitions split from the current block are predicted by reference blocks derived according to an Intra prediction mode.
- each partition in the current block is allowed to select an individual prediction mode, so the current block may be predicted by different prediction modes.
- the following examples demonstrate the mode signaling and MV coding method for a current block predicted by Inter prediction, where the current block is a CU and is partitioned into two PUs, and each PU is predicted or compensated according to a motion vector (MV) .
- the two MVs are coded using Advanced Motion Vector Prediction (AMVP) mode
- the first MV is coded in Merge mode and the second MV is coded in AMVP mode
- the first MV is coded in AMVP mode and the second MV is coded in Merge mode
- both the MVs are coded in Merge mode.
- the prediction mode for each PU in a current CU may be signaled in the PU-level and signaled after the syntax Inter direction (interDir) . If bi-directional prediction is used, the prediction mode may be separately signaled for List 0 and List 1.
- a reference picture index and MV are signaled for the second MV while a Merge index is signaled for the first MV.
- when the reference picture index of the second MV is the same as the reference picture index of the first MV, only the MV, including the horizontal component MVx and vertical component MVy, is signaled for the second MV.
- a reference picture index and MV are signaled for the first MV while a Merge index is signaled for the second MV.
- two Merge indices are signaled for deriving the first MV and the second MV according to an embodiment.
- only one Merge index is required. If there are two MVs in the selected Merge candidate derived by the Merge index, one of the MVs is used as the first MV while the other MV is used as the second MV. If there is only one MV in the selected Merge candidate derived by the Merge index, the only MV is used as the first MV and the second MV is derived by extending the first MV to other reference frames.
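The single-Merge-index derivation above can be sketched as follows. The linear POC-based scaling used to extend the only MV to another reference frame is an illustrative assumption (a real codec's MV scaling uses fixed-point arithmetic and clipping), and the function name is hypothetical.

```python
def derive_two_mvs(candidate_mvs, cur_poc, ref_pocs, other_ref_poc):
    """Derive the first and second MV from one Merge candidate.

    candidate_mvs: list of one or two (mvx, mvy) tuples from the selected
    Merge candidate; ref_pocs: their reference-picture POCs.
    """
    if len(candidate_mvs) == 2:
        # Bi-predictive candidate: its two MVs serve directly.
        return candidate_mvs[0], candidate_mvs[1]
    (mvx, mvy), ref_poc = candidate_mvs[0], ref_pocs[0]
    # Uni-predictive candidate: extend the only MV to the other reference
    # frame by the ratio of temporal distances.
    scale = (cur_poc - other_ref_poc) / (cur_poc - ref_poc)
    return (mvx, mvy), (mvx * scale, mvy * scale)
```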
- the predictor-based partition method splits a current block into multiple partitions according to predicted textures of a first reference block.
- representative MVs of the current block are stored for MV referencing by spatial or temporal neighboring blocks of the current block.
- the representative MVs of a current block are used for constructing a Motion Vector Predictor (MVP) candidate list or Merge candidate list for a neighboring block of the current block.
- MVP Motion Vector Predictor
- the current block is divided into multiple NxN sub-blocks for reference MV storing, and a representative MV is stored for each NxN sub-block, where an example of N is 4.
- the stored representative MV for each sub-block is the MV that corresponds to most pixels in the sub-block.
- when the current block includes a first region compensated by a first MV and a second region compensated by a second MV, and most pixels in a sub-block belong to the first region, the representative MV of that sub-block is the first MV.
- the stored MV is the center MV of each sub-block. For example, if the center pixel in a sub-block belongs to the first region, the representative MV of this sub-block is the first MV.
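The center-pixel rule above can be sketched as follows, assuming the region split is given as a per-pixel boolean mask; the function name and the mask representation are assumptions for illustration.

```python
def representative_mvs(region_mask, first_mv, second_mv, n=4):
    # region_mask[y][x] is True when pixel (x, y) belongs to the first region;
    # the center pixel of each NxN sub-block selects the MV stored for it
    height, width = len(region_mask), len(region_mask[0])
    stored = []
    for by in range(0, height, n):
        row = []
        for bx in range(0, width, n):
            cy, cx = by + n // 2, bx + n // 2
            row.append(first_mv if region_mask[cy][cx] else second_mv)
        stored.append(row)
    return stored
```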
- the reference MV storing position is predefined. Fig. 7A and Fig. 7B illustrate two examples of the predefined reference MV storing position: Fig. 7A shows sub-blocks in a current block divided by a predefined 45-degree partition into two regions, and Fig. 7B shows sub-blocks in a current block divided by a predefined 135-degree partition into two regions.
- the white sub-blocks in Fig. 7A and Fig. 7B belong to a first region as the first region is defined to include a top-left pixel of the current block, and the gray sub-blocks in Fig. 7A and Fig. 7B belong to a second region.
- One of the MVs of the current block is the representative MV of the sub-blocks in the first region and another MV of the current block is the representative MV of the sub-blocks in the second region.
- a flag may be signaled to select which MV is stored for the first region covering the top-left pixel of the current block. For example, a first MV is stored for sub-blocks in the first region when a flag, first_compensation_region_position_flag, is zero, whereas the first MV is stored for sub-blocks in the second region when the flag is one.
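A minimal sketch of the predefined 45-degree storing position with the signaled flag is given below. The exact diagonal test (`x + y < num_subblocks`) is an assumption chosen so that the first region contains the top-left sub-block, as stated above; the function name is also hypothetical.

```python
def predefined_stored_mvs(num_subblocks, mv_a, mv_b, flag=0):
    # Predefined 45-degree partition: sub-blocks on the top-left side of the
    # anti-diagonal form the first region (it contains the top-left pixel).
    # first_compensation_region_position_flag swaps which MV is stored for
    # the first region versus the second region.
    first_mv, second_mv = (mv_a, mv_b) if flag == 0 else (mv_b, mv_a)
    return [[first_mv if x + y < num_subblocks else second_mv
             for x in range(num_subblocks)]
            for y in range(num_subblocks)]
```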
- One advantage of storing the reference MVs for a current block coded with predictor-based partition at a predefined reference MV storing position is that the memory controller can pre-fetch reference data according to the stored reference MVs without waiting for the derivation of the actual block partitioning of the current block.
- the memory controller may pre-fetch the reference data once the entropy decoder decodes the motion vector information of the current block, and this pre-fetch process can be performed in parallel with inverse quantization and inverse transform.
- the predefined reference MV storing position is only used to generate the MVP or Merge candidate list for neighboring blocks; since the real block partitioning is derived during motion compensation, the deblocking filter applied after motion compensation uses MVs stored according to the real block partitioning for the deblocking computation.
- PMVD Bandwidth Reduction: a pattern-based MV derivation (PMVD) method was proposed to reduce the MV signaling overhead.
- the PMVD method includes bilateral matching merge mode and template matching merge mode, and a flag FRUC_merge_mode is signaled to indicate which mode is selected.
- a new temporal MVP called temporal derived MVP is derived by scanning all MVs in all reference frames.
- Each List 0 MV in List 0 reference frames is scaled to point to the current frame in order to derive the List 0 temporal derived MVP.
- a 4x4 block pointed to by this scaled MV in the current frame is the target current block.
- the MV is further scaled to point to the reference picture whose reference frame index refIdx is equal to 0 in List 0 for the target current block.
- the further scaled MV is stored in the List 0 MV field for the target current block.
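The scaling steps above can be sketched as a linear MV scaling by the ratio of temporal (picture-order-count) distances. Note this floating-point version is a simplification for clarity; HEVC-style codecs use a clipped fixed-point scaling, and the function name is an assumption.

```python
def scale_mv(mv, src_poc_dist, dst_poc_dist):
    # Scale an MV linearly by the ratio of POC distances: an MV that covers
    # src_poc_dist pictures is rescaled to cover dst_poc_dist pictures.
    ratio = dst_poc_dist / src_poc_dist
    return (round(mv[0] * ratio), round(mv[1] * ratio))
```

For example, a List 0 MV (8, -4) spanning a POC distance of 4 is scaled to point to the current frame at distance 2, giving (4, -2).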
- the first stage is PU-level matching
- the second stage is sub-PU-level matching.
- several starting MVs in List 0 and List 1 are selected respectively, and these MVs include the MVs from Merge candidates and MVs from temporal derived MVPs.
- Two different starting MV sets are generated for the two lists. For each MV in one list, an MV pair is generated by composing this MV with a mirrored MV derived by scaling the MV to the other list.
- two reference blocks are compensated by using this MV pair.
- the sum of absolute differences (SAD) between these two blocks is then calculated, and the MV pair with the smallest SAD is the best MV pair.
- a diamond search is performed to refine the best MV pair.
- the refinement precision is 1/8-pel.
- the refinement search range is restricted to within ±8 pixels.
- the final MV pair is the PU-level derived MV pair.
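The PU-level bilateral matching search described above can be sketched as follows. This is a simplified sketch: the mirroring assumes equal temporal distances to the two references, the diamond-search refinement step is omitted, and the fetch callbacks are assumptions standing in for motion compensation.

```python
import numpy as np

def mirror_mv(mv):
    # mirrored MV, assuming equal temporal distance to the two references
    return (-mv[0], -mv[1])

def sad(block_a, block_b):
    # sum of absolute differences between two reference blocks
    return int(np.abs(block_a.astype(np.int64) - block_b.astype(np.int64)).sum())

def best_bilateral_mv_pair(starting_mvs, fetch_list0_block, fetch_list1_block):
    # fetch_list*_block(mv) returns the motion-compensated reference block;
    # the pair with the smallest SAD is the best MV pair
    best_pair, best_cost = None, None
    for mv in starting_mvs:
        pair = (mv, mirror_mv(mv))
        cost = sad(fetch_list0_block(pair[0]), fetch_list1_block(pair[1]))
        if best_cost is None or cost < best_cost:
            best_pair, best_cost = pair, cost
    return best_pair
```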
- the current PU is divided into sub-PUs.
- the depth of the sub-PU is signaled in Sequence Parameter Set (SPS) .
- SPS Sequence Parameter Set
- An example of the minimum sub-PU size is 4x4 block.
- For each sub-PU, several starting MVs in List 0 and List 1 are selected, including the PU-level derived MV, the zero MV, the HEVC-defined collocated Temporal Motion Vector Predictor (TMVP) of the current sub-PU and its bottom-right block, the temporal derived MVP of the current sub-PU, and the MVs of the left and above PUs or sub-PUs.
- TMVP Temporal Motion Vector Predictor
- the best MV pair for the sub-PU-level is determined.
- a diamond search is performed to refine the best MV pair.
- Motion compensation for this sub-PU is performed to generate the predictor for this sub-PU.
- the starting MVs include MVs from Merge candidates and MVs from temporal derived MVPs. Two different starting MV sets are generated for the two lists. For each MV in one list, the SAD cost of the template with the MV is calculated, and the MV with the smallest SAD cost is the best MV. A diamond search is then performed to refine the best MV.
- the refinement precision is 1/8-pel, and the refinement search range is restricted to within ±8 pixels.
- the final refined MV is the PU-level derived MV.
- the MVs in the two lists are generated independently.
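The template SAD cost above can be sketched with an L-shaped template of neighboring reconstructed pixels. The template thickness `t`, the frame-array layout, and the function name are assumptions for illustration, not details from the patent.

```python
import numpy as np

def template_sad(cur_frame, ref_frame, x, y, mv, bw, bh, t=4):
    # L-shaped template: t rows above and t columns to the left of the
    # bw-by-bh block at (x, y); the candidate MV shifts the reference template
    rx, ry = x + mv[0], y + mv[1]
    top = np.abs(cur_frame[y - t:y, x:x + bw].astype(np.int64) -
                 ref_frame[ry - t:ry, rx:rx + bw].astype(np.int64)).sum()
    left = np.abs(cur_frame[y:y + bh, x - t:x].astype(np.int64) -
                  ref_frame[ry:ry + bh, rx - t:rx].astype(np.int64)).sum()
    return int(top + left)
```

Since each list is matched against the current block's own template, the MVs in the two lists can indeed be searched independently.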
- in sub-PU-level matching, the current PU is divided into sub-PUs.
- the depth of the sub-PU is signaled in SPS, and the minimum sub-PU size may be 4x4 block.
- several starting MVs in List 0 and List 1 are selected, including the PU-level derived MV, the zero MV, the HEVC-defined collocated TMVP of the current sub-PU and its bottom-right block, the temporal derived MVP of the current sub-PU, and the MVs of the left and above PUs or sub-PUs.
- the best MV pair for the sub-PU is selected.
- the diamond search is performed to refine the best MV pair.
- Motion compensation for this sub-PU is performed to generate the predictor for this sub-PU.
- the sub-PU-level matching is not applied, and the corresponding MVs are set equal to the final MVs in the first stage.
- the worst-case bandwidth occurs for small blocks.
- an embodiment of PMVD bandwidth reduction changes the refinement range according to the block size. For example, for a block with a block area smaller than or equal to 256, the refinement range is reduced to ±N, where N can be 4 according to one embodiment.
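The block-size-dependent refinement range can be written as a one-line rule; the threshold of 256 and N = 4 follow the example embodiment above, while the function name and the default range of ±8 pixels (taken from the earlier PMVD description) are illustrative assumptions.

```python
def refinement_search_range(width, height, small_block_range=4, default_range=8):
    # blocks with area <= 256 (e.g., 16x16) use the reduced range +-N with
    # N = 4; larger blocks keep the default +-8 pixel search range
    return small_block_range if width * height <= 256 else default_range
```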
- Embodiments of the present invention determine the refinement search range for the PMVD method according to the block size.
- Fig. 8 illustrates an exemplary system block diagram for a Video Encoder 800 implementing embodiments of the present invention.
- a current picture is processed by the Video Encoder 800 on a block-by-block basis, and a current block coded using predictor-based partition is split into multiple partitions according to predicted textures of a first reference block.
- the first reference block is derived by Intra Prediction 810 according to a first Intra prediction mode or the first reference block is derived by Inter Prediction 812 according to a first motion vector (MV) .
- Intra Prediction 810 generates the first reference block based on reconstructed video data of the current picture according to the first Intra prediction mode.
- Inter Prediction 812 performs motion estimation (ME) and motion compensation (MC) to provide the first reference block based on referencing video data from other picture or pictures according to the first MV.
- Some embodiments of splitting the current block according to the predicted textures of the first reference block comprise determining a dominant edge, classifying pixel intensities, or classifying pixel-based motions of the first reference block.
- Each partition of the current block is separately predicted by either Intra Prediction 810 or Inter Prediction 812 to generate predicted regions. For example, all partitions of the current block are predicted by Inter Prediction 812, and each partition is predicted from a reference block pointed to by a motion vector.
- An embodiment blends the boundary of the predicted regions to reduce artifacts at the boundary.
- Intra Prediction 810 or Inter Prediction 812 supplies the predicted regions to Adder 816, which forms residues by subtracting the corresponding pixel values of the predicted regions from the original data of the current block.
- the residues of the current block are further processed by Transformation (T) 818 followed by Quantization (Q) 820.
- the transformed and quantized residual signal is then encoded by Entropy Encoder 834 to form a video bitstream.
- the video bitstream is then packed with side information.
- the transformed and quantized residual signal of the current block is processed by Inverse Quantization (IQ) 822 and Inverse Transformation (IT) 824 to recover the prediction residues. As shown in Fig. 8, the recovered residues are added back to the predicted regions of the current block at Reconstruction (REC) 826 to produce reconstructed video data.
- IQ Inverse Quantization
- IT Inverse Transformation
- the reconstructed video data may be stored in Reference Picture Buffer (Ref. Pict. Buffer) 832 and used for prediction of other pictures.
- the reconstructed video data from REC 826 may be subject to various impairments due to the encoding process; consequently, In-loop Processing Filter (ILPF) 828 is applied to the reconstructed video data before it is stored in the Reference Picture Buffer 832 to further enhance picture quality.
- ILPF In-loop Processing Filter
- Syntax elements are provided to Entropy Encoder 834 for incorporation into the video bitstream.
- a corresponding Video Decoder 900 for Video Encoder 800 of Fig. 8 is shown in Fig. 9.
- the video bitstream encoded by a video encoder is the input to Video Decoder 900 and is decoded by Entropy Decoder 910 to parse and recover the transformed and quantized residual signal and other system information.
- the decoding process of Decoder 900 is similar to the reconstruction loop at Encoder 800, except Decoder 900 only requires motion compensation prediction in Inter Prediction 914.
- a current block coded by predictor-based partition is decoded by Intra Prediction 912, Inter Prediction 914, or both Intra Prediction 912 and Inter Prediction 914.
- a first reference block determined by a first MV or a first Intra prediction is used to split the current block into multiple partitions.
- Mode Switch 916 selects a compensated region from Intra Prediction 912 or compensated region from Inter Prediction 914 according to decoded mode information.
- the transformed and quantized residual signal is recovered by Inverse Quantization (IQ) 920 and Inverse Transformation (IT) 922.
- IQ Inverse Quantization
- IT Inverse Transformation
- the recovered residual signal is added back to the compensated regions of the current block in REC 918 to produce the reconstructed video.
- the reconstructed video is further processed by In-loop Processing Filter (ILPF) 924 to generate final decoded video. If the currently decoded picture is a reference picture, the reconstructed video of the currently decoded picture is also stored in Ref. Pict. Buffer 928 for later pictures in decoding order.
- ILPF In-loop Processing Filter
- Video Encoder 800 and Video Decoder 900 in Fig. 8 and Fig. 9 may be implemented by hardware components, one or more processors configured to execute program instructions stored in a memory, or a combination of hardware and processors.
- a processor executes program instructions to control receiving of input video data.
- the processor is equipped with a single or multiple processing cores.
- the processor executes program instructions to perform functions in some components in Encoder 800 and Decoder 900, and the memory electrically coupled with the processor is used to store the program instructions, information corresponding to the reconstructed images of blocks, and/or intermediate data during the encoding or decoding process.
- the memory in some embodiment includes a non-transitory computer readable medium, such as a semiconductor or solid-state memory, a random access memory (RAM) , a read-only memory (ROM) , a hard disk, an optical disk, or other suitable storage medium.
- the memory may also be a combination of two or more of the non-transitory computer readable medium listed above.
- Encoder 800 and Decoder 900 may be implemented in the same electronic device, so various functional components of Encoder 800 and Decoder 900 may be shared or reused if implemented in the same electronic device.
- For example, Reconstruction 826, Inverse Transformation 824, Inverse Quantization 822, In-loop Processing Filter 828, and Reference Picture Buffer 832 in Fig. 8 may also be used to function as Reconstruction 918, Inverse Transformation 922, Inverse Quantization 920, In-loop Processing Filter 924, and Reference Picture Buffer 928 in Fig. 9, respectively.
- Embodiments of the video data processing method for video coding systems may be implemented in a circuit integrated into a video compression chip or in program code integrated into video compression software to perform the processing described above. For example, the determining of a current mode set for the current block may be realized in program code to be executed on a computer processor, a Digital Signal Processor (DSP), a microprocessor, or a field programmable gate array (FPGA). These processors can be configured to perform particular tasks according to the invention by executing machine-readable software code or firmware code that defines the particular methods embodied by the invention.
- DSP Digital Signal Processor
- FPGA field programmable gate array
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
Video processing apparatuses and methods for encoding or decoding video data include receiving input data associated with a current block in a current picture, determining a first reference block, splitting the current block into multiple partitions according to predicted textures of the first reference block, and separately predicting or compensating each partition of the current block to generate predicted regions or compensated regions. The current block is encoded according to the predicted regions and the original data of the current block, or the current block is decoded by reconstructing the current block according to the compensated regions of the current block.
Priority Applications (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US16/321,907 US20190182505A1 (en) | 2016-08-12 | 2017-08-10 | Methods and apparatuses of predictor-based partition in video processing system |
| TW106127264A TWI655863B (zh) | 2016-08-12 | 2017-08-11 | 視訊處理系統中基於預測子的分割的方法及裝置 |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US201662374059P | 2016-08-12 | 2016-08-12 | |
| US62/374,059 | 2016-08-12 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2018028615A1 true WO2018028615A1 (fr) | 2018-02-15 |
Family
ID=61161730
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CN2017/096715 Ceased WO2018028615A1 (fr) | 2016-08-12 | 2017-08-10 | Procédés et appareils de partition en fonction d'un dispositif de prédiction dans un système de traitement vidéo |
Country Status (3)
| Country | Link |
|---|---|
| US (1) | US20190182505A1 (fr) |
| TW (1) | TWI655863B (fr) |
| WO (1) | WO2018028615A1 (fr) |
Cited By (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2019174594A1 (fr) * | 2018-03-14 | 2019-09-19 | Mediatek Inc. | Procédé et appareil fournissant une structure de division optimisée pour un codage vidéo |
| CN110662076A (zh) * | 2018-06-29 | 2020-01-07 | 北京字节跳动网络技术有限公司 | 子块的边界增强 |
| WO2020073924A1 (fr) * | 2018-10-09 | 2020-04-16 | Mediatek Inc. | Procédé et appareil de codage ou de décodage à l'aide d'échantillons de référence déterminés par des critères prédéfinis |
| CN111937404A (zh) * | 2018-03-26 | 2020-11-13 | 联发科技股份有限公司 | 用以发送视频资料的编码单元分割的方法和装置 |
| CN112368745A (zh) * | 2018-05-15 | 2021-02-12 | 蒙纳士大学 | 用于磁共振成像的图像重建的方法和系统 |
| CN113647105A (zh) * | 2019-01-28 | 2021-11-12 | Op方案有限责任公司 | 指数分区的帧间预测 |
| CN113647104A (zh) * | 2019-01-28 | 2021-11-12 | Op方案有限责任公司 | 在以自适应区域数量进行的几何分区中的帧间预测 |
| RU2829207C2 (ru) * | 2019-01-28 | 2024-10-30 | Оп Солюшнз, Ллк | Межкадровое предсказание при экспоненциальном разделении |
Families Citing this family (9)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN119922329A (zh) * | 2017-11-01 | 2025-05-02 | 交互数字Vc控股公司 | 用于合并模式的解码器侧运动矢量细化和子块运动导出 |
| EP3780624A1 (fr) | 2018-04-19 | 2021-02-17 | LG Electronics Inc. | Traitement d'images et dispositif pour sa mise en oeuvre |
| US11343541B2 (en) * | 2018-04-30 | 2022-05-24 | Hfi Innovation Inc. | Signaling for illumination compensation |
| KR102595146B1 (ko) * | 2018-09-03 | 2023-10-26 | 후아웨이 테크놀러지 컴퍼니 리미티드 | 비디오 인코더, 비디오 디코더 및 그에 대응하는 방법 |
| CN113545081B (zh) | 2019-03-14 | 2024-05-31 | 寰发股份有限公司 | 视频编解码系统中的处理视频数据的方法以及装置 |
| CN119420900A (zh) * | 2019-04-25 | 2025-02-11 | 华为技术有限公司 | 图像预测方法、装置和计算机可读存储介质 |
| US20220103846A1 (en) * | 2020-09-28 | 2022-03-31 | Alibaba Group Holding Limited | Supplemental enhancement information message in video coding |
| WO2023118259A1 (fr) * | 2021-12-21 | 2023-06-29 | Interdigital Vc Holdings France, Sas | Partitionnement de bloc vidéo sur la base d'informations de profondeur ou de mouvement |
| CN121040069A (zh) * | 2023-10-08 | 2025-11-28 | 海信视像科技股份有限公司 | 视频编码方法、视频解码方法及装置 |
Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20070064796A1 (en) * | 2005-09-16 | 2007-03-22 | Sony Corporation And Sony Electronics Inc. | Natural shaped regions for motion compensation |
| WO2012022648A1 (fr) * | 2010-08-19 | 2012-02-23 | Thomson Licensing | Procédé permettant de reconstruire un bloc actuel d'une image et procédé de codage correspondant, dispositifs correspondants ainsi que support de stockage ayant une image codée dans un train de bits |
| US20120147961A1 (en) * | 2010-12-09 | 2012-06-14 | Qualcomm Incorporated | Use of motion vectors in evaluating geometric partitioning modes |
Family Cites Families (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US8243790B2 (en) * | 2007-09-28 | 2012-08-14 | Dolby Laboratories Licensing Corporation | Treating video information |
| US9503702B2 (en) * | 2012-04-13 | 2016-11-22 | Qualcomm Incorporated | View synthesis mode for three-dimensional video coding |
| US20130287109A1 (en) * | 2012-04-29 | 2013-10-31 | Qualcomm Incorporated | Inter-layer prediction through texture segmentation for video coding |
| WO2016178485A1 (fr) * | 2015-05-05 | 2016-11-10 | 엘지전자 주식회사 | Procédé et dispositif de traitement d'unité de codage dans un système de codage d'image |
2017
- 2017-08-10 US US16/321,907 patent/US20190182505A1/en not_active Abandoned
- 2017-08-10 WO PCT/CN2017/096715 patent/WO2018028615A1/fr not_active Ceased
- 2017-08-11 TW TW106127264A patent/TWI655863B/zh not_active IP Right Cessation
Patent Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20070064796A1 (en) * | 2005-09-16 | 2007-03-22 | Sony Corporation And Sony Electronics Inc. | Natural shaped regions for motion compensation |
| WO2012022648A1 (fr) * | 2010-08-19 | 2012-02-23 | Thomson Licensing | Procédé permettant de reconstruire un bloc actuel d'une image et procédé de codage correspondant, dispositifs correspondants ainsi que support de stockage ayant une image codée dans un train de bits |
| US20120147961A1 (en) * | 2010-12-09 | 2012-06-14 | Qualcomm Incorporated | Use of motion vectors in evaluating geometric partitioning modes |
Non-Patent Citations (2)
| Title |
|---|
| Abdullah A. Muhit et al., "A Fast Approach for Geometry-Adaptive Block Partitioning", Picture Coding Symposium, 21 July 2009 (2009-07-21), XP031491640 * |
| Marta Karczewicz et al., "Video coding technology proposal by Qualcomm Inc.", JCT-VC of ITU-T SG16 WP3 and ISO/IEC JTC1/SC29/WG11, 1st Meeting: Dresden, DE, JCTVC-A121, 23 April 2010 (2010-04-23) * |
Cited By (18)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US11516513B2 (en) | 2018-03-14 | 2022-11-29 | Mediatek Inc. | Method and apparatus of optimized splitting structure for video coding |
| WO2019174594A1 (fr) * | 2018-03-14 | 2019-09-19 | Mediatek Inc. | Procédé et appareil fournissant une structure de division optimisée pour un codage vidéo |
| CN111937404B (zh) * | 2018-03-26 | 2023-12-15 | 寰发股份有限公司 | 一种用于视频编码器或解码器的视频编解码方法及装置 |
| US11785258B2 (en) | 2018-03-26 | 2023-10-10 | Hfi Innovation Inc. | Methods and apparatus for signaling coding unit partitioning of video data |
| CN111937404A (zh) * | 2018-03-26 | 2020-11-13 | 联发科技股份有限公司 | 用以发送视频资料的编码单元分割的方法和装置 |
| CN112368745A (zh) * | 2018-05-15 | 2021-02-12 | 蒙纳士大学 | 用于磁共振成像的图像重建的方法和系统 |
| CN110662076B (zh) * | 2018-06-29 | 2022-10-04 | 北京字节跳动网络技术有限公司 | 子块的边界增强 |
| CN110662076A (zh) * | 2018-06-29 | 2020-01-07 | 北京字节跳动网络技术有限公司 | 子块的边界增强 |
| TWI737003B (zh) * | 2018-10-09 | 2021-08-21 | 聯發科技股份有限公司 | 使用由預定義準則確定的參考樣本進行編碼或解碼的方法和設備 |
| US11178397B2 (en) | 2018-10-09 | 2021-11-16 | Mediatek Inc. | Method and apparatus of encoding or decoding using reference samples determined by predefined criteria |
| CN112806006A (zh) * | 2018-10-09 | 2021-05-14 | 联发科技股份有限公司 | 使用由预定义准则确定的参考样本以编解码的方法和设备 |
| WO2020073924A1 (fr) * | 2018-10-09 | 2020-04-16 | Mediatek Inc. | Procédé et appareil de codage ou de décodage à l'aide d'échantillons de référence déterminés par des critères prédéfinis |
| CN112806006B (zh) * | 2018-10-09 | 2024-04-16 | 寰发股份有限公司 | 使用由预定义准则确定的参考样本以编解码的方法和设备 |
| CN113647105A (zh) * | 2019-01-28 | 2021-11-12 | Op方案有限责任公司 | 指数分区的帧间预测 |
| CN113647104A (zh) * | 2019-01-28 | 2021-11-12 | Op方案有限责任公司 | 在以自适应区域数量进行的几何分区中的帧间预测 |
| EP3918791A4 (fr) * | 2019-01-28 | 2022-03-16 | OP Solutions, LLC | Prédiction inter dans une division exponentielle |
| RU2829207C2 (ru) * | 2019-01-28 | 2024-10-30 | Оп Солюшнз, Ллк | Межкадровое предсказание при экспоненциальном разделении |
| US12537942B2 (en) | 2024-08-09 | 2026-01-27 | Hfi Innovation Inc. | Method and apparatus of encoding or decoding using reference samples determined by predefined criteria |
Also Published As
| Publication number | Publication date |
|---|---|
| TWI655863B (zh) | 2019-04-01 |
| TW201813393A (zh) | 2018-04-01 |
| US20190182505A1 (en) | 2019-06-13 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| WO2018028615A1 (fr) | Procédés et appareils de partition en fonction d'un dispositif de prédiction dans un système de traitement vidéo | |
| US11889056B2 (en) | Method of encoding or decoding video blocks by current picture referencing coding | |
| CN111937391B (zh) | 用于视频编解码系统中的子块运动补偿的视频处理方法和装置 | |
| TWI702834B (zh) | 視訊編解碼系統中具有重疊塊運動補償的視訊處理的方法以及裝置 | |
| US11303900B2 (en) | Method and apparatus for motion boundary processing | |
| CN113228638B (zh) | 在区块分割中条件式编码或解码视频区块的方法和装置 | |
| US11785242B2 (en) | Video processing methods and apparatuses of determining motion vectors for storage in video coding systems | |
| KR20220005101A (ko) | 병렬 처리를 위한 움직임 정보를 처리하는 영상 처리 방법, 그를 이용한 영상 복호화, 부호화 방법 및 그 장치 | |
| CN116896640A (zh) | 视频编解码方法及相关装置 | |
| WO2021093730A1 (fr) | Procédé et appareil de signalisa(ion de résolution adaptative de différence de vecteur de mouvement dans le codage vidéo | |
| US12501026B2 (en) | Method and apparatus for low-latency template matching in video coding system | |
| KR20230113661A (ko) | 효과적인 차분양자화 파라미터 전송 기반 영상 부/복호화방법 및 장치 | |
| EP4561072A1 (fr) | Procédé de codage/décodage d'image pour gestion de mise en correspondance de modèle, procédé de transmission de flux binaire et support d'enregistrement dans lequel est stocké un flux binaire | |
| TWI796979B (zh) | 視訊編碼方法及相關裝置 | |
| WO2025077755A1 (fr) | Procédés et appareil de mémoire tampon partagée pour un héritage de modèle de prédiction intra par extrapolation dans un codage vidéo | |
| EP4513868A1 (fr) | Procédé de codage/décodage d'images utilisant une mise en correspondance de modèle, procédé de transmission de flux binaire et support d'enregistrement dans lequel est stocké un flux binaire | |
| WO2024017224A1 (fr) | Affinement de candidat affine | |
| WO2025148640A1 (fr) | Procédé et appareil de mélange basé sur la régression pour améliorer la fusion de prédiction intra dans un système de codage vidéo | |
| EP4557732A1 (fr) | Procédé de codage/décodage d'image basé sur un affinement d'informations de mouvement, procédé de transmission de flux binaire et support d'enregistrement stockant un flux binaire | |
| WO2025218691A1 (fr) | Procédés et appareil destinés à déterminer de manière adaptative un type de transformée sélectionné dans des systèmes de codage d'image et de vidéo | |
| WO2024235244A1 (fr) | Procédés et appareil de sélection de type de transformée dans un système de codage vidéo | |
| WO2024007789A1 (fr) | Génération de prédiction avec contrôle hors limite dans un codage vidéo | |
| KR20250172382A (ko) | 영상 부호화 방법 및 장치, 그리고 비트스트림을 저장한 기록 매체 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 17838745 Country of ref document: EP Kind code of ref document: A1 |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |
|
| 122 | Ep: pct application non-entry in european phase |
Ref document number: 17838745 Country of ref document: EP Kind code of ref document: A1 |