WO2019191717A1 - Template refined bi-prediction for video coding - Google Patents

Template refined bi-prediction for video coding

Info

Publication number
WO2019191717A1
WO2019191717A1 PCT/US2019/025046 US2019025046W WO2019191717A1 WO 2019191717 A1 WO2019191717 A1 WO 2019191717A1 US 2019025046 W US2019025046 W US 2019025046W WO 2019191717 A1 WO2019191717 A1 WO 2019191717A1
Authority
WO
WIPO (PCT)
Prior art keywords
reference block
block
template
frame
transcoder
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/US2019/025046
Other languages
English (en)
Inventor
Wenhao Zhang
Deliang FU
Min Gao
Juncheng MA
Chen Liu
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hulu LLC
Original Assignee
Hulu LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hulu LLC filed Critical Hulu LLC
Priority to EP19774721.5A priority Critical patent/EP3777176A4/fr
Publication of WO2019191717A1 publication Critical patent/WO2019191717A1/fr
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103Selection of coding mode or of prediction mode
    • H04N19/105Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51Motion estimation or motion compensation
    • H04N19/573Motion compensation with multiple frame prediction using two or more reference frames in a given prediction direction
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/157Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
    • H04N19/159Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/176Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/189Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the adaptation method, adaptation tool or adaptation type used for the adaptive coding
    • H04N19/192Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the adaptation method, adaptation tool or adaptation type used for the adaptive coding the adaptation method, adaptation tool or adaptation type being iterative or recursive
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51Motion estimation or motion compensation
    • H04N19/577Motion compensation with bidirectional frame interpolation, i.e. using B-pictures

Definitions

  • Inter prediction such as inter bi-prediction
  • a decoder copies two inter predicted reference blocks from two reference frames and blends the reference blocks together to generate a prediction block for the current block being decoded.
  • the reference blocks are selected using a motion search that is based on a linear motion trajectory assumption, which assumes that a pixel maintains a linear path along a direction.
  • the motion search performs a first search in a first linear direction for a first reference block and a second search in a second linear direction for a second reference block. Both searches assume the pixel maintains a linear path.
  • FIG. 1 depicts a simplified system for using weighted bi-prediction according to some embodiments.
  • FIG. 2 shows an example of using template refined bi-prediction according to some embodiments.
  • FIG. 3 depicts an example of a template that is used for transcoder or decoder side motion prediction refinement according to some embodiments.
  • FIG. 4 depicts a simplified flowchart of a method for refining the motion prediction according to some embodiments.
  • FIG. 5 depicts a search process for a new reference block according to some embodiments.
  • FIG. 6 depicts a simplified flowchart of a recalculation of the distances and anchor point using the new reference block R1 according to some embodiments.
  • FIG. 7 depicts an example of a transcoding system according to some embodiments.
  • FIG. 8 depicts an example of a decoding system according to some embodiments.
  • Some embodiments include a transcoder side or decoder side process that refines the motion vectors that are used in the coding process.
  • the process may use solely a transcoder side process, solely a decoder side process, or a combination of both a transcoder side and a decoder side.
  • the transcoder side may first determine motion vectors for a current block in a frame, which point to reference blocks in other frames. The transcoder may then search for reference blocks that may be a better prediction for the current block. If the transcoder determines one or more different reference blocks, the transcoder may perform different processes to signal the use of different reference blocks to the decoder. For example, the transcoder may signal the motion vectors for the different reference blocks and not include the motion vectors for the original reference blocks, if replaced, in the encoded bitstream. In other embodiments, the transcoder may only signal that different reference blocks should be used and the decoder determines the position of the different reference blocks using the decoder side process described below.
  • the transcoder may insert a value for a flag that indicates whether the decoder should perform the search or not.
  • the transcoder can insert a bit value flag, e.g., a decoder_motion_derive_flag, for each inter coded block in the encoded bitstream. If the flag is a first value (e.g., 0), the decoder will use the motion vectors in the encoded bitstream, and if the flag is a second value (e.g., 1), the decoder will adaptively search for different motion vectors.
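As an illustration only, the flag-driven branch above can be sketched in Python; the function name decode_inter_block and the dictionary-based flag lookup are hypothetical, not an actual codec API:

```python
def decode_inter_block(block, flags, signaled_mvs):
    """Branch on the decoder_motion_derive_flag for an inter coded block."""
    if flags.get("decoder_motion_derive_flag", 0) == 0:
        # First value (0): use the motion vectors from the encoded bitstream.
        return signaled_mvs
    # Second value (1): adaptively search for different motion vectors.
    return derive_motion_vectors(block)

def derive_motion_vectors(block):
    # Stand-in for the decoder-side template/bilateral search.
    raise NotImplementedError("decoder-side motion search")
```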
  • a decoder may use client computational power to search for new motion vectors at the video decoder side. For example, if the transcoder did not perform a search for different reference blocks, then the decoder could perform that search. However, even if the transcoder performed the search for different reference blocks, the decoder may also perform the search again. In some embodiments, the decoder may perform the search if a flag is set by the transcoder to a value that indicates the decoder should perform the search. However, the decoder may always perform the search or make the determination on its own.
  • the search may be a template search or bilateral search, or in some embodiments, the template search and the bilateral search are combined.
  • the search can be used to predict some or all of the inter bi-prediction blocks, which results in better prediction of pixels.
  • the distances between the template of the current block being decoded and the reference blocks’ templates are compared to determine which reference block may be a better reference.
  • the determination of which is a better reference may be based on a measurement; for example, the reference block that is closest to the current block may be more reliable as a reference (e.g., a smaller distance may indicate that the reference block includes content more similar to the current block).
  • the process performs a bilateral search on both reference frames to refine the motion vectors.
  • the two reference blocks are expected to converge to the optimal positions, and as a result, the accuracy of bi-prediction will be improved, and higher overall video compression performance can be obtained, which saves bandwidth in video transmission.
  • the decoder side process requires using decoder computation resources to enhance video compression, but the client may have client side computing resources that are available to use with this process.
  • FIG. 1 depicts a simplified system 100 for using weighted bi-prediction according to some embodiments.
  • System 100 transcodes a source video asset, which may be any type of video, such as for a television show, movie, or video clip.
  • the source video may need to be transcoded into one or more formats, such as one or more bitrates.
  • a server system 102 sends an encoded bitstream to client 104.
  • server system 102 may be sending a video to a client 104 for playback.
  • Server system 102 includes a transcoder 106 that transcodes a video into an encoded bitstream.
  • Transcoder 106 may be a software video processor/transcoder configured on a central processing unit (CPU), a hardware accelerated video processor/transcoder with a graphical processing unit (GPU), a field programmable gate array (FPGA), and/or a hardware processor/transcoder implemented in an application-specific integrated circuit (ASIC).
  • Transcoding may be the conversion from one digital format to another digital format. Transcoding may involve decoding the source format and encoding the source video into another digital format, or converting the source content into videos with a specific resolution, framerate, bitrate, codec, etc. Also, encoding may be the conversion from analog source content to a digital format. As used herein, the term transcoding may include encoding.
  • a transcoder bi-prediction block 108 performs bi prediction for a current block of a current frame. Inter-prediction uses reference blocks from frames other than the current frame. Bi-prediction uses a first reference block from a first frame and a second reference block from a second frame. In some embodiments, the first frame is before the current frame and the second frame is after the current frame; however, the first frame and the second frame may be both before the current frame or both after the current frame.
  • Transcoder bi-prediction block 108 identifies a first reference block in a first reference frame and a second reference block in a second reference frame using a motion search process. After identifying the first reference block and the second reference block, transcoder bi-prediction block 108 determines signaling values for the bi-prediction mode. The signaling values may be the values for a first motion vector that points from the current block to the first reference block and a second motion vector that points from the current block to the second reference block. Also, transcoder bi-prediction block 108 inserts a flag that indicates the bi-prediction mode should be used in the decoding process. Transcoder bi-prediction block 108 inserts these signaling values into the encoded bitstream.
  • transcoder bi-prediction block 108 may refine the values of the motion vectors and signal the refined values to a decoder 112 in client 104.
  • client 104 includes a decoder 112 that decodes the encoded bitstream.
  • a decoder bi-prediction block 110 may refine the values of the motion vectors to select different reference blocks.
  • Transcoder bi-prediction block 108 may insert a value for a flag that indicates whether decoder 112 should adaptively search for new motion vectors.
  • FIG. 2 shows an example of using template refined bi-prediction according to some embodiments.
  • Transcoder 106 transcodes a current frame 204. In the transcoding process, transcoder 106 decodes previously transcoded frames to use in the transcoding process of other frames. Here, transcoder 106 has already transcoded and then decoded reference frame 202 (reference frame 0) and reference frame 206 (reference frame 1). Transcoder 106 selects motion vectors (MV) to reference the positions of the reference blocks that are used to predict the current block C. Transcoder 106 may use various motion search methods to select the motion vectors for the reference blocks. Then, transcoder 106 may insert the motion vectors to use in the encoded bitstream along with a flag with a value that indicates whether or not to use template refined bi-prediction on the decoder side.
  • Decoder 112 receives the encoded bitstream and starts decoding frames. Using the example in FIG. 2, decoder 112 is decoding a current frame 204. Decoder 112 has already decoded reference frame 202 (reference frame 0) and reference frame 206 (reference frame 1). Decoder 112 uses motion vectors to select the positions of the reference blocks that are used to predict the current block C. For example, transcoder 106 may have encoded the motion vectors for the current block in the encoded bitstream. A motion vector MV0 208-1 points to a reference block R0 210-1 in reference frame 0 and a motion vector MV1 208-2 points to a reference block R1 210-2 in reference frame 1. Decoder 112 generates a prediction block from reference block R0 and reference block R1, and applies the residual to the prediction block to decode the current block.
  • Transcoder 106 or decoder 112 uses the pixels of reference block R0 and reference block R1 to predict the pixels of current block C 212.
  • an average blending pixel by pixel is used: C = ½·R0 + ½·R1, where C is the pixels of the current block, R0 is the pixels of reference block R0, and R1 is the pixels of reference block R1.
  • the values of ½ in the equation weight the pixels of reference block R0 and reference block R1 equally. Accordingly, the pixels of reference block R0 and reference block R1 are given equal weight to predict the pixels of current block C.
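The equal-weight blending above can be sketched as follows, assuming the reference blocks are integer pixel arrays; this is a minimal illustration, not the codec's actual interpolation path:

```python
import numpy as np

def blend_bi_prediction(r0, r1):
    """Average the reference blocks pixel by pixel: C = 1/2 * R0 + 1/2 * R1."""
    r0 = np.asarray(r0, dtype=np.int32)
    r1 = np.asarray(r1, dtype=np.int32)
    # Adding 1 before the right shift rounds to the nearest integer.
    return (r0 + r1 + 1) >> 1
```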
  • transcoder 106 may use a motion search that is based on a linear motion trajectory assumption, which searches in separate linear directions to select reference blocks in reference frames. However, combining two linear uni-direction motion predictions into a bi-motion prediction may be sub-optimal. In some embodiments, transcoder 106 or decoder 112 may refine the motion vectors to improve the motion prediction.
  • FIG. 3 depicts an example of a template 302 that is used for transcoder or decoder side motion prediction refinement according to some embodiments.
  • Transcoder 106 or decoder 112 has decoded some blocks in current frame 204, which are represented with shading, and has not yet decoded the blocks shown without shading.
  • Transcoder 106 or decoder 112 has already decoded reference frame 202 (reference frame 0) and reference frame 206 (reference frame 1).
  • Transcoder 106 or decoder 112 determines a shape, such as an L shape, of existing decoded pixels at 302.
  • the L shaped region is a template of a width W and a height H.
  • the L shaped region may be neighboring pixels to a current block 212 of MxN size being decoded.
  • Although an L shaped region is described, other types of shapes may be used; for example, the width of the template may not extend beyond the top side of the current block.
  • Transcoder 106 or decoder 112 identifies a template 306-1 in the reference frame 0 based on reference block 0 and a template 306-2 in the reference frame 1 based on reference block 1.
  • Template 306-1 and template 306-2 may have the same dimensions as template 302, such as the WxH dimensions.
  • template 306-1 and template 306-2 may also be positioned the same relative to reference blocks 0 and 1, such as forming an L-shape template next to the left and top sides of the reference blocks 0 and 1, respectively.
  • Transcoder 106 or decoder 112 uses template 306-1 in the reference frame 0, template 302 for the current block, and template 306-2 in the reference frame 1 to refine the motion vectors. The templates are used because the current block has not been decoded yet. Thus, transcoder 106 or decoder 112 uses decoded pixels in the current frame in the motion prediction refinement process.
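A sketch of gathering the pixels of an L-shaped template is shown below; it assumes the frame is a 2-D array with the current MxN block's top-left corner at (x, y), and the function name and layout are illustrative:

```python
import numpy as np

def l_shaped_template(frame, x, y, m, n, w, h):
    """Collect the L-shaped template of an MxN block: a strip of height H
    above the block (including the top-left corner) and a strip of width W
    to its left, using only already-decoded pixels."""
    top = frame[y - h:y, x - w:x + m]   # strip above the block
    left = frame[y:y + n, x - w:x]      # strip left of the block
    return np.concatenate([top.ravel(), left.ravel()])
```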
  • transcoder 106 selects the reference frames and motion vectors.
  • decoder 112 determines the reference frames to use from the encoded bitstream, such as reference frame 0 and reference frame 1.
  • the encoded bitstream includes motion vectors for the current block being decoded.
  • Transcoder 106 or decoder 112 uses the motion vectors to select the positions of the reference blocks R0 and R1.
  • Transcoder 106 or decoder 112 selects templates. For example, transcoder 106 or decoder 112 selects an L shaped region around the reference blocks R0 and R1 as the templates 306-1 and 306-2, respectively. Also, transcoder 106 or decoder 112 selects a similarly shaped template 302 for the current block.
  • Transcoder 106 or decoder 112 uses templates 302, 306-1, and 306-2 to refine the motion prediction. For example, transcoder 106 or decoder 112 may change one of the reference blocks or both of the reference blocks.
  • FIG. 4 depicts a simplified flowchart 400 of a method for refining the motion prediction according to some embodiments. Transcoder 106 or decoder 112 first selects an anchor point, which designates a reference block that will not change in this iteration of the process. In the process, at 402, transcoder 106 or decoder 112 calculates the distance distT0 between the template 306-1 of reference block R0 and the template 302 of the current block.
  • transcoder 106 or decoder 112 calculates a distance distT1 between template 306-2 of reference block R1 and the template 302 of the current block.
  • the distance may be based on a characteristic of the blocks, such as a local complexity, texture similarity, color difference, temporal distance, coding parameters such as a quantization parameter (QP), block size, coding mode, etc.
  • the comparison of characteristics may use a per-pixel distance within the templates, which can be calculated by Sum of Absolute Difference (SAD), Sum of Square Difference (SSD), or Sum of Absolute Transformed Difference (SATD).
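The per-pixel distances named above can be sketched as follows (SATD is omitted here; it additionally applies a transform such as a Hadamard transform to the difference before summing):

```python
import numpy as np

def sad(a, b):
    """Sum of Absolute Difference between two equally sized pixel arrays."""
    return int(np.abs(a.astype(np.int64) - b.astype(np.int64)).sum())

def ssd(a, b):
    """Sum of Square Difference between two equally sized pixel arrays."""
    d = a.astype(np.int64) - b.astype(np.int64)
    return int((d * d).sum())
```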
  • FIG. 5 depicts a search process for a new reference block according to some embodiments.
  • Transcoder 106 or decoder 112 fixes the anchor point and then searches in another reference frame in a search region 502.
  • Transcoder 106 or decoder 112 may perform the search in reference frame 1, or in any reference frame. For example, if the distance distT1 is above a threshold, then decoder 112 may decide to search in another reference frame. But if the distance distT1 is below the threshold, this may indicate that reference frame 1 is still a good candidate for use as a reference because the distance is not large.
  • reference block R0 is the anchor point
  • the search attempts to find a better reference block in reference frame 1 than reference block R1.
  • Transcoder 106 or decoder 112 refines the search in a search region centered at reference block R1 in reference frame 1.
  • the search region may be pre-defined according to the video resolution and the strength of motion in the video. For example, for 1080p video, a search region of 64x64 centered at the anchor point may be enough to capture regular motion.
  • the search attempts to find a reference block R1' at 504 that minimizes the distance distR0R1', which is the distance between reference block R0 and reference block R1'.
  • the distance can be calculated by Sum of Absolute Difference (SAD), Sum of Square Difference (SSD), or Sum of Absolute Transformed Difference (SATD), such as by: distR0R1' = SAD(R0, R1') = Σ|R0(i,j) − R1'(i,j)|, where R0 is reference block R0 and R1' is reference block R1'.
  • Transcoder 106 or decoder 112 may calculate the refined position using: R1' = argmin(distR0R1'), where argmin selects the candidate position that minimizes the distance between reference block R0 and reference block R1'. If transcoder 106 or decoder 112 finds a reference block R1' with a distance smaller than the distance distR0R1 in search region 502, transcoder 106 or decoder 112 updates the reference block position to the position of reference block R1'. For example, transcoder 106 or decoder 112 updates the motion vector MV1 to point to reference block R1'.
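A brute-force version of this argmin search might look like the sketch below, which assumes SAD as the distance and a square search window; the name search_r1_prime and the window handling are illustrative:

```python
import numpy as np

def search_r1_prime(ref_frame1, r0, center_xy, search=8):
    """Scan a window around the current R1 position for the candidate block
    R1' that minimizes distR0R1' (here, SAD against reference block R0)."""
    n, m = r0.shape
    cx, cy = center_xy
    best_pos, best_dist = center_xy, None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            x, y = cx + dx, cy + dy
            if x < 0 or y < 0:
                continue  # candidate window falls outside the frame
            cand = ref_frame1[y:y + n, x:x + m]
            if cand.shape != r0.shape:
                continue  # candidate window falls outside the frame
            d = int(np.abs(cand.astype(np.int64) - r0.astype(np.int64)).sum())
            if best_dist is None or d < best_dist:
                best_dist, best_pos = d, (x, y)
    return best_pos, best_dist
```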
  • Transcoder 106 or decoder 112 uses the same procedure as described above to select templates. For example, transcoder 106 or decoder 112 selects reference frame 0 and reference frame 1, which are the same in this case. Then, transcoder 106 or decoder 112 uses motion vectors to select the positions of the reference blocks R0 and R1. The motion vector for reference block R1 may have changed because a new reference block R1 was selected.
  • transcoder 106 or decoder 112 selects templates for reference blocks R0 and R1.
  • transcoder 106 or decoder 112 re-calculates distance distT0 and distance distT1. For example, transcoder 106 or decoder 112 recalculates the distance distT0 between the template 306-1 of reference block R0 and the template 302 of the current block. Also, transcoder 106 or decoder 112 recalculates the distance distT1 between the template 306-2 of new reference block R1 and the template 302 of the current block. Transcoder 106 or decoder 112 then selects a new anchor point based on the distances.
  • transcoder 106 or decoder 112 determines if an iteration count has reached a maximum. The iteration count limit may be used to save processing time because the refinement may have converged to an acceptable range and further iteration may not bring additional benefits. If the iteration count is reached, at 610, the search ends and transcoder 106 or decoder 112 calculates the average blending pixel by pixel as described above to create a prediction block. At 608, transcoder 106 or decoder 112 determines if the distance distT0 or the distance distT1 is larger than in the prior iteration.
  • transcoder 106 or decoder 112 restores reference block R0 and reference block R1 to the prior iteration and the search process may end. Also, at 608, transcoder 106 or decoder 112 calculates the average blending pixel by pixel as described above. The process may end because the distance has gotten worse and the last iteration may be the best result from within region 502.
  • transcoder 106 or decoder 112 determines if the anchor point changes to the other reference frame. If so, the process reiterates to perform the process described in FIG. 4. Transcoder 106 or decoder 112 uses the new anchor point and performs the search in a search region in the other reference frame. The process continues at 602 in FIG. 6 after the search for a new reference block.
  • the use of a new anchor point may improve the reference block in the other reference frame. For instance, transcoder 106 or decoder 112 may find a new reference block R0' that is closer in distance to the current block.
  • transcoder 106 or decoder 112 may determine another iteration should be performed. Since both reference blocks have changed positions, the search region may change again, and transcoder 106 or decoder 112 may find another reference block that may be closer in distance to the current block.
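The overall anchor-and-search iteration of FIGS. 4-6 might be sketched as below; calc_dists and search_other stand in for the template-distance and block-search steps and are assumptions, not real APIs:

```python
def refine_bi_prediction(calc_dists, search_other, max_iters=4):
    """Iteratively fix an anchor reference block, search for a better
    non-anchor block, and stop when the distances stop improving, the
    anchor stabilizes, or the iteration limit is reached."""
    dist_t0, dist_t1 = calc_dists()
    prev = (dist_t0, dist_t1)
    anchor = 0 if dist_t0 <= dist_t1 else 1   # closer template is the anchor
    for _ in range(max_iters):
        search_other(anchor)                  # refine the non-anchor block
        dist_t0, dist_t1 = calc_dists()       # re-evaluate both templates
        if dist_t0 > prev[0] or dist_t1 > prev[1]:
            break                             # worse: keep the prior iteration
        prev = (dist_t0, dist_t1)
        new_anchor = 0 if dist_t0 <= dist_t1 else 1
        if new_anchor == anchor:
            break                             # anchor unchanged: stop searching
        anchor = new_anchor                   # anchor switched: iterate again
    return prev
```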
  • transcoder 106 or decoder 112 calculates the average blending pixel by pixel.
  • the reference block in the other reference frame may be close to the optimal reference block. That is, there may not be another reference block in the search region that is better.
  • the prediction block may provide a better prediction for the current block.
  • the better prediction may enhance the encoder side motion prediction using the resources used by transcoder 106 or decoder 112 to find the one or more reference blocks.
  • FIG. 7 depicts an example of a transcoding system according to some embodiments.
  • a video codec framework includes a set of fundamental components: block partitioning, inter and intra prediction, transform and quantization, and entropy coding.
  • Transcoder 306 receives a frame of a video, which is first split into non-overlapping coding blocks for further processing. To cope with different video content characteristics, complex regions are covered by partitions with smaller sizes, while simple regions are covered by larger partitions. Multiple block patterns and shapes may be used together; for example, quad-tree, triple-tree, and binary-tree patterns can all be used together, and square blocks and rectangular blocks can also be used together.
  • Prediction is used to remove the redundancy of a video signal. By subtracting the predicted pixel values from the pixels being processed, the amplitude of a residual signal can be significantly reduced, thus the resulting bitstream size can be reduced.
  • An intra prediction block 710, which uses reference pixels in the current frame, aims to reduce the spatial redundancy within the frame.
  • An inter prediction block 712, which uses reference pixels from neighboring frames, attempts to remove the temporal redundancy between frames. A motion estimation and compensation block 716 may be a sub-module of inter prediction at the transcoder side, which captures the motion trace of objects among adjacent frames and generates reference pixels for inter prediction.
  • a transform and quantization block 704 uses the residual pixels after intra or inter prediction. Transform and quantization block 704 performs a transform operation that represents the residual signal in a frequency domain. Because the human visual system is more sensitive to low frequency components of the video signal than to high frequency components, quantization is designed to further compress the residual signal by reducing the precision of high frequency signals.
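As a toy illustration of this precision-reduction idea (not the codec's actual quantizer), a uniform quantizer with step size qstep collapses small coefficients to zero:

```python
def quantize(coeffs, qstep):
    """Map transform coefficients to integer levels; a larger qstep discards
    more precision, collapsing small coefficients to zero."""
    return [round(c / qstep) for c in coeffs]

def dequantize(levels, qstep):
    """Reconstruct approximate coefficients from the quantized levels."""
    return [lvl * qstep for lvl in levels]
```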
  • transcoder 306 contains decoding modules to make sure both transcoder 306 and decoder 112 are using identical mathematical processes.
  • an inverse transform and inverse quantization block 708 is similar to the same block on the decoder side.
  • Inverse transform and inverse quantization block 708 reconstructs pixels using the intra and inter prediction.
  • An in-loop filter 714 removes visual artifacts that are introduced by the above-mentioned processes.
  • Various filtering methods are applied on the reconstructed frame in a cascaded way to reduce different artifacts, including but not limited to the blocking artifacts, mosquito artifacts, color banding effects, etc.
  • An entropy encoding block 706 may further compress the bitstream using a model-based method.
  • Transcoder 306 transmits the resulting encoded bitstream to decoder 310 over a network or other types of medium.
  • FIG. 8 depicts an example of a decoding system according to some embodiments.
  • Decoder 310 receives the encoded bitstream and inputs it into an entropy decoding block 802 to recover the information needed for decoding process.
  • a frame can be decoded by using an inverse transform and inverse quantization block 804, an intra prediction block 806 or inter prediction block 808, a motion compensation block 810, and an in-loop filtering block 812, in the same way as on the transcoder side, to build a decoded frame.
  • Some embodiments may be implemented in a non-transitory computer-readable storage medium for use by or in connection with the instruction execution system, apparatus, system, or machine.
  • the computer-readable storage medium contains instructions for controlling a computer system to perform a method described by some embodiments.
  • the computer system may include one or more computing devices.
  • the instructions, when executed by one or more computer processors, may be configured to perform that which is described in some embodiments.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

According to one embodiment, a method includes selecting a first reference block in a first reference frame and a second reference block in a second reference frame. The first reference block and the second reference block are used to predict a current block. The first reference block is selected as an anchor point. The method then includes searching the second reference frame for a third reference block and determining whether the third reference block is a better reference block than the second reference block for predicting the current block.
PCT/US2019/025046 2018-03-30 2019-03-29 Bi-prédiction à modèle affiné pour codage vidéo Ceased WO2019191717A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP19774721.5A EP3777176A4 (fr) 2018-03-30 2019-03-29 Bi-prédiction à modèle affiné pour codage vidéo

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US201862650871P 2018-03-30 2018-03-30
US62/650,871 2018-03-30
US16/370,322 US10992930B2 (en) 2018-03-30 2019-03-29 Template refined bi-prediction for video coding
US16/370,322 2019-03-29

Publications (1)

Publication Number Publication Date
WO2019191717A1 true WO2019191717A1 (fr) 2019-10-03

Family

ID=68054071

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2019/025046 Ceased WO2019191717A1 (fr) 2018-03-30 2019-03-29 Bi-prédiction à modèle affiné pour codage vidéo

Country Status (3)

Country Link
US (2) US10992930B2 (fr)
EP (1) EP3777176A4 (fr)
WO (1) WO2019191717A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11800088B2 (en) 2018-03-30 2023-10-24 Hulu, LLC Template refined bi-prediction for video coding using anchor point

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20250150598A1 (en) * 2022-02-13 2025-05-08 Lg Electronics Inc. Image encoding/decoding method and device, and recording medium storing bitstream
US12526400B2 (en) * 2023-01-18 2026-01-13 Tencent America LLC Multi-template based intra-frame template matching prediction

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110002387A1 (en) * 2009-07-03 2011-01-06 Yi-Jen Chiu Techniques for motion estimation
US20110002388A1 (en) * 2009-07-02 2011-01-06 Qualcomm Incorporated Template matching for video coding
US20140327819A1 (en) * 2013-04-03 2014-11-06 Huawei Technologies Co., Ltd. Multi-level bidirectional motion estimation method and device
KR20150090454A (ko) * 2014-01-29 2015-08-06 강원대학교산학협력단 Bidirectional motion search method using multiple frames, and video device equipped with such a bidirectional motion search function
US20180077424A1 (en) * 2011-03-10 2018-03-15 Huawei Technologies Co., Ltd. Encoding/decoding method, encoding apparatus, decoding apparatus, and system for video image

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2269379B1 (fr) * 2008-04-11 2019-02-27 InterDigital Madison Patent Holdings Methods and apparatus for template matching prediction (TMP) in video encoding and decoding
US11330284B2 (en) * 2015-03-27 2022-05-10 Qualcomm Incorporated Deriving motion information for sub-blocks in video coding
MX382830B (es) * 2015-09-02 2025-03-13 Hfi Innovation Inc Método y aparato de derivación de movimiento de lado de decodificador para codificación de vídeo.
CN110140355B (zh) * 2016-12-27 2022-03-08 联发科技股份有限公司 用于视频编解码的双向模板运动向量微调的方法及装置
US10701366B2 (en) * 2017-02-21 2020-06-30 Qualcomm Incorporated Deriving motion vector information at a video decoder
JP7036628B2 (ja) * 2017-03-10 2022-03-15 Panasonic Intellectual Property Corporation of America Encoding device, decoding device, encoding method, and decoding method
US20190007699A1 (en) * 2017-06-28 2019-01-03 Futurewei Technologies, Inc. Decoder Side Motion Vector Derivation in Video Coding
WO2019001741A1 (fr) * 2017-06-30 2019-01-03 Huawei Technologies Co., Ltd. Affinement de vecteur de mouvement pour une prédiction multi-référence
WO2019072373A1 (fr) * 2017-10-09 2019-04-18 Huawei Technologies Co., Ltd. Mise à jour de modèles pour raffinement de vecteurs de mouvement
US10986360B2 * 2017-10-16 2021-04-20 Qualcomm Incorporated Various improvements to FRUC template matching
WO2019191717A1 (fr) 2018-03-30 2019-10-03 Hulu, LLC Bi-prédiction à modèle affiné pour codage vidéo

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110002388A1 (en) * 2009-07-02 2011-01-06 Qualcomm Incorporated Template matching for video coding
US20110002387A1 (en) * 2009-07-03 2011-01-06 Yi-Jen Chiu Techniques for motion estimation
US20180077424A1 (en) * 2011-03-10 2018-03-15 Huawei Technologies Co., Ltd. Encoding/decoding method, encoding apparatus, decoding apparatus, and system for video image
US20140327819A1 (en) * 2013-04-03 2014-11-06 Huawei Technologies Co., Ltd. Multi-level bidirectional motion estimation method and device
KR20150090454A (ko) * 2014-01-29 2015-08-06 강원대학교산학협력단 Bidirectional motion search method using multiple frames, and video device equipped with such a bidirectional motion search function

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Athanasios Leontaris et al., "Multiple Reference Motion Compensation: A Tutorial Introduction and Survey," Foundations and Trends in Signal Processing, vol. 2, Now Publishers, 1 January 2008, pp. 247-364

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11800088B2 (en) 2018-03-30 2023-10-24 Hulu, LLC Template refined bi-prediction for video coding using anchor point

Also Published As

Publication number Publication date
US20190306495A1 (en) 2019-10-03
US10992930B2 (en) 2021-04-27
EP3777176A4 (fr) 2022-08-17
US11800088B2 (en) 2023-10-24
US20210227216A1 (en) 2021-07-22
EP3777176A1 (fr) 2021-02-17

Similar Documents

Publication Publication Date Title
JP7368554B2 (ja) Dmvrのためのブロックサイズ制限
CN110809887B (zh) 用于多参考预测的运动矢量修正的方法和装置
EP3416386B1 (fr) Décisions de codeur basées sur un algorithme de hachage pour un codage vidéo
US8681866B1 (en) Method and apparatus for encoding video by downsampling frame resolution
EP4583510A2 (fr) Estimation de mouvement basée sur un hachage local pour des scénarios d'affichage à distance
US20150010086A1 (en) Method for encoding/decoding high-resolution image and device for performing same
US20190182505A1 (en) Methods and apparatuses of predictor-based partition in video processing system
US11202070B2 (en) Parallel bi-directional intra-coding of sub-partitions
US10869042B2 (en) Template based adaptive weighted bi-prediction for video coding
US12149729B2 (en) Selective template matching in video coding
US20190273920A1 (en) Apparatuses and Methods for Encoding and Decoding a Video Coding Block of a Video Signal
US11800088B2 (en) Template refined bi-prediction for video coding using anchor point
US12483693B2 (en) Reset of historical motion vector prediction
US20190306499A1 (en) Intra Prediction Mode Signaling For Video Coding
US11089308B1 (en) Removing blocking artifacts in video encoders

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19774721

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2019774721

Country of ref document: EP