
WO2018128232A1 - Method and apparatus for decoding an image in an image coding system - Google Patents


Info

Publication number
WO2018128232A1
WO2018128232A1 (PCT/KR2017/007360)
Authority
WO
WIPO (PCT)
Prior art keywords
motion information
block
current block
derived
prediction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/KR2017/007360
Other languages
English (en)
Korean (ko)
Inventor
서정동
남정학
박내리
임재현
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
LG Electronics Inc
Original Assignee
LG Electronics Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by LG Electronics Inc filed Critical LG Electronics Inc
Publication of WO2018128232A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/176Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51Motion estimation or motion compensation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/70Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards

Definitions

  • the present invention relates to an image coding technique, and more particularly, to an image decoding method and apparatus in an image coding system.
  • the demand for high-resolution, high-quality images such as high definition (HD) and ultra high definition (UHD) images is increasing in various fields.
  • as image data becomes higher in resolution and quality, the amount of information or bit rate to be transmitted increases relative to existing image data. Transmitting the image data over a conventional wired/wireless broadband line, or storing it on a conventional storage medium, therefore increases transmission and storage costs.
  • accordingly, a high-efficiency image compression technique is required to effectively transmit, store, and reproduce high-resolution, high-quality image information.
  • An object of the present invention is to provide a method and apparatus for improving image coding efficiency.
  • Another technical problem of the present invention is to provide an inter prediction method and apparatus for updating motion information of a current block.
  • Another technical problem of the present invention is to provide a method and apparatus for calculating modified motion information of a current block without receiving additional information and updating the motion information of the current block based on the modified motion information.
  • an image decoding method performed by a decoding apparatus.
  • the method may include obtaining information on inter prediction of a current block through a bitstream, generating a motion information candidate list based on neighboring blocks of the current block, deriving motion information of the current block based on the information on the inter prediction and the motion information candidate list, deriving modified motion information of the current block based on a prediction block derived from the derived motion information, a template of the current block, or a position indicated by the motion information, and updating the motion information of the current block based on the modified motion information.
  • a decoding apparatus for performing image decoding.
  • the decoding apparatus may include an entropy decoding unit configured to obtain information on inter prediction of a current block through a bitstream, and a prediction unit configured to generate a motion information candidate list based on neighboring blocks of the current block, derive motion information of the current block based on the information on the inter prediction and the motion information candidate list, derive modified motion information of the current block based on a prediction block derived from the derived motion information, a template of the current block, or a position indicated by the motion information, and update the motion information of the current block based on the modified motion information.
  • a video encoding method performed by an encoding apparatus may include generating motion information on the current block, deriving modified motion information of the current block based on a prediction block derived from the derived motion information, a template of the current block, or a position indicated by the motion information, updating the motion information of the current block based on the modified motion information, and encoding and outputting information on inter prediction of the current block.
  • a video encoding apparatus may include a prediction unit configured to generate motion information on the current block, derive modified motion information of the current block based on a prediction block derived from the derived motion information, a template of the current block, or a position indicated by the motion information, and update the motion information of the current block based on the modified motion information, and an entropy encoding unit configured to encode and output information on inter prediction of the current block.
  • the modified motion information of the current block can be calculated, and the motion information of the current block can be updated to more accurate motion information, thereby improving prediction efficiency.
  • modified motion information of the current block can be derived without receiving additional side information, thereby improving the overall coding efficiency.
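The decoder-side refinement described above can be sketched as a template-matching search: the already-reconstructed samples neighboring the current block (its template) are compared against the corresponding templates of candidate positions in the reference picture, and the candidate with the smallest distortion becomes the modified motion vector. The helper below is a hypothetical illustration, not the patent's exact procedure; `ref_templates` is an assumed mapping from candidate motion vectors to their reference templates.

```python
def sad(a, b):
    """Sum of absolute differences between two sample lists."""
    return sum(abs(x - y) for x, y in zip(a, b))

def refine_mv(initial_mv, cur_template, ref_templates):
    """Pick the candidate MV whose reference template best matches the
    current block's template (minimum SAD). Hypothetical helper: no
    extra side information is needed, only reconstructed samples."""
    best_mv, best_cost = initial_mv, float("inf")
    for mv, tmpl in ref_templates.items():
        cost = sad(cur_template, tmpl)
        if cost < best_cost:
            best_mv, best_cost = mv, cost
    return best_mv
```

Because both encoder and decoder can run the same search over reconstructed samples, the refined vector needs no additional signalling.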
  • FIG. 1 is a diagram schematically illustrating a configuration of a video encoding apparatus to which the present invention may be applied.
  • FIG. 2 is a diagram schematically illustrating a configuration of a video decoding apparatus to which the present invention may be applied.
  • FIG. 3 shows an example of a method of directly encoding and transmitting the motion information.
  • FIG. 4 illustrates a method of generating a list based on motion information of neighboring blocks with respect to the current block and transmitting an index indicating a candidate included in the list.
  • FIG. 5 shows an example of updating motion information of a current block.
  • FIG. 6 exemplarily shows neighboring samples that may be used to update motion information of the current block.
  • FIG. 7 illustrates an example of updating motion information of the current block based on neighboring samples of the current block.
  • FIG. 8 exemplarily illustrates a method of updating motion information of the current block based on reference blocks having a minimum difference between corresponding samples.
  • FIG. 9 exemplarily illustrates a method of updating motion information of the current block through information indicating a method of updating motion information of the current block.
  • FIG. 10 exemplarily shows a search area set based on a reference point derived based on the MV.
  • FIG. 11 illustrates search areas set based on a reference point derived based on the MV.
  • FIG. 12 illustrates a method of updating motion information of the current block based on a template of the current block when an AMVP mode is applied to the current block.
  • FIG. 13 exemplarily illustrates a method of updating motion information of the current block based on reference blocks having a minimum difference of samples.
  • FIG. 15 exemplarily shows bi-prediction performed based on reference pictures including the reference blocks.
  • FIG. 16 schematically shows a video encoding method by an encoding device according to the present invention.
  • FIG. 17 schematically illustrates a video decoding method by a decoding apparatus according to the present invention.
  • each component in the drawings described in the present invention is shown independently for convenience of description of its distinct characteristic function; this does not mean that each component is implemented as separate hardware or separate software.
  • two or more of each configuration may be combined to form one configuration, or one configuration may be divided into a plurality of configurations.
  • Embodiments in which each configuration is integrated and / or separated are also included in the scope of the present invention without departing from the spirit of the present invention.
  • a picture generally refers to a unit representing one image at a specific time instant, and
  • a slice is a unit constituting part of a picture in coding.
  • One picture may be composed of a plurality of slices, and, if necessary, the terms picture and slice may be used interchangeably.
  • a pixel or a pel may refer to a minimum unit constituting one picture (or image). Also, 'sample' may be used as a term corresponding to a pixel.
  • a sample may generally represent a pixel or a value of a pixel, and may only represent pixel / pixel values of the luma component, or only pixel / pixel values of the chroma component.
  • a unit represents the basic unit of image processing.
  • the unit may include at least one of a specific region of the picture and information related to the region.
  • the unit may be used interchangeably with terms such as block or area in some cases.
  • an M ⁇ N block may represent a set of samples or transform coefficients composed of M columns and N rows.
  • FIG. 1 is a diagram schematically illustrating a configuration of a video encoding apparatus to which the present invention may be applied.
  • the video encoding apparatus 100 may include a picture divider 105, a predictor 110, a subtractor 115, a transformer 120, a quantizer 125, a reordering unit 130, an entropy encoding unit 135, a residual processing unit 140, an adder 150, a filter unit 155, and a memory 160.
  • the residual processor 140 may include an inverse quantizer 141 and an inverse transform unit 142.
  • the picture divider 105 may divide the input picture into at least one processing unit.
  • the processing unit may be called a coding unit (CU).
  • the coding unit may be recursively split from the largest coding unit (LCU) according to a quad-tree binary-tree (QTBT) structure.
  • LCU largest coding unit
  • QTBT quad-tree binary-tree
  • one coding unit may be divided into a plurality of coding units of a deeper depth based on a quad tree structure and / or a binary tree structure.
  • the quad tree structure may be applied first and the binary tree structure may be applied later.
  • the binary tree structure may be applied first.
  • the coding procedure according to the present invention may be performed based on the final coding unit that is no longer split.
  • the largest coding unit may be used directly as the final coding unit based on coding efficiency according to the image characteristics, or, if necessary, the coding unit may be recursively split into coding units of lower depths so that a coding unit of optimal size is used as the final coding unit.
  • the coding procedure may include a procedure of prediction, transform, and reconstruction, which will be described later.
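The quad-tree-first, binary-tree-second splitting described above can be sketched as a recursive partition of the largest coding unit. This is a simplified illustration under an assumed `min_size` stopping rule, not the signalled QTBT syntax (a real encoder chooses splits by rate-distortion cost):

```python
def qtbt_leaves(x, y, w, h, min_size=8):
    """Recursively split a block: a quad-tree split while both
    dimensions are large, then binary splits for the remaining
    dimension. Returns the final coding-unit rectangles (x, y, w, h)."""
    if w <= min_size and h <= min_size:
        return [(x, y, w, h)]          # final coding unit, no more splits
    leaves = []
    if w > min_size and h > min_size:  # quad-tree split applied first
        hw, hh = w // 2, h // 2
        for dx, dy in ((0, 0), (hw, 0), (0, hh), (hw, hh)):
            leaves += qtbt_leaves(x + dx, y + dy, hw, hh, min_size)
    elif w > min_size:                 # binary split in width
        leaves += qtbt_leaves(x, y, w // 2, h, min_size)
        leaves += qtbt_leaves(x + w // 2, y, w // 2, h, min_size)
    else:                              # binary split in height
        leaves += qtbt_leaves(x, y, w, h // 2, min_size)
        leaves += qtbt_leaves(x, y + h // 2, w, h // 2, min_size)
    return leaves
```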
  • the processing unit may include a coding unit (CU), a prediction unit (PU), or a transform unit (TU).
  • the coding unit may be split from the largest coding unit (LCU) into coding units of deeper depths along the quad tree structure.
  • LCU largest coding unit
  • in this case as well, the largest coding unit may be used directly as the final coding unit based on coding efficiency according to the image characteristics, or, if necessary, the coding unit may be recursively split into coding units of lower depths so that a coding unit of optimal size is used as the final coding unit. If a smallest coding unit (SCU) is set, the coding unit may not be split into coding units smaller than the smallest coding unit.
  • the final coding unit refers to a coding unit that is the basis of partitioning or partitioning into a prediction unit or a transform unit.
  • the prediction unit is a unit partitioning from the coding unit and may be a unit of sample prediction. In this case, the prediction unit may be divided into sub blocks.
  • the transform unit may be divided along the quad tree structure from the coding unit, and may be a unit for deriving a transform coefficient and / or a unit for deriving a residual signal from the transform coefficient.
  • a coding unit may be called a coding block (CB),
  • a prediction unit may be called a prediction block (PB), and
  • a transform unit may be called a transform block (TB).
  • a prediction block or prediction unit may mean a specific area in the form of a block within a picture, and may include an array of prediction samples.
  • a transform block or a transform unit may mean a specific area in a block form within a picture, and may include an array of transform coefficients or residual samples.
  • the prediction unit 110 may perform a prediction on a block to be processed (hereinafter, referred to as a current block) and generate a predicted block including prediction samples of the current block.
  • the unit of prediction performed by the prediction unit 110 may be a coding block, a transform block, or a prediction block.
  • the prediction unit 110 may determine whether intra prediction or inter prediction is applied to the current block. As an example, the prediction unit 110 may determine whether intra prediction or inter prediction is applied on a CU basis.
  • the prediction unit 110 may derive a prediction sample for the current block based on reference samples outside the current block in the picture to which the current block belongs (hereinafter, the current picture). In this case, the prediction unit 110 may (i) derive the prediction sample based on the average or interpolation of neighboring reference samples of the current block, or (ii) derive the prediction sample based on a reference sample present in a specific (prediction) direction with respect to the prediction sample among the neighboring reference samples of the current block. Case (i) may be called a non-directional or non-angular mode, and case (ii) a directional or angular mode.
  • the prediction mode may have, for example, 33 directional prediction modes and at least two non-directional modes.
  • the non-directional mode may include a DC prediction mode and a planar mode.
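As a concrete example of a non-directional mode, DC intra prediction fills the whole block with the rounded average of the neighboring reference samples. A minimal sketch, assuming unfiltered top and left reference arrays are already available:

```python
def intra_dc_predict(top, left, size):
    """DC (non-directional) intra prediction: every prediction sample
    in the size x size block is the rounded average of the neighboring
    top and left reference samples."""
    refs = top + left
    dc = (sum(refs) + len(refs) // 2) // len(refs)  # rounded average
    return [[dc] * size for _ in range(size)]
```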
  • the prediction unit 110 may determine the prediction mode applied to the current block by using the prediction mode applied to the neighboring block.
  • the prediction unit 110 may derive the prediction sample for the current block based on the sample specified by the motion vector on the reference picture.
  • the prediction unit 110 may apply one of a skip mode, a merge mode, and a motion vector prediction (MVP) mode to derive a prediction sample for the current block.
  • the prediction unit 110 may use the motion information of the neighboring block as the motion information of the current block.
  • in the skip mode, unlike the merge mode, the difference (residual) between the prediction sample and the original sample is not transmitted.
  • in the MVP mode, the motion vector of the current block may be derived using the motion vector of the neighboring block as a motion vector predictor.
  • the neighboring block may include a spatial neighboring block existing in the current picture and a temporal neighboring block present in the reference picture.
  • a reference picture including the temporal neighboring block may be called a collocated picture (colPic).
  • the motion information may include a motion vector and a reference picture index.
  • Information such as prediction mode information and motion information may be entropy-encoded and output in the form of a bitstream.
  • the highest picture on the reference picture list may be used as the reference picture.
  • Reference pictures included in a reference picture list may be sorted based on a difference in a picture order count (POC) between a current picture and a corresponding reference picture.
  • POC picture order count
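Ordering by POC difference can be illustrated directly. The rule below (closest reference pictures first) is one plausible ordering consistent with the text, not the normative reference-list construction process:

```python
def sort_reference_list(current_poc, ref_pocs):
    """Order reference pictures by their POC distance from the current
    picture, closest first; ties keep their original (stable) order."""
    return sorted(ref_pocs, key=lambda poc: abs(current_poc - poc))
```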
  • the subtraction unit 115 generates a residual sample which is a difference between the original sample and the prediction sample.
  • in the case of the skip mode, residual samples may not be generated as described above.
  • the transform unit 120 generates a transform coefficient by transforming the residual sample in units of transform blocks.
  • the transform unit 120 may perform the transformation according to the size of the transform block and the prediction mode applied to the coding block or prediction block that spatially overlaps the transform block. For example, if intra prediction is applied to the coding block or prediction block that overlaps the transform block and the transform block is a 4 × 4 residual array, the residual sample is transformed using a discrete sine transform (DST); in other cases, the residual sample may be transformed using a discrete cosine transform (DCT).
  • DST discrete sine transform
  • DCT discrete cosine transform
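The transform-kernel selection rule in the passage above (DST for a 4 × 4 intra residual block, DCT otherwise) can be captured in a small helper; the string return values are illustrative only:

```python
def choose_transform(pred_mode, block_w, block_h):
    """Kernel choice sketched from the text: 4x4 intra-predicted
    residual blocks use DST, everything else uses DCT."""
    if pred_mode == "intra" and block_w == 4 and block_h == 4:
        return "DST"
    return "DCT"
```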
  • the quantization unit 125 may quantize the transform coefficients to generate quantized transform coefficients.
  • the reordering unit 130 rearranges the quantized transform coefficients.
  • the reordering unit 130 may reorder the quantized transform coefficients in block form into a one-dimensional vector through a coefficient scanning method. Although the reordering unit 130 has been described as a separate component, it may be part of the quantization unit 125.
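Coefficient scanning turns the 2-D quantized block into a 1-D vector. One common order is the anti-diagonal scan sketched below; the actual scan a codec selects may depend on block size and prediction mode, so this is an illustration rather than the normative order:

```python
def diagonal_scan(block):
    """Reorder a square 2-D coefficient block into a 1-D vector along
    anti-diagonals (top-left toward bottom-right), one possible
    coefficient-scanning order."""
    n = len(block)
    out = []
    for s in range(2 * n - 1):        # each anti-diagonal r + c == s
        for r in range(n):
            c = s - r
            if 0 <= c < n:
                out.append(block[r][c])
    return out
```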
  • the entropy encoding unit 135 may perform entropy encoding on the quantized transform coefficients.
  • Entropy encoding may include, for example, encoding methods such as exponential Golomb, context-adaptive variable length coding (CAVLC), context-adaptive binary arithmetic coding (CABAC), and the like.
  • the entropy encoding unit 135 may encode information necessary for video reconstruction other than the quantized transform coefficients (for example, a value of a syntax element) together or separately. Entropy encoded information may be transmitted or stored in units of network abstraction layer (NAL) units in the form of bitstreams.
  • NAL network abstraction layer
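As an example of the entropy-coding tools mentioned above, a 0th-order exponential-Golomb code writes a non-negative integer as a prefix of leading zeros followed by the binary form of `value + 1`:

```python
def exp_golomb_encode(value):
    """0th-order exponential-Golomb code for a non-negative integer:
    for v = value + 1 with b bits, emit (b - 1) zeros then v in binary."""
    v = value + 1
    bits = bin(v)[2:]                    # binary string without '0b'
    return "0" * (len(bits) - 1) + bits
```

Small values get short codewords, which suits syntax elements whose distribution is concentrated near zero.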
  • the inverse quantization unit 141 inverse quantizes the quantized values (quantized transform coefficients) in the quantization unit 125, and the inverse transform unit 142 inverse transforms the inverse quantized values in the inverse quantization unit 141 to generate a residual sample.
  • the adder 150 reconstructs the picture by combining the residual sample and the predictive sample.
  • the residual sample and the predictive sample may be added in units of blocks to generate a reconstructed block.
  • the adder 150 may be part of the predictor 110.
  • the adder 150 may be called a reconstruction unit or a reconstructed block generator.
  • the filter unit 155 may apply a deblocking filter and / or a sample adaptive offset to the reconstructed picture. Through deblocking filtering and / or sample adaptive offset, the artifacts of the block boundaries in the reconstructed picture or the distortion in the quantization process can be corrected.
  • the sample adaptive offset may be applied on a sample basis and may be applied after the process of deblocking filtering is completed.
  • the filter unit 155 may apply an adaptive loop filter (ALF) to the reconstructed picture. ALF may be applied to the reconstructed picture after the deblocking filter and / or sample adaptive offset is applied.
  • ALF adaptive loop filter
  • the memory 160 may store reconstructed pictures (decoded pictures) or information necessary for encoding / decoding.
  • the reconstructed picture may be a reconstructed picture after the filtering process is completed by the filter unit 155.
  • the stored reconstructed picture may be used as a reference picture for (inter) prediction of another picture.
  • the memory 160 may store (reference) pictures used for inter prediction.
  • pictures used for inter prediction may be designated by a reference picture set or a reference picture list.
  • FIG. 2 is a diagram schematically illustrating a configuration of a video decoding apparatus to which the present invention may be applied.
  • the video decoding apparatus 200 may include an entropy decoding unit 210, a residual processor 220, a predictor 230, an adder 240, a filter 250, and a memory 260.
  • the residual processor 220 may include a reordering unit 221, an inverse quantization unit 222, and an inverse transform unit 223.
  • the video decoding apparatus 200 may reconstruct the video in response to a process in which the video information is processed in the video encoding apparatus.
  • the video decoding apparatus 200 may perform video decoding using a processing unit applied in the video encoding apparatus.
  • the processing unit block of video decoding may be, for example, a coding unit, and in another example, a coding unit, a prediction unit, or a transform unit.
  • the coding unit may be split along the quad tree structure and / or binary tree structure from the largest coding unit.
  • the prediction unit and the transform unit may be further used in some cases, in which case the prediction block is a block derived or partitioned from the coding unit and may be a unit of sample prediction. At this point, the prediction unit may be divided into subblocks.
  • the transform unit may be divided along the quad tree structure from the coding unit, and may be a unit for deriving a transform coefficient or a unit for deriving a residual signal from the transform coefficient.
  • the entropy decoding unit 210 may parse the bitstream and output information necessary for video reconstruction or picture reconstruction. For example, the entropy decoding unit 210 may decode information in the bitstream based on a coding method such as exponential Golomb coding, CAVLC, or CABAC, and output quantized values of syntax elements necessary for video reconstruction and transform coefficients for residuals.
  • the CABAC entropy decoding method receives a bin corresponding to each syntax element in the bitstream, determines a context model using the syntax element information to be decoded, decoding information of neighboring and to-be-decoded blocks, or information on symbols/bins decoded in a previous step, predicts the occurrence probability of a bin according to the determined context model, and performs arithmetic decoding of the bin to generate a symbol corresponding to the value of each syntax element.
  • after determining the context model, the CABAC entropy decoding method may update the context model using the information of the decoded symbol/bin for the context model of the next symbol/bin.
  • the information related to prediction among the information decoded by the entropy decoding unit 210 may be provided to the prediction unit 230, and the residual values on which entropy decoding has been performed by the entropy decoding unit 210, that is, the quantized transform coefficients, may be input to the reordering unit 221.
  • the reordering unit 221 may rearrange the quantized transform coefficients in a two-dimensional block form.
  • the reordering unit 221 may perform reordering in response to coefficient scanning performed by the encoding apparatus.
  • although the reordering unit 221 has been described as a separate component, it may be part of the inverse quantization unit 222.
  • the inverse quantization unit 222 may dequantize the quantized transform coefficients based on the (inverse) quantization parameter and output the transform coefficients.
  • information for deriving a quantization parameter may be signaled from the encoding apparatus.
  • the inverse transform unit 223 may inversely transform transform coefficients to derive residual samples.
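Dequantization can be sketched as scaling each quantized level by a step size derived from the quantization parameter. The QP-to-step mapping below, with the step doubling every 6 QP values, is the approximate relation used by AVC/HEVC-style codecs and is included as an illustration, not this document's normative process:

```python
def quant_step(qp):
    """Approximate QP-to-step mapping in AVC/HEVC-style codecs:
    the quantization step size doubles every 6 QP values."""
    return 2 ** ((qp - 4) / 6)

def dequantize(levels, qstep):
    """Inverse quantization: scale each quantized level back by the
    quantization step to recover approximate transform coefficients."""
    return [lvl * qstep for lvl in levels]
```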
  • the prediction unit 230 may perform prediction on the current block and generate a predicted block including prediction samples for the current block.
  • the unit of prediction performed by the prediction unit 230 may be a coding block, a transform block, or a prediction block.
  • the prediction unit 230 may determine whether to apply intra prediction or inter prediction based on the information about the prediction.
  • a unit for determining which of intra prediction and inter prediction is to be applied and a unit for generating a prediction sample may be different.
  • the unit for generating a prediction sample in inter prediction and intra prediction may also be different.
  • whether to apply inter prediction or intra prediction may be determined in units of CUs.
  • in inter prediction, a prediction mode may be determined and a prediction sample generated in units of PUs, while
  • in intra prediction, a prediction mode may be determined in units of PUs and a prediction sample generated in units of TUs.
  • the prediction unit 230 may derive the prediction sample for the current block based on the neighbor reference samples in the current picture.
  • the prediction unit 230 may derive the prediction sample for the current block by applying the directional mode or the non-directional mode based on the neighbor reference samples of the current block.
  • the prediction mode to be applied to the current block may be determined using the intra prediction mode of the neighboring block.
  • the prediction unit 230 may derive the prediction sample for the current block based on the sample specified on the reference picture by the motion vector on the reference picture.
  • the prediction unit 230 may apply any one of a skip mode, a merge mode, and an MVP mode to derive a prediction sample for the current block.
  • motion information required for inter prediction of the current block provided by the video encoding apparatus, for example, information on a motion vector, a reference picture index, and the like, may be obtained or derived based on the information about the prediction.
  • the motion information of the neighboring block may be used as the motion information of the current block.
  • the neighboring block may include a spatial neighboring block and a temporal neighboring block.
  • the prediction unit 230 may construct a merge candidate list using motion information of available neighboring blocks, and may use the motion information indicated by the merge index on the merge candidate list as the motion information of the current block.
  • the merge index may be signaled from the encoding device.
  • the motion information may include a motion vector and a reference picture index. When the motion information of the temporal neighboring block is used in the skip mode and the merge mode, the highest picture on the reference picture list may be used as the reference picture.
  • in the case of the skip mode, unlike the merge mode, the difference (residual) between the prediction sample and the original sample is not transmitted.
  • in the case of the MVP mode, the motion vector of the current block may be derived using the motion vector of the neighboring block as a motion vector predictor.
  • the neighboring block may include a spatial neighboring block and a temporal neighboring block.
  • a merge candidate list may be generated by using a motion vector of a reconstructed spatial neighboring block and / or a motion vector corresponding to a Col block, which is a temporal neighboring block.
  • the motion vector of the candidate block selected from the merge candidate list is used as the motion vector of the current block.
  • the information about the prediction may include a merge index indicating a candidate block having an optimal motion vector selected from candidate blocks included in the merge candidate list.
  • the prediction unit 230 may derive the motion vector of the current block by using the merge index.
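The merge-list construction and index-based selection described above can be sketched as follows. The pruning rule and the maximum of five candidates are illustrative assumptions, not the normative derivation:

```python
def build_merge_list(spatial_mvs, temporal_mv, max_cands=5):
    """Merge candidate list sketch: available spatial neighbors first,
    then the temporal (Col-block) candidate; unavailable (None) entries
    are skipped and duplicates are pruned."""
    cands = []
    for mv in spatial_mvs + [temporal_mv]:
        if mv is not None and mv not in cands:
            cands.append(mv)
        if len(cands) == max_cands:
            break
    return cands

def select_merge_mv(cands, merge_index):
    """The signalled merge index picks the candidate whose motion
    information becomes the current block's motion information."""
    return cands[merge_index]
```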
  • a motion vector predictor candidate list may be generated using a motion vector of a reconstructed spatial neighboring block and / or a motion vector corresponding to a Col block which is a temporal neighboring block.
  • the prediction information may include a prediction motion vector index indicating an optimal motion vector selected from the motion vector candidates included in the list.
  • the prediction unit 230 may select the predicted motion vector of the current block from the motion vector candidates included in the motion vector candidate list using the motion vector index.
  • the prediction unit of the encoding apparatus may obtain a motion vector difference (MVD) between the motion vector of the current block and the motion vector predictor, encode it, and output it in bitstream form. That is, the MVD may be obtained by subtracting the motion vector predictor from the motion vector of the current block.
  • the prediction unit 230 may obtain a motion vector difference included in the information about the prediction, and derive the motion vector of the current block by adding the motion vector difference and the motion vector predictor.
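The MVD relationship in the two bullets above is a simple vector subtraction at the encoder and the matching addition at the decoder:

```python
def encode_mvd(mv, mvp):
    """Encoder side: the motion vector difference is the motion vector
    minus the predictor, and the difference is what gets transmitted."""
    return (mv[0] - mvp[0], mv[1] - mvp[1])

def decode_mv(mvd, mvp):
    """Decoder side: the motion vector is recovered by adding the
    received difference back to the locally derived predictor."""
    return (mvd[0] + mvp[0], mvd[1] + mvp[1])
```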
  • the prediction unit may also obtain or derive a reference picture index or the like indicating a reference picture from the information about the prediction.
  • the adder 240 may reconstruct the current block or the current picture by adding the residual sample and the predictive sample.
  • the adder 240 may reconstruct the current picture by adding the residual sample and the predictive sample in block units. Since the residual is not transmitted when the skip mode is applied, the prediction sample may be a reconstruction sample.
  • although the adder 240 has been described as a separate component, the adder 240 may be a part of the predictor 230. Meanwhile, the adder 240 may be called a reconstruction unit or a reconstruction block generation unit.
  • the filter unit 250 may apply deblocking filtering, sample adaptive offset, and/or ALF to the reconstructed picture.
  • the sample adaptive offset may be applied in units of samples and may be applied after deblocking filtering.
  • ALF may be applied after deblocking filtering and / or sample adaptive offset.
  • the memory 260 may store reconstructed pictures (decoded pictures) or information necessary for decoding.
  • the reconstructed picture may be a reconstructed picture after the filtering process is completed by the filter unit 250.
  • the memory 260 may store pictures used for inter prediction.
  • pictures used for inter prediction may be designated by a reference picture set or a reference picture list.
  • the reconstructed picture can be used as a reference picture for another picture.
  • the memory 260 may output the reconstructed picture in an output order.
  • when inter prediction is performed on the current block, the inter prediction may be performed through motion compensation using motion information.
  • the motion information for the current block may be generated by applying a skip mode, a merge mode, or an adaptive motion vector prediction (AMVP) mode, and may be encoded and output.
  • the motion information may include L0 motion information for the L0 direction and / or L1 motion information for the L1 direction.
  • the L0 motion information may include an L0 reference picture index indicating an L0 reference picture included in the reference picture list L0 (List 0, L0) for the current block, and MVL0, a motion vector for the L0 direction.
  • the L1 motion information may include an L1 reference picture index indicating an L1 reference picture included in the reference picture list L1 (List 1, L1) for the current block, and MVL1, a motion vector for the L1 direction.
  • the L0 direction may be referred to as a past direction or a forward direction.
  • the L1 direction may also be called a future direction or a reverse direction.
  • the reference picture list L0 may include pictures that are earlier in the output order than the current picture
  • the reference picture list L1 may include pictures that are later in the output order than the current picture.
  • the MVL0 may be called an L0 motion vector
  • the MVL1 may be called an L1 motion vector.
  • when inter prediction is performed based on L0 motion information, it may be called L0 prediction, and when inter prediction is performed based on L1 motion information, it may be called L1 prediction.
  • when inter prediction is performed based on the L0 motion information and the L1 motion information, it may be called bi-prediction.
  • the method for transmitting the motion information may include a method of directly encoding and transmitting the motion information (for example, an AMVP mode), and a method of generating a list based on motion information of neighboring blocks of the current block and transmitting an index indicating a candidate selected from the list (for example, a merge mode).
  • pair prediction may be performed on the current block.
  • elements included in motion information for the current block may be encoded and transmitted.
  • an L0 reference picture index and an L1 reference picture index of the current block may be transmitted, and MVL0 and MVL1 of the current block may be transmitted, respectively.
  • the motion information of the neighboring blocks of the current block may be listed as candidates, that is, motion vector predictors (MVPs) may be derived based on the motion information of the neighboring blocks.
  • an MVP index indicating the MVP most similar to the motion vector of the current block may be transmitted, and a difference value (motion vector difference, MVD) between that MVP and the motion vector of the current block may be transmitted.
  • a merge candidate list for deriving motion information of the current block may be derived based on neighboring blocks of the current block.
  • the merge candidate list may be configured to include motion information of spatially adjacent neighboring blocks of the current block as merge candidates.
  • the spatially adjacent peripheral blocks may be referred to as spatial peripheral blocks.
  • the spatial peripheral block may include a lower left peripheral block A0, a left peripheral block A1, a right upper peripheral block B0, an upper peripheral block B1, and / or an upper left peripheral block B2 of the current block.
  • the merge candidate list may be configured to include motion information of a temporal neighboring block, for example, the temporal neighboring block T0 or the temporal neighboring block T1 shown in FIG. 4, as a merge candidate.
  • the reference picture including the temporal neighboring block may be referred to as a collocated picture (colPic), and the temporal neighboring block may include the right bottom neighboring block T0 of the co-located block in the collocated picture, or the center right bottom block T1 of the co-located block.
  • the merge candidate list may be configured to include a combined bi-predicted candidate or a zero vector derived by combining the motion information of the neighboring blocks as a merge candidate.
  • the merge candidate list may be configured to include a combined pair prediction candidate derived by combining the motion information of the spatial neighboring block of the current block, the motion information of the temporal neighboring block, and / or the motion information of the neighboring blocks as a merge candidate.
  • a merge candidate most similar to the motion information of the current block may be selected, and a merge index indicating the merge candidate may be transmitted.
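As a rough illustrative sketch (not the normative candidate derivation of any particular standard), building a merge candidate list from the spatial candidates (A0, A1, B0, B1, B2) and a temporal candidate could look as follows. The pruning of duplicates and the zero-vector padding rule shown here are assumptions.

```python
# Hypothetical merge candidate list construction.
# Each candidate is a motion-info tuple (mv_x, mv_y, ref_idx), or None if
# the corresponding neighboring block is unavailable or not inter-coded.

def build_merge_list(spatial, temporal, max_candidates=5):
    merge_list = []
    for cand in spatial + temporal:
        if cand is not None and cand not in merge_list:  # prune duplicates
            merge_list.append(cand)
        if len(merge_list) == max_candidates:
            return merge_list
    # Pad with zero-vector candidates so the list always has max_candidates
    # entries (the alternating reference index here is an assumption).
    while len(merge_list) < max_candidates:
        merge_list.append((0, 0, len(merge_list) % 2))
    return merge_list
```

A merge index then simply selects one entry of this list as the motion information of the current block.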
  • in the merge mode, in which a list is generated based on the motion information of the neighboring blocks of the current block and an index is transmitted, the motion information of a neighboring block is borrowed, so the derived motion information may not be the information that best represents the motion of the current block.
  • motion information of neighboring blocks of the current block may be used as motion information of the current block, and thus may be different from actual motion information of the current block.
  • motion information derived through the merge index may be updated, and the updated motion information may be used for prediction of the current block.
  • information used to update the motion information selected based on the merge index in merge mode may be limited. In other words, additional selection information may not be transmitted for coding efficiency.
  • the same process may be performed in both the encoding apparatus and the decoding apparatus without transmitting additional information, and the motion information may be updated through the above process so that more accurate motion information may be applied to the prediction of the current block without increasing an additional bit amount.
  • the coding apparatus may determine whether prediction applied to the current block is pair prediction (S500). In performing prediction on the current block, when inter prediction is performed based on L0 motion information and L1 motion information, it may be called bi-prediction.
  • the coding apparatus may generate a predicted block based on the L0 motion information and the L1 motion information (S510).
  • the coding apparatus may derive the L0 reference block based on the L0 motion information, and may derive the L1 reference block based on the L1 motion information.
  • the coding apparatus may generate the prediction block based on the L0 reference block and the L1 reference block. For example, as illustrated in (b) of FIG. 5, the coding apparatus may generate a prediction sample of the prediction block by adding, with equal weight, the L0 reference sample of the L0 reference block and the L1 reference sample of the L1 reference block corresponding to the L0 reference sample. Specifically, the prediction sample may be generated by halving the sum of the sample value of the L0 reference sample and the sample value of the L1 reference sample.
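The equal-weight averaging described above can be sketched as follows. This is an illustrative example; practical codecs typically add a rounding offset before halving, which is omitted here to match the text.

```python
# Generate bi-prediction samples by averaging the L0 and L1 reference
# samples with equal weight (integer halving of the sum).

def bi_predict(l0_block, l1_block):
    return [[(a + b) >> 1 for a, b in zip(r0, r1)]
            for r0, r1 in zip(l0_block, l1_block)]

# One-row example blocks (sample values are illustrative).
pred = bi_predict([[4, 7]], [[2, 5]])  # each output sample is (L0 + L1) / 2
```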
  • the coding apparatus may derive new motion information for each direction by using the generated prediction block (S520).
  • the prediction block is an average value of reference blocks in each direction, and in a general prediction method, a reconstructed signal may be generated by adding a residual signal to the prediction block.
  • the coding apparatus may update the motion information by regarding the prediction block as the original block for motion estimation.
  • the above-described process may be performed in the same manner in the encoding apparatus and the decoding apparatus.
  • Deriving new motion information may be represented as finding a block most similar to the prediction block in a reference picture. That is, a reference block most similar to the prediction block in the reference picture may be derived, and motion information indicating the reference block may be derived as the new motion information.
  • the new motion information may be referred to as modified motion information.
  • the reference block most similar to the prediction block may be searched around the reference block derived based on the motion information in each direction selected through the merge index.
  • a cost function representing the degree of similarity to the prediction block may be applied in the same manner as in conventional motion estimation. For example, Sum of Absolute Differences (SAD), which simply adds the absolute differences between corresponding samples of the prediction block and the searched reference block, may be applied as the cost function; Sum of Squared Differences (SSD), which adds the squared differences between the corresponding samples, may be applied; or Sum of Absolute Transformed Differences (SATD), which applies a transform to the differences between the corresponding samples, may be applied.
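The three cost functions named above can be sketched as follows for one-dimensional sample arrays. For brevity, SATD is shown with a 1-D 4-point Hadamard transform; real encoders typically apply a 2-D transform (e.g., 4x4 or 8x8 Hadamard), so this is an illustrative simplification.

```python
# Block-matching cost functions: SAD, SSD, and a simplified SATD.

def sad(a, b):
    # Sum of Absolute Differences between corresponding samples.
    return sum(abs(x - y) for x, y in zip(a, b))

def ssd(a, b):
    # Sum of Squared Differences between corresponding samples.
    return sum((x - y) ** 2 for x, y in zip(a, b))

def hadamard4(v):
    # 4-point Hadamard transform (butterfly form).
    s0, s1 = v[0] + v[1], v[0] - v[1]
    s2, s3 = v[2] + v[3], v[2] - v[3]
    return [s0 + s2, s1 + s3, s0 - s2, s1 - s3]

def satd4(a, b):
    # Transform the sample differences, then sum absolute coefficients.
    diff = [x - y for x, y in zip(a, b)]
    return sum(abs(t) for t in hadamard4(diff))
```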
  • the updated motion information may be used not only for the current block but also stored in a memory and used for coding a neighboring block.
  • the coding device may update the motion information of the current block based on the new motion information for each direction (S530).
  • the new motion information in each direction may be derived as updated motion information of the current block.
  • the prediction block may be used to update motion information of the current block, but may be updated based on neighboring samples of the current block.
  • as in the method of updating the motion information based on the prediction block, the additional information used in the process may be limited in the method of updating the motion information based on the neighboring samples.
  • FIG. 6 exemplarily shows neighboring samples that may be used to update motion information of the current block.
  • upper peripheral samples, upper left peripheral samples, and left peripheral samples of the current block may be used for updating motion information of the current block.
  • upper and left neighboring samples of the current block may be used for updating motion information of the current block.
  • upper neighboring samples of the current block may be used for updating motion information of the current block.
  • the left neighboring samples of the current block may be used for updating motion information of the current block.
  • a specific area including neighboring samples that can be used to update motion information of the current block may be called a template of the current block.
  • various types of templates, such as those shown in FIGS. 6A to 6D, may be used, but the template should include samples that are already decoded at the time of decoding of the current block. In addition, if the template includes too many samples, it may not be representative of the current block in searching for new motion information, or it may be expensive in hardware implementation.
  • motion information of the current block may be updated based on a template of the current block and a template of a reference block in a reference picture. That is, a reference block for a template having a minimum difference from the template of the current block may be derived, and motion information indicating the reference block may be determined as more accurate motion information and thus derived as updated motion information.
  • the specific method in the coding apparatus may be the same as the method illustrated in FIG. 7B.
  • the coding apparatus may derive new motion information for each direction based on the neighboring samples of the current block and the reference block (S700).
  • An L0 reference block having a template most similar to a template of the current block among L0 reference blocks in the L0 reference picture of the current block may be derived, and motion information representing the L0 reference block may be derived as new L0 motion information.
  • similarly, an L1 reference block having a template most similar to the template of the current block among L1 reference blocks in the L1 reference picture of the current block may be derived, and motion information representing the L1 reference block may be derived as new L1 motion information.
  • as the cost function applied to derive a template similar to the template of the current block, Sum of Absolute Differences (SAD), which simply adds the absolute differences between corresponding samples of the template of the current block and the searched template, may be applied; Sum of Squared Differences (SSD), which adds the squared differences between the corresponding samples, may be applied; or Sum of Absolute Transformed Differences (SATD), which applies a transform to the differences between the corresponding samples, may be applied.
  • the coding device may update the motion information of the current block (S710).
  • the new L0 motion information may be derived as updated L0 motion information of the current block
  • the new L1 motion information may be derived as updated L1 motion information of the current block.
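The template-matching refinement described above can be outlined as follows. This is an illustrative sketch: candidate generation and sub-pixel interpolation are omitted, and the `candidates` mapping (motion vector to the template of the reference block it points at) is a hypothetical input.

```python
# Among candidate motion vectors, pick the one whose reference-block
# template is closest (by SAD) to the template of the current block.

def refine_by_template(cur_template, candidates):
    """candidates: dict {mv: reference_template}; returns the best mv."""
    def sad(a, b):
        return sum(abs(x - y) for x, y in zip(a, b))
    return min(candidates, key=lambda mv: sad(cur_template, candidates[mv]))
```

The same function would be called once with L0 candidates and once with L1 candidates, since the update is performed independently in each direction.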
  • the update of the motion information may be performed even if pair prediction is not applied to the current block, and if pair prediction is applied to the current block, the motion information update may be performed independently in each direction.
  • FIG. 8 exemplarily illustrates a method of updating motion information of the current block based on reference blocks having a minimum difference between corresponding samples.
  • when the motion information of the current block is bi-predictive motion information, a reference block in each direction indicated by the motion vector in each direction derived based on the merge index may be derived, the neighborhood of the reference block in each direction may be searched, and an L0 reference block and an L1 reference block with a minimum difference between corresponding samples may be derived.
  • motion information representing the derived L0 reference block and the derived L1 reference block may be used as updated motion information of the current block.
  • FIG. 8B exemplarily illustrates a flowchart of a method of updating motion information of the current block based on reference blocks having a minimum difference between corresponding samples.
  • the coding apparatus may determine whether pair prediction is applied to the current block (S800). When pair prediction is applied to the current block, the coding apparatus may derive the reference block in each direction based on the motion information in each direction (S810).
  • the motion information derived based on the merge index of the current block may include L0 motion information and L1 motion information.
  • An L0 reference block may be derived based on the L0 motion information
  • an L1 reference block may be derived based on the L1 motion information.
  • the L0 reference block may be called a first reference block
  • the L1 reference block may be called a second reference block.
  • the coding apparatus may derive an updated L0 reference block and an updated L1 reference block, among the L0 reference blocks around the L0 reference block and the L1 reference blocks around the L1 reference block, for which the difference between corresponding samples is minimal (S820), and may derive updated motion information representing the updated L0 reference block and the updated L1 reference block.
  • for example, the coding apparatus may derive the L0 reference block as the updated L0 reference block, and may derive, as the updated L1 reference block, the L1 reference block having a minimum difference from the L0 reference block among the neighboring L1 reference blocks.
  • the difference may be called a cost.
  • the cost may be derived as the sum of the absolute values of the differences between the L0 reference block and the corresponding samples of the L1 reference block.
  • the coding apparatus may derive the updated L0 reference block and the motion information indicating the updated L1 reference block as the updated motion information.
  • alternatively, the coding apparatus may derive the L1 reference block as the updated L1 reference block, and may derive, as the updated L0 reference block, the L0 reference block having a minimum cost among the neighboring L0 reference blocks. In this case, the coding apparatus may derive the motion information indicating the updated L0 reference block and the updated L1 reference block as the updated motion information.
  • alternatively, the coding apparatus may derive an arbitrary first temporary L0 reference block among the L0 reference blocks, and may derive a first temporary L1 reference block at a position symmetric to the first temporary L0 reference block with respect to the current block of the current picture, on the L1 reference picture that includes the L1 reference block.
  • the coding apparatus may derive a first difference between the first temporary L0 reference block and the first temporary L1 reference block.
  • the coding apparatus may change the position of the temporary L0 reference block within a certain region and/or up to a certain number of times with respect to the L0 reference block, derive a second temporary L1 reference block symmetric to the second temporary L0 reference block at the changed position, and derive a second difference between the second temporary L0 reference block and the second temporary L1 reference block.
  • the coding apparatus may repeat the above-described procedure within the predetermined region and/or up to the predetermined number of times, and may derive, as the updated L0 reference block and the updated L1 reference block, the nth temporary L0 reference block and the nth temporary L1 reference block having the minimum difference among the first to nth differences.
  • the coding apparatus may derive the motion information indicating the updated L0 reference block and the updated L1 reference block as the updated motion information.
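The symmetric search described above can be sketched as follows. This is an illustrative example only: the L1 displacement is mirrored against the L0 displacement, `get_block(list_idx, mv)` is a hypothetical sample-fetch helper, and sub-pixel interpolation is omitted.

```python
# For each L0 displacement within the search range, take the L1 block at
# the mirrored displacement and keep the pair with minimum difference.

def bilateral_search(mv_l0, mv_l1, get_block, search_range=1):
    def sad(a, b):
        return sum(abs(x - y) for x, y in zip(a, b))
    best = (None, None, float("inf"))
    for dx in range(-search_range, search_range + 1):
        for dy in range(-search_range, search_range + 1):
            cand_l0 = (mv_l0[0] + dx, mv_l0[1] + dy)
            cand_l1 = (mv_l1[0] - dx, mv_l1[1] - dy)  # symmetric position
            cost = sad(get_block(0, cand_l0), get_block(1, cand_l1))
            if cost < best[2]:
                best = (cand_l0, cand_l1, cost)
    return best[0], best[1]  # updated L0 and L1 motion vectors
```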
  • the search region may be set to an appropriate level, and the same search region may be set for the encoding apparatus and the decoding apparatus.
  • the search area may not be set to a fixed value in advance, and information about the search area may be adaptively signaled through syntax.
  • the information about the search region may be transmitted through a parameter set such as a sequence parameter set (SPS), a picture parameter set (PPS), a video parameter set (VPS), and a slice header. That is, the information about the search region may be transmitted in an SPS unit, PPS unit, VPS unit, or slice unit.
  • the coding apparatus may update the motion information of the current block based on the updated L0 reference block and the motion information indicating the updated L1 reference block (S830).
  • the coding device may derive the updated L0 reference block and the motion information indicating the updated L1 reference block as the updated motion information.
  • the above-described methods of updating the motion information of the current block may be applied in combination.
  • the above-described methods are methods for improving the accuracy of motion prediction by performing the same process in the encoding apparatus and the decoding apparatus without signaling of the additional information for updating the motion information when the merge mode is applied to the current block.
  • the above-described methods, that is, the method of updating motion information based on the prediction block, the method of updating motion information based on the neighboring samples of the current block (i.e., the template), and the method of updating motion information based on the similarity between reference blocks, may be applied in combination or selectively applied. Specifically, an optimal method for the current block may be selected from among two or three of the above-described methods, information indicating the selected method may be signaled, and the decoding apparatus may perform the process of updating the motion information based on the selected method.
  • when one of two methods is selected, a 1-bit flag indicating the selected method may be signaled, and when the method applied to the current block is selected from among three methods, an index having a variable number of bits may be signaled. For example, binarization codes of 0, 10, and 11 may be allocated according to the value of the index.
  • the motion information updating method represented by each binarization code may be mapped to reflect a selection ratio, and the coding efficiency may be improved by allowing the motion information updating method having a high selection ratio to be mapped to a binarization code having a small number of bits.
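The variable-length binarization described above (codes 0, 10, 11 for index values 0, 1, 2, with the most frequently selected update method mapped to the shortest code) can be sketched as follows; the exact code assignment here is illustrative.

```python
# Variable-length binarization of the update-method index.
CODES = {0: "0", 1: "10", 2: "11"}

def encode_update_idx(idx):
    return CODES[idx]

def decode_update_idx(bits):
    """Consume one codeword from the front of a bit string; return (idx, rest)."""
    if bits[0] == "0":
        return 0, bits[1:]
    return (1, bits[2:]) if bits[1] == "0" else (2, bits[2:])
```

Mapping the update method with the highest selection ratio to index 0 means it costs a single bit, which is where the coding-efficiency gain comes from.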
  • the flag or the index may be signaled in a CU unit or a PU unit. When the flag or the index is signaled in a CU unit, the syntax including the flag or the index may be as shown in the following table.
  • MV_update_idx may indicate a syntax element of a flag or index indicating a method of updating motion information of the current block.
  • the MV_update_idx is a method of updating motion information based on the above-described predicted block, a method of updating motion information based on a template of a current block, and similarity between reference blocks (that is, a difference between corresponding samples is minimal). Refer to one of the methods of updating the motion information based on the reference blocks), and the motion information of the current block may be updated through the method indicated by the MV_update_idx.
  • the flag or the index may be signaled in the PU unit.
  • the syntax including the flag or the index may be as follows.
  • MV_update_idx may indicate a syntax element of a flag or index indicating a method of updating motion information of the current block.
  • the motion information update method may be applied more accurately in each CU or PU unit, but the frequency of information transmission is high. Therefore, loss may occur in terms of bit rate.
  • the information indicating the motion information update method of the current block may be transmitted not only in a CU or PU unit but also in higher level syntax such as a CTU, a sequence parameter set (SPS), a picture parameter set (PPS), or a slice unit.
  • the flag or index indicating the motion information update method of the current block may be called an update mode index.
  • the decoding apparatus may parse information about a method of updating motion information of the current block (S900).
  • the decoding apparatus may parse an index or flag indicating one of the method of updating the motion information based on the above-described prediction block, the method of updating the motion information based on the template of the current block, and the method of updating the motion information based on the similarity between reference blocks (that is, based on the reference blocks for which the difference between corresponding samples is minimal).
  • the decoding apparatus may perform a re-search process for the motion information of the current block by the method selected based on the information (S910).
  • that is, the decoding apparatus may re-search the motion information through the method indicated by the information.
  • the decoding apparatus may update the motion information of the current block based on the motion information derived through the re-search process (S920).
  • that is, the decoding apparatus may derive new motion information through the selected method, derive the new motion information as updated motion information of the current block, and store it in a memory.
  • the updated motion information may be used in a decoding process of another block performed after the decoding process of the current block.
  • the search range and method should be set in advance. However, there are cases where the search method that is set does not show optimal performance. In this embodiment, a method of adaptively selecting a search method is proposed.
  • the process of searching for new motion information in the above-described motion information updating methods should be applied to the encoding apparatus and the decoding apparatus in the same manner, and thus the search area and method for deriving the new motion information may be limited.
  • the search region may be set based on a reference point indicated by a motion vector (MV) derived based on the merge index of the current block.
  • FIG. 10 exemplarily shows a search area set based on a reference point derived based on the MV.
  • a plurality of points (e.g., nine points) may be included in the search area. The points in the search area may correspond to the positions of the top-left samples of the reference blocks included in the search area. That is, the range within one sub-pixel from the reference point may be set as the range of the search area.
  • the subpixel may be called a fractional sample.
  • the above-described motion information updating methods may be performed on the points of the search area, and the motion information of the current block may be updated.
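The 3x3 grid of search points described above (the reference point plus the eight points one sub-pixel step away in each direction) can be generated as follows. Positions are expressed in sub-pixel units (e.g., quarter-sample units); the helper name is illustrative.

```python
# Generate the nine search points around a reference point: the point
# itself plus its eight neighbors at one sub-pixel step in each direction.

def search_points(ref_x, ref_y, step=1):
    return [(ref_x + dx * step, ref_y + dy * step)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)]

pts = search_points(0, 0)  # nine points; (0, 0) is the reference point itself
```

Widening the range (e.g., `step=2` or a larger grid) improves the chance of finding better motion information at the cost of more computation, as discussed below.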
  • the search area used in the motion information update process may be adjusted as follows.
  • the range of the search area may be set within 1 sub-pixel from the reference point, but may also be set within 2, 3, 4, or more sub-pixels from the reference point.
  • the range of the search area is wider, the accuracy of updated motion information may be improved, but computational complexity may also increase.
  • information on the range of the search region is signaled so that the range of the search region can be adaptively adjusted according to the prediction of each block.
  • the information about the range of the search area may be signaled not only in PU and CU units but also in units of an LCU, a slice, a picture parameter set (PPS), or a sequence parameter set (SPS).
  • as the number of possible ranges increases, the number of bits of the binarization code allocated to the information about the range of the search area may increase; therefore, the number of cases for the range of the search area may be appropriately limited.
  • for example, a range of 3 sub-pixels or more from the reference point may be restricted in order to control the computational complexity in the decoding apparatus.
  • alternatively, a method of deriving the range of the search area based on the size of the corresponding block, the resolution of the input image, and/or the magnitude of the absolute value of the motion vector of the corresponding block, without signaling information about the range of the search area, may be applied.
  • the search area does not have to be derived as the nine points including the reference point described above; the search does not have to be performed in all eight directions around the reference point and may be set in various ways.
  • for example, the search area may be set such that the search is performed only in a vertical direction, a horizontal direction, or a diagonal direction.
  • the search area may also be set such that the search is performed in the direction of the MV, and, as illustrated in (e) of FIG. 11, the search area may be set such that the search is performed in the direction opposite to the MV.
  • the information on the search direction of the search area may be signaled not only in a PU or CU unit but also in units of an LCU, a slice, a picture parameter set (PPS), or a sequence parameter set (SPS).
  • the search direction of the search region may be derived based on the size of the corresponding block, the resolution of the input image, and / or the magnitude of the absolute value of the motion vector of the corresponding block without signaling the information on the search direction.
  • the unit of the sub-pixel indicating the range of the search area may be 1/4 pixel or 1/8 pixel or 1/16 pixel depending on the characteristics of the encoding apparatus.
  • a unit of a subpixel representing the range of the search region may be 1/2, 1/4, 1/8, or 1/16 pixel.
  • the resolution of the search area performed in the motion information update process of the current block may be variously applied to 1/2, 1/4, 1/8, or 1/16 pixel.
  • the resolution of the search area may be derived based on signaled information about the resolution, or may be derived based on the size of the corresponding block, the resolution of the input image, and/or the magnitude of the absolute value of the motion vector of the corresponding block.
  • the above-described methods for setting the search area may be applied independently, or two or three of the above-described methods may be applied in combination.
  • signaling of information for each method and condition-based derivation may also be applied in combination.
  • for example, the information on the range of the search area may be signaled, while the direction of the search area and the resolution of the search area are derived based on specific conditions.
  • alternatively, the search area may be set through various other combinations, for example, signaling information about the range of the search area and information about the resolution of the search area while the direction of the search area is derived based on a specific condition.
  • when the motion information is directly encoded and transmitted, the reliability of the derived motion information may be higher than in the merge mode, in which a list is constructed and an index is transmitted; however, motion information with low reliability may still be selected through the rate-distortion optimization process, in which case the reliability of the motion information may be lower than otherwise.
  • motion vector predictor (MVP) candidates may be derived based on the motion information of neighboring blocks of the current block. An MVP index indicating the MVP most similar to the motion vector of the current block may be transmitted, and a motion vector difference (MVD) between that MVP and the motion vector of the current block may be transmitted.
  • the MV of the current block may be derived based on the following equation:
  • MV = MV_pred + MV_difference
  • here, MV represents the MV of the current block, MV_pred represents the MVP of the current block, and MV_difference represents the MVD of the current block.
  • the updated MV of the current block may be derived as MV_updated = MV_pred + MV_difference + MV_refinement, where MV_updated indicates the updated MV of the current block and MV_refinement represents the additional motion information.
  • a method of updating the motion information of the current block may be as follows. Specifically, the methods may include a method of updating based on neighboring samples of the current block, that is, a template, a method using the similarity between reference blocks, and a method of updating based on a predicted block of the current block.
  • the motion information of the current block may be updated based on a template of the current block.
  • the method of updating the motion information based on the template may be similar to the method of updating the motion information based on the template when a merge mode is applied to the current block described above.
  • the coding apparatus may search for new motion information in each direction based on the available peripheral samples of the current block and the available peripheral samples of the reference block (S1200).
  • a specific area including neighboring samples that can be used to update motion information of the current block may be called a template of the current block.
  • An L0 reference block having a template most similar to a template of the current block among L0 reference blocks in the L0 reference picture of the current block may be derived, and motion information representing the L0 reference block may be derived as new L0 motion information.
  • an L1 reference block having a template most similar to a template of the current block among L1 reference blocks in the L1 reference picture of the current block may be derived, and motion information representing the L1 reference block is derived as new L1 motion information. Can be.
  • as the cost function applied to derive a template similar to the template of the current block, a Sum of Absolute Differences (SAD), which simply adds the differences between the corresponding samples of the template of the current block and the searched template, may be applied; a Sum of Squared Differences (SSD), which adds the squared differences between the corresponding samples of the template of the current block and the searched template, may be applied; or a Sum of Absolute Transformed Differences (SATD), which applies a transform to the differences between the corresponding samples of the template of the current block and the searched template, may be applied.
  • the coding device may update the motion information of the current block (S710).
  • the new L0 motion information may be derived as updated L0 motion information of the current block
  • the new L1 motion information may be derived as updated L1 motion information of the current block.
  • the update of the motion information may be performed even if pair prediction is not applied to the current block, and if pair prediction is applied to the current block, the motion information update may be performed independently in each direction.
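The template-based update described above can be sketched as follows, assuming SAD as the cost function. Templates are flattened into plain lists of sample values, and `refine_by_template` and its inputs are hypothetical names introduced for illustration; they are not from the disclosure.

```python
# Sketch of template-based refinement: among candidate reference positions,
# pick the one whose template (neighbouring samples) best matches the current
# block's template under SAD. Run once per prediction direction (L0, L1).

def sad(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def refine_by_template(cur_template, candidates):
    """candidates: list of (mv, reference_template) pairs for one direction.

    Returns the MV whose reference template minimises SAD against the
    current block's template, together with that minimum cost.
    """
    best_mv, best_cost = None, float("inf")
    for mv, ref_template in candidates:
        cost = sad(cur_template, ref_template)
        if cost < best_cost:
            best_mv, best_cost = mv, cost
    return best_mv, best_cost

cur = [100, 102, 98, 101]
cands = [((0, 0), [90, 90, 90, 90]), ((1, 0), [100, 101, 99, 101])]
mv, cost = refine_by_template(cur, cands)
# mv == (1, 0): its template differs least from the current block's template
```

Because the template uses already-reconstructed neighbouring samples, the same search can be repeated at the decoder without extra signaling, which is the point of the method above.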
  • when an AMVP mode is applied to the current block, reference blocks having a small difference between corresponding samples may be derived from among the reference blocks in the reference pictures of the current block, and the motion information of the current block may be updated based on the motion information representing the derived reference blocks. That is, when the AMVP mode is applied to the current block and pair prediction is applied, a method of updating the motion information using the similarity between reference blocks may be applied. The method may be similar to the above-described method of updating the motion information by using the similarity between reference blocks when the merge mode is applied to the current block.
  • FIG. 13 exemplarily illustrates a method of updating motion information of the current block based on reference blocks having a minimum difference of samples.
  • the coding apparatus may determine whether pair prediction is applied to the current block (S1300). When pair prediction is applied to the current block, the coding apparatus may derive the reference block in each direction based on the motion information in each direction (S1310).
  • the motion information derived based on the information on the motion information of the current block may include L0 motion information and L1 motion information.
  • An L0 reference block may be derived based on the L0 motion information
  • an L1 reference block may be derived based on the L1 motion information.
  • the L0 reference block may be called a first reference block
  • the L1 reference block may be called a second reference block.
  • the coding apparatus derives an updated L0 reference block and an updated L1 reference block, from among the L0 reference blocks around the L0 reference block and the L1 reference blocks around the L1 reference block, for which the difference between corresponding samples is minimal, and may search for updated motion information representing the updated L0 reference block and the updated L1 reference block.
  • the coding apparatus may derive the L0 reference block as the updated L0 reference block, and may derive, as the updated L1 reference block, the L1 reference block having a minimum difference from the L0 reference block among the L1 reference blocks.
  • the difference may be called a cost.
  • the cost may be derived as the sum of the absolute values of the differences between the L0 reference block and the corresponding samples of the L1 reference block.
  • the coding apparatus may derive the updated L0 reference block and the motion information indicating the updated L1 reference block as the updated motion information.
  • alternatively, the coding apparatus may derive the L1 reference block as the updated L1 reference block, and may derive, as the updated L0 reference block, the L0 reference block having a minimum cost among the L0 reference blocks. In this case, the coding apparatus may derive the motion information indicating the updated L0 reference block and the updated L1 reference block as the updated motion information.
  • alternatively, the coding apparatus may derive an arbitrary first temporary L0 reference block from among the L0 reference blocks, and may derive a first temporary L1 reference block at a position symmetric to the first temporary L0 reference block, with respect to the current block of the current picture, on the L1 reference picture that includes the L1 reference block.
  • the coding apparatus may derive a first difference between the first temporary L0 reference block and the first temporary L1 reference block.
  • the coding apparatus may change the position of the temporary L0 reference block within a certain region and/or up to a certain number of times based on the L0 reference block, derive a second temporary L1 reference block symmetric to the second temporary L0 reference block at the changed position, and derive a second difference between the second temporary L0 reference block and the second temporary L1 reference block.
  • the coding apparatus may repeat the above-described procedure based on the predetermined region and/or the predetermined number of times, and may derive, as the updated L0 reference block and the updated L1 reference block, the n-th temporary L0 reference block and the n-th temporary L1 reference block having the minimum difference among the first to n-th differences.
  • the coding apparatus may derive the motion information indicating the updated L0 reference block and the updated L1 reference block as the updated motion information.
  • the coding apparatus may update the motion information of the current block based on the updated L0 reference block and the motion information indicating the updated L1 reference block (S1330).
  • the coding device may derive the updated L0 reference block and the motion information indicating the updated L1 reference block as the updated motion information.
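The symmetric temporary-reference-block procedure above can be sketched as a mirrored offset search: the L0 position is offset and the L1 position takes the opposite offset, keeping the pair of blocks whose sample difference is smallest. `block_at` is a hypothetical sample-fetch helper introduced for illustration; a real codec would interpolate sub-pel samples here.

```python
# Sketch of the reference-block-similarity update for bi-prediction:
# symmetric (mirrored) offsets around the current block, minimising SAD
# between the L0 and L1 candidate blocks.

def sad(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def bilateral_refine(block_at, mv_l0, mv_l1, search_range=1):
    best = (float("inf"), mv_l0, mv_l1)
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            # offset L0 by (dx, dy) and L1 by the mirrored (-dx, -dy)
            cand_l0 = (mv_l0[0] + dx, mv_l0[1] + dy)
            cand_l1 = (mv_l1[0] - dx, mv_l1[1] - dy)
            cost = sad(block_at("L0", cand_l0), block_at("L1", cand_l1))
            if cost < best[0]:
                best = (cost, cand_l0, cand_l1)
    return best  # (minimum cost, updated L0 MV, updated L1 MV)

# Toy picture: only the mirrored pair (1, 0) / (-1, 0) matches well.
samples = {("L0", (1, 0)): [10, 10], ("L1", (-1, 0)): [10, 10]}
def block_at(lst, mv):
    return samples.get((lst, mv), [0, 0] if lst == "L0" else [255, 255])

cost, l0, l1 = bilateral_refine(block_at, (0, 0), (0, 0))
```

Because only reconstructed reference pictures are read, the search is repeatable at the decoder, matching the requirement above that the update be derived identically on both sides.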
  • the motion information of the current block may be derived based on the MVP index and the MVD of the transmitted current block, and the prediction block derived based on the motion information.
  • the motion information of the current block may be updated using a predicted block.
  • the method may be similar to the method of updating motion information using a predicted block when merge mode is applied to the current block described above.
  • the coding apparatus may determine whether pair prediction is applied to the current block (S1400). When pair prediction is applied to the current block, the coding apparatus may derive a predicted block based on motion information in each direction (S1410).
  • the motion information derived based on the information on the motion information of the current block may include L0 motion information and L1 motion information.
  • An L0 reference block may be derived based on the L0 motion information
  • an L1 reference block may be derived based on the L1 motion information.
  • the L0 reference block may be called a first reference block, and the L1 reference block may be called a second reference block.
  • the coding apparatus may generate the prediction block based on the L0 reference block and the L1 reference block.
  • the prediction block may be derived as an average value of reference blocks in each direction.
  • the coding apparatus may generate the prediction sample of the prediction block by adding the L0 reference sample of the L0 reference block and the L1 reference sample of the L1 reference block corresponding to the L0 reference sample with the same weight.
  • the prediction sample may be generated by halving the sum of the sample value of the L0 reference sample and the sample value of the L1 reference sample.
  • a prediction block may be generated based on a weight for a specific direction.
  • for example, a weight of 3 may be applied to the L0 reference block and a weight of 1 to the L1 reference block, and a prediction block may be generated based on the weights. That is, the prediction sample of the prediction block may be generated by adding the sample value of the L0 reference block and the sample value of the L1 reference block with a weight ratio of 3:1.
  • the method for determining the weight for each reference block may include a method of directly signaling the information on the weight, and a method of deriving the weight based on the same condition in the decoding apparatus and the encoding apparatus.
  • the method of deriving the weight based on the same condition may include a method of deriving the weight based on the difference between the POC of a reference picture and the POC of the current picture, a method of deriving the weight based on the ratio between the absolute value of MVL0 and the absolute value of MVL1, and a method of deriving the weight based on the ratio between the absolute MVD value of the L0 motion information and the absolute MVD value of the L1 motion information.
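The equal-weight average and the weighted combination (such as the 3:1 example above) can be sketched as follows. Integer rounding conventions vary between codecs; this sketch simply truncates, as Python's `//` does, and the function name is illustrative.

```python
# Sketch of combining the L0 and L1 reference blocks into a prediction block,
# with equal weights (the plain average) or an explicit weight ratio.

def bi_predict(l0_block, l1_block, w0=1, w1=1):
    total = w0 + w1
    return [(w0 * a + w1 * b) // total for a, b in zip(l0_block, l1_block)]

equal = bi_predict([100, 104], [96, 100])               # plain average
skewed = bi_predict([100, 104], [96, 100], w0=3, w1=1)  # 3:1 toward L0
# equal  == [98, 102]
# skewed == [99, 103]
```

With equal weights this is exactly the halved sum described above; the 3:1 case pulls each prediction sample toward the L0 reference block.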
  • the coding device may update the motion information of the current block based on the generated prediction block (S1420).
  • the coding apparatus may update the motion information by assuming the prediction block as an original block of motion estimation.
  • the above-described process may be performed in the same manner in the encoding apparatus and the decoding apparatus. Deriving new motion information may be represented as finding a block most similar to the prediction block in a reference picture. That is, a reference block most similar to the prediction block in the reference picture may be derived, and motion information indicating the reference block may be derived as the new motion information.
  • the method of updating the motion information of the current block based on the prediction block may be performed only when pair prediction is applied to the current block.
  • since the prediction block represents the current block, when new motion information of the current block is searched for based on the prediction block, the method may be applied only when, among the pair predictions, prediction is performed with reference to a reference block in the past direction and a reference block in the future direction based on the current picture.
  • the prediction performed based on the reference block in the past direction and the reference block in the future direction may be called true bi-directional prediction.
  • the past direction may indicate a prediction direction toward reference pictures having a POC smaller than the POC of the current picture, and the future direction may indicate a prediction direction toward reference pictures having a POC larger than the POC of the current picture. Details of the true bi-directional prediction may be as follows.
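The true bi-directional condition can be sketched as a POC comparison: one reference picture must lie in the past (smaller POC) and the other in the future (larger POC) relative to the current picture. The POC values and the function name are illustrative.

```python
# Sketch of the true bi-directional prediction check based on POC values.

def is_true_bipred(poc_cur, poc_l0, poc_l1):
    # one reference strictly in the past and one strictly in the future
    return (poc_l0 < poc_cur < poc_l1) or (poc_l1 < poc_cur < poc_l0)

assert is_true_bipred(4, 2, 6)      # one past, one future reference
assert not is_true_bipred(4, 2, 3)  # both references in the past
```

Pair prediction with both references on the same side of the current picture (as in FIG. 15B and FIG. 15C) fails this check, so the prediction-block-based update above would not be applied there.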
  • Tn may represent a difference between a POC value of a reference picture and a POC value of the current picture.
  • the difference may be called the time distance.
  • the reference blocks included in the T-2 reference picture and the reference blocks included in the T1 reference picture may be L0 reference blocks and L1 reference blocks, respectively, or may be L1 reference blocks and L0 reference blocks, respectively.
  • pair prediction may be performed based on reference blocks in the past direction of the current block as shown in FIG. 15B, or based on reference blocks in the future direction of the current block as shown in FIG. 15C.
  • the reference block most similar to the prediction block may be searched around the reference block derived based on each of the L0 motion information and the L1 motion information. Since the search of the most similar reference block should be performed identically in the encoding apparatus and the decoding apparatus, the same cost function and the same search range may be specified.
  • a Sum of Absolute Differences (SAD), which simply adds the differences between the corresponding samples of the predicted block and the searched reference block, may be applied as the cost function.
  • the SAD may be derived as in the following equation:
  • SAD = Σ_{j=0}^{height-1} Σ_{i=0}^{width-1} |Block_cur(i, j) − Block_ref(i, j)|
  • here, Block_cur(i, j) is the prediction sample at the (i, j) coordinates in the predicted block of the current block, Block_ref(i, j) is the reconstructed sample at the (i, j) coordinates in the reference block, width represents the width of the prediction block, and height represents the height of the prediction block.
  • a Sum of Squared Differences (SSD) may also be applied as the cost function, and may be derived as in the following equation:
  • SSD = Σ_{j=0}^{height-1} Σ_{i=0}^{width-1} (Block_cur(i, j) − Block_ref(i, j))²
  • here, Block_cur(i, j) is the prediction sample at the (i, j) coordinates in the predicted block of the current block, Block_ref(i, j) is the reconstructed sample at the (i, j) coordinates in the reference block, width represents the width of the prediction block, and height represents the height of the prediction block.
  • a sum of absolute transformed differences (SATD) for applying a transform to a difference between corresponding samples of the prediction block and the searched reference block may be applied as the cost function.
  • the SATD may be derived as in the following equation:
  • SATD = Σ_{j=0}^{height-1} Σ_{i=0}^{width-1} |Tr(Block_cur(i, j) − Block_ref(i, j))|
  • here, Block_cur(i, j) is the prediction sample at the (i, j) coordinates in the predicted block of the current block, Block_ref(i, j) is the reconstructed sample at the (i, j) coordinates in the reference block, width is the width of the prediction block, height is the height of the prediction block, and Tr() represents a function for the transform.
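The three cost functions can be sketched over small 2-D blocks as follows. The transform Tr() is assumed here to be a 2x2 Hadamard transform applied to the difference block, a common choice for SATD, though the text does not fix a particular transform; all names are illustrative.

```python
# Sketch of the SAD, SSD, and SATD cost functions over equally sized blocks
# stored as 2-D lists of sample values.

def sad(cur, ref):
    return sum(abs(c - r) for row_c, row_r in zip(cur, ref)
               for c, r in zip(row_c, row_r))

def ssd(cur, ref):
    return sum((c - r) ** 2 for row_c, row_r in zip(cur, ref)
               for c, r in zip(row_c, row_r))

def satd_2x2(cur, ref):
    # difference block
    d = [[c - r for c, r in zip(rc, rr)] for rc, rr in zip(cur, ref)]
    # 2x2 Hadamard: H * D * H with H = [[1, 1], [1, -1]]
    t00 = d[0][0] + d[0][1] + d[1][0] + d[1][1]
    t01 = d[0][0] - d[0][1] + d[1][0] - d[1][1]
    t10 = d[0][0] + d[0][1] - d[1][0] - d[1][1]
    t11 = d[0][0] - d[0][1] - d[1][0] + d[1][1]
    return abs(t00) + abs(t01) + abs(t10) + abs(t11)

cur = [[100, 102], [98, 101]]
ref = [[101, 100], [97, 103]]
# sad -> 1 + 2 + 1 + 2 = 6;  ssd -> 1 + 4 + 1 + 4 = 10
```

SAD is the cheapest to evaluate, SSD penalises large individual errors more, and SATD correlates better with post-transform coding cost at the price of the extra transform.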
  • the search range may be preset to the same range in the encoding apparatus and the decoding apparatus, or may be signaled through a higher level syntax such as a sequence parameter set (SPS), a picture parameter set (PPS), a video parameter set (VPS), or a slice header.
  • the new motion information in each direction derived through the above-described method may be derived as updated motion information of the current block. Meanwhile, the updated motion information may be used not only for the current block but also stored in a memory and used for coding a neighboring block.
  • whether to update the motion information of the current block may be determined according to an arbitrary condition. For example, it may be determined whether the motion information is updated based on whether pair prediction is applied to the current block. In addition, a flag indicating whether to update the motion information may be signaled, and whether to update the motion information of the current block may be determined based on the flag. In addition, whether the motion information is updated may be determined based on whether an adaptive motion vector resolution adjustment algorithm (AMVR) is applied to the current block. In addition, the encoding apparatus and the decoding apparatus may determine whether to update the motion information of the current block based on the size of the current block or the absolute value of the motion information of the current block. Details of the above-described methods may be as follows.
  • the method of determining whether to update the motion information based on whether pair prediction is applied to the current block may determine whether pair prediction is performed on the current block, and when the pair prediction is performed, the motion information of the current block may be updated.
  • the method of determining whether to update the motion information based on whether the pair prediction is applied to the current block is the same as the method of determining whether to update the motion information based on whether the pair prediction is applied when the merge mode is applied. Can be.
  • the method of determining whether to update the motion information based on whether pair prediction is applied to the current block may be applied only when actual pair prediction, that is, true bi-directional prediction is applied.
  • when a flag indicating whether to update the motion information is signaled, whether to update the motion information of the current block may be determined based on the flag. For example, when the value of the flag is 1, the motion information of the current block may be updated. In the method of signaling the flag, since whether to apply the above-described motion information update method is directly transmitted to the decoding apparatus, the computational complexity of the decoding apparatus may be lowered, but the bit amount may be increased.
  • the method of determining whether to update the motion information based on whether the AMVR is applied may determine whether the AMVR is applied to the current block, and when the AMVR is applied, the motion information of the current block may be updated. Since the motion vector has low reliability when the AMVR is applied, the updated motion information may be more accurate, and prediction may be performed based on the updated motion information, thereby improving the prediction accuracy of the current block.
  • the encoding apparatus may perform rate-distortion optimization in consideration of the above-described motion information update method when the AMVR is applied to the current block. The AMVR may be set to be applied only when it yields the optimal result in consideration of the motion information update method.
  • the AMVR may indicate a method of signaling the MV converted into an integer sample unit or 4 sample units by rounding the MV in integer sample units (ie, 1 sample unit) or 4 sample units.
  • the resolution of the MV may be expressed in sub-pixel units (for example, 1/4 pixel unit or 1/16 pixel unit), and when the AMVR is applied, the resolution of the MV may be reduced to obtain a gain in terms of bit rate. However, the reliability of the MV can be lowered.
  • each component of the MV may be rounded and integerized, and the MV may be transmitted in integer units.
  • the integer MV may be derived based on the following equation:
  • MV = Round(MV_pred + MV_difference)
  • here, MV is the MV of the current block, MV_pred is the motion vector predictor (MVP) of the current block, MV_difference is the motion vector difference (MVD) of the current block, and Round() represents a rounding function.
  • the resolution of the MV may be reduced to obtain a gain in terms of bit rate, but the reliability of the MV may be lowered.
  • the updated motion information, that is, the updated MV, may be derived based on the following equation:
  • MV_updated = Round(MV_pred + MV_difference) + MV_refinement
  • here, MV_updated is the updated MV of the current block, MV_pred is the MVP of the current block, MV_difference is the MVD of the current block, Round() is a rounding function, and MV_refinement represents the additional motion information.
  • the MV_refinement may be derived through the same process in the encoding apparatus and the decoding apparatus.
  • the value of the MVD may be reduced.
  • the resolution of the MVD is reduced, and thus the reliability of the motion vector may be increased through the additional motion information.
  • the additional motion information can be derived without signaling of additional information, the reliability of the MV can be improved without increasing the bit rate.
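The rounding-and-refinement relationship described above can be sketched in 1/4-sample units: the MV is rounded to integer samples (multiples of 4 quarter-samples) for signaling, and a decoder-side refinement term restores finer precision. Rounding to nearest (ties away from zero) is assumed here, since the text only names a generic Round() function; all names are illustrative.

```python
# Sketch of the AMVR rounding and decoder-side refinement, with motion
# vector components expressed in quarter-sample units.

QUARTER = 4  # quarter-samples per integer sample

def round_to_integer_sample(v):
    # round to the nearest multiple of QUARTER (ties away from zero)
    sign = 1 if v >= 0 else -1
    return sign * ((abs(v) + QUARTER // 2) // QUARTER) * QUARTER

def amvr_mv(mv_pred, mv_diff):
    # MV = Round(MV_pred + MV_difference), applied per component
    return tuple(round_to_integer_sample(p + d) for p, d in zip(mv_pred, mv_diff))

def updated_mv(mv_pred, mv_diff, mv_refine):
    # MV_updated = Round(MV_pred + MV_difference) + MV_refinement
    base = amvr_mv(mv_pred, mv_diff)
    return tuple(b + r for b, r in zip(base, mv_refine))

mv = amvr_mv((5, -3), (2, 1))                 # (8, -4): integer-sample grid
mv_up = updated_mv((5, -3), (2, 1), (1, -1))  # (9, -5): refinement restored
```

The rounded MV costs fewer bits to signal, and the refinement term, derived identically on both sides, recovers precision without any extra signaling, which is the gain described above.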
  • a combination of information such as the size of the current block and the magnitude of the absolute value of the MV may be derived, and the combination may be used as a condition for determining whether to update the motion information.
  • the combination may be derived through experimental performance of situations in which motion information update is performed through a combination of various conditions.
  • whether to update the motion information of the current block may be determined through a combination of the methods for determining whether to update the motion information of the current block.
  • a method of deriving updated motion information based on the prediction block of the current block, a method of deriving updated motion information based on a template of the current block, or a method using similarity between reference blocks of the current block may be applied. Can be.
  • FIG. 16 schematically shows a video encoding method by an encoding device according to the present invention.
  • the method disclosed in FIG. 16 may be performed by the encoding apparatus disclosed in FIG. 1.
  • S1600 to S1620 of FIG. 16 may be performed by the prediction unit of the encoding apparatus
  • S1630 may be performed by the memory of the encoding apparatus.
  • the encoding apparatus generates motion information of the current block (S1600).
  • the encoding apparatus may apply inter prediction on the current block.
  • when inter prediction is applied to the current block, the encoding apparatus may generate motion information for the current block by applying one of a skip mode, a merge mode, and an adaptive motion vector prediction (AMVP) mode.
  • the encoding apparatus may generate the motion information of the current block based on the motion information of the neighboring blocks of the current block.
  • the motion information may include a motion vector and a reference picture index.
  • the motion information may be bi-prediction motion information or uni-prediction motion information.
  • the bi-predictive motion information may include an L0 reference picture index and an L0 motion vector, an L1 reference picture index, and an L1 motion vector
  • the uni-predicted motion information may include an L0 reference picture index and an L0 motion vector, or may include an L1 reference picture index and an L1 motion vector.
  • L0 represents a reference picture list L0 (List 0)
  • L1 represents a reference picture list L1 (List 1).
  • the encoding apparatus may derive the motion vector of the current block by using the motion vector of the neighboring block of the current block as a motion vector predictor (MVP) of the current block.
  • motion information including the derived motion vector and a reference picture index may be generated.
  • the encoding apparatus derives modified motion information for the current block based on a predicted block derived from the generated motion information, a template of the current block, or the position indicated by the motion information (S1610).
  • the encoding apparatus may derive the modified motion information for the current block through various methods.
  • when the merge mode is applied to the current block, the modified motion information may indicate a modified motion vector; when the AMVP mode is applied to the current block, the modified motion information may indicate a modified motion vector or a modified motion vector predictor (MVP).
  • the encoding apparatus may derive the modified motion information based on a predicted block of the current block.
  • a predicted block may be derived based on the motion information of the current block, and a specific reference block having a minimum cost with the predicted block among the reference blocks in a specific reference picture is derived.
  • the specific reference picture may be a reference picture indicated by the reference picture index included in the motion information.
  • motion information indicating the specific reference block may be derived as the modified motion information.
  • the cost may be a sum of absolute difference (SAD), and the SAD may be determined based on Equation 3 described above.
  • the cost may be a sum of squared difference (SSD), and the SSD may be determined based on Equation 4 described above.
  • the cost may be Sum of Absolute Transformed Differences (SATD), and the SATD may be determined based on Equation 5 described above.
  • a predetermined region including a reference block derived based on the motion vector included in the motion information may be derived as a search region, and a reference block having a minimum cost with the prediction block among the reference blocks included in the search region may be derived as the specific reference block. That is, a predetermined region including the position indicated by the motion information may be derived as a search region, and a reference block having a minimum cost with the prediction block among the reference blocks included in the search region may be derived as the specific reference block.
  • the search area may be derived as follows.
  • an area within 1, 2, 3, or 4 subpixels (or fractional samples) from the reference point (the position indicated by the motion information) in the reference block derived based on the motion vector may be derived as the search area.
  • the reference point (or position) may be an upper left sample of the reference block.
  • points in the search region may correspond to positions of upper left samples of reference blocks included in the search region.
  • the unit of the sub-pixel may be 1/4 pixel (or sample), and may be 1/8 pixel or 1/16 pixel.
  • nine pixels (or samples) including a reference point in the reference block derived based on the motion vector may be set as a search region.
  • the search area may be set such that the search is performed in a vertical direction, a horizontal direction, or a diagonal direction.
  • the search area may be set to perform a search in the direction of the motion vector, and the search area may be set to perform a search in a direction opposite to the motion vector.
  • the search region may be derived based on the size of the current block, the resolution of the input image, and / or the magnitude of the absolute value of the motion vector of the current block.
  • information about the search area may be generated.
  • the encoding apparatus may derive the modified motion information based on a template of the current block.
  • the template of the current block may be represented as a specific area including neighboring samples of the current block.
  • a template of the current block may be derived based on neighboring samples of the current block, and among the reference blocks in a specific reference picture, a reference block having a template with a minimum cost relative to the template of the current block may be derived as the specific reference block.
  • Motion information indicating the specific reference block may be derived as the modified motion information.
  • the cost may be a sum of absolute difference (SAD), and the SAD may be determined based on Equation 3 described above.
  • the cost may be a sum of squared difference (SSD), and the SSD may be determined based on Equation 4 described above.
  • the cost may be Sum of Absolute Transformed Differences (SATD), and the SATD may be determined based on Equation 5 described above.
  • a predetermined region including a reference block derived based on the motion vector included in the motion information may be derived as a search region, and among the reference blocks included in the search region, a reference block having a template with a minimum cost relative to the template of the current block may be derived as the specific reference block. That is, a predetermined region including the position indicated by the motion information may be derived as a search region, and among the reference blocks included in the search region, a reference block having a template with a minimum cost relative to the template of the current block may be derived as the specific reference block.
  • the search area may be derived as follows.
  • an area within 1, 2, 3, or 4 subpixels (or fractional samples) from the reference point (the position indicated by the motion information) in the reference block derived based on the motion vector may be derived as the search area.
  • the reference point (or the position) may be an upper left sample of the reference block, and the size of the reference block may be the same as the size of the current block.
  • points in the search region may correspond to positions of upper left samples of reference blocks included in the search region.
  • the unit of the sub-pixel may be 1/4 pixel (or sample), and may be 1/8 pixel or 1/16 pixel.
  • nine pixels (or samples) including a reference point in the reference block derived based on the motion vector may be set as a search region.
  • the search area may be set such that the search is performed in a vertical direction, a horizontal direction, or a diagonal direction.
  • the search area may be set to perform a search in the direction of the motion vector, and the search area may be set to perform a search in a direction opposite to the motion vector.
  • the search region may be derived based on the size of the current block, the resolution of the input image, and / or the magnitude of the absolute value of the motion vector of the current block.
  • information about the search area may be generated.
  • the encoding apparatus may derive the modified motion information based on the reference blocks of the current block.
  • the encoding apparatus may derive the modified motion information based on the position indicated by the motion information.
  • the motion information may be bi-predictive motion information. That is, the motion information may include L0 motion information and L1 motion information.
  • an L0 reference block of the current block may be derived based on the L0 motion information, and an L1 reference block of the current block may be derived based on the L1 motion information.
  • a specific L0 reference block and a specific L1 reference block having the minimum cost between them may be derived from among the L0 reference blocks in the search region including the L0 reference block and the L1 reference blocks in the search region including the L1 reference block.
  • motion information including L0 motion information indicating the specific L0 reference block and L1 motion information indicating the specific L1 reference block may be derived as the modified motion information. That is, a specific L0 reference block and a specific L1 reference block having the minimum cost may be derived from among the L0 reference blocks in the search region including the position indicated by the L0 motion information and the L1 reference blocks in the search region including the position indicated by the L1 motion information, and motion information including L0 motion information indicating the specific L0 reference block and L1 motion information indicating the specific L1 reference block may be derived as the modified motion information.
  • the cost may be a sum of absolute difference (SAD), and the SAD may be determined based on Equation 3 described above.
  • the cost may be a sum of squared difference (SSD), and the SSD may be determined based on Equation 4 described above.
  • the cost may be Sum of Absolute Transformed Differences (SATD), and the SATD may be determined based on Equation 5 described above.
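The three matching costs named above can be illustrated with minimal sketches (the exact forms are given by Equations 3 to 5 of this description, which are not reproduced here; these are the standard definitions, with `a` and `b` as flattened sample arrays, and the 1-D four-point Hadamard case of SATD chosen purely for brevity):

```python
def sad(a, b):
    # Sum of Absolute Differences between two sample arrays.
    return sum(abs(x - y) for x, y in zip(a, b))

def ssd(a, b):
    # Sum of Squared Differences between two sample arrays.
    return sum((x - y) ** 2 for x, y in zip(a, b))

def satd4(a, b):
    # SATD on a length-4 vector: Hadamard-transform the difference,
    # then sum the absolute transformed values (illustrative 1-D case).
    d = [x - y for x, y in zip(a, b)]
    t0, t1 = d[0] + d[2], d[1] + d[3]
    t2, t3 = d[0] - d[2], d[1] - d[3]
    h = [t0 + t1, t0 - t1, t2 + t3, t2 - t3]
    return sum(abs(v) for v in h)
```

SSD penalizes large individual differences more strongly than SAD, while SATD measures the difference in a transformed domain, which often tracks coding cost more closely at higher computational expense.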
  • the search region including the L0 reference block or the L1 reference block may be derived as follows. That is, the search region including the position indicated by the L0 motion information or the search region including the position indicated by the L1 motion information may be derived as follows. For example, an area within 1, 2, 3, or 4 sub-pixels (or fractional samples) from the reference point (the position indicated by the motion information) in the reference block derived based on the motion vector may be derived as the search area.
  • the reference point (or the position) may be an upper left sample of the reference block, and the size of the reference block may be the same as the size of the current block.
  • points in the search region may correspond to positions of upper left samples of reference blocks included in the search region.
  • the unit of the sub-pixel may be 1/4 pixel (or sample), and may be 1/8 pixel or 1/16 pixel.
  • nine pixels (or samples) including a reference point in the reference block derived based on the motion vector may be set as a search region.
  • nine points including the reference point may be derived as the search area, and the search area may be set such that the search is performed in a vertical direction, a horizontal direction, or a diagonal direction.
  • the search area may be set to perform a search in the direction of the motion vector, and the search area may be set to perform a search in a direction opposite to the motion vector.
  • the search region may be derived based on the size of the current block, the resolution of the input image, and / or the magnitude of the absolute value of the motion vector.
  • information about the search area may be generated.
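The bi-predictive refinement described above can be sketched as an exhaustive pair search over the two search regions. Block fetching is abstracted as dictionaries mapping candidate motion vectors to sample arrays, and the cost function may be any of SAD/SSD/SATD; all names are illustrative assumptions, not part of any codec specification:

```python
# Hedged sketch: among candidate L0 and L1 reference blocks inside their
# search regions, pick the pair whose mutual matching cost is minimal.
def refine_bi(l0_blocks, l1_blocks, cost):
    """l0_blocks / l1_blocks: {motion_vector: sample_array}.
    Returns the (L0 mv, L1 mv) pair with minimum cost between the blocks."""
    best_cost = None
    best_pair = None
    for mv0, b0 in l0_blocks.items():
        for mv1, b1 in l1_blocks.items():
            c = cost(b0, b1)
            if best_cost is None or c < best_cost:
                best_cost, best_pair = c, (mv0, mv1)
    return best_pair  # the modified (L0, L1) motion vectors
```

The pair returned corresponds to the "specific L0 reference block" and "specific L1 reference block" above; the motion information pointing at them becomes the modified motion information.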
  • the encoding apparatus updates the motion information of the current block based on the modified motion information (S1620).
  • the encoding apparatus may update and store the motion information of the current block based on the modified motion information.
  • the encoding apparatus may update the motion information of the current block by replacing the motion information of the current block with the modified motion information.
  • the updated motion information may be used for motion information of a block to be decoded after the decoding process of the current block.
  • AMVR may indicate a method of rounding the motion vector included in the motion information to integer-sample units or 4-sample units to generate a motion vector of integer-sample units or 4-sample units, and signaling the generated motion vector.
  • the resolution of the motion vector may be 1/4 fractional-sample units or 1/16 fractional-sample units.
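The AMVR rounding described above can be sketched as follows, assuming motion vector components stored at 1/4-sample resolution and rounding half away from zero (the exact rounding rule and names are assumptions for illustration):

```python
# Sketch: round a motion vector stored in quarter-pel units to
# integer-sample or 4-sample units, as AMVR does before signaling.
def amvr_round(mv_qpel, unit_samples):
    """mv_qpel: (x, y) in quarter-pel units; unit_samples: 1 or 4."""
    step = unit_samples * 4  # quarter-pel increments per rounding unit
    def round_component(v):
        # round half away from zero, in multiples of `step`
        q = (abs(v) + step // 2) // step
        return q * step if v >= 0 else -q * step
    return tuple(round_component(c) for c in mv_qpel)

print(amvr_round((5, -3), 1))  # (4, -4): nearest integer-sample position
```

Coarser units shorten the signaled motion vector difference at the cost of prediction accuracy, which is the trade-off AMVR exposes.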
  • a combination of information such as the size of the current block and the magnitude of the absolute value of the MV may be derived, and the combination may be used as a condition for determining whether to update the motion information.
  • the combination may be derived through experimental performance of situations in which motion information update is performed through a combination of various conditions.
  • whether to update the motion information of the current block may be determined through a combination of the methods for determining whether to update the motion information of the current block.
  • the encoding apparatus encodes and outputs information on inter prediction of the current block (S1630).
  • the encoding apparatus may generate a merge index indicating the merge candidate selected to derive the motion information of the current block.
  • the encoding device may encode and output the merge index.
  • the merge index may be included in the information about the inter prediction.
  • the information on the inter prediction may include information related to motion information of the current block.
  • the information on the inter prediction may include an L0 reference picture index and an L1 reference picture index of the current block, and may include an MVPL0 (motion vector predictor L0) and an MVPL1 (motion vector predictor L1).
  • the information on the inter prediction may include an MVDL0 (motion vector difference L0) and an MVDL1 (motion vector difference L1).
  • the information on the inter prediction of the current block may include an update mode index.
  • the update mode index may indicate which of the prediction block, the template of the current block, or the position indicated by the motion information is used to derive the modified motion information.
  • Information on inter prediction of the current block may include an update mode index.
  • the update mode index may indicate a method of deriving the modified motion information.
  • the binarization code of the update mode index may have a variable number of bits depending on the value of the update mode index, and a binarization code with a smaller number of bits may be allocated to a value indicating a method with a high selection ratio among the methods of deriving the modified motion information.
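The variable-length property described above can be illustrated with a truncated-unary binarization, a common choice for such small indices (the actual code assignment is not specified in this description; this mapping is an assumption for illustration):

```python
# Illustrative truncated-unary binarization: lower index values, assigned
# to more frequently selected derivation methods, receive shorter codes.
def truncated_unary(value, max_value):
    """Binarize `value` in [0, max_value] as a truncated-unary bit string."""
    bits = "1" * value
    if value < max_value:
        bits += "0"  # terminating bit, omitted for the largest value
    return bits

print([truncated_unary(v, 3) for v in range(4)])  # ['0', '10', '110', '111']
```

Assigning index 0 to the most frequently chosen derivation method minimizes the average signaled bit count, which is the allocation rule the text describes.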
  • the update mode index may be transmitted in units of PUs.
  • alternatively, the update mode index may be transmitted in units of a CU, a CTU, or a slice, or may be transmitted through a higher level such as a picture parameter set (PPS) unit or a sequence parameter set (SPS) unit.
  • the encoding apparatus may generate additional information indicating whether the motion information of the current block is updated, and may encode and output the encoded information through the bitstream.
  • the additional information indicating whether the update is performed may be called an update flag.
  • Information on inter prediction of the current block may include the update flag. When the value of the update flag is 1, it may indicate that the motion information is updated. When the value of the update flag is 0, it may indicate that the motion information is not updated.
  • the update flag may be transmitted in units of PUs. Alternatively, the update flag may be transmitted in units of a CU, a CTU, and a slice, or may be transmitted through a higher level such as a picture parameter set (PPS) unit or a sequence parameter set (SPS) unit.
  • the encoding apparatus may generate information about the search region, and encode and output the information through the bitstream. The information on the inter prediction may include the information about the search area.
  • the information about the search area may include information about a range of the search area, a search direction set in the search area, and / or a resolution.
  • Information about the search area may be transmitted in units of PUs.
  • alternatively, the information about the search area may be transmitted in units of a CU, a CTU, or a slice, or may be transmitted through a higher level such as a picture parameter set (PPS) unit or a sequence parameter set (SPS) unit.
  • prediction of the current block may be performed based on the updated motion information, and the updated motion information may be stored.
  • the updated motion information may be used for motion information of a block included in a neighboring block or another picture decoded after the decoding process of the current block.
  • the prediction of the current block may not be performed based on the updated motion information, but may be stored only. Even in this case, the updated motion information may be used for motion information of a block included in a neighboring block or another picture decoded after the decoding process of the current block.
  • the merge candidate list of the neighboring block may include the current block as a candidate representing the modified motion information.
  • the updated motion information may be included in the MVP candidate list of the neighboring block as an MVP candidate.
  • the encoding apparatus may generate a prediction sample based on the updated motion information.
  • the encoding apparatus may generate a residual sample based on the original sample and the generated prediction sample.
  • the encoding apparatus may generate information about the residual based on the residual sample.
  • the information about the residual may include transform coefficients related to the residual sample.
  • the encoding apparatus may derive the reconstructed sample based on the prediction sample and the residual sample. That is, the encoding apparatus may derive the reconstructed sample by adding the prediction sample and the residual sample.
  • the encoding apparatus may encode the information about the residual and output the bitstream.
  • the bitstream may be transmitted to a decoding apparatus via a network or a storage medium.
  • FIG. 17 schematically illustrates a video decoding method by a decoding apparatus according to the present invention.
  • the method disclosed in FIG. 17 may be performed by the decoding apparatus disclosed in FIG. 2.
  • S1700 of FIG. 17 may be performed by the entropy decoding unit of the decoding apparatus, and S1710 to S1740 may be performed by the prediction unit of the decoding apparatus.
  • the decoding apparatus obtains information on inter prediction of the current block through the bitstream (S1700).
  • inter prediction or intra prediction may be applied.
  • the decoding apparatus may obtain information about inter prediction of the current block through the bitstream.
  • the decoding apparatus may generate a merge candidate list based on neighboring blocks of the current block, and obtain a merge index through a bitstream.
  • the merge index may indicate a merge candidate included in the merge candidate list, and the information on the inter prediction may include the merge index.
  • the information on the inter prediction may include information related to motion information of the current block.
  • the information on the inter prediction may include an L0 reference picture index and an L1 reference picture index of the current block, and may include an MVPL0 (motion vector predictor L0) and an MVPL1 (motion vector predictor L1).
  • the information on the inter prediction may include an MVDL0 (motion vector difference L0) and an MVDL1 (motion vector difference L1).
  • the decoding apparatus may obtain the update mode index through the bitstream.
  • Information on inter prediction of the current block may include an update mode index.
  • the update mode index may indicate a method of deriving modified motion information.
  • the update mode index may indicate which of the prediction block, the template of the current block, or the position indicated by the motion information is used to derive the modified motion information.
  • the binarization code of the update mode index may have a variable number of bits depending on the value of the update mode index, and a binarization code with a smaller number of bits may be assigned to a value indicating a method with a high selection ratio among the methods used to derive the modified motion information.
  • the update mode index may be received in units of PUs.
  • alternatively, the update mode index may be received in a CU unit, a CTU unit, or a slice unit, or may be received through a higher level such as a picture parameter set (PPS) unit or a sequence parameter set (SPS) unit.
  • the decoding apparatus may obtain additional information indicating whether the motion information of the current block is updated through the bitstream.
  • the additional information indicating whether the update is performed may be called an update flag.
  • the information on the inter prediction may include the update flag.
  • the update flag may be received in a CU unit, a CTU unit, a slice unit, or may be received through a higher level such as a picture parameter set (PPS) unit or a sequence parameter set (SPS) unit.
  • the decoding apparatus may obtain information on the search region through the bitstream.
  • Information on inter prediction of the current block may include information on the search region.
  • the information about the search area may include information about a range of the search area, a search direction set in the search area, and / or a resolution.
  • the information about the search area may be received in a CU unit, a CTU unit, a slice unit, or may be received through a higher level such as a picture parameter set (PPS) unit or a sequence parameter set (SPS) unit.
  • the decoding apparatus generates a motion information candidate list based on neighboring blocks of the current block (S1710).
  • the decoding apparatus may generate a motion information candidate list based on neighboring blocks of the current block.
  • the motion information candidate list may indicate a merge candidate list.
  • the motion information candidate list may indicate a motion vector predictor (MVP) candidate list.
  • when the merge mode is applied to the current block, the decoding apparatus may generate a merge candidate list including neighboring blocks of the current block, and when the AMVP mode is applied to the current block, the decoding apparatus may generate the MVP candidate list based on motion vectors of the neighboring blocks of the current block.
  • the motion vectors may be derived from MVP candidates included in the MVP candidate list.
  • the decoding apparatus derives motion information of the current block based on the information on the inter prediction and the motion information candidate list (S1720).
  • the information on the inter prediction may indicate whether any one of a skip mode, a merge mode, and an adaptive motion vector prediction (AMVP) mode is applied to the current block.
  • the decoding apparatus may obtain a merge index indicating one neighboring block among neighboring blocks included in the merge candidate list.
  • the merge index may be included in the information about the inter prediction.
  • the decoding apparatus may derive the motion information of the neighboring block indicated by the merge index as the motion information of the current block.
  • when the AMVP mode is applied to the current block, the decoding apparatus may obtain an MVP index indicating one of the MVP candidates included in the generated MVP candidate list, and a motion vector difference (MVD) between the motion vector (MVP candidate) of the neighboring block indicated by the MVP index and the motion vector of the current block.
  • the MVP index and the MVD may be included in the information about the inter prediction.
  • the decoding apparatus may derive the motion information of the current block based on the motion vector (MVP candidate) of the neighboring block indicated by the MVP index and the MVD.
  • the motion information may include a motion vector and a reference picture index.
  • the motion information may be bi-prediction motion information or uni-prediction motion information.
  • the bi-prediction motion information may include an L0 reference picture index and an L0 motion vector, and an L1 reference picture index and an L1 motion vector.
  • the uni-prediction motion information may include an L0 reference picture index and an L0 motion vector, or may include an L1 reference picture index and an L1 motion vector.
  • L0 represents a reference picture list L0 (List 0)
  • L1 represents a reference picture list L1 (List 1).
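The motion information layout described above can be sketched as a simple data structure (the field names are assumptions for illustration; a real decoder's internal representation may differ):

```python
# Sketch of bi-prediction vs. uni-prediction motion information: bi-prediction
# carries both L0 and L1 entries, uni-prediction carries only one of them.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class MotionInfo:
    l0_ref_idx: Optional[int] = None           # index into reference picture list L0
    l0_mv: Optional[Tuple[int, int]] = None    # L0 motion vector
    l1_ref_idx: Optional[int] = None           # index into reference picture list L1
    l1_mv: Optional[Tuple[int, int]] = None    # L1 motion vector

    @property
    def is_bi(self) -> bool:
        # bi-prediction requires both an L0 and an L1 reference
        return self.l0_ref_idx is not None and self.l1_ref_idx is not None
```

For example, `MotionInfo(l0_ref_idx=0, l0_mv=(1, 2))` models uni-prediction from list L0, while filling all four fields models the bi-prediction case.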
  • the decoding apparatus derives modified motion information of the current block based on a predicted block derived based on the derived motion information, the template of the current block, or the position indicated by the motion information (S1730).
  • the decoding apparatus may derive the modified motion information for the current block through various methods.
  • when the merge mode is applied to the current block, the modified motion information may indicate a modified motion vector, and when the AMVP mode is applied to the current block, the modified motion information may indicate a modified motion vector or a modified motion vector predictor (MVP).
  • the decoding apparatus may derive the modified motion information based on a predicted block of the current block.
  • a predicted block may be derived based on the motion information of the current block, and a specific reference block having the minimum cost with the predicted block among the reference blocks in a specific reference picture may be derived.
  • the specific reference picture may be a reference picture indicated by the reference picture index included in the motion information.
  • motion information indicating the specific reference block may be derived as the modified motion information.
  • the cost may be a sum of absolute difference (SAD), and the SAD may be determined based on Equation 3 described above.
  • the cost may be a sum of squared difference (SSD), and the SSD may be determined based on Equation 4 described above.
  • the cost may be Sum of Absolute Transformed Differences (SATD), and the SATD may be determined based on Equation 5 described above.
  • a predetermined region including the reference block derived based on the motion vector included in the motion information may be derived as a search region, and among the reference blocks included in the search region, the reference block having the minimum cost with the prediction block may be derived as the specific reference block. That is, a predetermined region including the position indicated by the motion information may be derived as a search region, and among the reference blocks included in the search region, the reference block having the minimum cost with the prediction block may be derived as the specific reference block.
  • the search area may be derived as follows.
  • an area within 1, 2, 3, or 4 sub-pixels (or fractional samples) from the reference point (the position indicated by the motion information) in the reference block derived based on the motion vector may be derived as the search area.
  • the reference point (or position) may be an upper left sample of the reference block.
  • points in the search region may correspond to positions of upper left samples of reference blocks included in the search region.
  • the unit of the sub-pixel may be 1/4 pixel (or sample), and may be 1/8 pixel or 1/16 pixel.
  • nine pixels (or samples) including a reference point in the reference block derived based on the motion vector may be set as a search region.
  • the search area may be set such that the search is performed in a vertical direction, a horizontal direction, or a diagonal direction.
  • the search area may be set to perform a search in the direction of the motion vector, and the search area may be set to perform a search in a direction opposite to the motion vector.
  • the search region may be derived based on the size of the current block, the resolution of the input image, and / or the magnitude of the absolute value of the motion vector of the current block.
  • the decoding apparatus may derive the modified motion information based on a template of the current block.
  • the template of the current block may be represented as a specific area including neighboring samples of the current block.
  • the template may include left samples and / or top samples of the current block.
  • a template of the current block may be derived based on the neighboring samples of the current block, and among the reference blocks in a specific reference picture, the reference block whose template has the minimum cost with the template of the current block may be derived as the specific reference block. Motion information indicating the specific reference block may be derived as the modified motion information.
  • the cost may be a sum of absolute difference (SAD), and the SAD may be determined based on Equation 3 described above.
  • the cost may be a sum of squared difference (SSD), and the SSD may be determined based on Equation 4 described above.
  • the cost may be Sum of Absolute Transformed Differences (SATD), and the SATD may be determined based on Equation 5 described above.
  • a predetermined region including the reference block derived based on the motion vector included in the motion information may be derived as a search region, and among the reference blocks included in the search region, the reference block whose template has the minimum cost with the template of the current block may be derived as the specific reference block. That is, a predetermined region including the position indicated by the motion information may be derived as a search region, and among the reference blocks included in the search region, the reference block whose template has the minimum cost with the template of the current block may be derived as the specific reference block.
  • the search area may be derived as follows.
  • an area within 1, 2, 3, or 4 sub-pixels (or fractional samples) from the reference point (the position indicated by the motion information) in the reference block derived based on the motion vector may be derived as the search area.
  • the reference point (or the position) may be an upper left sample of the reference block, and the size of the reference block may be the same as the size of the current block.
  • points in the search region may correspond to positions of upper left samples of reference blocks included in the search region.
  • the unit of the sub-pixel may be 1/4 pixel (or sample), and may be 1/8 pixel or 1/16 pixel.
  • nine pixels (or samples) including a reference point in the reference block derived based on the motion vector may be set as a search region.
  • the search area may be set such that the search is performed in a vertical direction, a horizontal direction, or a diagonal direction.
  • the search area may be set to perform a search in the direction of the motion vector, and the search area may be set to perform a search in a direction opposite to the motion vector.
  • the search region may be derived based on the size of the current block, the resolution of the input image, and / or the magnitude of the absolute value of the motion vector of the current block.
  • the decoding apparatus may derive the modified motion information based on the reference blocks of the current block.
  • the decoding apparatus may derive the modified motion information based on the position indicated by the motion information.
  • the motion information may be bi-predictive motion information. That is, the motion information may include L0 motion information and L1 motion information.
  • an L0 reference block of the current block may be derived based on the L0 motion information, and an L1 reference block of the current block may be derived based on the L1 motion information.
  • a specific L0 reference block and a specific L1 reference block having the minimum cost between them may be derived from among the L0 reference blocks in the search region including the L0 reference block and the L1 reference blocks in the search region including the L1 reference block.
  • motion information including L0 motion information indicating the specific L0 reference block and L1 motion information indicating the specific L1 reference block may be derived as the modified motion information. That is, a specific L0 reference block and a specific L1 reference block having the minimum cost may be derived from among the L0 reference blocks in the search region including the position indicated by the L0 motion information and the L1 reference blocks in the search region including the position indicated by the L1 motion information, and motion information including L0 motion information indicating the specific L0 reference block and L1 motion information indicating the specific L1 reference block may be derived as the modified motion information.
  • the cost may be a sum of absolute difference (SAD), and the SAD may be determined based on Equation 3 described above.
  • the cost may be a sum of squared difference (SSD), and the SSD may be determined based on Equation 4 described above.
  • the cost may be Sum of Absolute Transformed Differences (SATD), and the SATD may be determined based on Equation 5 described above.
  • the search region including the L0 reference block or the L1 reference block may be derived as follows. That is, the search region including the position indicated by the L0 motion information or the search region including the position indicated by the L1 motion information may be derived as follows. For example, an area within 1, 2, 3, or 4 sub-pixels (or fractional samples) from the reference point (the position indicated by the motion information) in the reference block derived based on the motion vector may be derived as the search area.
  • the reference point (or the position) may be an upper left sample of the reference block, and the size of the reference block may be the same as the size of the current block.
  • points in the search region may correspond to positions of upper left samples of reference blocks included in the search region.
  • the unit of the sub-pixel may be 1/4 pixel (or sample), and may be 1/8 pixel or 1/16 pixel.
  • nine pixels (or samples) including a reference point in the reference block derived based on the motion vector may be set as a search region.
  • nine points including the reference point may be derived as the search area, and the search area may be set such that the search is performed in a vertical direction, a horizontal direction, or a diagonal direction.
  • the search area may be set to perform a search in the direction of the motion vector, and the search area may be set to perform a search in a direction opposite to the motion vector.
  • the search region may be derived based on the size of the current block, the resolution of the input image, and / or the magnitude of the absolute value of the motion vector.
  • the update mode index may indicate which of the prediction block, the template of the current block, or the position indicated by the motion information is used to derive the modified motion information.
  • Information on inter prediction of the current block may include an update mode index.
  • the update mode index may indicate a method of deriving the modified motion information.
  • the search region may be derived based on the information about the search region.
  • the decoding apparatus updates the motion information of the current block based on the modified motion information (S1740).
  • the decoding apparatus may update and store the motion information of the current block based on the modified motion information.
  • the decoding apparatus may update the motion information of the current block by replacing the motion information of the current block with the modified motion information.
  • the updated motion information may be used for motion information of a block to be decoded after the decoding process of the current block.
  • the motion information of the current block may be updated based on a specific condition. For example, whether bi-prediction is performed for the current block may be determined, and whether to update the motion information may be determined based on whether bi-prediction is performed.
  • when bi-prediction is performed for the current block, the motion information of the current block may be updated.
  • when bi-prediction is performed, a first reference block among the reference blocks derived based on the motion information is included in a reference picture having a POC value smaller than the POC value of the current picture, and a second reference block among the reference blocks is included in a reference picture having a POC value larger than the POC value of the current picture, the motion information of the current block may be updated.
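The POC-based update condition described above can be sketched as follows, under the assumption that it requires the two reference pictures of a bi-predicted block to lie on opposite temporal sides of the current picture (function and parameter names are illustrative):

```python
# Sketch: decide whether the motion information update is allowed, based on
# bi-prediction and the POC ordering of the two reference pictures relative
# to the current picture.
def update_allowed(cur_poc, ref0_poc, ref1_poc, is_bi):
    """True if bi-prediction is used and the references temporally
    bracket the current picture (one earlier POC, one later POC)."""
    if not is_bi:
        return False
    return (ref0_poc < cur_poc < ref1_poc) or (ref1_poc < cur_poc < ref0_poc)

print(update_allowed(10, 8, 12, True))  # True
```

When both references precede (or both follow) the current picture, the bilateral matching assumption does not hold, so the update is skipped.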
  • AMVR may indicate a method of rounding the motion vector included in the motion information to integer-sample units or 4-sample units to generate a motion vector of integer-sample units or 4-sample units, and signaling the generated motion vector.
  • the resolution of the motion vector may be 1/4 fractional-sample units or 1/16 fractional-sample units.
  • a combination of information such as the size of the current block and the magnitude of the absolute value of the MV may be derived, and the combination may be used as a condition for determining whether to update the motion information.
  • the combination may be derived through experimental performance of situations in which motion information update is performed through a combination of various conditions.
  • whether to update the motion information of the current block may be determined through a combination of the methods for determining whether to update the motion information of the current block.
  • Information on inter prediction of the current block may include the update flag.
  • the value of the update flag is 1, it may indicate that the motion information is updated.
  • the value of the update flag is 0, it may indicate that the motion information is not updated.
  • prediction of the current block may be performed based on the updated motion information, and the updated motion information may be stored.
  • the updated motion information may be used as reference information for deriving motion information of a neighboring block decoded after the decoding process of the current block or a block included in another picture.
  • the prediction of the current block may be performed based on the pre-update motion information instead of the updated motion information, and the updated motion information may be stored for decoding of a subsequent block / picture.
  • the updated motion information may be used as reference information for deriving motion information of a neighboring block decoded after the decoding process of the current block or a block included in another picture.
  • a merge candidate list of the neighboring block may include a candidate indicating the modified motion information.
  • the updated motion information may be included in the MVP candidate list of the neighboring block as an MVP candidate.
  • a prediction block of the current block may be derived based on the updated motion information, and a reconstruction block may be derived based on the prediction block.
  • the decoding apparatus may generate a prediction sample based on the updated motion information and, depending on the prediction mode, may either use the prediction sample directly as a reconstruction sample or generate a reconstruction sample by adding a residual sample to the prediction sample. If there is a residual sample for the current block, the decoding apparatus may receive information about the residual for the current block from the bitstream. The information about the residual may include transform coefficients for the residual sample. The decoding apparatus may derive the residual sample (or residual sample array) for the current block based on the residual information.
  • the decoding apparatus may generate a reconstructed sample based on the prediction sample and the residual sample, and may derive a reconstructed block or a reconstructed picture based on the reconstructed sample. Thereafter, as described above, the decoding apparatus may apply an in-loop filtering procedure, such as deblocking filtering and/or an SAO procedure, to the reconstructed picture in order to improve subjective/objective picture quality as necessary.
  • the modified motion information of the current block can be calculated and the stored motion information updated to more accurate values, thereby improving prediction efficiency.
  • the above-described method according to the present invention may be implemented in software, and the encoding device and/or decoding device according to the present invention may be included in an image processing device such as, for example, a TV, a computer, a smartphone, a set-top box, or a display device.
  • the above-described method may be implemented as a module (process, function, etc.) for performing the above-described function.
  • the module may be stored in memory and executed by a processor.
  • the memory may be internal or external to the processor and may be coupled to the processor by various well known means.
  • the processor may include application-specific integrated circuits (ASICs), other chipsets, logic circuits, and/or data processing devices.
  • the memory may include read-only memory (ROM), random access memory (RAM), flash memory, memory cards, storage media, and/or other storage devices.
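As a concrete illustration of the AMVR rounding described in the bullets above, the following sketch rounds one motion vector component, assumed to be stored in 1/16-fractional-sample units, to integer-sample or 4-sample precision. The function name, the storage unit, and the round-half-away-from-zero convention are assumptions for illustration, not details fixed by this document.

```python
def round_mv_amvr(mv: int, resolution_shift: int) -> int:
    """Round one MV component stored in 1/16-fractional-sample units.

    resolution_shift = 4 zeroes the 4 fractional bits (integer-sample units);
    resolution_shift = 6 additionally drops 2 integer bits (4-sample units).
    Rounds half away from zero (an assumed convention).
    """
    offset = 1 << (resolution_shift - 1)
    if mv >= 0:
        return ((mv + offset) >> resolution_shift) << resolution_shift
    return -(((-mv + offset) >> resolution_shift) << resolution_shift)
```

For example, a component of 24 (1.5 samples) rounds to 32 (2 samples) at integer-sample precision, and 100 (6.25 samples) rounds to 128 (8 samples) at 4-sample precision.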
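The bullets above also describe deriving an update condition from a combination of block size and MV magnitude. A hypothetical predicate of that kind is sketched below; the thresholds (64 luma samples, 16 units in quarter-sample precision) and the AND-combination are invented for illustration and would in practice be chosen experimentally, as the text notes.

```python
def should_update_motion_info(width: int, height: int, mv: tuple) -> bool:
    """Hypothetical combined condition for the motion-information update.

    Updates only blocks that are large enough and have significant motion;
    both thresholds are illustrative assumptions.
    """
    mvx, mvy = mv
    large_enough = width * height >= 64                  # at least 8x8 luma samples
    significant_motion = max(abs(mvx), abs(mvy)) >= 16   # >= 4 samples in 1/4-pel units
    return large_enough and significant_motion
```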
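The reconstruction step in the bullets above (use the prediction sample directly, or add a residual sample to it) can be sketched per sample as follows. Clipping the sum to the valid range for the bit depth is an assumed convention, not something spelled out in the text.

```python
def reconstruct_samples(pred, resid=None, bit_depth=8):
    """Form reconstructed samples from prediction and (optional) residual.

    With no residual (resid is None) the prediction samples are used
    directly; otherwise pred + resid is clipped to [0, 2**bit_depth - 1].
    """
    max_val = (1 << bit_depth) - 1
    if resid is None:
        return list(pred)
    return [min(max(p + r, 0), max_val) for p, r in zip(pred, resid)]
```

An in-loop filter such as deblocking and/or SAO would then be applied to the reconstructed picture, as the text describes.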

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The present invention relates to a method for decoding an image by a decoding apparatus, comprising the steps of: obtaining information on inter prediction for a current block through a bitstream; generating a motion information candidate list on the basis of neighboring blocks of the current block; deriving motion information of the current block on the basis of the information on inter prediction and the motion information candidate list; deriving modified motion information of the current block on the basis of a prediction block derived according to the derived motion information, a template of the current block, or a position indicated by the motion information; and updating the motion information of the current block on the basis of the modified motion information.
PCT/KR2017/007360 2017-01-03 2017-07-10 Method and apparatus for image decoding in image coding system Ceased WO2018128232A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201762441586P 2017-01-03 2017-01-03
US62/441,586 2017-01-03

Publications (1)

Publication Number Publication Date
WO2018128232A1 true WO2018128232A1 (fr) 2018-07-12

Family

ID=62790961

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2017/007360 Ceased WO2018128232A1 (fr) Method and apparatus for image decoding in image coding system

Country Status (1)

Country Link
WO (1) WO2018128232A1 (fr)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20080070374A (ko) * 2007-01-26 2008-07-30 Samsung Electronics Co., Ltd. Motion search method and apparatus for minimizing external memory access
KR20090084311A (ko) * 2008-01-31 2009-08-05 Samsung Electro-Mechanics Co., Ltd. Frame rate conversion method
KR20110039516A (ko) * 2008-07-16 2011-04-19 Sony Corporation Method, system and application for motion estimation
KR20160030140A (ko) * 2016-02-24 2016-03-16 Samsung Electronics Co., Ltd. Image decoding method and apparatus
KR20160132863A (ko) * 2014-03-17 2016-11-21 Qualcomm Incorporated Method for motion estimation of non-natural video data


Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112913233A (zh) * 2018-10-02 2021-06-04 LG Electronics Inc. Method and device for constructing prediction candidate on the basis of HMVP
CN112913233B (zh) * 2018-10-02 2023-08-04 LG Electronics Inc. Method and device for constructing prediction candidate on the basis of HMVP
US11968355B2 (en) 2018-10-02 2024-04-23 Tcl King Electrical Appliances (Huizhou) Co. Ltd. Method and apparatus for constructing prediction candidate on basis of HMVP
CN116347096A (zh) * 2018-12-20 2023-06-27 Alibaba Group Holding Limited Video decoding method, video encoding method, and computer-readable storage medium
WO2025007250A1 (fr) * 2023-07-03 2025-01-09 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Video decoding method and apparatus, device, and recording medium
WO2025208348A1 (fr) * 2024-04-02 2025-10-09 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Encoding method, decoding method, encoder, decoder, code stream, and storage medium

Similar Documents

Publication Publication Date Title
WO2020166897A1 (fr) Method and device for inter prediction on the basis of DMVR
WO2017188566A1 (fr) Method and apparatus for inter prediction in image coding system
WO2018105757A1 (fr) Method and device for image decoding in image coding system
WO2018212578A1 (fr) Method and device for processing video signal
WO2017052081A1 (fr) Method and apparatus for inter prediction in image coding system
WO2018070632A1 (fr) Method and device for video decoding in video coding system
WO2019117640A1 (fr) Method and device for image decoding according to inter prediction in image coding system
WO2019177429A1 (fr) Method for coding image/video on the basis of intra prediction and device therefor
WO2018021585A1 (fr) Method and apparatus for intra prediction in image coding system
WO2017082443A1 (fr) Method and apparatus for adaptively predicting image using threshold value in image coding system
WO2019117634A1 (fr) Image coding method based on secondary transform and device therefor
WO2018008905A1 (fr) Method and apparatus for processing video signal
WO2018008904A2 (fr) Method and apparatus for processing video signal
WO2016085231A1 (fr) Method and device for processing video signal
WO2020262931A1 (fr) Signaling method and device for merge data syntax in video/image coding system
WO2019199141A1 (fr) Method and device for inter prediction in video coding system
WO2021137597A1 (fr) Method and device for image decoding using DPB parameter for OLS
WO2020262930A1 (fr) Method and device for removing redundant syntax from merge data syntax
WO2020184953A1 (fr) Video or image coding for inducing weight index information for bi-prediction
WO2020141932A1 (fr) Method and apparatus for inter prediction using CPR-based MMVD
WO2020141831A2 (fr) Image coding method and apparatus using intra block copy prediction
WO2019013363A1 (fr) Method and apparatus for reducing noise in frequency domain in image coding system
WO2019117659A1 (fr) Image coding method based on motion vector derivation, and device therefor
WO2019066175A1 (fr) Image decoding method and device in accordance with block split structure in image coding system
WO2020251270A1 (fr) Image or video coding based on temporal motion information in units of subblocks

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17890802

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17890802

Country of ref document: EP

Kind code of ref document: A1